ExtUtils::MM
============
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
NAME
----
ExtUtils::MM - OS adjusted ExtUtils::MakeMaker subclass
SYNOPSIS
--------
```
require ExtUtils::MM;
my $mm = MM->new(...);
```
DESCRIPTION
-----------
**FOR INTERNAL USE ONLY**
ExtUtils::MM is a subclass of <ExtUtils::MakeMaker> which automatically chooses the appropriate OS specific subclass for you (ie. <ExtUtils::MM_Unix>, etc...).
It also provides a convenient alias via the MM class (I didn't want MakeMaker modules outside of ExtUtils/).
This class might turn out to be a temporary solution, but MM won't go away.
App::Prove::State::Result
=========================
CONTENTS
--------
* [NAME](#NAME)
* [VERSION](#VERSION)
* [DESCRIPTION](#DESCRIPTION)
* [SYNOPSIS](#SYNOPSIS)
* [METHODS](#METHODS)
+ [Class Methods](#Class-Methods)
- [new](#new)
+ [state\_version](#state_version)
+ [test\_class](#test_class)
- [generation](#generation)
- [last\_run\_time](#last_run_time)
- [tests](#tests)
- [test](#test)
- [test\_names](#test_names)
- [remove](#remove)
- [num\_tests](#num_tests)
- [raw](#raw)
NAME
----
App::Prove::State::Result - Individual test suite results.
VERSION
-------
Version 3.44
DESCRIPTION
-----------
The `prove` command supports a `--state` option that instructs it to store persistent state across runs. This module encapsulates the results for a single test suite run.
SYNOPSIS
--------
```
# Re-run failed tests
$ prove --state=failed,save -rbv
```
METHODS
-------
###
Class Methods
#### `new`
```
my $result = App::Prove::State::Result->new({
generation => $generation,
tests => \%tests,
});
```
Returns a new `App::Prove::State::Result` instance.
### `state_version`
Returns the current version of state storage.
### `test_class`
Returns the name of the class used for tracking individual tests. This class should either subclass from `App::Prove::State::Result::Test` or provide an identical interface.
#### `generation`
Getter/setter for the "generation" of the test suite run. The first generation is 1 (one) and subsequent generations are 2, 3, etc.
#### `last_run_time`
Getter/setter for the time of the test suite run.
#### `tests`
Returns the tests for a given generation. This is a hashref or a hash, depending on the calling context. The keys of the hash are the individual test names and each value is a hashref with various interesting values. Each key/value pair might resemble something like this:
```
't/foo.t' => {
elapsed => '0.0428488254547119',
gen => '7',
last_pass_time => '1219328376.07815',
last_result => '0',
last_run_time => '1219328376.07815',
last_todo => '0',
mtime => '1191708862',
seq => '192',
total_passes => '6',
}
```
#### `test`
```
my $test = $result->test('t/customer/create.t');
```
Returns an individual `App::Prove::State::Result::Test` instance for the given test name (usually the filename). Will return a new `App::Prove::State::Result::Test` instance if the name is not found.
#### `test_names`
Returns a list of test names, sorted by run order.
#### `remove`
```
$result->remove($test_name); # remove the test
my $test = $result->test($test_name); # fatal error
```
Removes a given test from results. This is a no-op if the test name is not found.
#### `num_tests`
Returns the number of tests for a given test suite result.
#### `raw`
Returns a hashref of raw results, suitable for serialization by YAML.
Params::Check
=============
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
* [Template](#Template)
* [Functions](#Functions)
+ [check( \%tmpl, \%args, [$verbose] );](#check(-%5C%25tmpl,-%5C%25args,-%5B%24verbose%5D-);)
+ [allow( $test\_me, \@criteria );](#allow(-%24test_me,-%5C@criteria-);)
+ [last\_error()](#last_error())
* [Global Variables](#Global-Variables)
+ [$Params::Check::VERBOSE](#%24Params::Check::VERBOSE)
+ [$Params::Check::STRICT\_TYPE](#%24Params::Check::STRICT_TYPE)
+ [$Params::Check::ALLOW\_UNKNOWN](#%24Params::Check::ALLOW_UNKNOWN)
+ [$Params::Check::STRIP\_LEADING\_DASHES](#%24Params::Check::STRIP_LEADING_DASHES)
+ [$Params::Check::NO\_DUPLICATES](#%24Params::Check::NO_DUPLICATES)
+ [$Params::Check::PRESERVE\_CASE](#%24Params::Check::PRESERVE_CASE)
+ [$Params::Check::ONLY\_ALLOW\_DEFINED](#%24Params::Check::ONLY_ALLOW_DEFINED)
+ [$Params::Check::SANITY\_CHECK\_TEMPLATE](#%24Params::Check::SANITY_CHECK_TEMPLATE)
+ [$Params::Check::WARNINGS\_FATAL](#%24Params::Check::WARNINGS_FATAL)
+ [$Params::Check::CALLER\_DEPTH](#%24Params::Check::CALLER_DEPTH)
* [Acknowledgements](#Acknowledgements)
* [BUG REPORTS](#BUG-REPORTS)
* [AUTHOR](#AUTHOR)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Params::Check - A generic input parsing/checking mechanism.
SYNOPSIS
--------
```
use Params::Check qw[check allow last_error];
sub fill_personal_info {
my %hash = @_;
my $x;
my $tmpl = {
firstname => { required => 1, defined => 1 },
lastname => { required => 1, store => \$x },
gender => { required => 1,
allow => [qr/M/i, qr/F/i],
},
married => { allow => [0,1] },
age => { default => 21,
allow => qr/^\d+$/,
},
phone => { allow => [ sub { return 1 if /$valid_re/ },
'1-800-PERL' ]
},
id_list => { default => [],
strict_type => 1
},
employer => { default => 'NSA', no_override => 1 },
};
### check() returns a hashref of parsed args on success ###
my $parsed_args = check( $tmpl, \%hash, $VERBOSE )
or die qw[Could not parse arguments!];
... other code here ...
}
my $ok = allow( $colour, [qw|blue green yellow|] );
my $error = Params::Check::last_error();
```
DESCRIPTION
-----------
Params::Check is a generic input parsing/checking mechanism.
It allows you to validate input via a template. The only requirement is that the arguments must be named.
Params::Check can do the following things for you:
* Convert all keys to lowercase
* Check if all required arguments have been provided
* Set arguments that have not been provided to the default
* Weed out arguments that are not supported and warn about them to the user
* Validate the arguments given by the user based on strings, regexes, lists or even subroutines
* Enforce type integrity if required
Most of Params::Check's power comes from its template, which we'll discuss below:
Template
--------
As you can see in the synopsis, based on your template, the arguments provided will be validated.
The template can take a different set of rules per key that is used.
The following rules are available:
default This is the default value if none was provided by the user. This is also the type `strict_type` will look at when checking type integrity (see below).
required A boolean flag that indicates if this argument was a required argument. If marked as required and not provided, check() will fail.
strict\_type This does a `ref()` check on the argument provided. The `ref` of the argument must be the same as the `ref` of the default value for this check to pass.
This is very useful if you insist on taking an array reference as argument for example.
defined If this template key is true, enforces that if this key is provided by user input, its value is `defined`. This just means that the user is not allowed to pass `undef` as a value for this key and is equivalent to: allow => sub { defined $\_[0] && OTHER TESTS }
no\_override This allows you to specify `constants` in your template, i.e. the keys that are not allowed to be altered by the user. It pretty much allows you to keep all your `configurable` data in one place: the `Params::Check` template.
store This allows you to pass a reference to a scalar, in which the data will be stored:
```
my $x;
my $args = check(foo => { default => 1, store => \$x }, $input);
```
This is basically shorthand for saying:
```
my $args = check( { foo => { default => 1 } }, $input );
my $x = $args->{foo};
```
You can alter the global variable $Params::Check::NO\_DUPLICATES to control whether the `store`'d key will still be present in your result set. See the ["Global Variables"](#Global-Variables) section below.
allow A set of criteria used to validate a particular piece of data if it has to adhere to particular rules.
See the `allow()` function for details.
Functions
---------
###
check( \%tmpl, \%args, [$verbose] );
This function is not exported by default, so you'll have to ask for it via:
```
use Params::Check qw[check];
```
or use its fully qualified name instead.
`check` takes a list of arguments, as follows:
Template This is a hash reference which contains a template as explained in the `SYNOPSIS` and `Template` section.
Arguments This is a reference to a hash of named arguments which need checking.
Verbose A boolean to indicate whether `check` should be verbose and warn about what went wrong in a check or not.
You can enable this program wide by setting the package variable `$Params::Check::VERBOSE` to a true value. For details, see the section on `Global Variables` below.
`check` will return false when it fails, or a hashref with lowercase keys of parsed arguments when it succeeds.
So a typical call to check would look like this:
```
my $parsed = check( \%template, \%arguments, $VERBOSE )
or warn q[Arguments could not be parsed!];
```
A lot of the behaviour of `check()` can be altered by setting package variables. See the section on `Global Variables` for details on this.
###
allow( $test\_me, \@criteria );
The function that handles the `allow` key in the template is also available for independent use.
The function takes as first argument a key to test against, and as second argument any form of criteria that are also allowed by the `allow` key in the template.
You can use the following types of values for allow:
string The provided argument MUST be equal to the string for the validation to pass.
regexp The provided argument MUST match the regular expression for the validation to pass.
subroutine The provided subroutine MUST return true in order for the validation to pass and the argument accepted.
(This is particularly useful for more complicated data).
array ref The provided argument MUST equal one of the elements of the array ref for the validation to pass. An array ref can hold all the above values.
It returns true if the key matched the criteria, or false otherwise.
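For illustration, here is a small sketch (the values are chosen arbitrarily) exercising each criteria type:
```
use Params::Check qw[allow];
my $ok_string = allow( 'perl', 'perl' );                   # plain string equality
my $ok_regex  = allow( 42, qr/^\d+$/ );                    # regular expression
my $ok_sub    = allow( 8, sub { $_[0] % 2 == 0 } );        # subroutine returning true/false
my $ok_list   = allow( 'blue', [qw|blue green yellow|] );  # any of the above in an array ref
```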
###
last\_error()
Returns a string containing all warnings and errors reported during the last time `check` was called.
This is useful if you want to report them some other way than `carp`'ing when the verbose flag is on.
It is exported upon request.
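A short sketch of how you might combine it with `check` (reusing the `%template` and `%arguments` names from above):
```
use Params::Check qw[check last_error];
my $parsed = check( \%template, \%arguments )
    or die 'Could not parse arguments: ' . last_error();
```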
Global Variables
-----------------
The behaviour of Params::Check can be altered by changing the following global variables:
###
$Params::Check::VERBOSE
This controls whether Params::Check will issue warnings and explanations as to why certain things may have failed. If you set it to 0, Params::Check will not output any warnings.
The default is 1 when <warnings> are enabled, 0 otherwise;
###
$Params::Check::STRICT\_TYPE
This works like the `strict_type` option you can pass to `check`, which will turn on `strict_type` globally for all calls to `check`.
The default is 0;
###
$Params::Check::ALLOW\_UNKNOWN
If you set this flag, unknown options will still be present in the return value, rather than filtered out. This is useful if your subroutine is only interested in a few arguments, and wants to pass the rest on blindly to perhaps another subroutine.
The default is 0;
###
$Params::Check::STRIP\_LEADING\_DASHES
If you set this flag, all keys passed in the following manner:
```
function( -key => 'val' );
```
will have their leading dashes stripped.
###
$Params::Check::NO\_DUPLICATES
If set to true, all keys in the template that are marked as to be stored in a scalar, will also be removed from the result set.
Default is false, meaning that when you use `store` as a template key, `check` will put it both in the scalar you supplied, as well as in the hashref it returns.
###
$Params::Check::PRESERVE\_CASE
If set to true, <Params::Check> will no longer convert all keys from the user input to lowercase, but instead expect them to be in the case the template provided. This is useful when you want to use similar keys with different casing in your templates.
Understand that this removes the case-insensitivity feature of this module.
Default is 0;
###
$Params::Check::ONLY\_ALLOW\_DEFINED
If set to true, <Params::Check> will require all values passed to be `defined`. If you wish to enable this on a 'per key' basis, use the template option `defined` instead.
Default is 0;
###
$Params::Check::SANITY\_CHECK\_TEMPLATE
If set to true, <Params::Check> will sanity check templates, validating for errors and unknown keys. Although very useful for debugging, this can be somewhat slow in hot-code and large loops.
To disable this check, set this variable to `false`.
Default is 1;
###
$Params::Check::WARNINGS\_FATAL
If set to true, <Params::Check> will `croak` when an error during template validation occurs, rather than return `false`.
Default is 0;
###
$Params::Check::CALLER\_DEPTH
This global modifies the argument given to `caller()` by `Params::Check::check()` and is useful if you have a custom wrapper function around `Params::Check::check()`. The value must be an integer, indicating the number of wrapper functions inserted between the real function call and `Params::Check::check()`.
Example wrapper function, using a custom stacktrace:
```
sub check {
my ($template, $args_in) = @_;
local $Params::Check::WARNINGS_FATAL = 1;
local $Params::Check::CALLER_DEPTH = $Params::Check::CALLER_DEPTH + 1;
my $args_out = Params::Check::check($template, $args_in);
my_stacktrace(Params::Check::last_error) unless $args_out;
return $args_out;
}
```
Default is 0;
Acknowledgements
----------------
Thanks to Richard Soderberg for his performance improvements.
BUG REPORTS
------------
Please report bugs or other issues to <[email protected]>.
AUTHOR
------
This module by Jos Boumans <[email protected]>.
COPYRIGHT
---------
This library is free software; you may redistribute and/or modify it under the same terms as Perl itself.
Exporter
========
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [How to Export](#How-to-Export)
+ [Selecting What to Export](#Selecting-What-to-Export)
+ [How to Import](#How-to-Import)
* [Advanced Features](#Advanced-Features)
+ [Specialised Import Lists](#Specialised-Import-Lists)
+ [Exporting Without Using Exporter's import Method](#Exporting-Without-Using-Exporter's-import-Method)
+ [Exporting Without Inheriting from Exporter](#Exporting-Without-Inheriting-from-Exporter)
+ [Module Version Checking](#Module-Version-Checking)
+ [Managing Unknown Symbols](#Managing-Unknown-Symbols)
+ [Tag Handling Utility Functions](#Tag-Handling-Utility-Functions)
+ [Generating Combined Tags](#Generating-Combined-Tags)
+ [AUTOLOADed Constants](#AUTOLOADed-Constants)
* [Good Practices](#Good-Practices)
+ [Declaring @EXPORT\_OK and Friends](#Declaring-@EXPORT_OK-and-Friends)
+ [Playing Safe](#Playing-Safe)
+ [What Not to Export](#What-Not-to-Export)
* [SEE ALSO](#SEE-ALSO)
* [LICENSE](#LICENSE)
NAME
----
Exporter - Implements default import method for modules
SYNOPSIS
--------
In module *YourModule.pm*:
```
package YourModule;
use Exporter 'import';
our @EXPORT_OK = qw(munge frobnicate); # symbols to export on request
```
or
```
package YourModule;
require Exporter;
our @ISA = qw(Exporter); # inherit all of Exporter's methods
our @EXPORT_OK = qw(munge frobnicate); # symbols to export on request
```
or
```
package YourModule;
use parent 'Exporter'; # inherit all of Exporter's methods
our @EXPORT_OK = qw(munge frobnicate); # symbols to export on request
```
In other files which wish to use `YourModule`:
```
use YourModule qw(frobnicate); # import listed symbols
frobnicate ($left, $right) # calls YourModule::frobnicate
```
Take a look at ["Good Practices"](#Good-Practices) for some variants you will like to use in modern Perl code.
DESCRIPTION
-----------
The Exporter module implements an `import` method which allows a module to export functions and variables to its users' namespaces. Many modules use Exporter rather than implementing their own `import` method because Exporter provides a highly flexible interface, with an implementation optimised for the common case.
Perl automatically calls the `import` method when processing a `use` statement for a module. Modules and `use` are documented in <perlfunc> and <perlmod>. Understanding the concept of modules and how the `use` statement operates is important to understanding the Exporter.
###
How to Export
The arrays `@EXPORT` and `@EXPORT_OK` in a module hold lists of symbols that are going to be exported into the users name space by default, or which they can request to be exported, respectively. The symbols can represent functions, scalars, arrays, hashes, or typeglobs. The symbols must be given by full name with the exception that the ampersand in front of a function is optional, e.g.
```
our @EXPORT = qw(afunc $scalar @array); # afunc is a function
our @EXPORT_OK = qw(&bfunc %hash *typeglob); # explicit prefix on &bfunc
```
If you are only exporting function names it is recommended to omit the ampersand, as the implementation is faster this way.
###
Selecting What to Export
Do **not** export method names!
Do **not** export anything else by default without a good reason!
Exports pollute the namespace of the module user. If you must export try to use `@EXPORT_OK` in preference to `@EXPORT` and avoid short or common symbol names to reduce the risk of name clashes.
Generally anything not exported is still accessible from outside the module using the `YourModule::item_name` (or `$blessed_ref->method`) syntax. By convention you can use a leading underscore on names to informally indicate that they are 'internal' and not for public use.
(It is actually possible to get private functions by saying:
```
my $subref = sub { ... };
$subref->(@args); # Call it as a function
$obj->$subref(@args); # Use it as a method
```
However if you use them for methods it is up to you to figure out how to make inheritance work.)
As a general rule, if the module is trying to be object oriented then export nothing. If it's just a collection of functions then `@EXPORT_OK` anything but use `@EXPORT` with caution. For function and method names use barewords in preference to names prefixed with ampersands for the export lists.
Other module design guidelines can be found in <perlmod>.
###
How to Import
In other files which wish to use your module there are three basic ways for them to load your module and import its symbols:
`use YourModule;`
This imports all the symbols from YourModule's `@EXPORT` into the namespace of the `use` statement.
`use YourModule ();`
This causes perl to load your module but does not import any symbols.
`use YourModule qw(...);`
This imports only the symbols listed by the caller into their namespace. All listed symbols must be in your `@EXPORT` or `@EXPORT_OK`, else an error occurs. The advanced export features of Exporter are accessed like this, but with list entries that are syntactically distinct from symbol names.
Unless you want to use its advanced features, this is probably all you need to know to use Exporter.
Advanced Features
------------------
###
Specialised Import Lists
If any of the entries in an import list begins with !, : or / then the list is treated as a series of specifications which either add to or delete from the list of names to import. They are processed left to right. Specifications are in the form:
```
[!]name         This name only
[!]:DEFAULT     All names in @EXPORT
[!]:tag         All names in $EXPORT_TAGS{tag} anonymous array
[!]/pattern/    All names in @EXPORT and @EXPORT_OK which match
```
A leading ! indicates that matching names should be deleted from the list of names to import. If the first specification is a deletion it is treated as though preceded by :DEFAULT. If you just want to import extra names in addition to the default set you will still need to include :DEFAULT explicitly.
e.g., *Module.pm* defines:
```
our @EXPORT = qw(A1 A2 A3 A4 A5);
our @EXPORT_OK = qw(B1 B2 B3 B4 B5);
our %EXPORT_TAGS = (T1 => [qw(A1 A2 B1 B2)], T2 => [qw(A1 A2 B3 B4)]);
```
Note that you cannot use tags in @EXPORT or @EXPORT\_OK.
Names in EXPORT\_TAGS must also appear in @EXPORT or @EXPORT\_OK.
An application using Module can say something like:
```
use Module qw(:DEFAULT :T2 !B3 A3);
```
Other examples include:
```
use Socket qw(!/^[AP]F_/ !SOMAXCONN !SOL_SOCKET);
use POSIX qw(:errno_h :termios_h !TCSADRAIN !/^EXIT/);
```
Remember that most patterns (using //) will need to be anchored with a leading ^, e.g., `/^EXIT/` rather than `/EXIT/`.
You can say `BEGIN { $Exporter::Verbose=1 }` to see how the specifications are being processed and what is actually being imported into modules.
###
Exporting Without Using Exporter's import Method
Exporter has a special method, 'export\_to\_level' which is used in situations where you can't directly call Exporter's import method. The export\_to\_level method looks like:
```
MyPackage->export_to_level(
$where_to_export, $package, @what_to_export
);
```
where `$where_to_export` is an integer telling how far up the calling stack to export your symbols, and `@what_to_export` is an array telling what symbols \*to\* export (usually this is `@_`). The `$package` argument is currently unused.
For example, suppose that you have a module, A, which already has an import function:
```
package A;
our @ISA = qw(Exporter);
our @EXPORT_OK = qw($b);
sub import
{
$A::b = 1; # not a very useful import method
}
```
and you want to export the symbol `$A::b` back to the module that called package A. Since Exporter relies on the import method to work, via inheritance, as it stands Exporter::import() will never get called. Instead, say the following:
```
package A;
our @ISA = qw(Exporter);
our @EXPORT_OK = qw($b);
sub import
{
$A::b = 1;
A->export_to_level(1, @_);
}
```
This will export the symbols one level 'above' the current package - ie: to the program or module that used package A.
Note: Be careful not to modify `@_` at all before you call export\_to\_level - or people using your package will get very unexplained results!
###
Exporting Without Inheriting from Exporter
By including Exporter in your `@ISA` you inherit Exporter's import() method, but you also inherit several other helper methods which you probably don't want and which complicate the inheritance tree. To avoid this you can do:
```
package YourModule;
use Exporter qw(import);
```
which will export Exporter's own import() method into YourModule. Everything will work as before but you won't need to include Exporter in `@YourModule::ISA`.
Note: This feature was introduced in version 5.57 of Exporter, released with perl 5.8.3.
###
Module Version Checking
The Exporter module will convert an attempt to import a number from a module into a call to `$module_name->VERSION($value)`. This can be used to validate that the version of the module being used is greater than or equal to the required version.
For historical reasons, Exporter supplies a `require_version` method that simply delegates to `VERSION`. Originally, before `UNIVERSAL::VERSION` existed, Exporter would call `require_version`.
Since the `UNIVERSAL::VERSION` method treats the `$VERSION` number as a simple numeric value it will regard version 1.10 as lower than 1.9. For this reason it is strongly recommended that you use numbers with at least two decimal places, e.g., 1.09.
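As a rough illustration, reusing the `YourModule` placeholder from the SYNOPSIS, a version number in the `use` line ends up as a `VERSION` check:
```
use YourModule 1.09 qw(frobnicate);  # roughly equivalent to YourModule->VERSION(1.09)
                                     # followed by the requested import
```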
###
Managing Unknown Symbols
In some situations you may want to prevent certain symbols from being exported. Typically this applies to extensions which have functions or constants that may not exist on some systems.
The names of any symbols that cannot be exported should be listed in the `@EXPORT_FAIL` array.
If a module attempts to import any of these symbols the Exporter will give the module an opportunity to handle the situation before generating an error. The Exporter will call an export\_fail method with a list of the failed symbols:
```
@failed_symbols = $module_name->export_fail(@failed_symbols);
```
If the `export_fail` method returns an empty list then no error is recorded and all the requested symbols are exported. If the returned list is not empty then an error is generated for each symbol and the export fails. The Exporter provides a default `export_fail` method which simply returns the list unchanged.
Uses for the `export_fail` method include giving better error messages for some symbols and performing lazy architectural checks (put more symbols into `@EXPORT_FAIL` by default and then take them out if someone actually tries to use them and an expensive check shows that they are usable on that platform).
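A sketch of how this might look in practice; `fancy_feature` and `_feature_available()` are made-up names for a platform-dependent symbol and a runtime check:
```
package YourModule;
use Exporter 'import';
our @EXPORT_OK   = qw(munge frobnicate fancy_feature);
our @EXPORT_FAIL = qw(fancy_feature);   # may not be usable everywhere
sub export_fail {
    my ($class, @failed) = @_;
    # Return an empty list to let the exports proceed, or return the
    # unusable names to turn them into errors.
    # _feature_available() is a hypothetical check for this sketch.
    return grep { !_feature_available($_) } @failed;
}
```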
###
Tag Handling Utility Functions
Since the symbols listed within `%EXPORT_TAGS` must also appear in either `@EXPORT` or `@EXPORT_OK`, two utility functions are provided which allow you to easily add tagged sets of symbols to `@EXPORT` or `@EXPORT_OK`:
```
our %EXPORT_TAGS = (foo => [qw(aa bb cc)], bar => [qw(aa cc dd)]);
Exporter::export_tags('foo'); # add aa, bb and cc to @EXPORT
Exporter::export_ok_tags('bar'); # add aa, cc and dd to @EXPORT_OK
```
Any names which are not tags are added to `@EXPORT` or `@EXPORT_OK` unchanged but will trigger a warning (with `-w`) to avoid misspelt tag names being silently added to `@EXPORT` or `@EXPORT_OK`. Future versions may make this a fatal error.
###
Generating Combined Tags
If several symbol categories exist in `%EXPORT_TAGS`, it's usually useful to create the utility ":all" to simplify "use" statements.
The simplest way to do this is:
```
our %EXPORT_TAGS = (foo => [qw(aa bb cc)], bar => [qw(aa cc dd)]);
# add all the other ":class" tags to the ":all" class,
# deleting duplicates
{
    my %seen;
    push @{$EXPORT_TAGS{all}},
        grep {!$seen{$_}++} @{$EXPORT_TAGS{$_}} foreach keys %EXPORT_TAGS;
}
```
*CGI.pm* creates an ":all" tag which contains some (but not really all) of its categories. That could be done with one small change:
```
# add some of the other ":class" tags to the ":all" class,
# deleting duplicates
{
    my %seen;
    push @{$EXPORT_TAGS{all}},
        grep {!$seen{$_}++} @{$EXPORT_TAGS{$_}}
        foreach qw/html2 html3 netscape form cgi internal/;
}
```
Note that the tag names in `%EXPORT_TAGS` don't have the leading ':'.
###
`AUTOLOAD`ed Constants
Many modules make use of `AUTOLOAD`ing for constant subroutines to avoid having to compile and waste memory on rarely used values (see <perlsub> for details on constant subroutines). Calls to such constant subroutines are not optimized away at compile time because they can't be checked at compile time for constancy.
Even if a prototype is available at compile time, the body of the subroutine is not (it hasn't been `AUTOLOAD`ed yet). perl needs to examine both the `()` prototype and the body of a subroutine at compile time to detect that it can safely replace calls to that subroutine with the constant value.
A workaround for this is to call the constants once in a `BEGIN` block:
```
package My ;
use Socket ;
foo( SO_LINGER ); ## SO_LINGER NOT optimized away; called at runtime
BEGIN { SO_LINGER }
foo( SO_LINGER ); ## SO_LINGER optimized away at compile time.
```
This forces the `AUTOLOAD` for `SO_LINGER` to take place before SO\_LINGER is encountered later in `My` package.
If you are writing a package that `AUTOLOAD`s, consider forcing an `AUTOLOAD` for any constants explicitly imported by other packages or which are usually used when your package is `use`d.
Good Practices
---------------
###
Declaring `@EXPORT_OK` and Friends
When using `Exporter` with the standard `strict` and `warnings` pragmas, the `our` keyword is needed to declare the package variables `@EXPORT_OK`, `@EXPORT`, `@ISA`, etc.
```
our @ISA = qw(Exporter);
our @EXPORT_OK = qw(munge frobnicate);
```
If backward compatibility for Perls **under** 5.6 is important, one must instead write a `use vars` statement.
```
use vars qw(@ISA @EXPORT_OK);
@ISA = qw(Exporter);
@EXPORT_OK = qw(munge frobnicate);
```
###
Playing Safe
There are some caveats with the use of runtime statements like `require Exporter` and the assignment to package variables, which can be very subtle for the unaware programmer. This may happen for instance with mutually recursive modules, which are affected by the time the relevant constructions are executed.
The ideal way to never have to think about that is to use `BEGIN` blocks and the simple import method. So the first part of the ["SYNOPSIS"](#SYNOPSIS) code could be rewritten as:
```
package YourModule;
use strict;
use warnings;
use Exporter 'import';
BEGIN {
our @EXPORT_OK = qw(munge frobnicate); # symbols to export on request
}
```
Or if you need to inherit from Exporter:
```
package YourModule;
use strict;
use warnings;
BEGIN {
require Exporter;
our @ISA = qw(Exporter); # inherit all of Exporter's methods
our @EXPORT_OK = qw(munge frobnicate); # symbols to export on request
}
```
The `BEGIN` will ensure that the loading of *Exporter.pm* and the assignments to `@ISA` and `@EXPORT_OK` happen immediately like `use`, leaving no room for something to go awry or just plain wrong.
With respect to loading `Exporter` and inheriting, there are alternatives with the use of modules like `base` and `parent`.
```
use base qw(Exporter);
# or
use parent qw(Exporter);
```
Any of these statements are nice replacements for `BEGIN { require Exporter; our @ISA = qw(Exporter); }` with the same compile-time effect. The basic difference is that `base` code interacts with declared `fields` while `parent` is a streamlined version of the older `base` code to just establish the IS-A relationship.
For more details, see the documentation and code of <base> and <parent>.
Another thorough remedy to that runtime vs. compile-time trap is to use <Exporter::Easy>, a wrapper around Exporter that lets you write all the boilerplate in a single `use` statement.
```
use Exporter::Easy (
OK => [ qw(munge frobnicate) ],
);
# @ISA setup is automatic
# all assignments happen at compile time
```
###
What Not to Export
You have been warned already in ["Selecting What to Export"](#Selecting-What-to-Export) to not export:
* method names (because you don't need to and that's likely to not do what you want),
* anything by default (because you don't want to surprise your users... badly)
* anything you don't need to (because less is more)
There's one more item to add to this list. Do **not** export variable names. Just because `Exporter` lets you do that, it does not mean you should.
```
@EXPORT_OK = qw($svar @avar %hvar); # DON'T!
```
Exporting variables is not a good idea. They can change under the hood, provoking horrible effects at-a-distance that are too hard to track and to fix. Trust me: they are not worth it.
To provide the capability to set/get class-wide settings, it is best to provide accessors as subroutines or class methods instead.
SEE ALSO
---------
`Exporter` is definitely not the only module with symbol exporter capabilities. At CPAN, you may find a bunch of them. Some are lighter. Some provide improved APIs and features. Pick the one that fits your needs. The following is a sample list of such modules.
```
Exporter::Easy
Exporter::Lite
Exporter::Renaming
Exporter::Tidy
Sub::Exporter / Sub::Installer
Perl6::Export / Perl6::Export::Attrs
```
LICENSE
-------
This library is free software. You can redistribute it and/or modify it under the same terms as Perl itself.
Test2::API::InterceptResult
===========================
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
* [SYNOPSIS](#SYNOPSIS)
* [METHODS](#METHODS)
+ [CONSTRUCTION](#CONSTRUCTION)
+ [NORMALIZATION](#NORMALIZATION)
+ [FILTERING](#FILTERING)
- [%PARAMS](#%25PARAMS)
- [METHODS](#METHODS1)
+ [MAPPING](#MAPPING)
* [SOURCE](#SOURCE)
* [MAINTAINERS](#MAINTAINERS)
* [AUTHORS](#AUTHORS)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Test2::API::InterceptResult - Representation of a list of events.
DESCRIPTION
-----------
This class represents a list of events, normally obtained using `intercept()` from <Test2::API>.
This class is intended for people who wish to verify the results of test tools they write.
This class provides methods to normalize, summarize, or map the list of events. The output of these operations makes verifying your testing tools and the events they generate significantly easier. In most cases this spares you from needing a deep understanding of the event/facet model.
SYNOPSIS
--------
Usually you get an instance of this class when you use `intercept()` from <Test2::API>.
```
use Test2::V0;
use Test2::API qw/intercept/;
my $events = intercept {
ok(1, "pass");
ok(0, "fail");
todo "broken" => sub { ok(0, "fixme") };
plan 3;
};
# This is typically the most useful construct
# squash_info() merges assertions and diagnostics that are associated
# (and returns a new instance with the modifications)
# flatten() condenses the facet data into the key details for each event
# (and returns those structures in an arrayref)
is(
$events->squash_info->flatten(),
[
{
causes_failure => 0,
name => 'pass',
pass => 1,
trace_file => 'xxx.t',
trace_line => 5,
},
{
causes_failure => 1,
name => 'fail',
pass => 0,
trace_file => 'xxx.t',
trace_line => 6,
# There can be more than one diagnostics message so this is
# always an array when present.
diag => ["Failed test 'fail'\nat xxx.t line 6."],
},
{
causes_failure => 0,
name => 'fixme',
pass => 0,
trace_file => 'xxx.t',
trace_line => 7,
# There can be more than one diagnostics message or todo
# reason, so these are always an array when present.
todo => ['broken'],
# Diag message was turned into a note since the assertion was
# TODO
note => ["Failed test 'fixme'\nat xxx.t line 7."],
},
{
causes_failure => 0,
plan => 3,
trace_file => 'xxx.t',
trace_line => 8,
},
],
"Flattened events look like we expect"
);
```
See <Test2::API::InterceptResult::Event> for a full description of what `flatten()` provides for each event.
METHODS
-------
Please note that no methods modify the original instance unless asked to do so.
### CONSTRUCTION
$events = Test2::API::InterceptResult->new(@EVENTS)
$events = Test2::API::InterceptResult->new\_from\_ref(\@EVENTS) These create a new instance of Test2::API::InterceptResult from the given events.
In the first form a new blessed arrayref is returned. In the 'new\_from\_ref' form the reference you pass in is directly blessed.
Both of these will throw an exception if called in void context. This is mainly important for the 'filtering' methods listed below which normally return a new instance, they throw an exception in such cases as it probably means someone meant to filter the original in place.
$clone = $events->clone() Make a clone of the original events. Note that this is a deep copy, the entire structure is duplicated. This uses `dclone` from [Storable](storable) to achieve the deep clone.
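A brief sketch of the constructors above; `@raw_events` stands in for events you have already captured somehow:
```
use Test2::API::InterceptResult;
my $events = Test2::API::InterceptResult->new(@raw_events);           # new blessed arrayref
my $same   = Test2::API::InterceptResult->new_from_ref(\@raw_events); # blesses the ref you pass in
my $copy   = $events->clone;                                          # deep copy via Storable::dclone
```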
### NORMALIZATION
@events = $events->event\_list This returns all the events in list-form.
$hub = $events->hub This returns a new <Test2::Hub> instance that has processed all the events contained in the instance. This gives you a simple way to inspect the state changes your events cause.
$state = $events->state This returns a summary of the state of a hub after processing all the events.
```
{
count => 2, # Number of assertions made
failed => 1, # Number of test failures seen
is_passing => 0, # Boolean, true if the test would be passing
# after the events are processed.
plan => 2, # Plan, either a number, undef, 'SKIP', or 'NO PLAN'
follows_plan => 1, # True if there is a plan and it was followed.
# False if the plan and assertions did not
# match, undef if no plan was present in the
# event list.
bailed_out => undef, # undef unless there was a bail-out in the
# events in which case this will be a string
# explaining why there was a bailout, if no
# reason was given this will simply be set to
# true (1).
skip_reason => undef, # If there was a skip_all this will give the
# reason.
}
```
$new = $events->upgrade
$events->upgrade(in\_place => $BOOL) **Note:** This normally returns a new instance, leaving the original unchanged. If you call it in void context it will throw an exception. If you want to modify the original you must pass in the `in_place => 1` option. You may call this in void context when you ask to modify it in place. The in-place form returns the instance that was modified so you can chain methods.
This will create a clone of the list where all events have been converted into <Test2::API::InterceptResult::Event> instances. This is extremely helpful as <Test2::API::InterceptResult::Event> provide a much better interface for working with events. This allows you to avoid thinking about legacy event types.
This also means your tests against the list are not fragile if the tool you are testing randomly changes what type of events it generates (e.g. changing from <Test2::Event::Ok> to <Test2::Event::Pass>; both make assertions and both will normalize to identical, or close enough, <Test2::API::InterceptResult::Event> instances).
Really you almost always want this, the only reason it is not done automatically is to make sure the `intercept()` tool is backwards compatible.
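A small sketch of both forms, continuing with the `$events` from the SYNOPSIS:
```
my $upgraded = $events->upgrade;       # $events itself is left untouched
$events->upgrade(in_place => 1)        # modifies $events and returns it,
       ->squash_info(in_place => 1);   # so in-place calls can be chained
```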
$new = $events->squash\_info
$events->squash\_info(in\_place => $BOOL) **Note:** This normally returns a new instance, leaving the original unchanged. If you call it in void context it will throw an exception. If you want to modify the original you must pass in the `in_place => 1` option. You may call this in void context when you ask to modify it in place. The in-place form returns the instance that was modified so you can chain methods.
**Note:** All events in the new or modified instance will be converted to <Test2::API::InterceptResult::Event> instances. There is no way to avoid this, the squash operation requires the upgraded event class.
<Test::More> and many other legacy tools would send notes, diags, and assertions as separate events. A subtest in <Test::More> would send a note with the subtest name, the subtest assertion, and finally a diagnostics event if the subtest failed. This method will normalize things by squashing the note and diag into the same event as the subtest (This is different from putting them into the subtest, which is not what happens).
### FILTERING
**Note:** These normally return new instances, leaving the originals unchanged. If you call them in void context they will throw exceptions. If you want to modify the originals you must pass in the `in_place => 1` option. You may call these in void context when you ask to modify them in place. The in-place forms return the instance that was modified so you can chain methods.
####
%PARAMS
These all accept the same 2 optional parameters:
in\_place => $BOOL When true the method will modify the instance in place instead of returning a new instance.
args => \@ARGS If you wish to pass parameters into the event method being used for filtering, you may do so here.
#### METHODS
$events->grep($CALL, %PARAMS) This is essentially:
```
Test2::API::InterceptResult->new(
grep { $_->$CALL( @{$PARAMS{args}} ) } $self->event_list,
);
```
**Note:** that $CALL is called on an upgraded version of the event, though the events returned will be the original ones, not the upgraded ones.
$CALL may be either the name of a method on <Test2::API::InterceptResult::Event>, or a coderef.
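For example (a sketch), both calling conventions:
```
my $failures = $events->grep(sub { $_[0]->causes_failure });  # coderef
my $diags    = $events->grep('has_diags');                    # method name
```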
$events->asserts(%PARAMS) This is essentially:
```
$events->grep(has_assert => @{$PARAMS{args}})
```
It returns a new instance containing only the events that made assertions.
$events->subtests(%PARAMS) This is essentially:
```
$events->grep(has_subtest => @{$PARAMS{args}})
```
It returns a new instance containing only the events that have subtests.
$events->diags(%PARAMS) This is essentially:
```
$events->grep(has_diags => @{$PARAMS{args}})
```
It returns a new instance containing only the events that have diags.
$events->notes(%PARAMS) This is essentially:
```
$events->grep(has_notes => @{$PARAMS{args}})
```
It returns a new instance containing only the events that have notes.
$events->errors(%PARAMS) **Note:** Errors are NOT failing assertions. Failing assertions are a different thing.
This is essentially:
```
$events->grep(has_errors => @{$PARAMS{args}})
```
It returns a new instance containing only the events that have errors.
$events->plans(%PARAMS) This is essentially:
```
$events->grep(has_plan => @{$PARAMS{args}})
```
It returns a new instance containing only the events that set the plan.
$events->causes\_fail(%PARAMS)
$events->causes\_failure(%PARAMS) These are essentially:
```
$events->grep(causes_fail => @{$PARAMS{args}})
$events->grep(causes_failure => @{$PARAMS{args}})
```
**Note:** `causes_fail()` and `causes_failure()` are both aliases for each other in events, so these methods are effectively aliases here as well.
It returns a new instance containing only the events that cause failure.
### MAPPING
These methods **ALWAYS** return an arrayref.
**Note:** No methods on <Test2::API::InterceptResult::Event> alter the event in any way.
**Important Notes about Events**:
<Test2::API::InterceptResult::Event> was tailor-made to be used in event-lists. Most methods that are not applicable to a given event will return an empty list, so you normally do not need to worry about unwanted `undef` values or exceptions being thrown. Mapping over event methods is an intended use, so it works well to produce lists.
**Exceptions to the rule:**
Some methods such as `causes_fail` always return a boolean true or false for all events. Any method prefixed with `the_` conveys the intent that the event should have exactly 1 of something, so those will throw an exception when that condition is not true.
$arrayref = $events->map($CALL, %PARAMS) This is essentially:
```
[ map { $_->$CALL(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
$CALL may be either the name of a method on <Test2::API::InterceptResult::Event>, or a coderef.
$arrayref = $events->flatten(%PARAMS) This is essentially:
```
[ map { $_->flatten(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
It returns a new list of flattened structures.
See <Test2::API::InterceptResult::Event> for details on what `flatten()` returns.
$arrayref = $events->briefs(%PARAMS) This is essentially:
```
[ map { $_->briefs(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
It returns a new list of event briefs.
See <Test2::API::InterceptResult::Event> for details on what `brief()` returns.
$arrayref = $events->summaries(%PARAMS) This is essentially:
```
[ map { $_->summaries(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
It returns a new list of event summaries.
See <Test2::API::InterceptResult::Event> for details on what `summary()` returns.
$arrayref = $events->subtest\_results(%PARAMS) This is essentially:
```
[ map { $_->subtest_result(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
It returns a new list of subtest results.
See <Test2::API::InterceptResult::Event> for details on what `subtest_result()` returns.
$arrayref = $events->diag\_messages(%PARAMS) This is essentially:
```
[ map { $_->diag_messages(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
It returns a new list of diagnostic messages (strings).
See <Test2::API::InterceptResult::Event> for details on what `diag_messages()` returns.
$arrayref = $events->note\_messages(%PARAMS) This is essentially:
```
[ map { $_->note_messages(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
It returns a new list of note messages (strings).
See <Test2::API::InterceptResult::Event> for details on what `note_messages()` returns.
$arrayref = $events->error\_messages(%PARAMS) This is essentially:
```
[ map { $_->error_messages(@{ $PARAMS{args} }) } $events->upgrade->event_list ];
```
It returns a new list of error messages (strings).
See <Test2::API::InterceptResult::Event> for details on what `error_messages()` returns.
SOURCE
------
The source code repository for Test2 can be found at *http://github.com/Test-More/test-more/*.
MAINTAINERS
-----------
Chad Granum <[email protected]>
AUTHORS
-------
Chad Granum <[email protected]>
COPYRIGHT
---------
Copyright 2020 Chad Granum <[email protected]>.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See *http://dev.perl.org/licenses/*
perlfaq9
========
CONTENTS
--------
* [NAME](#NAME)
* [VERSION](#VERSION)
* [DESCRIPTION](#DESCRIPTION)
+ [Should I use a web framework?](#Should-I-use-a-web-framework?)
+ [Which web framework should I use?](#Which-web-framework-should-I-use?)
+ [What is Plack and PSGI?](#What-is-Plack-and-PSGI?)
+ [How do I remove HTML from a string?](#How-do-I-remove-HTML-from-a-string?)
+ [How do I extract URLs?](#How-do-I-extract-URLs?)
+ [How do I fetch an HTML file?](#How-do-I-fetch-an-HTML-file?)
+ [How do I automate an HTML form submission?](#How-do-I-automate-an-HTML-form-submission?)
+ [How do I decode or create those %-encodings on the web?](#How-do-I-decode-or-create-those-%25-encodings-on-the-web?)
+ [How do I redirect to another page?](#How-do-I-redirect-to-another-page?)
+ [How do I put a password on my web pages?](#How-do-I-put-a-password-on-my-web-pages?)
+ [How do I make sure users can't enter values into a form that causes my CGI script to do bad things?](#How-do-I-make-sure-users-can't-enter-values-into-a-form-that-causes-my-CGI-script-to-do-bad-things?)
+ [How do I parse a mail header?](#How-do-I-parse-a-mail-header?)
+ [How do I check a valid mail address?](#How-do-I-check-a-valid-mail-address?)
+ [How do I decode a MIME/BASE64 string?](#How-do-I-decode-a-MIME/BASE64-string?)
+ [How do I find the user's mail address?](#How-do-I-find-the-user's-mail-address?)
+ [How do I send email?](#How-do-I-send-email?)
+ [How do I use MIME to make an attachment to a mail message?](#How-do-I-use-MIME-to-make-an-attachment-to-a-mail-message?)
+ [How do I read email?](#How-do-I-read-email?)
+ [How do I find out my hostname, domainname, or IP address?](#How-do-I-find-out-my-hostname,-domainname,-or-IP-address?)
+ [How do I fetch/put an (S)FTP file?](#How-do-I-fetch/put-an-(S)FTP-file?)
+ [How can I do RPC in Perl?](#How-can-I-do-RPC-in-Perl?)
* [AUTHOR AND COPYRIGHT](#AUTHOR-AND-COPYRIGHT)
NAME
----
perlfaq9 - Web, Email and Networking
VERSION
-------
version 5.20210520
DESCRIPTION
-----------
This section deals with questions related to running web sites, sending and receiving email as well as general networking.
###
Should I use a web framework?
Yes. If you are building a web site with any level of interactivity (forms / users / databases), you will want to use a framework to make handling requests and responses easier.
If there is no interactivity then you may still want to look at using something like [Template Toolkit](https://metacpan.org/module/Template) or <Plack::Middleware::TemplateToolkit> so maintenance of your HTML files (and other assets) is easier.
###
Which web framework should I use?
There is no simple answer to this question. Perl frameworks can run everything from basic file servers and small scale intranets to massive multinational multilingual websites that are the core to international businesses.
Below is a list of a few frameworks with comments which might help you in making a decision, depending on your specific requirements. Start by reading the docs, then ask questions on the relevant mailing list or IRC channel.
[Catalyst](catalyst) Strongly object-oriented and fully-featured with a long development history and a large community and addon ecosystem. It is excellent for large and complex applications, where you have full control over the server.
[Dancer2](dancer2) Free of legacy weight, providing a lightweight and easy to learn API. Has a growing addon ecosystem. It is best used for smaller projects and very easy to learn for beginners.
[Mojolicious](mojolicious) Self-contained and powerful for both small and larger projects, with a focus on HTML5 and real-time web technologies such as WebSockets.
<Web::Simple>
Strongly object-oriented and minimal, built for speed and intended as a toolkit for building micro web apps, custom frameworks or for tying together existing Plack-compatible web applications with one central dispatcher.
All of these interact with or use [Plack](plack) which is worth understanding the basics of when building a website in Perl (there is a lot of useful [Plack::Middleware](https://metacpan.org/search?q=plack%3A%3Amiddleware)).
###
What is Plack and PSGI?
[PSGI](psgi) is the Perl Web Server Gateway Interface Specification. It is a standard that many Perl web frameworks use; you should not need to understand it to build a web site. The part you might want to use is [Plack](plack).
[Plack](plack) is a set of tools for using the PSGI stack. It contains [middleware](https://metacpan.org/search?q=plack%3A%3Amiddleware) components, a reference server and utilities for Web application frameworks. Plack is like Ruby's Rack or Python's Paste for WSGI.
You could build a web site using [Plack](plack) and your own code, but for anything other than a very basic web site, using a web framework (that uses <https://plackperl.org>) is a better option.
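To make the interface concrete, here is a minimal sketch of a raw PSGI application (normally your framework writes this layer for you):
```
# app.psgi -- run with: plackup app.psgi
my $app = sub {
    my $env = shift;   # the PSGI environment hash for this request
    return [ 200, [ 'Content-Type' => 'text/plain' ], [ "Hello from PSGI\n" ] ];
};
$app;   # a .psgi file must return the application coderef
```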
###
How do I remove HTML from a string?
Use <HTML::Strip>, or <HTML::FormatText> which not only removes HTML but also attempts to do a little simple formatting of the resulting plain text.
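A short sketch with <HTML::Strip> (`$html` is assumed to hold your markup):
```
use HTML::Strip;
my $hs    = HTML::Strip->new;
my $plain = $hs->parse( $html );   # tags removed, text kept
$hs->eof;                          # reset state before parsing another document
```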
###
How do I extract URLs?
<HTML::SimpleLinkExtor> will extract URLs from HTML, it handles anchors, images, objects, frames, and many other tags that can contain a URL. If you need anything more complex, you can create your own subclass of <HTML::LinkExtor> or <HTML::Parser>. You might even use <HTML::SimpleLinkExtor> as an example for something specifically suited to your needs.
You can use <URI::Find> or <URL::Search> to extract URLs from an arbitrary text document.
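For example, a sketch with <URI::Find> that collects every URL found in `$text` (assumed to hold your document) without altering it:
```
use URI::Find;
my @urls;
my $finder = URI::Find->new(sub {
    my ($uri, $original) = @_;
    push @urls, $uri;      # remember each URI found
    return $original;      # put the original text back, unchanged
});
$finder->find(\$text);      # note: find() takes a reference to the text
```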
###
How do I fetch an HTML file?
(contributed by brian d foy)
The core <HTTP::Tiny> module can fetch web resources and give their content back to you as a string:
```
use HTTP::Tiny;
my $ua = HTTP::Tiny->new;
my $html = $ua->get( "http://www.example.com/index.html" )->{content};
```
It can also store the resource directly in a file:
```
$ua->mirror( "http://www.example.com/index.html", "foo.html" );
```
If you need to do something more complicated, the <HTTP::Tiny> object can be customized by setting attributes, or you can use <LWP::UserAgent> from the libwww-perl distribution or <Mojo::UserAgent> from the Mojolicious distribution to make common tasks easier. If you want to simulate an interactive web browser, you can use the <WWW::Mechanize> module.
###
How do I automate an HTML form submission?
If you are doing something complex, such as moving through many pages and forms or a web site, you can use <WWW::Mechanize>. See its documentation for all the details.
If you're submitting values using the GET method, create a URL and encode the form using the `www_form_urlencode` method from <HTTP::Tiny>:
```
use HTTP::Tiny;
my $ua = HTTP::Tiny->new;
my $query = $ua->www_form_urlencode([ q => 'DB_File', lucky => 1 ]);
my $url = "https://metacpan.org/search?$query";
my $content = $ua->get($url)->{content};
```
If you're using the POST method, the `post_form` method will encode the content appropriately.
```
use HTTP::Tiny;
my $ua = HTTP::Tiny->new;
my $url = 'https://metacpan.org/search';
my $form = [ q => 'DB_File', lucky => 1 ];
my $content = $ua->post_form($url, $form)->{content};
```
###
How do I decode or create those %-encodings on the web?
Most of the time you should not need to do this as your web framework, or if you are making a request, the [LWP](lwp) or other module would handle it for you.
To encode a string yourself, use the <URI::Escape> module. The `uri_escape` function returns the escaped string:
```
my $original = "Colon : Hash # Percent %";
my $escaped = uri_escape( $original );
print "$escaped\n"; # 'Colon%20%3A%20Hash%20%23%20Percent%20%25'
```
To decode the string, use the `uri_unescape` function:
```
my $unescaped = uri_unescape( $escaped );
print $unescaped; # back to original
```
Remember not to encode a full URI, you need to escape each component separately and then join them together.
###
How do I redirect to another page?
Most Perl Web Frameworks will have a mechanism for doing this, using the [Catalyst](catalyst) framework it would be:
```
$c->res->redirect($url);
$c->detach();
```
If you are using Plack (which most frameworks do), then <Plack::Middleware::Rewrite> is worth looking at if you are migrating from Apache or have URLs you always want to redirect.
###
How do I put a password on my web pages?
See if the web framework you are using has an authentication system and if that fits your needs.
Alternatively, look at <Plack::Middleware::Auth::Basic>, or one of the other [Plack authentication](https://metacpan.org/search?q=plack+auth) options.
###
How do I make sure users can't enter values into a form that causes my CGI script to do bad things?
(contributed by brian d foy)
You can't prevent people from sending your script bad data. Even if you add some client-side checks, people may disable them or bypass them completely. For instance, someone might use a module such as [LWP](lwp) to submit to your web site. If you want to prevent data that try to use SQL injection or other sorts of attacks (and you should want to), you have to not trust any data that enter your program.
The <perlsec> documentation has general advice about data security. If you are using the [DBI](dbi) module, use placeholders to fill in data. If you are running external programs with `system` or `exec`, use the list forms. There are many other precautions that you should take, too many to list here, and most of them fall under the category of not using any data that you don't intend to use. Trust no one.
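For instance, a sketch of both precautions (`$dbh`, `$untrusted_id`, `$filename` and the table layout are assumptions for the example):
```
# DBI: let the driver quote the data via a placeholder
# ($dbh is assumed to come from an earlier DBI->connect)
my $sth = $dbh->prepare('SELECT name FROM users WHERE id = ?');
$sth->execute($untrusted_id);
# external programs: the list form avoids the shell entirely
system('/usr/bin/lp', $filename);
```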
###
How do I parse a mail header?
Use the <Email::MIME> module. It's well-tested and supports all the craziness that you'll see in the real world (comment-folding whitespace, encodings, comments, etc.).
```
use Email::MIME;
my $message = Email::MIME->new($rfc2822);
my $subject = $message->header('Subject');
my $from = $message->header('From');
```
If you've already got some other kind of email object, consider passing it to <Email::Abstract> and then using its cast method to get an <Email::MIME> object:
```
my $abstract = Email::Abstract->new($mail_message_object);
my $email_mime_object = $abstract->cast('Email::MIME');
```
###
How do I check a valid mail address?
(partly contributed by Aaron Sherman)
This isn't as simple a question as it sounds. There are two parts:
a) How do I verify that an email address is correctly formatted?
b) How do I verify that an email address targets a valid recipient?
Without sending mail to the address and seeing whether there's a human on the other end to answer you, you cannot fully answer part *b*, but the <Email::Valid> module will do both part *a* and part *b* as far as you can in real-time.
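Part *a* then becomes a short sketch (`$candidate` is whatever the user typed):
```
use Email::Valid;
my $checked = Email::Valid->address( $candidate );   # the (possibly cleaned) address, or undef
print "looks like a well-formed address\n" if defined $checked;
```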
Our best advice for verifying a person's mail address is to have them enter their address twice, just as you normally do to change a password. This usually weeds out typos. If both versions match, send mail to that address with a personal message. If you get the message back and they've followed your directions, you can be reasonably assured that it's real.
A related strategy that's less open to forgery is to give them a PIN (personal ID number). Record the address and PIN (best that it be a random one) for later processing. In the mail you send, include a link to your site with the PIN included. If the mail bounces, you know it's not valid. If they don't click on the link, either they forged the address or (assuming they got the message) following through wasn't important so you don't need to worry about it.
###
How do I decode a MIME/BASE64 string?
The <MIME::Base64> package handles this as well as the MIME/QP encoding. Decoding base 64 becomes as simple as:
```
use MIME::Base64;
my $decoded = decode_base64($encoded);
```
The <Email::MIME> module can decode base 64-encoded email message parts transparently so the developer doesn't need to worry about it.
###
How do I find the user's mail address?
Ask them for it. There are so many email providers available that it's unlikely the local system has any idea how to determine a user's email address.
The exception is for organization-specific email (e.g. [email protected]) where policy can be codified in your program. In that case, you could look at $ENV{USER}, $ENV{LOGNAME}, and getpwuid($<) in scalar context, like so:
```
my $user_name = getpwuid($<)
```
But you still cannot make assumptions about whether this is correct, unless your policy says it is. You really are best off asking the user.
###
How do I send email?
Use the <Email::Stuffer> module, like so:
```
# first, create your message
my $message = Email::Stuffer->from('[email protected]')
->to('[email protected]')
->subject('Happy birthday!')
->text_body("Happy birthday to you!\n");
$message->send_or_die;
```
By default, <Email::Sender::Simple> (the `send` and `send_or_die` methods use this under the hood) will try `sendmail` first, if it exists in your $PATH. This generally isn't the case. If there's a remote mail server you use to send mail, consider investigating one of the Transport classes. At time of writing, the available transports include:
<Email::Sender::Transport::Sendmail>
This is the default. If you can use the [mail(1)](http://man.he.net/man1/mail) or [mailx(1)](http://man.he.net/man1/mailx) program to send mail from the machine where your code runs, you should be able to use this.
<Email::Sender::Transport::SMTP>
This transport contacts a remote SMTP server over TCP. It optionally uses TLS or SSL and can authenticate to the server via SASL.
Telling <Email::Stuffer> to use your transport is straightforward.
```
$message->transport($email_sender_transport_object)->send_or_die;
```
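For example, a sketch that sends through a remote SMTP server might look like the following; the host name and credentials are placeholders you would replace with your own:
```
use Email::Sender::Transport::SMTP;
my $transport = Email::Sender::Transport::SMTP->new({
    host          => 'smtp.example.com',   # placeholder
    ssl           => 1,
    sasl_username => 'someone',
    sasl_password => 'secret',
});
$message->transport($transport)->send_or_die;
```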
###
How do I use MIME to make an attachment to a mail message?
<Email::MIME> directly supports multipart messages. <Email::MIME> objects themselves are parts and can be attached to other <Email::MIME> objects. Consult the <Email::MIME> documentation for more information, including all of the supported methods and examples of their use.
<Email::Stuffer> uses <Email::MIME> under the hood to construct messages, and wraps the most common attachment tasks with the simple `attach` and `attach_file` methods.
```
Email::Stuffer->to('[email protected]')
->subject('The file')
->attach_file('stuff.csv')
->send_or_die;
```
###
How do I read email?
Use the <Email::Folder> module, like so:
```
use Email::Folder;
my $folder = Email::Folder->new('/path/to/email/folder');
while(my $message = $folder->next_message) {
# next_message returns Email::Simple objects, but we want
# Email::MIME objects as they're more robust
my $mime = Email::MIME->new($message->as_string);
}
```
There are different classes in the <Email::Folder> namespace for supporting various mailbox types. Note that these modules are generally rather limited and only support **reading** rather than writing.
###
How do I find out my hostname, domainname, or IP address?
(contributed by brian d foy)
The <Net::Domain> module, which is part of the Standard Library starting in Perl 5.7.3, can get you the fully qualified domain name (FQDN), the host name, or the domain name.
```
use Net::Domain qw(hostname hostfqdn hostdomain);
my $host = hostfqdn();
```
The <Sys::Hostname> module, part of the Standard Library, can also get the hostname:
```
use Sys::Hostname;
$host = hostname();
```
The <Sys::Hostname::Long> module takes a different approach and tries harder to return the fully qualified hostname:
```
use Sys::Hostname::Long 'hostname_long';
my $hostname = hostname_long();
```
To get the IP address, you can use the `gethostbyname` built-in function to turn the name into a number. To turn that number into the dotted octet form (a.b.c.d) that most people expect, use the `inet_ntoa` function from the [Socket](socket) module, which also comes with perl.
```
use Socket;
my $address = inet_ntoa(
scalar gethostbyname( $host || 'localhost' )
);
```
###
How do I fetch/put an (S)FTP file?
<Net::FTP> and <Net::SFTP> allow you to interact with FTP and SFTP (Secure FTP) servers.
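A minimal <Net::FTP> session might look like this; the host, directory, and file names are placeholders:
```
use Net::FTP;
my $ftp = Net::FTP->new('ftp.example.com')
    or die "Cannot connect: $@";
$ftp->login('anonymous', '-anonymous@')
    or die "Cannot login: ", $ftp->message;
$ftp->cwd('/pub')        or die "Cannot cwd: ", $ftp->message;
$ftp->get('remote.file') or die "get failed: ", $ftp->message;
$ftp->quit;
```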
###
How can I do RPC in Perl?
Use one of the RPC modules (<https://metacpan.org/search?q=RPC>).
AUTHOR AND COPYRIGHT
---------------------
Copyright (c) 1997-2010 Tom Christiansen, Nathan Torkington, and other authors as noted. All rights reserved.
This documentation is free; you can redistribute it and/or modify it under the same terms as Perl itself.
Irrespective of its distribution, all code examples in this file are hereby placed into the public domain. You are permitted and encouraged to use this code in your own programs for fun or for profit as you see fit. A simple comment in the code giving credit would be courteous but is not required.
perl pod2html pod2html
========
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
* [ARGUMENTS](#ARGUMENTS)
* [AUTHOR](#AUTHOR)
* [BUGS](#BUGS)
* [SEE ALSO](#SEE-ALSO)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
pod2html - convert .pod files to .html files
SYNOPSIS
--------
```
pod2html --help --htmldir=<name> --htmlroot=<URL>
--infile=<name> --outfile=<name>
--podpath=<name>:...:<name> --podroot=<name>
--cachedir=<name> --flush --recurse --norecurse
--quiet --noquiet --verbose --noverbose
--index --noindex --backlink --nobacklink
--header --noheader --poderrors --nopoderrors
--css=<URL> --title=<name>
```
DESCRIPTION
-----------
Converts files from pod format (see <perlpod>) to HTML format.
ARGUMENTS
---------
pod2html takes the following arguments:
help
```
--help
```
Displays the usage message.
htmldir
```
--htmldir=name
```
Sets the directory to which all cross references in the resulting HTML file will be relative. Not passing this causes all links to be absolute since this is the value that tells Pod::Html the root of the documentation tree.
Do not use this and --htmlroot in the same call to pod2html; they are mutually exclusive.
htmlroot
```
--htmlroot=URL
```
Sets the base URL for the HTML files. When cross-references are made, the HTML root is prepended to the URL.
Do not use this if relative links are desired: use --htmldir instead.
Do not pass both this and --htmldir to pod2html; they are mutually exclusive.
infile
```
--infile=name
```
Specify the pod file to convert. Input is taken from STDIN if no infile is specified.
outfile
```
--outfile=name
```
Specify the HTML file to create. Output goes to STDOUT if no outfile is specified.
podroot
```
--podroot=name
```
Specify the base directory for finding library pods.
podpath
```
--podpath=name:...:name
```
Specify which subdirectories of the podroot contain pod files whose HTML converted forms can be linked-to in cross-references.
cachedir
```
--cachedir=name
```
Specify the directory used for storing the cache. The default is the current working directory.
flush
```
--flush
```
Flush the cache.
backlink
```
--backlink
```
Turn =head1 directives into links pointing to the top of the HTML file.
nobacklink
```
--nobacklink
```
Do not turn =head1 directives into links pointing to the top of the HTML file (default behaviour).
header
```
--header
```
Create header and footer blocks containing the text of the "NAME" section.
noheader
```
--noheader
```
Do not create header and footer blocks containing the text of the "NAME" section (default behaviour).
poderrors
```
--poderrors
```
Include a "POD ERRORS" section in the outfile if there were any POD errors in the infile (default behaviour).
nopoderrors
```
--nopoderrors
```
Do not include a "POD ERRORS" section in the outfile if there were any POD errors in the infile.
index
```
--index
```
Generate an index at the top of the HTML file (default behaviour).
noindex
```
--noindex
```
Do not generate an index at the top of the HTML file.
recurse
```
--recurse
```
Recurse into subdirectories specified in podpath (default behaviour).
norecurse
```
--norecurse
```
Do not recurse into subdirectories specified in podpath.
css
```
--css=URL
```
Specify the URL of a cascading style sheet to link from the resulting HTML file. By default, no style sheet is linked.
title
```
--title=title
```
Specify the title of the resulting HTML file.
quiet
```
--quiet
```
Don't display mostly harmless warning messages.
noquiet
```
--noquiet
```
Display mostly harmless warning messages (default behaviour). But this is not the same as "verbose" mode.
verbose
```
--verbose
```
Display progress messages.
noverbose
```
--noverbose
```
Do not display progress messages (default behaviour).
AUTHOR
------
Tom Christiansen, <[email protected]>.
BUGS
----
See <Pod::Html> for a list of known bugs in the translator.
SEE ALSO
---------
<perlpod>, <Pod::Html>
COPYRIGHT
---------
This program is distributed under the Artistic License.
perl TAP::Parser::Result::Version TAP::Parser::Result::Version
============================
CONTENTS
--------
* [NAME](#NAME)
* [VERSION](#VERSION)
* [DESCRIPTION](#DESCRIPTION)
* [OVERRIDDEN METHODS](#OVERRIDDEN-METHODS)
+ [Instance Methods](#Instance-Methods)
- [version](#version)
NAME
----
TAP::Parser::Result::Version - TAP syntax version token.
VERSION
-------
Version 3.44
DESCRIPTION
-----------
This is a subclass of <TAP::Parser::Result>. A token of this class will be returned if a version line is encountered.
```
TAP version 13
ok 1
not ok 2
```
The first version of TAP to include an explicit version number is 13.
OVERRIDDEN METHODS
-------------------
Mainly listed here to shut up the pitiful screams of the pod coverage tests. They keep me awake at night.
* `as_string`
* `raw`
###
Instance Methods
#### `version`
```
if ( $result->is_version ) {
print $result->version;
}
```
This is merely a synonym for `as_string`.
perl feature feature
=======
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [Lexical effect](#Lexical-effect)
+ [no feature](#no-feature)
* [AVAILABLE FEATURES](#AVAILABLE-FEATURES)
+ [The 'say' feature](#The-'say'-feature)
+ [The 'state' feature](#The-'state'-feature)
+ [The 'switch' feature](#The-'switch'-feature)
+ [The 'unicode\_strings' feature](#The-'unicode_strings'-feature)
+ [The 'unicode\_eval' and 'evalbytes' features](#The-'unicode_eval'-and-'evalbytes'-features)
+ [The 'current\_sub' feature](#The-'current_sub'-feature)
+ [The 'array\_base' feature](#The-'array_base'-feature)
+ [The 'fc' feature](#The-'fc'-feature)
+ [The 'lexical\_subs' feature](#The-'lexical_subs'-feature)
+ [The 'postderef' and 'postderef\_qq' features](#The-'postderef'-and-'postderef_qq'-features)
+ [The 'signatures' feature](#The-'signatures'-feature)
+ [The 'refaliasing' feature](#The-'refaliasing'-feature)
+ [The 'bitwise' feature](#The-'bitwise'-feature)
+ [The 'declared\_refs' feature](#The-'declared_refs'-feature)
+ [The 'isa' feature](#The-'isa'-feature)
+ [The 'indirect' feature](#The-'indirect'-feature)
+ [The 'multidimensional' feature](#The-'multidimensional'-feature)
+ [The 'bareword\_filehandles' feature.](#The-'bareword_filehandles'-feature.)
+ [The 'try' feature.](#The-'try'-feature.)
+ [The 'defer' feature](#The-'defer'-feature)
+ [The 'extra\_paired\_delimiters' feature](#The-'extra_paired_delimiters'-feature)
* [FEATURE BUNDLES](#FEATURE-BUNDLES)
* [IMPLICIT LOADING](#IMPLICIT-LOADING)
* [CHECKING FEATURES](#CHECKING-FEATURES)
NAME
----
feature - Perl pragma to enable new features
SYNOPSIS
--------
```
use feature qw(fc say);
# Without the "use feature" above, this code would not be able to find
# the built-ins "say" or "fc":
say "The case-folded version of $x is: " . fc $x;
# set features to match the :5.10 bundle, which may turn off or on
# multiple features (see below)
use feature ':5.10';
# implicitly loads :5.10 feature bundle
use v5.10;
```
DESCRIPTION
-----------
It is usually impossible to add new syntax to Perl without breaking some existing programs. This pragma provides a way to minimize that risk. New syntactic constructs, or new semantic meanings to older constructs, can be enabled by `use feature 'foo'`, and will be parsed only when the appropriate feature pragma is in scope. (Nevertheless, the `CORE::` prefix provides access to all Perl keywords, regardless of this pragma.)
###
Lexical effect
Like other pragmas (`use strict`, for example), features have a lexical effect. `use feature qw(foo)` will only make the feature "foo" available from that point to the end of the enclosing block.
```
{
use feature 'say';
say "say is available here";
}
print "But not here.\n";
```
###
`no feature`
Features can also be turned off by using `no feature "foo"`. This too has lexical effect.
```
use feature 'say';
say "say is available here";
{
no feature 'say';
print "But not here.\n";
}
say "Yet it is here.";
```
`no feature` with no features specified will reset to the default group. To disable *all* features (an unusual request!) use `no feature ':all'`.
AVAILABLE FEATURES
-------------------
###
The 'say' feature
`use feature 'say'` tells the compiler to enable the Raku-inspired `say` function.
See ["say" in perlfunc](perlfunc#say) for details.
This feature is available starting with Perl 5.10.
###
The 'state' feature
`use feature 'state'` tells the compiler to enable `state` variables.
See ["Persistent Private Variables" in perlsub](perlsub#Persistent-Private-Variables) for details.
This feature is available starting with Perl 5.10.
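A minimal illustration:
```
use feature qw(say state);
sub counter {
    state $count = 0;   # initialized once; keeps its value between calls
    return ++$count;
}
say counter() for 1 .. 3;   # prints 1, 2, 3
```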
###
The 'switch' feature
**WARNING**: This feature is still experimental and the implementation may change or be removed in future versions of Perl. For this reason, Perl will warn when you use the feature, unless you have explicitly disabled the warning:
```
no warnings "experimental::smartmatch";
```
`use feature 'switch'` tells the compiler to enable the Raku given/when construct.
See ["Switch Statements" in perlsyn](perlsyn#Switch-Statements) for details.
This feature is available starting with Perl 5.10.
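A short sketch of the construct (with the experimental warning silenced as shown above):
```
use feature qw(switch say);
no warnings "experimental::smartmatch";
my $n = 3;
given ($n) {
    when (1) { say "one" }
    when (3) { say "three" }
    default  { say "something else" }
}
```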
###
The 'unicode\_strings' feature
`use feature 'unicode_strings'` tells the compiler to use Unicode rules in all string operations executed within its scope (unless they are also within the scope of either `use locale` or `use bytes`). The same applies to all regular expressions compiled within the scope, even if executed outside it. It does not change the internal representation of strings, but only how they are interpreted.
`no feature 'unicode_strings'` tells the compiler to use the traditional Perl rules wherein the native character set rules are used unless it is clear to Perl that Unicode is desired. This can lead to some surprises when the behavior suddenly changes. (See ["The "Unicode Bug"" in perlunicode](perlunicode#The-%22Unicode-Bug%22) for details.) For this reason, if you are potentially using Unicode in your program, the `use feature 'unicode_strings'` subpragma is **strongly** recommended.
This feature is available starting with Perl 5.12; was almost fully implemented in Perl 5.14; and extended in Perl 5.16 to cover `quotemeta`; was extended further in Perl 5.26 to cover [the range operator](perlop#Range-Operators); and was extended again in Perl 5.28 to cover [special-cased whitespace splitting](perlfunc#split).
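A small illustration of the difference, using LATIN SMALL LETTER SHARP S, which only uppercases correctly under Unicode rules:
```
use feature 'unicode_strings';
my $sharp_s = "\xDF";        # LATIN SMALL LETTER SHARP S
print uc($sharp_s), "\n";    # "SS" under Unicode rules; unchanged without the feature
```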
###
The 'unicode\_eval' and 'evalbytes' features
Together, these two features are intended to replace the legacy string `eval` function, which behaves problematically in some instances. They are available starting with Perl 5.16, and are enabled by default by a `use 5.16` or higher declaration.
`unicode_eval` changes the behavior of plain string `eval` to work more consistently, especially in the Unicode world. Certain (mis)behaviors couldn't be changed without breaking some things that had come to rely on them, so the feature can be enabled and disabled. Details are at ["Under the "unicode\_eval" feature" in perlfunc](perlfunc#Under-the-%22unicode_eval%22-feature).
`evalbytes` is like string `eval`, but it treats its argument as a byte string. Details are at ["evalbytes EXPR" in perlfunc](perlfunc#evalbytes-EXPR). Without a `use feature 'evalbytes'` or a `use v5.16` (or higher) declaration in the current scope, you can still access it by instead writing `CORE::evalbytes`.
###
The 'current\_sub' feature
This provides the `__SUB__` token that returns a reference to the current subroutine or `undef` outside of a subroutine.
This feature is available starting with Perl 5.16.
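This is handy for writing recursive anonymous subroutines, for example:
```
use feature 'current_sub';
my $factorial = sub {
    my ($n) = @_;
    return 1 if $n <= 1;
    return $n * __SUB__->($n - 1);   # recurse into the current (anonymous) sub
};
print $factorial->(5), "\n";   # 120
```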
###
The 'array\_base' feature
This feature supported the legacy `$[` variable. See ["$[" in perlvar](perlvar#%24%5B). It was on by default but disabled under `use v5.16` (see ["IMPLICIT LOADING"](#IMPLICIT-LOADING), below) and unavailable since perl 5.30.
This feature is available under this name starting with Perl 5.16. In previous versions, it was simply on all the time, and this pragma knew nothing about it.
###
The 'fc' feature
`use feature 'fc'` tells the compiler to enable the `fc` function, which implements Unicode casefolding.
See ["fc" in perlfunc](perlfunc#fc) for details.
This feature is available from Perl 5.16 onwards.
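For example, a case-insensitive comparison via full casefolding:
```
use feature 'fc';
use utf8;
# "ß" casefolds to "ss", so this prints "match"
print "match\n" if fc("ß") eq fc("SS");
```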
###
The 'lexical\_subs' feature
In Perl versions prior to 5.26, this feature enabled declaration of subroutines via `my sub foo`, `state sub foo` and `our sub foo` syntax. See ["Lexical Subroutines" in perlsub](perlsub#Lexical-Subroutines) for details.
This feature is available from Perl 5.18 onwards. From Perl 5.18 to 5.24, it was classed as experimental, and Perl emitted a warning for its usage, except when explicitly disabled:
```
no warnings "experimental::lexical_subs";
```
As of Perl 5.26, use of this feature no longer triggers a warning, though the `experimental::lexical_subs` warning category still exists (for compatibility with code that disables it). In addition, this syntax is not only no longer experimental, but it is enabled for all Perl code, regardless of what feature declarations are in scope.
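A minimal example (the `use feature` and `no warnings` lines are only needed on Perls before 5.26):
```
use feature 'lexical_subs';
no warnings "experimental::lexical_subs";
my sub greet {
    my ($name) = @_;
    print "Hello, $name\n";
}
greet('world');
```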
###
The 'postderef' and 'postderef\_qq' features
The 'postderef\_qq' feature extends the applicability of [postfix dereference syntax](perlref#Postfix-Dereference-Syntax) so that postfix array and scalar dereference are available in double-quotish interpolations. For example, it makes the following two statements equivalent:
```
my $s = "[@{ $h->{a} }]";
my $s = "[$h->{a}->@*]";
```
This feature is available from Perl 5.20 onwards. In Perl 5.20 and 5.22, it was classed as experimental, and Perl emitted a warning for its usage, except when explicitly disabled:
```
no warnings "experimental::postderef";
```
As of Perl 5.24, use of this feature no longer triggers a warning, though the `experimental::postderef` warning category still exists (for compatibility with code that disables it).
The 'postderef' feature was used in Perl 5.20 and Perl 5.22 to enable postfix dereference syntax outside double-quotish interpolations. In those versions, using it triggered the `experimental::postderef` warning in the same way as the 'postderef\_qq' feature did. As of Perl 5.24, this syntax is not only no longer experimental, but it is enabled for all Perl code, regardless of what feature declarations are in scope.
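For completeness, a small sketch of postfix dereference outside of interpolation, which needs no feature declaration on Perl 5.24 or later:
```
my $aref = [ 1, 2, 3 ];
my @copy = $aref->@*;          # same as @{ $aref }
my $href = { a => 1, b => 2 };
my @keys = keys $href->%*;     # same as keys %{ $href }
```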
###
The 'signatures' feature
This enables syntax for declaring subroutine arguments as lexical variables. For example, for this subroutine:
```
sub foo ($left, $right) {
return $left + $right;
}
```
Calling `foo(3, 7)` will assign `3` into `$left` and `7` into `$right`.
See ["Signatures" in perlsub](perlsub#Signatures) for details.
This feature is available from Perl 5.20 onwards. From Perl 5.20 to 5.34, it was classed as experimental, and Perl emitted a warning for its usage, except when explicitly disabled:
```
no warnings "experimental::signatures";
```
As of Perl 5.36, use of this feature no longer triggers a warning, though the `experimental::signatures` warning category still exists (for compatibility with code that disables it). This feature is now considered stable, and is enabled automatically by `use v5.36` (or higher).
###
The 'refaliasing' feature
**WARNING**: This feature is still experimental and the implementation may change or be removed in future versions of Perl. For this reason, Perl will warn when you use the feature, unless you have explicitly disabled the warning:
```
no warnings "experimental::refaliasing";
```
This enables aliasing via assignment to references:
```
\$a = \$b; # $a and $b now point to the same scalar
\@a = \@b; # to the same array
\%a = \%b;
\&a = \&b;
foreach \%hash (@array_of_hash_refs) {
...
}
```
See ["Assigning to References" in perlref](perlref#Assigning-to-References) for details.
This feature is available from Perl 5.22 onwards.
###
The 'bitwise' feature
This makes the four standard bitwise operators (`& | ^ ~`) treat their operands consistently as numbers, and introduces four new dotted operators (`&. |. ^. ~.`) that treat their operands consistently as strings. The same applies to the assignment variants (`&= |= ^= &.= |.= ^.=`).
See ["Bitwise String Operators" in perlop](perlop#Bitwise-String-Operators) for details.
This feature is available from Perl 5.22 onwards. Starting in Perl 5.28, `use v5.28` will enable the feature. Before 5.28, it was still experimental and would emit a warning in the "experimental::bitwise" category.
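A brief illustration of the numeric versus string operators (the `no warnings` line is only needed before Perl 5.28):
```
use feature 'bitwise';
no warnings "experimental::bitwise";
my $num = "11" | "2";     # numeric OR: 11 | 2 == 11
my $str = "11" |. "2";    # string OR, byte by byte: "31"
```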
###
The 'declared\_refs' feature
**WARNING**: This feature is still experimental and the implementation may change or be removed in future versions of Perl. For this reason, Perl will warn when you use the feature, unless you have explicitly disabled the warning:
```
no warnings "experimental::declared_refs";
```
This allows a reference to a variable to be declared with `my`, `state`, or `our`, or localized with `local`. It is intended mainly for use in conjunction with the "refaliasing" feature. See ["Declaring a Reference to a Variable" in perlref](perlref#Declaring-a-Reference-to-a-Variable) for examples.
This feature is available from Perl 5.26 onwards.
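A minimal sketch combining it with the 'refaliasing' feature:
```
use feature qw(declared_refs refaliasing);
no warnings qw(experimental::declared_refs experimental::refaliasing);
my @array = (1, 2, 3);
my \@alias = \@array;   # declare @alias as an alias for @array
push @alias, 4;         # @array is now (1, 2, 3, 4)
```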
###
The 'isa' feature
This allows the use of the `isa` infix operator, which tests whether the scalar given by the left operand is an object of the class given by the right operand. See ["Class Instance Operator" in perlop](perlop#Class-Instance-Operator) for more details.
This feature is available from Perl 5.32 onwards. From Perl 5.32 to 5.34, it was classed as experimental, and Perl emitted a warning for its usage, except when explicitly disabled:
```
no warnings "experimental::isa";
```
As of Perl 5.36, use of this feature no longer triggers a warning (though the `experimental::isa` warning category still exists for compatibility with code that disables it). This feature is now considered stable, and is enabled automatically by `use v5.36` (or higher).
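For example (the `no warnings` line in this sketch is only needed before Perl 5.36):
```
use feature 'isa';
no warnings "experimental::isa";
package Animal { sub new { bless {}, shift } }
my $pet = Animal->new;
print "It's an animal\n" if $pet isa Animal;
```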
###
The 'indirect' feature
This feature allows the use of [indirect object syntax](perlobj#Indirect-Object-Syntax) for method calls, e.g. `new Foo 1, 2;`. It is enabled by default, but can be turned off to disallow indirect object syntax.
This feature is available under this name from Perl 5.32 onwards. In previous versions, it was simply on all the time. To disallow (or warn on) indirect object syntax on older Perls, see the <indirect> CPAN module.
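A sketch of turning it off:
```
package Foo { sub new { bless {}, shift } }
no feature 'indirect';
my $ok = Foo->new;     # fine: direct method-call syntax
# my $bad = new Foo;   # a compile-time error once 'indirect' is disabled
```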
###
The 'multidimensional' feature
This feature enables multidimensional array emulation, a perl 4 (or earlier) feature that was used to emulate multidimensional arrays with hashes. This works by converting code like `$foo{$x, $y}` into `$foo{join($;, $x, $y)}`. It is enabled by default, but can be turned off to disable multidimensional array emulation.
When this feature is disabled the syntax that is normally replaced will report a compilation error.
This feature is available under this name from Perl 5.34 onwards. In previous versions, it was simply on all the time.
You can use the <multidimensional> module on CPAN to disable multidimensional array emulation for older versions of Perl.
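A short sketch of what the emulation does and what disabling it forbids:
```
my %foo;
$foo{1, 2} = 'a';   # really $foo{ join($;, 1, 2) } under the default feature set
{
    no feature 'multidimensional';
    # $foo{3, 4} = 'b';   # would now be a compile-time error
}
```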
###
The 'bareword\_filehandles' feature.
This feature enables the use of bareword filehandles with builtin functions, a generally discouraged practice. It is enabled by default, but can be turned off to disable bareword filehandles, except for the exceptions listed below.
The perl built-in filehandles `STDIN`, `STDOUT`, `STDERR`, `DATA`, `ARGV`, `ARGVOUT` and the special `_` are always enabled.
This feature is enabled under this name from Perl 5.34 onwards. In previous versions it was simply on all the time.
You can use the <bareword::filehandles> module on CPAN to disable bareword filehandles for older versions of perl.
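A sketch of code written with the feature turned off (the file name is just a placeholder):
```
no feature 'bareword_filehandles';
# Lexical filehandles still work as usual:
open my $fh, '<', 'data.txt' or die "Cannot open data.txt: $!";
# open FH, '<', 'data.txt';   # would be a compile-time error
while ( my $line = readline $fh ) {
    print $line;
}
close $fh;
```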
###
The 'try' feature.
**WARNING**: This feature is still experimental and the implementation may change or be removed in future versions of Perl. For this reason, Perl will warn when you use the feature, unless you have explicitly disabled the warning:
```
no warnings "experimental::try";
```
This feature enables the `try` and `catch` syntax, which allows exception handling, where exceptions thrown from the body of the block introduced with `try` are caught by executing the body of the `catch` block.
For more information, see ["Try Catch Exception Handling" in perlsyn](perlsyn#Try-Catch-Exception-Handling).
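A minimal sketch of the syntax:
```
use feature 'try';
no warnings "experimental::try";
try {
    die "something went wrong\n";
}
catch ($e) {
    warn "caught: $e";
}
```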
###
The 'defer' feature
**WARNING**: This feature is still experimental and the implementation may change or be removed in future versions of Perl. For this reason, Perl will warn when you use the feature, unless you have explicitly disabled the warning:
```
no warnings "experimental::defer";
```
This feature enables the `defer` block syntax, which allows a block of code to be deferred until when the flow of control leaves the block which contained it. For more details, see ["defer" in perlsyn](perlsyn#defer).
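A small sketch:
```
use feature qw(defer say);
no warnings "experimental::defer";
{
    defer { say "this runs last, when the block is left" }
    say "this runs first";
}
```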
###
The 'extra\_paired\_delimiters' feature
**WARNING**: This feature is still experimental and the implementation may change or be removed in future versions of Perl. For this reason, Perl will warn when you use the feature, unless you have explicitly disabled the warning:
```
no warnings "experimental::extra_paired_delimiters";
```
This feature enables the use of more paired string delimiters than the traditional four, `< >`, `( )`, `{ }`, and `[ ]`. When this feature is on, for example, you can say `qr«pat»`.
This feature is available starting in Perl 5.36.
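A small sketch; the `use utf8` is needed because the delimiters themselves are non-ASCII characters in the source:
```
use feature 'extra_paired_delimiters';
no warnings "experimental::extra_paired_delimiters";
use utf8;
my $str = q«a quoted string»;
my $re  = qr「some|pattern」;
```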
The complete list of accepted paired delimiters as of Unicode 14.0 is:
```
( ) U+0028, U+0029 LEFT/RIGHT PARENTHESIS
< > U+003C, U+003E LESS-THAN/GREATER-THAN SIGN
[ ] U+005B, U+005D LEFT/RIGHT SQUARE BRACKET
{ } U+007B, U+007D LEFT/RIGHT CURLY BRACKET
« » U+00AB, U+00BB LEFT/RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
» « U+00BB, U+00AB RIGHT/LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
܆ ܇ U+0706, U+0707 SYRIAC COLON SKEWED LEFT/RIGHT
༺ ༻ U+0F3A, U+0F3B TIBETAN MARK GUG RTAGS GYON, TIBETAN MARK GUG
RTAGS GYAS
༼ ༽ U+0F3C, U+0F3D TIBETAN MARK ANG KHANG GYON, TIBETAN MARK ANG
KHANG GYAS
᚛ ᚜ U+169B, U+169C OGHAM FEATHER MARK, OGHAM REVERSED FEATHER MARK
‘ ’ U+2018, U+2019 LEFT/RIGHT SINGLE QUOTATION MARK
’ ‘ U+2019, U+2018 RIGHT/LEFT SINGLE QUOTATION MARK
“ ” U+201C, U+201D LEFT/RIGHT DOUBLE QUOTATION MARK
” “ U+201D, U+201C RIGHT/LEFT DOUBLE QUOTATION MARK
‵ ′ U+2035, U+2032 REVERSED PRIME, PRIME
‶ ″ U+2036, U+2033 REVERSED DOUBLE PRIME, DOUBLE PRIME
‷ ‴ U+2037, U+2034 REVERSED TRIPLE PRIME, TRIPLE PRIME
‹ › U+2039, U+203A SINGLE LEFT/RIGHT-POINTING ANGLE QUOTATION MARK
› ‹ U+203A, U+2039 SINGLE RIGHT/LEFT-POINTING ANGLE QUOTATION MARK
⁅ ⁆ U+2045, U+2046 LEFT/RIGHT SQUARE BRACKET WITH QUILL
⁍ ⁌ U+204D, U+204C BLACK RIGHT/LEFTWARDS BULLET
⁽ ⁾ U+207D, U+207E SUPERSCRIPT LEFT/RIGHT PARENTHESIS
₍ ₎ U+208D, U+208E SUBSCRIPT LEFT/RIGHT PARENTHESIS
→ ← U+2192, U+2190 RIGHT/LEFTWARDS ARROW
↛ ↚ U+219B, U+219A RIGHT/LEFTWARDS ARROW WITH STROKE
↝ ↜ U+219D, U+219C RIGHT/LEFTWARDS WAVE ARROW
↠ ↞ U+21A0, U+219E RIGHT/LEFTWARDS TWO HEADED ARROW
↣ ↢ U+21A3, U+21A2 RIGHT/LEFTWARDS ARROW WITH TAIL
↦ ↤ U+21A6, U+21A4 RIGHT/LEFTWARDS ARROW FROM BAR
↪ ↩ U+21AA, U+21A9 RIGHT/LEFTWARDS ARROW WITH HOOK
↬ ↫ U+21AC, U+21AB RIGHT/LEFTWARDS ARROW WITH LOOP
↱ ↰ U+21B1, U+21B0 UPWARDS ARROW WITH TIP RIGHT/LEFTWARDS
↳ ↲ U+21B3, U+21B2 DOWNWARDS ARROW WITH TIP RIGHT/LEFTWARDS
⇀ ↼ U+21C0, U+21BC RIGHT/LEFTWARDS HARPOON WITH BARB UPWARDS
⇁ ↽ U+21C1, U+21BD RIGHT/LEFTWARDS HARPOON WITH BARB DOWNWARDS
⇉ ⇇ U+21C9, U+21C7 RIGHT/LEFTWARDS PAIRED ARROWS
⇏ ⇍ U+21CF, U+21CD RIGHT/LEFTWARDS DOUBLE ARROW WITH STROKE
⇒ ⇐ U+21D2, U+21D0 RIGHT/LEFTWARDS DOUBLE ARROW
⇛ ⇚ U+21DB, U+21DA RIGHT/LEFTWARDS TRIPLE ARROW
⇝ ⇜ U+21DD, U+21DC RIGHT/LEFTWARDS SQUIGGLE ARROW
⇢ ⇠ U+21E2, U+21E0 RIGHT/LEFTWARDS DASHED ARROW
⇥ ⇤ U+21E5, U+21E4 RIGHT/LEFTWARDS ARROW TO BAR
⇨ ⇦ U+21E8, U+21E6 RIGHT/LEFTWARDS WHITE ARROW
⇴ ⬰ U+21F4, U+2B30 RIGHT/LEFT ARROW WITH SMALL CIRCLE
⇶ ⬱ U+21F6, U+2B31 THREE RIGHT/LEFTWARDS ARROWS
⇸ ⇷ U+21F8, U+21F7 RIGHT/LEFTWARDS ARROW WITH VERTICAL STROKE
⇻ ⇺ U+21FB, U+21FA RIGHT/LEFTWARDS ARROW WITH DOUBLE VERTICAL
STROKE
⇾ ⇽ U+21FE, U+21FD RIGHT/LEFTWARDS OPEN-HEADED ARROW
∈ ∋ U+2208, U+220B ELEMENT OF, CONTAINS AS MEMBER
∉ ∌ U+2209, U+220C NOT AN ELEMENT OF, DOES NOT CONTAIN AS MEMBER
∊ ∍ U+220A, U+220D SMALL ELEMENT OF, SMALL CONTAINS AS MEMBER
≤ ≥ U+2264, U+2265 LESS-THAN/GREATER-THAN OR EQUAL TO
≦ ≧ U+2266, U+2267 LESS-THAN/GREATER-THAN OVER EQUAL TO
≨ ≩ U+2268, U+2269 LESS-THAN/GREATER-THAN BUT NOT EQUAL TO
≪ ≫ U+226A, U+226B MUCH LESS-THAN/GREATER-THAN
≮ ≯ U+226E, U+226F NOT LESS-THAN/GREATER-THAN
≰ ≱ U+2270, U+2271 NEITHER LESS-THAN/GREATER-THAN NOR EQUAL TO
≲ ≳ U+2272, U+2273 LESS-THAN/GREATER-THAN OR EQUIVALENT TO
≴ ≵ U+2274, U+2275 NEITHER LESS-THAN/GREATER-THAN NOR EQUIVALENT TO
≺ ≻ U+227A, U+227B PRECEDES/SUCCEEDS
≼ ≽ U+227C, U+227D PRECEDES/SUCCEEDS OR EQUAL TO
≾ ≿ U+227E, U+227F PRECEDES/SUCCEEDS OR EQUIVALENT TO
⊀ ⊁ U+2280, U+2281 DOES NOT PRECEDE/SUCCEED
⊂ ⊃ U+2282, U+2283 SUBSET/SUPERSET OF
⊄ ⊅ U+2284, U+2285 NOT A SUBSET/SUPERSET OF
⊆ ⊇ U+2286, U+2287 SUBSET/SUPERSET OF OR EQUAL TO
⊈ ⊉ U+2288, U+2289 NEITHER A SUBSET/SUPERSET OF NOR EQUAL TO
⊊ ⊋ U+228A, U+228B SUBSET/SUPERSET OF WITH NOT EQUAL TO
⊣ ⊢ U+22A3, U+22A2 LEFT/RIGHT TACK
⊦ ⫞ U+22A6, U+2ADE ASSERTION, SHORT LEFT TACK
⊨ ⫤ U+22A8, U+2AE4 TRUE, VERTICAL BAR DOUBLE LEFT TURNSTILE
⊩ ⫣ U+22A9, U+2AE3 FORCES, DOUBLE VERTICAL BAR LEFT TURNSTILE
⊰ ⊱ U+22B0, U+22B1 PRECEDES/SUCCEEDS UNDER RELATION
⋐ ⋑ U+22D0, U+22D1 DOUBLE SUBSET/SUPERSET
⋖ ⋗ U+22D6, U+22D7 LESS-THAN/GREATER-THAN WITH DOT
⋘ ⋙ U+22D8, U+22D9 VERY MUCH LESS-THAN/GREATER-THAN
⋜ ⋝ U+22DC, U+22DD EQUAL TO OR LESS-THAN/GREATER-THAN
⋞ ⋟ U+22DE, U+22DF EQUAL TO OR PRECEDES/SUCCEEDS
⋠ ⋡ U+22E0, U+22E1 DOES NOT PRECEDE/SUCCEED OR EQUAL
⋦ ⋧ U+22E6, U+22E7 LESS-THAN/GREATER-THAN BUT NOT EQUIVALENT TO
⋨ ⋩ U+22E8, U+22E9 PRECEDES/SUCCEEDS BUT NOT EQUIVALENT TO
⋲ ⋺ U+22F2, U+22FA ELEMENT OF/CONTAINS WITH LONG HORIZONTAL STROKE
⋳ ⋻ U+22F3, U+22FB ELEMENT OF/CONTAINS WITH VERTICAL BAR AT END OF
HORIZONTAL STROKE
⋴ ⋼ U+22F4, U+22FC SMALL ELEMENT OF/CONTAINS WITH VERTICAL BAR AT
END OF HORIZONTAL STROKE
⋶ ⋽ U+22F6, U+22FD ELEMENT OF/CONTAINS WITH OVERBAR
⋷ ⋾ U+22F7, U+22FE SMALL ELEMENT OF/CONTAINS WITH OVERBAR
⌈ ⌉ U+2308, U+2309 LEFT/RIGHT CEILING
⌊ ⌋ U+230A, U+230B LEFT/RIGHT FLOOR
⌦ ⌫ U+2326, U+232B ERASE TO THE RIGHT/LEFT
⟨ ⟩ U+2329, U+232A LEFT/RIGHT-POINTING ANGLE BRACKET
⍈ ⍇ U+2348, U+2347 APL FUNCTIONAL SYMBOL QUAD RIGHT/LEFTWARDS ARROW
⏩ ⏪ U+23E9, U+23EA BLACK RIGHT/LEFT-POINTING DOUBLE TRIANGLE
⏭ ⏮ U+23ED, U+23EE BLACK RIGHT/LEFT-POINTING DOUBLE TRIANGLE WITH
VERTICAL BAR
☛ ☚ U+261B, U+261A BLACK RIGHT/LEFT POINTING INDEX
☞ ☜ U+261E, U+261C WHITE RIGHT/LEFT POINTING INDEX
⚞ ⚟ U+269E, U+269F THREE LINES CONVERGING RIGHT/LEFT
❨ ❩ U+2768, U+2769 MEDIUM LEFT/RIGHT PARENTHESIS ORNAMENT
❪ ❫ U+276A, U+276B MEDIUM FLATTENED LEFT/RIGHT PARENTHESIS ORNAMENT
❬ ❭ U+276C, U+276D MEDIUM LEFT/RIGHT-POINTING ANGLE BRACKET
ORNAMENT
❮ ❯ U+276E, U+276F HEAVY LEFT/RIGHT-POINTING ANGLE QUOTATION MARK
ORNAMENT
❰ ❱ U+2770, U+2771 HEAVY LEFT/RIGHT-POINTING ANGLE BRACKET ORNAMENT
❲ ❳ U+2772, U+2773 LIGHT LEFT/RIGHT TORTOISE SHELL BRACKET ORNAMENT
❴ ❵ U+2774, U+2775 MEDIUM LEFT/RIGHT CURLY BRACKET ORNAMENT
⟃ ⟄ U+27C3, U+27C4 OPEN SUBSET/SUPERSET
⟅ ⟆ U+27C5, U+27C6 LEFT/RIGHT S-SHAPED BAG DELIMITER
⟈ ⟉ U+27C8, U+27C9 REVERSE SOLIDUS PRECEDING SUBSET, SUPERSET
PRECEDING SOLIDUS
⟞ ⟝ U+27DE, U+27DD LONG LEFT/RIGHT TACK
⟦ ⟧ U+27E6, U+27E7 MATHEMATICAL LEFT/RIGHT WHITE SQUARE BRACKET
⟨ ⟩ U+27E8, U+27E9 MATHEMATICAL LEFT/RIGHT ANGLE BRACKET
⟪ ⟫ U+27EA, U+27EB MATHEMATICAL LEFT/RIGHT DOUBLE ANGLE BRACKET
⟬ ⟭ U+27EC, U+27ED MATHEMATICAL LEFT/RIGHT WHITE TORTOISE SHELL
BRACKET
⟮ ⟯ U+27EE, U+27EF MATHEMATICAL LEFT/RIGHT FLATTENED PARENTHESIS
⟴ ⬲ U+27F4, U+2B32 RIGHT/LEFT ARROW WITH CIRCLED PLUS
⟶ ⟵ U+27F6, U+27F5 LONG RIGHT/LEFTWARDS ARROW
⟹ ⟸ U+27F9, U+27F8 LONG RIGHT/LEFTWARDS DOUBLE ARROW
⟼ ⟻ U+27FC, U+27FB LONG RIGHT/LEFTWARDS ARROW FROM BAR
⟾ ⟽ U+27FE, U+27FD LONG RIGHT/LEFTWARDS DOUBLE ARROW FROM BAR
⟿ ⬳ U+27FF, U+2B33 LONG RIGHT/LEFTWARDS SQUIGGLE ARROW
⤀ ⬴ U+2900, U+2B34 RIGHT/LEFTWARDS TWO-HEADED ARROW WITH VERTICAL
STROKE
⤁ ⬵ U+2901, U+2B35 RIGHT/LEFTWARDS TWO-HEADED ARROW WITH DOUBLE
VERTICAL STROKE
⤃ ⤂ U+2903, U+2902 RIGHT/LEFTWARDS DOUBLE ARROW WITH VERTICAL
STROKE
⤅ ⬶ U+2905, U+2B36 RIGHT/LEFTWARDS TWO-HEADED ARROW FROM BAR
⤇ ⤆ U+2907, U+2906 RIGHT/LEFTWARDS DOUBLE ARROW FROM BAR
⤍ ⤌ U+290D, U+290C RIGHT/LEFTWARDS DOUBLE DASH ARROW
⤏ ⤎ U+290F, U+290E RIGHT/LEFTWARDS TRIPLE DASH ARROW
⤐ ⬷ U+2910, U+2B37 RIGHT/LEFTWARDS TWO-HEADED TRIPLE DASH ARROW
⤑ ⬸ U+2911, U+2B38 RIGHT/LEFTWARDS ARROW WITH DOTTED STEM
⤔ ⬹ U+2914, U+2B39 RIGHT/LEFTWARDS ARROW WITH TAIL WITH VERTICAL
STROKE
⤕ ⬺ U+2915, U+2B3A RIGHT/LEFTWARDS ARROW WITH TAIL WITH DOUBLE
VERTICAL STROKE
⤖ ⬻ U+2916, U+2B3B RIGHT/LEFTWARDS TWO-HEADED ARROW WITH TAIL
⤗ ⬼ U+2917, U+2B3C RIGHT/LEFTWARDS TWO-HEADED ARROW WITH TAIL WITH
VERTICAL STROKE
⤘ ⬽ U+2918, U+2B3D RIGHT/LEFTWARDS TWO-HEADED ARROW WITH TAIL WITH
DOUBLE VERTICAL STROKE
⤚ ⤙ U+291A, U+2919 RIGHT/LEFTWARDS ARROW-TAIL
⤜ ⤛ U+291C, U+291B RIGHT/LEFTWARDS DOUBLE ARROW-TAIL
⤞ ⤝ U+291E, U+291D RIGHT/LEFTWARDS ARROW TO BLACK DIAMOND
⤠ ⤟ U+2920, U+291F RIGHT/LEFTWARDS ARROW FROM BAR TO BLACK DIAMOND
⤳ ⬿ U+2933, U+2B3F WAVE ARROW POINTING DIRECTLY RIGHT/LEFT
⤷ ⤶ U+2937, U+2936 ARROW POINTING DOWNWARDS THEN CURVING RIGHT/
LEFTWARDS
⥅ ⥆ U+2945, U+2946 RIGHT/LEFTWARDS ARROW WITH PLUS BELOW
⥇ ⬾ U+2947, U+2B3E RIGHT/LEFTWARDS ARROW THROUGH X
⥓ ⥒ U+2953, U+2952 RIGHT/LEFTWARDS HARPOON WITH BARB UP TO BAR
⥗ ⥖ U+2957, U+2956 RIGHT/LEFTWARDS HARPOON WITH BARB DOWN TO BAR
⥛ ⥚ U+295B, U+295A RIGHT/LEFTWARDS HARPOON WITH BARB UP FROM BAR
⥟ ⥞ U+295F, U+295E RIGHT/LEFTWARDS HARPOON WITH BARB DOWN FROM BAR
⥤ ⥢ U+2964, U+2962 RIGHT/LEFTWARDS HARPOON WITH BARB UP ABOVE
RIGHT/LEFTWARDS HARPOON WITH BARB DOWN
⥬ ⥪ U+296C, U+296A RIGHT/LEFTWARDS HARPOON WITH BARB UP ABOVE LONG
DASH
⥭ ⥫ U+296D, U+296B RIGHT/LEFTWARDS HARPOON WITH BARB DOWN BELOW
LONG DASH
⥱ ⭀ U+2971, U+2B40 EQUALS SIGN ABOVE RIGHT/LEFTWARDS ARROW
⥲ ⭁ U+2972, U+2B41 TILDE OPERATOR ABOVE RIGHTWARDS ARROW, REVERSE
TILDE OPERATOR ABOVE LEFTWARDS ARROW
⥴ ⭋ U+2974, U+2B4B RIGHTWARDS ARROW ABOVE TILDE OPERATOR,
LEFTWARDS ARROW ABOVE REVERSE TILDE OPERATOR
⥵ ⭂ U+2975, U+2B42 RIGHTWARDS ARROW ABOVE ALMOST EQUAL TO,
LEFTWARDS ARROW ABOVE REVERSE ALMOST EQUAL TO
⥹ ⥻ U+2979, U+297B SUBSET/SUPERSET ABOVE RIGHT/LEFTWARDS ARROW
⦃ ⦄ U+2983, U+2984 LEFT/RIGHT WHITE CURLY BRACKET
⦅ ⦆ U+2985, U+2986 LEFT/RIGHT WHITE PARENTHESIS
⦇ ⦈ U+2987, U+2988 Z NOTATION LEFT/RIGHT IMAGE BRACKET
⦉ ⦊ U+2989, U+298A Z NOTATION LEFT/RIGHT BINDING BRACKET
⦋ ⦌ U+298B, U+298C LEFT/RIGHT SQUARE BRACKET WITH UNDERBAR
⦍ ⦐ U+298D, U+2990 LEFT/RIGHT SQUARE BRACKET WITH TICK IN TOP
CORNER
⦏ ⦎ U+298F, U+298E LEFT/RIGHT SQUARE BRACKET WITH TICK IN BOTTOM
CORNER
⦑ ⦒ U+2991, U+2992 LEFT/RIGHT ANGLE BRACKET WITH DOT
⦓ ⦔ U+2993, U+2994 LEFT/RIGHT ARC LESS-THAN/GREATER-THAN BRACKET
⦕ ⦖ U+2995, U+2996 DOUBLE LEFT/RIGHT ARC GREATER-THAN/LESS-THAN
BRACKET
⦗ ⦘ U+2997, U+2998 LEFT/RIGHT BLACK TORTOISE SHELL BRACKET
⦨ ⦩ U+29A8, U+29A9 MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW
POINTING UP AND RIGHT/LEFT
⦪ ⦫ U+29AA, U+29AB MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW
POINTING DOWN AND RIGHT/LEFT
⦳ ⦴ U+29B3, U+29B4 EMPTY SET WITH RIGHT/LEFT ARROW ABOVE
⧀ ⧁ U+29C0, U+29C1 CIRCLED LESS-THAN/GREATER-THAN
⧘ ⧙ U+29D8, U+29D9 LEFT/RIGHT WIGGLY FENCE
⧚ ⧛ U+29DA, U+29DB LEFT/RIGHT DOUBLE WIGGLY FENCE
⧼ ⧽ U+29FC, U+29FD LEFT/RIGHT-POINTING CURVED ANGLE BRACKET
⩹ ⩺ U+2A79, U+2A7A LESS-THAN/GREATER-THAN WITH CIRCLE INSIDE
⩻ ⩼ U+2A7B, U+2A7C LESS-THAN/GREATER-THAN WITH QUESTION MARK ABOVE
⩽ ⩾ U+2A7D, U+2A7E LESS-THAN/GREATER-THAN OR SLANTED EQUAL TO
⩿ ⪀ U+2A7F, U+2A80 LESS-THAN/GREATER-THAN OR SLANTED EQUAL TO WITH
DOT INSIDE
⪁ ⪂ U+2A81, U+2A82 LESS-THAN/GREATER-THAN OR SLANTED EQUAL TO WITH
DOT ABOVE
⪃ ⪄ U+2A83, U+2A84 LESS-THAN/GREATER-THAN OR SLANTED EQUAL TO WITH
DOT ABOVE RIGHT/LEFT
⪅ ⪆ U+2A85, U+2A86 LESS-THAN/GREATER-THAN OR APPROXIMATE
⪇ ⪈ U+2A87, U+2A88 LESS-THAN/GREATER-THAN AND SINGLE-LINE NOT
EQUAL TO
⪉ ⪊ U+2A89, U+2A8A LESS-THAN/GREATER-THAN AND NOT APPROXIMATE
⪍ ⪎ U+2A8D, U+2A8E LESS-THAN/GREATER-THAN ABOVE SIMILAR OR EQUAL
⪕ ⪖ U+2A95, U+2A96 SLANTED EQUAL TO OR LESS-THAN/GREATER-THAN
⪗ ⪘ U+2A97, U+2A98 SLANTED EQUAL TO OR LESS-THAN/GREATER-THAN WITH
DOT INSIDE
⪙ ⪚ U+2A99, U+2A9A DOUBLE-LINE EQUAL TO OR LESS-THAN/GREATER-THAN
⪛ ⪜ U+2A9B, U+2A9C DOUBLE-LINE SLANTED EQUAL TO OR LESS-THAN/
GREATER-THAN
⪝ ⪞ U+2A9D, U+2A9E SIMILAR OR LESS-THAN/GREATER-THAN
⪟ ⪠ U+2A9F, U+2AA0 SIMILAR ABOVE LESS-THAN/GREATER-THAN ABOVE
EQUALS SIGN
⪡ ⪢ U+2AA1, U+2AA2 DOUBLE NESTED LESS-THAN/GREATER-THAN
⪦ ⪧ U+2AA6, U+2AA7 LESS-THAN/GREATER-THAN CLOSED BY CURVE
⪨ ⪩ U+2AA8, U+2AA9 LESS-THAN/GREATER-THAN CLOSED BY CURVE ABOVE
SLANTED EQUAL
⪪ ⪫ U+2AAA, U+2AAB SMALLER THAN/LARGER THAN
⪬ ⪭ U+2AAC, U+2AAD SMALLER THAN/LARGER THAN OR EQUAL TO
⪯ ⪰ U+2AAF, U+2AB0 PRECEDES/SUCCEEDS ABOVE SINGLE-LINE EQUALS SIGN
⪱ ⪲ U+2AB1, U+2AB2 PRECEDES/SUCCEEDS ABOVE SINGLE-LINE NOT EQUAL TO
⪳ ⪴ U+2AB3, U+2AB4 PRECEDES/SUCCEEDS ABOVE EQUALS SIGN
⪵ ⪶ U+2AB5, U+2AB6 PRECEDES/SUCCEEDS ABOVE NOT EQUAL TO
⪷ ⪸ U+2AB7, U+2AB8 PRECEDES/SUCCEEDS ABOVE ALMOST EQUAL TO
⪹ ⪺ U+2AB9, U+2ABA PRECEDES/SUCCEEDS ABOVE NOT ALMOST EQUAL TO
⪻ ⪼ U+2ABB, U+2ABC DOUBLE PRECEDES/SUCCEEDS
⪽ ⪾ U+2ABD, U+2ABE SUBSET/SUPERSET WITH DOT
⪿ ⫀ U+2ABF, U+2AC0 SUBSET/SUPERSET WITH PLUS SIGN BELOW
⫁ ⫂ U+2AC1, U+2AC2 SUBSET/SUPERSET WITH MULTIPLICATION SIGN BELOW
⫃ ⫄ U+2AC3, U+2AC4 SUBSET/SUPERSET OF OR EQUAL TO WITH DOT ABOVE
⫅ ⫆ U+2AC5, U+2AC6 SUBSET/SUPERSET OF ABOVE EQUALS SIGN
⫇ ⫈ U+2AC7, U+2AC8 SUBSET/SUPERSET OF ABOVE TILDE OPERATOR
⫉ ⫊ U+2AC9, U+2ACA SUBSET/SUPERSET OF ABOVE ALMOST EQUAL TO
⫋ ⫌ U+2ACB, U+2ACC SUBSET/SUPERSET OF ABOVE NOT EQUAL TO
⫏ ⫐ U+2ACF, U+2AD0 CLOSED SUBSET/SUPERSET
⫑ ⫒ U+2AD1, U+2AD2 CLOSED SUBSET/SUPERSET OR EQUAL TO
⫕ ⫖ U+2AD5, U+2AD6 SUBSET/SUPERSET ABOVE SUBSET/SUPERSET
⫥ ⊫ U+2AE5, U+22AB DOUBLE VERTICAL BAR DOUBLE LEFT/RIGHT TURNSTILE
⫷ ⫸ U+2AF7, U+2AF8 TRIPLE NESTED LESS-THAN/GREATER-THAN
⫹ ⫺ U+2AF9, U+2AFA DOUBLE-LINE SLANTED LESS-THAN/GREATER-THAN OR
EQUAL TO
⭆ ⭅ U+2B46, U+2B45 RIGHT/LEFTWARDS QUADRUPLE ARROW
⭇ ⭉ U+2B47, U+2B49 REVERSE TILDE OPERATOR ABOVE RIGHTWARDS ARROW,
TILDE OPERATOR ABOVE LEFTWARDS ARROW
⭈ ⭊ U+2B48, U+2B4A RIGHTWARDS ARROW ABOVE REVERSE ALMOST EQUAL
TO, LEFTWARDS ARROW ABOVE ALMOST EQUAL TO
⭌ ⥳ U+2B4C, U+2973 RIGHTWARDS ARROW ABOVE REVERSE TILDE OPERATOR,
LEFTWARDS ARROW ABOVE TILDE OPERATOR
⭢ ⭠ U+2B62, U+2B60 RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW
⭬ ⭪ U+2B6C, U+2B6A RIGHT/LEFTWARDS TRIANGLE-HEADED DASHED ARROW
⭲ ⭰ U+2B72, U+2B70 RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW TO BAR
⭼ ⭺ U+2B7C, U+2B7A RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW WITH
DOUBLE VERTICAL STROKE
⮆ ⮄ U+2B86, U+2B84 RIGHT/LEFTWARDS TRIANGLE-HEADED PAIRED ARROWS
⮊ ⮈ U+2B8A, U+2B88 RIGHT/LEFTWARDS BLACK CIRCLED WHITE ARROW
⮕ ⬅ U+2B95, U+2B05 RIGHT/LEFTWARDS BLACK ARROW
⮚ ⮘ U+2B9A, U+2B98 THREE-D TOP-LIGHTED RIGHT/LEFTWARDS EQUILATERAL
ARROWHEAD
⮞ ⮜ U+2B9E, U+2B9C BLACK RIGHT/LEFTWARDS EQUILATERAL ARROWHEAD
⮡ ⮠ U+2BA1, U+2BA0 DOWNWARDS TRIANGLE-HEADED ARROW WITH LONG TIP
RIGHT/LEFTWARDS
⮣ ⮢ U+2BA3, U+2BA2 UPWARDS TRIANGLE-HEADED ARROW WITH LONG TIP
RIGHT/LEFTWARDS
⮩ ⮨ U+2BA9, U+2BA8 BLACK CURVED DOWNWARDS AND RIGHT/LEFTWARDS ARROW
⮫ ⮪ U+2BAB, U+2BAA BLACK CURVED UPWARDS AND RIGHT/LEFTWARDS ARROW
⮱ ⮰ U+2BB1, U+2BB0 RIBBON ARROW DOWN RIGHT/LEFT
⮳ ⮲ U+2BB3, U+2BB2 RIBBON ARROW UP RIGHT/LEFT
⯮ ⯬ U+2BEE, U+2BEC RIGHT/LEFTWARDS TWO-HEADED ARROW WITH TRIANGLE
ARROWHEADS
⸂ ⸃ U+2E02, U+2E03 LEFT/RIGHT SUBSTITUTION BRACKET
⸃ ⸂ U+2E03, U+2E02 RIGHT/LEFT SUBSTITUTION BRACKET
⸄ ⸅ U+2E04, U+2E05 LEFT/RIGHT DOTTED SUBSTITUTION BRACKET
⸅ ⸄ U+2E05, U+2E04 RIGHT/LEFT DOTTED SUBSTITUTION BRACKET
⸉ ⸊ U+2E09, U+2E0A LEFT/RIGHT TRANSPOSITION BRACKET
⸊ ⸉ U+2E0A, U+2E09 RIGHT/LEFT TRANSPOSITION BRACKET
⸌ ⸍ U+2E0C, U+2E0D LEFT/RIGHT RAISED OMISSION BRACKET
⸍ ⸌ U+2E0D, U+2E0C RIGHT/LEFT RAISED OMISSION BRACKET
⸑ ⸐ U+2E11, U+2E10 REVERSED FORKED PARAGRAPHOS, FORKED PARAGRAPHOS
⸜ ⸝ U+2E1C, U+2E1D LEFT/RIGHT LOW PARAPHRASE BRACKET
⸝ ⸜ U+2E1D, U+2E1C RIGHT/LEFT LOW PARAPHRASE BRACKET
⸠ ⸡ U+2E20, U+2E21 LEFT/RIGHT VERTICAL BAR WITH QUILL
⸡ ⸠ U+2E21, U+2E20 RIGHT/LEFT VERTICAL BAR WITH QUILL
⸢ ⸣ U+2E22, U+2E23 TOP LEFT/RIGHT HALF BRACKET
⸤ ⸥ U+2E24, U+2E25 BOTTOM LEFT/RIGHT HALF BRACKET
⸦ ⸧ U+2E26, U+2E27 LEFT/RIGHT SIDEWAYS U BRACKET
⸨ ⸩ U+2E28, U+2E29 LEFT/RIGHT DOUBLE PARENTHESIS
⸶ ⸷ U+2E36, U+2E37 DAGGER WITH LEFT/RIGHT GUARD
⹂ „ U+2E42, U+201E DOUBLE LOW-REVERSED-9 QUOTATION MARK, DOUBLE
LOW-9 QUOTATION MARK
⹕ ⹖ U+2E55, U+2E56 LEFT/RIGHT SQUARE BRACKET WITH STROKE
⹗ ⹘ U+2E57, U+2E58 LEFT/RIGHT SQUARE BRACKET WITH DOUBLE STROKE
⹙ ⹚ U+2E59, U+2E5A TOP HALF LEFT/RIGHT PARENTHESIS
⹛ ⹜ U+2E5B, U+2E5C BOTTOM HALF LEFT/RIGHT PARENTHESIS
〈 〉 U+3008, U+3009 LEFT/RIGHT ANGLE BRACKET
《 》 U+300A, U+300B LEFT/RIGHT DOUBLE ANGLE BRACKET
「 」 U+300C, U+300D LEFT/RIGHT CORNER BRACKET
『 』 U+300E, U+300F LEFT/RIGHT WHITE CORNER BRACKET
【 】 U+3010, U+3011 LEFT/RIGHT BLACK LENTICULAR BRACKET
〔 〕 U+3014, U+3015 LEFT/RIGHT TORTOISE SHELL BRACKET
〖 〗 U+3016, U+3017 LEFT/RIGHT WHITE LENTICULAR BRACKET
〘 〙 U+3018, U+3019 LEFT/RIGHT WHITE TORTOISE SHELL BRACKET
〚 〛 U+301A, U+301B LEFT/RIGHT WHITE SQUARE BRACKET
〝 〞 U+301D, U+301E REVERSED DOUBLE PRIME QUOTATION MARK, DOUBLE
PRIME QUOTATION MARK
꧁ ꧂ U+A9C1, U+A9C2 JAVANESE LEFT/RIGHT RERENGGAN
﴾ ﴿ U+FD3E, U+FD3F ORNATE LEFT/RIGHT PARENTHESIS
﹙ ﹚ U+FE59, U+FE5A SMALL LEFT/RIGHT PARENTHESIS
﹛ ﹜ U+FE5B, U+FE5C SMALL LEFT/RIGHT CURLY BRACKET
﹝ ﹞ U+FE5D, U+FE5E SMALL LEFT/RIGHT TORTOISE SHELL BRACKET
﹤ ﹥ U+FE64, U+FE65 SMALL LESS-THAN/GREATER-THAN SIGN
( ) U+FF08, U+FF09 FULLWIDTH LEFT/RIGHT PARENTHESIS
< > U+FF1C, U+FF1E FULLWIDTH LESS-THAN/GREATER-THAN SIGN
[ ] U+FF3B, U+FF3D FULLWIDTH LEFT/RIGHT SQUARE BRACKET
{ } U+FF5B, U+FF5D FULLWIDTH LEFT/RIGHT CURLY BRACKET
⦅ ⦆ U+FF5F, U+FF60 FULLWIDTH LEFT/RIGHT WHITE PARENTHESIS
「 」 U+FF62, U+FF63 HALFWIDTH LEFT/RIGHT CORNER BRACKET
→ ← U+FFEB, U+FFE9 HALFWIDTH RIGHT/LEFTWARDS ARROW
𝄃 𝄂 U+1D103, U+1D102 MUSICAL SYMBOL REVERSE FINAL BARLINE, MUSICAL
SYMBOL FINAL BARLINE
𝄆 𝄇 U+1D106, U+1D107 MUSICAL SYMBOL LEFT/RIGHT REPEAT SIGN
👉 👈 U+1F449, U+1F448 WHITE RIGHT/LEFT POINTING BACKHAND INDEX
🔈 🕨 U+1F508, U+1F568 SPEAKER, RIGHT SPEAKER
🔉 🕩 U+1F509, U+1F569 SPEAKER WITH ONE SOUND WAVE, RIGHT SPEAKER WITH
ONE SOUND WAVE
🔊 🕪 U+1F50A, U+1F56A SPEAKER WITH THREE SOUND WAVES, RIGHT SPEAKER
WITH THREE SOUND WAVES
🕻 🕽 U+1F57B, U+1F57D LEFT/RIGHT HAND TELEPHONE RECEIVER
🖙 🖘 U+1F599, U+1F598 SIDEWAYS WHITE RIGHT/LEFT POINTING INDEX
🖛 🖚 U+1F59B, U+1F59A SIDEWAYS BLACK RIGHT/LEFT POINTING INDEX
🖝 🖜 U+1F59D, U+1F59C BLACK RIGHT/LEFT POINTING BACKHAND INDEX
🗦 🗧 U+1F5E6, U+1F5E7 THREE RAYS LEFT/RIGHT
🠂 🠀 U+1F802, U+1F800 RIGHT/LEFTWARDS ARROW WITH SMALL TRIANGLE
ARROWHEAD
🠆 🠄 U+1F806, U+1F804 RIGHT/LEFTWARDS ARROW WITH MEDIUM TRIANGLE
ARROWHEAD
🠊 🠈 U+1F80A, U+1F808 RIGHT/LEFTWARDS ARROW WITH LARGE TRIANGLE
ARROWHEAD
🠒 🠐 U+1F812, U+1F810 RIGHT/LEFTWARDS ARROW WITH SMALL EQUILATERAL
ARROWHEAD
🠖 🠔 U+1F816, U+1F814 RIGHT/LEFTWARDS ARROW WITH EQUILATERAL ARROWHEAD
🠚 🠘 U+1F81A, U+1F818 HEAVY RIGHT/LEFTWARDS ARROW WITH EQUILATERAL
ARROWHEAD
🠞 🠜 U+1F81E, U+1F81C HEAVY RIGHT/LEFTWARDS ARROW WITH LARGE
EQUILATERAL ARROWHEAD
🠢 🠠 U+1F822, U+1F820 RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW WITH
NARROW SHAFT
🠦 🠤 U+1F826, U+1F824 RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW WITH
MEDIUM SHAFT
🠪 🠨 U+1F82A, U+1F828 RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW WITH BOLD
SHAFT
🠮 🠬 U+1F82E, U+1F82C RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW WITH
HEAVY SHAFT
🠲 🠰 U+1F832, U+1F830 RIGHT/LEFTWARDS TRIANGLE-HEADED ARROW WITH VERY
HEAVY SHAFT
🠶 🠴 U+1F836, U+1F834 RIGHT/LEFTWARDS FINGER-POST ARROW
🠺 🠸 U+1F83A, U+1F838 RIGHT/LEFTWARDS SQUARED ARROW
🠾 🠼 U+1F83E, U+1F83C RIGHT/LEFTWARDS COMPRESSED ARROW
🡂 🡀 U+1F842, U+1F840 RIGHT/LEFTWARDS HEAVY COMPRESSED ARROW
🡆 🡄 U+1F846, U+1F844 RIGHT/LEFTWARDS HEAVY ARROW
🡒 🡐 U+1F852, U+1F850 RIGHT/LEFTWARDS SANS-SERIF ARROW
🡢 🡠 U+1F862, U+1F860 WIDE-HEADED RIGHT/LEFTWARDS LIGHT BARB ARROW
🡪 🡨 U+1F86A, U+1F868 WIDE-HEADED RIGHT/LEFTWARDS BARB ARROW
🡲 🡰 U+1F872, U+1F870 WIDE-HEADED RIGHT/LEFTWARDS MEDIUM BARB ARROW
🡺 🡸 U+1F87A, U+1F878 WIDE-HEADED RIGHT/LEFTWARDS HEAVY BARB ARROW
🢂 🢀 U+1F882, U+1F880 WIDE-HEADED RIGHT/LEFTWARDS VERY HEAVY BARB
ARROW
🢒 🢐 U+1F892, U+1F890 RIGHT/LEFTWARDS TRIANGLE ARROWHEAD
🢖 🢔 U+1F896, U+1F894 RIGHT/LEFTWARDS WHITE ARROW WITHIN TRIANGLE
ARROWHEAD
🢚 🢘 U+1F89A, U+1F898 RIGHT/LEFTWARDS ARROW WITH NOTCHED TAIL
🢡 🢠 U+1F8A1, U+1F8A0 RIGHTWARDS BOTTOM SHADED WHITE ARROW,
LEFTWARDS BOTTOM-SHADED WHITE ARROW
🢣 🢢 U+1F8A3, U+1F8A2 RIGHT/LEFTWARDS TOP SHADED WHITE ARROW
🢥 🢦 U+1F8A5, U+1F8A6 RIGHT/LEFTWARDS RIGHT-SHADED WHITE ARROW
🢧 🢤 U+1F8A7, U+1F8A4 RIGHT/LEFTWARDS LEFT-SHADED WHITE ARROW
🢩 🢨 U+1F8A9, U+1F8A8 RIGHT/LEFTWARDS BACK-TILTED SHADOWED WHITE ARROW
🢫 🢪 U+1F8AB, U+1F8AA RIGHT/LEFTWARDS FRONT-TILTED SHADOWED WHITE
ARROW
```
FEATURE BUNDLES
----------------
It's possible to load multiple features together, using a *feature bundle*. The name of a feature bundle is prefixed with a colon, to distinguish it from an actual feature.
```
use feature ":5.10";
```
The following feature bundles are available:
```
bundle features included
--------- -----------------
:default indirect multidimensional
bareword_filehandles
:5.10 bareword_filehandles indirect
multidimensional say state switch
:5.12 bareword_filehandles indirect
multidimensional say state switch
unicode_strings
:5.14 bareword_filehandles indirect
multidimensional say state switch
unicode_strings
:5.16 bareword_filehandles current_sub evalbytes
fc indirect multidimensional say state
switch unicode_eval unicode_strings
:5.18 bareword_filehandles current_sub evalbytes
fc indirect multidimensional say state
switch unicode_eval unicode_strings
:5.20 bareword_filehandles current_sub evalbytes
fc indirect multidimensional say state
switch unicode_eval unicode_strings
:5.22 bareword_filehandles current_sub evalbytes
fc indirect multidimensional say state
switch unicode_eval unicode_strings
:5.24 bareword_filehandles current_sub evalbytes
fc indirect multidimensional postderef_qq
say state switch unicode_eval
unicode_strings
:5.26 bareword_filehandles current_sub evalbytes
fc indirect multidimensional postderef_qq
say state switch unicode_eval
unicode_strings
:5.28 bareword_filehandles bitwise current_sub
evalbytes fc indirect multidimensional
postderef_qq say state switch unicode_eval
unicode_strings
:5.30 bareword_filehandles bitwise current_sub
evalbytes fc indirect multidimensional
postderef_qq say state switch unicode_eval
unicode_strings
:5.32 bareword_filehandles bitwise current_sub
evalbytes fc indirect multidimensional
postderef_qq say state switch unicode_eval
unicode_strings
:5.34 bareword_filehandles bitwise current_sub
evalbytes fc indirect multidimensional
postderef_qq say state switch unicode_eval
unicode_strings
:5.36 bareword_filehandles bitwise current_sub
evalbytes fc isa postderef_qq say signatures
state unicode_eval unicode_strings
```
The `:default` bundle represents the feature set that is enabled before any `use feature` or `no feature` declaration.
Specifying sub-versions such as the `0` in `5.14.0` in feature bundles has no effect. Feature bundles are guaranteed to be the same for all sub-versions.
```
use feature ":5.14.0"; # same as ":5.14"
use feature ":5.14.1"; # same as ":5.14"
```
IMPLICIT LOADING
-----------------
Instead of loading feature bundles by name, it is easier to let Perl do implicit loading of a feature bundle for you.
There are two ways to load the `feature` pragma implicitly:
* By using the `-E` switch on the Perl command-line instead of `-e`. That will enable the feature bundle for that version of Perl in the main compilation unit (that is, the one-liner that follows `-E`).
* By explicitly requiring a minimum Perl version number for your program, with the `use VERSION` construct. That is,
```
use v5.10.0;
```
will do an implicit
```
no feature ':all';
use feature ':5.10';
```
and so on. Note how the trailing sub-version is automatically stripped from the version.
But to avoid portability warnings (see ["use" in perlfunc](perlfunc#use)), you may prefer:
```
use 5.010;
```
with the same effect.
If the required version is older than Perl 5.10, the ":default" feature bundle is automatically loaded instead.
Unlike `use feature ":5.12"`, saying `use v5.12` (or any higher version) also does the equivalent of `use strict`; see ["use" in perlfunc](perlfunc#use) for details.
CHECKING FEATURES
------------------
`feature` provides some simple APIs to check which features are enabled.
These functions cannot be imported and must be called by their fully qualified names. If you don't otherwise need to set a feature you will need to ensure `feature` is loaded with:
```
use feature ();
```
feature\_enabled($feature)
feature\_enabled($feature, $depth)
```
package MyStandardEnforcer;
use feature ();
use Carp "croak";
sub import {
croak "disable indirect!" if feature::feature_enabled("indirect");
}
```
Test whether a named feature is enabled at a given level in the call stack, returning a true value if it is. `$depth` defaults to 1, which checks the scope that called the scope calling feature::feature\_enabled().
Croaks for an unknown feature name.
features\_enabled()
features\_enabled($depth)
```
package ReportEnabledFeatures;
use feature "say";
sub import {
say STDERR join " ", feature::features_enabled();
}
```
Returns a list of the features enabled at a given level in the call stack. `$depth` defaults to 1, which checks the scope that called the scope calling feature::features\_enabled().
feature\_bundle()
feature\_bundle($depth)
Returns the feature bundle, if any, selected at a given level in the call stack. `$depth` defaults to 1, which checks the scope that called the scope calling feature::feature\_bundle().
Returns an undefined value if no feature bundle is selected in the scope.
The bundle name returned will be for the earliest bundle matching the selected bundle, so:
```
use feature ();
use v5.12;
BEGIN { print feature::feature_bundle(0); }
```
will print `5.11`.
This returns internal state: at this point, `use v5.12;` sets the feature bundle, but `use feature ":5.12";` does not. This may change in a future release of perl.
| programming_docs |
perl Memoize::AnyDBM_File Memoize::AnyDBM\_File
=====================
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
NAME
----
Memoize::AnyDBM\_File - glue to provide EXISTS for AnyDBM\_File for Storable use
DESCRIPTION
-----------
See [Memoize](memoize).
perl Locale::Maketext::Cookbook Locale::Maketext::Cookbook
==========================
CONTENTS
--------
* [NAME](#NAME)
* [INTRODUCTION](#INTRODUCTION)
* [ONESIDED LEXICONS](#ONESIDED-LEXICONS)
* [DECIMAL PLACES IN NUMBER FORMATTING](#DECIMAL-PLACES-IN-NUMBER-FORMATTING)
NAME
----
Locale::Maketext::Cookbook - recipes for using Locale::Maketext
INTRODUCTION
------------
This is a work in progress. Not much progress by now :-)
ONESIDED LEXICONS
------------------
*Adapted from a suggestion by Dan Muey*
It may be common (for example, in your main lexicon) that the hash keys and values coincide, like this:
```
q{Hello, tell me your name}
=> q{Hello, tell me your name}
```
It would be nice to just write:
```
q{Hello, tell me your name} => ''
```
and have this magically inflated to the first form. Among the advantages of such a representation: it leads to smaller files that are less prone to mistyping or mispasting, and it is handy for translators, who can simply copy the main lexicon and enter their translations instead of having to remove the values first.
That can be achieved by overriding `init` in your class and working on the main lexicon with code like that:
```
package My::I18N;
...
sub init {
my $lh = shift; # a newborn handle
$lh->SUPER::init();
inflate_lexicon(\%My::I18N::en::Lexicon);
return;
}
sub inflate_lexicon {
my $lex = shift;
while (my ($k, $v) = each %$lex) {
# $v is only a copy of the value, so assign into the hash itself
$lex->{$k} = $k if !defined $v || $v eq '';
}
}
```
Here we are assuming `My::I18N::en` to own the main lexicon.
There are some downsides here: the size economy will not hold at runtime after this `init()` runs. But that should not be critical, since if you don't have room for the inflated main lexicon, you won't have room for any other language besides the main one either. You could also do this with ties, expanding the value at lookup time, though that option is likely to cost more in execution time.
DECIMAL PLACES IN NUMBER FORMATTING
------------------------------------
*After CPAN RT #36136 (<https://rt.cpan.org/Ticket/Display.html?id=36136>)*
The documentation of <Locale::Maketext> advises that the standard bracket method `numf` is limited and that you must override that for better results. It even suggests the use of <Number::Format>.
One such defect of standard `numf` is to not be able to use a certain decimal precision. For example,
```
$lh->maketext('pi is [numf,_1]', 355/113);
```
outputs
```
pi is 3.14159292035398
```
Since pi ≈ 355/113 is only accurate to 6 decimal places, you would want to say:
```
$lh->maketext('pi is [numf,_1,6]', 355/113);
```
and get "pi is 3.141592".
One solution for that could use `Number::Format` like that:
```
package Wuu;
use base qw(Locale::Maketext);
use Number::Format;
# can be overridden according to language conventions
sub _numf_params {
return (
-thousands_sep => '.',
-decimal_point => ',',
-decimal_digits => 2,
);
}
# builds a Number::Format
sub _numf_formatter {
my ($lh, $scale) = @_;
my @params = $lh->_numf_params;
if ($scale) { # use explicit scale rather than default
push @params, (-decimal_digits => $scale);
}
return Number::Format->new(@params);
}
sub numf {
my ($lh, $n, $scale) = @_;
# get the (cached) formatter
my $nf = $lh->{__nf}{$scale} ||= $lh->_numf_formatter($scale);
# format the number itself
return $nf->format_number($n);
}
package Wuu::pt;
use base qw(Wuu);
```
and then
```
my $lh = Wuu->get_handle('pt');
$lh->maketext('A [numf,_1,3] km de distância', 1550.2222);
```
would return "A 1.550,222 km de distância".
Notice that the standard utility methods of `Locale::Maketext` are irremediably limited because they could not aim to do everything that could be expected from them in different languages, cultures and applications. So extending `numf`, `quant`, and `sprintf` is natural as soon as your needs exceed what the standard ones do.
perl perlapio perlapio
========
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [Co-existence with stdio](#Co-existence-with-stdio)
+ ["Fast gets" Functions](#%22Fast-gets%22-Functions)
+ [Other Functions](#Other-Functions)
NAME
----
perlapio - perl's IO abstraction interface.
SYNOPSIS
--------
```
#define PERLIO_NOT_STDIO 0 /* For co-existence with stdio only */
#include <perlio.h> /* Usually via #include <perl.h> */
PerlIO *PerlIO_stdin(void);
PerlIO *PerlIO_stdout(void);
PerlIO *PerlIO_stderr(void);
PerlIO *PerlIO_open(const char *path,const char *mode);
PerlIO *PerlIO_fdopen(int fd, const char *mode);
PerlIO *PerlIO_reopen(const char *path, /* deprecated */
const char *mode, PerlIO *old);
int PerlIO_close(PerlIO *f);
int PerlIO_stdoutf(const char *fmt,...)
int PerlIO_puts(PerlIO *f,const char *string);
int PerlIO_putc(PerlIO *f,int ch);
SSize_t PerlIO_write(PerlIO *f,const void *buf,size_t numbytes);
int PerlIO_printf(PerlIO *f, const char *fmt,...);
int PerlIO_vprintf(PerlIO *f, const char *fmt, va_list args);
int PerlIO_flush(PerlIO *f);
int PerlIO_fill(PerlIO *f);
int PerlIO_eof(PerlIO *f);
int PerlIO_error(PerlIO *f);
void PerlIO_clearerr(PerlIO *f);
int PerlIO_getc(PerlIO *d);
int PerlIO_ungetc(PerlIO *f,int ch);
SSize_t PerlIO_read(PerlIO *f, void *buf, size_t numbytes);
Size_t PerlIO_unread(PerlIO *f,const void *vbuf, size_t count);
int PerlIO_fileno(PerlIO *f);
void PerlIO_setlinebuf(PerlIO *f);
Off_t PerlIO_tell(PerlIO *f);
int PerlIO_seek(PerlIO *f, Off_t offset, int whence);
void PerlIO_rewind(PerlIO *f);
int PerlIO_getpos(PerlIO *f, SV *save); /* prototype changed */
int PerlIO_setpos(PerlIO *f, SV *saved); /* prototype changed */
int PerlIO_fast_gets(PerlIO *f);
int PerlIO_has_cntptr(PerlIO *f);
SSize_t PerlIO_get_cnt(PerlIO *f);
char *PerlIO_get_ptr(PerlIO *f);
void PerlIO_set_ptrcnt(PerlIO *f, char *ptr, SSize_t count);
int PerlIO_canset_cnt(PerlIO *f); /* deprecated */
void PerlIO_set_cnt(PerlIO *f, int count); /* deprecated */
int PerlIO_has_base(PerlIO *f);
char *PerlIO_get_base(PerlIO *f);
SSize_t PerlIO_get_bufsiz(PerlIO *f);
PerlIO *PerlIO_importFILE(FILE *stdio, const char *mode);
FILE *PerlIO_exportFILE(PerlIO *f, const char *mode);
FILE *PerlIO_findFILE(PerlIO *f);
void PerlIO_releaseFILE(PerlIO *f,FILE *stdio);
int PerlIO_apply_layers(pTHX_ PerlIO *f, const char *mode,
const char *layers);
int PerlIO_binmode(pTHX_ PerlIO *f, int ptype, int imode,
const char *layers);
void PerlIO_debug(const char *fmt,...);
```
DESCRIPTION
-----------
Perl's source code, and extensions that want maximum portability, should use the above functions instead of those defined in ANSI C's *stdio.h*. The perl headers (in particular "perlio.h") will `#define` them to the I/O mechanism selected at Configure time.
The functions are modeled on those in *stdio.h*, but parameter order has been "tidied up a little".
`PerlIO *` takes the place of FILE \*. Like FILE \* it should be treated as opaque (it is probably safe to assume it is a pointer to something).
There are currently two implementations:
1. USE\_STDIO All above are #define'd to stdio functions or are trivial wrapper functions which call stdio. In this case *only* PerlIO \* is a FILE \*. This has been the default implementation since the abstraction was introduced in perl5.003\_02.
2. USE\_PERLIO Introduced just after perl5.7.0, this is a re-implementation of the above abstraction which allows perl more control over how IO is done as it decouples IO from the way the operating system and C library choose to do things. For USE\_PERLIO PerlIO \* has an extra layer of indirection - it is a pointer-to-a-pointer. This allows the PerlIO \* to remain with a known value while swapping the implementation around underneath *at run time*. In this case all the above are true (but very simple) functions which call the underlying implementation.
This is the only implementation for which `PerlIO_apply_layers()` does anything "interesting".
The USE\_PERLIO implementation is described in <perliol>.
Because "perlio.h" is a thin layer (for efficiency) the semantics of these functions are somewhat dependent on the underlying implementation. Where these variations are understood they are noted below.
Unless otherwise noted, functions return 0 on success, or a negative value (usually `EOF` which is usually -1) and set `errno` on error.
**PerlIO\_stdin()**, **PerlIO\_stdout()**, **PerlIO\_stderr()**
Use these rather than `stdin`, `stdout`, `stderr`. They are written to look like "function calls" rather than variables because this makes it easier to *make them* function calls if a platform cannot export data to loaded modules, or if (say) different "threads" might have different values.
**PerlIO\_open(path, mode)**, **PerlIO\_fdopen(fd,mode)**
These correspond to fopen()/fdopen() and the arguments are the same. Return `NULL` and set `errno` if there is an error. There may be an implementation limit on the number of open handles, which may be lower than the limit on the number of open files - `errno` may not be set when `NULL` is returned if this limit is exceeded.
**PerlIO\_reopen(path,mode,f)**
While this currently exists in both implementations, perl itself does not use it. *As perl does not use it, it is not well tested.*
Perl prefers to `dup` the new low-level descriptor to the descriptor used by the existing PerlIO. This may become the behaviour of this function in the future.
**PerlIO\_printf(f,fmt,...)**, **PerlIO\_vprintf(f,fmt,a)**
These are fprintf()/vfprintf() equivalents.
**PerlIO\_stdoutf(fmt,...)**
This is the printf() equivalent. printf is #defined to this function, so it is (currently) legal to use `printf(fmt,...)` in perl sources.
**PerlIO\_read(f,buf,count)**, **PerlIO\_write(f,buf,count)**
These correspond functionally to fread() and fwrite() but the arguments and return values are different. The PerlIO\_read() and PerlIO\_write() signatures have been modeled on the more sane low level read() and write() functions instead: The "file" argument is passed first, there is only one "count", and the return value can distinguish between error and `EOF`.
Returns a byte count if successful (which may be zero or positive), returns negative value and sets `errno` on error. Depending on implementation `errno` may be `EINTR` if operation was interrupted by a signal.
**PerlIO\_fill(f)**
Fills the buffer associated with `f` with data from the layer below. `PerlIO_read` calls this as part of its normal operation. Returns 0 upon success; -1 on failure.
**PerlIO\_close(f)**
Depending on implementation `errno` may be `EINTR` if operation was interrupted by a signal.
**PerlIO\_puts(f,s)**, **PerlIO\_putc(f,c)**
These correspond to fputs() and fputc(). Note that arguments have been revised to have "file" first.
**PerlIO\_ungetc(f,c)**
This corresponds to ungetc(). Note that arguments have been revised to have "file" first. Arranges that next read operation will return the byte **c**. Despite the implied "character" in the name only values in the range 0..0xFF are defined. Returns the byte **c** on success or -1 (`EOF`) on error. The number of bytes that can be "pushed back" may vary, only 1 character is certain, and then only if it is the last character that was read from the handle.
**PerlIO\_unread(f,buf,count)**
This allows one to unget more than a single byte. It effectively unshifts the `count` bytes in `buf` onto the beginning of the stream's input buffer, so that the next read operation(s) will return them before anything else that was in the buffer.
Returns the number of unread bytes.
**PerlIO\_getc(f)**
This corresponds to getc(). Despite the c in the name only byte range 0..0xFF is supported. Returns the character read or -1 (`EOF`) on error.
**PerlIO\_eof(f)**
This corresponds to feof(). Returns a true/false indication of whether the handle is at end of file. For terminal devices this may or may not be "sticky" depending on the implementation. The flag is cleared by PerlIO\_seek(), or PerlIO\_rewind().
**PerlIO\_error(f)**
This corresponds to ferror(). Returns a true/false indication of whether there has been an IO error on the handle.
**PerlIO\_fileno(f)**
This corresponds to fileno(); note that on some platforms the meaning of "fileno" may not match Unix. Returns -1 if the handle has no open descriptor associated with it.
**PerlIO\_clearerr(f)**
This corresponds to clearerr(), i.e., clears 'error' and (usually) 'eof' flags for the "stream". Does not return a value.
**PerlIO\_flush(f)**
This corresponds to fflush(). Sends any buffered write data to the underlying file. If called with `NULL` this may flush all open streams (or core dump with some USE\_STDIO implementations). Calling on a handle open for read only, or on which last operation was a read of some kind may lead to undefined behaviour on some USE\_STDIO implementations. The USE\_PERLIO (layers) implementation tries to behave better: it flushes all open streams when passed `NULL`, and attempts to retain data on read streams either in the buffer or by seeking the handle to the current logical position.
**PerlIO\_seek(f,offset,whence)**
This corresponds to fseek(). Sends buffered write data to the underlying file, or discards any buffered read data, then positions the file descriptor as specified by **offset** and **whence** (sic). This is the correct thing to do when switching between read and write on the same handle (see issues with PerlIO\_flush() above). Offset is of type `Off_t` which is a perl Configure value which may not be same as stdio's `off_t`.
**PerlIO\_tell(f)**
This corresponds to ftell(). Returns the current file position, or (Off\_t) -1 on error. May just return value system "knows" without making a system call or checking the underlying file descriptor (so use on shared file descriptors is not safe without a PerlIO\_seek()). Return value is of type `Off_t` which is a perl Configure value which may not be same as stdio's `off_t`.
**PerlIO\_getpos(f,p)**, **PerlIO\_setpos(f,p)**
These correspond (loosely) to fgetpos() and fsetpos(). Rather than stdio's Fpos\_t they expect a "Perl Scalar Value" to be passed. What is stored there should be considered opaque. The layout of the data may vary from handle to handle. When not using stdio or if platform does not have the stdio calls then they are implemented in terms of PerlIO\_tell() and PerlIO\_seek().
**PerlIO\_rewind(f)**
This corresponds to rewind(). It is usually defined as being
```
PerlIO_seek(f,(Off_t)0L, SEEK_SET);
PerlIO_clearerr(f);
```
**PerlIO\_tmpfile()**
This corresponds to tmpfile(), i.e., returns an anonymous PerlIO or NULL on error. The system will attempt to automatically delete the file when closed. On Unix the file is usually `unlink`-ed just after it is created so it does not matter how it gets closed. On other systems the file may only be deleted if closed via PerlIO\_close() and/or the program exits via `exit`. Depending on the implementation there may be "race conditions" which allow other processes access to the file, though in general it will be safer in this regard than ad hoc schemes.
**PerlIO\_setlinebuf(f)**
This corresponds to setlinebuf(). Does not return a value. What constitutes a "line" is implementation dependent but usually means that writing "\n" flushes the buffer. What happens with things like "this\nthat" is uncertain. (Perl core uses it *only* when "dumping"; it has nothing to do with $| auto-flush.)
###
Co-existence with stdio
There is outline support for co-existence of PerlIO with stdio. Obviously if PerlIO is implemented in terms of stdio there is no problem. In other cases, however, mechanisms must exist to create a FILE \* which can be passed to library code that is going to use stdio calls.
The first step is to add this line:
```
#define PERLIO_NOT_STDIO 0
```
*before* including any perl header files. (This will probably become the default at some point). That prevents "perlio.h" from attempting to #define stdio functions onto PerlIO functions.
XS code is probably better off using the "typemap" if it expects FILE \* arguments. The standard typemap will be adjusted to comprehend any changes in this area.
**PerlIO\_importFILE(f,mode)**
Used to get a PerlIO \* from a FILE \*.
The mode argument should be a string as would be passed to fopen/PerlIO\_open. If it is NULL then - for legacy support - the code will (depending upon the platform and the implementation) either attempt to empirically determine the mode in which *f* is open, or use "r+" to indicate a read/write stream.
Once called the FILE \* should *ONLY* be closed by calling `PerlIO_close()` on the returned PerlIO \*.
The PerlIO is set to textmode. Use PerlIO\_binmode if this is not the desired mode.
This is **not** the reverse of PerlIO\_exportFILE().
**PerlIO\_exportFILE(f,mode)**
Given a PerlIO \* create a 'native' FILE \* suitable for passing to code expecting to be compiled and linked with ANSI C *stdio.h*. The mode argument should be a string as would be passed to fopen/PerlIO\_open. If it is NULL then - for legacy support - the FILE \* is opened in same mode as the PerlIO \*.
The fact that such a FILE \* has been 'exported' is recorded, (normally by pushing a new :stdio "layer" onto the PerlIO \*), which may affect future PerlIO operations on the original PerlIO \*. You should not call `fclose()` on the file unless you call `PerlIO_releaseFILE()` to disassociate it from the PerlIO \*. (Do not use PerlIO\_importFILE() for doing the disassociation.)
Calling this function repeatedly will create a FILE \* on each call (and will push an :stdio layer each time as well).
**PerlIO\_releaseFILE(p,f)**
Calling PerlIO\_releaseFILE informs PerlIO that all use of FILE \* is complete. It is removed from the list of 'exported' FILE \*s, and the associated PerlIO \* should revert to its original behaviour.
Use this to disassociate a file from a PerlIO \* that was associated using PerlIO\_exportFILE().
**PerlIO\_findFILE(f)**
Returns a native FILE \* used by a stdio layer. If there is none, it will create one with PerlIO\_exportFILE. In either case the FILE \* should be considered as belonging to PerlIO subsystem and should only be closed by calling `PerlIO_close()`.
###
"Fast gets" Functions
In addition to the standard-like API defined so far, there is an "implementation" interface which allows perl to get at the internals of PerlIO. The following calls correspond to the various FILE\_xxx macros determined by Configure - or their equivalent in other implementations. This section is really of interest only to those concerned with detailed perl-core behaviour, implementing a PerlIO mapping, or writing code which can make use of the "read ahead" that has been done by the IO system in the same way perl does. Note that any code that uses these interfaces must be prepared to do things the traditional way if a handle does not support them.
**PerlIO\_fast\_gets(f)**
Returns true if implementation has all the interfaces required to allow perl's `sv_gets` to "bypass" normal IO mechanism. This can vary from handle to handle.
```
PerlIO_fast_gets(f) = PerlIO_has_cntptr(f) && \
PerlIO_canset_cnt(f) && \
'Can set pointer into buffer'
```
**PerlIO\_has\_cntptr(f)**
Implementation can return pointer to current position in the "buffer" and a count of bytes available in the buffer. Do not use this - use PerlIO\_fast\_gets.
**PerlIO\_get\_cnt(f)**
Return count of readable bytes in the buffer. Zero or negative return means no more bytes available.
**PerlIO\_get\_ptr(f)**
Return pointer to next readable byte in buffer; accessing via the pointer (dereferencing) is only safe if PerlIO\_get\_cnt() has returned a positive value. Only positive offsets up to the value returned by PerlIO\_get\_cnt() are allowed.
**PerlIO\_set\_ptrcnt(f,p,c)**
Set pointer into buffer, and a count of bytes still in the buffer. Should be used only to set pointer to within range implied by previous calls to `PerlIO_get_ptr` and `PerlIO_get_cnt`. The two values *must* be consistent with each other (implementation may only use one or the other or may require both).
**PerlIO\_canset\_cnt(f)**
Implementation can adjust its idea of number of bytes in the buffer. Do not use this - use PerlIO\_fast\_gets.
**PerlIO\_set\_cnt(f,c)**
Obscure - set count of bytes in the buffer. Deprecated. Only usable if PerlIO\_canset\_cnt() returns true. Currently used in only doio.c to force count less than -1 to -1. Perhaps should be PerlIO\_set\_empty or similar. This call may actually do nothing if "count" is deduced from pointer and a "limit". Do not use this - use PerlIO\_set\_ptrcnt().
**PerlIO\_has\_base(f)**
Returns true if implementation has a buffer, and can return pointer to whole buffer and its size. Used by perl for **-T** / **-B** tests. Other uses would be very obscure...
**PerlIO\_get\_base(f)**
Return *start* of buffer. Access only positive offsets in the buffer up to the value returned by PerlIO\_get\_bufsiz().
**PerlIO\_get\_bufsiz(f)**
Return the *total number of bytes* in the buffer, this is neither the number that can be read, nor the amount of memory allocated to the buffer. Rather it is what the operating system and/or implementation happened to `read()` (or whatever) last time IO was requested.
###
Other Functions
**PerlIO\_apply\_layers(aTHX\_ f,mode,layers)**
The new interface to the USE\_PERLIO implementation. The layers ":crlf" and ":raw" are the only ones allowed for other implementations and those are silently ignored. (As of perl5.8 ":raw" is deprecated.) Use PerlIO\_binmode() below for the portable case.
**PerlIO\_binmode(aTHX\_ f,ptype,imode,layers)**
The hook used by perl's `binmode` operator. **ptype** is perl's character for the kind of IO: `'<'` for read, `'>'` for write, and `'+'` for read/write. **imode** is `O_BINARY` or `O_TEXT`.
**layers** is a string of layers to apply; only ":crlf" makes sense in the non-USE\_PERLIO case. (As of perl5.8 ":raw" is deprecated in favour of passing NULL.)
Portable cases are:
```
PerlIO_binmode(aTHX_ f,ptype,O_BINARY,NULL);
and
PerlIO_binmode(aTHX_ f,ptype,O_TEXT,":crlf");
```
On Unix these calls probably have no effect whatsoever. Elsewhere they alter "\n" to CR,LF translation and possibly cause a special text "end of file" indicator to be written or honoured on read. The effect of making the call after doing any IO to the handle depends on the implementation. (It may be ignored, affect any data which is already buffered as well, or only apply to subsequent data.)
**PerlIO\_debug(fmt,...)**
PerlIO\_debug is a printf()-like function which can be used for debugging. No return value. Its main use is inside PerlIO where using real printf, warn() etc. would recursively call PerlIO and be a problem.
PerlIO\_debug writes to the file named by $ENV{'PERLIO\_DEBUG'} or defaults to stderr if the environment variable is not defined. Typical use might be
```
Bourne shells (sh, ksh, bash, zsh, ash, ...):
PERLIO_DEBUG=/tmp/perliodebug.log ./perl -Di somescript some args
Csh/Tcsh:
setenv PERLIO_DEBUG /tmp/perliodebug.log
./perl -Di somescript some args
If you have the "env" utility:
env PERLIO_DEBUG=/tmp/perliodebug.log ./perl -Di somescript args
Win32:
set PERLIO_DEBUG=perliodebug.log
perl -Di somescript some args
```
On a Perl built without `-DDEBUGGING`, or when the `-Di` command-line switch is not specified, or under taint, PerlIO\_debug() is a no-op.
| programming_docs |
perl ExtUtils::PL2Bat ExtUtils::PL2Bat
================
CONTENTS
--------
* [NAME](#NAME)
* [VERSION](#VERSION)
* [OVERVIEW](#OVERVIEW)
* [FUNCTIONS](#FUNCTIONS)
+ [pl2bat(%opts)](#pl2bat(%25opts))
* [ACKNOWLEDGEMENTS](#ACKNOWLEDGEMENTS)
* [AUTHOR](#AUTHOR)
* [COPYRIGHT AND LICENSE](#COPYRIGHT-AND-LICENSE)
NAME
----
ExtUtils::PL2Bat - Batch file creation to run perl scripts on Windows
VERSION
-------
version 0.004
OVERVIEW
--------
This module converts a perl script into a batch file that can be executed on Windows/DOS-like operating systems. This is intended to let you use a Perl script like regular programs and batch files: you just enter the name of the script [probably minus the extension] plus any command-line arguments, and the script is found in your **PATH** and run.
FUNCTIONS
---------
###
pl2bat(%opts)
This function takes a perl script and writes a batch file that contains the script. It accepts the following options (an example call follows the list):
* `in`
The name of the script that is to be batchified. This argument is mandatory.
* `out`
The name of the output batch file. If not given, it will be generated using `in` and `stripsuffix`.
* `ntargs`
Arguments to invoke perl with in generated batch file when run from Windows NT. Defaults to '-x -S %0 %\*'.
* `otherargs`
Arguments to invoke perl with in generated batch file except when run from Windows NT (ie. when run from DOS, Windows 3.1, or Windows 95). Defaults to '-x -S "%0" %1 %2 %3 %4 %5 %6 %7 %8 %9'.
* `stripsuffix`
Strip a suffix string from the file name before appending the ".bat" suffix. The suffix is not case-sensitive. It can be a regex or a string (a trailing `$` is always assumed). Defaults to `qr/\.plx?/`.
* `usewarnings`
With the `usewarnings` option, `" -w"` is added after the value of `$Config{startperl}`. If a line matching `/^#!.*perl/` already exists in the script, then it is not changed and the **-w** option is ignored.
* `update`
If the script appears to have already been processed by **pl2bat**, then the script is skipped and not processed unless `update` was specified. If `update` is specified, the existing preamble is replaced.
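A minimal sketch of calling it (the script names are hypothetical; only `in` is required):
```
use ExtUtils::PL2Bat;
# wrap myscript.pl in a batch-file preamble and write myscript.bat
ExtUtils::PL2Bat::pl2bat(
    in  => 'myscript.pl',
    out => 'myscript.bat',   # optional; derived from 'in' when omitted
);
```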
ACKNOWLEDGEMENTS
----------------
This code was taken from Module::Build and then modified; Module::Build had in turn taken it from perl's pl2bat script. This module is an attempt at unifying all three implementations.
AUTHOR
------
Leon Timmermans <[email protected]>
COPYRIGHT AND LICENSE
----------------------
This software is copyright (c) 2015 by Leon Timmermans.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
perl ExtUtils::MM_MacOS ExtUtils::MM\_MacOS
===================
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
NAME
----
ExtUtils::MM\_MacOS - once produced Makefiles for MacOS Classic
SYNOPSIS
--------
```
# MM_MacOS no longer contains any code. This is just a stub.
```
DESCRIPTION
-----------
Once upon a time, MakeMaker could produce an approximation of a correct Makefile on MacOS Classic (MacPerl). Due to a lack of maintainers, this fell out of sync with the rest of MakeMaker and hadn't worked in years. Since there's little chance of it being repaired, MacOS Classic is fading away, and the code was icky to begin with, the code has been deleted to make maintenance easier.
Anyone interested in resurrecting this file should pull the old version from the MakeMaker CVS repository and contact [email protected].
perl overload overload
========
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [Fundamentals](#Fundamentals)
- [Declaration](#Declaration)
- [Calling Conventions and Magic Autogeneration](#Calling-Conventions-and-Magic-Autogeneration)
- [Mathemagic, Mutators, and Copy Constructors](#Mathemagic,-Mutators,-and-Copy-Constructors)
+ [Overloadable Operations](#Overloadable-Operations)
+ [Magic Autogeneration](#Magic-Autogeneration)
- [Minimal Set of Overloaded Operations](#Minimal-Set-of-Overloaded-Operations)
+ [Special Keys for use overload](#Special-Keys-for-use-overload)
- [nomethod](#nomethod)
- [fallback](#fallback)
- [Copy Constructor](#Copy-Constructor)
+ [How Perl Chooses an Operator Implementation](#How-Perl-Chooses-an-Operator-Implementation)
+ [Losing Overloading](#Losing-Overloading)
+ [Inheritance and Overloading](#Inheritance-and-Overloading)
+ [Run-time Overloading](#Run-time-Overloading)
+ [Public Functions](#Public-Functions)
+ [Overloading Constants](#Overloading-Constants)
* [IMPLEMENTATION](#IMPLEMENTATION)
* [COOKBOOK](#COOKBOOK)
+ [Two-face Scalars](#Two-face-Scalars)
+ [Two-face References](#Two-face-References)
+ [Symbolic Calculator](#Symbolic-Calculator)
+ [Really Symbolic Calculator](#Really-Symbolic-Calculator)
* [AUTHOR](#AUTHOR)
* [SEE ALSO](#SEE-ALSO)
* [DIAGNOSTICS](#DIAGNOSTICS)
* [BUGS AND PITFALLS](#BUGS-AND-PITFALLS)
NAME
----
overload - Package for overloading Perl operations
SYNOPSIS
--------
```
package SomeThing;
use overload
'+' => \&myadd,
'-' => \&mysub;
# etc
...
package main;
$a = SomeThing->new( 57 );
$b = 5 + $a;
...
if (overload::Overloaded $b) {...}
...
$strval = overload::StrVal $b;
```
DESCRIPTION
-----------
This pragma allows overloading of Perl's operators for a class. To overload built-in functions, see ["Overriding Built-in Functions" in perlsub](perlsub#Overriding-Built-in-Functions) instead.
### Fundamentals
#### Declaration
Arguments of the `use overload` directive are (key, value) pairs. For the full set of legal keys, see ["Overloadable Operations"](#Overloadable-Operations) below.
Operator implementations (the values) can be subroutines, references to subroutines, or anonymous subroutines - in other words, anything legal inside a `&{ ... }` call. Values specified as strings are interpreted as method names. Thus
```
package Number;
use overload
"-" => "minus",
"*=" => \&muas,
'""' => sub { ...; };
```
declares that subtraction is to be implemented by method `minus()` in the class `Number` (or one of its base classes), and that the function `Number::muas()` is to be used for the assignment form of multiplication, `*=`. It also defines an anonymous subroutine to implement stringification: this is called whenever an object blessed into the package `Number` is used in a string context (this subroutine might, for example, return the number as a Roman numeral).
####
Calling Conventions and Magic Autogeneration
The following sample implementation of `minus()` (which assumes that `Number` objects are simply blessed references to scalars) illustrates the calling conventions:
```
package Number;
sub minus {
my ($self, $other, $swap) = @_;
my $result = $$self - $other; # *
$result = -$result if $swap;
ref $result ? $result : bless \$result;
}
# * may recurse once - see table below
```
Three arguments are passed to all subroutines specified in the `use overload` directive (with exceptions - see below, particularly ["nomethod"](#nomethod)).
The first of these is the operand providing the overloaded operator implementation - in this case, the object whose `minus()` method is being called.
The second argument is the other operand, or `undef` in the case of a unary operator.
The third argument is set to TRUE if (and only if) the two operands have been swapped. Perl may do this to ensure that the first argument (`$self`) is an object implementing the overloaded operation, in line with general object calling conventions. For example, if `$x` and `$y` are `Number`s:
```
operation | generates a call to
============|======================
$x - $y | minus($x, $y, '')
$x - 7 | minus($x, 7, '')
7 - $x | minus($x, 7, 1)
```
Perl may also use `minus()` to implement other operators which have not been specified in the `use overload` directive, according to the rules for ["Magic Autogeneration"](#Magic-Autogeneration) described later. For example, the `use overload` above declared no subroutine for any of the operators `--`, `neg` (the overload key for unary minus), or `-=`. Thus
```
operation | generates a call to
============|======================
-$x | minus($x, 0, 1)
$x-- | minus($x, 1, undef)
$x -= 3 | minus($x, 3, undef)
```
Note the `undef`s: where autogeneration results in the method for a standard operator which does not change either of its operands, such as `-`, being used to implement an operator which changes the operand ("mutators": here, `--` and `-=`), Perl passes undef as the third argument. This still evaluates as FALSE, consistent with the fact that the operands have not been swapped, but gives the subroutine a chance to alter its behaviour in these cases.
In all the above examples, `minus()` is required only to return the result of the subtraction: Perl takes care of the assignment to $x. In fact, such methods should *not* modify their operands, even if `undef` is passed as the third argument (see ["Overloadable Operations"](#Overloadable-Operations)).
The same is not true of implementations of `++` and `--`: these are expected to modify their operand. An appropriate implementation of `--` might look like
```
use overload '--' => "decr",
# ...
sub decr { --${$_[0]}; }
```
If the "bitwise" feature is enabled (see <feature>), a fifth TRUE argument is passed to subroutines handling `&`, `|`, `^` and `~`. This indicates that the caller is expecting numeric behaviour. The fourth argument will be `undef`, as that position (`$_[3]`) is reserved for use by ["nomethod"](#nomethod).
####
Mathemagic, Mutators, and Copy Constructors
The term 'mathemagic' describes the overloaded implementation of mathematical operators. Mathemagical operations raise an issue. Consider the code:
```
$a = $b;
--$a;
```
If `$a` and `$b` are scalars then after these statements
```
$a == $b - 1
```
An object, however, is a reference to blessed data, so if `$a` and `$b` are objects then the assignment `$a = $b` copies only the reference, leaving `$a` and `$b` referring to the same object data. One might therefore expect the operation `--$a` to decrement `$b` as well as `$a`. However, this would not be consistent with how we expect the mathematical operators to work.
Perl resolves this dilemma by transparently calling a copy constructor before calling a method defined to implement a mutator (`--`, `+=`, and so on.). In the above example, when Perl reaches the decrement statement, it makes a copy of the object data in `$a` and assigns to `$a` a reference to the copied data. Only then does it call `decr()`, which alters the copied data, leaving `$b` unchanged. Thus the object metaphor is preserved as far as possible, while mathemagical operations still work according to the arithmetic metaphor.
Note: the preceding paragraph describes what happens when Perl autogenerates the copy constructor for an object based on a scalar. For other cases, see ["Copy Constructor"](#Copy-Constructor).
###
Overloadable Operations
The complete list of keys that can be specified in the `use overload` directive are given, separated by spaces, in the values of the hash `%overload::ops`:
```
with_assign => '+ - * / % ** << >> x .',
assign => '+= -= *= /= %= **= <<= >>= x= .=',
num_comparison => '< <= > >= == !=',
'3way_comparison'=> '<=> cmp',
str_comparison => 'lt le gt ge eq ne',
binary => '& &= | |= ^ ^= &. &.= |. |.= ^. ^.=',
unary => 'neg ! ~ ~.',
mutators => '++ --',
func => 'atan2 cos sin exp abs log sqrt int',
conversion => 'bool "" 0+ qr',
iterators => '<>',
filetest => '-X',
dereferencing => '${} @{} %{} &{} *{}',
matching => '~~',
special => 'nomethod fallback ='
```
Most of the overloadable operators map one-to-one to these keys. Exceptions, including additional overloadable operations not apparent from this hash, are included in the notes which follow. This list is subject to growth over time.
A warning is issued if an attempt is made to register an operator not found above.
* `not`
The operator `not` is not a valid key for `use overload`. However, if the operator `!` is overloaded then the same implementation will be used for `not` (since the two operators differ only in precedence).
* `neg`
The key `neg` is used for unary minus to disambiguate it from binary `-`.
* `++`, `--`
Assuming they are to behave analogously to Perl's `++` and `--`, overloaded implementations of these operators are required to mutate their operands.
No distinction is made between prefix and postfix forms of the increment and decrement operators: these differ only in the point at which Perl calls the associated subroutine when evaluating an expression.
* *Assignments*
```
+= -= *= /= %= **= <<= >>= x= .=
&= |= ^= &.= |.= ^.=
```
Simple assignment is not overloadable (the `'='` key is used for the ["Copy Constructor"](#Copy-Constructor)). Perl does have a way to make assignments to an object do whatever you want, but this involves using tie(), not overload - see ["tie" in perlfunc](perlfunc#tie) and the ["COOKBOOK"](#COOKBOOK) examples below.
The subroutine for the assignment variant of an operator is required only to return the result of the operation. It is permitted to change the value of its operand (this is safe because Perl calls the copy constructor first), but this is optional since Perl assigns the returned value to the left-hand operand anyway.
An object that overloads an assignment operator does so only in respect of assignments to that object. In other words, Perl never calls the corresponding methods with the third argument (the "swap" argument) set to TRUE. For example, the operation
```
$a *= $b
```
cannot lead to `$b`'s implementation of `*=` being called, even if `$a` is a scalar. (It can, however, generate a call to `$b`'s method for `*`).
* *Non-mutators with a mutator variant*
```
+ - * / % ** << >> x .
& | ^ &. |. ^.
```
As described [above](#Calling-Conventions-and-Magic-Autogeneration), Perl may call methods for operators like `+` and `&` in the course of implementing missing operations like `++`, `+=`, and `&=`. While these methods may detect this usage by testing the definedness of the third argument, they should in all cases avoid changing their operands. This is because Perl does not call the copy constructor before invoking these methods.
* `int`
Traditionally, the Perl function `int` rounds to 0 (see ["int" in perlfunc](perlfunc#int)), and so for floating-point-like types one should follow the same semantic.
* *String, numeric, boolean, and regexp conversions*
```
"" 0+ bool
```
These conversions are invoked according to context as necessary. For example, the subroutine for `'""'` (stringify) may be used where the overloaded object is passed as an argument to `print`, and that for `'bool'` where it is tested in the condition of a flow control statement (like `while`) or the ternary `?:` operation.
Of course, in contexts like, for example, `$obj + 1`, Perl will invoke `$obj`'s implementation of `+` rather than (in this example) converting `$obj` to a number using the numify method `'0+'` (an exception to this is when no method has been provided for `'+'` and ["fallback"](#fallback) is set to TRUE).
The subroutines for `'""'`, `'0+'`, and `'bool'` can return any arbitrary Perl value. If the corresponding operation for this value is overloaded too, the operation will be called again with this value.
As a special case if the overload returns the object itself then it will be used directly. An overloaded conversion returning the object is probably a bug, because you're likely to get something that looks like `YourPackage=HASH(0x8172b34)`.
```
qr
```
The subroutine for `'qr'` is used wherever the object is interpolated into or used as a regexp, including when it appears on the RHS of a `=~` or `!~` operator.
`qr` must return a compiled regexp, or a ref to a compiled regexp (such as `qr//` returns), and any further overloading on the return value will be ignored. (A short sketch of `qr` overloading follows this list of operations.)
* *Iteration*
If `<>` is overloaded then the same implementation is used for both the *read-filehandle* syntax `<$var>` and *globbing* syntax `<${var}>`.
* *File tests*
The key `'-X'` is used to specify a subroutine to handle all the filetest operators (`-f`, `-x`, and so on: see ["-X" in perlfunc](perlfunc#-X) for the full list); it is not possible to overload any filetest operator individually. To distinguish them, the letter following the '-' is passed as the second argument (that is, in the slot that for binary operators is used to pass the second operand).
Calling an overloaded filetest operator does not affect the stat value associated with the special filehandle `_`. It still refers to the result of the last `stat`, `lstat` or unoverloaded filetest.
This overload was introduced in Perl 5.12.
* *Matching*
The key `"~~"` allows you to override the smart matching logic used by the `~~` operator and the switch construct (`given`/`when`). See ["Switch Statements" in perlsyn](perlsyn#Switch-Statements) and <feature>.
Unusually, the overloaded implementation of the smart match operator does not get full control of the smart match behaviour. In particular, in the following code:
```
package Foo;
use overload '~~' => 'match';
my $obj = Foo->new();
$obj ~~ [ 1,2,3 ];
```
the smart match does *not* invoke the method call like this:
```
$obj->match([1,2,3],0);
```
rather, the smart match distributive rule takes precedence, so $obj is smart matched against each array element in turn until a match is found, so you may see between one and three of these calls instead:
```
$obj->match(1,0);
$obj->match(2,0);
$obj->match(3,0);
```
Consult the match table in ["Smartmatch Operator" in perlop](perlop#Smartmatch-Operator) for details of when overloading is invoked.
* *Dereferencing*
```
${} @{} %{} &{} *{}
```
If these operators are not explicitly overloaded then they work in the normal way, yielding the underlying scalar, array, or whatever stores the object data (or the appropriate error message if the dereference operator doesn't match it). Defining a catch-all `'nomethod'` (see [below](#nomethod)) makes no difference to this as the catch-all function will not be called to implement a missing dereference operator.
If a dereference operator is overloaded then it must return a *reference* of the appropriate type (for example, the subroutine for key `'${}'` should return a reference to a scalar, not a scalar), or another object which overloads the operator: that is, the subroutine only determines what is dereferenced and the actual dereferencing is left to Perl. As a special case, if the subroutine returns the object itself then it will not be called again - avoiding infinite recursion.
* *Special*
```
nomethod fallback =
```
See ["Special Keys for `use overload`"](#Special-Keys-for-use-overload).
###
Magic Autogeneration
If a method for an operation is not found then Perl tries to autogenerate a substitute implementation from the operations that have been defined.
Note: the behaviour described in this section can be disabled by setting `fallback` to FALSE (see ["fallback"](#fallback)).
In the following tables, numbers indicate priority. For example, the table below states that, if no implementation for `'!'` has been defined then Perl will implement it using `'bool'` (that is, by inverting the value returned by the method for `'bool'`); if boolean conversion is also unimplemented then Perl will use `'0+'` or, failing that, `'""'`.
```
operator | can be autogenerated from
|
| 0+ "" bool . x
=========|==========================
0+ | 1 2
"" | 1 2
bool | 1 2
int | 1 2 3
! | 2 3 1
qr | 2 1 3
. | 2 1 3
x | 2 1 3
.= | 3 2 4 1
x= | 3 2 4 1
<> | 2 1 3
-X | 2 1 3
```
Note: The iterator (`'<>'`) and file test (`'-X'`) operators work as normal: if the operand is not a blessed glob or IO reference then it is converted to a string (using the method for `'""'`, `'0+'`, or `'bool'`) to be interpreted as a glob or filename.
```
operator | can be autogenerated from
|
| < <=> neg -= -
=========|==========================
neg | 1
-= | 1
-- | 1 2
abs | a1 a2 b1 b2 [*]
< | 1
<= | 1
> | 1
>= | 1
== | 1
!= | 1
* one from [a1, a2] and one from [b1, b2]
```
Just as numeric comparisons can be autogenerated from the method for `'<=>'`, string comparisons can be autogenerated from that for `'cmp'`:
```
operators | can be autogenerated from
====================|===========================
lt gt le ge eq ne | cmp
```
Similarly, autogeneration for keys `'+='` and `'++'` is analogous to `'-='` and `'--'` above:
```
operator | can be autogenerated from
|
| += +
=========|==========================
+= | 1
++ | 1 2
```
And other assignment variations are analogous to `'+='` and `'-='` (and similar to `'.='` and `'x='` above):
```
operator || *= /= %= **= <<= >>= &= ^= |= &.= ^.= |.=
-------------------||-------------------------------------------
autogenerated from || * / % ** << >> & ^ | &. ^. |.
```
Note also that the copy constructor (key `'='`) may be autogenerated, but only for objects based on scalars. See ["Copy Constructor"](#Copy-Constructor).
####
Minimal Set of Overloaded Operations
Since some operations can be automatically generated from others, there is a minimal set of operations that need to be overloaded in order to have the complete set of overloaded operations at one's disposal. Of course, the autogenerated operations may not do exactly what the user expects. The minimal set is:
```
+ - * / % ** << >> x
<=> cmp
& | ^ ~ &. |. ^. ~.
atan2 cos sin exp log sqrt int
"" 0+ bool
~~
```
Of the conversions, only one of string, boolean or numeric is needed because each can be generated from either of the other two.
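For instance, here is a minimal sketch (the `Version` class and its representation are illustrative assumptions) of how defining only `'<=>'` and `'""'`, with `fallback` enabled, lets Perl autogenerate the numeric comparison operators:
```
package Version;                      # hypothetical example class
use overload
    '<=>'    => \&spaceship,
    '""'     => sub { "v" . $_[0]{n} },
    fallback => 1;
sub new { my ($class, $n) = @_; bless { n => $n }, $class }
sub spaceship {                       # handles object <=> object and object <=> number
    my ($self, $other, $swap) = @_;
    my $r = $self->{n} <=> (ref $other ? $other->{n} : $other);
    return $swap ? -$r : $r;
}

package main;
my @sorted = sort { $a <=> $b } map { Version->new($_) } 3, 1, 2;
print "@sorted\n";                    # v1 v2 v3
print "newer\n" if $sorted[2] > 1;    # '>' autogenerated from '<=>'
```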
###
Special Keys for `use overload`
#### `nomethod`
The `'nomethod'` key is used to specify a catch-all function to be called for any operator that is not individually overloaded. The specified function will be passed four parameters. The first three arguments coincide with those that would have been passed to the corresponding method if it had been defined. The fourth argument is the `use overload` key for that missing method. If the "bitwise" feature is enabled (see <feature>), a fifth TRUE argument is passed to subroutines handling `&`, `|`, `^` and `~` to indicate that the caller is expecting numeric behaviour.
For example, if `$a` is an object blessed into a package declaring
```
use overload 'nomethod' => 'catch_all', # ...
```
then the operation
```
3 + $a
```
could (unless a method is specifically declared for the key `'+'`) result in a call
```
catch_all($a, 3, 1, '+')
```
See ["How Perl Chooses an Operator Implementation"](#How-Perl-Chooses-an-Operator-Implementation).
#### `fallback`
The value assigned to the key `'fallback'` tells Perl how hard it should try to find an alternative way to implement a missing operator.
* defined, but FALSE
```
use overload "fallback" => 0, # ... ;
```
This disables ["Magic Autogeneration"](#Magic-Autogeneration).
* `undef`
In the default case where no value is explicitly assigned to `fallback`, magic autogeneration is enabled.
* TRUE
The same as for `undef`, but if a missing operator cannot be autogenerated then, instead of issuing an error message, Perl is allowed to revert to what it would have done for that operator if there had been no `use overload` directive.
Note: in most cases, particularly the ["Copy Constructor"](#Copy-Constructor), this is unlikely to be appropriate behaviour.
See ["How Perl Chooses an Operator Implementation"](#How-Perl-Chooses-an-Operator-Implementation).
####
Copy Constructor
As mentioned [above](#Mathemagic%2C-Mutators%2C-and-Copy-Constructors), this operation is called when a mutator is applied to a reference that shares its object with some other reference. For example, if `$b` is mathemagical, and `'++'` is overloaded with `'incr'`, and `'='` is overloaded with `'clone'`, then the code
```
$a = $b;
# ... (other code which does not modify $a or $b) ...
++$b;
```
would be executed in a manner equivalent to
```
$a = $b;
# ...
$b = $b->clone(undef, "");
$b->incr(undef, "");
```
Note:
* The subroutine for `'='` does not overload the Perl assignment operator: it is used only to allow mutators to work as described here. (See ["Assignments"](#Assignments) above.)
* As for other operations, the subroutine implementing '=' is passed three arguments, though the last two are always `undef` and `''`.
* The copy constructor is called only before a call to a function declared to implement a mutator, for example, if `++$b;` in the code above is effected via a method declared for key `'++'` (or 'nomethod', passed `'++'` as the fourth argument) or, by autogeneration, `'+='`. It is not called if the increment operation is effected by a call to the method for `'+'` since, in the equivalent code,
```
$a = $b;
$b = $b + 1;
```
the data referred to by `$a` is unchanged by the assignment to `$b` of a reference to new object data.
* The copy constructor is not called if Perl determines that it is unnecessary because there is no other reference to the data being modified.
* If `'fallback'` is undefined or TRUE then a copy constructor can be autogenerated, but only for objects based on scalars. In other cases it needs to be defined explicitly. Where an object's data is stored as, for example, an array of scalars, the following might be appropriate:
```
use overload '=' => sub { bless [ @{$_[0]} ] }, # ...
```
* If `'fallback'` is TRUE and no copy constructor is defined then, for objects not based on scalars, Perl may silently fall back on simple assignment - that is, assignment of the object reference. In effect, this disables the copy constructor mechanism since no new copy of the object data is created. This is almost certainly not what you want. (It is, however, consistent: for example, Perl's fallback for the `++` operator is to increment the reference itself.)
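To make the mechanism concrete, here is a minimal sketch (the `Counter` class is a made-up, array-based example, so its copy constructor must be supplied explicitly):
```
package Counter;                       # hypothetical example class
use overload
    '='  => sub { bless [ @{ $_[0] } ], ref $_[0] },  # explicit copy constructor
    '+=' => sub { $_[0][0] += $_[1]; $_[0] },         # mutator
    '""' => sub { $_[0][0] },
    fallback => 1;
sub new { my ($class, $n) = @_; bless [ $n ], $class }

package main;
my $x = Counter->new(5);
my $y = $x;          # $x and $y now share the same object data
$y += 3;             # Perl calls the '=' subroutine first, so $x is untouched
print "$x $y\n";     # 5 8
```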
###
How Perl Chooses an Operator Implementation
Which is checked first, `nomethod` or `fallback`? If the two operands of an operator are of different types and both overload the operator, which implementation is used? The following are the precedence rules:
1. If the first operand has declared a subroutine to overload the operator then use that implementation.
2. Otherwise, if fallback is TRUE or undefined for the first operand then see if the [rules for autogeneration](#Magic-Autogeneration) allows another of its operators to be used instead.
3. Unless the operator is an assignment (`+=`, `-=`, etc.), repeat step (1) in respect of the second operand.
4. Repeat Step (2) in respect of the second operand.
5. If the first operand has a "nomethod" method then use that.
6. If the second operand has a "nomethod" method then use that.
7. If `fallback` is TRUE for both operands then perform the usual operation for the operator, treating the operands as numbers, strings, or booleans as appropriate for the operator (see note).
8. Nothing worked - die.
Where there is only one operand (or only one operand with overloading) the checks in respect of the other operand above are skipped.
There are exceptions to the above rules for dereference operations (which, if Step 1 fails, always fall back to the normal, built-in implementations - see Dereferencing), and for `~~` (which has its own set of rules - see `Matching` under ["Overloadable Operations"](#Overloadable-Operations) above).
Note on Step 7: some operators have a different semantic depending on the type of their operands. As there is no way to instruct Perl to treat the operands as, e.g., numbers instead of strings, the result here may not be what you expect. See ["BUGS AND PITFALLS"](#BUGS-AND-PITFALLS).
###
Losing Overloading
The restriction for the comparison operation is that even if, for example, `cmp` should return a blessed reference, the autogenerated `lt` function will produce only a standard logical value based on the numerical value of the result of `cmp`. In particular, a working numeric conversion is needed in this case (possibly expressed in terms of other conversions).
Similarly, `.=` and `x=` operators lose their mathemagical properties if the string conversion substitution is applied.
When you chop() a mathemagical object it is promoted to a string and its mathemagical properties are lost. The same can happen with other operations as well.
###
Inheritance and Overloading
Overloading respects inheritance via the @ISA hierarchy. Inheritance interacts with overloading in two ways.
**Method names in the `use overload` directive**
If `value` in
```
use overload key => value;
```
is a string, it is interpreted as a method name - which may (in the usual way) be inherited from another class.
**Overloading of an operation is inherited by derived classes**
Any class derived from an overloaded class is also overloaded and inherits its operator implementations. If the same operator is overloaded in more than one ancestor then the implementation is determined by the usual inheritance rules.
For example, if `A` inherits from `B` and `C` (in that order), `B` overloads `+` with `\&D::plus_sub`, and `C` overloads `+` by `"plus_meth"`, then the subroutine `D::plus_sub` will be called to implement operation `+` for an object in package `A`.
Note that in Perl versions prior to 5.18, inheritance of the `fallback` key was not governed by the above rules: the value of `fallback` in the first overloaded ancestor was used. This was fixed in 5.18 to follow the usual rules of inheritance.
###
Run-time Overloading
Since all `use` directives are executed at compile-time, the only way to change overloading during run-time is to
```
eval 'use overload "+" => \&addmethod';
```
You can also use
```
eval 'no overload "+", "--", "<="';
```
though the use of these constructs during run-time is questionable.
###
Public Functions
Package `overload.pm` provides the following public functions:
**overload::StrVal(arg)**
Gives the string value of `arg` as in the absence of stringify overloading. If you are using this to get the address of a reference (useful for checking if two references point to the same thing) then you may be better off using `builtin::refaddr()` or `Scalar::Util::refaddr()`, which are faster.
**overload::Overloaded(arg)**
Returns true if `arg` is subject to overloading of some operations.
**overload::Method(obj,op)**
Returns `undef` or a reference to the method that implements `op`.
Such a method always takes three arguments, which will be enforced if it is an XS method.
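For example (a hedged sketch; the `Angle` class is a made-up example used only to have something overloaded to inspect):
```
package Angle;                        # hypothetical example class
use overload
    '+'  => sub { Angle->new( $_[0]{deg} + $_[1] ) },  # assumes a plain number on the right
    '""' => sub { $_[0]{deg} . " deg" },
    fallback => 1;
sub new { my ($class, $deg) = @_; bless { deg => $deg }, $class }

package main;
my $obj = Angle->new(90);
print "overloaded\n" if overload::Overloaded($obj);
print overload::StrVal($obj), "\n";              # e.g. Angle=HASH(0x55d2...)
my $meth = overload::Method($obj, '+');
print "'+' is implemented\n" if $meth;           # $meth is a code reference
```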
###
Overloading Constants
For some applications, the Perl parser mangles constants too much. It is possible to hook into this process via `overload::constant()` and `overload::remove_constant()` functions.
These functions take a hash as an argument. The recognized keys of this hash are:
* `integer` to overload integer constants,
* `float` to overload floating point constants,
* `binary` to overload octal and hexadecimal constants,
* `q` to overload `q`-quoted strings, constant pieces of `qq`- and `qx`-quoted strings and here-documents,
* `qr` to overload constant pieces of regular expressions.
The corresponding values are references to functions which take three arguments: the first one is the *initial* string form of the constant, the second one is how Perl interprets this constant, the third one is how the constant is used. Note that the initial string form does not contain string delimiters, and has backslashes in backslash-delimiter combinations stripped (thus the value of the delimiter is not relevant for processing of this string). The return value of this function is how this constant is going to be interpreted by Perl. The third argument is undefined except for overloaded `q`- and `qr`-constants: it is `q` in single-quote context (comes from strings, regular expressions, and single-quote HERE documents), `tr` for arguments of `tr`/`y` operators, `s` for the right-hand side of the `s`-operator, and `qq` otherwise.
Since an expression `"ab$cd,,"` is just a shortcut for `'ab' . $cd . ',,'`, it is expected that overloaded constant strings are equipped with reasonable overloaded catenation operator, otherwise absurd results will result. Similarly, negative numbers are considered as negations of positive constants.
Note that it is probably meaningless to call the functions overload::constant() and overload::remove\_constant() from anywhere but import() and unimport() methods. From these methods they may be called as
```
sub import {
shift;
return unless @_;
die "unknown import: @_" unless @_ == 1 and $_[0] eq ':constant';
overload::constant integer => sub {Math::BigInt->new(shift)};
}
```
IMPLEMENTATION
--------------
What follows is subject to change RSN.
The table of methods for all operations is cached in magic for the symbol table hash for the package. The cache is invalidated during processing of `use overload`, `no overload`, new function definitions, and changes in @ISA.
(Every SVish thing has a magic queue, and magic is an entry in that queue. This is how a single variable may participate in multiple forms of magic simultaneously. For instance, environment variables regularly have two forms at once: their %ENV magic and their taint magic. However, the magic which implements overloading is applied to the stashes, which are rarely used directly, thus should not slow down Perl.)
If a package uses overload, it carries a special flag. This flag is also set when new functions are defined or @ISA is modified. There will be a slight speed penalty on the very first operation thereafter that supports overloading, while the overload tables are updated. If there is no overloading present, the flag is turned off. Thus the only speed penalty thereafter is the checking of this flag.
It is expected that arguments to methods that are not explicitly supposed to be changed are constant (but this is not enforced).
COOKBOOK
--------
Please add examples to what follows!
###
Two-face Scalars
Put this in *two\_face.pm* in your Perl library directory:
```
package two_face; # Scalars with separate string and
# numeric values.
sub new { my $p = shift; bless [@_], $p }
use overload '""' => \&str, '0+' => \&num, fallback => 1;
sub num {shift->[1]}
sub str {shift->[0]}
```
Use it as follows:
```
require two_face;
my $seven = two_face->new("vii", 7);
printf "seven=$seven, seven=%d, eight=%d\n", $seven, $seven+1;
print "seven contains 'i'\n" if $seven =~ /i/;
```
(The second line creates a scalar which has both a string value, and a numeric value.) This prints:
```
seven=vii, seven=7, eight=8
seven contains 'i'
```
###
Two-face References
Suppose you want to create an object which is accessible as both an array reference and a hash reference.
```
package two_refs;
use overload '%{}' => \&gethash, '@{}' => sub { $ {shift()} };
sub new {
my $p = shift;
bless \ [@_], $p;
}
sub gethash {
my %h;
my $self = shift;
tie %h, ref $self, $self;
\%h;
}
sub TIEHASH { my $p = shift; bless \ shift, $p }
my %fields;
my $i = 0;
$fields{$_} = $i++ foreach qw{zero one two three};
sub STORE {
my $self = ${shift()};
my $key = $fields{shift()};
defined $key or die "Out of band access";
$$self->[$key] = shift;
}
sub FETCH {
my $self = ${shift()};
my $key = $fields{shift()};
defined $key or die "Out of band access";
$$self->[$key];
}
```
Now one can access an object using both the array and hash syntax:
```
my $bar = two_refs->new(3,4,5,6);
$bar->[2] = 11;
$bar->{two} == 11 or die 'bad hash fetch';
```
Note several important features of this example. First of all, the *actual* type of $bar is a scalar reference, and we do not overload the scalar dereference. Thus we can get the *actual* non-overloaded contents of $bar by just using `$$bar` (what we do in functions which overload dereference). Similarly, the object returned by the TIEHASH() method is a scalar reference.
Second, we create a new tied hash each time the hash syntax is used. This allows us not to worry about a possibility of a reference loop, which would lead to a memory leak.
Both these problems can be cured. Say, if we want to overload hash dereference on a reference to an object which is *implemented* as a hash itself, the only problem one has to circumvent is how to access this *actual* hash (as opposed to the *virtual* hash exhibited by the overloaded dereference operator). Here is one possible fetching routine:
```
sub access_hash {
my ($self, $key) = (shift, shift);
my $class = ref $self;
bless $self, 'overload::dummy'; # Disable overloading of %{}
my $out = $self->{$key};
bless $self, $class; # Restore overloading
$out;
}
```
To remove creation of the tied hash on each access, one may use an extra level of indirection which allows a non-circular structure of references:
```
package two_refs1;
use overload '%{}' => sub { ${shift()}->[1] },
'@{}' => sub { ${shift()}->[0] };
sub new {
my $p = shift;
my $a = [@_];
my %h;
tie %h, $p, $a;
bless \ [$a, \%h], $p;
}
sub gethash {
my %h;
my $self = shift;
tie %h, ref $self, $self;
\%h;
}
sub TIEHASH { my $p = shift; bless \ shift, $p }
my %fields;
my $i = 0;
$fields{$_} = $i++ foreach qw{zero one two three};
sub STORE {
my $a = ${shift()};
my $key = $fields{shift()};
defined $key or die "Out of band access";
$a->[$key] = shift;
}
sub FETCH {
my $a = ${shift()};
my $key = $fields{shift()};
defined $key or die "Out of band access";
$a->[$key];
}
```
Now if $baz is overloaded like this, then `$baz` is a reference to a reference to the intermediate array, which keeps a reference to an actual array, and the access hash. The tie()ing object for the access hash is a reference to a reference to the actual array, so
* There are no loops of references.
* Both "objects" which are blessed into the class `two_refs1` are references to a reference to an array, thus references to a *scalar*. Thus the accessor expression `$$foo->[$ind]` involves no overloaded operations.
###
Symbolic Calculator
Put this in *symbolic.pm* in your Perl library directory:
```
package symbolic; # Primitive symbolic calculator
use overload nomethod => \&wrap;
sub new { shift; bless ['n', @_] }
sub wrap {
my ($obj, $other, $inv, $meth) = @_;
($obj, $other) = ($other, $obj) if $inv;
bless [$meth, $obj, $other];
}
```
This module is very unusual as overloaded modules go: it does not provide any usual overloaded operators, instead it provides an implementation for `["nomethod"](#nomethod)`. In this example the `nomethod` subroutine returns an object which encapsulates operations done over the objects: `symbolic->new(3)` contains `['n', 3]`, `2 + symbolic->new(3)` contains `['+', 2, ['n', 3]]`.
Here is an example of the script which "calculates" the side of circumscribed octagon using the above package:
```
require symbolic;
my $iter = 1; # 2**($iter+2) = 8
my $side = symbolic->new(1);
my $cnt = $iter;
while ($cnt--) {
$side = (sqrt(1 + $side**2) - 1)/$side;
}
print "OK\n";
```
The value of $side is
```
['/', ['-', ['sqrt', ['+', 1, ['**', ['n', 1], 2]],
undef], 1], ['n', 1]]
```
Note that while we obtained this value using a nice little script, there is no simple way to *use* this value. In fact this value may be inspected in debugger (see <perldebug>), but only if `bareStringify` **O**ption is set, and not via `p` command.
If one attempts to print this value, then the overloaded operator `""` will be called, which will call `nomethod` operator. The result of this operator will be stringified again, but this result is again of type `symbolic`, which will lead to an infinite loop.
Add a pretty-printer method to the module *symbolic.pm*:
```
sub pretty {
my ($meth, $a, $b) = @{+shift};
$a = 'u' unless defined $a;
$b = 'u' unless defined $b;
$a = $a->pretty if ref $a;
$b = $b->pretty if ref $b;
"[$meth $a $b]";
}
```
Now one can finish the script by
```
print "side = ", $side->pretty, "\n";
```
The method `pretty` is doing object-to-string conversion, so it is natural to overload the operator `""` using this method. However, inside such a method it is not necessary to pretty-print the *components* $a and $b of an object. In the above subroutine `"[$meth $a $b]"` is a catenation of some strings and components $a and $b. If these components use overloading, the catenation operator will look for an overloaded operator `.`; if not present, it will look for an overloaded operator `""`. Thus it is enough to use
```
use overload nomethod => \&wrap, '""' => \&str;
sub str {
my ($meth, $a, $b) = @{+shift};
$a = 'u' unless defined $a;
$b = 'u' unless defined $b;
"[$meth $a $b]";
}
```
Now one can change the last line of the script to
```
print "side = $side\n";
```
which outputs
```
side = [/ [- [sqrt [+ 1 [** [n 1 u] 2]] u] 1] [n 1 u]]
```
and one can inspect the value in debugger using all the possible methods.
Something is still amiss: consider the loop variable $cnt of the script. It was a number, not an object. We cannot make this value of type `symbolic`, since then the loop will not terminate.
Indeed, to terminate the cycle, $cnt should become false. However, the operator `bool` for checking falsity is overloaded (this time via the overloaded `""`), and returns a long string; thus any object of type `symbolic` is true. To overcome this, we need a way to compare an object to 0. In fact, it is easier to write a numeric conversion routine.
Here is the text of *symbolic.pm* with such a routine added (and slightly modified str()):
```
package symbolic; # Primitive symbolic calculator
use overload
nomethod => \&wrap, '""' => \&str, '0+' => \&num;
sub new { shift; bless ['n', @_] }
sub wrap {
my ($obj, $other, $inv, $meth) = @_;
($obj, $other) = ($other, $obj) if $inv;
bless [$meth, $obj, $other];
}
sub str {
my ($meth, $a, $b) = @{+shift};
$a = 'u' unless defined $a;
if (defined $b) {
"[$meth $a $b]";
} else {
"[$meth $a]";
}
}
my %subr = ( n => sub {$_[0]},
sqrt => sub {sqrt $_[0]},
'-' => sub {shift() - shift()},
'+' => sub {shift() + shift()},
'/' => sub {shift() / shift()},
'*' => sub {shift() * shift()},
'**' => sub {shift() ** shift()},
);
sub num {
my ($meth, $a, $b) = @{+shift};
my $subr = $subr{$meth}
or die "Do not know how to ($meth) in symbolic";
$a = $a->num if ref $a eq __PACKAGE__;
$b = $b->num if ref $b eq __PACKAGE__;
$subr->($a,$b);
}
```
All the work of numeric conversion is done in %subr and num(). Of course, %subr is not complete; it contains only the operators used in the example below. Here is the extra-credit question: why do we need an explicit recursion in num()? (The answer is at the end of this section.)
Use this module like this:
```
require symbolic;
my $iter = symbolic->new(2); # 16-gon
my $side = symbolic->new(1);
my $cnt = $iter;
while ($cnt) {
$cnt = $cnt - 1; # Mutator '--' not implemented
$side = (sqrt(1 + $side**2) - 1)/$side;
}
printf "%s=%f\n", $side, $side;
printf "pi=%f\n", $side*(2**($iter+2));
```
It prints (without so many line breaks)
```
[/ [- [sqrt [+ 1 [** [/ [- [sqrt [+ 1 [** [n 1] 2]]] 1]
[n 1]] 2]]] 1]
[/ [- [sqrt [+ 1 [** [n 1] 2]]] 1] [n 1]]]=0.198912
pi=3.182598
```
The above module is very primitive. It does not implement mutator methods (`++`, `-=` and so on), does not do deep copying (not required without mutators!), and implements only those arithmetic operations which are used in the example.
To implement most arithmetic operations is easy; one should just use the tables of operations, and change the code which fills %subr to
```
my %subr = ( 'n' => sub {$_[0]} );
foreach my $op (split " ", $overload::ops{with_assign}) {
$subr{$op} = $subr{"$op="} = eval "sub {shift() $op shift()}";
}
my @bins = qw(binary 3way_comparison num_comparison str_comparison);
foreach my $op (split " ", "@overload::ops{ @bins }") {
$subr{$op} = eval "sub {shift() $op shift()}";
}
foreach my $op (split " ", "@overload::ops{qw(unary func)}") {
print "defining '$op'\n";
$subr{$op} = eval "sub {$op shift()}";
}
```
Since subroutines implementing assignment operators are not required to modify their operands (see ["Overloadable Operations"](#Overloadable-Operations) above), we do not need anything special to make `+=` and friends work, besides adding these operators to %subr and defining a copy constructor (needed since Perl has no way to know that the implementation of `'+='` does not mutate the argument - see ["Copy Constructor"](#Copy-Constructor)).
To implement a copy constructor, add `'=' => \&cpy` to the `use overload` line, and add the following code (it assumes that mutators change things one level deep only, so recursive copying is not needed):
```
sub cpy {
my $self = shift;
bless [@$self], ref $self;
}
```
To make `++` and `--` work, we need to implement actual mutators, either directly, or in `nomethod`. We continue to do things inside `nomethod`, thus add
```
if ($meth eq '++' or $meth eq '--') {
@$obj = ($meth, (bless [@$obj]), 1); # Avoid circular reference
return $obj;
}
```
after the first line of wrap(). This is not the most efficient implementation; one may consider
```
sub inc { $_[0] = bless ['++', shift, 1]; }
```
instead.
As a final remark, note that one can fill %subr by
```
my %subr = ( 'n' => sub {$_[0]} );
foreach my $op (split " ", $overload::ops{with_assign}) {
$subr{$op} = $subr{"$op="} = eval "sub {shift() $op shift()}";
}
my @bins = qw(binary 3way_comparison num_comparison str_comparison);
foreach my $op (split " ", "@overload::ops{ @bins }") {
$subr{$op} = eval "sub {shift() $op shift()}";
}
foreach my $op (split " ", "@overload::ops{qw(unary func)}") {
$subr{$op} = eval "sub {$op shift()}";
}
$subr{'++'} = $subr{'+'};
$subr{'--'} = $subr{'-'};
```
This finishes implementation of a primitive symbolic calculator in 50 lines of Perl code. Since the numeric values of subexpressions are not cached, the calculator is very slow.
Here is the answer for the exercise: In the case of str(), we need no explicit recursion since the overloaded `.`-operator will fall back to an existing overloaded operator `""`. Overloaded arithmetic operators *do not* fall back to numeric conversion if `fallback` is not explicitly requested. Thus without an explicit recursion num() would convert `['+', $a, $b]` to `$a + $b`, which would just rebuild the argument of num().
If you wonder why the defaults for conversion are different for str() and num(), note how easy it was to write the symbolic calculator. This simplicity is due to an appropriate choice of defaults. One extra note: due to the explicit recursion, num() is more fragile than str(): we need to explicitly check for the type of $a and $b. If the components $a and $b happen to be of some related type, this may lead to problems.
###
*Really* Symbolic Calculator
One may wonder why we call the above calculator symbolic. The reason is that the actual calculation of the value of an expression is postponed until the value is *used*.
To see it in action, add a method
```
sub STORE {
my $obj = shift;
$#$obj = 1;
@$obj[0,1] = ('=', shift);
}
```
to the package `symbolic`. After this change one can do
```
my $a = symbolic->new(3);
my $b = symbolic->new(4);
my $c = sqrt($a**2 + $b**2);
```
and the numeric value of $c becomes 5. However, after calling
```
$a->STORE(12); $b->STORE(5);
```
the numeric value of $c becomes 13. There is no doubt now that the module symbolic provides a *symbolic* calculator indeed.
To hide the rough edges under the hood, provide a tie()d interface to the package `symbolic`. Add methods
```
sub TIESCALAR { my $pack = shift; $pack->new(@_) }
sub FETCH { shift }
sub nop { } # Around a bug
```
(the bug, fixed in Perl 5.14, is described in ["BUGS"](#BUGS)). One can use this new interface as
```
tie $a, 'symbolic', 3;
tie $b, 'symbolic', 4;
$a->nop; $b->nop; # Around a bug
my $c = sqrt($a**2 + $b**2);
```
Now the numeric value of $c is 5. After `$a = 12; $b = 5` the numeric value of $c becomes 13. To insulate the user of the module, add a method
```
sub vars { my $p = shift; tie($_, $p), $_->nop foreach @_; }
```
Now
```
my ($a, $b);
symbolic->vars($a, $b);
my $c = sqrt($a**2 + $b**2);
$a = 3; $b = 4;
printf "c5 %s=%f\n", $c, $c;
$a = 12; $b = 5;
printf "c13 %s=%f\n", $c, $c;
```
shows that the numeric value of $c follows changes to the values of $a and $b.
AUTHOR
------
Ilya Zakharevich <*[email protected]*>.
SEE ALSO
---------
The `overloading` pragma can be used to enable or disable overloaded operations within a lexical scope - see <overloading>.
DIAGNOSTICS
-----------
When Perl is run with the **-Do** switch or its equivalent, overloading induces diagnostic messages.
Using the `m` command of the Perl debugger (see <perldebug>) one can deduce which operations are overloaded (and which ancestor triggers this overloading). Say, if `eq` is overloaded, then the method `(eq` is shown by the debugger. The method `()` corresponds to the `fallback` key (in fact the presence of this method shows that this package has overloading enabled, and it is what is used by the `Overloaded` function of the module `overload`).
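Outside the debugger, the same information can be obtained from the public functions of the `overload` module. This sketch assumes `$obj` is some object you want to inspect:
```
use overload ();
if ( overload::Overloaded($obj) ) { # $obj: any object (assumption)
    my $impl = overload::Method($obj, 'eq'); # code ref, or undef if 'eq' is not overloaded
    print "eq is overloaded for ", ref($obj), "\n" if $impl;
}
```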
The module might issue the following warnings:
Odd number of arguments for overload::constant (W) The call to overload::constant contained an odd number of arguments. The arguments should come in pairs.
'%s' is not an overloadable type (W) You tried to overload a constant type the overload package is unaware of.
'%s' is not a code reference (W) The second (fourth, sixth, ...) argument of overload::constant needs to be a code reference. Either an anonymous subroutine, or a reference to a subroutine.
overload arg '%s' is invalid (W) `use overload` was passed an argument it did not recognize. Did you mistype an operator?
BUGS AND PITFALLS
------------------
* A pitfall when fallback is TRUE and Perl resorts to a built-in implementation of an operator is that some operators have more than one semantic, for example `|`:
```
use overload '0+' => sub { $_[0]->{n}; },
fallback => 1;
my $x = bless { n => 4 }, "main";
my $y = bless { n => 8 }, "main";
print $x | $y, "\n";
```
You might expect this to output "12". In fact, it prints "<": the ASCII result of treating "|" as a bitwise string operator - that is, the result of treating the operands as the strings "4" and "8" rather than numbers. The fact that numify (`0+`) is implemented but stringify (`""`) isn't makes no difference since the latter is simply autogenerated from the former.
The only way to change this is to provide your own subroutine for `'|'`.
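For example, one might add a numeric-aware handler for `'|'` alongside the `0+` handler (a sketch continuing the snippet above):
```
use overload
    '0+' => sub { $_[0]->{n} },
    '|' => sub { $_[0]->{n} | (ref $_[1] ? $_[1]->{n} : $_[1]) },
    fallback => 1;
my $x = bless { n => 4 }, "main";
my $y = bless { n => 8 }, "main";
print $x | $y, "\n"; # now prints 12
```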
* Magic autogeneration increases the potential for inadvertently creating self-referential structures. Currently Perl will not free self-referential structures until cycles are explicitly broken. For example,
```
use overload '+' => 'add';
sub add { bless [ \$_[0], \$_[1] ] };
```
is asking for trouble, since
```
$obj += $y;
```
will effectively become
```
$obj = add($obj, $y, undef);
```
with the same result as
```
$obj = [\$obj, \$y];
```
Even if no *explicit* assignment-variants of operators are present in the script, they may be generated by the optimizer. For example,
```
"obj = $obj\n"
```
may be optimized to
```
my $tmp = 'obj = ' . $obj; $tmp .= "\n";
```
* The symbol table is filled with names looking like line-noise.
* This bug was fixed in Perl 5.18, but may still trip you up if you are using older versions:
For the purpose of inheritance every overloaded package behaves as if `fallback` is present (possibly undefined). This may create interesting effects if some package is not overloaded, but inherits from two overloaded packages.
* Before Perl 5.14, the relation between overloading and tie()ing was broken. Overloading was triggered or not based on the *previous* class of the tie()d variable.
This happened because the presence of overloading was checked too early, before any tie()d access was attempted. If the class of the value FETCH()ed from the tied variable does not change, a simple workaround for code that is to run on older Perl versions is to access the value (via `() = $foo` or some such) immediately after tie()ing, so that after this call the *previous* class coincides with the current one.
* Barewords are not covered by overloaded string constants.
* The range operator `..` cannot be overloaded.
perl perlsolaris perlsolaris
===========
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
+ [Solaris Version Numbers.](#Solaris-Version-Numbers.)
* [RESOURCES](#RESOURCES)
* [SETTING UP](#SETTING-UP)
+ [File Extraction Problems on Solaris.](#File-Extraction-Problems-on-Solaris.)
+ [Compiler and Related Tools on Solaris.](#Compiler-and-Related-Tools-on-Solaris.)
- [Include /usr/ccs/bin/ in your PATH.](#Include-/usr/ccs/bin/-in-your-PATH.)
- [Avoid /usr/ucb/cc.](#Avoid-/usr/ucb/cc.)
- [Sun's C Compiler](#Sun's-C-Compiler)
- [GCC](#GCC)
- [GNU as and GNU ld](#GNU-as-and-GNU-ld)
- [Sun and GNU make](#Sun-and-GNU-make)
- [Avoid libucb.](#Avoid-libucb.)
+ [Environment for Compiling perl on Solaris](#Environment-for-Compiling-perl-on-Solaris)
- [PATH](#PATH)
- [LD\_LIBRARY\_PATH](#LD_LIBRARY_PATH)
* [RUN CONFIGURE.](#RUN-CONFIGURE.)
+ [64-bit perl on Solaris.](#64-bit-perl-on-Solaris.)
- [General 32-bit vs. 64-bit issues.](#General-32-bit-vs.-64-bit-issues.)
- [Large File Support](#Large-File-Support)
- [Building an LP64 perl](#Building-an-LP64-perl)
- [Long Doubles.](#Long-Doubles.)
+ [Threads in perl on Solaris.](#Threads-in-perl-on-Solaris.)
+ [Malloc Issues with perl on Solaris.](#Malloc-Issues-with-perl-on-Solaris.)
* [MAKE PROBLEMS.](#MAKE-PROBLEMS.)
* [MAKE TEST](#MAKE-TEST)
+ [op/stat.t test 4 in Solaris](#op/stat.t-test-4-in-Solaris)
+ [nss\_delete core dump from op/pwent or op/grent](#nss_delete-core-dump-from-op/pwent-or-op/grent)
* [CROSS-COMPILATION](#CROSS-COMPILATION)
* [PREBUILT BINARIES OF PERL FOR SOLARIS.](#PREBUILT-BINARIES-OF-PERL-FOR-SOLARIS.)
* [RUNTIME ISSUES FOR PERL ON SOLARIS.](#RUNTIME-ISSUES-FOR-PERL-ON-SOLARIS.)
+ [Limits on Numbers of Open Files on Solaris.](#Limits-on-Numbers-of-Open-Files-on-Solaris.)
* [SOLARIS-SPECIFIC MODULES.](#SOLARIS-SPECIFIC-MODULES.)
* [SOLARIS-SPECIFIC PROBLEMS WITH MODULES.](#SOLARIS-SPECIFIC-PROBLEMS-WITH-MODULES.)
+ [Proc::ProcessTable on Solaris](#Proc::ProcessTable-on-Solaris)
+ [BSD::Resource on Solaris](#BSD::Resource-on-Solaris)
+ [Net::SSLeay on Solaris](#Net::SSLeay-on-Solaris)
* [SunOS 4.x](#SunOS-4.x)
* [AUTHOR](#AUTHOR)
NAME
----
perlsolaris - Perl version 5 on Solaris systems
DESCRIPTION
-----------
This document describes various features of Sun's Solaris operating system that will affect how Perl version 5 (hereafter just perl) is compiled and/or runs. Some issues relating to the older SunOS 4.x are also discussed, though they may be out of date.
For the most part, everything should just work.
Starting with Solaris 8, perl5.00503 (or higher) is supplied with the operating system, so you might not even need to build a newer version of perl at all. The Sun-supplied version is installed in /usr/perl5 with */usr/bin/perl* pointing to */usr/perl5/bin/perl*. Do not disturb that installation unless you really know what you are doing. If you remove the perl supplied with the OS, you will render some bits of your system inoperable. If you wish to install a newer version of perl, install it under a different prefix from /usr/perl5. Common prefixes to use are /usr/local and /opt/perl.
You may wish to put your version of perl in the PATH of all users by changing the link */usr/bin/perl*. This is probably OK, as most perl scripts shipped with Solaris use an explicit path. (There are a few exceptions, such as */usr/bin/rpm2cpio* and */etc/rcm/scripts/README*, but these are also sufficiently generic that the actual version of perl probably doesn't matter too much.)
Solaris ships with a range of Solaris-specific modules. If you choose to install your own version of perl you will find the source of many of these modules is available on CPAN under the Sun::Solaris:: namespace.
Solaris may include two versions of perl, e.g. Solaris 9 includes both 5.005\_03 and 5.6.1. This is to provide stability across Solaris releases, in cases where a later perl version has incompatibilities with the version included in the preceding Solaris release. The default perl version will always be the most recent, and in general the old version will only be retained for one Solaris release. Note also that the default perl will NOT be configured to search for modules in the older version, again due to compatibility/stability concerns. As a consequence if you upgrade Solaris, you will have to rebuild/reinstall any additional CPAN modules that you installed for the previous Solaris version. See the CPAN manpage under 'autobundle' for a quick way of doing this.
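For example, capturing the current module list before the upgrade with the interactive CPAN shell might look like this (a sketch; `autobundle` writes a Bundle file which you can later install on the new perl):
```
$ perl -MCPAN -e shell
cpan> autobundle
```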
As an interim measure, you may either change the #! line of your scripts to specifically refer to the old perl version, e.g. on Solaris 9 use #!/usr/perl5/5.00503/bin/perl to use the perl version that was the default for Solaris 8, or if you have a large number of scripts it may be more convenient to make the old version of perl the default on your system. You can do this by changing the appropriate symlinks under /usr/perl5 as follows (example for Solaris 9):
```
# cd /usr/perl5
# rm bin man pod
# ln -s ./5.00503/bin
# ln -s ./5.00503/man
# ln -s ./5.00503/lib/pod
# rm /usr/bin/perl
# ln -s ../perl5/5.00503/bin/perl /usr/bin/perl
```
In both cases this should only be considered to be a temporary measure - you should upgrade to the later version of perl as soon as is practicable.
Note also that the perl command-line utilities (e.g. perldoc) and any that are added by modules that you install will be under /usr/perl5/bin, so that directory should be added to your PATH.
###
Solaris Version Numbers.
For consistency with common usage, perl's Configure script performs some minor manipulations on the operating system name and version number as reported by uname. Here's a partial translation table:
```
Sun: perl's Configure:
uname uname -r Name osname osvers
SunOS 4.1.3 Solaris 1.1 sunos 4.1.3
SunOS 5.6 Solaris 2.6 solaris 2.6
SunOS 5.8 Solaris 8 solaris 2.8
SunOS 5.9 Solaris 9 solaris 2.9
SunOS 5.10 Solaris 10 solaris 2.10
```
The complete table can be found in the Sun Managers' FAQ <ftp://ftp.cs.toronto.edu/pub/jdd/sunmanagers/faq> under "9.1) Which Sun models run which versions of SunOS?".
RESOURCES
---------
There are many, many sources for Solaris information. A few of the important ones for perl:
Solaris FAQ The Solaris FAQ is available at <http://www.science.uva.nl/pub/solaris/solaris2.html>.
The Sun Managers' FAQ is available at <ftp://ftp.cs.toronto.edu/pub/jdd/sunmanagers/faq>
Precompiled Binaries Precompiled binaries, links to many sites, and much, much more are available at <http://www.sunfreeware.com/> and <http://www.blastwave.org/>.
Solaris Documentation All Solaris documentation is available on-line at <http://docs.sun.com/>.
SETTING UP
-----------
###
File Extraction Problems on Solaris.
Be sure to use a tar program compiled under Solaris (not SunOS 4.x) to extract the perl-5.x.x.tar.gz file. Do not use GNU tar compiled for SunOS4 on Solaris. (GNU tar compiled for Solaris should be fine.) When you run SunOS4 binaries on Solaris, the run-time system magically alters pathnames matching m#lib/locale# so that when tar tries to create lib/locale.pm, a file named lib/oldlocale.pm gets created instead. If you found this advice too late and used a SunOS4-compiled tar anyway, you must find the incorrectly renamed file and move it back to lib/locale.pm.
###
Compiler and Related Tools on Solaris.
You must use an ANSI C compiler to build perl. Perl can be compiled with either Sun's add-on C compiler or with gcc. The C compiler that shipped with SunOS4 will not do.
####
Include /usr/ccs/bin/ in your PATH.
Several tools needed to build perl are located in /usr/ccs/bin/: ar, as, ld, and make. Make sure that /usr/ccs/bin/ is in your PATH.
On all the released versions of Solaris (8, 9 and 10) you need to make sure the following packages are installed (this info is extracted from the Solaris FAQ):
for tools (sccs, lex, yacc, make, nm, truss, ld, as): SUNWbtool, SUNWsprot, SUNWtoo
for libraries & headers: SUNWhea, SUNWarc, SUNWlibm, SUNWlibms, SUNWdfbh, SUNWcg6h, SUNWxwinc
Additionally, on Solaris 8 and 9 you also need:
for 64 bit development: SUNWarcx, SUNWbtoox, SUNWdplx, SUNWscpux, SUNWsprox, SUNWtoox, SUNWlmsx, SUNWlmx, SUNWlibCx
And only on Solaris 8 you also need:
for libraries & headers: SUNWolinc
If you are in doubt which package contains a file you are missing, try to find an installation that has that file. Then do a
```
$ grep /my/missing/file /var/sadm/install/contents
```
This will display a line like this:
```
/usr/include/sys/errno.h f none 0644 root bin 7471 37605 956241356 SUNWhea
```
The last item listed (SUNWhea in this example) is the package you need.
####
Avoid /usr/ucb/cc.
You don't need to have /usr/ucb/ in your PATH to build perl. If you want /usr/ucb/ in your PATH anyway, make sure that /usr/ucb/ is NOT in your PATH before the directory containing the right C compiler.
####
Sun's C Compiler
If you use Sun's C compiler, make sure the correct directory (usually /opt/SUNWspro/bin/) is in your PATH (before /usr/ucb/).
#### GCC
If you use gcc, make sure your installation is recent and complete. perl versions since 5.6.0 build fine with gcc > 2.8.1 on Solaris >= 2.6.
You must Configure perl with
```
$ sh Configure -Dcc=gcc
```
If you don't, you may experience strange build errors.
If you have updated your Solaris version, you may also have to update your gcc. For example, if you are running Solaris 2.6 and your gcc is installed under /usr/local, check in /usr/local/lib/gcc-lib and make sure you have the appropriate directory, sparc-sun-solaris2.6/ or i386-pc-solaris2.6/. If gcc's directory is for a different version of Solaris than you are running, then you will need to rebuild gcc for your new version of Solaris.
You can get a precompiled version of gcc from <http://www.sunfreeware.com/> or <http://www.blastwave.org/>. Make sure you pick up the package for your Solaris release.
If you wish to use gcc to build add-on modules for use with the perl shipped with Solaris, you should use the Solaris::PerlGcc module which is available from CPAN. The perl shipped with Solaris is configured and built with the Sun compilers, and the compiler configuration information stored in Config.pm is therefore only relevant to the Sun compilers. The Solaris:PerlGcc module contains a replacement Config.pm that is correct for gcc - see the module for details.
####
GNU as and GNU ld
The following information applies to gcc version 2. Volunteers to update it as appropriately for gcc version 3 would be appreciated.
The versions of as and ld supplied with Solaris work fine for building perl. There is normally no need to install the GNU versions to compile perl.
If you decide to ignore this advice and use the GNU versions anyway, then be sure that they are relatively recent. Versions newer than 2.7 are apparently new enough. Older versions may have trouble with dynamic loading.
If you wish to use GNU ld, then you need to pass it the -Wl,-E flag. The hints/solaris\_2.sh file tries to do this automatically by setting the following Configure variables:
```
ccdlflags="$ccdlflags -Wl,-E"
lddlflags="$lddlflags -Wl,-E -G"
```
However, over the years, changes in gcc, GNU ld, and Solaris ld have made it difficult to automatically detect which ld ultimately gets called. You may have to manually edit config.sh and add the -Wl,-E flags yourself, or else run Configure interactively and add the flags at the appropriate prompts.
If your gcc is configured to use GNU as and ld but you want to use the Solaris ones instead to build perl, then you'll need to add -B/usr/ccs/bin/ to the gcc command line. One convenient way to do that is with
```
$ sh Configure -Dcc='gcc -B/usr/ccs/bin/'
```
Note that the trailing slash is required. This will result in some harmless warnings as Configure is run:
```
gcc: file path prefix `/usr/ccs/bin/' never used
```
These messages may safely be ignored. (Note that for a SunOS4 system, you must use -B/bin/ instead.)
Alternatively, you can use the GCC\_EXEC\_PREFIX environment variable to ensure that Sun's as and ld are used. Consult your gcc documentation for further information on the -B option and the GCC\_EXEC\_PREFIX variable.
####
Sun and GNU make
The make under /usr/ccs/bin works fine for building perl. If you have the Sun C compilers, you will also have a parallel version of make (dmake). This works fine to build perl, but can sometimes cause problems when running 'make test' due to underspecified dependencies between the different test harness files. The same problem can also affect the building of some add-on modules, so in those cases either specify '-m serial' on the dmake command line, or use /usr/ccs/bin/make instead. If you wish to use GNU make, be sure that the set-group-id bit is not set. If it is, then arrange your PATH so that /usr/ccs/bin/make is before GNU make or else have the system administrator disable the set-group-id bit on GNU make.
####
Avoid libucb.
Solaris provides some BSD-compatibility functions in /usr/ucblib/libucb.a. Perl will not build and run correctly if linked against -lucb since it contains routines that are incompatible with the standard Solaris libc. Normally this is not a problem since the solaris hints file prevents Configure from even looking in /usr/ucblib for libraries, and also explicitly omits -lucb.
###
Environment for Compiling perl on Solaris
#### PATH
Make sure your PATH includes the compiler (/opt/SUNWspro/bin/ if you're using Sun's compiler) as well as /usr/ccs/bin/ to pick up the other development tools (such as make, ar, as, and ld). Make sure your path either doesn't include /usr/ucb or that it includes it after the compiler and compiler tools and other standard Solaris directories. You definitely don't want /usr/ucb/cc.
#### LD\_LIBRARY\_PATH
If you have the LD\_LIBRARY\_PATH environment variable set, be sure that it does NOT include /lib or /usr/lib. If you will be building extensions that call third-party shared libraries (e.g. Berkeley DB) then make sure that your LD\_LIBRARY\_PATH environment variable includes the directory with that library (e.g. /usr/local/lib).
If you get an error message
```
dlopen: stub interception failed
```
it is probably because your LD\_LIBRARY\_PATH environment variable includes a directory which is a symlink to /usr/lib (such as /lib). The reason this causes a problem is quite subtle. The file libdl.so.1.0 actually \*only\* contains functions which generate 'stub interception failed' errors! The runtime linker intercepts links to "/usr/lib/libdl.so.1.0" and links in internal implementations of those functions instead. [Thanks to Tim Bunce for this explanation.]
RUN CONFIGURE.
---------------
See the INSTALL file for general information regarding Configure. Only Solaris-specific issues are discussed here. Usually, the defaults should be fine.
###
64-bit perl on Solaris.
See the INSTALL file for general information regarding 64-bit compiles. In general, the defaults should be fine for most people.
By default, perl-5.6.0 (or later) is compiled as a 32-bit application with largefile and long-long support.
####
General 32-bit vs. 64-bit issues.
Solaris 7 and above will run in either 32 bit or 64 bit mode on SPARC CPUs, via a reboot. You can build 64 bit apps whilst running 32 bit mode and vice-versa. 32 bit apps will run under Solaris running in either 32 or 64 bit mode. 64 bit apps require Solaris to be running 64 bit mode.
Existing 32 bit apps are properly known as LP32, i.e. Longs and Pointers are 32 bit. 64-bit apps are more properly known as LP64. The discriminating feature of a LP64 bit app is its ability to utilise a 64-bit address space. It is perfectly possible to have a LP32 bit app that supports both 64-bit integers (long long) and largefiles (> 2GB), and this is the default for perl-5.6.0.
For a more complete explanation of 64-bit issues, see the "Solaris 64-bit Developer's Guide" at <http://docs.sun.com/>
You can detect the OS mode using "isainfo -v", e.g.
```
$ isainfo -v # Ultra 30 in 64 bit mode
64-bit sparcv9 applications
32-bit sparc applications
```
By default, perl will be compiled as a 32-bit application. Unless you want to allocate more than ~ 4GB of memory inside perl, or unless you need more than 255 open file descriptors, you probably don't need perl to be a 64-bit app.
####
Large File Support
For Solaris 2.6 and onwards, there are two different ways for 32-bit applications to manipulate large files (files whose size is > 2GByte). (A 64-bit application automatically has largefile support built in by default.)
First is the "transitional compilation environment", described in lfcompile64(5). According to the man page,
```
The transitional compilation environment exports all the
explicit 64-bit functions (xxx64()) and types in addition to
all the regular functions (xxx()) and types. Both xxx() and
xxx64() functions are available to the program source. A
32-bit application must use the xxx64() functions in order
to access large files. See the lf64(5) manual page for a
complete listing of the 64-bit transitional interfaces.
```
The transitional compilation environment is obtained with the following compiler and linker flags:
```
getconf LFS64_CFLAGS -D_LARGEFILE64_SOURCE
getconf LFS64_LDFLAG # nothing special needed
getconf LFS64_LIBS # nothing special needed
```
Second is the "large file compilation environment", described in lfcompile(5). According to the man page,
```
Each interface named xxx() that needs to access 64-bit entities
to access large files maps to a xxx64() call in the
resulting binary. All relevant data types are defined to be
of correct size (for example, off_t has a typedef definition
for a 64-bit entity).
An application compiled in this environment is able to use
the xxx() source interfaces to access both large and small
files, rather than having to explicitly utilize the transitional
xxx64() interface calls to access large files.
```
Two exceptions are fseek() and ftell(). 32-bit applications should use fseeko(3C) and ftello(3C). These will get automatically mapped to fseeko64() and ftello64().
The large file compilation environment is obtained with
```
getconf LFS_CFLAGS -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
getconf LFS_LDFLAGS # nothing special needed
getconf LFS_LIBS # nothing special needed
```
By default, perl uses the large file compilation environment and relies on Solaris to do the underlying mapping of interfaces.
####
Building an LP64 perl
To compile a 64-bit application on an UltraSparc with a recent Sun Compiler, you need to use the flag "-xarch=v9". getconf(1) will tell you this, e.g.
```
$ getconf -a | grep v9
XBS5_LP64_OFF64_CFLAGS: -xarch=v9
XBS5_LP64_OFF64_LDFLAGS: -xarch=v9
XBS5_LP64_OFF64_LINTFLAGS: -xarch=v9
XBS5_LPBIG_OFFBIG_CFLAGS: -xarch=v9
XBS5_LPBIG_OFFBIG_LDFLAGS: -xarch=v9
XBS5_LPBIG_OFFBIG_LINTFLAGS: -xarch=v9
_XBS5_LP64_OFF64_CFLAGS: -xarch=v9
_XBS5_LP64_OFF64_LDFLAGS: -xarch=v9
_XBS5_LP64_OFF64_LINTFLAGS: -xarch=v9
_XBS5_LPBIG_OFFBIG_CFLAGS: -xarch=v9
_XBS5_LPBIG_OFFBIG_LDFLAGS: -xarch=v9
_XBS5_LPBIG_OFFBIG_LINTFLAGS: -xarch=v9
```
This flag is supported in Sun WorkShop Compilers 5.0 and onwards (now marketed under the name Forte) when used on Solaris 7 or later on UltraSparc systems.
If you are using gcc, you would need to use -mcpu=v9 -m64 instead. This option is not yet supported as of gcc 2.95.2; from install/SPECIFIC in that release:
```
GCC version 2.95 is not able to compile code correctly for sparc64
targets. Users of the Linux kernel, at least, can use the sparc32
program to start up a new shell invocation with an environment that
causes configure to recognize (via uname -a) the system as sparc-*-*
instead.
```
All this should be handled automatically by the hints file, if requested.
####
Long Doubles.
As of 5.8.1, long doubles are working if you use the Sun compilers (needed for additional math routines not included in libm).
###
Threads in perl on Solaris.
It is possible to build a threaded version of perl on Solaris. The entire perl thread implementation is still experimental, however, so beware.
###
Malloc Issues with perl on Solaris.
Starting from perl 5.7.1 perl uses the Solaris malloc, since the perl malloc breaks when dealing with more than 2GB of memory, and the Solaris malloc also seems to be faster.
If you for some reason (such as binary backward compatibility) really need to use perl's malloc, you can rebuild perl from the sources and Configure the build with
```
$ sh Configure -Dusemymalloc
```
You should not use perl's malloc if you are building with gcc. There are reports of core dumps, especially in the PDL module. The problem appears to go away under -DDEBUGGING, so it has been difficult to track down. Sun's compiler appears to be okay with or without perl's malloc. [XXX further investigation is needed here.]
MAKE PROBLEMS.
---------------
Dynamic Loading Problems With GNU as and GNU ld If you have problems with dynamic loading using gcc on SunOS or Solaris, and you are using GNU as and GNU ld, see the section ["GNU as and GNU ld"](#GNU-as-and-GNU-ld) above.
ld.so.1: ./perl: fatal: relocation error: If you get this message on SunOS or Solaris, and you're using gcc, it's probably the GNU as or GNU ld problem in the previous item ["GNU as and GNU ld"](#GNU-as-and-GNU-ld).
dlopen: stub interception failed The primary cause of the 'dlopen: stub interception failed' message is that the LD\_LIBRARY\_PATH environment variable includes a directory which is a symlink to /usr/lib (such as /lib). See ["LD\_LIBRARY\_PATH"](#LD_LIBRARY_PATH) above.
#error "No DATAMODEL\_NATIVE specified" This is a common error when trying to build perl on Solaris 2.6 with a gcc installation from Solaris 2.5 or 2.5.1. The Solaris header files changed, so you need to update your gcc installation. You can either rerun the fixincludes script from gcc or take the opportunity to update your gcc installation.
sh: ar: not found This is a message from your shell telling you that the command 'ar' was not found. You need to check your PATH environment variable to make sure that it includes the directory with the 'ar' command. This is a common problem on Solaris, where 'ar' is in the /usr/ccs/bin/ directory.
MAKE TEST
----------
###
op/stat.t test 4 in Solaris
*op/stat.t* test 4 may fail if you are on a tmpfs of some sort. Building in /tmp sometimes shows this behavior. The test suite detects if you are building in /tmp, but it may not be able to catch all tmpfs situations.
###
nss\_delete core dump from op/pwent or op/grent
See ["nss\_delete core dump from op/pwent or op/grent" in perlhpux](perlhpux#nss_delete-core-dump-from-op%2Fpwent-or-op%2Fgrent).
CROSS-COMPILATION
------------------
Nothing too unusual here. You can easily do this if you have a cross-compiler available; a usual Configure invocation when targeting a Solaris x86 system looks something like this:
```
sh ./Configure -des -Dusecrosscompile \
-Dcc=i386-pc-solaris2.11-gcc \
-Dsysroot=$SYSROOT \
-Alddlflags=" -Wl,-z,notext" \
-Dtargethost=... # The usual cross-compilation options
```
The lddlflags addition is the only abnormal bit.
PREBUILT BINARIES OF PERL FOR SOLARIS.
---------------------------------------
You can pick up prebuilt binaries for Solaris from <http://www.sunfreeware.com/>, <http://www.blastwave.org>, ActiveState <http://www.activestate.com/>, and <http://www.perl.com/> under the Binaries list at the top of the page. There are probably other sources as well. Please note that these sites are under the control of their respective owners, not the perl developers.
RUNTIME ISSUES FOR PERL ON SOLARIS.
------------------------------------
###
Limits on Numbers of Open Files on Solaris.
The stdio(3C) manpage notes that for LP32 applications, only 255 files may be opened using fopen(), and only file descriptors 0 through 255 can be used in a stream. Since perl calls open() and then fdopen(3C) with the resulting file descriptor, perl is limited to 255 simultaneous open files, even if sysopen() is used. If this proves to be an insurmountable problem, you can compile perl as a LP64 application, see ["Building an LP64 perl"](#Building-an-LP64-perl) for details. Note also that the default resource limit for open file descriptors on Solaris is 255, so you will have to modify your ulimit or rctl (Solaris 9 onwards) appropriately.
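To see the practical limit of the perl you are running, a quick probe such as the following can be used (a sketch; the file opened is arbitrary, and the number you see also depends on your ulimit/rctl settings):
```
my @handles;
while ( open my $fh, '<', '/etc/passwd' ) {
    push @handles, $fh;
}
printf "gave up after %d extra filehandles: %s\n", scalar @handles, $!;
```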
SOLARIS-SPECIFIC MODULES.
--------------------------
See the modules under the Solaris:: and Sun::Solaris namespaces on CPAN, see <http://www.cpan.org/modules/by-module/Solaris/> and <http://www.cpan.org/modules/by-module/Sun/>.
SOLARIS-SPECIFIC PROBLEMS WITH MODULES.
----------------------------------------
###
Proc::ProcessTable on Solaris
Proc::ProcessTable does not compile on Solaris with perl5.6.0 and higher if you have LARGEFILES defined. Since largefile support is the default in 5.6.0 and later, you have to take special steps to use this module.
The problem is that various structures visible via procfs use off\_t, and if you compile with largefile support these change from 32 bits to 64 bits. Thus what you get back from procfs doesn't match up with the structures in perl, resulting in garbage. See proc(4) for further discussion.
A fix for Proc::ProcessTable is to edit Makefile to explicitly remove the largefile flags from the ones MakeMaker picks up from Config.pm. This will result in Proc::ProcessTable being built under the correct environment. Everything should then be OK as long as Proc::ProcessTable doesn't try to share off\_t's with the rest of perl, or if it does they should be explicitly specified as off64\_t.
###
BSD::Resource on Solaris
BSD::Resource versions earlier than 1.09 do not compile on Solaris with perl 5.6.0 and higher, for the same reasons as Proc::ProcessTable. BSD::Resource versions starting from 1.09 have a workaround for the problem.
###
Net::SSLeay on Solaris
Net::SSLeay requires a /dev/urandom to be present. This device is available from Solaris 9 onwards. For earlier Solaris versions you can either get the package SUNWski (packaged with several Sun software products, for example the Sun WebServer, which is part of the Solaris Server Intranet Extension, or the Sun Directory Services, part of Solaris for ISPs) or download the ANDIrand package from <http://www.cosy.sbg.ac.at/~andi/>. If you use SUNWski, make a symbolic link /dev/urandom pointing to /dev/random. For more details, see Document ID27606 entitled "Differing /dev/random support requirements within Solaris[TM] Operating Environments", available at <http://sunsolve.sun.com> .
It may be possible to use the Entropy Gathering Daemon (written in Perl!), available from <http://www.lothar.com/tech/crypto/>.
SunOS 4.x
----------
In SunOS 4.x you most probably want to use the SunOS ld, /usr/bin/ld, since the more recent versions of GNU ld (like 2.13) do not seem to work for building Perl anymore. When linking the extensions, the GNU ld gets very unhappy and spews a lot of errors like this
```
... relocation truncated to fit: BASE13 ...
```
and dies. Therefore the SunOS 4.1 hints file explicitly sets the ld to be */usr/bin/ld*.
As of Perl 5.8.1 the dynamic loading of libraries (DynaLoader, XSLoader) also seems to have become broken in SunOS 4.x. Therefore the default is to build Perl statically.
Running the test suite in SunOS 4.1 is a bit tricky since the *dist/Tie-File/t/09\_gen\_rs.t* test hangs (subtest #51, FWIW) for some unknown reason. Just stop the test and kill that particular Perl process.
There are various other failures, that as of SunOS 4.1.4 and gcc 3.2.2 look a lot like gcc bugs. Many of the failures happen in the Encode tests, where for example when the test expects "0" you get "0" which should after a little squinting look very odd indeed. Another example is earlier in *t/run/fresh\_perl* where chr(0xff) is expected but the test fails because the result is chr(0xff). Exactly.
This is the "make test" result from the said combination:
```
Failed 27 test scripts out of 745, 96.38% okay.
```
Running the `harness` is painful because the many failing Unicode-related tests output megabytes of failure messages, but if one patiently waits, one gets these results:
```
Failed Test Stat Wstat Total Fail Failed List of Failed
-----------------------------------------------------------------------------
...
../ext/Encode/t/at-cn.t 4 1024 29 4 13.79% 14-17
../ext/Encode/t/at-tw.t 10 2560 17 10 58.82% 2 4 6 8 10 12
14-17
../ext/Encode/t/enc_data.t 29 7424 ?? ?? % ??
../ext/Encode/t/enc_eucjp.t 29 7424 ?? ?? % ??
../ext/Encode/t/enc_module.t 29 7424 ?? ?? % ??
../ext/Encode/t/encoding.t 29 7424 ?? ?? % ??
../ext/Encode/t/grow.t 12 3072 24 12 50.00% 2 4 6 8 10 12 14
16 18 20 22 24
Failed Test Stat Wstat Total Fail Failed List of Failed
------------------------------------------------------------------------------
../ext/Encode/t/guess.t 255 65280 29 40 137.93% 10-29
../ext/Encode/t/jperl.t 29 7424 15 30 200.00% 1-15
../ext/Encode/t/mime-header.t 2 512 10 2 20.00% 2-3
../ext/Encode/t/perlio.t 22 5632 38 22 57.89% 1-4 9-16 19-20
23-24 27-32
../ext/List/Util/t/shuffle.t 0 139 ?? ?? % ??
../ext/PerlIO/t/encoding.t 14 1 7.14% 11
../ext/PerlIO/t/fallback.t 9 2 22.22% 3 5
../ext/Socket/t/socketpair.t 0 2 45 70 155.56% 11-45
../lib/CPAN/t/vcmp.t 30 1 3.33% 25
../lib/Tie/File/t/09_gen_rs.t 0 15 ?? ?? % ??
../lib/Unicode/Collate/t/test.t 199 30 15.08% 7 26-27 71-75
81-88 95 101
103-104 106 108-
109 122 124 161
169-172
../lib/sort.t 0 139 119 26 21.85% 107-119
op/alarm.t 4 1 25.00% 4
op/utfhash.t 97 1 1.03% 31
run/fresh_perl.t 91 1 1.10% 32
uni/tr_7jis.t ?? ?? % ??
uni/tr_eucjp.t 29 7424 6 12 200.00% 1-6
uni/tr_sjis.t 29 7424 6 12 200.00% 1-6
56 tests and 467 subtests skipped.
Failed 27/811 test scripts, 96.67% okay. 1383/75399 subtests failed,
98.17% okay.
```
The alarm() test failure is caused by system() apparently blocking alarm(). That is probably a libc bug, and given that SunOS 4.x has been end-of-lifed years ago, don't hold your breath for a fix. In addition to that, don't try anything too Unicode-y, especially with Encode, and you should be fine in SunOS 4.x.
AUTHOR
------
The original was written by Andy Dougherty *[email protected]* drawing heavily on advice from Alan Burlison, Nick Ing-Simmons, Tim Bunce, and many other Solaris users over the years.
Please report any errors, updates, or suggestions to <https://github.com/Perl/perl5/issues>.
perl CPAN::Debug CPAN::Debug
===========
CONTENTS
--------
* [NAME](#NAME)
* [LICENSE](#LICENSE)
NAME
----
CPAN::Debug - internal debugging for CPAN.pm
LICENSE
-------
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl Test2::EventFacet::Meta Test2::EventFacet::Meta
=======================
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
* [METHODS AND FIELDS](#METHODS-AND-FIELDS)
* [SOURCE](#SOURCE)
* [MAINTAINERS](#MAINTAINERS)
* [AUTHORS](#AUTHORS)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Test2::EventFacet::Meta - Facet for meta-data
DESCRIPTION
-----------
This facet can contain any random meta-data that has been attached to the event.
METHODS AND FIELDS
-------------------
Any/all fields and accessors are autovivified into existence. There is no way to know what metadata may be added, so any is allowed.
```
$anything = $meta->{anything}
$anything = $meta->anything()
```
SOURCE
------
The source code repository for Test2 can be found at *http://github.com/Test-More/test-more/*.
MAINTAINERS
-----------
Chad Granum <[email protected]>
AUTHORS
-------
Chad Granum <[email protected]>
COPYRIGHT
---------
Copyright 2020 Chad Granum <[email protected]>.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See *http://dev.perl.org/licenses/*
perl Tie::File Tie::File
=========
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [recsep](#recsep)
+ [autochomp](#autochomp)
+ [mode](#mode)
+ [memory](#memory)
+ [dw\_size](#dw_size)
+ [Option Format](#Option-Format)
* [Public Methods](#Public-Methods)
+ [flock](#flock)
+ [autochomp](#autochomp1)
+ [defer, flush, discard, and autodefer](#defer,-flush,-discard,-and-autodefer)
+ [offset](#offset)
* [Tying to an already-opened filehandle](#Tying-to-an-already-opened-filehandle)
* [Deferred Writing](#Deferred-Writing)
+ [Autodeferring](#Autodeferring)
* [CONCURRENT ACCESS TO FILES](#CONCURRENT-ACCESS-TO-FILES)
* [CAVEATS](#CAVEATS)
* [SUBCLASSING](#SUBCLASSING)
* [WHAT ABOUT DB\_File?](#WHAT-ABOUT-DB_File?)
* [AUTHOR](#AUTHOR)
* [LICENSE](#LICENSE)
* [WARRANTY](#WARRANTY)
* [THANKS](#THANKS)
* [TODO](#TODO)
NAME
----
Tie::File - Access the lines of a disk file via a Perl array
SYNOPSIS
--------
```
use Tie::File;
tie @array, 'Tie::File', filename or die ...;
$array[0] = 'blah'; # first line of the file is now 'blah'
# (line numbering starts at 0)
print $array[42]; # display line 43 of the file
$n_recs = @array; # how many records are in the file?
$#array -= 2; # chop two records off the end
for (@array) {
s/PERL/Perl/g; # Replace PERL with Perl everywhere in the file
}
# These are just like regular push, pop, unshift, shift, and splice
# Except that they modify the file in the way you would expect
push @array, new recs...;
my $r1 = pop @array;
unshift @array, new recs...;
my $r2 = shift @array;
@old_recs = splice @array, 3, 7, new recs...;
untie @array; # all finished
```
DESCRIPTION
-----------
`Tie::File` represents a regular text file as a Perl array. Each element in the array corresponds to a record in the file. The first line of the file is element 0 of the array; the second line is element 1, and so on.
The file is *not* loaded into memory, so this will work even for gigantic files.
Changes to the array are reflected in the file immediately.
Lazy people and beginners may now stop reading the manual.
### `recsep`
What is a 'record'? By default, the meaning is the same as for the `<...>` operator: It's a string terminated by `$/`, which is probably `"\n"`. (Minor exception: on DOS and Win32 systems, a 'record' is a string terminated by `"\r\n"`.) You may change the definition of "record" by supplying the `recsep` option in the `tie` call:
```
tie @array, 'Tie::File', $file, recsep => 'es';
```
This says that records are delimited by the string `es`. If the file contained the following data:
```
Curse these pesky flies!\n
```
then the `@array` would appear to have four elements:
```
"Curse th"
"e p"
"ky fli"
"!\n"
```
An undefined value is not permitted as a record separator. Perl's special "paragraph mode" semantics (à la `$/ = ""`) are not emulated.
Records read from the tied array do not have the record separator string on the end; this is to allow
```
$array[17] .= "extra";
```
to work as expected.
(See ["autochomp"](#autochomp), below.) Records stored into the array will have the record separator string appended before they are written to the file, if they don't have one already. For example, if the record separator string is `"\n"`, then the following two lines do exactly the same thing:
```
$array[17] = "Cherry pie";
$array[17] = "Cherry pie\n";
```
The result is that the contents of line 17 of the file will be replaced with "Cherry pie"; a newline character will separate line 17 from line 18. This means that this code will do nothing:
```
chomp $array[17];
```
Because the `chomp`ed value will have the separator reattached when it is written back to the file. There is no way to create a file whose trailing record separator string is missing.
Inserting records that *contain* the record separator string is not supported by this module. It will probably produce a reasonable result, but what this result will be may change in a future version. Use 'splice' to insert records or to replace one record with several.
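For example (a sketch):
```
# Replace one record with several, or insert records without deleting any
splice @array, 17, 1, "first half", "second half"; # line 17 becomes two lines
splice @array, 5, 0, "a brand new line"; # inserted before the old line 5
```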
### `autochomp`
Normally, array elements have the record separator removed, so that if the file contains the text
```
Gold
Frankincense
Myrrh
```
the tied array will appear to contain `("Gold", "Frankincense", "Myrrh")`. If you set `autochomp` to a false value, the record separator will not be removed. If the file above was tied with
```
tie @gifts, "Tie::File", $gifts, autochomp => 0;
```
then the array `@gifts` would appear to contain `("Gold\n", "Frankincense\n", "Myrrh\n")`, or (on Win32 systems) `("Gold\r\n", "Frankincense\r\n", "Myrrh\r\n")`.
### `mode`
Normally, the specified file will be opened for read and write access, and will be created if it does not exist. (That is, the flags `O_RDWR | O_CREAT` are supplied in the `open` call.) If you want to change this, you may supply alternative flags in the `mode` option. See [Fcntl](fcntl) for a listing of available flags. For example:
```
# open the file if it exists, but fail if it does not exist
use Fcntl 'O_RDWR';
tie @array, 'Tie::File', $file, mode => O_RDWR;
# create the file if it does not exist
use Fcntl 'O_RDWR', 'O_CREAT';
tie @array, 'Tie::File', $file, mode => O_RDWR | O_CREAT;
# open an existing file in read-only mode
use Fcntl 'O_RDONLY';
tie @array, 'Tie::File', $file, mode => O_RDONLY;
```
Opening the data file in write-only or append mode is not supported.
### `memory`
This is an upper limit on the amount of memory that `Tie::File` will consume at any time while managing the file. This is used for two things: managing the *read cache* and managing the *deferred write buffer*.
Records read in from the file are cached, to avoid having to re-read them repeatedly. If you read the same record twice, the first time it will be stored in memory, and the second time it will be fetched from the *read cache*. The amount of data in the read cache will not exceed the value you specified for `memory`. If `Tie::File` wants to cache a new record, but the read cache is full, it will make room by expiring the least-recently visited records from the read cache.
The default memory limit is 2MiB. You can adjust the maximum read cache size by supplying the `memory` option. The argument is the desired cache size, in bytes.
```
# I have a lot of memory, so use a large cache to speed up access
tie @array, 'Tie::File', $file, memory => 20_000_000;
```
Setting the memory limit to 0 will inhibit caching; records will be fetched from disk every time you examine them.
The `memory` value is not an absolute or exact limit on the memory used. `Tie::File` objects contain some structures besides the read cache and the deferred write buffer, whose sizes are not charged against `memory`.
The cache itself consumes about 310 bytes per cached record, so if your file has many short records, you may want to decrease the cache memory limit, or else the cache overhead may exceed the size of the cached data.
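A back-of-the-envelope sizing sketch, using the roughly 310 bytes per record figure quoted above (the record count, record length, and `$file` are made-up assumptions):
```
my $recs = 50_000; # records you expect to revisit
my $avg_len = 20; # average record length in bytes
my $budget = $recs * ($avg_len + 310); # 16_500_000 bytes, mostly cache overhead
tie my @lines, 'Tie::File', $file, memory => $budget;
```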
### `dw_size`
(This is an advanced feature. Skip this section on first reading.)
If you use deferred writing (See ["Deferred Writing"](#Deferred-Writing), below) then data you write into the array will not be written directly to the file; instead, it will be saved in the *deferred write buffer* to be written out later. Data in the deferred write buffer is also charged against the memory limit you set with the `memory` option.
You may set the `dw_size` option to limit the amount of data that can be saved in the deferred write buffer. This limit may not exceed the total memory limit. For example, if you set `dw_size` to 1000 and `memory` to 2500, that means that no more than 1000 bytes of deferred writes will be saved up. The space available for the read cache will vary, but it will always be at least 1500 bytes (if the deferred write buffer is full) and it could grow as large as 2500 bytes (if the deferred write buffer is empty.)
If you don't specify a `dw_size`, it defaults to the entire memory limit.
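A sketch matching the numbers above (`$file` as before):
```
# At most 1000 bytes of deferred writes within a 2500-byte total memory budget
tie @array, 'Tie::File', $file, memory => 2500, dw_size => 1000;
```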
###
Option Format
`-mode` is a synonym for `mode`. `-recsep` is a synonym for `recsep`. `-memory` is a synonym for `memory`. You get the idea.
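For example, these two calls are equivalent:
```
tie @array, 'Tie::File', $file, recsep => "\n", memory => 1_000_000;
# ...is the same as...
tie @array, 'Tie::File', $file, -recsep => "\n", -memory => 1_000_000;
```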
Public Methods
---------------
The `tie` call returns an object, say `$o`. You may call
```
$rec = $o->FETCH($n);
$o->STORE($n, $rec);
```
to fetch or store the record at line `$n`, respectively; similarly the other tied array methods. (See <perltie> for details.) You may also call the following methods on this object:
### `flock`
```
$o->flock(MODE)
```
will lock the tied file. `MODE` has the same meaning as the second argument to the Perl built-in `flock` function; for example `LOCK_SH` or `LOCK_EX | LOCK_NB`. (These constants are provided by the `use Fcntl ':flock'` declaration.)
`MODE` is optional; the default is `LOCK_EX`.
`Tie::File` maintains an internal table of the byte offset of each record it has seen in the file.
When you use `flock` to lock the file, `Tie::File` assumes that the read cache is no longer trustworthy, because another process might have modified the file since the last time it was read. Therefore, a successful call to `flock` discards the contents of the read cache and the internal record offset table.
`Tie::File` promises that the following sequence of operations will be safe:
```
my $o = tie @array, "Tie::File", $filename;
$o->flock;
```
In particular, `Tie::File` will *not* read or write the file during the `tie` call. (Exception: Using `mode => O_TRUNC` will, of course, erase the file during the `tie` call. If you want to do this safely, then open the file without `O_TRUNC`, lock the file, and use `@array = ()`.)
The best way to unlock a file is to discard the object and untie the array. It is probably unsafe to unlock the file without also untying it, because if you do, changes may remain unwritten inside the object. That is why there is no shortcut for unlocking. If you really want to unlock the file prematurely, you know what to do; if you don't know what to do, then don't do it.
All the usual warnings about file locking apply here. In particular, note that file locking in Perl is **advisory**, which means that holding a lock will not prevent anyone else from reading, writing, or erasing the file; it only prevents them from getting another lock at the same time. Locks are analogous to green traffic lights: If you have a green light, that does not prevent the idiot coming the other way from plowing into you sideways; it merely guarantees to you that the idiot does not also have a green light at the same time.
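Putting the pieces together, a minimal shared-lock reader might look like this (a sketch; `$filename` is assumed):
```
use Fcntl ':flock';
my $o = tie my @array, 'Tie::File', $filename or die "tie failed";
$o->flock(LOCK_SH); # a shared lock is enough if we only read
print "first record: $array[0]\n";
```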
### `autochomp`
```
my $old_value = $o->autochomp(0); # disable autochomp option
my $old_value = $o->autochomp(1); # enable autochomp option
my $ac = $o->autochomp(); # recover current value
```
See ["autochomp"](#autochomp), above.
###
`defer`, `flush`, `discard`, and `autodefer`
See ["Deferred Writing"](#Deferred-Writing), below.
### `offset`
```
$off = $o->offset($n);
```
This method returns the byte offset of the start of the `$n`th record in the file. If there is no such record, it returns an undefined value.
Tying to an already-opened filehandle
--------------------------------------
If `$fh` is a filehandle, such as is returned by `IO::File` or one of the other `IO` modules, you may use:
```
tie @array, 'Tie::File', $fh, ...;
```
Similarly if you opened that handle `FH` with regular `open` or `sysopen`, you may use:
```
tie @array, 'Tie::File', \*FH, ...;
```
Handles that were opened write-only won't work. Handles that were opened read-only will work as long as you don't try to modify the array. Handles must be attached to seekable sources of data---that means no pipes or sockets. If `Tie::File` can detect that you supplied a non-seekable handle, the `tie` call will throw an exception. (On Unix systems, it can detect this.)
Note that Tie::File will only close any filehandles that it opened internally. If you passed it a filehandle as above, you "own" the filehandle, and are responsible for closing it after you have untied the @array.
Tie::File calls `binmode` on filehandles that it opens internally, but not on filehandles passed in by the user. For consistency, especially if using the tied files cross-platform, you may wish to call `binmode` on the filehandle prior to tying the file.
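A sketch of that approach (`$filename` is assumed):
```
open my $fh, '+<', $filename or die "open $filename: $!";
binmode $fh; # normalize the handle yourself before tying it
tie my @array, 'Tie::File', $fh or die "tie failed";
# ... work with @array ...
untie @array;
close $fh; # you own the handle, so you close it
```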
Deferred Writing
-----------------
(This is an advanced feature. Skip this section on first reading.)
Normally, modifying a `Tie::File` array writes to the underlying file immediately. Every assignment like `$a[3] = ...` rewrites as much of the file as is necessary; typically, everything from line 3 through the end will need to be rewritten. This is the simplest and most transparent behavior. Performance even for large files is reasonably good.
However, under some circumstances, this behavior may be excessively slow. For example, suppose you have a million-record file, and you want to do:
```
for (@FILE) {
$_ = "> $_";
}
```
The first time through the loop, you will rewrite the entire file, from line 0 through the end. The second time through the loop, you will rewrite the entire file from line 1 through the end. The third time through the loop, you will rewrite the entire file from line 2 to the end. And so on.
If the performance in such cases is unacceptable, you may defer the actual writing, and then have it done all at once. The following loop will perform much better for large files:
```
(tied @a)->defer;
for (@a) {
$_ = "> $_";
}
(tied @a)->flush;
```
If `Tie::File`'s memory limit is large enough, all the writing will done in memory. Then, when you call `->flush`, the entire file will be rewritten in a single pass.
(Actually, the preceding discussion is something of a fib. You don't need to enable deferred writing to get good performance for this common case, because `Tie::File` will do it for you automatically unless you specifically tell it not to. See ["Autodeferring"](#Autodeferring), below.)
Calling `->flush` returns the array to immediate-write mode. If you wish to discard the deferred writes, you may call `->discard` instead of `->flush`. Note that in some cases, some of the data will have been written already, and it will be too late for `->discard` to discard all the changes. Support for `->discard` may be withdrawn in a future version of `Tie::File`.
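A minimal sketch of discarding deferred changes, bearing in mind the caveat above that some of them may already have reached the file:

```
my $t = tied @a;
$t->defer;
$a[0] = "tentative first line";
$a[1] = "tentative second line";
$t->discard;    # drop whatever is still buffered; the array returns to immediate-write mode
```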
Deferred writes are cached in memory up to the limit specified by the `dw_size` option (see above). If the deferred-write buffer is full and you try to write still more deferred data, the buffer will be flushed. All buffered data will be written immediately, the buffer will be emptied, and the now-empty space will be used for future deferred writes.
If the deferred-write buffer isn't yet full, but the total size of the buffer and the read cache would exceed the `memory` limit, the oldest records will be expired from the read cache until the total size is under the limit.
`push`, `pop`, `shift`, `unshift`, and `splice` cannot be deferred. When you perform one of these operations, any deferred data is written to the file and the operation is performed immediately. This may change in a future version.
If you resize the array with deferred writing enabled, the file will be resized immediately, but deferred records will not be written. This has a surprising consequence: `@a = (...)` erases the file immediately, but the writing of the actual data is deferred. This might be a bug. If it is a bug, it will be fixed in a future version.
### Autodeferring
`Tie::File` tries to guess when deferred writing might be helpful, and to turn it on and off automatically.
```
for (@a) {
$_ = "> $_";
}
```
In this example, only the first two assignments will be done immediately; after this, all the changes to the file will be deferred up to the user-specified memory limit.
You should usually be able to ignore this and just use the module without thinking about deferring. However, special applications may require fine control over which writes are deferred, or may require that all writes be immediate. To disable the autodeferment feature, use
```
(tied @o)->autodefer(0);
```
or
```
tie @array, 'Tie::File', $file, autodefer => 0;
```
Similarly, `->autodefer(1)` re-enables autodeferment, and `->autodefer()` recovers the current value of the autodefer setting.
CONCURRENT ACCESS TO FILES
---------------------------
Caching and deferred writing are inappropriate if you want the same file to be accessed simultaneously from more than one process. Other optimizations performed internally by this module are also incompatible with concurrent access. A future version of this module will support a `concurrent => 1` option that enables safe concurrent access.
Previous versions of this documentation suggested using `memory => 0` for safe concurrent access. This was mistaken. Tie::File will not support safe concurrent access before version 0.96.
CAVEATS
-------
(That's Latin for 'warnings'.)
* Reasonable effort was made to make this module efficient. Nevertheless, changing the size of a record in the middle of a large file will always be fairly slow, because everything after the new record must be moved.
* The behavior of tied arrays is not precisely the same as for regular arrays. For example:
```
# This DOES print "How unusual!"
undef $a[10]; print "How unusual!\n" if defined $a[10];
```
`undef`-ing a `Tie::File` array element just blanks out the corresponding record in the file. When you read it back again, you'll get the empty string, so the supposedly-`undef`'ed value will be defined. Similarly, if you have `autochomp` disabled, then
```
# This DOES print "How unusual!" if 'autochomp' is disabled
undef $a[10];
print "How unusual!\n" if $a[10];
```
This is because, when `autochomp` is disabled, `$a[10]` will read back as `"\n"` (or whatever the record separator string is).
There are other minor differences, particularly regarding `exists` and `delete`, but in general, the correspondence is extremely close.
* I have supposed that since this module is concerned with file I/O, almost all normal use of it will be heavily I/O bound. This means that the time to maintain complicated data structures inside the module will be dominated by the time to actually perform the I/O. When there was an opportunity to spend CPU time to avoid doing I/O, I usually tried to take it.
* You might be tempted to think that deferred writing is like transactions, with `flush` as `commit` and `discard` as `rollback`, but it isn't, so don't.
* There is a large memory overhead for each record offset and for each cache entry: about 310 bytes per cached data record, and about 21 bytes per offset table entry.
The per-record overhead will limit the maximum number of records you can access per file. Note that *accessing* the length of the array via `$x = scalar @tied_file` accesses **all** records and stores their offsets. The same for `foreach (@tied_file)`, even if you exit the loop early.
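As a rough illustration of that per-record cost, using the 21-byte estimate above, merely taking `scalar @tied_file` on a ten-million-record file would build an offset table of roughly:

```
10,000,000 offsets x 21 bytes/offset = about 210 MB
```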
SUBCLASSING
-----------
This version promises absolutely nothing about the internals, which may change without notice. A future version of the module will have a well-defined and stable subclassing API.
WHAT ABOUT `DB_File`?
----------------------
People sometimes point out that [DB\_File](db_file) will do something similar, and ask why the `Tie::File` module is necessary.
There are a number of reasons that you might prefer `Tie::File`. A list is available at `<http://perl.plover.com/TieFile/why-not-DB_File>`.
AUTHOR
------
Mark Jason Dominus
To contact the author, send email to: `[email protected]`
To receive an announcement whenever a new version of this module is released, send a blank email message to `[email protected]`.
The most recent version of this module, including documentation and any news of importance, will be available at
```
http://perl.plover.com/TieFile/
```
LICENSE
-------
`Tie::File` version 0.96 is copyright (C) 2003 Mark Jason Dominus.
This library is free software; you may redistribute it and/or modify it under the same terms as Perl itself.
These terms are your choice of any of (1) the Perl Artistic Licence, or (2) version 2 of the GNU General Public License as published by the Free Software Foundation, or (3) any later version of the GNU General Public License.
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this library program; it should be in the file `COPYING`. If not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
For licensing inquiries, contact the author at:
```
Mark Jason Dominus
255 S. Warnock St.
Philadelphia, PA 19107
```
WARRANTY
--------
`Tie::File` version 0.98 comes with ABSOLUTELY NO WARRANTY. For details, see the license.
THANKS
------
Gigantic thanks to Jarkko Hietaniemi, for agreeing to put this in the core when I hadn't written it yet, and for generally being helpful, supportive, and competent. (Usually the rule is "choose any one.") Also big thanks to Abhijit Menon-Sen for all of the same things.
Special thanks to Craig Berry and Peter Prymmer (for VMS portability help), Randy Kobes (for Win32 portability help), Clinton Pierce and Autrijus Tang (for heroic eleventh-hour Win32 testing above and beyond the call of duty), Michael G Schwern (for testing advice), and the rest of the CPAN testers (for testing generally).
Special thanks to Tels for suggesting several speed and memory optimizations.
Additional thanks to: Edward Avis / Mattia Barbon / Tom Christiansen / Gerrit Haase / Gurusamy Sarathy / Jarkko Hietaniemi (again) / Nikola Knezevic / John Kominetz / Nick Ing-Simmons / Tassilo von Parseval / H. Dieter Pearcey / Slaven Rezic / Eric Roode / Peter Scott / Peter Somu / Autrijus Tang (again) / Tels (again) / Juerd Waalboer / Todd Rinaldo
TODO
----
More tests. (Stuff I didn't think of yet.)
Paragraph mode?
Fixed-length mode. Leave-blanks mode.
Maybe an autolocking mode?
For many common uses of the module, the read cache is a liability. For example, a program that inserts a single record, or that scans the file once, will have a cache hit rate of zero. This suggests a major optimization: The cache should be initially disabled. Here's a hybrid approach: Initially, the cache is disabled, but the cache code maintains statistics about how high the hit rate would be \*if\* it were enabled. When it sees the hit rate get high enough, it enables itself. The STAT comments in this code are the beginning of an implementation of this.
Record locking with fcntl()? Then the module might support an undo log and get real transactions. What a tour de force that would be.
Keeping track of the highest cached record. This would allow reads-in-a-row to skip the cache lookup faster (if reading from 1..N with empty cache at start, the last cached value will be always N-1).
More tests.
| programming_docs |
perl ExtUtils::MakeMaker ExtUtils::MakeMaker
===================
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [How To Write A Makefile.PL](#How-To-Write-A-Makefile.PL)
+ [Default Makefile Behaviour](#Default-Makefile-Behaviour)
+ [make test](#make-test)
+ [make testdb](#make-testdb)
+ [make install](#make-install)
+ [INSTALL\_BASE](#INSTALL_BASE)
+ [PREFIX and LIB attribute](#PREFIX-and-LIB-attribute)
+ [AFS users](#AFS-users)
+ [Static Linking of a new Perl Binary](#Static-Linking-of-a-new-Perl-Binary)
+ [Determination of Perl Library and Installation Locations](#Determination-of-Perl-Library-and-Installation-Locations)
+ [Which architecture dependent directory?](#Which-architecture-dependent-directory?)
+ [Using Attributes and Parameters](#Using-Attributes-and-Parameters)
+ [Additional lowercase attributes](#Additional-lowercase-attributes)
+ [Overriding MakeMaker Methods](#Overriding-MakeMaker-Methods)
+ [The End Of Cargo Cult Programming](#The-End-Of-Cargo-Cult-Programming)
+ [Hintsfile support](#Hintsfile-support)
+ [Distribution Support](#Distribution-Support)
+ [Module Meta-Data (META and MYMETA)](#Module-Meta-Data-(META-and-MYMETA))
+ [Disabling an extension](#Disabling-an-extension)
+ [Other Handy Functions](#Other-Handy-Functions)
+ [Supported versions of Perl](#Supported-versions-of-Perl)
* [ENVIRONMENT](#ENVIRONMENT)
* [SEE ALSO](#SEE-ALSO)
* [AUTHORS](#AUTHORS)
* [LICENSE](#LICENSE1)
NAME
----
ExtUtils::MakeMaker - Create a module Makefile
SYNOPSIS
--------
```
use ExtUtils::MakeMaker;
WriteMakefile(
NAME => "Foo::Bar",
VERSION_FROM => "lib/Foo/Bar.pm",
);
```
DESCRIPTION
-----------
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
As there are various Make programs with incompatible syntax, which in turn invoke operating-system shells that also differ in syntax, it is important for users of this module to know which flavour of Make a Makefile has been written for, so they use the correct one and don't face the possibly bewildering errors that result from using the wrong one.
On POSIX systems, that program will likely be GNU Make; on Microsoft Windows, it will be either Microsoft NMake, DMake or GNU Make. See the section on the ["MAKE"](#MAKE) parameter for details.
ExtUtils::MakeMaker (EUMM) is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
All inputs to WriteMakefile are Unicode characters, not just octets. EUMM seeks to handle all of these correctly. It is currently still not possible to portably use Unicode characters in module names, because this requires Perl to handle Unicode filenames, which is not yet the case on Windows.
See <ExtUtils::MakeMaker::FAQ> for details of the design and usage.
###
How To Write A Makefile.PL
See <ExtUtils::MakeMaker::Tutorial>.
The long answer is the rest of the manpage :-)
###
Default Makefile Behaviour
The generated Makefile enables the user of the extension to invoke
```
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
```
The Makefile to be produced may be altered by adding arguments of the form `KEY=VALUE`. E.g.
```
perl Makefile.PL INSTALL_BASE=~
```
Other interesting targets in the generated Makefile are
```
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
```
###
make test
MakeMaker checks for the existence of a file named *test.pl* in the current directory, and if it exists it executes the script with the proper set of perl `-I` options.
MakeMaker also checks for any files matching glob("t/\*.t"). It will execute all matching files in alphabetical order via the <Test::Harness> module with the `-I` switches set correctly.
You can also organize your tests within subdirectories in the *t/* directory. To do so, use the *test* directive in your *Makefile.PL*. For example, if you had tests in:
```
t/foo
t/foo/bar
```
You could tell make to run tests in both of those directories with the following directives:
```
test => {TESTS => 't/*/*.t t/*/*/*.t'}
test => {TESTS => 't/foo/*.t t/foo/bar/*.t'}
```
The first will run all test files in the first- and second-level subdirectories of *t/*. The second will run tests in only the *t/foo* and *t/foo/bar* directories.
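In context, the *test* directive is just another argument to WriteMakefile(); a minimal sketch (the module name and test layout are illustrative):

```
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME => "Foo::Bar",
    VERSION_FROM => "lib/Foo/Bar.pm",
    # run top-level tests plus those under t/foo and t/foo/bar
    test => { TESTS => 't/*.t t/foo/*.t t/foo/bar/*.t' },
);
```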
If you'd like to see the raw output of your tests, set the `TEST_VERBOSE` variable to true.
```
make test TEST_VERBOSE=1
```
If you want to run particular test files, set the `TEST_FILES` variable. It is possible to use globbing with this mechanism.
```
make test TEST_FILES='t/foobar.t t/dagobah*.t'
```
Windows users who are using `nmake` should note that due to a bug in `nmake`, when specifying `TEST_FILES` you must use back-slashes instead of forward-slashes.
```
nmake test TEST_FILES='t\foobar.t t\dagobah*.t'
```
###
make testdb
A useful variation of the above is the target `testdb`. It runs the test under the Perl debugger (see <perldebug>). If the file *test.pl* exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the `TEST_FILE` variable thusly:
```
make testdb TEST_FILE=t/mytest.t
```
By default the debugger is called using `-d` option to perl. If you want to specify some other option, set the `TESTDB_SW` variable:
```
make testdb TESTDB_SW=-Dx
```
###
make install
make alone puts all relevant files into directories that are named by the macros INST\_LIB, INST\_ARCHLIB, INST\_SCRIPT, INST\_MAN1DIR and INST\_MAN3DIR. All these default to something below ./blib if you are *not* building below the perl source directory. If you *are* building below the perl source, INST\_LIB and INST\_ARCHLIB default to ../../lib, and INST\_SCRIPT is not defined.
The *install* target of the generated Makefile copies the files found below each of the INST\_\* directories to their INSTALL\* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
```
INSTALLDIRS set to
perl site vendor
PERLPREFIX SITEPREFIX VENDORPREFIX
INST_ARCHLIB INSTALLARCHLIB INSTALLSITEARCH INSTALLVENDORARCH
INST_LIB INSTALLPRIVLIB INSTALLSITELIB INSTALLVENDORLIB
INST_BIN INSTALLBIN INSTALLSITEBIN INSTALLVENDORBIN
INST_SCRIPT INSTALLSCRIPT INSTALLSITESCRIPT INSTALLVENDORSCRIPT
INST_MAN1DIR INSTALLMAN1DIR INSTALLSITEMAN1DIR INSTALLVENDORMAN1DIR
INST_MAN3DIR INSTALLMAN3DIR INSTALLSITEMAN3DIR INSTALLVENDORMAN3DIR
```
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
```
perl '-V:install.*'
```
And to check the sequence in which the library directories are searched by perl, run
```
perl -le 'print join $/, @INC'
```
Sometimes older versions of the module you're installing live in other directories in @INC. Because Perl loads the first version of a module it finds, not the newest, you might accidentally get one of these older versions even after installing a brand new version. To delete *all other versions of the module you're installing* (not simply older ones) set the `UNINST` variable.
```
make install UNINST=1
```
### INSTALL\_BASE
INSTALL\_BASE can be passed into Makefile.PL to change where your module will be installed. INSTALL\_BASE is more like what everyone else calls "prefix" than PREFIX is.
To have everything installed in your home directory, do the following.
```
# Unix users, INSTALL_BASE=~ works fine
perl Makefile.PL INSTALL_BASE=/path/to/your/home/dir
```
Like PREFIX, it sets several INSTALL\* attributes at once. Unlike PREFIX it is easy to predict where the module will end up. The installation pattern looks like this:
```
INSTALLARCHLIB INSTALL_BASE/lib/perl5/$Config{archname}
INSTALLPRIVLIB INSTALL_BASE/lib/perl5
INSTALLBIN INSTALL_BASE/bin
INSTALLSCRIPT INSTALL_BASE/bin
INSTALLMAN1DIR INSTALL_BASE/man/man1
INSTALLMAN3DIR INSTALL_BASE/man/man3
```
INSTALL\_BASE in MakeMaker and `--install_base` in Module::Build (as of 0.28) install to the same location. If you want MakeMaker and Module::Build to install to the same location simply set INSTALL\_BASE and `--install_base` to the same location.
INSTALL\_BASE was added in 6.31.
###
PREFIX and LIB attribute
PREFIX and LIB can be used to set several INSTALL\* attributes in one go. Here's an example for installing into your home directory.
```
# Unix users, PREFIX=~ works fine
perl Makefile.PL PREFIX=/path/to/your/home/dir
```
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib). How the exact location is determined is complicated and depends on how your Perl was configured. INSTALL\_BASE works more like what other build systems call "prefix" than PREFIX and we recommend you use that instead.
Another way to specify many INSTALL directories with a single parameter is LIB.
```
perl Makefile.PL LIB=~/lib
```
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note, that in both cases the tilde expansion is done by MakeMaker, not by perl by default, nor by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL\* arguments are resolved so that:
* setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
* without LIB, setting PREFIX replaces the initial `$Config{prefix}` part of those INSTALL\* arguments, even if the latter are explicitly set (but are set to still start with `$Config{prefix}`).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
```
perl Makefile.PL;
make;
make test
make install
```
make install by default writes some documentation of what has been done into the file `$(INSTALLARCHLIB)/perllocal.pod`. This feature can be bypassed by calling make pure\_install.
###
AFS users
will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
```
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
```
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
###
Static Linking of a new Perl Binary
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
```
make perl
```
That produces a new perl binary in the current directory, with all extensions that can be found in INST\_ARCHLIB, SITELIBEXP, and PERL\_ARCHLIB linked in. To do that, MakeMaker writes a new Makefile; on UNIX, this is called *Makefile.aperl* (the name may be system dependent). If you want to force the creation of a new perl, it is recommended that you delete this *Makefile.aperl*, so the directories are searched through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
```
make inst_perl
```
To produce a perl binary with a different name than `perl`, either say
```
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
```
or say
```
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
```
In any case you will be prompted with the correct invocation of the `inst_perl` target that installs the new binary into INSTALLBIN.
make inst\_perl by default writes some documentation of what has been done into the file `$(INSTALLARCHLIB)/perllocal.pod`. This can be bypassed by calling make pure\_inst\_perl.
Warning: the inst\_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
```
perl Makefile.PL LINKTYPE=static # recommended
```
or
```
make LINKTYPE=static # works on most systems
```
###
Determination of Perl Library and Installation Locations
MakeMaker needs to know, or to guess, where certain things are located. Especially INST\_LIB and INST\_ARCHLIB (where to put the files during the make(1) run), PERL\_LIB and PERL\_ARCHLIB (where to read existing modules from), and PERL\_INC (header files and `libperl*.*`).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the `ext/` directory of the perl source then MakeMaker will set PERL\_SRC automatically (e.g., `../..`). If PERL\_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
```
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
```
If an extension is being built away from the perl source then MakeMaker will leave PERL\_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
```
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
```
If perl has not yet been installed then PERL\_SRC can be defined on the command line as shown in the previous section.
###
Which architecture dependent directory?
If you don't want to keep the defaults for the INSTALL\* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config, otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth mentioning that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
###
Using Attributes and Parameters
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line. Attributes that became available with later versions of MakeMaker are indicated.
In order to maintain portability of attributes with older versions of MakeMaker you may want to use <App::EUMM::Upgrade> with your `Makefile.PL`.
ABSTRACT One line description of the module. Will be included in PPD file.
ABSTRACT\_FROM Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.\*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
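For example, with `ABSTRACT_FROM => 'lib/Foo/Bar.pm'` (a hypothetical file), a NAME section beginning like this would produce the abstract "Frobnicates widgets":

```
=head1 NAME

Foo::Bar - Frobnicates widgets
```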
AUTHOR Array of strings containing name (and email address) of package author(s). Is used in CPAN Meta files (META.yml or META.json) and PPD (Perl Package Description) files for PPM (Perl Package Manager).
BINARY\_LOCATION Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
```
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
```
builds a PPD package that references a binary of the `Agent` package, located in the `x86` directory relative to the PPD itself.
BUILD\_REQUIRES Available in version 6.55\_03 and above.
A hash of modules that are needed to build your module but not run it.
This will go into the `build_requires` field of your *META.yml* and the `build` of the `prereqs` field of your *META.json*.
Defaults to `{ "ExtUtils::MakeMaker" => 0 }` if this attribute is not specified.
The format is the same as PREREQ\_PM.
C Ref to array of \*.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
CCFLAGS String that will be included in the compiler call command line between the arguments INC and OPTIMIZE. Note that setting this will overwrite its default value (`$Config::Config{ccflags}`); to preserve that, include the default value directly, e.g.:
```
CCFLAGS => "$Config::Config{ccflags} ..."
```
CONFIG Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags cpprun dlext dlsrc ld lddlflags ldflags libc lib\_ext obj\_ext ranlib sitelibexp sitearchexp so
CONFIGURE CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
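A minimal sketch; the probe and the library name are purely illustrative:

```
CONFIGURE => sub {
    my %extra;
    # Hypothetical probe: link against libfoo only if its header is present.
    $extra{LIBS} = ['-lfoo'] if -e '/usr/include/foo.h';
    return \%extra;
},
```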
CONFIGURE\_REQUIRES Available in version 6.52 and above.
A hash of modules that are required to run Makefile.PL itself, but not to run your distribution.
This will go into the `configure_requires` field of your *META.yml* and the `configure` of the `prereqs` field of your *META.json*.
Defaults to `{ "ExtUtils::MakeMaker" => 0 }` if this attribute is not specified.
The format is the same as PREREQ\_PM.
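For example, to require at least the MakeMaker release that introduced this attribute:

```
CONFIGURE_REQUIRES => {
    "ExtUtils::MakeMaker" => "6.52",
},
```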
DEFINE Something like `"-DHAVE_UNISTD_H"`
DESTDIR This is the root directory into which the code will be installed. It *prepends itself to the normal prefix*. For example, if your code would normally go into */usr/local/lib/perl* you could set DESTDIR=~/tmp/ and installation would go into *~/tmp/usr/local/lib/perl*.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. *~/tmp/* not *~/tmp*.
DIR Ref to array of subdirectories containing Makefile.PLs e.g. ['sdbm'] in ext/SDBM\_File
DISTNAME A safe filename for the package.
Defaults to NAME below but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
DISTVNAME Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION\_SYM may be used in place of VERSION.
DLEXT Specifies the extension of the module's loadable object. For example:
```
DLEXT => 'unusual_ext', # Default value is $Config{so}
```
NOTE: When using this option to alter the extension of a module's loadable object, it is also necessary that the module's pm file specifies the same change:
```
local $DynaLoader::dl_dlext = 'unusual_ext';
```
DL\_FUNCS Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
```
{"$(NAME)" => ["boot_$(NAME)" ] }
```
e.g.
```
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
```
Please see the <ExtUtils::Mksymlists> documentation for more information about the DL\_FUNCS, DL\_VARS and FUNCLIST attributes.
DL\_VARS Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo\_version Foo\_numstreams Foo\_tree ) ])
EXCLUDE\_EXT Array of extension names to exclude when doing a static build. This is ignored if INCLUDE\_EXT is present. Consult INCLUDE\_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE\_EXT='Socket Safe'
EXE\_FILES Ref to array of executable files. The files will be copied to the INST\_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
FIRST\_MAKEFILE The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP\_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
FULLPERL Perl binary able to run this extension, load XS modules, etc...
FULLPERLRUN Like PERLRUN, except it uses FULLPERL.
FULLPERLRUNINST Like PERLRUNINST, except it uses FULLPERL.
FUNCLIST This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
H Ref to array of \*.h file names. Similar to C.
IMPORTS This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
INC Include file dirs eg: `"-I/usr/5include -I/path/to/inc"`
INCLUDE\_EXT Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE\_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE\_EXT. If the INCLUDE\_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE\_EXT='POSIX Socket Devel::Peek'
INSTALLARCHLIB Used by 'make install', which copies files from INST\_ARCHLIB to this directory if INSTALLDIRS is set to perl.
INSTALLBIN Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
INSTALLDIRS Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
INSTALLMAN1DIR INSTALLMAN3DIR These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman\*dir}.
If set to 'none', no man pages will be installed.
INSTALLPRIVLIB Used by 'make install', which copies files from INST\_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
INSTALLSCRIPT Available in version 6.30\_02 and above.
Used by 'make install' which copies files from INST\_SCRIPT to this directory if INSTALLDIRS=perl.
INSTALLSITEARCH Used by 'make install', which copies files from INST\_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
INSTALLSITEBIN Used by 'make install', which copies files from INST\_BIN to this directory if INSTALLDIRS is set to site (default).
INSTALLSITELIB Used by 'make install', which copies files from INST\_LIB to this directory if INSTALLDIRS is set to site (default).
INSTALLSITEMAN1DIR INSTALLSITEMAN3DIR These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN\*EXT).
If set to 'none', no man pages will be installed.
INSTALLSITESCRIPT Used by 'make install' which copies files from INST\_SCRIPT to this directory if INSTALLDIRS is set to site (default).
INSTALLVENDORARCH Used by 'make install', which copies files from INST\_ARCHLIB to this directory if INSTALLDIRS is set to vendor. Note that if you do not set this, the value of INSTALLVENDORLIB will be used, which is probably not what you want.
INSTALLVENDORBIN Used by 'make install', which copies files from INST\_BIN to this directory if INSTALLDIRS is set to vendor.
INSTALLVENDORLIB Used by 'make install', which copies files from INST\_LIB to this directory if INSTALLDIRS is set to vendor.
INSTALLVENDORMAN1DIR INSTALLVENDORMAN3DIR These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN\*EXT).
If set to 'none', no man pages will be installed.
INSTALLVENDORSCRIPT Available in version 6.30\_02 and above.
Used by 'make install' which copies files from INST\_SCRIPT to this directory if INSTALLDIRS is set to vendor.
INST\_ARCHLIB Same as INST\_LIB for architecture dependent files.
INST\_BIN Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
INST\_LIB Directory where we put library files of this extension while building it.
INST\_MAN1DIR Directory to hold the man pages at 'make' time
INST\_MAN3DIR Directory to hold the man pages at 'make' time
INST\_SCRIPT Directory where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST\_SCRIPT to INSTALLSCRIPT.
LD Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
LDDLFLAGS Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See ["lddlflags" in Config](config#lddlflags))
Defaults to $Config{lddlflags}.
LDFROM Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic\_lib below for how to specify ld flags)
LIB LIB should only be set at `perl Makefile.PL` time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless of any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
LIBPERL\_A The filename of the perl library that will be used together with this extension. Defaults to libperl.a.
LIBS An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
```
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
```
Mind that each element of the array contains a complete set of arguments for the ld command, and that the elements are tried as alternatives (only the first one that works is used), not combined. So do not specify
```
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
```
See ODBM\_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
```
'LIBS' => "-ltcl -ltk -lX11"
```
MakeMaker will turn it into an array with one element.
LICENSE Available in version 6.31 and above.
The licensing terms of your distribution. Generally it's "perl\_5" for the same license as Perl itself.
See <CPAN::Meta::Spec> for the list of options.
Defaults to "unknown".
LINKTYPE 'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
MAGICXS Available in version 6.8305 and above.
When this is set to `1`, `OBJECT` will be automagically derived from `O_FILES`.
MAKE Available in version 6.30\_01 and above.
Variant of make you intend to run the generated Makefile with. This parameter lets Makefile.PL know what make quirks to account for when generating the Makefile.
MakeMaker also honors the MAKE environment variable. This parameter takes precedence.
Currently the only significant values are 'dmake' and 'nmake' for Windows users, instructing MakeMaker to generate a Makefile in the flavour of DMake ("Dennis Vadura's Make") or Microsoft NMake respectively.
Defaults to $Config{make}, which may go looking for a Make program in your environment.
How are you supposed to know what flavour of Make a Makefile has been generated for if you didn't specify a value explicitly? Search the generated Makefile for the definition of the MAKE variable, which is used to recursively invoke the Make utility. That will tell you what Make you're supposed to invoke the Makefile with.
MAKEAPERL Boolean which tells MakeMaker that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
MAKEFILE\_OLD When 'make clean' or similar is run, the $(FIRST\_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST\_MAKEFILE).old or $(FIRST\_MAKEFILE)\_old on VMS.
MAN1PODS Hashref of pod-containing files. MakeMaker will default this to all EXE\_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
This hash should map POD files (or scripts containing POD) to the man file names under the `blib/man1/` directory, as in the following example:
```
MAN1PODS => {
'doc/command.pod' => 'blib/man1/command.1',
'scripts/script.pl' => 'blib/man1/script.1',
}
```
MAN3PODS Hashref that assigns to \*.pm and \*.pod files the files into which the manpages are to be written. MakeMaker parses all \*.pod and \*.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during `make` and will be installed during `make install`.
Example similar to MAN1PODS.
MAP\_TARGET If it is intended that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl
META\_ADD META\_MERGE Available in version 6.46 and above.
A hashref of items to add to the CPAN Meta file (*META.yml* or *META.json*).
They differ in how they behave if they have the same key as the default metadata. META\_ADD will override the default value with its own. META\_MERGE will merge its value with the default.
Unless you want to override the defaults, prefer META\_MERGE so as to get the advantage of any future defaults.
Where prereqs are concerned, if META\_MERGE is used, prerequisites are merged with their counterpart `WriteMakefile()` argument (PREREQ\_PM is merged into `{prereqs}{runtime}{requires}`, BUILD\_REQUIRES into `{prereqs}{build}{requires}`, CONFIGURE\_REQUIRES into `{prereqs}{configure}{requires}`, and TEST\_REQUIRES into `{prereqs}{test}{requires}`). When prereqs are specified with META\_ADD, the only prerequisites added to the file come from the metadata, not `WriteMakefile()` arguments.
Note that these configuration options are only used for generating *META.yml* and *META.json* -- they are NOT used for *MYMETA.yml* and *MYMETA.json*. Therefore data in these fields should NOT be used for dynamic (user-side) configuration.
By default CPAN Meta specification `1.4` is used. In order to use CPAN Meta specification `2.0`, indicate with `meta-spec` the version you want to use.
```
META_MERGE => {
"meta-spec" => { version => 2 },
resources => {
repository => {
type => 'git',
url => 'git://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.git',
web => 'https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker',
},
},
},
```
MIN\_PERL\_VERSION Available in version 6.48 and above.
The minimum required version of Perl for this distribution.
Either the 5.006001 or the 5.6.1 format is acceptable.
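For example:

```
MIN_PERL_VERSION => '5.008001', # equivalently '5.8.1'
```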
MYEXTLIB If the extension links to a library that it builds, set this to the name of the library (see SDBM\_File)
NAME The package representing the distribution. For example, `Test::More` or `ExtUtils::MakeMaker`. It will be used to derive information about the distribution such as the ["DISTNAME"](#DISTNAME), installation locations within the Perl library and where XS files will be looked for by default (see ["XS"](#XS)).
`NAME` *must* be a valid Perl package name and it *must* have an associated `.pm` file. For example, `Foo::Bar` is a valid `NAME` and there must exist *Foo/Bar.pm*. Any XS code should be in *Bar.xs* unless stated otherwise.
Your distribution **must** have a `NAME`.
NEEDS\_LINKING MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
NOECHO Command so make does not print the literal commands it's running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to `@`.
NORECURS Boolean. Attribute to inhibit descending into subdirectories.
NO\_META When true, suppresses the generation and addition to the MANIFEST of the META.yml and META.json module meta-data files during 'make distdir'.
Defaults to false.
NO\_MYMETA Available in version 6.57\_02 and above.
When true, suppresses the generation of MYMETA.yml and MYMETA.json module meta-data files during 'perl Makefile.PL'.
Defaults to false.
NO\_PACKLIST Available in version 6.7501 and above.
When true, suppresses the writing of `packlist` files for installs.
Defaults to false.
NO\_PERLLOCAL Available in version 6.7501 and above.
When true, suppresses the appending of installations to `perllocal`.
Defaults to false.
NO\_VC In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO\_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
OBJECT List of object files, defaults to '$(BASEEXT)$(OBJ\_EXT)', but can be a long string or an array containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o" or ["tkpBind.o", "tkpButton.o", "tkpCanvas.o"]
(Where BASEEXT is the last component of NAME, and OBJ\_EXT is $Config{obj\_ext}.)
OPTIMIZE Defaults to `-O`. Set it to `-g` to turn debugging on. The flag is passed to subdirectory makes.
PERL Perl binary for tasks that can be done by miniperl. If it contains spaces or other shell metacharacters, it needs to be quoted in a way that protects them, since this value is intended to be inserted in a shell command line in the Makefile. E.g.:
```
# Perl executable lives in "C:/Program Files/Perl/bin"
# Normally you don't need to set this yourself!
$ perl Makefile.PL PERL='"C:/Program Files/Perl/bin/perl.exe" -w'
```
PERL\_CORE Set only when MakeMaker is building the extensions of the Perl core distribution.
PERLMAINCC The call to the program that is able to compile perlmain.c. Defaults to $(CC).
PERL\_ARCHLIB Same as for PERL\_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL\_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
PERL\_LIB Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL\_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
PERL\_MALLOC\_OK defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions, with the exception of those
* with bugs in memory allocations which are caught by Perl's malloc();
* which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
* which rely on special alignment which is not provided by Perl's malloc().
**NOTE.** Neglecting to set this flag in *any one* of the loaded extensions nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
PERLPREFIX Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp}, falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
PERLRUN Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
PERLRUNINST Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST\_ARCH) and other necessary flags so perl can see the modules you're about to install.
PERL\_SRC Directory containing the Perl source code (use of this should be avoided, it may be undefined)
PERM\_DIR Available in version 6.51\_01 and above.
Desired permission for directories. Defaults to `755`.
PERM\_RW Desired permission for read/writable files. Defaults to `644`.
PERM\_RWX Desired permission for executable files. Defaults to `755`.
PL\_FILES MakeMaker can run programs to generate files for you at build time. By default any file named \*.PL (except Makefile.PL and Build.PL) in the top level directory will be assumed to be a Perl program and run passing its own basename in as an argument. This basename is actually a build target, and there is an intention, but not a requirement, that the \*.PL file make the file passed to it as an argument. For example...
```
perl foo.PL foo
```
This behavior can be overridden by supplying your own set of files to search. PL\_FILES accepts a hash ref; the key is the file to run, and the value is passed in as the first argument when the PL file is run.
```
PL_FILES => {'bin/foobar.PL' => 'bin/foobar'}
PL_FILES => {'foo.PL' => 'foo.c'}
```
Would run bin/foobar.PL like this:
```
perl bin/foobar.PL bin/foobar
```
If multiple files from one program are desired an array ref can be used.
```
PL_FILES => {'bin/foobar.PL' => [qw(bin/foobar1 bin/foobar2)]}
```
In this case the program will be run multiple times using each target file.
```
perl bin/foobar.PL bin/foobar1
perl bin/foobar.PL bin/foobar2
```
If an output file depends on extra input files beside the script itself, a hash ref can be used in version 7.36 and above:
```
PL_FILES => { 'foo.PL' => {
    'foo.out' => 'foo.in',
    'bar.out' => [qw(bar1.in bar2.in)],
} },
```
In this case the extra input files will be passed to the program after the target file:
```
perl foo.PL foo.out foo.in
perl foo.PL bar.out bar1.in bar2.in
```
PL files are normally run **after** pm\_to\_blib and include INST\_LIB and INST\_ARCH in their `@INC`, so the just built modules can be accessed... unless the PL file is making a module (or anything else in PM) in which case it is run **before** pm\_to\_blib and does not include INST\_LIB and INST\_ARCH in its `@INC`. This apparently odd behavior is there for backwards compatibility (and it's somewhat DWIM). The argument passed to the .PL is set up as a target to build in the Makefile. In other sections such as `postamble` you can specify a dependency on the filename/argument that the .PL is supposed to generate (or will generate, now that it is a dependency). Note that the file will still be generated and the .PL will still run even without an explicit dependency created by you, since the `all` target still depends on running all eligible .PL files.
PM Hashref of .pm files and \*.pl files to be installed. e.g.
```
{'name_of_file.pm' => '$(INST_LIB)/install_as.pm'}
```
By default this will include \*.pm and \*.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
PMLIBDIRS Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and *any* files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
PM\_FILTER A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm\_to\_blib() phase). It is empty by default, meaning no filtering is done. You could use:
```
PM_FILTER => 'perl -ne "print unless /^\\#/"',
```
to remove all the leading comments on the fly during the build. In order to be as portable as possible, please consider using a Perl one-liner rather than Unix (or other) utilities, as above. The # is escaped for the Makefile, since what is going to be generated will then be:
```
PM_FILTER = perl -ne "print unless /^\#/"
```
Without the \ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
You will almost certainly be better off using the `PL_FILES` system, instead. See above, or the <ExtUtils::MakeMaker::FAQ> entry.
POLLUTE Prior to 5.6 various interpreter variables were available without a `PL_` prefix, eg. `PL_undef` was available as `undef`. As of release 5.6, these are only defined if the POLLUTE flag is enabled:
```
perl Makefile.PL POLLUTE=1
```
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
PPM\_INSTALL\_EXEC Name of the executable used to run `PPM_INSTALL_SCRIPT` below. (e.g. perl)
PPM\_INSTALL\_SCRIPT Name of the script that gets executed by the Perl Package Manager after the installation of a package.
PPM\_UNINSTALL\_EXEC Available in version 6.8502 and above.
Name of the executable used to run `PPM_UNINSTALL_SCRIPT` below. (e.g. perl)
PPM\_UNINSTALL\_SCRIPT Available in version 6.8502 and above.
Name of the script that gets executed by the Perl Package Manager before the removal of a package.
PREFIX This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL\* variables they will not be affected by the PREFIX.
PREREQ\_FATAL Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. `perl Makefile.PL` will `die` instead of simply informing the user of the missing dependencies.
It is *extremely* rare to have to use `PREREQ_FATAL`. Its use by module authors is *strongly discouraged* and should never be used lightly.
For dependencies that are required in order to run `Makefile.PL`, see `CONFIGURE_REQUIRES`.
Module installation tools have ways of resolving unmet dependencies but to do that they need a *Makefile*. Using `PREREQ_FATAL` breaks this. That's bad.
Assuming you have good test coverage, your tests should fail with missing dependencies informing the user more strongly that something is wrong. You can write a *t/00compile.t* test which will simply check that your code compiles and stop "make test" prematurely if it doesn't. See ["BAIL\_OUT" in Test::More](Test::More#BAIL_OUT) for more details.
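A minimal sketch of such a test; the module name is illustrative:

```
# t/00compile.t
use strict;
use warnings;
use Test::More tests => 1;

require_ok('Foo::Bar')
    or BAIL_OUT('Foo::Bar does not compile; stopping the test run');
```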
PREREQ\_PM A hash of modules that are needed to run your module. The keys are the module names ie. Test::More, and the minimum version is the value. If the required version number is 0 any version will do. The versions given may be a Perl v-string (see <version>) or a range (see <CPAN::Meta::Requirements>).
This will go into the `requires` field of your *META.yml* and the `runtime` of the `prereqs` field of your *META.json*.
```
PREREQ_PM => {
# Require Test::More at least 0.47
"Test::More" => "0.47",
# Require any version of Acme::Buffy
"Acme::Buffy" => 0,
}
```
PREREQ\_PRINT Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
```
$PREREQ_PM = {
'A::B' => Vers1,
'C::D' => Vers2,
...
};
```
If a distribution defines a minimal required perl version, this is added to the output as an additional line of the form:
```
$MIN_PERL_VERSION = '5.008001';
```
If BUILD\_REQUIRES is not empty, it will be dumped as $BUILD\_REQUIRES hashref.
PRINT\_PREREQ RedHatism for `PREREQ_PRINT`. The output format is different, though:
```
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
```
A minimal required perl version, if present, will look like this:
```
perl(perl)>=5.008001
```
SITEPREFIX Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
SIGN Available in version 6.18 and above.
When true, perform the generation and addition to the MANIFEST of the SIGNATURE file in the distdir during 'make distdir', via 'cpansign -s'.
Note that you need to install the Module::Signature module to perform this operation.
Defaults to false.
SKIP Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
TEST\_REQUIRES Available in version 6.64 and above.
A hash of modules that are needed to test your module but not run or build it.
This will go into the `build_requires` field of your *META.yml* and the `test` of the `prereqs` field of your *META.json*.
The format is the same as PREREQ\_PM.
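For example:

```
TEST_REQUIRES => {
    # Needed by the test suite only, at least version 0.47
    "Test::More" => "0.47",
},
```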
TYPEMAPS Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named **typemap**. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
VENDORPREFIX Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
VERBINST If true, make install will be verbose
VERSION Your version number for distributing the package. This defaults to 0.1.
VERSION\_FROM Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION\_FROM contains one single line to compute the version number. The first line in the file that contains something like a $VERSION assignment or `package Name VERSION` will be used. The following lines will be parsed o.k.:
```
# Good
package Foo::Bar 1.23; # 1.23
$VERSION = '1.00'; # 1.00
*VERSION = \'1.01'; # 1.01
($VERSION) = q$Revision$ =~ /(\d+)/g; # The digits in $Revision$
$FOO::VERSION = '1.10'; # 1.10
*FOO::VERSION = \'1.11'; # 1.11
```
but these will fail:
```
# Bad
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
```
(Putting `my` or `local` on the preceding line will work o.k.)
"Version strings" are incompatible and should not be used.
```
# Bad
$VERSION = 1.2.3;
$VERSION = v1.2.3;
```
<version> objects are fine. As of MakeMaker 6.35 version.pm will be automatically loaded, but you must declare the dependency on version.pm. For compatibility with older MakeMaker you should load version.pm on the same line as $VERSION is declared.
```
# All on one line
use version; our $VERSION = qv(1.2.3);
```
The file named in VERSION\_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
```
depend => { Makefile => '$(VERSION_FROM)' }
```
See attribute `depend` below.
VERSION\_SYM A sanitized VERSION with . replaced by \_. For places where . has special meaning (some filesystems, RCS labels, etc...)
XS Hashref of .xs files. MakeMaker will default this. e.g.
```
{'name_of_file.xs' => 'name_of_file.c'}
```
The .c files will automatically be included in the list of files deleted by a make clean.
XSBUILD Available in version 7.12 and above.
Hashref with options controlling the operation of `XSMULTI`:
```
{
xs => {
all => {
# options applying to all .xs files for this distribution
},
'lib/Class/Name/File' => { # specifically for this file
DEFINE => '-Dfunktastic', # defines for only this file
INC => "-I$funkyliblocation", # include flags for only this file
# OBJECT => 'lib/Class/Name/File$(OBJ_EXT)', # default
LDFROM => "lib/Class/Name/File\$(OBJ_EXT) $otherfile\$(OBJ_EXT)", # what's linked
},
},
}
```
Note `xs` is the file-extension. More possibilities may arise in the future. Note that object names are specified without their XS extension.
`LDFROM` defaults to the same as `OBJECT`. `OBJECT` defaults to, for `XSMULTI`, just the XS filename with the extension replaced with the compiler-specific object-file extension.
The distinction between `OBJECT` and `LDFROM`: `OBJECT` is the make target, so make will try to build it. However, `LDFROM` is what will actually be linked together to make the shared object or static library (SO/SL), so if you override it, make sure it includes what you want to make the final SO/SL, almost certainly including the XS basename with `$(OBJ_EXT)` appended.
XSMULTI Available in version 7.12 and above.
When this is set to `1`, multiple XS files may be placed under *lib/* next to their corresponding `*.pm` files (this is essential for compiling with the correct `VERSION` values). This feature should be considered experimental, and details of it may change.
This feature was inspired by, and small portions of code copied from, <ExtUtils::MakeMaker::BigHelper>. Hopefully this feature will render that module mainly obsolete.
XSOPT String of options to pass to xsubpp. This might include `-C++` or `-extern`. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
XSPROTOARG May be set to `-prototypes`, `-noprototypes` or the empty string. The empty string is equivalent to the xsubpp default, or `-noprototypes`. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
XS\_VERSION Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
###
Additional lowercase attributes
can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
clean
```
{FILES => "*.xyz foo"}
```
depend
```
{ANY_TARGET => ANY_DEPENDENCY, ...}
```
(ANY\_TARGET must not be given a double-colon rule by MakeMaker.)
dist
```
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
```
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST\_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST\_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
dynamic\_lib
```
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
```
linkext
```
{LINKTYPE => 'static', 'dynamic' or ''}
```
NB: Extensions that have nothing but \*.pm files had to say
```
{LINKTYPE => ''}
```
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
macro
```
{ANY_MACRO => ANY_VALUE, ...}
```
postamble Anything put here will be passed to [MY::postamble()](ExtUtils::MM_Any#postamble-%28o%29) if you have one.
realclean
```
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
```
test Specify the targets for testing.
```
{TESTS => 't/*.t'}
```
`RECURSIVE_TEST_FILES` can be used to include all directories recursively under `t` that contain `.t` files. It will be ignored if you provide your own `TESTS` attribute and defaults to false.
```
{RECURSIVE_TEST_FILES=>1}
```
This is supported since 6.76
tool\_autosplit
```
{MAXLEN => 8}
```
###
Overriding MakeMaker Methods
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
```
sub MY::c_o { "new literal text" }
```
or you can edit the default by saying something like:
```
package MY; # so that "SUPER" works right
sub c_o {
my $inherited = shift->SUPER::c_o(@_);
$inherited =~ s/old text/new text/;
$inherited;
}
```
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at <ExtUtils::Embed> which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to `[email protected]`
For a complete description of all MakeMaker methods see <ExtUtils::MM_Unix>.
Here is a simple example of how to add a new target to the generated Makefile:
```
sub MY::postamble {
return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
cd sdbm && $(MAKE) all
MAKE_FRAG
}
```
###
The End Of Cargo Cult Programming
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
`MAN3PODS => ' '`
This is commonly used to suppress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is `MAN3PODS => { }`.
###
Hintsfile support
MakeMaker.pm uses the architecture-specific information from Config.pm. In addition it evaluates architecture specific hints files in a `hints/` directory. The hints files are expected to be named like their counterparts in `PERL_SRC/hints`, but with an `.pl` file name extension (eg. `next_3_2.pl`). They are simply `eval`ed by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules which hintsfile is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
```
$self->{LIBS} = ['-ldbm -lucb -lc'];
```
###
Distribution Support
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the <ExtUtils::Manifest> module, where additional documentation can be found.
make distcheck reports which files are below the build directory but not in the MANIFEST file and vice versa. (See ["fullcheck" in ExtUtils::Manifest](ExtUtils::Manifest#fullcheck) for details)
make skipcheck reports which files are skipped due to the entries in the `MANIFEST.SKIP` file (See ["skipcheck" in ExtUtils::Manifest](ExtUtils::Manifest#skipcheck) for details)
make distclean does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
make veryclean does a realclean first and then removes backup files such as `*~`, `*.bak`, `*.old` and `*.orig`
make manifest rewrites the MANIFEST file, adding all remaining files found (See ["mkmanifest" in ExtUtils::Manifest](ExtUtils::Manifest#mkmanifest) for details)
make distdir Copies all the files that are in the MANIFEST file to a newly created directory with the name `$(DISTNAME)-$(VERSION)`. If that directory exists, it will be removed first.
Additionally, it will create META.yml and META.json module meta-data files in the distdir and add them to the distdir's MANIFEST. You can shut this behavior off with the NO\_META flag.
make disttest Makes a distdir first, and runs a `perl Makefile.PL`, a make, and a make test in that directory.
make tardist First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TO\_UNIX), which defaults to a null command under UNIX, and will convert files in distribution directory to UNIX format otherwise. Next it runs `tar` on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
make dist Defaults to $(DIST\_DEFAULT) which in turn defaults to tardist.
make uutardist Runs a tardist first and uuencodes the tarfile.
make shdist First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs `shar` on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a `shar` program that can handle directories is mandatory.
make zipdist First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs `$(ZIP) $(ZIPFLAGS)` on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
make ci Does a $(CI) and a $(RCS\_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
```
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
```
An example:
```
WriteMakefile(
...other options...
dist => {
COMPRESS => "bzip2",
SUFFIX => ".bz2"
}
);
```
###
Module Meta-Data (META and MYMETA)
Long plaguing users of MakeMaker based modules has been the problem of getting basic information about the module out of the sources *without* running the *Makefile.PL* and doing a bunch of messy heuristics on the resulting *Makefile*. Over the years, it has become standard to keep this information in one or more CPAN Meta files distributed with each distribution.
The original format of CPAN Meta files was [YAML](yaml) and the corresponding file was called *META.yml*. In 2010, version 2 of the <CPAN::Meta::Spec> was released, which mandates JSON format for the metadata in order to overcome certain compatibility issues between YAML serializers and to avoid breaking older clients unable to handle a new version of the spec. The <CPAN::Meta> library is now standard for accessing old and new-style Meta files.
If <CPAN::Meta> is installed, MakeMaker will automatically generate *META.json* and *META.yml* files for you and add them to your *MANIFEST* as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the `NO_META` `WriteMakefile()` flag to true.
At the 2008 QA Hackathon in Oslo, Perl module toolchain maintainers agreed to use the CPAN Meta format to communicate post-configuration requirements between toolchain components. These files, *MYMETA.json* and *MYMETA.yml*, are generated when *Makefile.PL* generates a *Makefile* (if <CPAN::Meta> is installed). Clients like [CPAN](cpan) or [CPANPLUS](cpanplus) will read these files to see what prerequisites must be fulfilled before building or testing the distribution. If you wish to shut this feature off, set the `NO_MYMETA` `WriteMakeFile()` flag to true.
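As an illustrative sketch only (the NAME and VERSION below are placeholders), both features can be disabled like this:

```
WriteMakefile(
    NAME      => 'Foo::Bar',
    VERSION   => '0.01',
    NO_META   => 1,    # skip META.yml / META.json generation at distdir time
    NO_MYMETA => 1,    # skip MYMETA.yml / MYMETA.json generation at Makefile time
);
```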
###
Disabling an extension
If some events detected in *Makefile.PL* imply that there is no way to create the Module, but this is a normal state of things, then you can create a *Makefile* which does nothing, but succeeds on all the "usual" build targets. To do so, use
```
use ExtUtils::MakeMaker qw(WriteEmptyMakefile);
WriteEmptyMakefile();
```
instead of WriteMakefile().
This may be useful if other modules expect this module to be *built* OK, as opposed to *work* OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
###
Other Handy Functions
prompt
```
my $value = prompt($message);
my $value = prompt($message, $default);
```
The `prompt()` function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If `prompt()` detects that it is not running interactively and there is nothing on STDIN or if the PERL\_MM\_USE\_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
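For example (the question and default below are purely illustrative), a Makefile.PL might ask:

```
my $answer  = prompt("Build the optional XS backend?", "n");
my $want_xs = $answer =~ /^y/i;

# Under PERL_MM_USE_DEFAULT=1, or when STDIN is not interactive,
# prompt() returns the default "n" here without blocking.
```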
os\_unsupported
```
os_unsupported();
os_unsupported if $^O eq 'MSWin32';
```
The `os_unsupported()` function provides a way to correctly exit your `Makefile.PL` before calling `WriteMakefile`. It is essentially a `die` with the message "OS unsupported".
This is supported since 7.26
###
Supported versions of Perl
Please note that while this module works on Perl 5.6, it is no longer being routinely tested on 5.6 - the earliest Perl version being routinely tested, and expressly supported, is 5.8.1. However, patches to repair any breakage on 5.6 are still being accepted.
ENVIRONMENT
-----------
PERL\_MM\_OPT Command line options used by `MakeMaker->new()`, and thus by `WriteMakefile()`. The string is split as the shell would, and the result is processed before any actual command line arguments are processed.
```
PERL_MM_OPT='CCFLAGS="-Wl,-rpath -Wl,/foo/bar/lib" LIBS="-lwibble -lwobble"'
```
PERL\_MM\_USE\_DEFAULT If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
PERL\_CORE Same as the PERL\_CORE parameter. The parameter overrides this.
SEE ALSO
---------
<Module::Build> is a pure-Perl alternative to MakeMaker which does not rely on make or any other external utility. It may be easier to extend to suit your needs.
<Module::Build::Tiny> is a minimal pure-Perl alternative to MakeMaker that follows the Build.PL protocol of Module::Build but without its complexity and cruft, implementing only the installation of the module and leaving authoring to <mbtiny> or other authoring tools.
<Module::Install> is a (now discouraged) wrapper around MakeMaker which adds features not normally available.
<ExtUtils::ModuleMaker> and <Module::Starter> are both modules to help you setup your distribution.
<CPAN::Meta> and <CPAN::Meta::Spec> explain CPAN Meta files in detail.
<File::ShareDir::Install> makes it easy to install static, sometimes also referred to as 'shared' files. <File::ShareDir> helps accessing the shared files after installation. <Test::File::ShareDir> helps when writing tests to use the shared files both before and after installation.
<Dist::Zilla> is an authoring tool which allows great customization and extensibility of the author experience, relying on the existing install tools like ExtUtils::MakeMaker only for installation.
<Dist::Milla> is a Dist::Zilla bundle that greatly simplifies common usage.
[Minilla](minilla) is a minimal authoring tool that does the same things as Dist::Milla without the overhead of Dist::Zilla.
AUTHORS
-------
Andy Dougherty `[email protected]`, Andreas König `[email protected]`, Tim Bunce `[email protected]`. VMS support by Charles Bailey `[email protected]`. OS/2 support by Ilya Zakharevich `[email protected]`.
Currently maintained by Michael G Schwern `[email protected]`
Send patches and ideas to `[email protected]`.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see <https://metacpan.org/release/ExtUtils-MakeMaker>.
Repository available at <https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker>.
LICENSE
-------
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See <http://www.perl.com/perl/misc/Artistic.html>
| programming_docs |
perl TAP::Parser::Result::YAML TAP::Parser::Result::YAML
=========================
CONTENTS
--------
* [NAME](#NAME)
* [VERSION](#VERSION)
* [DESCRIPTION](#DESCRIPTION)
* [OVERRIDDEN METHODS](#OVERRIDDEN-METHODS)
+ [Instance Methods](#Instance-Methods)
- [data](#data)
NAME
----
TAP::Parser::Result::YAML - YAML result token.
VERSION
-------
Version 3.44
DESCRIPTION
-----------
This is a subclass of <TAP::Parser::Result>. A token of this class will be returned if a YAML block is encountered.
```
1..1
ok 1 - woo hooo!
```
`1..1` is the plan. Gotta have a plan.
OVERRIDDEN METHODS
-------------------
Mainly listed here to shut up the pitiful screams of the pod coverage tests. They keep me awake at night.
* `as_string`
* `raw`
###
Instance Methods
#### `data`
```
if ( $result->is_yaml ) {
print $result->data;
}
```
Return the parsed YAML data for this result
perl Test2::Transition Test2::Transition
=================
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
* [THINGS THAT BREAK](#THINGS-THAT-BREAK)
+ [Test::Builder1.5/2 conditionals](#Test::Builder1.5/2-conditionals)
- [The Problem](#The-Problem)
- [The Fix](#The-Fix)
+ [Replacing the Test::Builder singleton](#Replacing-the-Test::Builder-singleton)
- [The Problem](#The-Problem1)
- [The Fix](#The-Fix1)
+ [Directly Accessing Hash Elements](#Directly-Accessing-Hash-Elements)
- [The Problem](#The-Problem2)
- [The Fix](#The-Fix2)
+ [Subtest indentation](#Subtest-indentation)
- [The Problem](#The-Problem3)
- [The Fix](#The-Fix3)
* [DISTRIBUTIONS THAT BREAK OR NEED TO BE UPGRADED](#DISTRIBUTIONS-THAT-BREAK-OR-NEED-TO-BE-UPGRADED)
+ [WORKS BUT TESTS WILL FAIL](#WORKS-BUT-TESTS-WILL-FAIL)
+ [UPGRADE SUGGESTED](#UPGRADE-SUGGESTED)
+ [NEED TO UPGRADE](#NEED-TO-UPGRADE)
+ [STILL BROKEN](#STILL-BROKEN)
* [MAKE ASSERTIONS -> SEND EVENTS](#MAKE-ASSERTIONS-%3E-SEND-EVENTS)
+ [LEGACY](#LEGACY)
+ [TEST2](#TEST2)
* [WRAP EXISTING TOOLS](#WRAP-EXISTING-TOOLS)
+ [LEGACY](#LEGACY1)
+ [TEST2](#TEST21)
* [USING UTF8](#USING-UTF8)
+ [LEGACY](#LEGACY2)
+ [TEST2](#TEST22)
* [AUTHORS, CONTRIBUTORS AND REVIEWERS](#AUTHORS,-CONTRIBUTORS-AND-REVIEWERS)
* [SOURCE](#SOURCE)
* [MAINTAINER](#MAINTAINER)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Test2::Transition - Transition notes when upgrading to Test2
DESCRIPTION
-----------
This is where gotchas and breakages related to the Test2 upgrade are documented. The upgrade causes Test::Builder to defer to Test2 under the hood. This transition is mostly transparent, but there are a few cases that can trip you up.
THINGS THAT BREAK
------------------
This is the list of scenarios that break with the new internals.
###
Test::Builder1.5/2 conditionals
####
The Problem
A few years back there were two attempts to upgrade/replace Test::Builder. Confusingly these were called Test::Builder2 and Test::Builder1.5, in that order. Many people put conditionals in their code to check the Test::Builder version number and adapt their code accordingly.
The Test::Builder2/1.5 projects both died out. Now the conditional code people added has become a mine field. A vast majority of modules broken by Test2 fall into this category.
####
The Fix
The fix is to remove all Test::Builder1.5/2 related code. Either use the legacy Test::Builder API, or use Test2 directly.
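As an illustration only (not taken from any particular module), the kind of conditional that should simply be deleted looks like this:

```
# Left over from the Test::Builder 1.5/2 era -- delete the whole check
# and keep only the legacy branch (or port to Test2 directly).
if (Test::Builder->VERSION >= 1.005) {
    # code written for an API that was never released
}
else {
    # code for the classic Test::Builder API
}
```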
###
Replacing the Test::Builder singleton
####
The Problem
Some test modules would replace the Test::Builder singleton instance with their own instance or subclass. This was usually done to intercept or modify results as they happened.
The Test::Builder singleton is now a simple compatibility wrapper around Test2. The Test::Builder singleton is no longer the central place for results. Many results bypass the Test::Builder singleton completely, which breaks any behavior intended when replacing the singleton.
####
The Fix
If you simply want to intercept all results instead of letting them go to TAP, you should look at the <Test2::API> docs and read about pushing a new hub onto the hub stack. Replacing the hub temporarily is now the correct way to intercept results.
If your goal is purely monitoring of events use the `Test2::Hub->listen()` method exported by Test::More to watch events as they are fired. If you wish to modify results before they go to TAP look at the `Test2::Hub->filter()` method.
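For example, a minimal sketch using `intercept()` from <Test2::API> to capture events rather than letting them become TAP output:

```
use Test2::API qw/intercept/;
use Test::More;

# Capture the events produced inside the block instead of printing them.
my $events = intercept {
    ok(1, "captured pass");
    ok(0, "captured fail");
};

is(scalar(@$events), 2, "got two events back");
done_testing;
```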
###
Directly Accessing Hash Elements
####
The Problem
Some modules look directly at hash keys on the Test::Builder singleton. The problem here is that the Test::Builder singleton no longer holds anything important.
####
The Fix
The fix is to use the API specified in <Test2::API> to look at or modify state as needed.
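A minimal sketch of reading state through the public API instead of poking at hash keys:

```
use Test2::API qw/test2_stack/;

my $hub     = test2_stack->top;    # the currently active hub
my $count   = $hub->count;         # assertions seen so far
my $passing = $hub->is_passing;    # true while no failures have been seen
```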
###
Subtest indentation
####
The Problem
An early change, in fact the change that made Test2 an idea, was a change to the indentation of the subtest note. It was decided it would be more readable to outdent the subtest note instead of having it inline with the subtest:
```
# subtest foo
    ok 1 - blah
    1..1
ok 1 - subtest foo
```
The old style indented the note:
```
    # subtest foo
    ok 1 - blah
    1..1
ok 1 - subtest foo
```
This breaks tests that do string comparison of TAP output.
####
The Fix
```
my $indent = $INC{'Test2/API.pm'} ? '' : '    ';
is(
$subtest_output,
"${indent}# subtest foo",
"Got subtest note"
);
```
Check if `$INC{'Test2/API.pm'}` is set. If it is, then no indentation should be expected. If it is not set, then the old Test::Builder is in use and indentation should be expected.
DISTRIBUTIONS THAT BREAK OR NEED TO BE UPGRADED
------------------------------------------------
This is a list of cpan modules that have been known to have been broken by the upgrade at one point.
###
WORKS BUT TESTS WILL FAIL
These modules still function correctly, but their test suites will not pass. If you already have these modules installed then you can continue to use them. If you are trying to install them after upgrading Test::Builder you will need to force installation, or bypass the broken tests.
Test::DBIx::Class::Schema This module has a test that appears to work around a Test::Builder bug. The bug appears to have been fixed by Test2, which means the workaround causes a failure. This can be easily updated, but nobody has done so yet.
Known broken in versions: 1.0.9 and older
Device::Chip Tests break due to subtest indentation.
Known broken in version 0.07. Apparently works fine in 0.06 though. Patch has been submitted to fix the issue.
###
UPGRADE SUGGESTED
These are modules that did not break, but had broken test suites that have since been fixed.
Test::Exception Old versions work fine, but have a minor test name behavior that breaks with Test2. Old versions will no longer install because of this. The latest version on CPAN will install just fine. Upgrading is not required, but is recommended.
Fixed in version: 0.43
Data::Peek Some tests depended on `$!` and `$?` being modified in subtle ways. A patch was applied to correct things that changed.
The module itself works fine, there is no need to upgrade.
Fixed in version: 0.45
circular::require Some tests were fragile and required base.pm to be loaded at a late stage. Test2 was loading base.pm too early. The tests were updated to fix this.
The module itself never broke, you do not need to upgrade.
Fixed in version: 0.12
Test::Module::Used A test worked around a now-fixed planning bug. There is no need to upgrade if you have an old version installed. New versions install fine if you want them.
Fixed in version: 0.2.5
Test::Moose::More Some tests were fragile, but have been fixed. The actual breakage was from the subtest comment indentation change.
No need to upgrade, old versions work fine. Only new versions will install.
Fixed in version: 0.025
Test::FITesque This was broken by a bugfix to how planning is done. The test was updated after the bugfix.
Fixed in version: 0.04
Test::Kit Old versions work fine, but would not install because <Test::Aggregate> was in the dependency chain. An upgrade should not be needed.
Fixed in version: 2.15
autouse A test broke because it depended on Scalar::Util not being loaded. Test2 loads Scalar::Util. The test was updated to load Test2 after checking Scalar::Util's load status.
There is no need to upgrade if you already have it installed.
Fixed in version: 1.11
###
NEED TO UPGRADE
Test::SharedFork Old versions need to directly access Test::Builder singleton hash elements. The latest version on CPAN will still do this on old Test::Builder, but will defer to <Test2::IPC> on Test2.
Fixed in version: 0.35
Test::Builder::Clutch This works by overriding methods on the singleton, and directly accessing hash values on the singleton. A new version has been released that uses the Test2 API to accomplish the same result in a saner way.
Fixed in version: 0.07
Test::Dist::VersionSync This had Test::Builder2 conditionals. This was fixed by removing the conditionals.
Fixed in version: 1.1.4
Test::Modern This relied on `Test::Builder->_try()` which was a private method, documented as something nobody should use. This was fixed by using a different tool.
Fixed in version: 0.012
Test::UseAllModules Version 0.14 relied on `Test::Builder->history` which was available in Test::Builder 1.5. Versions 0.12 and 0.13 relied on other Test::Builder internals.
Fixed in version: 0.15
Test::More::Prefix Worked by applying a role that wrapped `Test::Builder->_print_comment`. Fixed by adding an event filter that modifies the message instead when running under Test2.
Fixed in version: 0.007
###
STILL BROKEN
Test::Aggregate This distribution directly accesses the hash keys in the <Test::Builder> singleton. It also approaches the problem from the wrong angle, please consider using <Test2::Aggregate> for similar functionality and <Test2::Harness> which allows module preloading at the harness level.
Still broken as of version: 0.373
Test::Wrapper This module directly uses hash keys in the <Test::Builder> singleton. This module is also obsolete thanks to the benefits of [Test2](test2). Use `intercept()` from <Test2::API> to achieve a similar result.
Still broken as of version: 0.3.0
Test::ParallelSubtest This module overrides `Test::Builder::subtest()` and `Test::Builder::done_testing()`. It also directly accesses hash elements of the singleton. It has not yet been fixed.
Alternatives: <Test2::AsyncSubtest> and <Test2::Workflow> (not stable).
Still broken as of version: 0.05
Test::Pretty See https://github.com/tokuhirom/Test-Pretty/issues/25
The author admits the module is crazy, and he is awaiting a stable release of something new (Test2) to completely rewrite it in a sane way.
Still broken as of version: 0.32
Net::BitTorrent The tests for this module directly access <Test::Builder> hash keys. Most, if not all of these hash keys have public API methods that could be used instead to avoid the problem.
Still broken in version: 0.052
Test::Group It monkeypatches Test::Builder, and calls it "black magic" in the code.
Still broken as of version: 0.20
Test::Flatten This modifies the Test::Builder internals in many ways. A better way to accomplish the goal of this module is to write your own subtest function.
Still broken as of version: 0.11
Log::Dispatch::Config::TestLog Modifies Test::Builder internals.
Still broken as of version: 0.02
Test::Able Modifies Test::Builder internals.
Still broken as of version: 0.11
MAKE ASSERTIONS -> SEND EVENTS
-------------------------------
### LEGACY
```
use Test::Builder;
# A majority of tools out there do this:
# my $TB = Test::Builder->new;
# This works, but has always been wrong, forcing Test::Builder to implement
# subtests as a horrific hack. It also causes problems for tools that try
# to replace the singleton (also discouraged).
sub my_ok($;$) {
my ($bool, $name) = @_;
my $TB = Test::Builder->new;
$TB->ok($bool, $name);
}
sub my_diag($) {
my ($msg) = @_;
my $TB = Test::Builder->new;
$TB->diag($msg);
}
```
### TEST2
```
use Test2::API qw/context/;
sub my_ok($;$) {
my ($bool, $name) = @_;
my $ctx = context();
$ctx->ok($bool, $name);
$ctx->release;
}
sub my_diag($) {
my ($msg) = @_;
my $ctx = context();
$ctx->diag($msg);
$ctx->release;
}
```
The context object has API compatible implementations of the following methods:
ok($bool, $name)
diag(@messages)
note(@messages)
subtest($name, $code) If you are looking for helpers with `is`, `like`, and others, see <Test2::Suite>.
WRAP EXISTING TOOLS
--------------------
### LEGACY
```
use Test::More;
sub exclusive_ok {
my ($bool1, $bool2, $name) = @_;
# Ensure errors are reported 1 level higher
local $Test::Builder::Level = $Test::Builder::Level + 1;
my $ok = $bool1 || $bool2;
$ok &&= !($bool1 && $bool2);
ok($ok, $name);
return $ok;
}
```
Every single tool in the chain from this, to `ok`, to anything `ok` calls needs to increment the `$Level` variable. When an error occurs Test::Builder will do a trace to the stack frame determined by `$Level`, and report that file+line as the one where the error occurred. If you or any other tool you use forgets to set `$Level` then errors will be reported to the wrong place.
### TEST2
```
use Test::More;
use Test2::API qw/context/;
sub exclusive_ok {
my ($bool1, $bool2, $name) = @_;
# Grab and store the context, even if you do not need to use it
# directly.
my $ctx = context();
my $ok = $bool1 || $bool2;
$ok &&= !($bool1 && $bool2);
ok($ok, $name);
$ctx->release;
return $ok;
}
```
Instead of using `$Level` to perform a backtrace, Test2 uses a context object. In this sample you create a context object and store it. This locks the context (errors report 1 level up from here) for all wrapped tools to find. You do not need to use the context object, but you do need to store it in a variable. Once the sub ends the `$ctx` variable is destroyed which lets future tools find their own.
USING UTF8
-----------
### LEGACY
```
# Set the mode BEFORE anything loads Test::Builder
use open ':std', ':encoding(utf8)';
use Test::More;
```
Or
```
# Modify the filehandles
my $builder = Test::More->builder;
binmode $builder->output, ":encoding(utf8)";
binmode $builder->failure_output, ":encoding(utf8)";
binmode $builder->todo_output, ":encoding(utf8)";
```
### TEST2
```
use Test2::API qw/test2_stack/;
test2_stack->top->format->encoding('utf8');
```
Though a much better way is to use the <Test2::Plugin::UTF8> plugin, which is part of <Test2::Suite>.
AUTHORS, CONTRIBUTORS AND REVIEWERS
------------------------------------
The following people have all contributed to this document in some way, even if only for review.
Chad Granum (EXODIST) <[email protected]>
SOURCE
------
The source code repository for Test2 can be found at *http://github.com/Test-More/test-more/*.
MAINTAINER
----------
Chad Granum <[email protected]> COPYRIGHT
---------
Copyright 2020 Chad Granum <[email protected]>.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See *http://www.perl.com/perl/misc/Artistic.html*
perl I18N::LangTags::List I18N::LangTags::List
====================
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
* [ABOUT LANGUAGE TAGS](#ABOUT-LANGUAGE-TAGS)
* [LIST OF LANGUAGES](#LIST-OF-LANGUAGES)
* [SEE ALSO](#SEE-ALSO)
* [COPYRIGHT AND DISCLAIMER](#COPYRIGHT-AND-DISCLAIMER)
* [AUTHOR](#AUTHOR)
NAME
----
I18N::LangTags::List -- tags and names for human languages
SYNOPSIS
--------
```
use I18N::LangTags::List;
print "Parlez-vous... ", join(', ',
I18N::LangTags::List::name('elx') || 'unknown_language',
I18N::LangTags::List::name('ar-Kw') || 'unknown_language',
I18N::LangTags::List::name('en') || 'unknown_language',
I18N::LangTags::List::name('en-CA') || 'unknown_language',
), "?\n";
```
prints:
```
Parlez-vous... Elamite, Kuwait Arabic, English, Canadian English?
```
DESCRIPTION
-----------
This module provides a function `I18N::LangTags::List::name( *langtag* )` that takes a language tag (see <I18N::LangTags>) and returns the best attempt at an English name for it, or undef if it can't make sense of the tag.
The function I18N::LangTags::List::name(...) is not exported.
This module also provides a function `I18N::LangTags::List::is_decent( *langtag* )` that returns true iff the language tag is syntactically valid and is for general use (like "fr" or "fr-ca", below). That is, it returns false for tags that are syntactically invalid and for tags, like "aus", that are listed in brackets below. This function is not exported.
The map of tags-to-names that it uses is accessible as %I18N::LangTags::List::Name, and it's the same as the list that follows in this documentation, which should be useful to you even if you don't use this module.
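For example (the printed names are whatever the list below provides):

```
use I18N::LangTags::List;

print I18N::LangTags::List::name('fr-CA'), "\n";   # e.g. "Canadian French"

# Bracketed group tags such as {aus} are rejected by is_decent():
print I18N::LangTags::List::is_decent('fr-CA') ? "ok\n" : "not for general use\n";
print I18N::LangTags::List::is_decent('aus')   ? "ok\n" : "not for general use\n";
```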
ABOUT LANGUAGE TAGS
--------------------
Internet language tags, as defined in RFC 3066, are a formalism for denoting human languages. The two-letter ISO 639-1 language codes are well known (as "en" for English), as are their forms when qualified by a country code ("en-US"). Less well-known are the arbitrary-length non-ISO codes (like "i-mingo"), and the recently (in 2001) introduced three-letter ISO-639-2 codes.
Remember these important facts:
* Language tags are not locale IDs. A locale ID is written with a "\_" instead of a "-", (almost?) always matches `m/^\w\w_\w\w\b/`, and *means* something different than a language tag. A language tag denotes a language. A locale ID denotes a language *as used in* a particular place, in combination with non-linguistic location-specific information such as what currency is used there. Locales *also* often denote character set information, as in "en\_US.ISO8859-1".
* Language tags are not for computer languages.
* "Dialect" is not a useful term, since there is no objective criterion for establishing when two language-forms are dialects of eachother, or are separate languages.
* Language tags are not case-sensitive. en-US, en-us, En-Us, etc., are all the same tag, and denote the same language.
* Not every language tag really refers to a single language. Some language tags refer to conditions: i-default (system-message text in English plus maybe other languages), und (undetermined language). Others (notably lots of the three-letter codes) are bibliographic tags that classify whole groups of languages, as with cus "Cushitic (Other)" (i.e., a language that has been classed as Cushitic, but which has no more specific code) or the even less linguistically coherent sai for "South American Indian (Other)". Though useful in bibliography, **SUCH TAGS ARE NOT FOR GENERAL USE**. For further guidance, email me.
* Language tags are not country codes. In fact, they are often distinct codes, as with language tag ja for Japanese, and ISO 3166 country code `.jp` for Japan.
LIST OF LANGUAGES
------------------
The first part of each item is the language tag, between {...}. It is followed by an English name for the language or language-group. Language tags that I judge to be not for general use, are bracketed.
This list is in alphabetical order by English name of the language.
{ab} : Abkhazian eq Abkhaz
{ace} : Achinese
{ach} : Acoli
{ada} : Adangme
{ady} : Adyghe eq Adygei
{aa} : Afar
{afh} : Afrihili (Artificial)
{af} : Afrikaans
[{afa} : Afro-Asiatic (Other)]
{ak} : Akan (Formerly "aka".)
{akk} : Akkadian (Historical)
{sq} : Albanian
{ale} : Aleut
[{alg} : Algonquian languages] NOT Algonquin!
[{tut} : Altaic (Other)]
{am} : Amharic NOT Aramaic!
{i-ami} : Ami eq Amis. eq 'Amis. eq Pangca.
[{apa} : Apache languages]
{ar} : Arabic Many forms are mutually un-intelligible in spoken media. Notable forms: {ar-ae} UAE Arabic; {ar-bh} Bahrain Arabic; {ar-dz} Algerian Arabic; {ar-eg} Egyptian Arabic; {ar-iq} Iraqi Arabic; {ar-jo} Jordanian Arabic; {ar-kw} Kuwait Arabic; {ar-lb} Lebanese Arabic; {ar-ly} Libyan Arabic; {ar-ma} Moroccan Arabic; {ar-om} Omani Arabic; {ar-qa} Qatari Arabic; {ar-sa} Saudi Arabic; {ar-sy} Syrian Arabic; {ar-tn} Tunisian Arabic; {ar-ye} Yemen Arabic.
{arc} : Aramaic NOT Amharic! NOT Samaritan Aramaic!
{arp} : Arapaho
{arn} : Araucanian
{arw} : Arawak
{hy} : Armenian
{an} : Aragonese
[{art} : Artificial (Other)]
{ast} : Asturian eq Bable.
{as} : Assamese
[{ath} : Athapascan languages] eq Athabaskan. eq Athapaskan. eq Athabascan.
[{aus} : Australian languages]
[{map} : Austronesian (Other)]
{av} : Avaric (Formerly "ava".)
{ae} : Avestan eq Zend
{awa} : Awadhi
{ay} : Aymara
{az} : Azerbaijani eq Azeri
Notable forms: {az-Arab} Azerbaijani in Arabic script; {az-Cyrl} Azerbaijani in Cyrillic script; {az-Latn} Azerbaijani in Latin script.
{ban} : Balinese
[{bat} : Baltic (Other)]
{bal} : Baluchi
{bm} : Bambara (Formerly "bam".)
[{bai} : Bamileke languages]
{bad} : Banda
[{bnt} : Bantu (Other)]
{bas} : Basa
{ba} : Bashkir
{eu} : Basque
{btk} : Batak (Indonesia)
{bej} : Beja
{be} : Belarusian eq Belarussian. eq Byelarussian. eq Belorussian. eq Byelorussian. eq White Russian. eq White Ruthenian. NOT Ruthenian!
{bem} : Bemba
{bn} : Bengali eq Bangla.
[{ber} : Berber (Other)]
{bho} : Bhojpuri
{bh} : Bihari
{bik} : Bikol
{bin} : Bini
{bi} : Bislama eq Bichelamar.
{bs} : Bosnian
{bra} : Braj
{br} : Breton
{bug} : Buginese
{bg} : Bulgarian
{i-bnn} : Bunun
{bua} : Buriat
{my} : Burmese
{cad} : Caddo
{car} : Carib
{ca} : Catalan eq Catalán. eq Catalonian.
[{cau} : Caucasian (Other)]
{ceb} : Cebuano
[{cel} : Celtic (Other)] Notable forms: {cel-gaulish} Gaulish (Historical)
[{cai} : Central American Indian (Other)]
{chg} : Chagatai (Historical?)
[{cmc} : Chamic languages]
{ch} : Chamorro
{ce} : Chechen
{chr} : Cherokee eq Tsalagi
{chy} : Cheyenne
{chb} : Chibcha (Historical) NOT Chibchan (which is a language family).
{ny} : Chichewa eq Nyanja. eq Chinyanja.
{zh} : Chinese Many forms are mutually un-intelligible in spoken media. Notable forms: {zh-Hans} Chinese, in simplified script; {zh-Hant} Chinese, in traditional script; {zh-tw} Taiwan Chinese; {zh-cn} PRC Chinese; {zh-sg} Singapore Chinese; {zh-mo} Macau Chinese; {zh-hk} Hong Kong Chinese; {zh-guoyu} Mandarin [Putonghua/Guoyu]; {zh-hakka} Hakka [formerly "i-hakka"]; {zh-min} Hokkien; {zh-min-nan} Southern Hokkien; {zh-wuu} Shanghaiese; {zh-xiang} Hunanese; {zh-gan} Gan; {zh-yue} Cantonese.
{chn} : Chinook Jargon eq Chinook Wawa.
{chp} : Chipewyan
{cho} : Choctaw
{cu} : Church Slavic eq Old Church Slavonic.
{chk} : Chuukese eq Trukese. eq Chuuk. eq Truk. eq Ruk.
{cv} : Chuvash
{cop} : Coptic
{kw} : Cornish
{co} : Corsican eq Corse.
{cr} : Cree NOT Creek! (Formerly "cre".)
{mus} : Creek NOT Cree!
[{cpe} : English-based Creoles and pidgins (Other)]
[{cpf} : French-based Creoles and pidgins (Other)]
[{cpp} : Portuguese-based Creoles and pidgins (Other)]
[{crp} : Creoles and pidgins (Other)]
{hr} : Croatian eq Croat.
[{cus} : Cushitic (Other)]
{cs} : Czech
{dak} : Dakota eq Nakota. eq Latoka.
{da} : Danish
{dar} : Dargwa
{day} : Dayak
{i-default} : Default (Fallthru) Language Defined in RFC 2277, this is for tagging text (which must include English text, and might/should include text in other appropriate languages) that is emitted in a context where language-negotiation wasn't possible -- in SMTP mail failure messages, for example.
{del} : Delaware
{din} : Dinka
{dv} : Divehi eq Maldivian. (Formerly "div".)
{doi} : Dogri NOT Dogrib!
{dgr} : Dogrib NOT Dogri!
[{dra} : Dravidian (Other)]
{dua} : Duala
{nl} : Dutch eq Netherlander. Notable forms: {nl-nl} Netherlands Dutch; {nl-be} Belgian Dutch.
{dum} : Middle Dutch (ca.1050-1350) (Historical)
{dyu} : Dyula
{dz} : Dzongkha
{efi} : Efik
{egy} : Ancient Egyptian (Historical)
{eka} : Ekajuk
{elx} : Elamite (Historical)
{en} : English Notable forms: {en-au} Australian English; {en-bz} Belize English; {en-ca} Canadian English; {en-gb} UK English; {en-ie} Irish English; {en-jm} Jamaican English; {en-nz} New Zealand English; {en-ph} Philippine English; {en-tt} Trinidad English; {en-us} US English; {en-za} South African English; {en-zw} Zimbabwe English.
{enm} : Old English (1100-1500) (Historical)
{ang} : Old English (ca.450-1100) eq Anglo-Saxon. (Historical)
{i-enochian} : Enochian (Artificial)
{myv} : Erzya
{eo} : Esperanto (Artificial)
{et} : Estonian
{ee} : Ewe (Formerly "ewe".)
{ewo} : Ewondo
{fan} : Fang
{fat} : Fanti
{fo} : Faroese
{fj} : Fijian
{fi} : Finnish
[{fiu} : Finno-Ugrian (Other)] eq Finno-Ugric. NOT Ugaritic!
{fon} : Fon
{fr} : French Notable forms: {fr-fr} France French; {fr-be} Belgian French; {fr-ca} Canadian French; {fr-ch} Swiss French; {fr-lu} Luxembourg French; {fr-mc} Monaco French.
{frm} : Middle French (ca.1400-1600) (Historical)
{fro} : Old French (842-ca.1400) (Historical)
{fy} : Frisian
{fur} : Friulian
{ff} : Fulah (Formerly "ful".)
{gaa} : Ga
{gd} : Scots Gaelic NOT Scots!
{gl} : Gallegan eq Galician
{lg} : Ganda (Formerly "lug".)
{gay} : Gayo
{gba} : Gbaya
{gez} : Geez eq Ge'ez
{ka} : Georgian
{de} : German Notable forms: {de-at} Austrian German; {de-be} Belgian German; {de-ch} Swiss German; {de-de} Germany German; {de-li} Liechtenstein German; {de-lu} Luxembourg German.
{gmh} : Middle High German (ca.1050-1500) (Historical)
{goh} : Old High German (ca.750-1050) (Historical)
[{gem} : Germanic (Other)]
{gil} : Gilbertese
{gon} : Gondi
{gor} : Gorontalo
{got} : Gothic (Historical)
{grb} : Grebo
{grc} : Ancient Greek (Historical) (Until 15th century or so.)
{el} : Modern Greek (Since 15th century or so.)
{gn} : Guarani Guaraní
{gu} : Gujarati
{gwi} : Gwich'in eq Gwichin
{hai} : Haida
{ht} : Haitian eq Haitian Creole
{ha} : Hausa
{haw} : Hawaiian Hawai'ian
{he} : Hebrew (Formerly "iw".)
{hz} : Herero
{hil} : Hiligaynon
{him} : Himachali
{hi} : Hindi
{ho} : Hiri Motu
{hit} : Hittite (Historical)
{hmn} : Hmong
{hu} : Hungarian
{hup} : Hupa
{iba} : Iban
{is} : Icelandic
{io} : Ido (Artificial)
{ig} : Igbo (Formerly "ibo".)
{ijo} : Ijo
{ilo} : Iloko
[{inc} : Indic (Other)]
[{ine} : Indo-European (Other)]
{id} : Indonesian (Formerly "in".)
{inh} : Ingush
{ia} : Interlingua (International Auxiliary Language Association) (Artificial) NOT Interlingue!
{ie} : Interlingue (Artificial) NOT Interlingua!
{iu} : Inuktitut A subform of "Eskimo".
{ik} : Inupiaq A subform of "Eskimo".
[{ira} : Iranian (Other)]
{ga} : Irish
{mga} : Middle Irish (900-1200) (Historical)
{sga} : Old Irish (to 900) (Historical)
[{iro} : Iroquoian languages]
{it} : Italian Notable forms: {it-it} Italy Italian; {it-ch} Swiss Italian.
{ja} : Japanese (NOT "jp"!)
{jv} : Javanese (Formerly "jw" because of a typo.)
{jrb} : Judeo-Arabic
{jpr} : Judeo-Persian
{kbd} : Kabardian
{kab} : Kabyle
{kac} : Kachin
{kl} : Kalaallisut eq Greenlandic "Eskimo"
{xal} : Kalmyk
{kam} : Kamba
{kn} : Kannada eq Kanarese. NOT Canadian!
{kr} : Kanuri (Formerly "kau".)
{krc} : Karachay-Balkar
{kaa} : Kara-Kalpak
{kar} : Karen
{ks} : Kashmiri
{csb} : Kashubian eq Kashub
{kaw} : Kawi
{kk} : Kazakh
{kha} : Khasi
{km} : Khmer eq Cambodian. eq Kampuchean.
[{khi} : Khoisan (Other)]
{kho} : Khotanese
{ki} : Kikuyu eq Gikuyu.
{kmb} : Kimbundu
{rw} : Kinyarwanda
{ky} : Kirghiz
{i-klingon} : Klingon
{kv} : Komi
{kg} : Kongo (Formerly "kon".)
{kok} : Konkani
{ko} : Korean
{kos} : Kosraean
{kpe} : Kpelle
{kro} : Kru
{kj} : Kuanyama
{kum} : Kumyk
{ku} : Kurdish
{kru} : Kurukh
{kut} : Kutenai
{lad} : Ladino eq Judeo-Spanish. NOT Ladin (a minority language in Italy).
{lah} : Lahnda NOT Lamba!
{lam} : Lamba NOT Lahnda!
{lo} : Lao eq Laotian.
{la} : Latin (Historical) NOT Ladin! NOT Ladino!
{lv} : Latvian eq Lettish.
{lb} : Letzeburgesch eq Luxemburgian, eq Luxemburger. (Formerly "i-lux".)
{lez} : Lezghian
{li} : Limburgish eq Limburger, eq Limburgan. NOT Letzeburgesch!
{ln} : Lingala
{lt} : Lithuanian
{nds} : Low German eq Low Saxon.
{art-lojban} : Lojban (Artificial)
{loz} : Lozi
{lu} : Luba-Katanga (Formerly "lub".)
{lua} : Luba-Lulua
{lui} : Luiseno eq Luiseño.
{lun} : Lunda
{luo} : Luo (Kenya and Tanzania)
{lus} : Lushai
{mk} : Macedonian eq the modern Slavic language spoken in what was Yugoslavia. NOT the form of Greek spoken in Greek Macedonia!
{mad} : Madurese
{mag} : Magahi
{mai} : Maithili
{mak} : Makasar
{mg} : Malagasy
{ms} : Malay NOT Malayalam!
{ml} : Malayalam NOT Malay!
{mt} : Maltese
{mnc} : Manchu
{mdr} : Mandar NOT Mandarin!
{man} : Mandingo
{mni} : Manipuri eq Meithei.
[{mno} : Manobo languages]
{gv} : Manx
{mi} : Maori NOT Mari!
{mr} : Marathi
{chm} : Mari NOT Maori!
{mh} : Marshall eq Marshallese.
{mwr} : Marwari
{mas} : Masai
[{myn} : Mayan languages]
{men} : Mende
{mic} : Micmac
{min} : Minangkabau
{i-mingo} : Mingo eq the Iroquoian language West Virginia Seneca. NOT New York Seneca!
[{mis} : Miscellaneous languages] Don't use this.
{moh} : Mohawk
{mdf} : Moksha
{mo} : Moldavian eq Moldovan.
[{mkh} : Mon-Khmer (Other)]
{lol} : Mongo
{mn} : Mongolian eq Mongol.
{mos} : Mossi
[{mul} : Multiple languages] Not for normal use.
[{mun} : Munda languages]
{nah} : Nahuatl
{nap} : Neapolitan
{na} : Nauru
{nv} : Navajo eq Navaho. (Formerly "i-navajo".)
{nd} : North Ndebele
{nr} : South Ndebele
{ng} : Ndonga
{ne} : Nepali eq Nepalese. Notable forms: {ne-np} Nepal Nepali; {ne-in} India Nepali.
{new} : Newari
{nia} : Nias
[{nic} : Niger-Kordofanian (Other)]
[{ssa} : Nilo-Saharan (Other)]
{niu} : Niuean
{nog} : Nogai
{non} : Old Norse (Historical)
[{nai} : North American Indian] Do not use this.
{no} : Norwegian Note the two following forms:
{nb} : Norwegian Bokmal eq Bokmål, (A form of Norwegian.) (Formerly "no-bok".)
{nn} : Norwegian Nynorsk (A form of Norwegian.) (Formerly "no-nyn".)
[{nub} : Nubian languages]
{nym} : Nyamwezi
{nyn} : Nyankole
{nyo} : Nyoro
{nzi} : Nzima
{oc} : Occitan (post 1500) eq Provençal, eq Provencal
{oj} : Ojibwa eq Ojibwe. (Formerly "oji".)
{or} : Oriya
{om} : Oromo
{osa} : Osage
{os} : Ossetian; Ossetic
[{oto} : Otomian languages] Group of languages collectively called "Otomí".
{pal} : Pahlavi eq Pahlevi
{i-pwn} : Paiwan eq Pariwan
{pau} : Palauan
{pi} : Pali (Historical?)
{pam} : Pampanga
{pag} : Pangasinan
{pa} : Panjabi eq Punjabi
{pap} : Papiamento eq Papiamentu.
[{paa} : Papuan (Other)]
{fa} : Persian eq Farsi. eq Iranian.
{peo} : Old Persian (ca.600-400 B.C.)
[{phi} : Philippine (Other)]
{phn} : Phoenician (Historical)
{pon} : Pohnpeian NOT Pompeiian!
{pl} : Polish
{pt} : Portuguese eq Portugese. Notable forms: {pt-pt} Portugal Portuguese; {pt-br} Brazilian Portuguese.
[{pra} : Prakrit languages]
{pro} : Old Provencal (to 1500) eq Old Provençal. (Historical.)
{ps} : Pushto eq Pashto. eq Pushtu.
{qu} : Quechua eq Quecha.
{rm} : Raeto-Romance eq Romansh.
{raj} : Rajasthani
{rap} : Rapanui
{rar} : Rarotongan
[{qaa - qtz} : Reserved for local use.]
[{roa} : Romance (Other)] NOT Romanian! NOT Romany! NOT Romansh!
{ro} : Romanian eq Rumanian. NOT Romany!
{rom} : Romany eq Rom. NOT Romanian!
{rn} : Rundi
{ru} : Russian NOT White Russian! NOT Rusyn!
[{sal} : Salishan languages] Large language group.
{sam} : Samaritan Aramaic NOT Aramaic!
{se} : Northern Sami eq Lappish. eq Lapp. eq (Northern) Saami.
{sma} : Southern Sami
{smn} : Inari Sami
{smj} : Lule Sami
{sms} : Skolt Sami
[{smi} : Sami languages (Other)]
{sm} : Samoan
{sad} : Sandawe
{sg} : Sango
{sa} : Sanskrit (Historical)
{sat} : Santali
{sc} : Sardinian eq Sard.
{sas} : Sasak
{sco} : Scots NOT Scots Gaelic!
{sel} : Selkup
[{sem} : Semitic (Other)]
{sr} : Serbian eq Serb. NOT Sorbian.
Notable forms: {sr-Cyrl} : Serbian in Cyrillic script; {sr-Latn} : Serbian in Latin script.
{srr} : Serer
{shn} : Shan
{sn} : Shona
{sid} : Sidamo
{sgn-...} : Sign Languages Always use with a subtag. Notable forms: {sgn-gb} British Sign Language (BSL); {sgn-ie} Irish Sign Language (ESL); {sgn-ni} Nicaraguan Sign Language (ISN); {sgn-us} American Sign Language (ASL).
(And so on with other country codes as the subtag.)
{bla} : Siksika eq Blackfoot. eq Pikanii.
{sd} : Sindhi
{si} : Sinhalese eq Sinhala.
[{sit} : Sino-Tibetan (Other)]
[{sio} : Siouan languages]
{den} : Slave (Athapascan) ("Slavey" is a subform.)
[{sla} : Slavic (Other)]
{sk} : Slovak eq Slovakian.
{sl} : Slovenian eq Slovene.
{sog} : Sogdian
{so} : Somali
{son} : Songhai
{snk} : Soninke
{wen} : Sorbian languages eq Wendish. eq Sorb. eq Lusatian. eq Wend. NOT Venda! NOT Serbian!
{nso} : Northern Sotho
{st} : Southern Sotho eq Sutu. eq Sesotho.
[{sai} : South American Indian (Other)]
{es} : Spanish Notable forms: {es-ar} Argentine Spanish; {es-bo} Bolivian Spanish; {es-cl} Chilean Spanish; {es-co} Colombian Spanish; {es-do} Dominican Spanish; {es-ec} Ecuadorian Spanish; {es-es} Spain Spanish; {es-gt} Guatemalan Spanish; {es-hn} Honduran Spanish; {es-mx} Mexican Spanish; {es-pa} Panamanian Spanish; {es-pe} Peruvian Spanish; {es-pr} Puerto Rican Spanish; {es-py} Paraguay Spanish; {es-sv} Salvadoran Spanish; {es-us} US Spanish; {es-uy} Uruguayan Spanish; {es-ve} Venezuelan Spanish.
{suk} : Sukuma
{sux} : Sumerian (Historical)
{su} : Sundanese
{sus} : Susu
{sw} : Swahili eq Kiswahili
{ss} : Swati
{sv} : Swedish Notable forms: {sv-se} Sweden Swedish; {sv-fi} Finland Swedish.
{syr} : Syriac
{tl} : Tagalog
{ty} : Tahitian
[{tai} : Tai (Other)] NOT Thai!
{tg} : Tajik
{tmh} : Tamashek
{ta} : Tamil
{i-tao} : Tao eq Yami.
{tt} : Tatar
{i-tay} : Tayal eq Atayal. eq Atayan.
{te} : Telugu
{ter} : Tereno
{tet} : Tetum
{th} : Thai NOT Tai!
{bo} : Tibetan
{tig} : Tigre
{ti} : Tigrinya
{tem} : Timne eq Themne. eq Timene.
{tiv} : Tiv
{tli} : Tlingit
{tpi} : Tok Pisin
{tkl} : Tokelau
{tog} : Tonga (Nyasa) NOT Tsonga!
{to} : Tonga (Tonga Islands) (Pronounced "Tong-a", not "Tong-ga")
NOT Tsonga!
{tsi} : Tsimshian eq Sm'algyax
{ts} : Tsonga NOT Tonga!
{i-tsu} : Tsou
{tn} : Tswana Same as Setswana.
{tum} : Tumbuka
[{tup} : Tupi languages]
{tr} : Turkish (Typically in Roman script)
{ota} : Ottoman Turkish (1500-1928) (Typically in Arabic script) (Historical)
{crh} : Crimean Turkish eq Crimean Tatar
{tk} : Turkmen eq Turkmeni.
{tvl} : Tuvalu
{tyv} : Tuvinian eq Tuvan. eq Tuvin.
{tw} : Twi
{udm} : Udmurt
{uga} : Ugaritic NOT Ugric!
{ug} : Uighur
{uk} : Ukrainian
{umb} : Umbundu
{und} : Undetermined Not a tag for normal use.
{ur} : Urdu
{uz} : Uzbek eq Özbek
Notable forms: {uz-Cyrl} Uzbek in Cyrillic script; {uz-Latn} Uzbek in Latin script.
{vai} : Vai
{ve} : Venda NOT Wendish! NOT Wend! NOT Avestan! (Formerly "ven".)
{vi} : Vietnamese eq Viet.
{vo} : Volapuk eq Volapük. (Artificial)
{vot} : Votic eq Votian. eq Vod.
[{wak} : Wakashan languages]
{wa} : Walloon
{wal} : Walamo eq Wolaytta.
{war} : Waray Presumably the Philippine language Waray-Waray (Samareño), not the smaller Philippine language Waray Sorsogon, nor the extinct Australian language Waray.
{was} : Washo eq Washoe
{cy} : Welsh
{wo} : Wolof
{x-...} : Unregistered (Semi-Private Use) "x-" is a prefix for language tags that are not registered with ISO or IANA. Example, x-double-dutch
{xh} : Xhosa
{sah} : Yakut
{yao} : Yao (The Yao in Malawi?)
{yap} : Yapese eq Yap
{ii} : Sichuan Yi
{yi} : Yiddish Formerly "ji". Usually in Hebrew script.
Notable forms: {yi-latn} Yiddish in Latin script
{yo} : Yoruba
[{ypk} : Yupik languages] Several "Eskimo" languages.
{znd} : Zande
[{zap} : Zapotec] (A group of languages.)
{zen} : Zenaga NOT Zend.
{za} : Zhuang
{zu} : Zulu
{zun} : Zuni eq Zuñi
SEE ALSO
---------
<I18N::LangTags> and its "See Also" section.
COPYRIGHT AND DISCLAIMER
-------------------------
Copyright (c) 2001+ Sean M. Burke. All rights reserved.
You can redistribute and/or modify this document under the same terms as Perl itself.
This document is provided in the hope that it will be useful, but without any warranty; without even the implied warranty of accuracy, authoritativeness, completeness, merchantability, or fitness for a particular purpose.
Email any corrections or questions to me.
AUTHOR
------
Sean M. Burke, [email protected]
| programming_docs |
perl IPC::Semaphore IPC::Semaphore
==============
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
* [METHODS](#METHODS)
* [SEE ALSO](#SEE-ALSO)
* [AUTHORS](#AUTHORS)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
IPC::Semaphore - SysV Semaphore IPC object class
SYNOPSIS
--------
```
use IPC::SysV qw(IPC_PRIVATE S_IRUSR S_IWUSR IPC_CREAT);
use IPC::Semaphore;
$sem = IPC::Semaphore->new(IPC_PRIVATE, 10, S_IRUSR | S_IWUSR | IPC_CREAT);
$sem->setall( (0) x 10);
@sem = $sem->getall;
$ncnt = $sem->getncnt;
$zcnt = $sem->getzcnt;
$ds = $sem->stat;
$sem->remove;
```
DESCRIPTION
-----------
A class providing an object based interface to SysV IPC semaphores.
METHODS
-------
new ( KEY , NSEMS , FLAGS ) Create a new semaphore set associated with `KEY`. `NSEMS` is the number of semaphores in the set. A new set is created if
* `KEY` is equal to `IPC_PRIVATE`
* `KEY` does not already have a semaphore identifier associated with it, and `*FLAGS* & IPC_CREAT` is true.
On creation of a new semaphore set `FLAGS` is used to set the permissions. Be careful not to set any flags that the Sys V IPC implementation does not allow: in some systems setting execute bits makes the operations fail.
getall Returns the values of the semaphore set as an array.
getncnt ( SEM ) Returns the number of processes waiting for the semaphore `SEM` to become greater than its current value
getpid ( SEM ) Returns the process id of the last process that performed an operation on the semaphore `SEM`.
getval ( SEM ) Returns the current value of the semaphore `SEM`.
getzcnt ( SEM ) Returns the number of processes waiting for the semaphore `SEM` to become zero.
id Returns the system identifier for the semaphore set.
op ( OPLIST ) `OPLIST` is a list of operations to pass to `semop`. `OPLIST` is a concatenation of smaller lists, each which has three values. The first is the semaphore number, the second is the operation and the last is a flags value. See [semop(2)](http://man.he.net/man2/semop) for more details. For example
```
$sem->op(
0, -1, IPC_NOWAIT,
1, 1, IPC_NOWAIT
);
```
remove Remove and destroy the semaphore set from the system.
set ( STAT )
set ( NAME => VALUE [, NAME => VALUE ...] ) `set` will set the following values of the `stat` structure associated with the semaphore set.
```
uid
gid
mode (only the permission bits)
```
`set` accepts either a stat object, as returned by the `stat` method, or a list of *name*-*value* pairs.
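For example, both forms might be used like this (the mode value is just an illustration):

```
# Name/value pairs:
$sem->set(mode => 0660);

# Or via a stat object:
my $ds = $sem->stat;
$ds->mode(0660);
$sem->set($ds);
```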
setall ( VALUES ) Sets all values in the semaphore set to those given on the `VALUES` list. `VALUES` must contain the correct number of values.
setval ( N , VALUE ) Set the `N`th value in the semaphore set to `VALUE`
stat Returns an object of type `IPC::Semaphore::stat` which is a sub-class of `Class::Struct`. It provides the following fields. For a description of these fields see your system documentation.
```
uid
gid
cuid
cgid
mode
ctime
otime
nsems
```
SEE ALSO
---------
<IPC::SysV>, <Class::Struct>, [semget(2)](http://man.he.net/man2/semget), [semctl(2)](http://man.he.net/man2/semctl), [semop(2)](http://man.he.net/man2/semop)
AUTHORS
-------
Graham Barr <[email protected]>, Marcus Holland-Moritz <[email protected]>
COPYRIGHT
---------
Version 2.x, Copyright (C) 2007-2013, Marcus Holland-Moritz.
Version 1.x, Copyright (c) 1997, Graham Barr.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl perlmodstyle perlmodstyle
============
CONTENTS
--------
* [NAME](#NAME)
* [INTRODUCTION](#INTRODUCTION)
* [QUICK CHECKLIST](#QUICK-CHECKLIST)
+ [Before you start](#Before-you-start)
+ [The API](#The-API)
+ [Stability](#Stability)
+ [Documentation](#Documentation)
+ [Release considerations](#Release-considerations)
* [BEFORE YOU START WRITING A MODULE](#BEFORE-YOU-START-WRITING-A-MODULE)
+ [Has it been done before?](#Has-it-been-done-before?)
+ [Do one thing and do it well](#Do-one-thing-and-do-it-well)
+ [What's in a name?](#What's-in-a-name?)
+ [Get feedback before publishing](#Get-feedback-before-publishing)
* [DESIGNING AND WRITING YOUR MODULE](#DESIGNING-AND-WRITING-YOUR-MODULE)
+ [To OO or not to OO?](#To-OO-or-not-to-OO?)
+ [Designing your API](#Designing-your-API)
+ [Strictness and warnings](#Strictness-and-warnings)
+ [Backwards compatibility](#Backwards-compatibility)
+ [Error handling and messages](#Error-handling-and-messages)
* [DOCUMENTING YOUR MODULE](#DOCUMENTING-YOUR-MODULE)
+ [POD](#POD)
+ [README, INSTALL, release notes, changelogs](#README,-INSTALL,-release-notes,-changelogs)
* [RELEASE CONSIDERATIONS](#RELEASE-CONSIDERATIONS)
+ [Version numbering](#Version-numbering)
+ [Pre-requisites](#Pre-requisites)
+ [Testing](#Testing)
+ [Packaging](#Packaging)
+ [Licensing](#Licensing)
* [COMMON PITFALLS](#COMMON-PITFALLS)
+ [Reinventing the wheel](#Reinventing-the-wheel)
+ [Trying to do too much](#Trying-to-do-too-much)
+ [Inappropriate documentation](#Inappropriate-documentation)
* [SEE ALSO](#SEE-ALSO)
* [AUTHOR](#AUTHOR)
NAME
----
perlmodstyle - Perl module style guide
INTRODUCTION
------------
This document attempts to describe the Perl Community's "best practice" for writing Perl modules. It extends the recommendations found in <perlstyle> , which should be considered required reading before reading this document.
While this document is intended to be useful to all module authors, it is particularly aimed at authors who wish to publish their modules on CPAN.
The focus is on elements of style which are visible to the users of a module, rather than those parts which are only seen by the module's developers. However, many of the guidelines presented in this document can be extrapolated and applied successfully to a module's internals.
This document differs from <perlnewmod> in that it is a style guide rather than a tutorial on creating CPAN modules. It provides a checklist against which modules can be compared to determine whether they conform to best practice, without necessarily describing in detail how to achieve this.
All the advice contained in this document has been gleaned from extensive conversations with experienced CPAN authors and users. Every piece of advice given here is the result of previous mistakes. This information is here to help you avoid the same mistakes and the extra work that would inevitably be required to fix them.
The first section of this document provides an itemized checklist; subsequent sections provide a more detailed discussion of the items on the list. The final section, "Common Pitfalls", describes some of the most popular mistakes made by CPAN authors.
QUICK CHECKLIST
----------------
For more detail on each item in this checklist, see below.
###
Before you start
* Don't re-invent the wheel
* Patch, extend or subclass an existing module where possible
* Do one thing and do it well
* Choose an appropriate name
* Get feedback before publishing
###
The API
* API should be understandable by the average programmer
* Simple methods for simple tasks
* Separate functionality from output
* Consistent naming of subroutines or methods
* Use named parameters (a hash or hashref) when there are more than two parameters
### Stability
* Ensure your module works under `use strict` and `-w`
* Stable modules should maintain backwards compatibility
### Documentation
* Write documentation in POD
* Document purpose, scope and target applications
* Document each publicly accessible method or subroutine, including params and return values
* Give examples of use in your documentation
* Provide a README file and perhaps also release notes, changelog, etc
* Provide links to further information (URL, email)
###
Release considerations
* Specify pre-requisites in Makefile.PL or Build.PL
* Specify Perl version requirements with `use`
* Include tests with your module
* Choose a sensible and consistent version numbering scheme (X.YY is the common Perl module numbering scheme)
* Increment the version number for every change, no matter how small
* Package the module using "make dist"
* Choose an appropriate license (GPL/Artistic is a good default)
BEFORE YOU START WRITING A MODULE
----------------------------------
Try not to launch headlong into developing your module without spending some time thinking first. A little forethought may save you a vast amount of effort later on.
###
Has it been done before?
You may not even need to write the module. Check whether it's already been done in Perl, and avoid re-inventing the wheel unless you have a good reason.
Good places to look for pre-existing modules include [MetaCPAN](https://metacpan.org) and [PrePAN](http://prepan.org) and asking on `[email protected]` (<https://lists.perl.org/list/module-authors.html>).
If an existing module **almost** does what you want, consider writing a patch, writing a subclass, or otherwise extending the existing module rather than rewriting it.
###
Do one thing and do it well
At the risk of stating the obvious, modules are intended to be modular. A Perl developer should be able to use modules to put together the building blocks of their application. However, it's important that the blocks are the right shape, and that the developer shouldn't have to use a big block when all they need is a small one.
Your module should have a clearly defined scope which is no longer than a single sentence. Can your module be broken down into a family of related modules?
Bad example:
"FooBar.pm provides an implementation of the FOO protocol and the related BAR standard."
Good example:
"Foo.pm provides an implementation of the FOO protocol. Bar.pm implements the related BAR protocol."
This means that if a developer only needs a module for the BAR standard, they should not be forced to install libraries for FOO as well.
###
What's in a name?
Make sure you choose an appropriate name for your module early on. This will help people find and remember your module, and make programming with your module more intuitive.
When naming your module, consider the following:
* Be descriptive (i.e. accurately describes the purpose of the module).
* Be consistent with existing modules.
* Reflect the functionality of the module, not the implementation.
* Avoid starting a new top-level hierarchy, especially if a suitable hierarchy already exists under which you could place your module.
###
Get feedback before publishing
If you have never uploaded a module to CPAN before (and even if you have), you are strongly encouraged to get feedback on [PrePAN](http://prepan.org). PrePAN is a site dedicated to discussing ideas for CPAN modules with other Perl developers and is a great resource for new (and experienced) Perl developers.
You should also try to get feedback from people who are already familiar with the module's application domain and the CPAN naming system. Authors of similar modules, or modules with similar names, may be a good place to start, as are community sites like [Perl Monks](https://www.perlmonks.org).
DESIGNING AND WRITING YOUR MODULE
----------------------------------
Considerations for module design and coding:
###
To OO or not to OO?
Your module may be object oriented (OO) or not, or it may have both kinds of interfaces available. There are pros and cons of each technique, which should be considered when you design your API.
In *Perl Best Practices* (copyright 2004, Published by O'Reilly Media, Inc.), Damian Conway provides a list of criteria to use when deciding if OO is the right fit for your problem:
* The system being designed is large, or is likely to become large.
* The data can be aggregated into obvious structures, especially if there's a large amount of data in each aggregate.
* The various types of data aggregate form a natural hierarchy that facilitates the use of inheritance and polymorphism.
* You have a piece of data on which many different operations are applied.
* You need to perform the same general operations on related types of data, but with slight variations depending on the specific type of data the operations are applied to.
* It's likely you'll have to add new data types later.
* The typical interactions between pieces of data are best represented by operators.
* The implementation of individual components of the system is likely to change over time.
* The system design is already object-oriented.
* Large numbers of other programmers will be using your code modules.
Think carefully about whether OO is appropriate for your module. Gratuitous object orientation results in complex APIs which are difficult for the average module user to understand or use.
###
Designing your API
Your interfaces should be understandable by an average Perl programmer. The following guidelines may help you judge whether your API is sufficiently straightforward:
Write simple routines to do simple things. It's better to have numerous simple routines than a few monolithic ones. If your routine changes its behaviour significantly based on its arguments, it's a sign that you should have two (or more) separate routines.
Separate functionality from output. Return your results in the most generic form possible and allow the user to choose how to use them. The most generic form possible is usually a Perl data structure which can then be used to generate a text report, HTML, XML, a database query, or whatever else your users require.
If your routine iterates through some kind of list (such as a list of files, or records in a database) you may consider providing a callback so that users can manipulate each element of the list in turn. File::Find provides an example of this with its `find(\&wanted, $dir)` syntax.
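For instance, File::Find's callback style looks like this (a minimal sketch):

```
use File::Find;

# The coderef is called once per file or directory found under '.'
find(sub { print "$File::Find::name\n" if -f }, '.');
```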
Provide sensible shortcuts and defaults. Don't require every module user to jump through the same hoops to achieve a simple result. You can always include optional parameters or routines for more complex or non-standard behaviour. If most of your users have to type a few almost identical lines of code when they start using your module, it's a sign that you should have made that behaviour a default. Another good indicator that you should use defaults is if most of your users call your routines with the same arguments.
Naming conventions Your naming should be consistent. For instance, it's better to have:
```
display_day();
display_week();
display_year();
```
than
```
display_day();
week_display();
show_year();
```
This applies equally to method names, parameter names, and anything else which is visible to the user (and most things that aren't!)
Parameter passing Use named parameters. It's easier to use a hash like this:
```
$obj->do_something(
name => "wibble",
type => "text",
size => 1024,
);
```
... than to have a long list of unnamed parameters like this:
```
$obj->do_something("wibble", "text", 1024);
```
While the list of arguments might work fine for one, two or even three arguments, any more arguments become hard for the module user to remember, and hard for the module author to manage. If you want to add a new parameter you will have to add it to the end of the list for backward compatibility, and this will probably make your list order unintuitive. Also, if many elements may be undefined you may see the following unattractive method calls:
```
$obj->do_something(undef, undef, undef, undef, undef, 1024);
```
Provide sensible defaults for parameters which have them. Don't make your users specify parameters which will almost always be the same.
The issue of whether to pass the arguments in a hash or a hashref is largely a matter of personal style.
The use of hash keys starting with a hyphen (`-name`) or entirely in upper case (`NAME`) is a relic of older versions of Perl in which ordinary lower case strings were not handled correctly by the `=>` operator. While some modules retain uppercase or hyphenated argument keys for historical reasons or as a matter of personal style, most new modules should use simple lower case keys. Whatever you choose, be consistent!
###
Strictness and warnings
Your module should run successfully under the strict pragma and should run without generating any warnings. Your module should also handle taint-checking where appropriate, though this can cause difficulties in many cases.
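A minimal module preamble that satisfies this guideline might look like the following sketch (the package name is a placeholder):

```
package My::Module;

use strict;
use warnings;

our $VERSION = '0.01';

# ... module code here ...

1;
```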
###
Backwards compatibility
Modules which are "stable" should not break backwards compatibility without at least a long transition phase and a major change in version number.
###
Error handling and messages
When your module encounters an error it should do one or more of:
* Return an undefined value.
* set `$Module::errstr` or similar (`errstr` is a common name used by DBI and other popular modules; if you choose something else, be sure to document it clearly).
* `warn()` or `carp()` a message to STDERR.
* `croak()` only when your module absolutely cannot figure out what to do. (`croak()` is a better version of `die()` for use within modules, which reports its errors from the perspective of the caller. See [Carp](carp) for details of `croak()`, `carp()` and other useful routines.)
* As an alternative to the above, you may prefer to throw exceptions using the Error module.
Configurable error handling can be very useful to your users. Consider offering a choice of levels for warning and debug messages, an option to send messages to a separate file, a way to specify an error-handling routine, or other such features. Be sure to default all these options to the commonest use.
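A sketch combining several of these approaches (the module and subroutine names are hypothetical):

```
package My::Module;
use strict;
use warnings;
use Carp qw(carp croak);

our $errstr;

sub load_config {
    my ($file) = @_;
    croak "load_config() requires a filename" unless defined $file;

    open my $fh, '<', $file or do {
        $errstr = "cannot open '$file': $!";
        carp $errstr;   # warn from the caller's perspective
        return;         # undef in scalar context, empty list in list context
    };
    local $/;
    return scalar <$fh>;
}

1;
```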
DOCUMENTING YOUR MODULE
------------------------
### POD
Your module should include documentation aimed at Perl developers. You should use Perl's "plain old documentation" (POD) for your general technical documentation, though you may wish to write additional documentation (white papers, tutorials, etc) in some other format. You need to cover the following subjects:
* A synopsis of the common uses of the module
* The purpose, scope and target applications of your module
* Use of each publicly accessible method or subroutine, including parameters and return values
* Examples of use
* Sources of further information
* A contact email address for the author/maintainer
The level of detail in Perl module documentation generally goes from less detailed to more detailed. Your SYNOPSIS section should contain a minimal example of use (perhaps as little as one line of code; skip the unusual use cases or anything not needed by most users); the DESCRIPTION should describe your module in broad terms, generally in just a few paragraphs; more detail of the module's routines or methods, lengthy code examples, or other in-depth material should be given in subsequent sections.
Ideally, someone who's slightly familiar with your module should be able to refresh their memory without hitting "page down". As your reader continues through the document, they should receive a progressively greater amount of knowledge.
The recommended order of sections in Perl module documentation is:
* NAME
* SYNOPSIS
* DESCRIPTION
* One or more sections or subsections giving greater detail of available methods and routines and any other relevant information.
* BUGS/CAVEATS/etc
* AUTHOR
* SEE ALSO
* COPYRIGHT and LICENSE
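A documentation skeleton following this order might look like the sketch below (names and addresses are placeholders):

```
=head1 NAME

My::Module - one-line description of the module

=head1 SYNOPSIS

    use My::Module;
    my $obj = My::Module->new(%args);

=head1 DESCRIPTION

A few paragraphs describing the module in broad terms.

=head1 METHODS

=head2 new

Describe parameters and return values here.

=head1 BUGS

=head1 AUTHOR

A. N. Author <author@example.com>

=head1 SEE ALSO

L<Some::Related::Module>

=head1 COPYRIGHT AND LICENSE

=cut
```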
Keep your documentation near the code it documents ("inline" documentation). Include POD for a given method right above that method's subroutine. This makes it easier to keep the documentation up to date, and avoids having to document each piece of code twice (once in POD and once in comments).
###
README, INSTALL, release notes, changelogs
Your module should also include a README file describing the module and giving pointers to further information (website, author email).
An INSTALL file should be included, and should contain simple installation instructions. When using ExtUtils::MakeMaker this will usually be:
```
perl Makefile.PL
make
make test
make install
```
When using Module::Build, this will usually be:
```
perl Build.PL
perl Build
perl Build test
perl Build install
```
Release notes or changelogs should be produced for each release of your software describing user-visible changes to your module, in terms relevant to the user.
Unless you have good reasons for using some other format (for example, a format used within your company), the convention is to name your changelog file `Changes`, and to follow the simple format described in <CPAN::Changes::Spec>.
RELEASE CONSIDERATIONS
-----------------------
###
Version numbering
Version numbers should indicate at least major and minor releases, and possibly sub-minor releases. A major release is one in which most of the functionality has changed, or in which major new functionality is added. A minor release is one in which a small amount of functionality has been added or changed. Sub-minor version numbers are usually used for changes which do not affect functionality, such as documentation patches.
The most common CPAN version numbering scheme looks like this:
```
1.00, 1.10, 1.11, 1.20, 1.30, 1.31, 1.32
```
A correct CPAN version number is a floating point number with at least 2 digits after the decimal. You can test whether it conforms to CPAN by using
```
perl -MExtUtils::MakeMaker -le 'print MM->parse_version(shift)' \
'Foo.pm'
```
If you want to release a 'beta' or 'alpha' version of a module but don't want CPAN.pm to list it as most recent, use an '\_' after the regular version number followed by at least 2 digits, e.g. 1.20\_01. If you do this, the following idiom is recommended:
```
our $VERSION = "1.12_01"; # so CPAN distribution will have
# right filename
our $XS_VERSION = $VERSION; # only needed if you have XS code
$VERSION = eval $VERSION; # so "use Module 0.002" won't warn on
# underscore
```
With that trick MakeMaker will only read the first line and thus read the underscore, while the perl interpreter will evaluate the $VERSION and convert the string into a number. Later operations that treat $VERSION as a number will then be able to do so without provoking a warning about $VERSION not being a number.
Never release anything (even a one-word documentation patch) without incrementing the number. Even a one-word documentation patch should result in a change in version at the sub-minor level.
Once picked, it is important to stick to your version scheme, without reducing the number of digits. This is because "downstream" packagers, such as the FreeBSD ports system, interpret the version numbers in various ways. If you change the number of digits in your version scheme, you can confuse these systems so they get the versions of your module out of order, which is obviously bad.
###
Pre-requisites
Module authors should carefully consider whether to rely on other modules, and which modules to rely on.
Most importantly, choose modules which are as stable as possible. In order of preference:
* Core Perl modules
* Stable CPAN modules
* Unstable CPAN modules
* Modules not available from CPAN
Specify version requirements for other Perl modules in the pre-requisites in your Makefile.PL or Build.PL.
Be sure to specify Perl version requirements both in Makefile.PL or Build.PL and with `require 5.6.1` or similar. See the documentation on [`use VERSION`](perlfunc#use-VERSION) for details.
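With ExtUtils::MakeMaker, for example, the prerequisites and minimum Perl version might be declared like this (module names and version numbers are illustrative):

```
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME             => 'My::Module',
    VERSION_FROM     => 'lib/My/Module.pm',
    MIN_PERL_VERSION => '5.006',
    PREREQ_PM        => {
        'List::Util' => '1.45',
    },
);
```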
### Testing
All modules should be tested before distribution (using "make disttest"), and the tests should also be available to people installing the modules (using "make test"). For Module::Build you would use the `make test` equivalent `perl Build test`.
The importance of these tests is proportional to the alleged stability of a module. A module which purports to be stable or which hopes to achieve wide use should adhere to as strict a testing regime as possible.
Useful modules to help you write tests (with minimum impact on your development process or your time) include Test::Simple, Carp::Assert and Test::Inline. For more sophisticated test suites there are Test::More and Test::MockObject.
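A minimal test file using Test::More might look like this sketch (the module name is a placeholder):

```
# t/basic.t
use strict;
use warnings;
use Test::More tests => 2;

use_ok('My::Module');
ok( My::Module->can('new'), 'constructor is available' );
```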
### Packaging
Modules should be packaged using one of the standard packaging tools. Currently you have the choice between ExtUtils::MakeMaker and the more platform independent Module::Build, allowing modules to be installed in a consistent manner. When using ExtUtils::MakeMaker, you can use "make dist" to create your package. Tools exist to help you to build your module in a MakeMaker-friendly style. These include ExtUtils::ModuleMaker and h2xs. See also <perlnewmod>.
### Licensing
Make sure that your module has a license, and that the full text of it is included in the distribution (unless it's a common one and the terms of the license don't require you to include it).
If you don't know what license to use, dual licensing under the GPL and Artistic licenses (the same as Perl itself) is a good idea. See [perlgpl](https://perldoc.perl.org/5.36.0/perlgpl) and [perlartistic](https://perldoc.perl.org/5.36.0/perlartistic).
COMMON PITFALLS
----------------
###
Reinventing the wheel
There are certain application spaces which are already very, very well served by CPAN. One example is templating systems, another is date and time modules, and there are many more. While it is a rite of passage to write your own version of these things, please consider carefully whether the Perl world really needs you to publish it.
###
Trying to do too much
Your module will be part of a developer's toolkit. It will not, in itself, form the **entire** toolkit. It's tempting to add extra features until your code is a monolithic system rather than a set of modular building blocks.
###
Inappropriate documentation
Don't fall into the trap of writing for the wrong audience. Your primary audience is a reasonably experienced developer with at least a moderate understanding of your module's application domain, who's just downloaded your module and wants to start using it as quickly as possible.
Tutorials, end-user documentation, research papers, FAQs etc are not appropriate in a module's main documentation. If you really want to write these, include them as sub-documents such as `My::Module::Tutorial` or `My::Module::FAQ` and provide a link in the SEE ALSO section of the main documentation.
SEE ALSO
---------
<perlstyle> General Perl style guide
<perlnewmod> How to create a new module
<perlpod> POD documentation
<podchecker> Verifies your POD's correctness
Packaging Tools <ExtUtils::MakeMaker>, <Module::Build>
Testing tools <Test::Simple>, <Test::Inline>, <Carp::Assert>, <Test::More>, <Test::MockObject>
<https://pause.perl.org/>
Perl Authors Upload Server. Contains links to information for module authors.
Any good book on software engineering
AUTHOR
------
Kirrily "Skud" Robert <[email protected]>
| programming_docs |
perl Net::Ping Net::Ping
=========
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [Functions](#Functions)
* [NOTES](#NOTES)
* [INSTALL](#INSTALL)
* [BUGS](#BUGS)
* [AUTHORS](#AUTHORS)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Net::Ping - check a remote host for reachability
SYNOPSIS
--------
```
use Net::Ping;
$p = Net::Ping->new();
print "$host is alive.\n" if $p->ping($host);
$p->close();
$p = Net::Ping->new("icmp");
$p->bind($my_addr); # Specify source interface of pings
foreach $host (@host_array)
{
print "$host is ";
print "NOT " unless $p->ping($host, 2);
print "reachable.\n";
sleep(1);
}
$p->close();
$p = Net::Ping->new("icmpv6");
$ip = "[fd00:dead:beef::4e]";
print "$ip is alive.\n" if $p->ping($ip);
$p = Net::Ping->new("tcp", 2);
# Try connecting to the www port instead of the echo port
$p->port_number(scalar(getservbyname("http", "tcp")));
while ($stop_time > time())
{
print "$host not reachable ", scalar(localtime()), "\n"
unless $p->ping($host);
sleep(300);
}
undef($p);
# Like tcp protocol, but with many hosts
$p = Net::Ping->new("syn");
$p->port_number(getservbyname("http", "tcp"));
foreach $host (@host_array) {
$p->ping($host);
}
while (($host,$rtt,$ip) = $p->ack) {
print "HOST: $host [$ip] ACKed in $rtt seconds.\n";
}
# High precision syntax (requires Time::HiRes)
$p = Net::Ping->new();
$p->hires();
($ret, $duration, $ip) = $p->ping($host, 5.5);
printf("$host [ip: $ip] is alive (packet return time: %.2f ms)\n",
1000 * $duration)
if $ret;
$p->close();
# For backward compatibility
print "$host is alive.\n" if pingecho($host);
```
DESCRIPTION
-----------
This module contains methods to test the reachability of remote hosts on a network. A ping object is first created with optional parameters, a variable number of hosts may be pinged multiple times and then the connection is closed.
You may choose one of six different protocols to use for the ping. The "tcp" protocol is the default. Note that a live remote host may still fail to be pingable by one or more of these protocols. For example, www.microsoft.com is generally alive but not "icmp" pingable.
With the "tcp" protocol the ping() method attempts to establish a connection to the remote host's echo port. If the connection is successfully established, the remote host is considered reachable. No data is actually echoed. This protocol does not require any special privileges but has higher overhead than the "udp" and "icmp" protocols.
Specifying the "udp" protocol causes the ping() method to send a udp packet to the remote host's echo port. If the echoed packet is received from the remote host and the received packet contains the same data as the packet that was sent, the remote host is considered reachable. This protocol does not require any special privileges. It should be borne in mind that, for a udp ping, a host will be reported as unreachable if it is not running the appropriate echo service. For Unix-like systems see [inetd(8)](http://man.he.net/man8/inetd) for more information.
If the "icmp" protocol is specified, the ping() method sends an icmp echo message to the remote host, which is what the UNIX ping program does. If the echoed message is received from the remote host and the echoed information is correct, the remote host is considered reachable. Specifying the "icmp" protocol requires that the program be run as root or that the program be setuid to root.
If the "external" protocol is specified, the ping() method attempts to use the `Net::Ping::External` module to ping the remote host. `Net::Ping::External` interfaces with your system's default `ping` utility to perform the ping, and generally produces relatively accurate results. If `Net::Ping::External` if not installed on your system, specifying the "external" protocol will result in an error.
If the "syn" protocol is specified, the ["ping"](#ping) method will only send a TCP SYN packet to the remote host then immediately return. If the syn packet was sent successfully, it will return a true value, otherwise it will return false. NOTE: Unlike the other protocols, the return value does NOT determine if the remote host is alive or not since the full TCP three-way handshake may not have completed yet. The remote host is only considered reachable if it receives a TCP ACK within the timeout specified. To begin waiting for the ACK packets, use the ["ack"](#ack) method as explained below. Use the "syn" protocol instead the "tcp" protocol to determine reachability of multiple destinations simultaneously by sending parallel TCP SYN packets. It will not block while testing each remote host. This protocol does not require any special privileges.
### Functions
Net::Ping->new([proto, timeout, bytes, device, tos, ttl, family, host, port, bind, gateway, retrans, pingstring, source\_verify, econnrefused, dontfrag, IPV6\_USE\_MIN\_MTU, IPV6\_RECVPATHMTU]) Create a new ping object. All of the parameters are optional and can be passed as a hash ref. All options besides the first 7 must be passed as a hash ref.
`proto` specifies the protocol to use when doing a ping. The current choices are "tcp", "udp", "icmp", "icmpv6", "stream", "syn", or "external". The default is "tcp".
If a `timeout` in seconds is provided, it is used when a timeout is not given to the ping() method (below). The timeout must be greater than 0 and the default, if not specified, is 5 seconds.
If the number of data bytes (`bytes`) is given, that many data bytes are included in the ping packet sent to the remote host. The number of data bytes is ignored if the protocol is "tcp". The minimum (and default) number of data bytes is 1 if the protocol is "udp" and 0 otherwise. The maximum number of data bytes that can be specified is 65535, but staying below the MTU (1472 bytes for ICMP) is recommended. Many small devices cannot deal with fragmented ICMP packets.
If `device` is given, this device is used to bind the source endpoint before sending the ping packet. I believe this only works with superuser privileges and with udp and icmp protocols at this time.
If <tos> is given, this ToS is configured into the socket.
For icmp, `ttl` can be specified to set the TTL of the outgoing packet.
Valid `family` values for IPv4:
```
4, v4, ip4, ipv4, AF_INET (constant)
```
Valid `family` values for IPv6:
```
6, v6, ip6, ipv6, AF_INET6 (constant)
```
The `host` argument implicitly specifies the family if the family argument is not given.
The `port` argument is only valid for a udp, tcp or stream ping, and will not do what you think it does. ping returns true when we get a "Connection refused"! The default is the echo port.
The `bind` argument specifies the local\_addr to bind to. By specifying a bind argument you don't need the bind method.
The `gateway` argument is only valid for IPv6, and requires a IPv6 address.
The `retrans` argument sets the exponential backoff rate; the default is 1.2. It matches the $def\_factor global.
The `dontfrag` argument sets the IP\_DONTFRAG bit, but note that IP\_DONTFRAG is not yet defined by Socket and is not available on many systems, in which case it is ignored. On Linux it also sets IP\_MTU\_DISCOVER to IP\_PMTUDISC\_DO, but oversized packets are not chunked; you need to set $data\_size manually.
$p->ping($host [, $timeout [, $family]]); Ping the remote host and wait for a response. $host can be either the hostname or the IP number of the remote host. The optional timeout must be greater than 0 seconds and defaults to whatever was specified when the ping object was created. Returns a success flag. If the hostname cannot be found or there is a problem with the IP number, the success flag returned will be undef. Otherwise, the success flag will be 1 if the host is reachable and 0 if it is not. For most practical purposes, undef and 0 can be treated as the same case. In array context, the elapsed time as well as the string form of the ip the host resolved to are also returned. The elapsed time value will be a float, as returned by the Time::HiRes::time() function, if hires() has been previously called, otherwise it is returned as an integer.
$p->source\_verify( { 0 | 1 } ); Allows source endpoint verification to be enabled or disabled. This is useful for those remote destinations with multiples interfaces where the response may not originate from the same endpoint that the original destination endpoint was sent to. This only affects udp and icmp protocol pings.
This is enabled by default.
$p->service\_check( { 0 | 1 } ); Set whether or not the connect behavior should enforce remote service availability as well as reachability. Normally, if the remote server reported ECONNREFUSED, it must have been reachable because of the status packet that it reported. With this option enabled, the full three-way tcp handshake must have been established successfully before it will claim it is reachable. NOTE: It still does nothing more than connect and disconnect. It does not speak any protocol (i.e., HTTP or FTP) to ensure the remote server is sane in any way. The remote server CPU could be grinding to a halt and unresponsive to any clients connecting, but if the kernel throws the ACK packet, it is considered alive anyway. To really determine if the server is responding well would be application specific and is beyond the scope of Net::Ping. For udp protocol, enabling this option demands that the remote server replies with the same udp data that it was sent as defined by the udp echo service.
This affects the "udp", "tcp", and "syn" protocols.
This is disabled by default.
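For example, to insist on a completed handshake when probing a web port (a sketch; the host name is a placeholder):

```
use Net::Ping;

my $p = Net::Ping->new("tcp", 2);
$p->port_number(scalar getservbyname("http", "tcp"));
$p->service_check(1);     # ECONNREFUSED now counts as unreachable
print "web service is up\n" if $p->ping("www.example.com");
$p->close();
```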
$p->tcp\_service\_check( { 0 | 1 } ); Deprecated method, but does the same as service\_check() method.
$p->hires( { 0 | 1 } ); With 1 causes this module to use Time::HiRes module, allowing milliseconds to be returned by subsequent calls to ping().
$p->time The current time, hires or not.
$p->socket\_blocking\_mode( $fh, $mode ); Sets or clears the O\_NONBLOCK flag on a file handle.
$p->IPV6\_USE\_MIN\_MTU With argument sets the option. Without returns the option value.
$p->IPV6\_RECVPATHMTU Requests notification of the applicable IPv6 path MTU.
With argument sets the option. Without returns the option value.
$p->IPV6\_HOPLIMIT With argument sets the option. Without returns the option value.
$p->IPV6\_REACHCONF *NYI* Sets ipv6 reachability IPV6\_REACHCONF was removed in RFC3542. ping6 -R supports it. IPV6\_REACHCONF requires root/admin permissions.
With argument sets the option. Without returns the option value.
Not yet implemented.
$p->bind($local\_addr); Sets the source address from which pings will be sent. This must be the address of one of the interfaces on the local host. $local\_addr may be specified as a hostname or as a text IP address such as "192.168.1.1".
If the protocol is set to "tcp", this method may be called any number of times, and each call to the ping() method (below) will use the most recent $local\_addr. If the protocol is "icmp" or "udp", then bind() must be called at most once per object, and (if it is called at all) must be called before the first call to ping() for that object.
The bind() call can be omitted when specifying the `bind` option to new().
$p->message\_type([$ping\_type]); When you are using the "icmp" protocol, this call permits changing the message type to 'echo' or 'timestamp' (only for IPv4, see RFC 792).
Without argument, it returns the currently used icmp protocol message type. By default, it returns 'echo'.
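A sketch of switching to timestamp requests (requires root privileges; the address is a placeholder):

```
use Net::Ping;

my $p = Net::Ping->new("icmp");
$p->message_type('timestamp');      # IPv4 only
print "got a timestamp reply\n" if $p->ping("192.0.2.1");
$p->close();
```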
$p->open($host); When you are using the "stream" protocol, this call pre-opens the tcp socket. It's only necessary to do this if you want to provide a different timeout when creating the connection, or remove the overhead of establishing the connection from the first ping. If you don't call `open()`, the connection is automatically opened the first time `ping()` is called. This call simply does nothing if you are using any protocol other than stream.
The $host argument can be omitted when specifying the `host` option to new().
$p->ack( [ $host ] ); When using the "syn" protocol, use this method to determine the reachability of the remote host. This method is meant to be called up to as many times as ping() was called. Each call returns the host (as passed to ping()) that came back with the TCP ACK. The order in which the hosts are returned may not necessarily be the same order in which they were SYN queued using the ping() method. If the timeout is reached before the TCP ACK is received, or if the remote host is not listening on the port attempted, then the TCP connection will not be established and ack() will return undef. In list context, the host, the ack time, the dotted ip string, and the port number will be returned instead of just the host. If the optional `$host` argument is specified, the return value will be pertaining to that host only. This call simply does nothing if you are using any protocol other than "syn".
When ["new"](#new) had a host option, this host will be used. Without `$host` argument, all hosts are scanned.
$p->nack( $failed\_ack\_host ); The reason that `host $failed_ack_host` did not receive a valid ACK. Useful to find out why when `ack($fail_ack_host)` returns a false value.
$p->ack\_unfork($host) The variant called by ["ack"](#ack) with the "syn" protocol and `$syn_forking` enabled.
$p->ping\_icmp([$host, $timeout, $family]) The ["ping"](#ping) method used with the icmp protocol.
$p->ping\_icmpv6([$host, $timeout, $family]) The ["ping"](#ping) method used with the icmpv6 protocol.
$p->ping\_stream([$host, $timeout, $family]) The ["ping"](#ping) method used with the stream protocol.
Perform a stream ping. If the tcp connection isn't already open, it opens it. It then sends some data and waits for a reply. It leaves the stream open on exit.
$p->ping\_syn([$host, $ip, $start\_time, $stop\_time]) The ["ping"](#ping) method used with the syn protocol. Sends a TCP SYN packet to host specified.
$p->ping\_syn\_fork([$host, $timeout, $family]) The ["ping"](#ping) method used with the forking syn protocol.
$p->ping\_tcp([$host, $timeout, $family]) The ["ping"](#ping) method used with the tcp protocol.
$p->ping\_udp([$host, $timeout, $family]) The ["ping"](#ping) method used with the udp protocol.
Perform a udp echo ping. Construct a message of at least the one-byte sequence number and any additional data bytes. Send the message out and wait for a message to come back. If we get a message, make sure all of its parts match. If they do, we are done. Otherwise go back and wait for the message until we run out of time. Return the result of our efforts.
$p->ping\_external([$host, $timeout, $family]) The ["ping"](#ping) method used with the external protocol. Uses <Net::Ping::External> to do an external ping.
$p->tcp\_connect([$ip, $timeout]) Initiates a TCP connection, for a tcp ping.
$p->tcp\_echo([$ip, $timeout, $pingstring]) Performs a TCP echo. It writes the given string to the socket and then reads it back. It returns 1 on success, 0 on failure.
$p->close(); Close the network connection for this ping object. The network connection is also closed by "undef $p". The network connection is automatically closed if the ping object goes out of scope (e.g. $p is local to a subroutine and you leave the subroutine).
$p->port\_number([$port\_number]) When called with a port number, the port number used to ping is set to `$port_number` rather than using the echo port. It also has the effect of calling `$p->service_check(1)` causing a ping to return a successful response only if that specific port is accessible. This function returns the value of the port that ["ping"](#ping) will connect to.
$p->mselect A `select()` wrapper that compensates for platform peculiarities.
$p->ntop Platform abstraction over `inet_ntop()`
$p->checksum($msg) Do a checksum on the message. Basically sum all of the short words and fold the high order bits into the low order bits.
$p->icmp\_result Returns a list of addr, type, subcode.
pingecho($host [, $timeout]); To provide backward compatibility with the previous version of <Net::Ping>, a `pingecho()` subroutine is available with the same functionality as before. `pingecho()` uses the tcp protocol. The return values and parameters are the same as described for the ["ping"](#ping) method. This subroutine is obsolete and may be removed in a future version of <Net::Ping>.
wakeonlan($mac, [$host, [$port]]) Emit the popular wake-on-lan magic udp packet to wake up a local device. See also <Net::Wake>, but this has the mac address as 1st arg. `$host` should be the local gateway. Without it will broadcast.
Default host: '255.255.255.255' Default port: 9
```
perl -MNet::Ping=wakeonlan -e'wakeonlan "e0:69:95:35:68:d2"'
```
NOTES
-----
There will be less network overhead (and some efficiency in your program) if you specify either the udp or the icmp protocol. The tcp protocol will generate 2.5 times or more traffic for each ping than either udp or icmp. If many hosts are pinged frequently, you may wish to implement a small wait (e.g. 25ms or more) between each ping to avoid flooding your network with packets.
The icmp and icmpv6 protocols require that the program be run as root or that it be setuid to root. The other protocols do not require special privileges, but not all network devices implement tcp or udp echo.
Local hosts should normally respond to pings within milliseconds. However, on a very congested network it may take up to 3 seconds or longer to receive an echo packet from the remote host. If the timeout is set too low under these conditions, it will appear that the remote host is not reachable (which is almost the truth).
Reachability doesn't necessarily mean that the remote host is actually functioning beyond its ability to echo packets. tcp is slightly better at indicating the health of a system than icmp because it uses more of the networking stack to respond.
Because of a lack of anything better, this module uses its own routines to pack and unpack ICMP packets. It would be better for a separate module to be written which understands all of the different kinds of ICMP packets.
INSTALL
-------
The latest source tree is available via git:
```
git clone https://github.com/rurban/Net-Ping.git
cd Net-Ping
```
The tarball can be created as follows:
```
perl Makefile.PL ; make ; make dist
```
The latest Net::Ping releases are included in cperl and perl5.
BUGS
----
For a list of known issues, visit:
<https://rt.cpan.org/NoAuth/Bugs.html?Dist=Net-Ping> and <https://github.com/rurban/Net-Ping/issues>
To report a new bug, visit:
<https://github.com/rurban/Net-Ping/issues>
AUTHORS
-------
```
Current maintainers:
perl11 (for cperl, with IPv6 support and more)
p5p (for perl5)
Previous maintainers:
[email protected] (Rob Brown)
Steve Peters
External protocol:
[email protected] (Colin McMillen)
Stream protocol:
[email protected] (Scott Bronson)
Wake-on-lan:
1999-2003 Clinton Wong
Original pingecho():
[email protected] (Andreas Karrer)
[email protected] (Paul Marquess)
Original Net::Ping author:
[email protected] (Russell Mosemann)
```
COPYRIGHT
---------
Copyright (c) 2017-2020, Reini Urban. All rights reserved.
Copyright (c) 2016, cPanel Inc. All rights reserved.
Copyright (c) 2012, Steve Peters. All rights reserved.
Copyright (c) 2002-2003, Rob Brown. All rights reserved.
Copyright (c) 2001, Colin McMillen. All rights reserved.
This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself.
perl Term::ReadLine Term::ReadLine
==============
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
* [Minimal set of supported functions](#Minimal-set-of-supported-functions)
* [Additional supported functions](#Additional-supported-functions)
* [EXPORTS](#EXPORTS)
* [ENVIRONMENT](#ENVIRONMENT)
NAME
----
Term::ReadLine - Perl interface to various `readline` packages. If no real package is found, substitutes stubs instead of basic functions.
SYNOPSIS
--------
```
use Term::ReadLine;
my $term = Term::ReadLine->new('Simple Perl calc');
my $prompt = "Enter your arithmetic expression: ";
my $OUT = $term->OUT || \*STDOUT;
while ( defined ($_ = $term->readline($prompt)) ) {
my $res = eval($_);
warn $@ if $@;
print $OUT $res, "\n" unless $@;
$term->addhistory($_) if /\S/;
}
```
DESCRIPTION
-----------
This package is just a front end to some other packages. It's a stub to set up a common interface to the various ReadLine implementations found on CPAN (under the `Term::ReadLine::*` namespace).
Minimal set of supported functions
-----------------------------------
All the supported functions should be called as methods, i.e., either as
```
$term = Term::ReadLine->new('name');
```
or as
```
$term->addhistory('row');
```
where $term is a return value of Term::ReadLine->new().
`ReadLine` returns the actual package that executes the commands. Among possible values are `Term::ReadLine::Gnu`, `Term::ReadLine::Perl`, `Term::ReadLine::Stub`.
`new` returns the handle for subsequent calls to following functions. Argument is the name of the application. Optionally can be followed by two arguments for `IN` and `OUT` filehandles. These arguments should be globs.
`readline` gets an input line, *possibly* with actual `readline` support. Trailing newline is removed. Returns `undef` on `EOF`.
`addhistory` adds the line to the history of input, from where it can be used if the actual `readline` is present.
`IN`, `OUT`
return the filehandles for input and output or `undef` if `readline` input and output cannot be used for Perl.
`MinLine` If argument is specified, it is an advice on minimal size of line to be included into history. `undef` means do not include anything into history. Returns the old value.
`findConsole` returns an array with two strings that give most appropriate names for files for input and output using conventions `"<$in"`, `">out"`.
The strings returned may not be useful for 3-argument open().
`Attribs` returns a reference to a hash which describes the internal configuration of the package. Names of keys in this hash conform to standard conventions with the leading `rl_` stripped.
`Features` Returns a reference to a hash with keys being features present in current implementation. Several optional features are used in the minimal interface: `appname` should be present if the first argument to `new` is recognized, and `minline` should be present if `MinLine` method is not dummy. `autohistory` should be present if lines are put into history automatically (maybe subject to `MinLine`), and `addhistory` if `addhistory` method is not dummy.
If `Features` method reports a feature `attribs` as present, the method `Attribs` is not dummy.
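For example (a sketch):

```
use Term::ReadLine;

my $term     = Term::ReadLine->new('myapp');
my $features = $term->Features;

print "backend: ", $term->ReadLine, "\n";
print "history is recorded automatically\n" if $features->{autohistory};
```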
Additional supported functions
-------------------------------
Actually `Term::ReadLine` can use some other package, that will support a richer set of commands.
All these commands are callable via method interface and have names which conform to standard conventions with the leading `rl_` stripped.
The stub package included with the perl distribution allows some additional methods:
`tkRunning` makes Tk event loop run when waiting for user input (i.e., during `readline` method).
`event_loop` Registers call-backs to wait for user input (i.e., during `readline` method). This supersedes tkRunning.
The first call-back registered is the call back for waiting. It is expected that the callback will call the current event loop until there is something waiting to get on the input filehandle. The parameter passed in is the return value of the second call back.
The second call-back registered is the call back for registration. The input filehandle (often STDIN, but not necessarily) will be passed in.
For example, with AnyEvent:
```
$term->event_loop(sub {
my $data = shift;
$data->[1] = AE::cv();
$data->[1]->recv();
}, sub {
my $fh = shift;
my $data = [];
$data->[0] = AE::io($fh, 0, sub { $data->[1]->send() });
$data;
});
```
The second call-back is optional if the call back is registered prior to the call to $term->readline.
Deregistration is done in this case by calling event\_loop with `undef` as its parameter:
```
$term->event_loop(undef);
```
This will cause the data array ref to be removed, allowing normal garbage collection to clean it up. With AnyEvent, that will cause $data->[0] to be cleaned up, and AnyEvent will automatically cancel the watcher at that time. If another loop requires more than that to clean up a file watcher, that will be up to the caller to handle.
`ornaments` makes the command line stand out by using termcap data. The argument to `ornaments` should be 0, 1, or a string of a form `"aa,bb,cc,dd"`. Four components of this string should be names of *terminal capacities*, first two will be issued to make the prompt standout, last two to make the input line standout.
`newTTY` takes two arguments which are input filehandle and output filehandle. Switches to use these filehandles.
One can check whether the currently loaded ReadLine package supports these methods by checking for corresponding `Features`.
EXPORTS
-------
None
ENVIRONMENT
-----------
The environment variable `PERL_RL` governs which ReadLine clone is loaded. If the value is false, a dummy interface is used. If the value is true, it should be tail of the name of the package to use, such as `Perl` or `Gnu`.
As a special case, if the value of this variable is space-separated, the tail might be used to disable the ornaments by setting the tail to be `o=0` or `ornaments=0`. The head should be as described above, for example `Perl` or empty.
If the variable is not set, or if the head of the space-separated list is empty, the best available package is loaded.
```
export "PERL_RL=Perl o=0" # Use Perl ReadLine sans ornaments
export "PERL_RL= o=0" # Use best available ReadLine sans ornaments
```
(Note that processing of `PERL_RL` for ornaments is in the discretion of the particular used `Term::ReadLine::*` package).
| programming_docs |
perl Test2::Event::Note Test2::Event::Note
==================
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
* [SYNOPSIS](#SYNOPSIS)
* [ACCESSORS](#ACCESSORS)
* [SOURCE](#SOURCE)
* [MAINTAINERS](#MAINTAINERS)
* [AUTHORS](#AUTHORS)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Test2::Event::Note - Note event type
DESCRIPTION
-----------
Notes, typically rendered to STDOUT.
SYNOPSIS
--------
```
use Test2::API qw/context/;
use Test2::Event::Note;
my $ctx = context();
my $event = $ctx->Note($message);
```
ACCESSORS
---------
$note->message The message for the note.
SOURCE
------
The source code repository for Test2 can be found at *http://github.com/Test-More/test-more/*.
MAINTAINERS
-----------
Chad Granum <[email protected]>
AUTHORS
-------
Chad Granum <[email protected]>
COPYRIGHT
---------
Copyright 2020 Chad Granum <[email protected]>.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See *http://dev.perl.org/licenses/*
perl Test2::Util Test2::Util
===========
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
* [EXPORTS](#EXPORTS)
* [NOTES && CAVEATS](#NOTES-&&-CAVEATS)
* [SOURCE](#SOURCE)
* [MAINTAINERS](#MAINTAINERS)
* [AUTHORS](#AUTHORS)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Test2::Util - Tools used by Test2 and friends.
DESCRIPTION
-----------
Collection of tools used by [Test2](test2) and friends.
EXPORTS
-------
All exports are optional. You must specify subs to import.
($success, $error) = try { ... } Eval the codeblock, return success or failure, and the error message. This code protects $@ and $!; they will be restored by the end of the run. This code also temporarily blocks `$SIG{__DIE__}` handlers.
protect { ... } Similar to try, except that it does not catch exceptions. The idea here is to protect $@ and $! from changes. $@ and $! will be restored to whatever they were before the run so long as it is successful. If the run fails $! will still be restored, but $@ will contain the exception being thrown.
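For example (a sketch):

```
use Test2::Util qw/try protect/;

my ($ok, $err) = try { die "boom\n" };
print "caught: $err" unless $ok;     # $@ and $! are unchanged here

protect { system($^X, '-e', '1') };  # changes to $@ and $! inside are rolled back
```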
CAN\_FORK True if this system is capable of true or pseudo-fork.
CAN\_REALLY\_FORK True if the system can really fork. This will be false for systems where fork is emulated.
CAN\_THREAD True if this system is capable of using threads.
USE\_THREADS Returns true if threads are enabled, false if they are not.
get\_tid This will return the id of the current thread when threads are enabled, otherwise it returns 0.
my $file = pkg\_to\_file($package) Convert a package name to a filename.
$string = ipc\_separator() Get the IPC separator. Currently this is always the string `'~'`.
$string = gen\_uid() Generate a unique id (NOT A UUID). This will typically be the process id, the thread id, the time, and an incrementing integer all joined with the `ipc_separator()`.
These IDs are unique enough for most purposes. For identical IDs to be generated, you must have 2 processes with the same PID generate IDs at the same time with the same current state of the incrementing integer. This is a perfectly reasonable thing to expect to happen across multiple machines, but is quite unlikely to happen on one machine.
This can fail to be unique if a process generates an ID, calls exec, and does it again after the exec, and it all happens in less than a second. It can also happen if the system's process IDs cycle in less than a second, allowing 2 different programs that use this generator to run with the same PID in less than a second. Both of these cases are sufficiently unlikely. If you need universally unique IDs, or IDs that are unique in these conditions, look at <Data::UUID>.
($ok, $err) = do\_rename($old\_name, $new\_name) Rename a file, this wraps `rename()` in a way that makes it more reliable cross-platform when trying to rename files you recently altered.
($ok, $err) = do\_unlink($filename) Unlink a file, this wraps `unlink()` in a way that makes it more reliable cross-platform when trying to unlink files you recently altered.
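For example (a sketch with placeholder filenames):

```
use Test2::Util qw/do_rename do_unlink/;

my ($ok, $err) = do_rename('old.log', 'new.log');
warn "rename failed: $err" unless $ok;

($ok, $err) = do_unlink('new.log');
warn "unlink failed: $err" unless $ok;
```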
($ok, $err) = try\_sig\_mask { ... } Complete an action with several signals masked, they will be unmasked at the end allowing any signals that were intercepted to get handled.
This is primarily used when you need to make several actions atomic (against some signals anyway).
Signals that are intercepted:
```
SIGINT SIGALRM SIGHUP SIGTERM SIGUSR1 SIGUSR2
```
NOTES && CAVEATS
-----------------
5.10.0 Perl 5.10.0 has a bug when compiled with newer gcc versions. This bug causes a segfault whenever a new thread is launched. Test2 will attempt to detect this, and note that the system is not capable of forking when it is detected.
Devel::Cover Devel::Cover does not support threads. CAN\_THREAD will return false if Devel::Cover is loaded before the check is first run.
SOURCE
------
The source code repository for Test2 can be found at *http://github.com/Test-More/test-more/*.
MAINTAINERS
-----------
Chad Granum <[email protected]>
AUTHORS
-------
Chad Granum <[email protected]>
Kent Fredric <[email protected]>
COPYRIGHT
---------
Copyright 2020 Chad Granum <[email protected]>.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See *http://dev.perl.org/licenses/*
perl perlfunc perlfunc
========
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
+ [Perl Functions by Category](#Perl-Functions-by-Category)
+ [Portability](#Portability)
+ [Alphabetical Listing of Perl Functions](#Alphabetical-Listing-of-Perl-Functions)
+ [Non-function Keywords by Cross-reference](#Non-function-Keywords-by-Cross-reference)
- [perldata](#perldata)
- [perlmod](#perlmod)
- [perlobj](#perlobj)
- [perlop](#perlop)
- [perlsub](#perlsub)
- [perlsyn](#perlsyn)
NAME
----
perlfunc - Perl builtin functions
DESCRIPTION
-----------
The functions in this section can serve as terms in an expression. They fall into two major categories: list operators and named unary operators. These differ in their precedence relationship with a following comma. (See the precedence table in <perlop>.) List operators take more than one argument, while unary operators can never take more than one argument. Thus, a comma terminates the argument of a unary operator, but merely separates the arguments of a list operator. A unary operator generally provides scalar context to its argument, while a list operator may provide either scalar or list contexts for its arguments. If it does both, the scalar arguments come first and the list argument follows, and there can only ever be one such list argument. For instance, [`splice`](#splice-ARRAY%2COFFSET%2CLENGTH%2CLIST) has three scalar arguments followed by a list, whereas [`gethostbyname`](#gethostbyname-NAME) has four scalar arguments.
In the syntax descriptions that follow, list operators that expect a list (and provide list context for elements of the list) are shown with LIST as an argument. Such a list may consist of any combination of scalar arguments or list values; the list values will be included in the list as if each individual element were interpolated at that point in the list, forming a longer single-dimensional list value. Commas should separate literal elements of the LIST.
Any function in the list below may be used either with or without parentheses around its arguments. (The syntax descriptions omit the parentheses.) If you use parentheses, the simple but occasionally surprising rule is this: It *looks* like a function, therefore it *is* a function, and precedence doesn't matter. Otherwise it's a list operator or unary operator, and precedence does matter. Whitespace between the function and left parenthesis doesn't count, so sometimes you need to be careful:
```
print 1+2+4; # Prints 7.
print(1+2) + 4; # Prints 3.
print (1+2)+4; # Also prints 3!
print +(1+2)+4; # Prints 7.
print ((1+2)+4); # Prints 7.
```
If you run Perl with the [`use warnings`](warnings) pragma, it can warn you about this. For example, the third line above produces:
```
print (...) interpreted as function at - line 1.
Useless use of integer addition in void context at - line 1.
```
A few functions take no arguments at all, and therefore work as neither unary nor list operators. These include such functions as [`time`](#time) and [`endpwent`](#endpwent). For example, `time+86_400` always means `time() + 86_400`.
For functions that can be used in either a scalar or list context, nonabortive failure is generally indicated in scalar context by returning the undefined value, and in list context by returning the empty list.
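For example (a brief sketch; the nonexistent path below is just a placeholder):

```
# List context: a failed stat returns the empty list.
my @info = stat("/no/such/file");
warn "stat failed: $!\n" unless @info;

# Scalar context: readline returns undef at end of file.
open(my $fh, "<", $0) or die "open: $!";   # read this script itself
my $lines = 0;
$lines++ while defined(readline($fh));
```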
Remember the following important rule: There is **no rule** that relates the behavior of an expression in list context to its behavior in scalar context, or vice versa. It might do two totally different things. Each operator and function decides which sort of value would be most appropriate to return in scalar context. Some operators return the length of the list that would have been returned in list context. Some operators return the first value in the list. Some operators return the last value in the list. Some operators return a count of successful operations. In general, they do what you want, unless you want consistency.
A named array in scalar context is quite different from what would at first glance appear to be a list in scalar context. You can't get a list like `(1,2,3)` into being in scalar context, because the compiler knows the context at compile time. It would generate the scalar comma operator there, not the list concatenation version of the comma. That means it was never a list to start with.
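A short illustration of the difference:

```
my @digits = (1, 2, 3);
my $count  = @digits;    # 3: an array in scalar context yields its length
my $last   = (1, 2, 3);  # 3: the scalar comma operator evaluates to its
                         #    last operand (and warns under "use warnings")
```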
In general, functions in Perl that serve as wrappers for system calls ("syscalls") of the same name (like [chown(2)](http://man.he.net/man2/chown), [fork(2)](http://man.he.net/man2/fork), [closedir(2)](http://man.he.net/man2/closedir), etc.) return true when they succeed and [`undef`](#undef-EXPR) otherwise, as is usually mentioned in the descriptions below. This is different from the C interfaces, which return `-1` on failure. Exceptions to this rule include [`wait`](#wait), [`waitpid`](#waitpid-PID%2CFLAGS), and [`syscall`](#syscall-NUMBER%2C-LIST). System calls also set the special [`$!`](perlvar#%24%21) variable on failure. Other functions do not, except accidentally.
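For instance (the directory name is a placeholder):

```
chdir("/no/such/dir")
    or warn "chdir failed: $!\n";   # $! holds the system error text

# By contrast, wait, waitpid, and syscall return -1 on failure.
```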
Extension modules can also hook into the Perl parser to define new kinds of keyword-headed expression. These may look like functions, but may also look completely different. The syntax following the keyword is defined entirely by the extension. If you are an implementor, see ["PL\_keyword\_plugin" in perlapi](perlapi#PL_keyword_plugin) for the mechanism. If you are using such a module, see the module's documentation for details of the syntax that it defines.
###
Perl Functions by Category
Here are Perl's functions (including things that look like functions, like some keywords and named operators) arranged by category. Some functions appear in more than one place. Any warnings, including those produced by keywords, are described in <perldiag> and <warnings>.
Functions for SCALARs or strings [`chomp`](#chomp-VARIABLE), [`chop`](#chop-VARIABLE), [`chr`](#chr-NUMBER), [`crypt`](#crypt-PLAINTEXT%2CSALT), [`fc`](#fc-EXPR), [`hex`](#hex-EXPR), [`index`](#index-STR%2CSUBSTR%2CPOSITION), [`lc`](#lc-EXPR), [`lcfirst`](#lcfirst-EXPR), [`length`](#length-EXPR), [`oct`](#oct-EXPR), [`ord`](#ord-EXPR), [`pack`](#pack-TEMPLATE%2CLIST), [`q//`](#q%2FSTRING%2F), [`qq//`](#qq%2FSTRING%2F), [`reverse`](#reverse-LIST), [`rindex`](#rindex-STR%2CSUBSTR%2CPOSITION), [`sprintf`](#sprintf-FORMAT%2C-LIST), [`substr`](#substr-EXPR%2COFFSET%2CLENGTH%2CREPLACEMENT), [`tr///`](#tr%2F%2F%2F), [`uc`](#uc-EXPR), [`ucfirst`](#ucfirst-EXPR), [`y///`](#y%2F%2F%2F)
[`fc`](#fc-EXPR) is available only if the [`"fc"` feature](feature#The-%27fc%27-feature) is enabled or if it is prefixed with `CORE::`. The [`"fc"` feature](feature#The-%27fc%27-feature) is enabled automatically with a `use v5.16` (or higher) declaration in the current scope.
Regular expressions and pattern matching [`m//`](#m%2F%2F), [`pos`](#pos-SCALAR), [`qr//`](#qr%2FSTRING%2F), [`quotemeta`](#quotemeta-EXPR), [`s///`](#s%2F%2F%2F), [`split`](#split-%2FPATTERN%2F%2CEXPR%2CLIMIT), [`study`](#study-SCALAR)
Numeric functions [`abs`](#abs-VALUE), [`atan2`](#atan2-Y%2CX), [`cos`](#cos-EXPR), [`exp`](#exp-EXPR), [`hex`](#hex-EXPR), [`int`](#int-EXPR), [`log`](#log-EXPR), [`oct`](#oct-EXPR), [`rand`](#rand-EXPR), [`sin`](#sin-EXPR), [`sqrt`](#sqrt-EXPR), [`srand`](#srand-EXPR)
Functions for real @ARRAYs [`each`](#each-HASH), [`keys`](#keys-HASH), [`pop`](#pop-ARRAY), [`push`](#push-ARRAY%2CLIST), [`shift`](#shift-ARRAY), [`splice`](#splice-ARRAY%2COFFSET%2CLENGTH%2CLIST), [`unshift`](#unshift-ARRAY%2CLIST), [`values`](#values-HASH)
Functions for list data [`grep`](#grep-BLOCK-LIST), [`join`](#join-EXPR%2CLIST), [`map`](#map-BLOCK-LIST), [`qw//`](#qw%2FSTRING%2F), [`reverse`](#reverse-LIST), [`sort`](#sort-SUBNAME-LIST), [`unpack`](#unpack-TEMPLATE%2CEXPR)
Functions for real %HASHes [`delete`](#delete-EXPR), [`each`](#each-HASH), [`exists`](#exists-EXPR), [`keys`](#keys-HASH), [`values`](#values-HASH)
Input and output functions [`binmode`](#binmode-FILEHANDLE%2C-LAYER), [`close`](#close-FILEHANDLE), [`closedir`](#closedir-DIRHANDLE), [`dbmclose`](#dbmclose-HASH), [`dbmopen`](#dbmopen-HASH%2CDBNAME%2CMASK), [`die`](#die-LIST), [`eof`](#eof-FILEHANDLE), [`fileno`](#fileno-FILEHANDLE), [`flock`](#flock-FILEHANDLE%2COPERATION), [`format`](#format), [`getc`](#getc-FILEHANDLE), [`print`](#print-FILEHANDLE-LIST), [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST), [`read`](#read-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`readdir`](#readdir-DIRHANDLE), [`readline`](#readline-EXPR), [`rewinddir`](#rewinddir-DIRHANDLE), [`say`](#say-FILEHANDLE-LIST), [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`seekdir`](#seekdir-DIRHANDLE%2CPOS), [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT), [`syscall`](#syscall-NUMBER%2C-LIST), [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE), [`syswrite`](#syswrite-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`tell`](#tell-FILEHANDLE), [`telldir`](#telldir-DIRHANDLE), [`truncate`](#truncate-FILEHANDLE%2CLENGTH), [`warn`](#warn-LIST), [`write`](#write-FILEHANDLE)
[`say`](#say-FILEHANDLE-LIST) is available only if the [`"say"` feature](feature#The-%27say%27-feature) is enabled or if it is prefixed with `CORE::`. The [`"say"` feature](feature#The-%27say%27-feature) is enabled automatically with a `use v5.10` (or higher) declaration in the current scope.
Functions for fixed-length data or records [`pack`](#pack-TEMPLATE%2CLIST), [`read`](#read-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`syscall`](#syscall-NUMBER%2C-LIST), [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE), [`syswrite`](#syswrite-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`unpack`](#unpack-TEMPLATE%2CEXPR), [`vec`](#vec-EXPR%2COFFSET%2CBITS)
Functions for filehandles, files, or directories [`-*X*`](#-X-FILEHANDLE), [`chdir`](#chdir-EXPR), [`chmod`](#chmod-LIST), [`chown`](#chown-LIST), [`chroot`](#chroot-FILENAME), [`fcntl`](#fcntl-FILEHANDLE%2CFUNCTION%2CSCALAR), [`glob`](#glob-EXPR), [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR), [`link`](#link-OLDFILE%2CNEWFILE), [`lstat`](#lstat-FILEHANDLE), [`mkdir`](#mkdir-FILENAME%2CMODE), [`open`](#open-FILEHANDLE%2CMODE%2CEXPR), [`opendir`](#opendir-DIRHANDLE%2CEXPR), [`readlink`](#readlink-EXPR), [`rename`](#rename-OLDNAME%2CNEWNAME), [`rmdir`](#rmdir-FILENAME), [`select`](#select-FILEHANDLE), [`stat`](#stat-FILEHANDLE), [`symlink`](#symlink-OLDFILE%2CNEWFILE), [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE), [`umask`](#umask-EXPR), [`unlink`](#unlink-LIST), [`utime`](#utime-LIST)
Keywords related to the control flow of your Perl program [`break`](#break), [`caller`](#caller-EXPR), [`continue`](#continue-BLOCK), [`die`](#die-LIST), [`do`](#do-BLOCK), [`dump`](#dump-LABEL), [`eval`](#eval-EXPR), [`evalbytes`](#evalbytes-EXPR), [`exit`](#exit-EXPR), [`__FILE__`](#__FILE__), [`goto`](#goto-LABEL), [`last`](#last-LABEL), [`__LINE__`](#__LINE__), [`next`](#next-LABEL), [`__PACKAGE__`](#__PACKAGE__), [`redo`](#redo-LABEL), [`return`](#return-EXPR), [`sub`](#sub-NAME-BLOCK), [`__SUB__`](#__SUB__), [`wantarray`](#wantarray)
[`break`](#break) is available only if you enable the experimental [`"switch"` feature](feature#The-%27switch%27-feature) or use the `CORE::` prefix. The [`"switch"` feature](feature#The-%27switch%27-feature) also enables the `default`, `given` and `when` statements, which are documented in ["Switch Statements" in perlsyn](perlsyn#Switch-Statements). The [`"switch"` feature](feature#The-%27switch%27-feature) is enabled automatically with a `use v5.10` (or higher) declaration in the current scope. In Perl v5.14 and earlier, [`continue`](#continue-BLOCK) required the [`"switch"` feature](feature#The-%27switch%27-feature), like the other keywords.
[`evalbytes`](#evalbytes-EXPR) is only available with the [`"evalbytes"` feature](feature#The-%27unicode_eval%27-and-%27evalbytes%27-features) (see <feature>) or if prefixed with `CORE::`. [`__SUB__`](#__SUB__) is only available with the [`"current_sub"` feature](feature#The-%27current_sub%27-feature) or if prefixed with `CORE::`. Both the [`"evalbytes"`](feature#The-%27unicode_eval%27-and-%27evalbytes%27-features) and [`"current_sub"`](feature#The-%27current_sub%27-feature) features are enabled automatically with a `use v5.16` (or higher) declaration in the current scope.
Keywords related to scoping [`caller`](#caller-EXPR), [`import`](#import-LIST), [`local`](#local-EXPR), [`my`](#my-VARLIST), [`our`](#our-VARLIST), [`package`](#package-NAMESPACE), [`state`](#state-VARLIST), [`use`](#use-Module-VERSION-LIST)
[`state`](#state-VARLIST) is available only if the [`"state"` feature](feature#The-%27state%27-feature) is enabled or if it is prefixed with `CORE::`. The [`"state"` feature](feature#The-%27state%27-feature) is enabled automatically with a `use v5.10` (or higher) declaration in the current scope.
Miscellaneous functions [`defined`](#defined-EXPR), [`formline`](#formline-PICTURE%2CLIST), [`lock`](#lock-THING), [`prototype`](#prototype-FUNCTION), [`reset`](#reset-EXPR), [`scalar`](#scalar-EXPR), [`undef`](#undef-EXPR)
Functions for processes and process groups [`alarm`](#alarm-SECONDS), [`exec`](#exec-LIST), [`fork`](#fork), [`getpgrp`](#getpgrp-PID), [`getppid`](#getppid), [`getpriority`](#getpriority-WHICH%2CWHO), [`kill`](#kill-SIGNAL%2C-LIST), [`pipe`](#pipe-READHANDLE%2CWRITEHANDLE), [`qx//`](#qx%2FSTRING%2F), [`readpipe`](#readpipe-EXPR), [`setpgrp`](#setpgrp-PID%2CPGRP), [`setpriority`](#setpriority-WHICH%2CWHO%2CPRIORITY), [`sleep`](#sleep-EXPR), [`system`](#system-LIST), [`times`](#times), [`wait`](#wait), [`waitpid`](#waitpid-PID%2CFLAGS)
Keywords related to Perl modules [`do`](#do-EXPR), [`import`](#import-LIST), [`no`](#no-MODULE-VERSION-LIST), [`package`](#package-NAMESPACE), [`require`](#require-VERSION), [`use`](#use-Module-VERSION-LIST)
Keywords related to classes and object-orientation [`bless`](#bless-REF%2CCLASSNAME), [`dbmclose`](#dbmclose-HASH), [`dbmopen`](#dbmopen-HASH%2CDBNAME%2CMASK), [`package`](#package-NAMESPACE), [`ref`](#ref-EXPR), [`tie`](#tie-VARIABLE%2CCLASSNAME%2CLIST), [`tied`](#tied-VARIABLE), [`untie`](#untie-VARIABLE), [`use`](#use-Module-VERSION-LIST)
Low-level socket functions [`accept`](#accept-NEWSOCKET%2CGENERICSOCKET), [`bind`](#bind-SOCKET%2CNAME), [`connect`](#connect-SOCKET%2CNAME), [`getpeername`](#getpeername-SOCKET), [`getsockname`](#getsockname-SOCKET), [`getsockopt`](#getsockopt-SOCKET%2CLEVEL%2COPTNAME), [`listen`](#listen-SOCKET%2CQUEUESIZE), [`recv`](#recv-SOCKET%2CSCALAR%2CLENGTH%2CFLAGS), [`send`](#send-SOCKET%2CMSG%2CFLAGS%2CTO), [`setsockopt`](#setsockopt-SOCKET%2CLEVEL%2COPTNAME%2COPTVAL), [`shutdown`](#shutdown-SOCKET%2CHOW), [`socket`](#socket-SOCKET%2CDOMAIN%2CTYPE%2CPROTOCOL), [`socketpair`](#socketpair-SOCKET1%2CSOCKET2%2CDOMAIN%2CTYPE%2CPROTOCOL)
System V interprocess communication functions [`msgctl`](#msgctl-ID%2CCMD%2CARG), [`msgget`](#msgget-KEY%2CFLAGS), [`msgrcv`](#msgrcv-ID%2CVAR%2CSIZE%2CTYPE%2CFLAGS), [`msgsnd`](#msgsnd-ID%2CMSG%2CFLAGS), [`semctl`](#semctl-ID%2CSEMNUM%2CCMD%2CARG), [`semget`](#semget-KEY%2CNSEMS%2CFLAGS), [`semop`](#semop-KEY%2COPSTRING), [`shmctl`](#shmctl-ID%2CCMD%2CARG), [`shmget`](#shmget-KEY%2CSIZE%2CFLAGS), [`shmread`](#shmread-ID%2CVAR%2CPOS%2CSIZE), [`shmwrite`](#shmwrite-ID%2CSTRING%2CPOS%2CSIZE)
Fetching user and group info [`endgrent`](#endgrent), [`endhostent`](#endhostent), [`endnetent`](#endnetent), [`endpwent`](#endpwent), [`getgrent`](#getgrent), [`getgrgid`](#getgrgid-GID), [`getgrnam`](#getgrnam-NAME), [`getlogin`](#getlogin), [`getpwent`](#getpwent), [`getpwnam`](#getpwnam-NAME), [`getpwuid`](#getpwuid-UID), [`setgrent`](#setgrent), [`setpwent`](#setpwent)
Fetching network info [`endprotoent`](#endprotoent), [`endservent`](#endservent), [`gethostbyaddr`](#gethostbyaddr-ADDR%2CADDRTYPE), [`gethostbyname`](#gethostbyname-NAME), [`gethostent`](#gethostent), [`getnetbyaddr`](#getnetbyaddr-ADDR%2CADDRTYPE), [`getnetbyname`](#getnetbyname-NAME), [`getnetent`](#getnetent), [`getprotobyname`](#getprotobyname-NAME), [`getprotobynumber`](#getprotobynumber-NUMBER), [`getprotoent`](#getprotoent), [`getservbyname`](#getservbyname-NAME%2CPROTO), [`getservbyport`](#getservbyport-PORT%2CPROTO), [`getservent`](#getservent), [`sethostent`](#sethostent-STAYOPEN), [`setnetent`](#setnetent-STAYOPEN), [`setprotoent`](#setprotoent-STAYOPEN), [`setservent`](#setservent-STAYOPEN)
Time-related functions [`gmtime`](#gmtime-EXPR), [`localtime`](#localtime-EXPR), [`time`](#time), [`times`](#times)
Non-function keywords `and`, `AUTOLOAD`, `BEGIN`, `catch`, `CHECK`, `cmp`, `CORE`, `__DATA__`, `default`, `defer`, `DESTROY`, `else`, `elseif`, `elsif`, `END`, `__END__`, `eq`, `finally`, `for`, `foreach`, `ge`, `given`, `gt`, `if`, `INIT`, `isa`, `le`, `lt`, `ne`, `not`, `or`, `try`, `UNITCHECK`, `unless`, `until`, `when`, `while`, `x`, `xor`
### Portability
Perl was born in Unix and can therefore access all common Unix system calls. In non-Unix environments, the functionality of some Unix system calls may not be available or details of the available functionality may differ slightly. The Perl functions affected by this are:
[`-*X*`](#-X-FILEHANDLE), [`binmode`](#binmode-FILEHANDLE%2C-LAYER), [`chmod`](#chmod-LIST), [`chown`](#chown-LIST), [`chroot`](#chroot-FILENAME), [`crypt`](#crypt-PLAINTEXT%2CSALT), [`dbmclose`](#dbmclose-HASH), [`dbmopen`](#dbmopen-HASH%2CDBNAME%2CMASK), [`dump`](#dump-LABEL), [`endgrent`](#endgrent), [`endhostent`](#endhostent), [`endnetent`](#endnetent), [`endprotoent`](#endprotoent), [`endpwent`](#endpwent), [`endservent`](#endservent), [`exec`](#exec-LIST), [`fcntl`](#fcntl-FILEHANDLE%2CFUNCTION%2CSCALAR), [`flock`](#flock-FILEHANDLE%2COPERATION), [`fork`](#fork), [`getgrent`](#getgrent), [`getgrgid`](#getgrgid-GID), [`gethostbyname`](#gethostbyname-NAME), [`gethostent`](#gethostent), [`getlogin`](#getlogin), [`getnetbyaddr`](#getnetbyaddr-ADDR%2CADDRTYPE), [`getnetbyname`](#getnetbyname-NAME), [`getnetent`](#getnetent), [`getppid`](#getppid), [`getpgrp`](#getpgrp-PID), [`getpriority`](#getpriority-WHICH%2CWHO), [`getprotobynumber`](#getprotobynumber-NUMBER), [`getprotoent`](#getprotoent), [`getpwent`](#getpwent), [`getpwnam`](#getpwnam-NAME), [`getpwuid`](#getpwuid-UID), [`getservbyport`](#getservbyport-PORT%2CPROTO), [`getservent`](#getservent), [`getsockopt`](#getsockopt-SOCKET%2CLEVEL%2COPTNAME), [`glob`](#glob-EXPR), [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR), [`kill`](#kill-SIGNAL%2C-LIST), [`link`](#link-OLDFILE%2CNEWFILE), [`lstat`](#lstat-FILEHANDLE), [`msgctl`](#msgctl-ID%2CCMD%2CARG), [`msgget`](#msgget-KEY%2CFLAGS), [`msgrcv`](#msgrcv-ID%2CVAR%2CSIZE%2CTYPE%2CFLAGS), [`msgsnd`](#msgsnd-ID%2CMSG%2CFLAGS), [`open`](#open-FILEHANDLE%2CMODE%2CEXPR), [`pipe`](#pipe-READHANDLE%2CWRITEHANDLE), [`readlink`](#readlink-EXPR), [`rename`](#rename-OLDNAME%2CNEWNAME), [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT), [`semctl`](#semctl-ID%2CSEMNUM%2CCMD%2CARG), [`semget`](#semget-KEY%2CNSEMS%2CFLAGS), [`semop`](#semop-KEY%2COPSTRING), [`setgrent`](#setgrent), [`sethostent`](#sethostent-STAYOPEN), [`setnetent`](#setnetent-STAYOPEN), [`setpgrp`](#setpgrp-PID%2CPGRP), [`setpriority`](#setpriority-WHICH%2CWHO%2CPRIORITY), [`setprotoent`](#setprotoent-STAYOPEN), [`setpwent`](#setpwent), [`setservent`](#setservent-STAYOPEN), [`setsockopt`](#setsockopt-SOCKET%2CLEVEL%2COPTNAME%2COPTVAL), [`shmctl`](#shmctl-ID%2CCMD%2CARG), [`shmget`](#shmget-KEY%2CSIZE%2CFLAGS), [`shmread`](#shmread-ID%2CVAR%2CPOS%2CSIZE), [`shmwrite`](#shmwrite-ID%2CSTRING%2CPOS%2CSIZE), [`socket`](#socket-SOCKET%2CDOMAIN%2CTYPE%2CPROTOCOL), [`socketpair`](#socketpair-SOCKET1%2CSOCKET2%2CDOMAIN%2CTYPE%2CPROTOCOL), [`stat`](#stat-FILEHANDLE), [`symlink`](#symlink-OLDFILE%2CNEWFILE), [`syscall`](#syscall-NUMBER%2C-LIST), [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE), [`system`](#system-LIST), [`times`](#times), [`truncate`](#truncate-FILEHANDLE%2CLENGTH), [`umask`](#umask-EXPR), [`unlink`](#unlink-LIST), [`utime`](#utime-LIST), [`wait`](#wait), [`waitpid`](#waitpid-PID%2CFLAGS)
For more information about the portability of these functions, see <perlport> and other available platform-specific documentation.
###
Alphabetical Listing of Perl Functions
-X FILEHANDLE
-X EXPR
-X DIRHANDLE
-X A file test, where X is one of the letters listed below. This unary operator takes one argument, either a filename, a filehandle, or a dirhandle, and tests the associated file to see if something is true about it. If the argument is omitted, tests [`$_`](perlvar#%24_), except for `-t`, which tests STDIN. Unless otherwise documented, it returns `1` for true and `''` for false. If the file doesn't exist or can't be examined, it returns [`undef`](#undef-EXPR) and sets [`$!`](perlvar#%24%21) (errno). With the exception of the `-l` test they all follow symbolic links because they use `stat()` and not `lstat()` (so dangling symlinks can't be examined and will therefore report failure).
Despite the funny names, precedence is the same as any other named unary operator. The operator may be any of:
```
-r File is readable by effective uid/gid.
-w File is writable by effective uid/gid.
-x File is executable by effective uid/gid.
-o File is owned by effective uid.
-R File is readable by real uid/gid.
-W File is writable by real uid/gid.
-X File is executable by real uid/gid.
-O File is owned by real uid.
-e File exists.
-z File has zero size (is empty).
-s File has nonzero size (returns size in bytes).
-f File is a plain file.
-d File is a directory.
-l File is a symbolic link (false if symlinks aren't
supported by the file system).
-p File is a named pipe (FIFO), or Filehandle is a pipe.
-S File is a socket.
-b File is a block special file.
-c File is a character special file.
-t Filehandle is opened to a tty.
-u File has setuid bit set.
-g File has setgid bit set.
-k File has sticky bit set.
-T File is an ASCII or UTF-8 text file (heuristic guess).
-B File is a "binary" file (opposite of -T).
-M Script start time minus file modification time, in days.
-A Same for access time.
-C Same for inode change time (Unix, may differ for other
platforms)
```
Example:
```
while (<>) {
chomp;
next unless -f $_; # ignore specials
#...
}
```
Note that `-s/a/b/` does not do a negated substitution. Saying `-exp($foo)` still works as expected, however: only single letters following a minus are interpreted as file tests.
These operators are exempt from the "looks like a function rule" described above. That is, an opening parenthesis after the operator does not affect how much of the following code constitutes the argument. Put the opening parentheses before the operator to separate it from code that follows (this applies only to operators with higher precedence than unary operators, of course):
```
-s($file) + 1024 # probably wrong; same as -s($file + 1024)
(-s $file) + 1024 # correct
```
The interpretation of the file permission operators `-r`, `-R`, `-w`, `-W`, `-x`, and `-X` is by default based solely on the mode of the file and the uids and gids of the user. There may be other reasons you can't actually read, write, or execute the file: for example network filesystem access controls, ACLs (access control lists), read-only filesystems, and unrecognized executable formats. Note that the use of these six specific operators to verify if some operation is possible is usually a mistake, because it may be open to race conditions.
Also note that, for the superuser on the local filesystems, the `-r`, `-R`, `-w`, and `-W` tests always return 1, and `-x` and `-X` return 1 if any execute bit is set in the mode. Scripts run by the superuser may thus need to do a [`stat`](#stat-FILEHANDLE) to determine the actual mode of the file, or temporarily set their effective uid to something else.
If you are using ACLs, there is a pragma called [`filetest`](filetest) that may produce more accurate results than the bare [`stat`](#stat-FILEHANDLE) mode bits. When under `use filetest 'access'`, the above-mentioned filetests test whether the permission can(not) be granted using the [access(2)](http://man.he.net/man2/access) family of system calls. Also note that the `-x` and `-X` tests may under this pragma return true even if there are no execute permission bits set (nor any extra execute permission ACLs). This strangeness is due to the underlying system calls' definitions. Note also that, due to the implementation of `use filetest 'access'`, the `_` special filehandle won't cache the results of the file tests when this pragma is in effect. Read the documentation for the [`filetest`](filetest) pragma for more information.
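A minimal sketch of the pragma in use (the filename is a placeholder):

```
{
    use filetest 'access';   # -r/-w/-x etc. now consult access(2), honoring ACLs
    print "writable\n" if -w "/shared/report.txt";
}
# The pragma is lexical, so plain mode-bit semantics resume outside the block.
```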
The `-T` and `-B` tests work as follows. The first block or so of the file is examined to see if it is valid UTF-8 that includes non-ASCII characters. If so, it's a `-T` file. Otherwise, that same portion of the file is examined for odd characters such as strange control codes or characters with the high bit set. If more than a third of the characters are strange, it's a `-B` file; otherwise it's a `-T` file. Also, any file containing a zero byte in the examined portion is considered a binary file. (If executed within the scope of a [use locale](perllocale) which includes `LC_CTYPE`, odd characters are anything that isn't a printable nor space in the current locale.) If `-T` or `-B` is used on a filehandle, the current IO buffer is examined rather than the first block. Both `-T` and `-B` return true on an empty file, or a file at EOF when testing a filehandle. Because you have to read a file to do the `-T` test, on most occasions you want to use a `-f` against the file first, as in `next unless -f $file && -T $file`.
If any of the file tests (or either the [`stat`](#stat-FILEHANDLE) or [`lstat`](#lstat-FILEHANDLE) operator) is given the special filehandle consisting of a solitary underline, then the stat structure of the previous file test (or [`stat`](#stat-FILEHANDLE) operator) is used, saving a system call. (This doesn't work with `-t`, and you need to remember that [`lstat`](#lstat-FILEHANDLE) and `-l` leave values in the stat structure for the symbolic link, not the real file.) (Also, if the stat buffer was filled by an [`lstat`](#lstat-FILEHANDLE) call, `-T` and `-B` will reset it with the results of `stat _`). Example:
```
print "Can do.\n" if -r $a || -w _ || -x _;
stat($filename);
print "Readable\n" if -r _;
print "Writable\n" if -w _;
print "Executable\n" if -x _;
print "Setuid\n" if -u _;
print "Setgid\n" if -g _;
print "Sticky\n" if -k _;
print "Text\n" if -T _;
print "Binary\n" if -B _;
```
As of Perl 5.10.0, as a form of purely syntactic sugar, you can stack file test operators, in a way that `-f -w -x $file` is equivalent to `-x $file && -w _ && -f _`. (This is only fancy syntax: if you use the return value of `-f $file` as an argument to another filetest operator, no special magic will happen.)
Portability issues: ["-X" in perlport](perlport#-X).
To avoid confusing would-be users of your code with mysterious syntax errors, put something like this at the top of your script:
```
use v5.10; # so filetest ops can stack
```
abs VALUE abs Returns the absolute value of its argument. If VALUE is omitted, uses [`$_`](perlvar#%24_).
accept NEWSOCKET,GENERICSOCKET Accepts an incoming socket connect, just as [accept(2)](http://man.he.net/man2/accept) does. Returns the packed address if it succeeded, false otherwise. See the example in ["Sockets: Client/Server Communication" in perlipc](perlipc#Sockets%3A-Client%2FServer-Communication).
On systems that support a close-on-exec flag on files, the flag will be set for the newly opened file descriptor, as determined by the value of [`$^F`](perlvar#%24%5EF). See ["$^F" in perlvar](perlvar#%24%5EF).
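A bare-bones server sketch (the port number 7777 is arbitrary); see <perlipc> for a complete treatment:

```
use Socket qw(PF_INET SOCK_STREAM SOMAXCONN INADDR_ANY sockaddr_in);

socket(my $server, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
    or die "socket: $!";
bind($server, sockaddr_in(7777, INADDR_ANY)) or die "bind: $!";
listen($server, SOMAXCONN)                   or die "listen: $!";

while (my $paddr = accept(my $client, $server)) {
    print $client "hello\n";    # $client is the per-connection handle
    close $client;
}
```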
alarm SECONDS alarm Arranges to have a SIGALRM delivered to this process after the specified number of wallclock seconds has elapsed. If SECONDS is not specified, the value stored in [`$_`](perlvar#%24_) is used. (On some machines, unfortunately, the elapsed time may be up to one second less or more than you specified because of how seconds are counted, and process scheduling may delay the delivery of the signal even further.)
Only one timer may be counting at once. Each call disables the previous timer, and an argument of `0` may be supplied to cancel the previous timer without starting a new one. The returned value is the amount of time remaining on the previous timer.
For delays of finer granularity than one second, the <Time::HiRes> module (from CPAN, and starting from Perl 5.8 part of the standard distribution) provides [`ualarm`](Time::HiRes#ualarm-%28-%24useconds-%5B%2C-%24interval_useconds-%5D-%29). You may also use Perl's four-argument version of [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT) leaving the first three arguments undefined, or you might be able to use the [`syscall`](#syscall-NUMBER%2C-LIST) interface to access [setitimer(2)](http://man.he.net/man2/setitimer) if your system supports it. See <perlfaq8> for details.
It is usually a mistake to intermix [`alarm`](#alarm-SECONDS) and [`sleep`](#sleep-EXPR) calls, because [`sleep`](#sleep-EXPR) may be internally implemented on your system with [`alarm`](#alarm-SECONDS).
If you want to use [`alarm`](#alarm-SECONDS) to time out a system call you need to use an [`eval`](#eval-EXPR)/[`die`](#die-LIST) pair. You can't rely on the alarm causing the system call to fail with [`$!`](perlvar#%24%21) set to `EINTR` because Perl sets up signal handlers to restart system calls on some systems. Using [`eval`](#eval-EXPR)/[`die`](#die-LIST) always works, modulo the caveats given in ["Signals" in perlipc](perlipc#Signals).
```
eval {
local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required
alarm $timeout;
my $nread = sysread $socket, $buffer, $size;
alarm 0;
};
if ($@) {
die unless $@ eq "alarm\n"; # propagate unexpected errors
# timed out
}
else {
# didn't
}
```
For more information see <perlipc>.
Portability issues: ["alarm" in perlport](perlport#alarm).
atan2 Y,X Returns the arctangent of Y/X in the range -PI to PI.
For the tangent operation, you may use the [`Math::Trig::tan`](Math::Trig#tan) function, or use the familiar relation:
```
sub tan { sin($_[0]) / cos($_[0]) }
```
The return value for `atan2(0,0)` is implementation-defined; consult your [atan2(3)](http://man.he.net/man3/atan2) manpage for more information.
Portability issues: ["atan2" in perlport](perlport#atan2).
bind SOCKET,NAME Binds a network address to a socket, just as [bind(2)](http://man.he.net/man2/bind) does. Returns true if it succeeded, false otherwise. NAME should be a packed address of the appropriate type for the socket. See the examples in ["Sockets: Client/Server Communication" in perlipc](perlipc#Sockets%3A-Client%2FServer-Communication).
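For example, a sketch that binds to a kernel-chosen ephemeral port (port 0) and reads the chosen port back with [`getsockname`](#getsockname-SOCKET):

```
use Socket qw(PF_INET SOCK_STREAM INADDR_ANY sockaddr_in unpack_sockaddr_in);

socket(my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
    or die "socket: $!";
bind($sock, sockaddr_in(0, INADDR_ANY)) or die "bind: $!";
my ($port) = unpack_sockaddr_in(getsockname($sock));
print "bound to port $port\n";
```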
binmode FILEHANDLE, LAYER
binmode FILEHANDLE Arranges for FILEHANDLE to be read or written in "binary" or "text" mode on systems where the run-time libraries distinguish between binary and text files. If FILEHANDLE is an expression, the value is taken as the name of the filehandle. Returns true on success, otherwise it returns [`undef`](#undef-EXPR) and sets [`$!`](perlvar#%24%21) (errno).
On some systems (in general, DOS- and Windows-based systems) [`binmode`](#binmode-FILEHANDLE%2C-LAYER) is necessary when you're not working with a text file. For the sake of portability it is a good idea always to use it when appropriate, and never to use it when it isn't appropriate. Also, people can set their I/O to be by default UTF8-encoded Unicode, not bytes.
In other words: regardless of platform, use [`binmode`](#binmode-FILEHANDLE%2C-LAYER) on binary data, like images, for example.
If LAYER is present it is a single string, but may contain multiple directives. The directives alter the behaviour of the filehandle. When LAYER is present, using binmode on a text file makes sense.
If LAYER is omitted or specified as `:raw` the filehandle is made suitable for passing binary data. This includes turning off possible CRLF translation and marking it as bytes (as opposed to Unicode characters). Note that, despite what may be implied in *"Programming Perl"* (the Camel, 3rd edition) or elsewhere, `:raw` is *not* simply the inverse of `:crlf`. Other layers that would affect the binary nature of the stream are *also* disabled. See [PerlIO](perlio), and the discussion about the PERLIO environment variable in [perlrun](perlrun#PERLIO).
The `:bytes`, `:crlf`, `:utf8`, and any other directives of the form `:...`, are called I/O *layers*. The <open> pragma can be used to establish default I/O layers.
*The LAYER parameter of the [`binmode`](#binmode-FILEHANDLE%2C-LAYER) function is described as "DISCIPLINE" in "Programming Perl, 3rd Edition". However, since the publishing of this book, by many known as "Camel III", the consensus of the naming of this functionality has moved from "discipline" to "layer". All documentation of this version of Perl therefore refers to "layers" rather than to "disciplines". Now back to the regularly scheduled documentation...*
To mark FILEHANDLE as UTF-8, use `:utf8` or `:encoding(UTF-8)`. `:utf8` just marks the data as UTF-8 without further checking, while `:encoding(UTF-8)` checks the data for actually being valid UTF-8. More details can be found in <PerlIO::encoding>.
In general, [`binmode`](#binmode-FILEHANDLE%2C-LAYER) should be called after [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) but before any I/O is done on the filehandle. Calling [`binmode`](#binmode-FILEHANDLE%2C-LAYER) normally flushes any pending buffered output data (and perhaps pending input data) on the handle. An exception to this is the `:encoding` layer that changes the default character encoding of the handle. The `:encoding` layer sometimes needs to be called in mid-stream, and it doesn't flush the stream. `:encoding` also implicitly pushes on top of itself the `:utf8` layer because internally Perl operates on UTF8-encoded Unicode characters.
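For example (the filenames are placeholders):

```
open(my $img, "<", "photo.png") or die "Can't open photo.png: $!";
binmode($img) or die "binmode: $!";    # raw bytes, no CRLF translation

open(my $log, ">", "events.log") or die "Can't open events.log: $!";
binmode($log, ":encoding(UTF-8)");     # encode (and validate) output as UTF-8
```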
The operating system, device drivers, C libraries, and Perl run-time system all conspire to let the programmer treat a single character (`\n`) as the line terminator, irrespective of external representation. On many operating systems, the native text file representation matches the internal representation, but on some platforms the external representation of `\n` is made up of more than one character.
All variants of Unix, Mac OS (old and new), and Stream\_LF files on VMS use a single character to end each line in the external representation of text (even though that single character is CARRIAGE RETURN on old, pre-Darwin flavors of Mac OS, and is LINE FEED on Unix and most VMS files). In other systems like OS/2, DOS, and the various flavors of MS-Windows, your program sees a `\n` as a simple `\cJ`, but what's stored in text files are the two characters `\cM\cJ`. That means that if you don't use [`binmode`](#binmode-FILEHANDLE%2C-LAYER) on these systems, `\cM\cJ` sequences on disk will be converted to `\n` on input, and any `\n` in your program will be converted back to `\cM\cJ` on output. This is what you want for text files, but it can be disastrous for binary files.
Another consequence of using [`binmode`](#binmode-FILEHANDLE%2C-LAYER) (on some systems) is that special end-of-file markers will be seen as part of the data stream. For systems from the Microsoft family this means that, if your binary data contain `\cZ`, the I/O subsystem will regard it as the end of the file, unless you use [`binmode`](#binmode-FILEHANDLE%2C-LAYER).
[`binmode`](#binmode-FILEHANDLE%2C-LAYER) is important not only for [`readline`](#readline-EXPR) and [`print`](#print-FILEHANDLE-LIST) operations, but also when using [`read`](#read-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`syswrite`](#syswrite-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) and [`tell`](#tell-FILEHANDLE) (see <perlport> for more details). See the [`$/`](perlvar#%24%2F) and [`$\`](perlvar#%24%5C) variables in <perlvar> for how to manually set your input and output line-termination sequences.
Portability issues: ["binmode" in perlport](perlport#binmode).
bless REF,CLASSNAME
bless REF This function tells the thingy referenced by REF that it is now an object in the CLASSNAME package. If CLASSNAME is an empty string, it is interpreted as referring to the `main` package. If CLASSNAME is omitted, the current package is used. Because a [`bless`](#bless-REF%2CCLASSNAME) is often the last thing in a constructor, it returns the reference for convenience. Always use the two-argument version if a derived class might inherit the method doing the blessing. See <perlobj> for more about the blessing (and blessings) of objects.
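A typical constructor, sketched here with a made-up class name, uses the two-argument form so that derived classes end up blessed into their own package:

```
package Critter;

sub new {
    my ($class, %args) = @_;
    my $self = { name => $args{name} };
    return bless $self, $class;    # two-argument bless: works for subclasses too
}

package main;

my $pet = Critter->new(name => "Fido");
print ref($pet), "\n";             # prints "Critter"
```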
Consider always blessing objects in CLASSNAMEs that are mixed case. Namespaces with all lowercase names are considered reserved for Perl pragmas. Builtin types have all uppercase names. To prevent confusion, you may wish to avoid such package names as well. It is advised to avoid the class name `0`, because much code erroneously uses the result of [`ref`](#ref-EXPR) as a truth value.
See ["Perl Modules" in perlmod](perlmod#Perl-Modules).
break Break out of a `given` block.
[`break`](#break) is available only if the [`"switch"` feature](feature#The-%27switch%27-feature) is enabled or if it is prefixed with `CORE::`. The [`"switch"` feature](feature#The-%27switch%27-feature) is enabled automatically with a `use v5.10` (or higher) declaration in the current scope.
caller EXPR caller Returns the context of the current pure perl subroutine call. In scalar context, returns the caller's package name if there *is* a caller (that is, if we're in a subroutine or [`eval`](#eval-EXPR) or [`require`](#require-VERSION)) and the undefined value otherwise. caller never returns XS subs and they are skipped. The next pure perl sub will appear instead of the XS sub in caller's return values. In list context, caller returns
```
# 0 1 2
my ($package, $filename, $line) = caller;
```
Like [`__FILE__`](#__FILE__) and [`__LINE__`](#__LINE__), the filename and line number returned here may be altered by the mechanism described at ["Plain Old Comments (Not!)" in perlsyn](perlsyn#Plain-Old-Comments-%28Not%21%29).
With EXPR, it returns some extra information that the debugger uses to print a stack trace. The value of EXPR indicates how many call frames to go back before the current one.
```
# 0 1 2 3 4
my ($package, $filename, $line, $subroutine, $hasargs,
# 5 6 7 8 9 10
$wantarray, $evaltext, $is_require, $hints, $bitmask, $hinthash)
= caller($i);
```
Here, $subroutine is the function that the caller called (rather than the function containing the caller). Note that $subroutine may be `(eval)` if the frame is not a subroutine call, but an [`eval`](#eval-EXPR). In such a case additional elements $evaltext and `$is_require` are set: `$is_require` is true if the frame is created by a [`require`](#require-VERSION) or [`use`](#use-Module-VERSION-LIST) statement, $evaltext contains the text of the `eval EXPR` statement. In particular, for an `eval BLOCK` statement, $subroutine is `(eval)`, but $evaltext is undefined. (Note also that each [`use`](#use-Module-VERSION-LIST) statement creates a [`require`](#require-VERSION) frame inside an `eval EXPR` frame.) $subroutine may also be `(unknown)` if this particular subroutine happens to have been deleted from the symbol table. `$hasargs` is true if a new instance of [`@_`](perlvar#%40_) was set up for the frame. `$hints` and `$bitmask` contain pragmatic hints that the caller was compiled with. `$hints` corresponds to [`$^H`](perlvar#%24%5EH), and `$bitmask` corresponds to [`${^WARNING_BITS}`](perlvar#%24%7B%5EWARNING_BITS%7D). The `$hints` and `$bitmask` values are subject to change between versions of Perl, and are not meant for external use.
`$hinthash` is a reference to a hash containing the value of [`%^H`](perlvar#%25%5EH) when the caller was compiled, or [`undef`](#undef-EXPR) if [`%^H`](perlvar#%25%5EH) was empty. Do not modify the values of this hash, as they are the actual values stored in the optree.
Note that the only types of call frames that are visible are subroutine calls and `eval`. Other forms of context, such as `while` or `foreach` loops or `try` blocks are not considered interesting to `caller`, as they do not alter the behaviour of the `return` expression.
Furthermore, when called from within the DB package in list context, and with an argument, caller returns more detailed information: it sets the list variable `@DB::args` to be the arguments with which the subroutine was invoked.
Be aware that the optimizer might have optimized call frames away before [`caller`](#caller-EXPR) had a chance to get the information. That means that `caller(N)` might not return information about the call frame you expect it to, for `N > 1`. In particular, `@DB::args` might have information from the previous time [`caller`](#caller-EXPR) was called.
Be aware that setting `@DB::args` is *best effort*, intended for debugging or generating backtraces, and should not be relied upon. In particular, as [`@_`](perlvar#%40_) contains aliases to the caller's arguments, Perl does not take a copy of [`@_`](perlvar#%40_), so `@DB::args` will contain modifications the subroutine makes to [`@_`](perlvar#%40_) or its contents, not the original values at call time. `@DB::args`, like [`@_`](perlvar#%40_), does not hold explicit references to its elements, so under certain cases its elements may have become freed and reallocated for other variables or temporary values. Finally, a side effect of the current implementation is that the effects of `shift @_` can *normally* be undone (but not `pop @_` or other splicing, *and* not if a reference to [`@_`](perlvar#%40_) has been taken, *and* subject to the caveat about reallocated elements), so `@DB::args` is actually a hybrid of the current state and initial state of [`@_`](perlvar#%40_). Buyer beware.
chdir EXPR
chdir FILEHANDLE
chdir DIRHANDLE chdir Changes the working directory to EXPR, if possible. If EXPR is omitted, changes to the directory specified by `$ENV{HOME}`, if set; if not, changes to the directory specified by `$ENV{LOGDIR}`. (Under VMS, the variable `$ENV{'SYS$LOGIN'}` is also checked, and used if it is set.) If neither is set, [`chdir`](#chdir-EXPR) does nothing and fails. It returns true on success, false otherwise. See the example under [`die`](#die-LIST).
On systems that support [fchdir(2)](http://man.he.net/man2/fchdir), you may pass a filehandle or directory handle as the argument. On systems that don't support [fchdir(2)](http://man.he.net/man2/fchdir), passing handles raises an exception.
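For example (the directory names are placeholders):

```
chdir("/tmp") or die "Can't chdir to /tmp: $!";

# Where fchdir(2) is available, a directory handle works too:
opendir(my $dh, "/var/log") or die "opendir: $!";
chdir($dh) or die "chdir to handle failed: $!";
```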
chmod LIST Changes the permissions of a list of files. The first element of the list must be the numeric mode, which should probably be an octal number, and which definitely should *not* be a string of octal digits: `0644` is okay, but `"0644"` is not. Returns the number of files successfully changed. See also [`oct`](#oct-EXPR) if all you have is a string.
```
my $cnt = chmod 0755, "foo", "bar";
chmod 0755, @executables;
my $mode = "0644"; chmod $mode, "foo"; # !!! sets mode to
# --w----r-T
my $mode = "0644"; chmod oct($mode), "foo"; # this is better
my $mode = 0644; chmod $mode, "foo"; # this is best
```
On systems that support [fchmod(2)](http://man.he.net/man2/fchmod), you may pass filehandles among the files. On systems that don't support [fchmod(2)](http://man.he.net/man2/fchmod), passing filehandles raises an exception. Filehandles must be passed as globs or glob references to be recognized; barewords are considered filenames.
```
open(my $fh, "<", "foo");
my $perm = (stat $fh)[2] & 07777;
chmod($perm | 0600, $fh);
```
You can also import the symbolic `S_I*` constants from the [`Fcntl`](fcntl) module:
```
use Fcntl qw( :mode );
chmod S_IRWXU|S_IRGRP|S_IXGRP|S_IROTH|S_IXOTH, @executables;
# Identical to the chmod 0755 of the example above.
```
Portability issues: ["chmod" in perlport](perlport#chmod).
chomp VARIABLE
chomp( LIST ) chomp This safer version of [`chop`](#chop-VARIABLE) removes any trailing string that corresponds to the current value of [`$/`](perlvar#%24%2F) (also known as `$INPUT_RECORD_SEPARATOR` in the [`English`](english) module). It returns the total number of characters removed from all its arguments. It's often used to remove the newline from the end of an input record when you're worried that the final record may be missing its newline. When in paragraph mode (`$/ = ''`), it removes all trailing newlines from the string. When in slurp mode (`$/ = undef`) or fixed-length record mode ([`$/`](perlvar#%24%2F) is a reference to an integer or the like; see <perlvar>), [`chomp`](#chomp-VARIABLE) won't remove anything. If VARIABLE is omitted, it chomps [`$_`](perlvar#%24_). Example:
```
while (<>) {
chomp; # avoid \n on last field
my @array = split(/:/);
# ...
}
```
If VARIABLE is a hash, it chomps the hash's values, but not its keys, resetting the [`each`](#each-HASH) iterator in the process.
You can actually chomp anything that's an lvalue, including an assignment:
```
chomp(my $cwd = `pwd`);
chomp(my $answer = <STDIN>);
```
If you chomp a list, each element is chomped, and the total number of characters removed is returned.
Note that parentheses are necessary when you're chomping anything that is not a simple variable. This is because ``chomp $cwd = `pwd`;`` is interpreted as ``(chomp $cwd) = `pwd`;``, rather than as ``chomp( $cwd = `pwd` )`` which you might expect. Similarly, `chomp $a, $b` is interpreted as `chomp($a), $b` rather than as `chomp($a, $b)`.
chop VARIABLE
chop( LIST ) chop Chops off the last character of a string and returns the character chopped. It is much more efficient than `s/.$//s` because it neither scans nor copies the string. If VARIABLE is omitted, chops [`$_`](perlvar#%24_). If VARIABLE is a hash, it chops the hash's values, but not its keys, resetting the [`each`](#each-HASH) iterator in the process.
You can actually chop anything that's an lvalue, including an assignment.
If you chop a list, each element is chopped. Only the value of the last [`chop`](#chop-VARIABLE) is returned.
Note that [`chop`](#chop-VARIABLE) returns the last character. To return all but the last character, use `substr($string, 0, -1)`.
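For example:

```
my $string = "mouse";
my $char   = chop $string;           # $string is now "mous", $char is "e"

my $copy   = "mouse";
my $most   = substr($copy, 0, -1);   # "mous", and $copy is left untouched
```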
See also [`chomp`](#chomp-VARIABLE).
chown LIST Changes the owner (and group) of a list of files. The first two elements of the list must be the *numeric* uid and gid, in that order. A value of -1 in either position is interpreted by most systems to leave that value unchanged. Returns the number of files successfully changed.
```
my $cnt = chown $uid, $gid, 'foo', 'bar';
chown $uid, $gid, @filenames;
```
On systems that support [fchown(2)](http://man.he.net/man2/fchown), you may pass filehandles among the files. On systems that don't support [fchown(2)](http://man.he.net/man2/fchown), passing filehandles raises an exception. Filehandles must be passed as globs or glob references to be recognized; barewords are considered filenames.
Here's an example that looks up nonnumeric uids in the passwd file:
```
print "User: ";
chomp(my $user = <STDIN>);
print "Files: ";
chomp(my $pattern = <STDIN>);
my ($login,$pass,$uid,$gid) = getpwnam($user)
or die "$user not in passwd file";
my @ary = glob($pattern); # expand filenames
chown $uid, $gid, @ary;
```
On most systems, you are not allowed to change the ownership of the file unless you're the superuser, although you should be able to change the group to any of your secondary groups. On insecure systems, these restrictions may be relaxed, but this is not a portable assumption. On POSIX systems, you can detect this condition this way:
```
use POSIX qw(sysconf _PC_CHOWN_RESTRICTED);
my $can_chown_giveaway = ! sysconf(_PC_CHOWN_RESTRICTED);
```
Portability issues: ["chown" in perlport](perlport#chown).
chr NUMBER chr Returns the character represented by that NUMBER in the character set. For example, `chr(65)` is `"A"` in either ASCII or Unicode, and chr(0x263a) is a Unicode smiley face.
Negative values give the Unicode replacement character (chr(0xfffd)), except under the <bytes> pragma, where the low eight bits of the value (truncated to an integer) are used.
If NUMBER is omitted, uses [`$_`](perlvar#%24_).
For the reverse, use [`ord`](#ord-EXPR).
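For example:

```
print chr(65), "\n";                   # prints "A"
print ord("A"), "\n";                  # prints 65

binmode(STDOUT, ":encoding(UTF-8)");
print chr(0x263A), "\n";               # prints a smiley face
```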
Note that characters from 128 to 255 (inclusive) are by default internally not encoded as UTF-8 for backward compatibility reasons.
See <perlunicode> for more about Unicode.
chroot FILENAME chroot This function works like the system call by the same name: it makes the named directory the new root directory for all further pathnames that begin with a `/` by your process and all its children. (It doesn't change your current working directory, which is unaffected.) For security reasons, this call is restricted to the superuser. If FILENAME is omitted, does a [`chroot`](#chroot-FILENAME) to [`$_`](perlvar#%24_).
**NOTE:** It is mandatory for security to `chdir("/")` ([`chdir`](#chdir-EXPR) to the root directory) immediately after a [`chroot`](#chroot-FILENAME), otherwise the current working directory may be outside of the new root.
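For example (the jail directory is a placeholder, and this must be run as the superuser):

```
chroot("/var/jail") or die "chroot: $!";
chdir("/")          or die "chdir: $!";   # mandatory: step inside the new root
```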
Portability issues: ["chroot" in perlport](perlport#chroot).
close FILEHANDLE close Closes the file or pipe associated with the filehandle, flushes the IO buffers, and closes the system file descriptor. Returns true if those operations succeed and if no error was reported by any PerlIO layer. Closes the currently selected filehandle if the argument is omitted.
You don't have to close FILEHANDLE if you are immediately going to do another [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) on it, because [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) closes it for you. (See [`open`](#open-FILEHANDLE%2CMODE%2CEXPR).) However, an explicit [`close`](#close-FILEHANDLE) on an input file resets the line counter ([`$.`](perlvar#%24.)), while the implicit close done by [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) does not.
If the filehandle came from a piped open, [`close`](#close-FILEHANDLE) returns false if one of the other syscalls involved fails or if its program exits with non-zero status. If the only problem was that the program exited non-zero, [`$!`](perlvar#%24%21) will be set to `0`. Closing a pipe also waits for the process executing on the pipe to exit--in case you wish to look at the output of the pipe afterwards--and implicitly puts the exit status value of that command into [`$?`](perlvar#%24%3F) and [`${^CHILD_ERROR_NATIVE}`](perlvar#%24%7B%5ECHILD_ERROR_NATIVE%7D).
If there are multiple threads running, [`close`](#close-FILEHANDLE) on a filehandle from a piped open returns true without waiting for the child process to terminate, if the filehandle is still open in another thread.
Closing the read end of a pipe before the process writing to it at the other end is done writing results in the writer receiving a SIGPIPE. If the other end can't handle that, be sure to read all the data before closing the pipe.
Example:
```
open(OUTPUT, '|sort >foo') # pipe to sort
or die "Can't start sort: $!";
#... # print stuff to output
close OUTPUT # wait for sort to finish
or warn $! ? "Error closing sort pipe: $!"
: "Exit status $? from sort";
open(INPUT, 'foo') # get sort's results
or die "Can't open 'foo' for input: $!";
```
FILEHANDLE may be an expression whose value can be used as an indirect filehandle, usually the real filehandle name or an autovivified handle.
closedir DIRHANDLE Closes a directory opened by [`opendir`](#opendir-DIRHANDLE%2CEXPR) and returns the success of that system call.
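For example:

```
opendir(my $dh, "/etc") or die "Can't opendir /etc: $!";
my @entries = readdir $dh;
closedir $dh or warn "closedir failed: $!";
```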
connect SOCKET,NAME Attempts to connect to a remote socket, just like [connect(2)](http://man.he.net/man2/connect). Returns true if it succeeded, false otherwise. NAME should be a packed address of the appropriate type for the socket. See the examples in ["Sockets: Client/Server Communication" in perlipc](perlipc#Sockets%3A-Client%2FServer-Communication).
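A minimal client-side sketch (host and port are placeholders):

```
use Socket qw(PF_INET SOCK_STREAM inet_aton sockaddr_in);

socket(my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
    or die "socket: $!";
my $ip   = inet_aton("www.example.com") or die "host lookup failed";
my $addr = sockaddr_in(80, $ip);
connect($sock, $addr) or die "connect: $!";
```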
continue BLOCK continue When followed by a BLOCK, [`continue`](#continue-BLOCK) is actually a flow control statement rather than a function. If there is a [`continue`](#continue-BLOCK) BLOCK attached to a BLOCK (typically in a `while` or `foreach`), it is always executed just before the conditional is about to be evaluated again, just like the third part of a `for` loop in C. Thus it can be used to increment a loop variable, even when the loop has been continued via the [`next`](#next-LABEL) statement (which is similar to the C [`continue`](#continue-BLOCK) statement).
[`last`](#last-LABEL), [`next`](#next-LABEL), or [`redo`](#redo-LABEL) may appear within a [`continue`](#continue-BLOCK) block; [`last`](#last-LABEL) and [`redo`](#redo-LABEL) behave as if they had been executed within the main block. So will [`next`](#next-LABEL), but since it will execute a [`continue`](#continue-BLOCK) block, it may be more entertaining.
```
while (EXPR) {
### redo always comes here
do_something;
} continue {
### next always comes here
do_something_else;
# then back to the top to re-check EXPR
}
### last always comes here
```
Omitting the [`continue`](#continue-BLOCK) section is equivalent to using an empty one, logically enough, so [`next`](#next-LABEL) goes directly back to check the condition at the top of the loop.
When there is no BLOCK, [`continue`](#continue-BLOCK) is a function that falls through the current `when` or `default` block instead of iterating a dynamically enclosing `foreach` or exiting a lexically enclosing `given`. In Perl 5.14 and earlier, this form of [`continue`](#continue-BLOCK) was only available when the [`"switch"` feature](feature#The-%27switch%27-feature) was enabled. See <feature> and ["Switch Statements" in perlsyn](perlsyn#Switch-Statements) for more information.
cos EXPR cos Returns the cosine of EXPR (expressed in radians). If EXPR is omitted, takes the cosine of [`$_`](perlvar#%24_).
For the inverse cosine operation, you may use the [`Math::Trig::acos`](Math::Trig) function, or use this relation:
```
sub acos { atan2( sqrt(1 - $_[0] * $_[0]), $_[0] ) }
```
crypt PLAINTEXT,SALT Creates a digest string exactly like the [crypt(3)](http://man.he.net/man3/crypt) function in the C library (assuming that you actually have a version there that has not been extirpated as a potential munition).
[`crypt`](#crypt-PLAINTEXT%2CSALT) is a one-way hash function. The PLAINTEXT and SALT are turned into a short string, called a digest, which is returned. The same PLAINTEXT and SALT will always return the same string, but there is no (known) way to get the original PLAINTEXT from the hash. Small changes in the PLAINTEXT or SALT will result in large changes in the digest.
There is no decrypt function. This function isn't all that useful for cryptography (for that, look for *Crypt* modules on your nearby CPAN mirror) and the name "crypt" is a bit of a misnomer. Instead it is primarily used to check if two pieces of text are the same without having to transmit or store the text itself. An example is checking if a correct password is given. The digest of the password is stored, not the password itself. The user types in a password that is [`crypt`](#crypt-PLAINTEXT%2CSALT)'d with the same salt as the stored digest. If the two digests match, the password is correct.
When verifying an existing digest string you should use the digest as the salt (like `crypt($plain, $digest) eq $digest`). The SALT used to create the digest is visible as part of the digest. This ensures [`crypt`](#crypt-PLAINTEXT%2CSALT) will hash the new string with the same salt as the digest. This allows your code to work with the standard [`crypt`](#crypt-PLAINTEXT%2CSALT) and with more exotic implementations. In other words, assume nothing about the returned string itself nor about how many bytes of SALT may matter.
Traditionally the result is a string of 13 bytes: two first bytes of the salt, followed by 11 bytes from the set `[./0-9A-Za-z]`, and only the first eight bytes of PLAINTEXT mattered. But alternative hashing schemes (like MD5), higher level security schemes (like C2), and implementations on non-Unix platforms may produce different strings.
When choosing a new salt create a random two character string whose characters come from the set `[./0-9A-Za-z]` (like `join '', ('.', '/', 0..9, 'A'..'Z', 'a'..'z')[rand 64, rand 64]`). This set of characters is just a recommendation; the characters allowed in the salt depend solely on your system's crypt library, and Perl can't restrict what salts [`crypt`](#crypt-PLAINTEXT%2CSALT) accepts.
Here's an example that makes sure that whoever runs this program knows their password:
```
my $pwd = (getpwuid($<))[1];
system "stty -echo";
print "Password: ";
chomp(my $word = <STDIN>);
print "\n";
system "stty echo";
if (crypt($word, $pwd) ne $pwd) {
die "Sorry...\n";
} else {
print "ok\n";
}
```
Of course, typing in your own password to whoever asks you for it is unwise.
The [`crypt`](#crypt-PLAINTEXT%2CSALT) function is unsuitable for hashing large quantities of data, not least of all because you can't get the information back. Look at the [Digest](digest) module for more robust algorithms.
If using [`crypt`](#crypt-PLAINTEXT%2CSALT) on a Unicode string (which *potentially* has characters with codepoints above 255), Perl tries to make sense of the situation by trying to downgrade (a copy of) the string back to an eight-bit byte string before calling [`crypt`](#crypt-PLAINTEXT%2CSALT) (on that copy). If that works, good. If not, [`crypt`](#crypt-PLAINTEXT%2CSALT) dies with [`Wide character in crypt`](perldiag#Wide-character-in-%25s).
Portability issues: ["crypt" in perlport](perlport#crypt).
dbmclose HASH [This function has been largely superseded by the [`untie`](#untie-VARIABLE) function.]
Breaks the binding between a DBM file and a hash.
Portability issues: ["dbmclose" in perlport](perlport#dbmclose).
dbmopen HASH,DBNAME,MASK [This function has been largely superseded by the [`tie`](#tie-VARIABLE%2CCLASSNAME%2CLIST) function.]
This binds a [dbm(3)](http://man.he.net/man3/dbm), [ndbm(3)](http://man.he.net/man3/ndbm), [sdbm(3)](http://man.he.net/man3/sdbm), [gdbm(3)](http://man.he.net/man3/gdbm), or Berkeley DB file to a hash. HASH is the name of the hash. (Unlike normal [`open`](#open-FILEHANDLE%2CMODE%2CEXPR), the first argument is *not* a filehandle, even though it looks like one). DBNAME is the name of the database (without the *.dir* or *.pag* extension if any). If the database does not exist, it is created with protection specified by MASK (as modified by the [`umask`](#umask-EXPR)). To prevent creation of the database if it doesn't exist, you may specify a MODE of 0, and the function will return a false value if it can't find an existing database. If your system supports only the older DBM functions, you may make only one [`dbmopen`](#dbmopen-HASH%2CDBNAME%2CMASK) call in your program. In older versions of Perl, if your system had neither DBM nor ndbm, calling [`dbmopen`](#dbmopen-HASH%2CDBNAME%2CMASK) produced a fatal error; it now falls back to [sdbm(3)](http://man.he.net/man3/sdbm).
If you don't have write access to the DBM file, you can only read hash variables, not set them. If you want to test whether you can write, either use file tests or try setting a dummy hash entry inside an [`eval`](#eval-EXPR) to trap the error.
Note that functions such as [`keys`](#keys-HASH) and [`values`](#values-HASH) may return huge lists when used on large DBM files. You may prefer to use the [`each`](#each-HASH) function to iterate over large DBM files. Example:
```
# print out history file offsets
dbmopen(%HIST,'/usr/lib/news/history',0666);
while (($key,$val) = each %HIST) {
print $key, ' = ', unpack('L',$val), "\n";
}
dbmclose(%HIST);
```
See also [AnyDBM\_File](anydbm_file) for a more general description of the pros and cons of the various dbm approaches, as well as [DB\_File](db_file) for a particularly rich implementation.
You can control which DBM library you use by loading that library before you call [`dbmopen`](#dbmopen-HASH%2CDBNAME%2CMASK):
```
use DB_File;
dbmopen(%NS_Hist, "$ENV{HOME}/.netscape/history.db")
or die "Can't open netscape history file: $!";
```
Portability issues: ["dbmopen" in perlport](perlport#dbmopen).
defined EXPR defined Returns a Boolean value telling whether EXPR has a value other than the undefined value [`undef`](#undef-EXPR). If EXPR is not present, [`$_`](perlvar#%24_) is checked.
Many operations return [`undef`](#undef-EXPR) to indicate failure, end of file, system error, uninitialized variable, and other exceptional conditions. This function allows you to distinguish [`undef`](#undef-EXPR) from other values. (A simple Boolean test will not distinguish among [`undef`](#undef-EXPR), zero, the empty string, and `"0"`, which are all equally false.) Note that since [`undef`](#undef-EXPR) is a valid scalar, its presence doesn't *necessarily* indicate an exceptional condition: [`pop`](#pop-ARRAY) returns [`undef`](#undef-EXPR) when its argument is an empty array, *or* when the element to return happens to be [`undef`](#undef-EXPR).
You may also use `defined(&func)` to check whether subroutine `func` has ever been defined. The return value is unaffected by any forward declarations of `func`. A subroutine that is not defined may still be callable: its package may have an `AUTOLOAD` method that makes it spring into existence the first time that it is called; see <perlsub>.
Use of [`defined`](#defined-EXPR) on aggregates (hashes and arrays) is no longer supported. It used to report whether memory for that aggregate had ever been allocated. You should instead use a simple test for size:
```
if (@an_array) { print "has array elements\n" }
if (%a_hash) { print "has hash members\n" }
```
When used on a hash element, it tells you whether the value is defined, not whether the key exists in the hash. Use [`exists`](#exists-EXPR) for the latter purpose.
Examples:
```
print if defined $switch{D};
print "$val\n" while defined($val = pop(@ary));
die "Can't readlink $sym: $!"
    unless defined($value = readlink $sym);
sub foo { defined &$bar ? $bar->(@_) : die "No bar"; }
$debugging = 0 unless defined $debugging;
```
Note: Many folks tend to overuse [`defined`](#defined-EXPR) and are then surprised to discover that the number `0` and `""` (the zero-length string) are, in fact, defined values. For example, if you say
```
"ab" =~ /a(.*)b/;
```
The pattern match succeeds and `$1` is defined, although it matched "nothing". It didn't really fail to match anything. Rather, it matched something that happened to be zero characters long. This is all very above-board and honest. When a function returns an undefined value, it's an admission that it couldn't give you an honest answer. So you should use [`defined`](#defined-EXPR) only when questioning the integrity of what you're trying to do. At other times, a simple comparison to `0` or `""` is what you want.
See also [`undef`](#undef-EXPR), [`exists`](#exists-EXPR), [`ref`](#ref-EXPR).
delete EXPR Given an expression that specifies an element or slice of a hash, [`delete`](#delete-EXPR) deletes the specified elements from that hash so that [`exists`](#exists-EXPR) on that element no longer returns true. Setting a hash element to the undefined value does not remove its key, but deleting it does; see [`exists`](#exists-EXPR).
In list context, usually returns the value or values deleted, or the last such element in scalar context. The return list's length corresponds to that of the argument list: deleting non-existent elements returns the undefined value in their corresponding positions. Since Perl 5.28, a [key/value hash slice](perldata#Key%2FValue-Hash-Slices) can be passed to `delete`, and the return value is a list of key/value pairs (two elements for each item deleted from the hash).
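As an illustration of the key/value slice form, a minimal sketch (requires Perl 5.28 or later) might be:
```
use v5.28;
my %hash = (foo => 11, bar => 22, baz => 33);
my %gone = delete %hash{ qw(foo bar) };  # %gone is (foo => 11, bar => 22)
                                         # %hash now holds only baz => 33
```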
[`delete`](#delete-EXPR) may also be used on arrays and array slices, but its behavior is less straightforward. Although [`exists`](#exists-EXPR) will return false for deleted entries, deleting array elements never changes indices of existing values; use [`shift`](#shift-ARRAY) or [`splice`](#splice-ARRAY%2COFFSET%2CLENGTH%2CLIST) for that. However, if any deleted elements fall at the end of an array, the array's size shrinks to the position of the highest element that still tests true for [`exists`](#exists-EXPR), or to 0 if none do. In other words, an array won't have trailing nonexistent elements after a delete.
**WARNING:** Calling [`delete`](#delete-EXPR) on array values is strongly discouraged. The notion of deleting or checking the existence of Perl array elements is not conceptually coherent, and can lead to surprising behavior.
Deleting from [`%ENV`](perlvar#%25ENV) modifies the environment. Deleting from a hash tied to a DBM file deletes the entry from the DBM file. Deleting from a [`tied`](#tied-VARIABLE) hash or array may not necessarily return anything; it depends on the implementation of the [`tied`](#tied-VARIABLE) package's DELETE method, which may do whatever it pleases.
The `delete local EXPR` construct localizes the deletion to the current block at run time. Until the block exits, elements locally deleted temporarily no longer exist. See ["Localized deletion of elements of composite types" in perlsub](perlsub#Localized-deletion-of-elements-of-composite-types).
```
my %hash = (foo => 11, bar => 22, baz => 33);
my $scalar = delete $hash{foo}; # $scalar is 11
$scalar = delete @hash{qw(foo bar)}; # $scalar is 22
my @array = delete @hash{qw(foo baz)}; # @array is (undef,33)
```
The following (inefficiently) deletes all the values of %HASH and @ARRAY:
```
foreach my $key (keys %HASH) {
    delete $HASH{$key};
}

foreach my $index (0 .. $#ARRAY) {
    delete $ARRAY[$index];
}
```
And so do these:
```
delete @HASH{keys %HASH};
delete @ARRAY[0 .. $#ARRAY];
```
But both are slower than assigning the empty list or undefining %HASH or @ARRAY, which is the customary way to empty out an aggregate:
```
%HASH = (); # completely empty %HASH
undef %HASH; # forget %HASH ever existed
@ARRAY = (); # completely empty @ARRAY
undef @ARRAY; # forget @ARRAY ever existed
```
The EXPR can be arbitrarily complicated provided its final operation is an element or slice of an aggregate:
```
delete $ref->[$x][$y]{$key};
delete $ref->[$x][$y]->@{$key1, $key2, @morekeys};
delete $ref->[$x][$y][$index];
delete $ref->[$x][$y]->@[$index1, $index2, @moreindices];
```
die LIST [`die`](#die-LIST) raises an exception. Inside an [`eval`](#eval-EXPR) the exception is stuffed into [`$@`](perlvar#%24%40) and the [`eval`](#eval-EXPR) is terminated with the undefined value. If the exception is outside of all enclosing [`eval`](#eval-EXPR)s, then the uncaught exception is printed to `STDERR` and perl exits with an exit code indicating failure. If you need to exit the process with a specific exit code, see [`exit`](#exit-EXPR).
Equivalent examples:
```
die "Can't cd to spool: $!\n" unless chdir '/usr/spool/news';
chdir '/usr/spool/news' or die "Can't cd to spool: $!\n"
```
Most of the time, `die` is called with a string to use as the exception. You may either give a single non-reference operand to serve as the exception, or a list of two or more items, which will be stringified and concatenated to make the exception.
If the string exception does not end in a newline, the current script line number and input line number (if any) and a newline are appended to it. Note that the "input line number" (also known as "chunk") is subject to whatever notion of "line" happens to be currently in effect, and is also available as the special variable [`$.`](perlvar#%24.). See ["$/" in perlvar](perlvar#%24%2F) and ["$." in perlvar](perlvar#%24.).
Hint: sometimes appending `", stopped"` to your message will cause it to make better sense when the string `"at foo line 123"` is appended. Suppose you are running script "canasta".
```
die "/etc/games is no good";
die "/etc/games is no good, stopped";
```
produce, respectively
```
/etc/games is no good at canasta line 123.
/etc/games is no good, stopped at canasta line 123.
```
If LIST was empty or made an empty string, and [`$@`](perlvar#%24%40) already contains an exception value (typically from a previous [`eval`](#eval-EXPR)), then that value is reused after appending `"\t...propagated"`. This is useful for propagating exceptions:
```
eval { ... };
die unless $@ =~ /Expected exception/;
```
If LIST was empty or made an empty string, and [`$@`](perlvar#%24%40) contains an object reference that has a `PROPAGATE` method, that method will be called with additional file and line number parameters. The return value replaces the value in [`$@`](perlvar#%24%40); i.e., as if `$@ = eval { $@->PROPAGATE(__FILE__, __LINE__) };` were called.
If LIST was empty or made an empty string, and [`$@`](perlvar#%24%40) is also empty, then the string `"Died"` is used.
You can also call [`die`](#die-LIST) with a reference argument, and if this is trapped within an [`eval`](#eval-EXPR), [`$@`](perlvar#%24%40) contains that reference. This permits more elaborate exception handling using objects that maintain arbitrary state about the exception. Such a scheme is sometimes preferable to matching particular string values of [`$@`](perlvar#%24%40) with regular expressions.
Because Perl stringifies uncaught exception messages before display, you'll probably want to overload stringification operations on exception objects. See <overload> for details about that. The stringified message should be non-empty, and should end in a newline, in order to fit in with the treatment of string exceptions. Also, because an exception object reference cannot be stringified without destroying it, Perl doesn't attempt to append location or other information to a reference exception. If you want location information with a complex exception object, you'll have to arrange to put the location information into the object yourself.
Because [`$@`](perlvar#%24%40) is a global variable, be careful that analyzing an exception caught by `eval` doesn't replace the reference in the global variable. It's easiest to make a local copy of the reference before any manipulations. Here's an example:
```
use Scalar::Util "blessed";
eval { ... ; die Some::Module::Exception->new( FOO => "bar" ) };
if (my $ev_err = $@) {
    if (blessed($ev_err)
        && $ev_err->isa("Some::Module::Exception")) {
        # handle Some::Module::Exception
    }
    else {
        # handle all other possible exceptions
    }
}
```
If an uncaught exception results in interpreter exit, the exit code is determined from the values of [`$!`](perlvar#%24%21) and [`$?`](perlvar#%24%3F) with this pseudocode:
```
exit $! if $!; # errno
exit $? >> 8 if $? >> 8; # child exit status
exit 255; # last resort
```
As with [`exit`](#exit-EXPR), [`$?`](perlvar#%24%3F) is set prior to unwinding the call stack; any `DESTROY` or `END` handlers can then alter this value, and thus Perl's exit code.
The intent is to squeeze as much possible information about the likely cause into the limited space of the system exit code. However, as [`$!`](perlvar#%24%21) is the value of C's `errno`, which can be set by any system call, this means that the value of the exit code used by [`die`](#die-LIST) can be non-predictable, so should not be relied upon, other than to be non-zero.
You can arrange for a callback to be run just before the [`die`](#die-LIST) does its deed, by setting the [`$SIG{__DIE__}`](perlvar#%25SIG) hook. The associated handler is called with the exception as an argument, and can change the exception, if it sees fit, by calling [`die`](#die-LIST) again. See ["%SIG" in perlvar](perlvar#%25SIG) for details on setting [`%SIG`](perlvar#%25SIG) entries, and [`eval`](#eval-EXPR) for some examples. Although this feature was to be run only right before your program was to exit, this is not currently so: the [`$SIG{__DIE__}`](perlvar#%25SIG) hook is currently called even inside [`eval`](#eval-EXPR)ed blocks/strings! If one wants the hook to do nothing in such situations, put
```
die @_ if $^S;
```
as the first line of the handler (see ["$^S" in perlvar](perlvar#%24%5ES)). Because this promotes strange action at a distance, this counterintuitive behavior may be fixed in a future release.
See also [`exit`](#exit-EXPR), [`warn`](#warn-LIST), and the [Carp](carp) module.
do BLOCK Not really a function. Returns the value of the last command in the sequence of commands indicated by BLOCK. When modified by the `while` or `until` loop modifier, executes the BLOCK once before testing the loop condition. (On other statements the loop modifiers test the conditional first.)
`do BLOCK` does *not* count as a loop, so the loop control statements [`next`](#next-LABEL), [`last`](#last-LABEL), or [`redo`](#redo-LABEL) cannot be used to leave or restart the block. See <perlsyn> for alternative strategies.
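For example, a minimal sketch of the `do BLOCK ... while` idiom, where the body is guaranteed to run at least once before the condition is tested:
```
my $n = 0;
do {
    print "n = $n\n";
    $n++;
} while ($n < 3);  # prints 0, 1, 2; the block runs before the test
```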
do EXPR Uses the value of EXPR as a filename and executes the contents of the file as a Perl script:
```
# load the exact specified file (./ and ../ special-cased)
do '/foo/stat.pl';
do './stat.pl';
do '../foo/stat.pl';
# search for the named file within @INC
do 'stat.pl';
do 'foo/stat.pl';
```
`do './stat.pl'` is largely like
```
eval `cat stat.pl`;
```
except that it's more concise, runs no external processes, and keeps track of the current filename for error messages. It also differs in that code evaluated with `do FILE` cannot see lexicals in the enclosing scope; `eval STRING` does. It's the same, however, in that it does reparse the file every time you call it, so you probably don't want to do this inside a loop.
Using `do` with a relative path (except for *./* and *../*), like
```
do 'foo/stat.pl';
```
will search the [`@INC`](perlvar#%40INC) directories, and update [`%INC`](perlvar#%25INC) if the file is found. See ["@INC" in perlvar](perlvar#%40INC) and ["%INC" in perlvar](perlvar#%25INC) for these variables. In particular, note that whilst historically [`@INC`](perlvar#%40INC) contained '.' (the current directory) making these two cases equivalent, that is no longer necessarily the case, as '.' is not included in `@INC` by default in perl versions 5.26.0 onwards. Instead, perl will now warn:
```
do "stat.pl" failed, '.' is no longer in @INC;
did you mean do "./stat.pl"?
```
If [`do`](#do-EXPR) can read the file but cannot compile it, it returns [`undef`](#undef-EXPR) and sets an error message in [`$@`](perlvar#%24%40). If [`do`](#do-EXPR) cannot read the file, it returns undef and sets [`$!`](perlvar#%24%21) to the error. Always check [`$@`](perlvar#%24%40) first, as compilation could fail in a way that also sets [`$!`](perlvar#%24%21). If the file is successfully compiled, [`do`](#do-EXPR) returns the value of the last expression evaluated.
Inclusion of library modules is better done with the [`use`](#use-Module-VERSION-LIST) and [`require`](#require-VERSION) operators, which also do automatic error checking and raise an exception if there's a problem.
You might like to use [`do`](#do-EXPR) to read in a program configuration file. Manual error checking can be done this way:
```
# Read in config files: system first, then user.
# Beware of using relative pathnames here.
for $file ("/share/prog/defaults.rc",
"$ENV{HOME}/.someprogrc")
{
unless ($return = do $file) {
warn "couldn't parse $file: $@" if $@;
warn "couldn't do $file: $!" unless defined $return;
warn "couldn't run $file" unless $return;
}
}
```
dump LABEL
dump EXPR dump This function causes an immediate core dump. See also the **-u** command-line switch in [perlrun](perlrun#-u), which does the same thing. Primarily this is so that you can use the **undump** program (not supplied) to turn your core dump into an executable binary after having initialized all your variables at the beginning of the program. When the new binary is executed it will begin by executing a `goto LABEL` (with all the restrictions that [`goto`](#goto-LABEL) suffers). Think of it as a goto with an intervening core dump and reincarnation. If `LABEL` is omitted, restarts the program from the top. The `dump EXPR` form, available starting in Perl 5.18.0, allows a name to be computed at run time, being otherwise identical to `dump LABEL`.
**WARNING**: Any files opened at the time of the dump will *not* be open any more when the program is reincarnated, with possible resulting confusion by Perl.
This function is now largely obsolete, mostly because it's very hard to convert a core file into an executable. As of Perl 5.30, it must be invoked as `CORE::dump()`.
Unlike most named operators, this has the same precedence as assignment. It is also exempt from the looks-like-a-function rule, so `dump ("foo")."bar"` will cause "bar" to be part of the argument to [`dump`](#dump-LABEL).
Portability issues: ["dump" in perlport](perlport#dump).
each HASH
each ARRAY When called on a hash in list context, returns a 2-element list consisting of the key and value for the next element of a hash. In Perl 5.12 and later only, it will also return the index and value for the next element of an array so that you can iterate over it; older Perls consider this a syntax error. When called in scalar context, returns only the key (not the value) in a hash, or the index in an array.
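As an illustration, a minimal sketch of iterating an array with `each` (the array contents are only an example):
```
use v5.12;  # each on arrays requires Perl 5.12 or later
my @colors = qw(red green blue);
while (my ($i, $color) = each @colors) {
    print "$i: $color\n";
}
```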
Hash entries are returned in an apparently random order. The actual random order is specific to a given hash; the exact same series of operations on two hashes may result in a different order for each hash. Any insertion into the hash may change the order, as will any deletion, with the exception that the most recent key returned by [`each`](#each-HASH) or [`keys`](#keys-HASH) may be deleted without changing the order. So long as a given hash is unmodified you may rely on [`keys`](#keys-HASH), [`values`](#values-HASH) and [`each`](#each-HASH) to repeatedly return the same order as each other. See ["Algorithmic Complexity Attacks" in perlsec](perlsec#Algorithmic-Complexity-Attacks) for details on why hash order is randomized. Aside from the guarantees provided here the exact details of Perl's hash algorithm and the hash traversal order are subject to change in any release of Perl.
After [`each`](#each-HASH) has returned all entries from the hash or array, the next call to [`each`](#each-HASH) returns the empty list in list context and [`undef`](#undef-EXPR) in scalar context; the next call following *that* one restarts iteration. Each hash or array has its own internal iterator, accessed by [`each`](#each-HASH), [`keys`](#keys-HASH), and [`values`](#values-HASH). The iterator is implicitly reset when [`each`](#each-HASH) has reached the end as just described; it can be explicitly reset by calling [`keys`](#keys-HASH) or [`values`](#values-HASH) on the hash or array, or by referencing the hash (but not array) in list context. If you add or delete a hash's elements while iterating over it, the effect on the iterator is unspecified; for example, entries may be skipped or duplicated--so don't do that. Exception: It is always safe to delete the item most recently returned by [`each`](#each-HASH), so the following code works properly:
```
while (my ($key, $value) = each %hash) {
print $key, "\n";
delete $hash{$key}; # This is safe
}
```
Tied hashes may have a different ordering behaviour to perl's hash implementation.
The iterator used by `each` is attached to the hash or array, and is shared between all iteration operations applied to the same hash or array. Thus all uses of `each` on a single hash or array advance the same iterator location. All uses of `each` are also subject to having the iterator reset by any use of `keys` or `values` on the same hash or array, or by the hash (but not array) being referenced in list context. This makes `each`-based loops quite fragile: it is easy to arrive at such a loop with the iterator already part way through the object, or to accidentally clobber the iterator state during execution of the loop body. It's easy enough to explicitly reset the iterator before starting a loop, but there is no way to insulate the iterator state used by a loop from the iterator state used by anything else that might execute during the loop body. To avoid these problems, use a `foreach` loop rather than `while`-`each`.
This extends to using `each` on the result of an anonymous hash or array constructor. A new underlying array or hash is created each time, so `each` will always start iterating from scratch, e.g.:
```
# loops forever
while (my ($key, $value) = each @{ +{ a => 1 } }) {
print "$key=$value\n";
}
```
This prints out your environment like the [printenv(1)](http://man.he.net/man1/printenv) program, but in a different order:
```
while (my ($key,$value) = each %ENV) {
print "$key=$value\n";
}
```
Starting with Perl 5.14, an experimental feature allowed [`each`](#each-HASH) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
As of Perl 5.18 you can use a bare [`each`](#each-HASH) in a `while` loop, which will set [`$_`](perlvar#%24_) on every iteration. If either an `each` expression or an explicit assignment of an `each` expression to a scalar is used as a `while`/`for` condition, then the condition actually tests for definedness of the expression's value, not for its regular truth value.
```
while (each %ENV) {
print "$_=$ENV{$_}\n";
}
```
To avoid confusing would-be users of your code who are running earlier versions of Perl with mysterious syntax errors, put this sort of thing at the top of your file to signal that your code will work *only* on Perls of a recent vintage:
```
use v5.12; # so keys/values/each work on arrays
use v5.18; # so each assigns to $_ in a lone while test
```
See also [`keys`](#keys-HASH), [`values`](#values-HASH), and [`sort`](#sort-SUBNAME-LIST).
eof FILEHANDLE
eof () eof Returns 1 if the next read on FILEHANDLE will return end of file *or* if FILEHANDLE is not open. FILEHANDLE may be an expression whose value gives the real filehandle. (Note that this function actually reads a character and then `ungetc`s it, so isn't useful in an interactive context.) Do not read from a terminal file (or call `eof(FILEHANDLE)` on it) after end-of-file is reached. File types such as terminals may lose the end-of-file condition if you do.
An [`eof`](#eof-FILEHANDLE) without an argument uses the last file read. Using [`eof()`](#eof-FILEHANDLE) with empty parentheses is different. It refers to the pseudo file formed from the files listed on the command line and accessed via the `<>` operator. Since `<>` isn't explicitly opened, as a normal filehandle is, an [`eof()`](#eof-FILEHANDLE) before `<>` has been used will cause [`@ARGV`](perlvar#%40ARGV) to be examined to determine if input is available. Similarly, an [`eof()`](#eof-FILEHANDLE) after `<>` has returned end-of-file will assume you are processing another [`@ARGV`](perlvar#%40ARGV) list, and if you haven't set [`@ARGV`](perlvar#%40ARGV), will read input from `STDIN`; see ["I/O Operators" in perlop](perlop#I%2FO-Operators).
In a `while (<>)` loop, [`eof`](#eof-FILEHANDLE) or `eof(ARGV)` can be used to detect the end of each file, whereas [`eof()`](#eof-FILEHANDLE) will detect the end of the very last file only. Examples:
```
# reset line numbering on each input file
while (<>) {
    next if /^\s*#/; # skip comments
    print "$.\t$_";
} continue {
    close ARGV if eof; # Not eof()!
}

# insert dashes just before last line of last file
while (<>) {
    if (eof()) { # check for end of last file
        print "--------------\n";
    }
    print;
    last if eof(); # needed if we're reading from a terminal
}
```
Practical hint: you almost never need to use [`eof`](#eof-FILEHANDLE) in Perl, because the input operators typically return [`undef`](#undef-EXPR) when they run out of data or encounter an error.
eval EXPR
eval BLOCK eval `eval` in all its forms is used to execute a little Perl program, trapping any errors encountered so they don't crash the calling program.
Plain `eval` with no argument is just `eval EXPR`, where the expression is understood to be contained in [`$_`](perlvar#%24_). Thus there are only two real `eval` forms; the one with an EXPR is often called "string eval". In a string eval, the value of the expression (which is itself determined within scalar context) is first parsed, and if there were no errors, executed as a block within the lexical context of the current Perl program. This form is typically used to delay parsing and subsequent execution of the text of EXPR until run time. Note that the value is parsed every time the `eval` executes.
The other form is called "block eval". It is less general than string eval, but the code within the BLOCK is parsed only once (at the same time the code surrounding the `eval` itself was parsed) and executed within the context of the current Perl program. This form is typically used to trap exceptions more efficiently than the first, while also providing the benefit of checking the code within BLOCK at compile time. BLOCK is parsed and compiled just once. Since errors are trapped, it often is used to check if a given feature is available.
In both forms, the value returned is the value of the last expression evaluated inside the mini-program; a return statement may also be used, just as with subroutines. The expression providing the return value is evaluated in void, scalar, or list context, depending on the context of the `eval` itself. See [`wantarray`](#wantarray) for more on how the evaluation context can be determined.
If there is a syntax error or runtime error, or a [`die`](#die-LIST) statement is executed, `eval` returns [`undef`](#undef-EXPR) in scalar context, or an empty list in list context, and [`$@`](perlvar#%24%40) is set to the error message. (Prior to 5.16, a bug caused [`undef`](#undef-EXPR) to be returned in list context for syntax errors, but not for runtime errors.) If there was no error, [`$@`](perlvar#%24%40) is set to the empty string. A control flow operator like [`last`](#last-LABEL) or [`goto`](#goto-LABEL) can bypass the setting of [`$@`](perlvar#%24%40). Beware that using `eval` neither silences Perl from printing warnings to STDERR, nor does it stuff the text of warning messages into [`$@`](perlvar#%24%40). To do either of those, you have to use the [`$SIG{__WARN__}`](perlvar#%25SIG) facility, or turn off warnings inside the BLOCK or EXPR using `no warnings 'all'`. See [`warn`](#warn-LIST), <perlvar>, and <warnings>.
Note that, because `eval` traps otherwise-fatal errors, it is useful for determining whether a particular feature (such as [`socket`](#socket-SOCKET%2CDOMAIN%2CTYPE%2CPROTOCOL) or [`symlink`](#symlink-OLDFILE%2CNEWFILE)) is implemented. It is also Perl's exception-trapping mechanism, where the [`die`](#die-LIST) operator is used to raise exceptions.
Before Perl 5.14, the assignment to [`$@`](perlvar#%24%40) occurred before restoration of localized variables, which means that for your code to run on older versions, a temporary is required if you want to mask some, but not all errors:
```
# alter $@ on nefarious repugnancy only
{
    my $e;
    {
        local $@; # protect existing $@
        eval { test_repugnancy() };
        # $@ =~ /nefarious/ and die $@; # Perl 5.14 and higher only
        $@ =~ /nefarious/ and $e = $@;
    }
    die $e if defined $e
}
```
There are some different considerations for each form:
String eval Since the return value of EXPR is executed as a block within the lexical context of the current Perl program, any outer lexical variables are visible to it, and any package variable settings or subroutine and format definitions remain afterwards.
Under the [`"unicode_eval"` feature](feature#The-%27unicode_eval%27-and-%27evalbytes%27-features)
If this feature is enabled (which is the default under a `use 5.16` or higher declaration), Perl assumes that EXPR is a character string. Any `use utf8` or `no utf8` declarations within the string thus have no effect. Source filters are forbidden as well. (`unicode_strings`, however, can appear within the string.)
See also the [`evalbytes`](#evalbytes-EXPR) operator, which works properly with source filters.
Outside the `"unicode_eval"` feature In this case, the behavior is problematic and is not so easily described. Here are two bugs that cannot easily be fixed without breaking existing programs:
* Perl's internal storage of EXPR affects the behavior of the executed code. For example:
```
my $v = eval "use utf8; '$expr'";
```
If $expr is `"\xc4\x80"` (U+0100 in UTF-8), then the value stored in `$v` will depend on whether Perl stores $expr "upgraded" (cf. <utf8>) or not:
+ If upgraded, `$v` will be `"\xc4\x80"` (i.e., the `use utf8` has no effect.)
+ If non-upgraded, `$v` will be `"\x{100}"`.
This is undesirable since being upgraded or not should not affect a string's behavior.
* Source filters activated within `eval` leak out into whichever file scope is currently being compiled. To give an example with the CPAN module <Semi::Semicolons>:
```
BEGIN { eval "use Semi::Semicolons; # not filtered" }
# filtered here!
```
[`evalbytes`](#evalbytes-EXPR) fixes that to work the way one would expect:
```
use feature "evalbytes";
BEGIN { evalbytes "use Semi::Semicolons; # filtered" }
# not filtered
```
Problems can arise if the string expands a scalar containing a floating point number. That scalar can expand to letters, such as `"NaN"` or `"Infinity"`; or, within the scope of a [`use locale`](locale), the decimal point character may be something other than a dot (such as a comma). None of these are likely to parse as you are likely expecting.
You should be especially careful to remember what's being looked at when:
```
eval $x; # CASE 1
eval "$x"; # CASE 2
eval '$x'; # CASE 3
eval { $x }; # CASE 4
eval "\$$x++"; # CASE 5
$$x++; # CASE 6
```
Cases 1 and 2 above behave identically: they run the code contained in the variable $x. (Although case 2 has misleading double quotes making the reader wonder what else might be happening (nothing is).) Cases 3 and 4 likewise behave in the same way: they run the code `'$x'`, which does nothing but return the value of $x. (Case 4 is preferred for purely visual reasons, but it also has the advantage of compiling at compile-time instead of at run-time.) Case 5 is a place where normally you *would* like to use double quotes, except that in this particular situation, you can just use symbolic references instead, as in case 6.
An `eval ''` executed within a subroutine defined in the `DB` package doesn't see the usual surrounding lexical scope, but rather the scope of the first non-DB piece of code that called it. You don't normally need to worry about this unless you are writing a Perl debugger.
The final semicolon, if any, may be omitted from the value of EXPR.
Block eval If the code to be executed doesn't vary, you may use the eval-BLOCK form to trap run-time errors without incurring the penalty of recompiling each time. The error, if any, is still returned in [`$@`](perlvar#%24%40). Examples:
```
# make divide-by-zero nonfatal
eval { $answer = $a / $b; }; warn $@ if $@;
# same thing, but less efficient
eval '$answer = $a / $b'; warn $@ if $@;
# a compile-time error
eval { $answer = }; # WRONG
# a run-time error
eval '$answer ='; # sets $@
```
If you want to trap errors when loading an XS module, some problems with the binary interface (such as Perl version skew) may be fatal even with `eval` unless `$ENV{PERL_DL_NONLAZY}` is set. See [perlrun](perlrun#PERL_DL_NONLAZY).
Using the `eval {}` form as an exception trap in libraries does have some issues. Due to the current arguably broken state of `__DIE__` hooks, you may wish not to trigger any `__DIE__` hooks that user code may have installed. You can use the `local $SIG{__DIE__}` construct for this purpose, as this example shows:
```
# a private exception trap for divide-by-zero
eval { local $SIG{'__DIE__'}; $answer = $a / $b; };
warn $@ if $@;
```
This is especially significant, given that `__DIE__` hooks can call [`die`](#die-LIST) again, which has the effect of changing their error messages:
```
# __DIE__ hooks may modify error messages
{
    local $SIG{'__DIE__'} =
        sub { (my $x = $_[0]) =~ s/foo/bar/g; die $x };
    eval { die "foo lives here" };
    print $@ if $@; # prints "bar lives here"
}
```
Because this promotes action at a distance, this counterintuitive behavior may be fixed in a future release.
`eval BLOCK` does *not* count as a loop, so the loop control statements [`next`](#next-LABEL), [`last`](#last-LABEL), or [`redo`](#redo-LABEL) cannot be used to leave or restart the block.
The final semicolon, if any, may be omitted from within the BLOCK.
evalbytes EXPR evalbytes This function is similar to a [string eval](#eval-EXPR), except it always parses its argument (or [`$_`](perlvar#%24_) if EXPR is omitted) as a byte string. If the string contains any code points above 255, then it cannot be a byte string, and the `evalbytes` will fail with the error stored in `$@`.
`use utf8` and `no utf8` within the string have their usual effect.
Source filters activated within the evaluated code apply to the code itself.
[`evalbytes`](#evalbytes-EXPR) is available starting in Perl v5.16. To access it, you must say `CORE::evalbytes`, but you can omit the `CORE::` if the [`"evalbytes"` feature](feature#The-%27unicode_eval%27-and-%27evalbytes%27-features) is enabled. This is enabled automatically with a `use v5.16` (or higher) declaration in the current scope.
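A minimal sketch of calling it on code held in a byte string (the code string is only an example):
```
use v5.16;                          # enables the "evalbytes" feature
my $code = 'print 2 + 2, "\n";';    # Perl source kept as a byte string
evalbytes $code;
die $@ if $@;
```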
exec LIST
exec PROGRAM LIST The [`exec`](#exec-LIST) function executes a system command *and never returns*; use [`system`](#system-LIST) instead of [`exec`](#exec-LIST) if you want it to return. It fails and returns false only if the command does not exist *and* it is executed directly instead of via your system's command shell (see below).
Since it's a common mistake to use [`exec`](#exec-LIST) instead of [`system`](#system-LIST), Perl warns you if [`exec`](#exec-LIST) is called in void context and if there is a following statement that isn't [`die`](#die-LIST), [`warn`](#warn-LIST), or [`exit`](#exit-EXPR) (if <warnings> are enabled--but you always do that, right?). If you *really* want to follow an [`exec`](#exec-LIST) with some other statement, you can use one of these styles to avoid the warning:
```
exec ('foo') or print STDERR "couldn't exec foo: $!";
{ exec ('foo') }; print STDERR "couldn't exec foo: $!";
```
If there is more than one argument in LIST, this calls [execvp(3)](http://man.he.net/man3/execvp) with the arguments in LIST. If there is only one element in LIST, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is `/bin/sh -c` on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to `execvp`, which is more efficient. Examples:
```
exec '/bin/echo', 'Your arguments are: ', @ARGV;
exec "sort $outfile | uniq";
```
If you don't really want to execute the first argument, but want to lie to the program you are executing about its own name, you can specify the program you actually want to run as an "indirect object" (without a comma) in front of the LIST, as in `exec PROGRAM LIST`. (This always forces interpretation of the LIST as a multivalued list, even if there is only a single scalar in the list.) Example:
```
my $shell = '/bin/csh';
exec $shell '-sh'; # pretend it's a login shell
```
or, more directly,
```
exec {'/bin/csh'} '-sh'; # pretend it's a login shell
```
When the arguments get executed via the system shell, results are subject to its quirks and capabilities. See ["`STRING`" in perlop](perlop#%60STRING%60) for details.
Using an indirect object with [`exec`](#exec-LIST) or [`system`](#system-LIST) is also more secure. This usage (which also works fine with [`system`](#system-LIST)) forces interpretation of the arguments as a multivalued list, even if the list had just one argument. That way you're safe from the shell expanding wildcards or splitting up words with whitespace in them.
```
my @args = ( "echo surprise" );
exec @args;               # subject to shell escapes
                          # if @args == 1
exec { $args[0] } @args;  # safe even with one-arg list
```
The first version, the one without the indirect object, ran the *echo* program, passing it `"surprise"` as an argument. The second version didn't; it tried to run a program named *"echo surprise"*, didn't find it, and set [`$?`](perlvar#%24%3F) to a non-zero value indicating failure.
On Windows, only the `exec PROGRAM LIST` indirect object syntax will reliably avoid using the shell; `exec LIST`, even with more than one element, will fall back to the shell if the first spawn fails.
Perl attempts to flush all files opened for output before the exec, but this may not be supported on some platforms (see <perlport>). To be safe, you may need to set [`$|`](perlvar#%24%7C) (`$AUTOFLUSH` in [English](english)) or call the `autoflush` method of [`IO::Handle`](IO::Handle#METHODS) on any open handles to avoid lost output.
Note that [`exec`](#exec-LIST) will not call your `END` blocks, nor will it invoke `DESTROY` methods on your objects.
Portability issues: ["exec" in perlport](perlport#exec).
exists EXPR Given an expression that specifies an element of a hash, returns true if the specified element in the hash has ever been initialized, even if the corresponding value is undefined.
```
print "Exists\n" if exists $hash{$key};
print "Defined\n" if defined $hash{$key};
print "True\n" if $hash{$key};
```
exists may also be called on array elements, but its behavior is much less obvious and is strongly tied to the use of [`delete`](#delete-EXPR) on arrays.
**WARNING:** Calling [`exists`](#exists-EXPR) on array values is strongly discouraged. The notion of deleting or checking the existence of Perl array elements is not conceptually coherent, and can lead to surprising behavior.
```
print "Exists\n" if exists $array[$index];
print "Defined\n" if defined $array[$index];
print "True\n" if $array[$index];
```
A hash or array element can be true only if it's defined and defined only if it exists, but the reverse doesn't necessarily hold true.
Given an expression that specifies the name of a subroutine, returns true if the specified subroutine has ever been declared, even if it is undefined. Mentioning a subroutine name for exists or defined does not count as declaring it. Note that a subroutine that does not exist may still be callable: its package may have an `AUTOLOAD` method that makes it spring into existence the first time that it is called; see <perlsub>.
```
print "Exists\n" if exists &subroutine;
print "Defined\n" if defined &subroutine;
```
Note that the EXPR can be arbitrarily complicated as long as the final operation is a hash or array key lookup or subroutine name:
```
if (exists $ref->{A}->{B}->{$key}) { }
if (exists $hash{A}{B}{$key}) { }
if (exists $ref->{A}->{B}->[$ix]) { }
if (exists $hash{A}{B}[$ix]) { }
if (exists &{$ref->{A}{B}{$key}}) { }
```
Although the most deeply nested array or hash element will not spring into existence just because its existence was tested, any intervening ones will. Thus `$ref->{"A"}` and `$ref->{"A"}->{"B"}` will spring into existence due to the existence test for the `$key` element above. This happens anywhere the arrow operator is used, including even here:
```
undef $ref;
if (exists $ref->{"Some key"}) { }
print $ref; # prints HASH(0x80d3d5c)
```
Use of a subroutine call, rather than a subroutine name, as an argument to [`exists`](#exists-EXPR) is an error.
```
exists &sub;    # OK
exists &sub();  # Error
```
exit EXPR exit Evaluates EXPR and exits immediately with that value. Example:
```
my $ans = <STDIN>;
exit 0 if $ans =~ /^[Xx]/;
```
See also [`die`](#die-LIST). If EXPR is omitted, exits with `0` status. The only universally recognized values for EXPR are `0` for success and `1` for error; other values are subject to interpretation depending on the environment in which the Perl program is running. For example, exiting 69 (EX\_UNAVAILABLE) from a *sendmail* incoming-mail filter will cause the mailer to return the item undelivered, but that's not true everywhere.
Don't use [`exit`](#exit-EXPR) to abort a subroutine if there's any chance that someone might want to trap whatever error happened. Use [`die`](#die-LIST) instead, which can be trapped by an [`eval`](#eval-EXPR).
The [`exit`](#exit-EXPR) function does not always exit immediately. It calls any defined `END` routines first, but these `END` routines may not themselves abort the exit. Likewise any object destructors that need to be called are called before the real exit. `END` routines and destructors can change the exit status by modifying [`$?`](perlvar#%24%3F). If this is a problem, you can call [`POSIX::_exit($status)`](posix#_exit) to avoid `END` and destructor processing. See <perlmod> for details.
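For instance, a minimal sketch of bypassing `END` blocks and destructors (the status value 3 is arbitrary):
```
use POSIX ();
POSIX::_exit(3);   # exits immediately; END blocks and DESTROY methods are skipped
```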
Portability issues: ["exit" in perlport](perlport#exit).
exp EXPR exp Returns *e* (the natural logarithm base) to the power of EXPR. If EXPR is omitted, gives `exp($_)`.
fc EXPR fc Returns the casefolded version of EXPR. This is the internal function implementing the `\F` escape in double-quoted strings.
Casefolding is the process of mapping strings to a form where case differences are erased; comparing two strings in their casefolded form is effectively a way of asking if two strings are equal, regardless of case.
Roughly, if you ever found yourself writing this
```
lc($this) eq lc($that) # Wrong!
# or
uc($this) eq uc($that) # Also wrong!
# or
$this =~ /^\Q$that\E\z/i # Right!
```
Now you can write
```
fc($this) eq fc($that)
```
And get the correct results.
Perl only implements the full form of casefolding, but you can access the simple folds using ["**casefold()**" in Unicode::UCD](Unicode::UCD#casefold%28%29) and ["**prop\_invmap()**" in Unicode::UCD](Unicode::UCD#prop_invmap%28%29). For further information on casefolding, refer to the Unicode Standard, specifically sections 3.13 `Default Case Operations`, 4.2 `Case-Normative`, and 5.18 `Case Mappings`, available at <https://www.unicode.org/versions/latest/>, as well as the Case Charts available at <https://www.unicode.org/charts/case/>.
If EXPR is omitted, uses [`$_`](perlvar#%24_).
This function behaves the same way under various pragmas, such as within [`"use feature 'unicode_strings"`](feature#The-%27unicode_strings%27-feature), as [`lc`](#lc-EXPR) does, with the single exception of [`fc`](#fc-EXPR) of *LATIN CAPITAL LETTER SHARP S* (U+1E9E) within the scope of [`use locale`](locale). The foldcase of this character would normally be `"ss"`, but as explained in the [`lc`](#lc-EXPR) section, case changes that cross the 255/256 boundary are problematic under locales, and are hence prohibited. Therefore, this function under locale returns instead the string `"\x{17F}\x{17F}"`, which is the *LATIN SMALL LETTER LONG S*. Since that character itself folds to `"s"`, the string of two of them together should be equivalent to a single U+1E9E when foldcased.
While the Unicode Standard defines two additional forms of casefolding, one for Turkic languages and one that never maps one character into multiple characters, these are not provided by the Perl core. However, the CPAN module [`Unicode::Casing`](Unicode::Casing) may be used to provide an implementation.
[`fc`](#fc-EXPR) is available only if the [`"fc"` feature](feature#The-%27fc%27-feature) is enabled or if it is prefixed with `CORE::`. The [`"fc"` feature](feature#The-%27fc%27-feature) is enabled automatically with a `use v5.16` (or higher) declaration in the current scope.
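For instance, a minimal sketch of case-insensitive deduplication using [`fc`](#fc-EXPR) (the word list is only an example):
```
use v5.16;   # enables the "fc" feature
my @words = qw(Fred FRED fred Barney);
my %seen;
my @unique = grep { !$seen{ fc($_) }++ } @words;  # ("Fred", "Barney")
```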
fcntl FILEHANDLE,FUNCTION,SCALAR Implements the [fcntl(2)](http://man.he.net/man2/fcntl) function. You'll probably have to say
```
use Fcntl;
```
first to get the correct constant definitions. Argument processing and value returned work just like [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR) below. For example:
```
use Fcntl;
my $flags = fcntl($filehandle, F_GETFL, 0)
or die "Can't fcntl F_GETFL: $!";
```
You don't have to check for [`defined`](#defined-EXPR) on the return from [`fcntl`](#fcntl-FILEHANDLE%2CFUNCTION%2CSCALAR). Like [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR), it maps a `0` return from the system call into `"0 but true"` in Perl. This string is true in boolean context and `0` in numeric context. It is also exempt from the normal [`Argument "..." isn't numeric`](perldiag#Argument-%22%25s%22-isn%27t-numeric%25s) <warnings> on improper numeric conversions.
Note that [`fcntl`](#fcntl-FILEHANDLE%2CFUNCTION%2CSCALAR) raises an exception if used on a machine that doesn't implement [fcntl(2)](http://man.he.net/man2/fcntl). See the [Fcntl](fcntl) module or your [fcntl(2)](http://man.he.net/man2/fcntl) manpage to learn what functions are available on your system.
Here's an example of setting a filehandle named `$REMOTE` to be non-blocking at the system level. You'll have to negotiate [`$|`](perlvar#%24%7C) on your own, though.
```
use Fcntl qw(F_GETFL F_SETFL O_NONBLOCK);
my $flags = fcntl($REMOTE, F_GETFL, 0)
    or die "Can't get flags for the socket: $!\n";
fcntl($REMOTE, F_SETFL, $flags | O_NONBLOCK)
    or die "Can't set flags for the socket: $!\n";
```
Portability issues: ["fcntl" in perlport](perlport#fcntl).
\_\_FILE\_\_ A special token that returns the name of the file in which it occurs. It can be altered by the mechanism described at ["Plain Old Comments (Not!)" in perlsyn](perlsyn#Plain-Old-Comments-%28Not%21%29).
fileno FILEHANDLE
fileno DIRHANDLE Returns the file descriptor for a filehandle or directory handle, or undefined if the filehandle is not open. If there is no real file descriptor at the OS level, as can happen with filehandles connected to memory objects via [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) with a reference for the third argument, -1 is returned.
This is mainly useful for constructing bitmaps for [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT) and low-level POSIX tty-handling operations. If FILEHANDLE is an expression, the value is taken as an indirect filehandle, generally its name.
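As an illustration of the in-memory case, a minimal sketch:
```
open(my $mem, "<", \"just an in-memory string")
    or die "Can't open in-memory handle: $!";
print fileno($mem), "\n";   # prints -1: no OS-level descriptor
```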
You can use this to find out whether two handles refer to the same underlying descriptor:
```
if (fileno($this) != -1 && fileno($this) == fileno($that)) {
print "\$this and \$that are dups\n";
} elsif (fileno($this) != -1 && fileno($that) != -1) {
print "\$this and \$that have different " .
"underlying file descriptors\n";
} else {
print "At least one of \$this and \$that does " .
"not have a real file descriptor\n";
}
```
The behavior of [`fileno`](#fileno-FILEHANDLE) on a directory handle depends on the operating system. On a system with [dirfd(3)](http://man.he.net/man3/dirfd) or similar, [`fileno`](#fileno-FILEHANDLE) on a directory handle returns the underlying file descriptor associated with the handle; on systems with no such support, it returns the undefined value, and sets [`$!`](perlvar#%24%21) (errno).
flock FILEHANDLE,OPERATION Calls [flock(2)](http://man.he.net/man2/flock), or an emulation of it, on FILEHANDLE. Returns true for success, false on failure. Produces a fatal error if used on a machine that doesn't implement [flock(2)](http://man.he.net/man2/flock), [fcntl(2)](http://man.he.net/man2/fcntl) locking, or [lockf(3)](http://man.he.net/man3/lockf). [`flock`](#flock-FILEHANDLE%2COPERATION) is Perl's portable file-locking interface, although it locks entire files only, not records.
Two potentially non-obvious but traditional [`flock`](#flock-FILEHANDLE%2COPERATION) semantics are that it waits indefinitely until the lock is granted, and that its locks are **merely advisory**. Such discretionary locks are more flexible, but offer fewer guarantees. This means that programs that do not also use [`flock`](#flock-FILEHANDLE%2COPERATION) may modify files locked with [`flock`](#flock-FILEHANDLE%2COPERATION). See <perlport>, your port's specific documentation, and your system-specific local manpages for details. It's best to assume traditional behavior if you're writing portable programs. (But if you're not, you should as always feel perfectly free to write for your own system's idiosyncrasies (sometimes called "features"). Slavish adherence to portability concerns shouldn't get in the way of your getting your job done.)
OPERATION is one of LOCK\_SH, LOCK\_EX, or LOCK\_UN, possibly combined with LOCK\_NB. These constants are traditionally valued 1, 2, 8 and 4, but you can use the symbolic names if you import them from the [Fcntl](fcntl) module, either individually, or as a group using the `:flock` tag. LOCK\_SH requests a shared lock, LOCK\_EX requests an exclusive lock, and LOCK\_UN releases a previously requested lock. If LOCK\_NB is bitwise-or'ed with LOCK\_SH or LOCK\_EX, then [`flock`](#flock-FILEHANDLE%2COPERATION) returns immediately rather than blocking waiting for the lock; check the return status to see if you got it.
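For instance, a minimal sketch of a non-blocking shared lock (the filename is only an example):
```
use Fcntl qw(:flock);
open(my $fh, "<", "some.file") or die "Can't open some.file: $!";
if (flock($fh, LOCK_SH | LOCK_NB)) {
    print "Got the shared lock without waiting\n";
}
else {
    warn "Lock not available right now: $!\n";
}
```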
To avoid the possibility of miscoordination, Perl now flushes FILEHANDLE before locking or unlocking it.
Note that the emulation built with [lockf(3)](http://man.he.net/man3/lockf) doesn't provide shared locks, and it requires that FILEHANDLE be open with write intent. These are the semantics that [lockf(3)](http://man.he.net/man3/lockf) implements. Most if not all systems implement [lockf(3)](http://man.he.net/man3/lockf) in terms of [fcntl(2)](http://man.he.net/man2/fcntl) locking, though, so the differing semantics shouldn't bite too many people.
Note that the [fcntl(2)](http://man.he.net/man2/fcntl) emulation of [flock(3)](http://man.he.net/man3/flock) requires that FILEHANDLE be open with read intent to use LOCK\_SH and requires that it be open with write intent to use LOCK\_EX.
Note also that some versions of [`flock`](#flock-FILEHANDLE%2COPERATION) cannot lock things over the network; you would need to use the more system-specific [`fcntl`](#fcntl-FILEHANDLE%2CFUNCTION%2CSCALAR) for that. If you like you can force Perl to ignore your system's [flock(2)](http://man.he.net/man2/flock) function, and so provide its own [fcntl(2)](http://man.he.net/man2/fcntl)-based emulation, by passing the switch `-Ud_flock` to the *Configure* program when you configure and build a new Perl.
Here's a mailbox appender for BSD systems.
```
# import LOCK_* and SEEK_END constants
use Fcntl qw(:flock SEEK_END);
sub lock {
    my ($fh) = @_;
    flock($fh, LOCK_EX) or die "Cannot lock mailbox - $!\n";
    # and, in case we're running on a very old UNIX
    # variant without the modern O_APPEND semantics...
    seek($fh, 0, SEEK_END) or die "Cannot seek - $!\n";
}

sub unlock {
    my ($fh) = @_;
    flock($fh, LOCK_UN) or die "Cannot unlock mailbox - $!\n";
}

open(my $mbox, ">>", "/usr/spool/mail/$ENV{'USER'}")
    or die "Can't open mailbox: $!";
lock($mbox);
print $mbox $msg,"\n\n";
unlock($mbox);
```
On systems that support a real [flock(2)](http://man.he.net/man2/flock), locks are inherited across [`fork`](#fork) calls, whereas those that must resort to the more capricious [fcntl(2)](http://man.he.net/man2/fcntl) function lose their locks, making it seriously harder to write servers.
See also [DB\_File](db_file) for other [`flock`](#flock-FILEHANDLE%2COPERATION) examples.
Portability issues: ["flock" in perlport](perlport#flock).
fork Does a [fork(2)](http://man.he.net/man2/fork) system call to create a new process running the same program at the same point. It returns the child pid to the parent process, `0` to the child process, or [`undef`](#undef-EXPR) if the fork is unsuccessful. File descriptors (and sometimes locks on those descriptors) are shared, while everything else is copied. On most systems supporting [fork(2)](http://man.he.net/man2/fork), great care has gone into making it extremely efficient (for example, using copy-on-write technology on data pages), making it the dominant paradigm for multitasking over the last few decades.
Perl attempts to flush all files opened for output before forking the child process, but this may not be supported on some platforms (see <perlport>). To be safe, you may need to set [`$|`](perlvar#%24%7C) (`$AUTOFLUSH` in [English](english)) or call the `autoflush` method of [`IO::Handle`](IO::Handle#METHODS) on any open handles to avoid duplicate output.
If you [`fork`](#fork) without ever waiting on your children, you will accumulate zombies. On some systems, you can avoid this by setting [`$SIG{CHLD}`](perlvar#%25SIG) to `"IGNORE"`. See also <perlipc> for more examples of forking and reaping moribund children.
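For example, a minimal fork-and-reap sketch along those lines (running *date* in the child is only an example):
```
defined(my $pid = fork()) or die "Cannot fork: $!";
if ($pid == 0) {
    # child process
    exec 'date' or die "Cannot exec date: $!";
}
# parent process
waitpid($pid, 0);   # reap the child so it doesn't become a zombie
```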
Note that if your forked child inherits system file descriptors like STDIN and STDOUT that are actually connected by a pipe or socket, even if you exit, then the remote server (such as, say, a CGI script or a backgrounded job launched from a remote shell) won't think you're done. You should reopen those to */dev/null* if it's any issue.
On some platforms such as Windows, where the [fork(2)](http://man.he.net/man2/fork) system call is not available, Perl can be built to emulate [`fork`](#fork) in the Perl interpreter. The emulation is designed, at the level of the Perl program, to be as compatible as possible with the "Unix" [fork(2)](http://man.he.net/man2/fork). However it has limitations that have to be considered in code intended to be portable. See <perlfork> for more details.
Portability issues: ["fork" in perlport](perlport#fork).
format Declare a picture format for use by the [`write`](#write-FILEHANDLE) function. For example:
```
format Something =
    Test: @<<<<<<<< @||||| @>>>>>
          $str,     $%,    '$' . int($num)
.
$str = "widget";
$num = $cost/$quantity;
$~ = 'Something';
write;
```
See <perlform> for many details and examples.
formline PICTURE,LIST This is an internal function used by [`format`](#format)s, though you may call it, too. It formats (see <perlform>) a list of values according to the contents of PICTURE, placing the output into the format output accumulator, [`$^A`](perlvar#%24%5EA) (or `$ACCUMULATOR` in [English](english)). Eventually, when a [`write`](#write-FILEHANDLE) is done, the contents of [`$^A`](perlvar#%24%5EA) are written to some filehandle. You could also read [`$^A`](perlvar#%24%5EA) and then set [`$^A`](perlvar#%24%5EA) back to `""`. Note that a format typically does one [`formline`](#formline-PICTURE%2CLIST) per line of form, but the [`formline`](#formline-PICTURE%2CLIST) function itself doesn't care how many newlines are embedded in the PICTURE. This means that the `~` and `~~` tokens treat the entire PICTURE as a single line. You may therefore need to use multiple formlines to implement a single record format, just like the [`format`](#format) compiler.
Be careful if you put double quotes around the picture, because an `@` character may be taken to mean the beginning of an array name. [`formline`](#formline-PICTURE%2CLIST) always returns true. See <perlform> for other examples.
If you are trying to use this instead of [`write`](#write-FILEHANDLE) to capture the output, you may find it easier to open a filehandle to a scalar (`open my $fh, ">", \$output`) and write to that instead.
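If you just want to experiment with [`formline`](#formline-PICTURE%2CLIST) directly, a minimal sketch might be (the picture is only an example):
```
my $picture = '@<<<<<<<<<< @>>>>>' . "\n";
formline($picture, "widget", 42);
print $^A;   # the formatted line accumulated so far
$^A = "";    # reset the accumulator
```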
getc FILEHANDLE getc Returns the next character from the input file attached to FILEHANDLE, or the undefined value at end of file or if there was an error (in the latter case [`$!`](perlvar#%24%21) is set). If FILEHANDLE is omitted, reads from STDIN. This is not particularly efficient. However, it cannot be used by itself to fetch single characters without waiting for the user to hit enter. For that, try something more like:
```
if ($BSD_STYLE) {
    system "stty cbreak </dev/tty >/dev/tty 2>&1";
}
else {
    system "stty", '-icanon', 'eol', "\001";
}
my $key = getc(STDIN);
if ($BSD_STYLE) {
    system "stty -cbreak </dev/tty >/dev/tty 2>&1";
}
else {
    system 'stty', 'icanon', 'eol', '^@'; # ASCII NUL
}
print "\n";
```
Determination of whether `$BSD_STYLE` should be set is left as an exercise to the reader.
The [`POSIX::getattr`](posix#getattr) function can do this more portably on systems purporting POSIX compliance. See also the [`Term::ReadKey`](Term::ReadKey) module on CPAN.
getlogin This implements the C library function of the same name, which on most systems returns the current login from */etc/utmp*, if any. If it returns the empty string, use [`getpwuid`](#getpwuid-UID).
```
my $login = getlogin || getpwuid($<) || "Kilroy";
```
Do not consider [`getlogin`](#getlogin) for authentication: it is not as secure as [`getpwuid`](#getpwuid-UID).
Portability issues: ["getlogin" in perlport](perlport#getlogin).
getpeername SOCKET Returns the packed sockaddr address of the other end of the SOCKET connection.
```
use Socket;
my $hersockaddr = getpeername($sock);
my ($port, $iaddr) = sockaddr_in($hersockaddr);
my $herhostname = gethostbyaddr($iaddr, AF_INET);
my $herstraddr = inet_ntoa($iaddr);
```
getpgrp PID Returns the current process group for the specified PID. Use a PID of `0` to get the current process group for the current process. Will raise an exception if used on a machine that doesn't implement [getpgrp(2)](http://man.he.net/man2/getpgrp). If PID is omitted, returns the process group of the current process. Note that the POSIX version of [`getpgrp`](#getpgrp-PID) does not accept a PID argument, so only `PID==0` is truly portable.
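For example, a minimal sketch (only meaningful on systems that implement [getpgrp(2)](http://man.he.net/man2/getpgrp)):
```
my $pgrp = getpgrp(0);   # process group of the current process
print "Process group: $pgrp\n";
```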
Portability issues: ["getpgrp" in perlport](perlport#getpgrp).
getppid Returns the process id of the parent process.
Note for Linux users: Between v5.8.1 and v5.16.0 Perl would work around non-POSIX thread semantics on the minority of Linux systems (and Debian GNU/kFreeBSD systems) that used LinuxThreads; this emulation has since been removed. See the documentation for [$$](perlvar#%24%24) for details.
Portability issues: ["getppid" in perlport](perlport#getppid).
getpriority WHICH,WHO Returns the current priority for a process, a process group, or a user. (See [getpriority(2)](http://man.he.net/man2/getpriority).) Will raise a fatal exception if used on a machine that doesn't implement [getpriority(2)](http://man.he.net/man2/getpriority).
`WHICH` can be any of `PRIO_PROCESS`, `PRIO_PGRP` or `PRIO_USER` imported from ["RESOURCE CONSTANTS" in POSIX](posix#RESOURCE-CONSTANTS).
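For example, a minimal query of the current process's own priority might look like this:

```
use POSIX qw(PRIO_PROCESS);
my $prio = getpriority(PRIO_PROCESS, $$);   # priority ("niceness") of this process
print "current priority: $prio\n";
```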
Portability issues: ["getpriority" in perlport](perlport#getpriority).
getpwnam NAME
getgrnam NAME
gethostbyname NAME
getnetbyname NAME
getprotobyname NAME
getpwuid UID
getgrgid GID
getservbyname NAME,PROTO
gethostbyaddr ADDR,ADDRTYPE
getnetbyaddr ADDR,ADDRTYPE
getprotobynumber NUMBER
getservbyport PORT,PROTO getpwent getgrent gethostent getnetent getprotoent getservent setpwent setgrent
sethostent STAYOPEN
setnetent STAYOPEN
setprotoent STAYOPEN
setservent STAYOPEN endpwent endgrent endhostent endnetent endprotoent endservent These routines are the same as their counterparts in the system C library. In list context, the return values from the various get routines are as follows:
```
# 0 1 2 3 4
my ( $name, $passwd, $gid, $members ) = getgr*
my ( $name, $aliases, $addrtype, $net ) = getnet*
my ( $name, $aliases, $port, $proto ) = getserv*
my ( $name, $aliases, $proto ) = getproto*
my ( $name, $aliases, $addrtype, $length, @addrs ) = gethost*
my ( $name, $passwd, $uid, $gid, $quota,
$comment, $gcos, $dir, $shell, $expire ) = getpw*
# 5 6 7 8 9
```
(If the entry doesn't exist, the return value is a single meaningless true value.)
The exact meaning of the $gcos field varies but it usually contains the real name of the user (as opposed to the login name) and other information pertaining to the user. Beware, however, that on many systems users are able to change this information, so it cannot be trusted; the $gcos field is therefore tainted (see <perlsec>). The $passwd and $shell fields, the user's encrypted password and login shell, are also tainted, for the same reason.
In scalar context, you get the name, unless the function was a lookup by name, in which case you get the other thing, whatever it is. (If the entry doesn't exist you get the undefined value.) For example:
```
my $uid = getpwnam($name);
my $name = getpwuid($num);
my $name = getpwent();
my $gid = getgrnam($name);
my $name = getgrgid($num);
my $name = getgrent();
# etc.
```
In *getpw\*()* the fields $quota, $comment, and $expire are special in that they are unsupported on many systems. If the $quota is unsupported, it is an empty scalar. If it is supported, it usually encodes the disk quota. If the $comment field is unsupported, it is an empty scalar. If it is supported it usually encodes some administrative comment about the user. In some systems the $quota field may be $change or $age, fields that have to do with password aging. In some systems the $comment field may be $class. The $expire field, if present, encodes the expiration period of the account or the password. For the availability and the exact meaning of these fields in your system, please consult [getpwnam(3)](http://man.he.net/man3/getpwnam) and your system's *pwd.h* file. You can also find out from within Perl what your $quota and $comment fields mean and whether you have the $expire field by using the [`Config`](config) module and the values `d_pwquota`, `d_pwage`, `d_pwchange`, `d_pwcomment`, and `d_pwexpire`. Shadow password files are supported only if your vendor has implemented them in the intuitive fashion that calling the regular C library routines gets the shadow versions if you're running under privilege or if there exists the [shadow(3)](http://man.he.net/man3/shadow) functions as found in System V (this includes Solaris and Linux). Those systems that implement a proprietary shadow password facility are unlikely to be supported.
The $members value returned by *getgr\*()* is a space-separated list of the login names of the members of the group.
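For instance, to turn that list into individual login names (the group name here is just an example, guarded in case it does not exist):

```
if (my ($gname, undef, $gid, $members) = getgrnam("staff")) {
    my @logins = split ' ', $members;   # individual member login names
    print "$gname ($gid): @logins\n";
}
```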
For the *gethost\*()* functions, if the `h_errno` variable is supported in C, it will be returned to you via [`$?`](perlvar#%24%3F) if the function call fails. The `@addrs` value returned by a successful call is a list of raw addresses returned by the corresponding library call. In the Internet domain, each address is four bytes long; you can unpack it by saying something like:
```
my ($w,$x,$y,$z) = unpack('W4',$addr[0]);
```
The Socket library makes this slightly easier:
```
use Socket;
my $iaddr = inet_aton("127.1"); # or whatever address
my $name = gethostbyaddr($iaddr, AF_INET);
# or going the other way
my $straddr = inet_ntoa($iaddr);
```
Going the other way, to resolve a hostname to an IP address you can write this:
```
use Socket;
my $packed_ip = gethostbyname("www.perl.org");
my $ip_address;
if (defined $packed_ip) {
$ip_address = inet_ntoa($packed_ip);
}
```
Make sure [`gethostbyname`](#gethostbyname-NAME) is called in SCALAR context and that its return value is checked for definedness.
The [`getprotobynumber`](#getprotobynumber-NUMBER) function, even though it only takes one argument, has the precedence of a list operator, so beware:
```
getprotobynumber $number eq 'icmp' # WRONG
getprotobynumber($number eq 'icmp') # actually means this
getprotobynumber($number) eq 'icmp' # better this way
```
If you get tired of remembering which element of the return list contains which return value, by-name interfaces are provided in standard modules: [`File::stat`](File::stat), [`Net::hostent`](Net::hostent), [`Net::netent`](Net::netent), [`Net::protoent`](Net::protoent), [`Net::servent`](Net::servent), [`Time::gmtime`](Time::gmtime), [`Time::localtime`](Time::localtime), and [`User::grent`](User::grent). These override the normal built-ins, supplying versions that return objects with the appropriate names for each field. For example:
```
use File::stat;
use User::pwent;
my $is_his = (stat($filename)->uid == pwent($whoever)->uid);
```
Even though it looks as though they're the same method calls (uid), they aren't, because a `File::stat` object is different from a `User::pwent` object.
Many of these functions are not safe in a multi-threaded environment where more than one thread can be using them. In particular, functions like `getpwent()` iterate per-process and not per-thread, so if two threads are simultaneously iterating, neither will get all the records.
Some systems have thread-safe versions of some of the functions, such as `getpwnam_r()` instead of `getpwnam()`. There, Perl automatically and invisibly substitutes the thread-safe version, without notice. This means that code that safely runs on some systems can fail on others that lack the thread-safe versions.
Portability issues: ["getpwnam" in perlport](perlport#getpwnam) to ["endservent" in perlport](perlport#endservent).
getsockname SOCKET Returns the packed sockaddr address of this end of the SOCKET connection, in case you don't know the address because you have several different IPs that the connection might have come in on.
```
use Socket;
my $mysockaddr = getsockname($sock);
my ($port, $myaddr) = sockaddr_in($mysockaddr);
printf "Connect to %s [%s]\n",
scalar gethostbyaddr($myaddr, AF_INET),
inet_ntoa($myaddr);
```
getsockopt SOCKET,LEVEL,OPTNAME Queries the option named OPTNAME associated with SOCKET at a given LEVEL. Options may exist at multiple protocol levels depending on the socket type, but at least the uppermost socket level SOL\_SOCKET (defined in the [`Socket`](socket) module) will exist. To query options at another level the protocol number of the appropriate protocol controlling the option should be supplied. For example, to indicate that an option is to be interpreted by the TCP protocol, LEVEL should be set to the protocol number of TCP, which you can get using [`getprotobyname`](#getprotobyname-NAME).
The function returns a packed string representing the requested socket option, or [`undef`](#undef-EXPR) on error, with the reason for the error placed in [`$!`](perlvar#%24%21). Just what is in the packed string depends on LEVEL and OPTNAME; consult [getsockopt(2)](http://man.he.net/man2/getsockopt) for details. A common case is that the option is an integer, in which case the result is a packed integer, which you can decode using [`unpack`](#unpack-TEMPLATE%2CEXPR) with the `i` (or `I`) format.
Here's an example to test whether Nagle's algorithm is enabled on a socket:
```
use Socket qw(:all);
defined(my $tcp = getprotobyname("tcp"))
or die "Could not determine the protocol number for tcp";
# my $tcp = IPPROTO_TCP; # Alternative
my $packed = getsockopt($socket, $tcp, TCP_NODELAY)
or die "getsockopt TCP_NODELAY: $!";
my $nodelay = unpack("I", $packed);
print "Nagle's algorithm is turned ",
$nodelay ? "off\n" : "on\n";
```
Portability issues: ["getsockopt" in perlport](perlport#getsockopt).
glob EXPR glob In list context, returns a (possibly empty) list of filename expansions on the value of EXPR such as the Unix shell Bash would do. In scalar context, glob iterates through such filename expansions, returning [`undef`](#undef-EXPR) when the list is exhausted. If EXPR is omitted, [`$_`](perlvar#%24_) is used.
```
# List context
my @txt_files = glob("*.txt");
my @perl_files = glob("*.pl *.pm");
# Scalar context
while (my $file = glob("*.mp3")) {
# Do stuff
}
```
Glob also supports an alternate syntax using `<` `>` as delimiters. While this syntax is supported, it is recommended that you use `glob` instead as it is more readable and searchable.
```
my @txt_files = <"*.txt">;
```
If you need case-insensitive file globbing, that can be achieved using the `:nocase` option of the <File::Glob> module.
```
use File::Glob qw(:globally :nocase);
my @txt = glob("readme*"); # README readme.txt Readme.md
```
Note that [`glob`](#glob-EXPR) splits its arguments on whitespace and treats each segment as separate pattern. As such, `glob("*.c *.h")` matches all files with a *.c* or *.h* extension. The expression `glob(".* *")` matches all files in the current working directory. If you want to glob filenames that might contain whitespace, you'll have to use extra quotes around the spacey filename to protect it. For example, to glob filenames that have an `e` followed by a space followed by an `f`, use one of:
```
my @spacies = <"*e f*">;
my @spacies = glob('"*e f*"');
my @spacies = glob(q("*e f*"));
```
If you had to get a variable through, you could do this:
```
my @spacies = glob("'*${var}e f*'");
my @spacies = glob(qq("*${var}e f*"));
```
If non-empty braces are the only wildcard characters used in the [`glob`](#glob-EXPR), no filenames are matched, but potentially many strings are returned. For example, this produces nine strings, one for each pairing of fruits and colors:
```
my @many = glob("{apple,tomato,cherry}={green,yellow,red}");
```
This operator is implemented using the standard `File::Glob` extension. See <File::Glob> for details, including [`bsd_glob`](File::Glob#bsd_glob), which does not treat whitespace as a pattern separator.
If a `glob` expression is used as the condition of a `while` or `for` loop, then it will be implicitly assigned to `$_`. If either a `glob` expression or an explicit assignment of a `glob` expression to a scalar is used as a `while`/`for` condition, then the condition actually tests for definedness of the expression's value, not for its regular truth value.
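For example, this loop relies on both the implicit assignment to `$_` and the implicit `defined` test:

```
while (glob("*.c")) {
    print "found $_\n";   # each expansion lands in $_ automatically
}
```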
Internal implementation details:
This is the internal function implementing the `<*.c>` operator, but you can use it directly. The `<*.c>` operator is discussed in more detail in ["I/O Operators" in perlop](perlop#I%2FO-Operators).
Portability issues: ["glob" in perlport](perlport#glob).
gmtime EXPR gmtime Works just like [`localtime`](#localtime-EXPR), but the returned values are localized for the standard Greenwich time zone.
Note: When called in list context, $isdst, the last value returned by gmtime, is always `0`. There is no Daylight Saving Time in GMT.
Portability issues: ["gmtime" in perlport](perlport#gmtime).
goto LABEL
goto EXPR
goto &NAME The `goto LABEL` form finds the statement labeled with LABEL and resumes execution there. It can't be used to get out of a block or subroutine given to [`sort`](#sort-SUBNAME-LIST). It can be used to go almost anywhere else within the dynamic scope, including out of subroutines, but it's usually better to use some other construct such as [`last`](#last-LABEL) or [`die`](#die-LIST). The author of Perl has never felt the need to use this form of [`goto`](#goto-LABEL) (in Perl, that is; C is another matter). (The difference is that C does not offer named loops combined with loop control. Perl does, and this replaces most structured uses of [`goto`](#goto-LABEL) in other languages.)
The `goto EXPR` form expects to evaluate `EXPR` to a code reference or a label name. If it evaluates to a code reference, it will be handled like `goto &NAME`, below. This is especially useful for implementing tail recursion via `goto __SUB__`.
If the expression evaluates to a label name, its scope will be resolved dynamically. This allows for computed [`goto`](#goto-LABEL)s per FORTRAN, but isn't necessarily recommended if you're optimizing for maintainability:
```
goto ("FOO", "BAR", "GLARCH")[$i];
```
As shown in this example, `goto EXPR` is exempt from the "looks like a function" rule. A pair of parentheses following it does not (necessarily) delimit its argument. `goto("NE")."XT"` is equivalent to `goto NEXT`. Also, unlike most named operators, this has the same precedence as assignment.
Use of `goto LABEL` or `goto EXPR` to jump into a construct is deprecated and will issue a warning. Even then, it may not be used to go into any construct that requires initialization, such as a subroutine, a `foreach` loop, or a `given` block. In general, it may not be used to jump into the parameter of a binary or list operator, but it may be used to jump into the *first* parameter of a binary operator. (The `=` assignment operator's "first" operand is its right-hand operand.) It also can't be used to go into a construct that is optimized away.
The `goto &NAME` form is quite different from the other forms of [`goto`](#goto-LABEL). In fact, it isn't a goto in the normal sense at all, and doesn't have the stigma associated with other gotos. Instead, it exits the current subroutine (losing any changes set by [`local`](#local-EXPR)) and immediately calls in its place the named subroutine using the current value of [`@_`](perlvar#%40_). This is used by `AUTOLOAD` subroutines that wish to load another subroutine and then pretend that the other subroutine had been called in the first place (except that any modifications to [`@_`](perlvar#%40_) in the current subroutine are propagated to the other subroutine.) After the [`goto`](#goto-LABEL), not even [`caller`](#caller-EXPR) will be able to tell that this routine was called first.
NAME needn't be the name of a subroutine; it can be a scalar variable containing a code reference or a block that evaluates to a code reference.
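A rough sketch of the `AUTOLOAD` pattern described above (the installed stub is invented purely for illustration):

```
our $AUTOLOAD;
sub AUTOLOAD {
    my $name = $AUTOLOAD;                    # fully qualified sub name
    return if $name =~ /::DESTROY$/;         # never autoload DESTROY
    no strict 'refs';
    *$name = sub { print "handled $name(@_)\n" };   # install the "real" sub
    goto &$name;   # re-dispatch with the current @_; caller() won't see AUTOLOAD
}
```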
grep BLOCK LIST
grep EXPR,LIST This is similar in spirit to, but not the same as, [grep(1)](http://man.he.net/man1/grep) and its relatives. In particular, it is not limited to using regular expressions.
Evaluates the BLOCK or EXPR for each element of LIST (locally setting [`$_`](perlvar#%24_) to each element) and returns the list value consisting of those elements for which the expression evaluated to true. In scalar context, returns the number of times the expression was true.
```
my @foo = grep(!/^#/, @bar); # weed out comments
```
or equivalently,
```
my @foo = grep {!/^#/} @bar; # weed out comments
```
Note that [`$_`](perlvar#%24_) is an alias to the list value, so it can be used to modify the elements of the LIST. While this is useful and supported, it can cause bizarre results if the elements of LIST are not variables. Similarly, grep returns aliases into the original list, much as a for loop's index variable aliases the list elements. That is, modifying an element of a list returned by grep (for example, in a `foreach`, [`map`](#map-BLOCK-LIST) or another [`grep`](#grep-BLOCK-LIST)) actually modifies the element in the original list. This is usually something to be avoided when writing clear code.
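For example, the aliasing means this modifies `@words` in place (a `foreach` loop would usually be clearer):

```
my @words = ("foo", "bar");
grep { $_ = uc $_ } @words;   # $_ aliases each element of @words
# @words is now ("FOO", "BAR")
```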
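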
See also [`map`](#map-BLOCK-LIST) for a list composed of the results of the BLOCK or EXPR.
hex EXPR hex Interprets EXPR as a hex string and returns the corresponding numeric value. If EXPR is omitted, uses [`$_`](perlvar#%24_).
```
print hex '0xAf'; # prints '175'
print hex 'aF'; # same
$valid_input =~ /\A(?:0?[xX])?(?:_?[0-9a-fA-F])*\z/
```
A hex string consists of hex digits and an optional `0x` or `x` prefix. Each hex digit may be preceded by a single underscore, which will be ignored. Any other character triggers a warning and causes the rest of the string to be ignored (even leading whitespace, unlike [`oct`](#oct-EXPR)). Only integers can be represented, and integer overflow triggers a warning.
To convert strings that might start with any of `0`, `0x`, or `0b`, see [`oct`](#oct-EXPR). To present something as hex, look into [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST), [`sprintf`](#sprintf-FORMAT%2C-LIST), and [`unpack`](#unpack-TEMPLATE%2CEXPR).
import LIST There is no builtin [`import`](#import-LIST) function. It is just an ordinary method (subroutine) defined (or inherited) by modules that wish to export names to another module. The [`use`](#use-Module-VERSION-LIST) function calls the [`import`](#import-LIST) method for the package used. See also [`use`](#use-Module-VERSION-LIST), <perlmod>, and [Exporter](exporter).
index STR,SUBSTR,POSITION
index STR,SUBSTR The index function searches for one string within another, but without the wildcard-like behavior of a full regular-expression pattern match. It returns the position of the first occurrence of SUBSTR in STR at or after POSITION. If POSITION is omitted, starts searching from the beginning of the string. POSITION before the beginning of the string or after its end is treated as if it were the beginning or the end, respectively. POSITION and the return value are based at zero. If the substring is not found, [`index`](#index-STR%2CSUBSTR%2CPOSITION) returns -1.
Find characters or strings:
```
index("Perl is great", "P"); # Returns 0
index("Perl is great", "g"); # Returns 8
index("Perl is great", "great"); # Also returns 8
```
Attempting to find something not there:
```
index("Perl is great", "Z"); # Returns -1 (not found)
```
Using an offset to find the *second* occurrence:
```
index("Perl is great", "e", 5); # Returns 10
```
int EXPR int Returns the integer portion of EXPR. If EXPR is omitted, uses [`$_`](perlvar#%24_). You should not use this function for rounding: one because it truncates towards `0`, and two because machine representations of floating-point numbers can sometimes produce counterintuitive results. For example, `int(-6.725/0.025)` produces -268 rather than the correct -269; that's because it's really more like -268.99999999999994315658 instead. Usually, the [`sprintf`](#sprintf-FORMAT%2C-LIST), [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST), or the [`POSIX::floor`](posix#floor) and [`POSIX::ceil`](posix#ceil) functions will serve you better than will [`int`](#int-EXPR).
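To see the difference in practice:

```
print int(-6.725/0.025), "\n";   # -268, truncation toward zero
printf "%.0f\n", -6.725/0.025;   # -269, printf rounds instead
```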
ioctl FILEHANDLE,FUNCTION,SCALAR Implements the [ioctl(2)](http://man.he.net/man2/ioctl) function. You'll probably first have to say
```
require "sys/ioctl.ph"; # probably in
# $Config{archlib}/sys/ioctl.ph
```
to get the correct function definitions. If *sys/ioctl.ph* doesn't exist or doesn't have the correct definitions you'll have to roll your own, based on your C header files such as *<sys/ioctl.h>*. (There is a Perl script called **h2ph** that comes with the Perl kit that may help you in this, but it's nontrivial.) SCALAR will be read and/or written depending on the FUNCTION; a C pointer to the string value of SCALAR will be passed as the third argument of the actual [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR) call. (If SCALAR has no string value but does have a numeric value, that value will be passed rather than a pointer to the string value. To guarantee this to be true, add a `0` to the scalar before using it.) The [`pack`](#pack-TEMPLATE%2CLIST) and [`unpack`](#unpack-TEMPLATE%2CEXPR) functions may be needed to manipulate the values of structures used by [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR).
The return value of [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR) (and [`fcntl`](#fcntl-FILEHANDLE%2CFUNCTION%2CSCALAR)) is as follows:
```
if OS returns: then Perl returns:
-1 undefined value
0 string "0 but true"
anything else that number
```
Thus Perl returns true on success and false on failure, yet you can still easily determine the actual value returned by the operating system:
```
my $retval = ioctl(...) || -1;
printf "System returned %d\n", $retval;
```
The special string `"0 but true"` is exempt from [`Argument "..." isn't numeric`](perldiag#Argument-%22%25s%22-isn%27t-numeric%25s) <warnings> on improper numeric conversions.
Portability issues: ["ioctl" in perlport](perlport#ioctl).
join EXPR,LIST Joins the separate strings of LIST into a single string with fields separated by the value of EXPR, and returns that new string. Example:
```
my $rec = join(':', $login,$passwd,$uid,$gid,$gcos,$home,$shell);
```
Beware that unlike [`split`](#split-%2FPATTERN%2F%2CEXPR%2CLIMIT), [`join`](#join-EXPR%2CLIST) doesn't take a pattern as its first argument. Compare [`split`](#split-%2FPATTERN%2F%2CEXPR%2CLIMIT).
keys HASH
keys ARRAY Called in list context, returns a list consisting of all the keys of the named hash, or in Perl 5.12 or later only, the indices of an array. Perl releases prior to 5.12 will produce a syntax error if you try to use an array argument. In scalar context, returns the number of keys or indices.
Hash entries are returned in an apparently random order. The actual random order is specific to a given hash; the exact same series of operations on two hashes may result in a different order for each hash. Any insertion into the hash may change the order, as will any deletion, with the exception that the most recent key returned by [`each`](#each-HASH) or [`keys`](#keys-HASH) may be deleted without changing the order. So long as a given hash is unmodified you may rely on [`keys`](#keys-HASH), [`values`](#values-HASH) and [`each`](#each-HASH) to repeatedly return the same order as each other. See ["Algorithmic Complexity Attacks" in perlsec](perlsec#Algorithmic-Complexity-Attacks) for details on why hash order is randomized. Aside from the guarantees provided here the exact details of Perl's hash algorithm and the hash traversal order are subject to change in any release of Perl. Tied hashes may behave differently to Perl's hashes with respect to changes in order on insertion and deletion of items.
As a side effect, calling [`keys`](#keys-HASH) resets the internal iterator of the HASH or ARRAY (see [`each`](#each-HASH)) before yielding the keys. In particular, calling [`keys`](#keys-HASH) in void context resets the iterator with no other overhead.
Here is yet another way to print your environment:
```
my @keys = keys %ENV;
my @values = values %ENV;
while (@keys) {
print pop(@keys), '=', pop(@values), "\n";
}
```
or how about sorted by key:
```
foreach my $key (sort(keys %ENV)) {
print $key, '=', $ENV{$key}, "\n";
}
```
The returned values are copies of the original keys in the hash, so modifying them will not affect the original hash. Compare [`values`](#values-HASH).
To sort a hash by value, you'll need to use a [`sort`](#sort-SUBNAME-LIST) function. Here's a descending numeric sort of a hash by its values:
```
foreach my $key (sort { $hash{$b} <=> $hash{$a} } keys %hash) {
printf "%4d %s\n", $hash{$key}, $key;
}
```
Used as an lvalue, [`keys`](#keys-HASH) allows you to increase the number of hash buckets allocated for the given hash. This can gain you a measure of efficiency if you know the hash is going to get big. (This is similar to pre-extending an array by assigning a larger number to $#array.) If you say
```
keys %hash = 200;
```
then `%hash` will have at least 200 buckets allocated for it--256 of them, in fact, since it rounds up to the next power of two. These buckets will be retained even if you do `%hash = ()`; use `undef %hash` if you want to free the storage while `%hash` is still in scope. You can't shrink the number of buckets allocated for the hash using [`keys`](#keys-HASH) in this way (but you needn't worry about doing this by accident, as trying has no effect). `keys @array` in an lvalue context is a syntax error.
Starting with Perl 5.14, an experimental feature allowed [`keys`](#keys-HASH) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
To avoid confusing would-be users of your code who are running earlier versions of Perl with mysterious syntax errors, put this sort of thing at the top of your file to signal that your code will work *only* on Perls of a recent vintage:
```
use v5.12; # so keys/values/each work on arrays
```
See also [`each`](#each-HASH), [`values`](#values-HASH), and [`sort`](#sort-SUBNAME-LIST).
kill SIGNAL, LIST
kill SIGNAL Sends a signal to a list of processes. Returns the number of arguments that were successfully used to signal (which is not necessarily the same as the number of processes actually killed, e.g. where a process group is killed).
```
my $cnt = kill 'HUP', $child1, $child2;
kill 'KILL', @goners;
```
SIGNAL may be either a signal name (a string) or a signal number. A signal name may start with a `SIG` prefix, thus `FOO` and `SIGFOO` refer to the same signal. The string form of SIGNAL is recommended for portability because the same signal may have different numbers in different operating systems.
A list of signal names supported by the current platform can be found in `$Config{sig_name}`, which is provided by the [`Config`](config) module. See [Config](config) for more details.
A negative signal name is the same as a negative signal number, killing process groups instead of processes. For example, `kill '-KILL', $pgrp` and `kill -9, $pgrp` will send `SIGKILL` to the entire process group specified. That means you usually want to use positive not negative signals.
If SIGNAL is either the number 0 or the string `ZERO` (or `SIGZERO`), no signal is sent to the process, but [`kill`](#kill-SIGNAL%2C-LIST) checks whether it's *possible* to send a signal to it (that means, to be brief, that the process is owned by the same user, or we are the super-user). This is useful to check that a child process is still alive (even if only as a zombie) and hasn't changed its UID. See <perlport> for notes on the portability of this construct.
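For example, assuming `$child_pid` holds the pid of a child you started earlier:

```
if (kill 0, $child_pid) {
    print "process $child_pid is still signalable\n";
}
```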
The behavior of kill when a *PROCESS* number is zero or negative depends on the operating system. For example, on POSIX-conforming systems, zero will signal the current process group, -1 will signal all processes, and any other negative PROCESS number will act as a negative signal number and kill the entire process group specified.
If both the SIGNAL and the PROCESS are negative, the results are undefined. A warning may be produced in a future version.
See ["Signals" in perlipc](perlipc#Signals) for more details.
On some platforms such as Windows where the [fork(2)](http://man.he.net/man2/fork) system call is not available, Perl can be built to emulate [`fork`](#fork) at the interpreter level. This emulation has limitations related to kill that have to be considered, for code running on Windows and in code intended to be portable.
See <perlfork> for more details.
If there is no *LIST* of processes, no signal is sent, and the return value is 0. This form is sometimes used, however, because it causes tainting checks to be run, if your perl supports taint checks. But see ["Laundering and Detecting Tainted Data" in perlsec](perlsec#Laundering-and-Detecting-Tainted-Data).
Portability issues: ["kill" in perlport](perlport#kill).
last LABEL
last EXPR last The [`last`](#last-LABEL) command is like the `break` statement in C (as used in loops); it immediately exits the loop in question. If the LABEL is omitted, the command refers to the innermost enclosing loop. The `last EXPR` form, available starting in Perl 5.18.0, allows a label name to be computed at run time, and is otherwise identical to `last LABEL`. The [`continue`](#continue-BLOCK) block, if any, is not executed:
```
LINE: while (<STDIN>) {
last LINE if /^$/; # exit when done with header
#...
}
```
[`last`](#last-LABEL) cannot return a value from a block that typically returns a value, such as `eval {}`, `sub {}`, or `do {}`. It will perform its flow control behavior, which precludes any return value. It should not be used to exit a [`grep`](#grep-BLOCK-LIST) or [`map`](#map-BLOCK-LIST) operation.
Note that a block by itself is semantically identical to a loop that executes once. Thus [`last`](#last-LABEL) can be used to effect an early exit out of such a block.
See also [`continue`](#continue-BLOCK) for an illustration of how [`last`](#last-LABEL), [`next`](#next-LABEL), and [`redo`](#redo-LABEL) work.
Unlike most named operators, this has the same precedence as assignment. It is also exempt from the looks-like-a-function rule, so `last ("foo")."bar"` will cause "bar" to be part of the argument to [`last`](#last-LABEL).
lc EXPR lc Returns a lowercased version of EXPR. If EXPR is omitted, uses [`$_`](perlvar#%24_).
```
my $str = lc("Perl is GREAT"); # "perl is great"
```
What gets returned depends on several factors:
If `use bytes` is in effect: The results follow ASCII rules. Only the characters `A-Z` change, to `a-z` respectively.
Otherwise, if `use locale` for `LC_CTYPE` is in effect: Respects current `LC_CTYPE` locale for code points < 256; and uses Unicode rules for the remaining code points (this last can only happen if the UTF8 flag is also set). See <perllocale>.
Starting in v5.20, Perl uses full Unicode rules if the locale is UTF-8. Otherwise, there is a deficiency in this scheme, which is that case changes that cross the 255/256 boundary are not well-defined. For example, the lower case of LATIN CAPITAL LETTER SHARP S (U+1E9E) in Unicode rules is U+00DF (on ASCII platforms). But under `use locale` (prior to v5.20 or not a UTF-8 locale), the lower case of U+1E9E is itself, because 0xDF may not be LATIN SMALL LETTER SHARP S in the current locale, and Perl has no way of knowing if that character even exists in the locale, much less what code point it is. Perl returns a result that is above 255 (almost always the input character unchanged), for all instances (and there aren't many) where the 255/256 boundary would otherwise be crossed; and starting in v5.22, it raises a [locale](perldiag#Can%27t-do-%25s%28%22%25s%22%29-on-non-UTF-8-locale%3B-resolved-to-%22%25s%22.) warning.
Otherwise, If EXPR has the UTF8 flag set: Unicode rules are used for the case change.
Otherwise, if `use feature 'unicode_strings'` or `use locale ':not_characters'` is in effect: Unicode rules are used for the case change.
Otherwise: ASCII rules are used for the case change. The lowercase of any character outside the ASCII range is the character itself.
**Note:** This is the internal function implementing the [`\L`](perlop#Quote-and-Quote-like-Operators) escape in double-quoted strings.
```
my $str = "Perl is \LGREAT\E"; # "Perl is great"
```
lcfirst EXPR lcfirst Returns the value of EXPR with the first character lowercased. This is the internal function implementing the `\l` escape in double-quoted strings.
If EXPR is omitted, uses [`$_`](perlvar#%24_).
This function behaves the same way under various pragmas, such as in a locale, as [`lc`](#lc-EXPR) does.
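For example:

```
print lcfirst("HELLO World");   # "hELLO World"
```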
length EXPR length Returns the length in *characters* of the value of EXPR. If EXPR is omitted, returns the length of [`$_`](perlvar#%24_). If EXPR is undefined, returns [`undef`](#undef-EXPR).
This function cannot be used on an entire array or hash to find out how many elements these have. For that, use `scalar @array` and `scalar keys %hash`, respectively.
Like all Perl character operations, [`length`](#length-EXPR) normally deals in logical characters, not physical bytes. For how many bytes a string encoded as UTF-8 would take up, use `length(Encode::encode('UTF-8', EXPR))` (you'll have to `use Encode` first). See [Encode](encode) and <perlunicode>.
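For example, counting characters versus UTF-8 bytes:

```
use utf8;      # the literal below contains a non-ASCII character
use Encode;
my $str = "café";
print length($str), "\n";                           # 4 characters
print length(Encode::encode('UTF-8', $str)), "\n";  # 5 bytes when encoded as UTF-8
```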
\_\_LINE\_\_ A special token that compiles to the current line number. It can be altered by the mechanism described at ["Plain Old Comments (Not!)" in perlsyn](perlsyn#Plain-Old-Comments-%28Not%21%29).
link OLDFILE,NEWFILE Creates a new filename linked to the old filename. Returns true for success, false otherwise.
Portability issues: ["link" in perlport](perlport#link).
listen SOCKET,QUEUESIZE Does the same thing that the [listen(2)](http://man.he.net/man2/listen) system call does. Returns true if it succeeded, false otherwise. See the example in ["Sockets: Client/Server Communication" in perlipc](perlipc#Sockets%3A-Client%2FServer-Communication).
local EXPR You really probably want to be using [`my`](#my-VARLIST) instead, because [`local`](#local-EXPR) isn't what most people think of as "local". See ["Private Variables via my()" in perlsub](perlsub#Private-Variables-via-my%28%29) for details.
A local modifies the listed variables to be local to the enclosing block, file, or eval. If more than one value is listed, the list must be placed in parentheses. See ["Temporary Values via local()" in perlsub](perlsub#Temporary-Values-via-local%28%29) for details, including issues with tied arrays and hashes.
The `delete local EXPR` construct can also be used to localize the deletion of array/hash elements to the current block. See ["Localized deletion of elements of composite types" in perlsub](perlsub#Localized-deletion-of-elements-of-composite-types).
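A minimal sketch of the dynamic scoping that [`local`](#local-EXPR) provides:

```
our $level = 0;
sub outer {
    local $level = 1;    # restored to 0 automatically when outer() returns
    inner();
}
sub inner {
    print "level is $level\n";   # prints 1 when reached via outer()
}
outer();
print "level is $level\n";       # back to 0
```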
localtime EXPR localtime Converts a time as returned by the time function to a 9-element list with the time analyzed for the local time zone. Typically used as follows:
```
# 0 1 2 3 4 5 6 7 8
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) =
localtime(time);
```
All list elements are numeric and come straight out of the C `struct tm`. `$sec`, `$min`, and `$hour` are the seconds, minutes, and hours of the specified time.
`$mday` is the day of the month and `$mon` the month in the range `0..11`, with 0 indicating January and 11 indicating December. This makes it easy to get a month name from a list:
```
my @abbr = qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec);
print "$abbr[$mon] $mday";
# $mon=9, $mday=18 gives "Oct 18"
```
`$year` contains the number of years since 1900. To get a 4-digit year write:
```
$year += 1900;
```
To get the last two digits of the year (e.g., "01" in 2001) do:
```
$year = sprintf("%02d", $year % 100);
```
`$wday` is the day of the week, with 0 indicating Sunday and 3 indicating Wednesday. `$yday` is the day of the year, in the range `0..364` (or `0..365` in leap years.)
`$isdst` is true if the specified time occurs when Daylight Saving Time is in effect, false otherwise.
If EXPR is omitted, [`localtime`](#localtime-EXPR) uses the current time (as returned by [`time`](#time)).
In scalar context, [`localtime`](#localtime-EXPR) returns the [ctime(3)](http://man.he.net/man3/ctime) value:
```
my $now_string = localtime; # e.g., "Thu Oct 13 04:54:34 1994"
```
This scalar value is always in English, and is **not** locale-dependent. To get similar but locale-dependent date strings, try for example:
```
use POSIX qw(strftime);
my $now_string = strftime "%a %b %e %H:%M:%S %Y", localtime;
# or for GMT formatted appropriately for your locale:
my $now_string = strftime "%a %b %e %H:%M:%S %Y", gmtime;
```
`$now_string` will be formatted according to the current LC\_TIME locale the program or thread is running in. See <perllocale> for how to set up and change that locale. Note that `%a` and `%b`, the short forms of the day of the week and the month of the year, may not necessarily be three characters wide.
The <Time::gmtime> and <Time::localtime> modules provide a convenient, by-name access mechanism to the [`gmtime`](#gmtime-EXPR) and [`localtime`](#localtime-EXPR) functions, respectively.
For a comprehensive date and time representation look at the [DateTime](datetime) module on CPAN.
For GMT instead of local time use the [`gmtime`](#gmtime-EXPR) builtin.
See also the [`Time::Local`](Time::Local) module (for converting seconds, minutes, hours, and such back to the integer value returned by [`time`](#time)), and the [POSIX](posix) module's [`mktime`](posix#mktime) function.
Portability issues: ["localtime" in perlport](perlport#localtime).
lock THING This function places an advisory lock on a shared variable or referenced object contained in *THING* until the lock goes out of scope.
The value returned is the scalar itself, if the argument is a scalar, or a reference, if the argument is a hash, array or subroutine.
[`lock`](#lock-THING) is a "weak keyword"; this means that if you've defined a function by this name (before any calls to it), that function will be called instead. If you are not under `use threads::shared` this does nothing. See <threads::shared>.
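A small sketch, assuming a threads-enabled perl with <threads::shared> loaded:

```
use threads;
use threads::shared;

my $counter :shared = 0;

sub bump {
    lock($counter);   # advisory lock, released when the enclosing block exits
    $counter++;
}

my @thr = map { threads->create(\&bump) } 1 .. 3;
$_->join for @thr;
print "$counter\n";   # 3
```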
log EXPR log Returns the natural logarithm (base *e*) of EXPR. If EXPR is omitted, returns the log of [`$_`](perlvar#%24_). To get the log of another base, use basic algebra: The base-N log of a number is equal to the natural log of that number divided by the natural log of N. For example:
```
sub log10 {
my $n = shift;
return log($n)/log(10);
}
```
See also [`exp`](#exp-EXPR) for the inverse operation.
lstat FILEHANDLE
lstat EXPR
lstat DIRHANDLE lstat Does the same thing as the [`stat`](#stat-FILEHANDLE) function (including setting the special `_` filehandle) but stats a symbolic link instead of the file the symbolic link points to. If symbolic links are unimplemented on your system, a normal [`stat`](#stat-FILEHANDLE) is done. For much more detailed information, please see the documentation for [`stat`](#stat-FILEHANDLE).
If EXPR is omitted, stats [`$_`](perlvar#%24_).
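For example (the path here is hypothetical):

```
my @info = lstat("/tmp/somelink")
    or warn "lstat failed: $!";
print "it is a symlink\n" if -l _;   # the special _ filehandle was set by lstat
```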
Portability issues: ["lstat" in perlport](perlport#lstat).
m// The match operator. See ["Regexp Quote-Like Operators" in perlop](perlop#Regexp-Quote-Like-Operators).
map BLOCK LIST
map EXPR,LIST Evaluates the BLOCK or EXPR for each element of LIST (locally setting [`$_`](perlvar#%24_) to each element) and composes a list of the results of each such evaluation. Each element of LIST may produce zero, one, or more elements in the generated list, so the number of elements in the generated list may differ from that in LIST. In scalar context, returns the total number of elements so generated. In list context, returns the generated list.
```
my @chars = map(chr, @numbers);
```
translates a list of numbers to the corresponding characters.
```
my @squares = map { $_ * $_ } @numbers;
```
translates a list of numbers to their squared values.
```
my @squares = map { $_ > 5 ? ($_ * $_) : () } @numbers;
```
shows that the number of returned elements can differ from the number of input elements. To omit an element, return an empty list (). This could also be achieved by writing
```
my @squares = map { $_ * $_ } grep { $_ > 5 } @numbers;
```
which makes the intention more clear.
Map always returns a list, which can be assigned to a hash such that the elements become key/value pairs. See <perldata> for more details.
```
my %hash = map { get_a_key_for($_) => $_ } @array;
```
is just a funny way to write
```
my %hash;
foreach (@array) {
$hash{get_a_key_for($_)} = $_;
}
```
Note that [`$_`](perlvar#%24_) is an alias to the list value, so it can be used to modify the elements of the LIST. While this is useful and supported, it can cause bizarre results if the elements of LIST are not variables. Using a regular `foreach` loop for this purpose would be clearer in most cases. See also [`grep`](#grep-BLOCK-LIST) for a list composed of those items of the original list for which the BLOCK or EXPR evaluates to true.
`{` starts both hash references and blocks, so `map { ...` could be either the start of map BLOCK LIST or map EXPR, LIST. Because Perl doesn't look ahead for the closing `}` it has to take a guess at which it's dealing with based on what it finds just after the `{`. Usually it gets it right, but if it doesn't it won't realize something is wrong until it gets to the `}` and encounters the missing (or unexpected) comma. The syntax error will be reported close to the `}`, but you'll need to change something near the `{` such as using a unary `+` or semicolon to give Perl some help:
```
my %hash = map { "\L$_" => 1 } @array # perl guesses EXPR. wrong
my %hash = map { +"\L$_" => 1 } @array # perl guesses BLOCK. right
my %hash = map {; "\L$_" => 1 } @array # this also works
my %hash = map { ("\L$_" => 1) } @array # as does this
my %hash = map { lc($_) => 1 } @array # and this.
my %hash = map +( lc($_) => 1 ), @array # this is EXPR and works!
my %hash = map ( lc($_), 1 ), @array # evaluates to (1, @array)
```
or to force an anon hash constructor use `+{`:
```
my @hashes = map +{ lc($_) => 1 }, @array # EXPR, so needs
# comma at end
```
to get a list of anonymous hashes each with only one entry apiece.
mkdir FILENAME,MODE
mkdir FILENAME mkdir Creates the directory specified by FILENAME, with permissions specified by MODE (as modified by [`umask`](#umask-EXPR)). If it succeeds it returns true; otherwise it returns false and sets [`$!`](perlvar#%24%21) (errno). MODE defaults to 0777 if omitted, and FILENAME defaults to [`$_`](perlvar#%24_) if omitted.
In general, it is better to create directories with a permissive MODE and let the user modify that with their [`umask`](#umask-EXPR) than it is to supply a restrictive MODE and give the user no way to be more permissive. The exceptions to this rule are when the file or directory should be kept private (mail files, for instance). The documentation for [`umask`](#umask-EXPR) discusses the choice of MODE in more detail.
Note that according to POSIX 1003.1-1996 the FILENAME may have any number of trailing slashes. Some operating systems and filesystems do not get this right, so Perl automatically removes all trailing slashes to keep everyone happy.
To recursively create a directory structure, look at the [`make_path`](File::Path#make_path%28-%24dir1%2C-%24dir2%2C-....-%29) function of the <File::Path> module.
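For example (the directory names are arbitrary):

```
mkdir "archive"                        # MODE defaults to 0777, modified by umask
    or die "Can't create archive: $!";

use File::Path qw(make_path);
make_path("archive/2024/logs");        # creates intermediate directories as needed
```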
msgctl ID,CMD,ARG Calls the System V IPC function [msgctl(2)](http://man.he.net/man2/msgctl). You'll probably have to say
```
use IPC::SysV;
```
first to get the correct constant definitions. If CMD is `IPC_STAT`, then ARG must be a variable that will hold the returned `msqid_ds` structure. Returns like [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR): the undefined value for error, `"0 but true"` for zero, or the actual return value otherwise. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and [`IPC::Semaphore`](IPC::Semaphore).
Portability issues: ["msgctl" in perlport](perlport#msgctl).
msgget KEY,FLAGS Calls the System V IPC function [msgget(2)](http://man.he.net/man2/msgget). Returns the message queue id, or [`undef`](#undef-EXPR) on error. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and [`IPC::Msg`](IPC::Msg).
Portability issues: ["msgget" in perlport](perlport#msgget).
msgrcv ID,VAR,SIZE,TYPE,FLAGS Calls the System V IPC function msgrcv to receive a message from message queue ID into variable VAR with a maximum message size of SIZE. Note that when a message is received, the message type as a native long integer will be the first thing in VAR, followed by the actual message. This packing may be opened with `unpack("l! a*")`. Taints the variable. Returns true if successful, false on error. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and [`IPC::Msg`](IPC::Msg).
Portability issues: ["msgrcv" in perlport](perlport#msgrcv).
msgsnd ID,MSG,FLAGS Calls the System V IPC function msgsnd to send the message MSG to the message queue ID. MSG must begin with the native long integer message type, followed by the message itself. This kind of packing can be achieved with `pack("l! a*", $type, $message)`. Returns true if successful, false on error. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and [`IPC::Msg`](IPC::Msg).
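On platforms that provide SysV message queues, a round trip using the packing described above might look roughly like this:

```
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT);

my $id = msgget(IPC_PRIVATE, IPC_CREAT | 0600)
    // die "msgget failed: $!";

msgsnd($id, pack("l! a*", 1, "hello"), 0)
    or die "msgsnd failed: $!";

my $buf;
msgrcv($id, $buf, 1024, 0, 0)
    or die "msgrcv failed: $!";
my ($type, $text) = unpack("l! a*", $buf);
print "$type: $text\n";                 # 1: hello
```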
Portability issues: ["msgsnd" in perlport](perlport#msgsnd).
my VARLIST
my TYPE VARLIST
my VARLIST : ATTRS
my TYPE VARLIST : ATTRS A [`my`](#my-VARLIST) declares the listed variables to be local (lexically) to the enclosing block, file, or [`eval`](#eval-EXPR). If more than one variable is listed, the list must be placed in parentheses.
Note that with a parenthesised list, [`undef`](#undef-EXPR) can be used as a dummy placeholder, for example to skip assignment of initial values:
```
my ( undef, $min, $hour ) = localtime;
```
Redeclaring a variable in the same scope or statement will "shadow" the previous declaration, creating a new instance and preventing access to the previous one. This is usually undesired and, if warnings are enabled, will result in a warning in the `shadow` category.
The exact semantics and interface of TYPE and ATTRS are still evolving. TYPE may be a bareword, a constant declared with [`use constant`](constant), or [`__PACKAGE__`](#__PACKAGE__). It is currently bound to the use of the <fields> pragma, and attributes are handled using the <attributes> pragma, or starting from Perl 5.8.0 also via the <Attribute::Handlers> module. See ["Private Variables via my()" in perlsub](perlsub#Private-Variables-via-my%28%29) for details.
next LABEL
next EXPR next The [`next`](#next-LABEL) command is like the `continue` statement in C; it starts the next iteration of the loop:
```
LINE: while (<STDIN>) {
next LINE if /^#/; # discard comments
#...
}
```
Note that if there were a [`continue`](#continue-BLOCK) block on the above, it would get executed even on discarded lines. If LABEL is omitted, the command refers to the innermost enclosing loop. The `next EXPR` form, available as of Perl 5.18.0, allows a label name to be computed at run time, being otherwise identical to `next LABEL`.
[`next`](#next-LABEL) cannot return a value from a block that typically returns a value, such as `eval {}`, `sub {}`, or `do {}`. It will perform its flow control behavior, which precludes any return value. It should not be used to exit a [`grep`](#grep-BLOCK-LIST) or [`map`](#map-BLOCK-LIST) operation.
Note that a block by itself is semantically identical to a loop that executes once. Thus [`next`](#next-LABEL) will exit such a block early.
See also [`continue`](#continue-BLOCK) for an illustration of how [`last`](#last-LABEL), [`next`](#next-LABEL), and [`redo`](#redo-LABEL) work.
Unlike most named operators, this has the same precedence as assignment. It is also exempt from the looks-like-a-function rule, so `next ("foo")."bar"` will cause "bar" to be part of the argument to [`next`](#next-LABEL).
no MODULE VERSION LIST
no MODULE VERSION
no MODULE LIST
no MODULE
no VERSION See the [`use`](#use-Module-VERSION-LIST) function, of which [`no`](#no-MODULE-VERSION-LIST) is the opposite.
oct EXPR oct Interprets EXPR as an octal string and returns the corresponding value. An octal string consists of octal digits and, as of Perl 5.33.5, an optional `0o` or `o` prefix. Each octal digit may be preceded by a single underscore, which will be ignored. (If EXPR happens to start off with `0x` or `x`, interprets it as a hex string. If EXPR starts off with `0b` or `b`, it is interpreted as a binary string. Leading whitespace is ignored in all three cases.) The following will handle decimal, binary, octal, and hex in standard Perl notation:
```
$val = oct($val) if $val =~ /^0/;
```
If EXPR is omitted, uses [`$_`](perlvar#%24_). To go the other way (produce a number in octal), use [`sprintf`](#sprintf-FORMAT%2C-LIST) or [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST):
```
my $dec_perms = (stat("filename"))[2] & 07777;
my $oct_perm_str = sprintf "%o", $dec_perms;
```
The [`oct`](#oct-EXPR) function is commonly used when a string such as `644` needs to be converted into a file mode, for example. Although Perl automatically converts strings into numbers as needed, this automatic conversion assumes base 10.
Leading white space is ignored without warning, as too are any trailing non-digits, such as a decimal point ([`oct`](#oct-EXPR) only handles non-negative integers, not negative integers or floating point).
open FILEHANDLE,MODE,EXPR
open FILEHANDLE,MODE,EXPR,LIST
open FILEHANDLE,MODE,REFERENCE
open FILEHANDLE,EXPR
open FILEHANDLE Associates an internal FILEHANDLE with the external file specified by EXPR. That filehandle will subsequently allow you to perform I/O operations on that file, such as reading from it or writing to it.
Instead of a filename, you may specify an external command (plus an optional argument list) or a scalar reference, in order to open filehandles on commands or in-memory scalars, respectively.
A thorough reference to `open` follows. For a gentler introduction to the basics of `open`, see also the <perlopentut> manual page.
Working with files Most often, `open` gets invoked with three arguments: the required FILEHANDLE (usually an empty scalar variable), followed by MODE (usually a literal describing the I/O mode the filehandle will use), and then the filename that the new filehandle will refer to.
Simple examples Reading from a file:
```
open(my $fh, "<", "input.txt")
or die "Can't open < input.txt: $!";
# Process every line in input.txt
while (my $line = <$fh>) {
#
# ... do something interesting with $line here ...
#
}
```
or writing to one:
```
open(my $fh, ">", "output.txt")
or die "Can't open > output.txt: $!";
print $fh "This line gets printed into output.txt.\n";
```
For a summary of common filehandle operations such as these, see ["Files and I/O" in perlintro](perlintro#Files-and-I%2FO).
About filehandles The first argument to `open`, labeled FILEHANDLE in this reference, is usually a scalar variable. (Exceptions exist, described in "Other considerations", below.) If the call to `open` succeeds, then the expression provided as FILEHANDLE will get assigned an open *filehandle*. That filehandle provides an internal reference to the specified external file, conveniently stored in a Perl variable, and ready for I/O operations such as reading and writing.
About modes When calling `open` with three or more arguments, the second argument -- labeled MODE here -- defines the *open mode*. MODE is usually a literal string comprising special characters that define the intended I/O role of the filehandle being created: whether it's read-only, or read-and-write, and so on.
If MODE is `<`, the file is opened for input (read-only). If MODE is `>`, the file is opened for output, with existing files first being truncated ("clobbered") and nonexisting files newly created. If MODE is `>>`, the file is opened for appending, again being created if necessary.
You can put a `+` in front of the `>` or `<` to indicate that you want both read and write access to the file; thus `+<` is almost always preferred for read/write updates--the `+>` mode would clobber the file first. You can't usually use either read-write mode for updating textfiles, since they have variable-length records. See the **-i** switch in [perlrun](perlrun#-i%5Bextension%5D) for a better approach. The file is created with permissions of `0666` modified by the process's [`umask`](#umask-EXPR) value.
These various prefixes correspond to the [fopen(3)](http://man.he.net/man3/fopen) modes of `r`, `r+`, `w`, `w+`, `a`, and `a+`.
More examples of different modes in action:
```
# Open a file for concatenation
open(my $log, ">>", "/usr/spool/news/twitlog")
or warn "Couldn't open log file; discarding input";
# Open a file for reading and writing
open(my $dbase, "+<", "dbase.mine")
or die "Can't open 'dbase.mine' for update: $!";
```
Checking the return value Open returns nonzero on success, the undefined value otherwise. If the `open` involved a pipe, the return value happens to be the pid of the subprocess.
When opening a file, it's seldom a good idea to continue if the request failed, so `open` is frequently used with [`die`](#die-LIST). Even if you want your code to do something other than `die` on a failed open, you should still always check the return value from opening a file.
Specifying I/O layers in MODE You can use the three-argument form of open to specify I/O layers (sometimes referred to as "disciplines") to apply to the new filehandle. These affect how the input and output are processed (see <open> and [PerlIO](perlio) for more details). For example:
```
# loads PerlIO::encoding automatically
open(my $fh, "<:encoding(UTF-8)", $filename)
|| die "Can't open UTF-8 encoded $filename: $!";
```
This opens the UTF8-encoded file containing Unicode characters; see <perluniintro>. Note that if layers are specified in the three-argument form, then default layers stored in [`${^OPEN}`](perlvar#%24%7B%5EOPEN%7D) (usually set by the <open> pragma or the switch `-CioD`) are ignored. Those layers will also be ignored if you specify a colon with no name following it. In that case the default layer for the operating system (:raw on Unix, :crlf on Windows) is used.
On some systems (in general, DOS- and Windows-based systems) [`binmode`](#binmode-FILEHANDLE%2C-LAYER) is necessary when you're not working with a text file. For the sake of portability it is a good idea always to use it when appropriate, and never to use it when it isn't appropriate. Also, people can set their I/O to be by default UTF8-encoded Unicode, not bytes.
Using `undef` for temporary files As a special case the three-argument form with a read/write mode and the third argument being [`undef`](#undef-EXPR):
```
open(my $tmp, "+>", undef) or die ...
```
opens a filehandle to a newly created empty anonymous temporary file. (This happens under any mode, which makes `+>` the only useful and sensible mode to use.) You will need to [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE) to do the reading.
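For example, writing to and then reading back from such a temporary file:

```
open(my $tmp, "+>", undef) or die "Can't open anonymous temp file: $!";
print {$tmp} "scratch data\n";
seek($tmp, 0, 0);              # rewind to the beginning before reading
my $line = <$tmp>;
print $line;                   # "scratch data\n"
```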
Opening a filehandle into an in-memory scalar You can open filehandles directly to Perl scalars instead of a file or other resource external to the program. To do so, provide a reference to that scalar as the third argument to `open`, like so:
```
open(my $memory, ">", \$var)
or die "Can't open memory file: $!";
print $memory "foo!\n"; # output will appear in $var
```
To (re)open `STDOUT` or `STDERR` as an in-memory file, close it first:
```
close STDOUT;
open(STDOUT, ">", \$variable)
or die "Can't open STDOUT: $!";
```
The scalars for in-memory files are treated as octet strings: unless the file is being opened with truncation the scalar may not contain any code points over 0xFF.
Opening in-memory files *can* fail for a variety of reasons. As with any other `open`, check the return value for success.
*Technical note*: This feature works only when Perl is built with PerlIO -- the default, except with older (pre-5.16) Perl installations that were configured to not include it (e.g. via `Configure -Uuseperlio`). You can see whether your Perl was built with PerlIO by running `perl -V:useperlio`. If it says `'define'`, you have PerlIO; otherwise you don't.
See <perliol> for detailed info on PerlIO.
Opening a filehandle into a command If MODE is `|-`, then the filename is interpreted as a command to which output is to be piped, and if MODE is `-|`, the filename is interpreted as a command that pipes output to us. In the two-argument (and one-argument) form, one should replace dash (`-`) with the command. See ["Using open() for IPC" in perlipc](perlipc#Using-open%28%29-for-IPC) for more examples of this. (You are not allowed to [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) to a command that pipes both in *and* out, but see <IPC::Open2>, <IPC::Open3>, and ["Bidirectional Communication with Another Process" in perlipc](perlipc#Bidirectional-Communication-with-Another-Process) for alternatives.)
```
open(my $article_fh, "-|", "caesar <$article") # decrypt
# article
or die "Can't start caesar: $!";
open(my $article_fh, "caesar <$article |") # ditto
or die "Can't start caesar: $!";
open(my $out_fh, "|-", "sort >Tmp$$") # $$ is our process id
or die "Can't start sort: $!";
```
In the form of pipe opens taking three or more arguments, if LIST is specified (extra arguments after the command name) then LIST becomes arguments to the command invoked if the platform supports it. The meaning of [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) with more than three arguments for non-pipe modes is not yet defined, but experimental "layers" may give extra LIST arguments meaning.
If you open a pipe on the command `-` (that is, specify either `|-` or `-|` with the one- or two-argument forms of [`open`](#open-FILEHANDLE%2CMODE%2CEXPR)), an implicit [`fork`](#fork) is done, so [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) `0`. Use `defined($pid)` or `//` to determine whether the open was successful.
For example, use either
```
my $child_pid = open(my $from_kid, "-|")
// die "Can't fork: $!";
```
or
```
my $child_pid = open(my $to_kid, "|-")
// die "Can't fork: $!";
```
followed by
```
if ($child_pid) {
# am the parent:
# either write $to_kid or else read $from_kid
...
waitpid $child_pid, 0;
} else {
# am the child; use STDIN/STDOUT normally
...
exit;
}
```
The filehandle behaves normally for the parent, but I/O to that filehandle is piped from/to the STDOUT/STDIN of the child process. In the child process, the filehandle isn't opened--I/O happens from/to the new STDOUT/STDIN. Typically this is used like the normal piped open when you want to exercise more control over just how the pipe command gets executed, such as when running setuid and you don't want to have to scan shell commands for metacharacters.
The following blocks are more or less equivalent:
```
open(my $fh, "|tr '[a-z]' '[A-Z]'");
open(my $fh, "|-", "tr '[a-z]' '[A-Z]'");
open(my $fh, "|-") || exec 'tr', '[a-z]', '[A-Z]';
open(my $fh, "|-", "tr", '[a-z]', '[A-Z]');
open(my $fh, "cat -n '$file'|");
open(my $fh, "-|", "cat -n '$file'");
open(my $fh, "-|") || exec "cat", "-n", $file;
open(my $fh, "-|", "cat", "-n", $file);
```
The last two examples in each block show the pipe as "list form", which is not yet supported on all platforms. (If your platform has a real [`fork`](#fork), such as Linux and macOS, you can use the list form; it also works on Windows with Perl 5.22 or later.) You would want to use the list form of the pipe so you can pass literal arguments to the command without risk of the shell interpreting any shell metacharacters in them. However, this also bars you from opening pipes to commands that intentionally contain shell metacharacters, such as:
```
open(my $fh, "|cat -n | expand -4 | lpr")
|| die "Can't open pipeline to lpr: $!";
```
See ["Safe Pipe Opens" in perlipc](perlipc#Safe-Pipe-Opens) for more examples of this.
Duping filehandles You may also, in the Bourne shell tradition, specify an EXPR beginning with `>&`, in which case the rest of the string is interpreted as the name of a filehandle (or file descriptor, if numeric) to be duped (as in [dup(2)](http://man.he.net/man2/dup)) and opened. You may use `&` after `>`, `>>`, `<`, `+>`, `+>>`, and `+<`. The mode you specify should match the mode of the original filehandle. (Duping a filehandle does not take into account any existing contents of IO buffers.) If you use the three-argument form, then you can pass either a number, the name of a filehandle, or the normal "reference to a glob".
Here is a script that saves, redirects, and restores `STDOUT` and `STDERR` using various methods:
```
#!/usr/bin/perl
open(my $oldout, ">&STDOUT")
or die "Can't dup STDOUT: $!";
open(OLDERR, ">&", \*STDERR)
or die "Can't dup STDERR: $!";
open(STDOUT, '>', "foo.out")
or die "Can't redirect STDOUT: $!";
open(STDERR, ">&STDOUT")
or die "Can't dup STDOUT: $!";
select STDERR; $| = 1; # make unbuffered
select STDOUT; $| = 1; # make unbuffered
print STDOUT "stdout 1\n"; # this works for
print STDERR "stderr 1\n"; # subprocesses too
open(STDOUT, ">&", $oldout)
or die "Can't dup \$oldout: $!";
open(STDERR, ">&OLDERR")
or die "Can't dup OLDERR: $!";
print STDOUT "stdout 2\n";
print STDERR "stderr 2\n";
```
If you specify `'<&=X'`, where `X` is a file descriptor number or a filehandle, then Perl will do an equivalent of C's [fdopen(3)](http://man.he.net/man3/fdopen) of that file descriptor (and not call [dup(2)](http://man.he.net/man2/dup)); this is more parsimonious of file descriptors. For example:
```
# open for input, reusing the fileno of $fd
open(my $fh, "<&=", $fd)
```
or
```
open(my $fh, "<&=$fd")
```
or
```
# open for append, using the fileno of $oldfh
open(my $fh, ">>&=", $oldfh)
```
Being parsimonious with file descriptors is useful in its own right, but it matters especially when something depends on the file descriptor itself, such as locking with [`flock`](#flock-FILEHANDLE%2COPERATION). If you do just `open(my $A, ">>&", $B)`, the filehandle `$A` will not have the same file descriptor as `$B`, and therefore `flock($A)` will not `flock($B)` nor vice versa. But with `open(my $A, ">>&=", $B)`, the filehandles will share the same underlying system file descriptor.
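As a quick illustration of the difference (a minimal sketch; the log file name is arbitrary and the descriptor numbers will vary), compare the [`fileno`](#fileno-FILEHANDLE) of the two kinds of dup:

```
open(my $B,    ">>",   "app.log") or die "Can't open app.log: $!";
open(my $dup,  ">>&",  $B)        or die "Can't dup: $!";
open(my $same, ">>&=", $B)        or die "Can't fdopen: $!";
print fileno($B), " ", fileno($dup), " ", fileno($same), "\n";
# fileno($same) equals fileno($B); fileno($dup) is a new descriptor
```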
Note that under Perls older than 5.8.0, Perl uses the standard C library's [fdopen(3)](http://man.he.net/man3/fdopen) to implement the `=` functionality. On many Unix systems, [fdopen(3)](http://man.he.net/man3/fdopen) fails when file descriptors exceed a certain value, typically 255. For Perls 5.8.0 and later, PerlIO is (most often) the default.
Legacy usage This section describes ways to call `open` outside of best practices; you may encounter these uses in older code. Perl does not consider their use deprecated, exactly, but neither is it recommended in new code, for the sake of clarity and readability.
Specifying mode and filename as a single argument In the one- and two-argument forms of the call, the mode and filename should be concatenated (in that order), preferably separated by white space. You can--but shouldn't--omit the mode in these forms when that mode is `<`. It is safe to use the two-argument form of [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) if the filename argument is a known literal.
```
open(my $dbase, "+<dbase.mine") # ditto
or die "Can't open 'dbase.mine' for update: $!";
```
In the two-argument (and one-argument) form, opening `<-` or `-` opens STDIN and opening `>-` opens STDOUT.
New code should favor the three-argument form of `open` over this older form. Declaring the mode and the filename as two distinct arguments avoids any confusion between the two.
Calling `open` with one argument via global variables As a shortcut, a one-argument call takes the filename from the global scalar variable of the same name as the filehandle:
```
$ARTICLE = 100;
open(ARTICLE)
or die "Can't find article $ARTICLE: $!\n";
```
Here `$ARTICLE` must be a global (package) scalar variable - not one declared with [`my`](#my-VARLIST) or [`state`](#state-VARLIST).
Assigning a filehandle to a bareword An older style is to use a bareword as the filehandle, as
```
open(FH, "<", "input.txt")
or die "Can't open < input.txt: $!";
```
Then you can use `FH` as the filehandle, in `close FH` and `<FH>` and so on. Note that it's a global variable, so this form is not recommended when dealing with filehandles other than Perl's built-in ones (e.g. STDOUT and STDIN). In fact, using a bareword for the filehandle is an error when the `bareword_filehandles` feature has been disabled. This feature is disabled by default when in the scope of `use v5.36.0` or later.
Other considerations
Automatic filehandle closure The filehandle will be closed when its reference count reaches zero. If it is a lexically scoped variable declared with [`my`](#my-VARLIST), that usually means the end of the enclosing scope. However, this automatic close does not check for errors, so it is better to explicitly close filehandles, especially those used for writing:
```
close($handle)
|| warn "close failed: $!";
```
Automatic pipe flushing Perl will attempt to flush all files opened for output before any operation that may do a fork, but this may not be supported on some platforms (see <perlport>). To be safe, you may need to set [`$|`](perlvar#%24%7C) (`$OUTPUT_AUTOFLUSH` in [English](english)) or call the `autoflush` method of [`IO::Handle`](IO::Handle#METHODS) on any open handles.
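For example (a minimal sketch; the log file name is arbitrary):

```
use IO::Handle;                 # provides the autoflush() method
open(my $log_fh, ">", "run.log")
    or die "Can't open run.log: $!";
$log_fh->autoflush(1);          # flush this handle's writes immediately
STDOUT->autoflush(1);           # same effect as: select STDOUT; $| = 1
```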
On systems that support a close-on-exec flag on files, the flag will be set for the newly opened file descriptor as determined by the value of [`$^F`](perlvar#%24%5EF). See ["$^F" in perlvar](perlvar#%24%5EF).
Closing any piped filehandle causes the parent process to wait for the child to finish, then returns the status value in [`$?`](perlvar#%24%3F) and [`${^CHILD_ERROR_NATIVE}`](perlvar#%24%7B%5ECHILD_ERROR_NATIVE%7D).
Direct versus by-reference assignment of filehandles If FILEHANDLE -- the first argument in a call to `open` -- is an undefined scalar variable (or array or hash element), a new filehandle is autovivified, meaning that the variable is assigned a reference to a newly allocated anonymous filehandle. Otherwise if FILEHANDLE is an expression, its value is the real filehandle. (This is considered a symbolic reference, so `use strict "refs"` should *not* be in effect.)
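For example, an undefined hash element is autovivified just like an undefined lexical scalar (a minimal sketch; the file names are arbitrary):

```
my %fh;
open($fh{config}, "<", "app.conf") or die "Can't open app.conf: $!";
open($fh{log},    ">", "app.log")  or die "Can't open app.log: $!";
print { $fh{log} } scalar readline($fh{config});   # copy the first line
```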
Whitespace and special characters in the filename argument The filename passed to the one- and two-argument forms of [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) will have leading and trailing whitespace deleted and normal redirection characters honored. This property, known as "magic open", can often be used to good effect. A user could specify a filename of *"rsh cat file |"*, or you could change certain filenames as needed:
```
$filename =~ s/(.*\.gz)\s*$/gzip -dc < $1|/;
open(my $fh, $filename)
or die "Can't open $filename: $!";
```
Use the three-argument form to open a file with arbitrary weird characters in it,
```
open(my $fh, "<", $file)
|| die "Can't open $file: $!";
```
otherwise it's necessary to protect any leading and trailing whitespace:
```
$file =~ s#^(\s)#./$1#;
open(my $fh, "< $file\0")
|| die "Can't open $file: $!";
```
(this may not work on some bizarre filesystems). One should conscientiously choose between the *magic* and *three-argument* form of [`open`](#open-FILEHANDLE%2CMODE%2CEXPR):
```
open(my $in, $ARGV[0]) || die "Can't open $ARGV[0]: $!";
```
will allow the user to specify an argument of the form `"rsh cat file |"`, but will not work on a filename that happens to have a trailing space, while
```
open(my $in, "<", $ARGV[0])
|| die "Can't open $ARGV[0]: $!";
```
will have exactly the opposite restrictions. (However, some shells support the syntax `perl your_program.pl <( rsh cat file )`, which produces a filename that can be opened normally.)
Invoking C-style `open`
If you want a "real" C [open(2)](http://man.he.net/man2/open), then you should use the [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE) function, which involves no such magic (but uses different filemodes than Perl [`open`](#open-FILEHANDLE%2CMODE%2CEXPR), which corresponds to C [fopen(3)](http://man.he.net/man3/fopen)). This is another way to protect your filenames from interpretation. For example:
```
use Fcntl;        # supplies O_RDWR, O_CREAT, O_EXCL
use IO::Handle;
sysopen(my $fh, $path, O_RDWR|O_CREAT|O_EXCL)
or die "Can't open $path: $!";
$fh->autoflush(1);
print $fh "stuff $$\n";
seek($fh, 0, 0);
print "File contains: ", readline($fh);
```
See [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE) for some details about mixing reading and writing.
Portability issues See ["open" in perlport](perlport#open).
opendir DIRHANDLE,EXPR Opens a directory named EXPR for processing by [`readdir`](#readdir-DIRHANDLE), [`telldir`](#telldir-DIRHANDLE), [`seekdir`](#seekdir-DIRHANDLE%2CPOS), [`rewinddir`](#rewinddir-DIRHANDLE), and [`closedir`](#closedir-DIRHANDLE). Returns true if successful. DIRHANDLE may be an expression whose value can be used as an indirect dirhandle, usually the real dirhandle name. If DIRHANDLE is an undefined scalar variable (or array or hash element), the variable is assigned a reference to a new anonymous dirhandle; that is, it's autovivified. Dirhandles are the same objects as filehandles; an I/O object can only be open as one of these handle types at once.
See the example at [`readdir`](#readdir-DIRHANDLE).
ord EXPR ord Returns the numeric value of the first character of EXPR. If EXPR is an empty string, returns 0. If EXPR is omitted, uses [`$_`](perlvar#%24_). (Note *character*, not byte.)
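For example (a minimal sketch; the non-ASCII literal requires `use utf8` so the source is read as UTF-8):

```
use utf8;                     # the source below contains a literal "π"
print ord("A"), "\n";         # 65
print ord("π"), "\n";         # 960: the code point, not a byte value
print chr(ord("A")), "\n";    # "A": chr is the inverse of ord
```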
For the reverse, see [`chr`](#chr-NUMBER). See <perlunicode> for more about Unicode.
our VARLIST
our TYPE VARLIST
our VARLIST : ATTRS
our TYPE VARLIST : ATTRS [`our`](#our-VARLIST) makes a lexical alias to a package (i.e. global) variable of the same name in the current package for use within the current lexical scope.
[`our`](#our-VARLIST) has the same scoping rules as [`my`](#my-VARLIST) or [`state`](#state-VARLIST), meaning that it is only valid within a lexical scope. Unlike [`my`](#my-VARLIST) and [`state`](#state-VARLIST), which both declare new (lexical) variables, [`our`](#our-VARLIST) only creates an alias to an existing variable: a package variable of the same name.
This means that when `use strict 'vars'` is in effect, [`our`](#our-VARLIST) lets you use a package variable without qualifying it with the package name, but only within the lexical scope of the [`our`](#our-VARLIST) declaration. This applies immediately--even within the same statement.
```
package Foo;
use v5.36; # which implies "use strict;"
$Foo::foo = 23;
{
our $foo; # alias to $Foo::foo
print $foo; # prints 23
}
print $Foo::foo; # prints 23
print $foo; # ERROR: requires explicit package name
```
This works even if the package variable has not been used before, as package variables spring into existence when first used.
```
package Foo;
use v5.36;
our $foo = 23; # just like $Foo::foo = 23
print $Foo::foo; # prints 23
```
Because the variable becomes legal immediately under `use strict 'vars'`, so long as no variable with that name is already in scope, you can then reference the package variable again even within the same statement.
```
package Foo;
use v5.36;
my $foo = $foo; # error, undeclared $foo on right-hand side
our $foo = $foo; # no errors
```
If more than one variable is listed, the list must be placed in parentheses.
```
our($bar, $baz);
```
An [`our`](#our-VARLIST) declaration declares an alias for a package variable that will be visible across its entire lexical scope, even across package boundaries. The package in which the variable is entered is determined at the point of the declaration, not at the point of use. This means the following behavior holds:
```
package Foo;
our $bar; # declares $Foo::bar for rest of lexical scope
$bar = 20;
package Bar;
print $bar; # prints 20, as it refers to $Foo::bar
```
Multiple [`our`](#our-VARLIST) declarations with the same name in the same lexical scope are allowed if they are in different packages. If they happen to be in the same package, Perl will emit warnings if you have asked for them, just like multiple [`my`](#my-VARLIST) declarations. Unlike a second [`my`](#my-VARLIST) declaration, which will bind the name to a fresh variable, a second [`our`](#our-VARLIST) declaration in the same package, in the same scope, is merely redundant.
```
use warnings;
package Foo;
our $bar; # declares $Foo::bar for rest of lexical scope
$bar = 20;
package Bar;
our $bar = 30; # declares $Bar::bar for rest of lexical scope
print $bar; # prints 30
our $bar; # emits warning but has no other effect
print $bar; # still prints 30
```
An [`our`](#our-VARLIST) declaration may also have a list of attributes associated with it.
The exact semantics and interface of TYPE and ATTRS are still evolving. TYPE is currently bound to the use of the <fields> pragma, and attributes are handled using the <attributes> pragma, or, starting from Perl 5.8.0, also via the <Attribute::Handlers> module. See ["Private Variables via my()" in perlsub](perlsub#Private-Variables-via-my%28%29) for details.
Note that with a parenthesised list, [`undef`](#undef-EXPR) can be used as a dummy placeholder, for example to skip assignment of initial values:
```
our ( undef, $min, $hour ) = localtime;
```
[`our`](#our-VARLIST) differs from [`use vars`](vars), which allows use of an unqualified name *only* within the affected package, but across scopes.
pack TEMPLATE,LIST Takes a LIST of values and converts it into a string using the rules given by the TEMPLATE. The resulting string is the concatenation of the converted values. Typically, each converted value looks like its machine-level representation. For example, on 32-bit machines an integer may be represented by a sequence of 4 bytes, which will in Perl be presented as a string that's 4 characters long.
See <perlpacktut> for an introduction to this function.
The TEMPLATE is a sequence of characters that give the order and type of values, as follows:
```
a A string with arbitrary binary data, will be null padded.
A A text (ASCII) string, will be space padded.
Z A null-terminated (ASCIZ) string, will be null padded.
b A bit string (ascending bit order inside each byte,
like vec()).
B A bit string (descending bit order inside each byte).
h A hex string (low nybble first).
H A hex string (high nybble first).
c A signed char (8-bit) value.
C An unsigned char (octet) value.
W An unsigned char value (can be greater than 255).
s A signed short (16-bit) value.
S An unsigned short value.
l A signed long (32-bit) value.
L An unsigned long value.
q A signed quad (64-bit) value.
Q An unsigned quad value.
(Quads are available only if your system supports 64-bit
integer values _and_ if Perl has been compiled to support
those. Raises an exception otherwise.)
i A signed integer value.
I An unsigned integer value.
(This 'integer' is _at_least_ 32 bits wide. Its exact
size depends on what a local C compiler calls 'int'.)
n An unsigned short (16-bit) in "network" (big-endian) order.
N An unsigned long (32-bit) in "network" (big-endian) order.
v An unsigned short (16-bit) in "VAX" (little-endian) order.
V An unsigned long (32-bit) in "VAX" (little-endian) order.
j A Perl internal signed integer value (IV).
J A Perl internal unsigned integer value (UV).
f A single-precision float in native format.
d A double-precision float in native format.
F A Perl internal floating-point value (NV) in native format
D A float of long-double precision in native format.
(Long doubles are available only if your system supports
long double values. Raises an exception otherwise.
Note that there are different long double formats.)
p A pointer to a null-terminated string.
P A pointer to a structure (fixed-length string).
u A uuencoded string.
U A Unicode character number. Encodes to a character in char-
acter mode and UTF-8 (or UTF-EBCDIC in EBCDIC platforms) in
byte mode. Also on EBCDIC platforms, the character number will
be the native EBCDIC value for character numbers below 256.
This allows most programs using this feature to not have to
care which type of platform they are running on.
w A BER compressed integer (not an ASN.1 BER, see perlpacktut
for details). Its bytes represent an unsigned integer in
base 128, most significant digit first, with as few digits
as possible. Bit eight (the high bit) is set on each byte
except the last.
x A null byte (a.k.a ASCII NUL, "\000", chr(0))
X Back up a byte.
@ Null-fill or truncate to absolute position, counted from the
start of the innermost ()-group.
. Null-fill or truncate to absolute position specified by
the value.
( Start of a ()-group.
```
One or more modifiers below may optionally follow certain letters in the TEMPLATE (the second column lists letters for which the modifier is valid):
```
! sSlLiI Forces native (short, long, int) sizes instead
of fixed (16-/32-bit) sizes.
! xX Make x and X act as alignment commands.
! nNvV Treat integers as signed instead of unsigned.
! @. Specify position as byte offset in the internal
representation of the packed string. Efficient
but dangerous.
> sSiIlLqQ Force big-endian byte-order on the type.
jJfFdDpP (The "big end" touches the construct.)
< sSiIlLqQ Force little-endian byte-order on the type.
jJfFdDpP (The "little end" touches the construct.)
```
The `>` and `<` modifiers can also be used on `()` groups to force a particular byte-order on all components in that group, including all its subgroups.
The following rules apply:
* Each letter may optionally be followed by a number indicating the repeat count. A numeric repeat count may optionally be enclosed in brackets, as in `pack("C[80]", @arr)`. The repeat count gobbles that many values from the LIST when used with all format types other than `a`, `A`, `Z`, `b`, `B`, `h`, `H`, `@`, `.`, `x`, `X`, and `P`, where it means something else, described below. Supplying a `*` for the repeat count instead of a number means to use however many items are left, except for:
+ `@`, `x`, and `X`, where it is equivalent to `0`.
+ `.`, where it means relative to the start of the string.
+ `u`, where it is equivalent to 1 (or 45, which here is equivalent).
One can replace a numeric repeat count with a template letter enclosed in brackets to use the packed byte length of the bracketed template for the repeat count.
For example, the template `x[L]` skips as many bytes as in a packed long, and the template `"$t X[$t] $t"` unpacks twice whatever $t (when variable-expanded) unpacks. If the template in brackets contains alignment commands (such as `x![d]`), its packed length is calculated as if the start of the template had the maximal possible alignment.
When used with `Z`, a `*` as the repeat count is guaranteed to add a trailing null byte, so the resulting string is always one byte longer than the byte length of the item itself.
When used with `@`, the repeat count represents an offset from the start of the innermost `()` group.
When used with `.`, the repeat count determines the starting position to calculate the value offset as follows:
+ If the repeat count is `0`, it's relative to the current position.
+ If the repeat count is `*`, the offset is relative to the start of the packed string.
+ And if it's an integer *n*, the offset is relative to the start of the *n*th innermost `( )` group, or to the start of the string if *n* is bigger than the group level.
The repeat count for `u` is interpreted as the maximal number of bytes to encode per line of output, with 0, 1 and 2 replaced by 45. The repeat count should not be more than 65.
* The `a`, `A`, and `Z` types gobble just one value, but pack it as a string of length count, padding with nulls or spaces as needed. When unpacking, `A` strips trailing whitespace and nulls, `Z` strips everything after the first null, and `a` returns data with no stripping at all.
If the value to pack is too long, the result is truncated. If it's too long and an explicit count is provided, `Z` packs only `$count-1` bytes, followed by a null byte. Thus `Z` always packs a trailing null, except when the count is 0.
* Likewise, the `b` and `B` formats pack a string that's that many bits long. Each such format generates 1 bit of the result. These are typically followed by a repeat count like `B8` or `B64`.
Each result bit is based on the least-significant bit of the corresponding input character, i.e., on `ord($char)%2`. In particular, characters `"0"` and `"1"` generate bits 0 and 1, as do characters `"\000"` and `"\001"`.
Starting from the beginning of the input string, each 8-tuple of characters is converted to 1 character of output. With format `b`, the first character of the 8-tuple determines the least-significant bit of a character; with format `B`, it determines the most-significant bit of a character.
If the length of the input string is not evenly divisible by 8, the remainder is packed as if the input string were padded by null characters at the end. Similarly during unpacking, "extra" bits are ignored.
If the input string is longer than needed, remaining characters are ignored.
A `*` for the repeat count uses all characters of the input field. On unpacking, bits are converted to a string of `0`s and `1`s.
* The `h` and `H` formats pack a string that many nybbles (4-bit groups, representable as hexadecimal digits, `"0".."9"` `"a".."f"`) long.
For each such format, [`pack`](#pack-TEMPLATE%2CLIST) generates 4 bits of result. With non-alphabetical characters, the result is based on the 4 least-significant bits of the input character, i.e., on `ord($char)%16`. In particular, characters `"0"` and `"1"` generate nybbles 0 and 1, as do bytes `"\000"` and `"\001"`. For characters `"a".."f"` and `"A".."F"`, the result is compatible with the usual hexadecimal digits, so that `"a"` and `"A"` both generate the nybble `0xA==10`. Use only these specific hex characters with this format.
Starting from the beginning of the input string to [`pack`](#pack-TEMPLATE%2CLIST), each pair of characters is converted to 1 character of output. With format `h`, the first character of the pair determines the least-significant nybble of the output character; with format `H`, it determines the most-significant nybble.
If the length of the input string is not even, it behaves as if padded by a null character at the end. Similarly, "extra" nybbles are ignored during unpacking.
If the input string is longer than needed, extra characters are ignored.
A `*` for the repeat count uses all characters of the input field. For [`unpack`](#unpack-TEMPLATE%2CEXPR), nybbles are converted to a string of hexadecimal digits.
* The `p` format packs a pointer to a null-terminated string. You are responsible for ensuring that the string is not a temporary value, as that could potentially get deallocated before you got around to using the packed result. The `P` format packs a pointer to a structure of the size indicated by the length. A null pointer is created if the corresponding value for `p` or `P` is [`undef`](#undef-EXPR); similarly with [`unpack`](#unpack-TEMPLATE%2CEXPR), where a null pointer unpacks into [`undef`](#undef-EXPR).
If your system has a strange pointer size--meaning a pointer is neither as big as an int nor as big as a long--it may not be possible to pack or unpack pointers in big- or little-endian byte order. Attempting to do so raises an exception.
* The `/` template character allows packing and unpacking of a sequence of items where the packed structure contains a packed item count followed by the packed items themselves. This is useful when the structure you're unpacking has encoded the sizes or repeat counts for some of its fields within the structure itself as separate fields.
For [`pack`](#pack-TEMPLATE%2CLIST), you write *length-item*`/`*sequence-item*, and the *length-item* describes how the length value is packed. Formats likely to be of most use are integer-packing ones like `n` for Java strings, `w` for ASN.1 or SNMP, and `N` for Sun XDR.
For [`pack`](#pack-TEMPLATE%2CLIST), *sequence-item* may have a repeat count, in which case the minimum of that and the number of available items is used as the argument for *length-item*. If it has no repeat count or uses a '\*', the number of available items is used.
For [`unpack`](#unpack-TEMPLATE%2CEXPR), an internal stack of integer arguments unpacked so far is used. You write `/`*sequence-item* and the repeat count is obtained by popping off the last element from the stack. The *sequence-item* must not have a repeat count.
If *sequence-item* refers to a string type (`"A"`, `"a"`, or `"Z"`), the *length-item* is the string length, not the number of strings. With an explicit repeat count for pack, the packed string is adjusted to that length. For example:
```
This code: gives this result:
unpack("W/a", "\004Gurusamy") ("Guru")
unpack("a3/A A*", "007 Bond J ") (" Bond", "J")
unpack("a3 x2 /A A*", "007: Bond, J.") ("Bond, J", ".")
pack("n/a* w/a","hello,","world") "\000\006hello,\005world"
pack("a/W2", ord("a") .. ord("z")) "2ab"
```
The *length-item* is not returned explicitly from [`unpack`](#unpack-TEMPLATE%2CEXPR).
Supplying a count to the *length-item* format letter is only useful with `A`, `a`, or `Z`. Packing with a *length-item* of `a` or `Z` may introduce `"\000"` characters, which Perl does not regard as legal in numeric strings.
* The integer types `s`, `S`, `l`, and `L` may be followed by a `!` modifier to specify native shorts or longs. As shown in the example above, a bare `l` means exactly 32 bits, although the native `long` as seen by the local C compiler may be larger. This is mainly an issue on 64-bit platforms. You can see whether using `!` makes any difference this way:
```
printf "format s is %d, s! is %d\n",
length pack("s"), length pack("s!");
printf "format l is %d, l! is %d\n",
length pack("l"), length pack("l!");
```
`i!` and `I!` are also allowed, but only for completeness' sake: they are identical to `i` and `I`.
The actual sizes (in bytes) of native shorts, ints, longs, and long longs on the platform where Perl was built are also available from the command line:
```
$ perl -V:{short,int,long{,long}}size
shortsize='2';
intsize='4';
longsize='4';
longlongsize='8';
```
or programmatically via the [`Config`](config) module:
```
use Config;
print $Config{shortsize}, "\n";
print $Config{intsize}, "\n";
print $Config{longsize}, "\n";
print $Config{longlongsize}, "\n";
```
`$Config{longlongsize}` is undefined on systems without long long support.
* The integer formats `s`, `S`, `i`, `I`, `l`, `L`, `j`, and `J` are inherently non-portable between processors and operating systems because they obey native byteorder and endianness. For example, a 4-byte integer 0x12345678 (305419896 decimal) would be ordered natively (arranged in and handled by the CPU registers) into bytes as
```
0x12 0x34 0x56 0x78 # big-endian
0x78 0x56 0x34 0x12 # little-endian
```
Basically, Intel and VAX CPUs are little-endian, while everybody else, including Motorola m68k/88k, PPC, Sparc, HP PA, Power, and Cray, are big-endian. Alpha and MIPS can be either: Digital/Compaq uses (well, used) them in little-endian mode, but SGI/Cray uses them in big-endian mode.
The names *big-endian* and *little-endian* are comic references to the egg-eating habits of the little-endian Lilliputians and the big-endian Blefuscudians from the classic Jonathan Swift satire, *Gulliver's Travels*. This entered computer lingo via the paper "On Holy Wars and a Plea for Peace" by Danny Cohen, USC/ISI IEN 137, April 1, 1980.
Some systems may have even weirder byte orders such as
```
0x56 0x78 0x12 0x34
0x34 0x12 0x78 0x56
```
These are called mid-endian, middle-endian, mixed-endian, or just weird.
You can determine your system endianness with this incantation:
```
printf("%#02x ", $_) for unpack("W*", pack L=>0x12345678);
```
The byteorder on the platform where Perl was built is also available via [Config](config):
```
use Config;
print "$Config{byteorder}\n";
```
or from the command line:
```
$ perl -V:byteorder
```
Byteorders `"1234"` and `"12345678"` are little-endian; `"4321"` and `"87654321"` are big-endian. Systems with multiarchitecture binaries will have `"ffff"`, signifying that static information doesn't work, one must use runtime probing.
For portably packed integers, either use the formats `n`, `N`, `v`, and `V` or else use the `>` and `<` modifiers described immediately below. See also <perlport>.
* Floating-point numbers also have endianness. Usually (but not always) this agrees with the integer endianness. Even though most platforms these days use the IEEE 754 binary format, there are differences, especially if long doubles are involved. You can check the `Config` variables `doublekind` and `longdblkind` (also `doublesize` and `longdblsize`): the "kind" values are enums, unlike `byteorder`.
Portability-wise the best option is probably to stick to IEEE 754 64-bit doubles with an agreed-upon endianness. Another possibility is the `"%a"` format of [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST).
* Starting with Perl 5.10.0, integer and floating-point formats, along with the `p` and `P` formats and `()` groups, may all be followed by the `>` or `<` endianness modifiers to respectively enforce big- or little-endian byte-order. These modifiers are especially useful given how `n`, `N`, `v`, and `V` don't cover signed integers, 64-bit integers, or floating-point values.
Here are some concerns to keep in mind when using an endianness modifier:
+ Exchanging signed integers between different platforms works only when all platforms store them in the same format. Most platforms store signed integers in two's-complement notation, so usually this is not an issue.
+ The `>` or `<` modifiers can only be used on floating-point formats on big- or little-endian machines. Otherwise, attempting to use them raises an exception.
+ Forcing big- or little-endian byte-order on floating-point values for data exchange can work only if all platforms use the same binary representation such as IEEE floating-point. Even if all platforms are using IEEE, there may still be subtle differences. Being able to use `>` or `<` on floating-point values can be useful, but also dangerous if you don't know exactly what you're doing. It is not a general way to portably store floating-point values.
+ When using `>` or `<` on a `()` group, this affects all types inside the group that accept byte-order modifiers, including all subgroups. It is silently ignored for all other types. You are not allowed to override the byte-order within a group that already has a byte-order modifier suffix.
* Real numbers (floats and doubles) are in native machine format only. Due to the multiplicity of floating-point formats and the lack of a standard "network" representation for them, no facility for interchange has been made. This means that packed floating-point data written on one machine may not be readable on another, even if both use IEEE floating-point arithmetic (because the endianness of the memory representation is not part of the IEEE spec). See also <perlport>.
If you know *exactly* what you're doing, you can use the `>` or `<` modifiers to force big- or little-endian byte-order on floating-point values.
Because Perl uses doubles (or long doubles, if configured) internally for all numeric calculation, converting from double into float and thence to double again loses precision, so `unpack("f", pack("f", $foo))` will not in general equal `$foo`.
* Pack and unpack can operate in two modes: character mode (`C0` mode) where the packed string is processed per character, and UTF-8 byte mode (`U0` mode) where the packed string is processed in its UTF-8-encoded Unicode form on a byte-by-byte basis. Character mode is the default unless the format string starts with `U`. You can always switch mode mid-format with an explicit `C0` or `U0` in the format. This mode remains in effect until the next mode change, or until the end of the `()` group it (directly) applies to.
Using `C0` to get Unicode characters while using `U0` to get *non*-Unicode bytes is not necessarily obvious. Probably only the first of these is what you want:
```
$ perl -CS -E 'say "\x{3B1}\x{3C9}"' |
perl -CS -ne 'printf "%v04X\n", $_ for unpack("C0A*", $_)'
03B1.03C9
$ perl -CS -E 'say "\x{3B1}\x{3C9}"' |
perl -CS -ne 'printf "%v02X\n", $_ for unpack("U0A*", $_)'
CE.B1.CF.89
$ perl -CS -E 'say "\x{3B1}\x{3C9}"' |
perl -C0 -ne 'printf "%v02X\n", $_ for unpack("C0A*", $_)'
CE.B1.CF.89
$ perl -CS -E 'say "\x{3B1}\x{3C9}"' |
perl -C0 -ne 'printf "%v02X\n", $_ for unpack("U0A*", $_)'
C3.8E.C2.B1.C3.8F.C2.89
```
Those examples also illustrate that you should not try to use [`pack`](#pack-TEMPLATE%2CLIST)/[`unpack`](#unpack-TEMPLATE%2CEXPR) as a substitute for the [Encode](encode) module.
* You must yourself do any alignment or padding by inserting, for example, enough `"x"`es while packing. There is no way for [`pack`](#pack-TEMPLATE%2CLIST) and [`unpack`](#unpack-TEMPLATE%2CEXPR) to know where characters are going to or coming from, so they handle their output and input as flat sequences of characters.
* A `()` group is a sub-TEMPLATE enclosed in parentheses. A group may take a repeat count either as postfix, or for [`unpack`](#unpack-TEMPLATE%2CEXPR), also via the `/` template character. Within each repetition of a group, positioning with `@` starts over at 0. Therefore, the result of
```
pack("@1A((@2A)@3A)", qw[X Y Z])
```
is the string `"\0X\0\0YZ"`.
* `x` and `X` accept the `!` modifier to act as alignment commands: they jump forward or back to the closest position aligned at a multiple of `count` characters. For example, to [`pack`](#pack-TEMPLATE%2CLIST) or [`unpack`](#unpack-TEMPLATE%2CEXPR) a C structure like
```
struct {
char c; /* one signed, 8-bit character */
double d;
char cc[2];
}
```
one may need to use the template `c x![d] d c[2]`. This assumes that doubles must be aligned to the size of double.
For alignment commands, a `count` of 0 is equivalent to a `count` of 1; both are no-ops.
* `n`, `N`, `v` and `V` accept the `!` modifier to represent signed 16-/32-bit integers in big-/little-endian order. This is portable only when all platforms sharing packed data use the same binary representation for signed integers; for example, when all platforms use two's-complement representation.
* Comments can be embedded in a TEMPLATE using `#` through the end of line. White space can separate pack codes from each other, but modifiers and repeat counts must follow immediately. Breaking complex templates into individual line-by-line components, suitably annotated, can do as much to improve legibility and maintainability of pack/unpack formats as `/x` can for complicated pattern matches.
* If TEMPLATE requires more arguments than [`pack`](#pack-TEMPLATE%2CLIST) is given, [`pack`](#pack-TEMPLATE%2CLIST) assumes additional `""` arguments. If TEMPLATE requires fewer arguments than given, extra arguments are ignored.
* Attempting to pack the special floating point values `Inf` and `NaN` (infinity, also in negative, and not-a-number) into packed integer values (like `"L"`) is a fatal error. The reason for this is that there simply isn't any sensible mapping for these special values into integers.
Examples:
```
$foo = pack("WWWW",65,66,67,68);
# foo eq "ABCD"
$foo = pack("W4",65,66,67,68);
# same thing
$foo = pack("W4",0x24b6,0x24b7,0x24b8,0x24b9);
# same thing with Unicode circled letters.
$foo = pack("U4",0x24b6,0x24b7,0x24b8,0x24b9);
# same thing with Unicode circled letters. You don't get the
# UTF-8 bytes because the U at the start of the format caused
# a switch to U0-mode, so the UTF-8 bytes get joined into
# characters
$foo = pack("C0U4",0x24b6,0x24b7,0x24b8,0x24b9);
# foo eq "\xe2\x92\xb6\xe2\x92\xb7\xe2\x92\xb8\xe2\x92\xb9"
# This is the UTF-8 encoding of the string in the
# previous example
$foo = pack("ccxxcc",65,66,67,68);
# foo eq "AB\0\0CD"
# NOTE: The examples above featuring "W" and "c" are true
# only on ASCII and ASCII-derived systems such as ISO Latin 1
# and UTF-8. On EBCDIC systems, the first example would be
# $foo = pack("WWWW",193,194,195,196);
$foo = pack("s2",1,2);
# "\001\000\002\000" on little-endian
# "\000\001\000\002" on big-endian
$foo = pack("a4","abcd","x","y","z");
# "abcd"
$foo = pack("aaaa","abcd","x","y","z");
# "axyz"
$foo = pack("a14","abcdefg");
# "abcdefg\0\0\0\0\0\0\0"
$foo = pack("i9pl", gmtime);
# a real struct tm (on my system anyway)
$utmp_template = "Z8 Z8 Z16 L";
$utmp = pack($utmp_template, @utmp1);
# a struct utmp (BSDish)
@utmp2 = unpack($utmp_template, $utmp);
# "@utmp1" eq "@utmp2"
sub bintodec {
unpack("N", pack("B32", substr("0" x 32 . shift, -32)));
}
$foo = pack('sx2l', 12, 34);
# short 12, two zero bytes padding, long 34
$bar = pack('s@4l', 12, 34);
# short 12, zero fill to position 4, long 34
# $foo eq $bar
$baz = pack('s.l', 12, 4, 34);
# short 12, zero fill to position 4, long 34
$foo = pack('nN', 42, 4711);
# pack big-endian 16- and 32-bit unsigned integers
$foo = pack('S>L>', 42, 4711);
# exactly the same
$foo = pack('s<l<', -42, 4711);
# pack little-endian 16- and 32-bit signed integers
$foo = pack('(sl)<', -42, 4711);
# exactly the same
```
The same template may generally also be used in [`unpack`](#unpack-TEMPLATE%2CEXPR).
package NAMESPACE
package NAMESPACE VERSION
package NAMESPACE BLOCK
package NAMESPACE VERSION BLOCK Declares the BLOCK or the rest of the compilation unit as being in the given namespace. The scope of the package declaration is either the supplied code BLOCK or, in the absence of a BLOCK, from the declaration itself through the end of current scope (the enclosing block, file, or [`eval`](#eval-EXPR)). That is, the forms without a BLOCK are operative through the end of the current scope, just like the [`my`](#my-VARLIST), [`state`](#state-VARLIST), and [`our`](#our-VARLIST) operators. All unqualified dynamic identifiers in this scope will be in the given namespace, except where overridden by another [`package`](#package-NAMESPACE) declaration or when they're one of the special identifiers that qualify into `main::`, like `STDOUT`, `ARGV`, `ENV`, and the punctuation variables.
A package statement affects dynamic variables only, including those you've used [`local`](#local-EXPR) on, but *not* lexically-scoped variables, which are created with [`my`](#my-VARLIST), [`state`](#state-VARLIST), or [`our`](#our-VARLIST). Typically it would be the first declaration in a file included by [`require`](#require-VERSION) or [`use`](#use-Module-VERSION-LIST). You can switch into a package in more than one place, since this only determines which default symbol table the compiler uses for the rest of that block. You can refer to identifiers in other packages than the current one by prefixing the identifier with the package name and a double colon, as in `$SomePack::var` or `ThatPack::INPUT_HANDLE`. If package name is omitted, the `main` package is assumed. That is, `$::sail` is equivalent to `$main::sail` (as well as to `$main'sail`, still seen in ancient code, mostly from Perl 4).
If VERSION is provided, [`package`](#package-NAMESPACE) sets the `$VERSION` variable in the given namespace to a <version> object with the VERSION provided. VERSION must be a "strict" style version number as defined by the <version> module: a positive decimal number (integer or decimal-fraction) without exponentiation or else a dotted-decimal v-string with a leading 'v' character and at least three components. You should set `$VERSION` only once per package.
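For example (a minimal sketch; the package names are hypothetical, and the BLOCK form requires Perl 5.14 or later):

```
package Local::Widget v1.2.3;      # sets $Local::Widget::VERSION to v1.2.3

package Local::Gadget 0.01 {       # BLOCK form: the package ends with the block
    sub new { bless {}, shift }
}
```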
See ["Packages" in perlmod](perlmod#Packages) for more information about packages, modules, and classes. See <perlsub> for other scoping issues.
\_\_PACKAGE\_\_ A special token that returns the name of the package in which it occurs.
pipe READHANDLE,WRITEHANDLE Opens a pair of connected pipes like the corresponding system call. Note that if you set up a loop of piped processes, deadlock can occur unless you are very careful. In addition, note that Perl's pipes use IO buffering, so you may need to set [`$|`](perlvar#%24%7C) to flush your WRITEHANDLE after each command, depending on the application.
Returns true on success.
See <IPC::Open2>, <IPC::Open3>, and ["Bidirectional Communication with Another Process" in perlipc](perlipc#Bidirectional-Communication-with-Another-Process) for examples of such things.
On systems that support a close-on-exec flag on files, that flag is set on all newly opened file descriptors whose [`fileno`](#fileno-FILEHANDLE)s are *higher* than the current value of [`$^F`](perlvar#%24%5EF) (by default 2 for `STDERR`). See ["$^F" in perlvar](perlvar#%24%5EF).
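A minimal sketch of the classic pipe-plus-[`fork`](#fork) pattern (error handling abbreviated):

```
pipe(my $reader, my $writer) or die "Can't create pipe: $!";
my $pid = fork() // die "Can't fork: $!";
if ($pid) {                          # parent: read end
    close $writer;
    print "parent got: $_" while <$reader>;
    waitpid $pid, 0;
} else {                             # child: write end
    close $reader;
    select $writer; $| = 1;          # flush the pipe promptly
    print $writer "hello from the child\n";
    exit 0;
}
```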
pop ARRAY pop Pops and returns the last value of the array, shortening the array by one element.
Returns the undefined value if the array is empty, although this may also happen at other times. If ARRAY is omitted, pops the [`@ARGV`](perlvar#%40ARGV) array in the main program, but the [`@_`](perlvar#%40_) array in subroutines, just like [`shift`](#shift-ARRAY).
Starting with Perl 5.14, an experimental feature allowed [`pop`](#pop-ARRAY) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
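For example (a minimal sketch):

```
my @stack = ("a", "b", "c");
my $top = pop @stack;    # $top is "c"; @stack is now ("a", "b")
pop @stack while @stack; # drain the rest, one element at a time
```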
pos SCALAR pos Returns the offset of where the last `m//g` search left off for the variable in question ([`$_`](perlvar#%24_) is used when the variable is not specified). This offset is in characters unless the (no-longer-recommended) [`use bytes`](bytes) pragma is in effect, in which case the offset is in bytes. Note that 0 is a valid match offset. [`undef`](#undef-EXPR) indicates that the search position is reset (usually due to match failure, but can also be because no match has yet been run on the scalar).
[`pos`](#pos-SCALAR) directly accesses the location used by the regexp engine to store the offset, so assigning to [`pos`](#pos-SCALAR) will change that offset, and so will also influence the `\G` zero-width assertion in regular expressions. Both of these effects take place for the next match, so you can't affect the position with [`pos`](#pos-SCALAR) during the current match, such as in `(?{pos() = 5})` or `s//pos() = 5/e`.
Setting [`pos`](#pos-SCALAR) also resets the *matched with zero-length* flag, described under ["Repeated Patterns Matching a Zero-length Substring" in perlre](perlre#Repeated-Patterns-Matching-a-Zero-length-Substring).
Because a failed `m//gc` match doesn't reset the offset, the return from [`pos`](#pos-SCALAR) won't change either in this case. See <perlre> and <perlop>.
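For example (a minimal sketch):

```
my $str = "aXbXc";
while ($str =~ /X/g) {
    print pos($str), "\n";   # prints 2, then 4: the offset just past each match
}
pos($str) = 3;               # pos is an lvalue: the next //g match starts at offset 3
```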
print FILEHANDLE LIST
print FILEHANDLE
print LIST print Prints a string or a list of strings. Returns true if successful. FILEHANDLE may be a scalar variable containing the name of or a reference to the filehandle, thus introducing one level of indirection. (NOTE: If FILEHANDLE is a variable and the next token is a term, it may be misinterpreted as an operator unless you interpose a `+` or put parentheses around the arguments.) If FILEHANDLE is omitted, prints to the last selected (see [`select`](#select-FILEHANDLE)) output handle. If LIST is omitted, prints [`$_`](perlvar#%24_) to the currently selected output handle. To use FILEHANDLE alone to print the content of [`$_`](perlvar#%24_) to it, you must use a bareword filehandle like `FH`, not an indirect one like `$fh`. To set the default output handle to something other than STDOUT, use the select operation.
The current value of [`$,`](perlvar#%24%2C) (if any) is printed between each LIST item. The current value of [`$\`](perlvar#%24%5C) (if any) is printed after the entire LIST has been printed. Because print takes a LIST, anything in the LIST is evaluated in list context, including any subroutines whose return lists you pass to [`print`](#print-FILEHANDLE-LIST). Be careful not to follow the print keyword with a left parenthesis unless you want the corresponding right parenthesis to terminate the arguments to the print; put parentheses around all arguments (or interpose a `+`, but that doesn't look as good).
If you're storing handles in an array or hash, or in general whenever you're using any expression more complex than a bareword handle or a plain, unsubscripted scalar variable to retrieve it, you will have to use a block returning the filehandle value instead, in which case the LIST may not be omitted:
```
print { $files[$i] } "stuff\n";
print { $OK ? *STDOUT : *STDERR } "stuff\n";
```
Printing to a closed pipe or socket will generate a SIGPIPE signal. See <perlipc> for more on signal handling.
printf FILEHANDLE FORMAT, LIST
printf FILEHANDLE
printf FORMAT, LIST printf Equivalent to `print FILEHANDLE sprintf(FORMAT, LIST)`, except that [`$\`](perlvar#%24%5C) (the output record separator) is not appended. The FORMAT and the LIST are actually parsed as a single list. The first argument of the list will be interpreted as the [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST) format. This means that `printf(@_)` will use `$_[0]` as the format. See [sprintf](#sprintf-FORMAT%2C-LIST) for an explanation of the format argument. If `use locale` (including `use locale ':not_characters'`) is in effect and [`POSIX::setlocale`](posix#setlocale) has been called, the character used for the decimal separator in formatted floating-point numbers is affected by the `LC_NUMERIC` locale setting. See <perllocale> and [POSIX](posix).
For historical reasons, if you omit the list, [`$_`](perlvar#%24_) is used as the format; to use FILEHANDLE without a list, you must use a bareword filehandle like `FH`, not an indirect one like `$fh`. However, this will rarely do what you want; if [`$_`](perlvar#%24_) contains formatting codes, they will be replaced with the empty string and a warning will be emitted if <warnings> are enabled. Just use [`print`](#print-FILEHANDLE-LIST) if you want to print the contents of [`$_`](perlvar#%24_).
Don't fall into the trap of using a [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST) when a simple [`print`](#print-FILEHANDLE-LIST) would do. The [`print`](#print-FILEHANDLE-LIST) is more efficient and less error prone.
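For example (a minimal sketch):

```
my $warnings = 3;
printf "%-10s %6.2f%%\n", "coverage", 87.5;   # fixed-width columns
printf STDERR "%d warning%s\n", $warnings, $warnings == 1 ? "" : "s";
```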
prototype FUNCTION prototype Returns the prototype of a function as a string (or [`undef`](#undef-EXPR) if the function has no prototype). FUNCTION is a reference to, or the name of, the function whose prototype you want to retrieve. If FUNCTION is omitted, [`$_`](perlvar#%24_) is used.
If FUNCTION is a string starting with `CORE::`, the rest is taken as a name for a Perl builtin. If the builtin's arguments cannot be adequately expressed by a prototype (such as [`system`](#system-LIST)), [`prototype`](#prototype-FUNCTION) returns [`undef`](#undef-EXPR), because the builtin does not really behave like a Perl function. Otherwise, the string describing the equivalent prototype is returned.
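For example (a minimal sketch; `max` is a hypothetical subroutine declared with a prototype):

```
sub max (\@) { my ($aref) = @_; (sort { $b <=> $a } @$aref)[0] }
print prototype(\&max), "\n";       # prints "\@"
print defined prototype("CORE::system") ? "has prototype" : "no prototype", "\n";
                                    # "no prototype": system can't be expressed
```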
push ARRAY,LIST Treats ARRAY as a stack by appending the values of LIST to the end of ARRAY. The length of ARRAY increases by the length of LIST. Has the same effect as
```
for my $value (LIST) {
$ARRAY[++$#ARRAY] = $value;
}
```
but is more efficient. Returns the number of elements in the array following the completed [`push`](#push-ARRAY%2CLIST).
Starting with Perl 5.14, an experimental feature allowed [`push`](#push-ARRAY%2CLIST) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
q/STRING/
qq/STRING/
qw/STRING/
qx/STRING/ Generalized quotes. See ["Quote-Like Operators" in perlop](perlop#Quote-Like-Operators).
qr/STRING/ Regexp-like quote. See ["Regexp Quote-Like Operators" in perlop](perlop#Regexp-Quote-Like-Operators).
quotemeta EXPR quotemeta Returns the value of EXPR with all the ASCII non-"word" characters backslashed. (That is, all ASCII characters not matching `/[A-Za-z_0-9]/` will be preceded by a backslash in the returned string, regardless of any locale settings.) This is the internal function implementing the `\Q` escape in double-quoted strings. (See below for the behavior on non-ASCII code points.)
If EXPR is omitted, uses [`$_`](perlvar#%24_).
quotemeta (and `\Q` ... `\E`) are useful when interpolating strings into regular expressions, because by default an interpolated variable will be considered a mini-regular expression. For example:
```
my $sentence = 'The quick brown fox jumped over the lazy dog';
my $substring = 'quick.*?fox';
$sentence =~ s{$substring}{big bad wolf};
```
Will cause `$sentence` to become `'The big bad wolf jumped over...'`.
On the other hand:
```
my $sentence = 'The quick brown fox jumped over the lazy dog';
my $substring = 'quick.*?fox';
$sentence =~ s{\Q$substring\E}{big bad wolf};
```
Or:
```
my $sentence = 'The quick brown fox jumped over the lazy dog';
my $substring = 'quick.*?fox';
my $quoted_substring = quotemeta($substring);
$sentence =~ s{$quoted_substring}{big bad wolf};
```
Will both leave the sentence as is. Normally, when accepting literal string input from the user, [`quotemeta`](#quotemeta-EXPR) or `\Q` must be used.
Beware that if you put literal backslashes (those not inside interpolated variables) between `\Q` and `\E`, double-quotish backslash interpolation may lead to confusing results. If you *need* to use literal backslashes within `\Q...\E`, consult ["Gory details of parsing quoted constructs" in perlop](perlop#Gory-details-of-parsing-quoted-constructs).
Because the result of `"\Q *STRING* \E"` has all metacharacters quoted, there is no way to insert a literal `$` or `@` inside a `\Q\E` pair. If protected by `\`, `$` will be quoted to become `"\\\$"`; if not, it is interpreted as the start of an interpolated scalar.
In Perl v5.14, all non-ASCII characters are quoted in non-UTF-8-encoded strings, but not quoted in UTF-8 strings.
Starting in Perl v5.16, Perl adopted a Unicode-defined strategy for quoting non-ASCII characters; the quoting of ASCII characters is unchanged.
Also unchanged is the quoting of non-UTF-8 strings when outside the scope of a [`use feature 'unicode_strings'`](feature#The-%27unicode_strings%27-feature), which is to quote all characters in the upper Latin1 range. This provides complete backwards compatibility for old programs which do not use Unicode. (Note that `unicode_strings` is automatically enabled within the scope of a `use v5.12` or greater.)
Within the scope of [`use locale`](locale), all non-ASCII Latin1 code points are quoted whether the string is encoded as UTF-8 or not. As mentioned above, locale does not affect the quoting of ASCII-range characters. This protects against those locales where characters such as `"|"` are considered to be word characters.
Otherwise, Perl quotes non-ASCII characters using an adaptation from Unicode (see <https://www.unicode.org/reports/tr31/>). The only code points that are quoted are those that have any of the Unicode properties: Pattern\_Syntax, Pattern\_White\_Space, White\_Space, Default\_Ignorable\_Code\_Point, or General\_Category=Control.
Of these properties, the two important ones are Pattern\_Syntax and Pattern\_White\_Space. They have been set up by Unicode for exactly this purpose of deciding which characters in a regular expression pattern should be quoted. No character that can be in an identifier has these properties.
Perl promises that if we ever add regular expression pattern metacharacters to the dozen already defined (`\ | ( ) [ { ^ $ * + ? .`), we will only use ones that have the Pattern\_Syntax property. Perl also promises that if we ever add characters that are considered to be white space in regular expressions (currently mostly affected by `/x`), they will all have the Pattern\_White\_Space property.
Unicode promises that the set of code points that have these two properties will never change, so something that is not quoted in v5.16 will never need to be quoted in any future Perl release. (Not all the code points that match Pattern\_Syntax have actually had characters assigned to them; so there is room to grow, but they are quoted whether assigned or not. Perl, of course, would never use an unassigned code point as an actual metacharacter.)
Quoting characters that have the other 3 properties is done to enhance the readability of the regular expression and not because they actually need to be quoted for regular expression purposes (characters with the White\_Space property are likely to be indistinguishable on the page or screen from those with the Pattern\_White\_Space property; and the other two properties contain non-printing characters).
rand EXPR rand Returns a random fractional number greater than or equal to `0` and less than the value of EXPR. (EXPR should be positive.) If EXPR is omitted, the value `1` is used. Currently EXPR with the value `0` is also special-cased as `1` (this was undocumented before Perl 5.8.0 and is subject to change in future versions of Perl). Automatically calls [`srand`](#srand-EXPR) unless [`srand`](#srand-EXPR) has already been called. See also [`srand`](#srand-EXPR).
Apply [`int`](#int-EXPR) to the value returned by [`rand`](#rand-EXPR) if you want random integers instead of random fractional numbers. For example,
```
int(rand(10))
```
returns a random integer between `0` and `9`, inclusive.
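A common application is picking a random element of an array, since `rand @array` evaluates the array in scalar context and so returns a fraction below the element count (a minimal sketch):

```
my @greetings = ("hello", "hi", "hey");
my $pick = $greetings[rand @greetings];   # a random element of @greetings
print "$pick\n";
```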
(Note: If your rand function consistently returns numbers that are too large or too small, then your version of Perl was probably compiled with the wrong number of RANDBITS.)
**[`rand`](#rand-EXPR) is not cryptographically secure. You should not rely on it in security-sensitive situations.** As of this writing, a number of third-party CPAN modules offer random number generators intended by their authors to be cryptographically secure, including: <Data::Entropy>, <Crypt::Random>, <Math::Random::Secure>, and <Math::TrulyRandom>.
read FILEHANDLE,SCALAR,LENGTH,OFFSET
read FILEHANDLE,SCALAR,LENGTH Attempts to read LENGTH *characters* of data into variable SCALAR from the specified FILEHANDLE. Returns the number of characters actually read, `0` at end of file, or undef if there was an error (in the latter case [`$!`](perlvar#%24%21) is also set). SCALAR will be grown or shrunk so that the last character actually read is the last character of the scalar after the read.
An OFFSET may be specified to place the read data at some place in the string other than the beginning. A negative OFFSET specifies placement at that many characters counting backwards from the end of the string. A positive OFFSET greater than the length of SCALAR results in the string being padded to the required size with `"\0"` bytes before the result of the read is appended.
The call is implemented in terms of either Perl's or your system's native [fread(3)](http://man.he.net/man3/fread) library function, via the [PerlIO](perlio) layers applied to the handle. To get a true [read(2)](http://man.he.net/man2/read) system call, see [sysread](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET).
Note the *characters*: depending on the status of the filehandle, either (8-bit) bytes or characters are read. By default, all filehandles operate on bytes, but for example if the filehandle has been opened with the `:utf8` I/O layer (see [`open`](#open-FILEHANDLE%2CMODE%2CEXPR), and the <open> pragma), the I/O will operate on UTF8-encoded Unicode characters, not bytes. Similarly for the `:encoding` layer: in that case pretty much any characters can be read.
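A minimal sketch of a typical call, assuming `$fh` is an already-open filehandle; it reads up to 4096 characters and checks for an error:

```
my $n = read($fh, my $buffer, 4096);
defined $n or die "read failed: $!";
print "read $n characters\n";
```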
readdir DIRHANDLE Returns the next directory entry for a directory opened by [`opendir`](#opendir-DIRHANDLE%2CEXPR). If used in list context, returns all the rest of the entries in the directory. If there are no more entries, returns the undefined value in scalar context and the empty list in list context.
If you're planning to filetest the return values out of a [`readdir`](#readdir-DIRHANDLE), you'd better prepend the directory in question. Otherwise, because we didn't [`chdir`](#chdir-EXPR) there, it would have been testing the wrong file.
```
opendir(my $dh, $some_dir) || die "Can't opendir $some_dir: $!";
my @dots = grep { /^\./ && -f "$some_dir/$_" } readdir($dh);
closedir $dh;
```
As of Perl 5.12 you can use a bare [`readdir`](#readdir-DIRHANDLE) in a `while` loop, which will set [`$_`](perlvar#%24_) on every iteration. If either a `readdir` expression or an explicit assignment of a `readdir` expression to a scalar is used as a `while`/`for` condition, then the condition actually tests for definedness of the expression's value, not for its regular truth value.
```
opendir(my $dh, $some_dir) || die "Can't open $some_dir: $!";
while (readdir $dh) {
    print "$some_dir/$_\n";
}
closedir $dh;
```
To avoid confusing would-be users of your code who are running earlier versions of Perl with mysterious failures, put this sort of thing at the top of your file to signal that your code will work *only* on Perls of a recent vintage:
```
use v5.12; # so readdir assigns to $_ in a lone while test
```
readline EXPR readline Reads from the filehandle whose typeglob is contained in EXPR (or from `*ARGV` if EXPR is not provided). In scalar context, each call reads and returns the next line until end-of-file is reached, whereupon the subsequent call returns [`undef`](#undef-EXPR). In list context, reads until end-of-file is reached and returns a list of lines. Note that the notion of "line" used here is whatever you may have defined with [`$/`](perlvar#%24%2F) (or `$INPUT_RECORD_SEPARATOR` in [English](english)). See ["$/" in perlvar](perlvar#%24%2F).
When [`$/`](perlvar#%24%2F) is set to [`undef`](#undef-EXPR), when [`readline`](#readline-EXPR) is in scalar context (i.e., file slurp mode), and when an empty file is read, it returns `''` the first time, followed by [`undef`](#undef-EXPR) subsequently.
This is the internal function implementing the `<EXPR>` operator, but you can use it directly. The `<EXPR>` operator is discussed in more detail in ["I/O Operators" in perlop](perlop#I%2FO-Operators).
```
my $line = <STDIN>;
my $line = readline(STDIN); # same thing
```
If [`readline`](#readline-EXPR) encounters an operating system error, [`$!`](perlvar#%24%21) will be set with the corresponding error message. It can be helpful to check [`$!`](perlvar#%24%21) when you are reading from filehandles you don't trust, such as a tty or a socket. The following example uses the operator form of [`readline`](#readline-EXPR) and dies if the result is not defined.
```
while ( ! eof($fh) ) {
    defined( $_ = readline $fh ) or die "readline failed: $!";
    ...
}
```
Note that you can't handle [`readline`](#readline-EXPR) errors that way with the `ARGV` filehandle. In that case, you have to open each element of [`@ARGV`](perlvar#%40ARGV) yourself since [`eof`](#eof-FILEHANDLE) handles `ARGV` differently.
```
foreach my $arg (@ARGV) {
    open(my $fh, $arg) or warn "Can't open $arg: $!";

    while ( ! eof($fh) ) {
        defined( $_ = readline $fh )
            or die "readline failed for $arg: $!";
        ...
    }
}
```
Like the `<EXPR>` operator, if a `readline` expression is used as the condition of a `while` or `for` loop, then it will be implicitly assigned to `$_`. If either a `readline` expression or an explicit assignment of a `readline` expression to a scalar is used as a `while`/`for` condition, then the condition actually tests for definedness of the expression's value, not for its regular truth value.
readlink EXPR readlink Returns the value of a symbolic link, if symbolic links are implemented. If not, raises an exception. If there is a system error, returns the undefined value and sets [`$!`](perlvar#%24%21) (errno). If EXPR is omitted, uses [`$_`](perlvar#%24_).
Portability issues: ["readlink" in perlport](perlport#readlink).
readpipe EXPR readpipe EXPR is executed as a system command. The collected standard output of the command is returned. In scalar context, it comes back as a single (potentially multi-line) string. In list context, returns a list of lines (however you've defined lines with [`$/`](perlvar#%24%2F) (or `$INPUT_RECORD_SEPARATOR` in [English](english))). This is the internal function implementing the `qx/EXPR/` operator, but you can use it directly. The `qx/EXPR/` operator is discussed in more detail in ["`qx/*STRING*/`" in perlop](perlop#qx%2FSTRING%2F). If EXPR is omitted, uses [`$_`](perlvar#%24_).
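For example (a sketch assuming a Unix-like system where the `ls` command exists):

```
my $listing = readpipe("ls -l");   # same as qx/ls -l/ or `ls -l`
my @lines   = readpipe("ls -l");   # in list context, one element per line
```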
recv SOCKET,SCALAR,LENGTH,FLAGS Receives a message on a socket. Attempts to receive LENGTH characters of data into variable SCALAR from the specified SOCKET filehandle. SCALAR will be grown or shrunk to the length actually read. Takes the same flags as the system call of the same name. Returns the address of the sender if SOCKET's protocol supports this; returns an empty string otherwise. If there's an error, returns the undefined value. This call is actually implemented in terms of the [recvfrom(2)](http://man.he.net/man2/recvfrom) system call. See ["UDP: Message Passing" in perlipc](perlipc#UDP%3A-Message-Passing) for examples.
Note that if the socket has been marked as `:utf8`, `recv` will throw an exception. The `:encoding(...)` layer implicitly introduces the `:utf8` layer. See [`binmode`](#binmode-FILEHANDLE%2C-LAYER).
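A minimal sketch, assuming `$socket` is an already-open, bound datagram socket:

```
my $sender = recv($socket, my $message, 1024, 0);
defined $sender or die "recv failed: $!";
print "received ", length($message), " characters\n";
```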
redo LABEL
redo EXPR redo The [`redo`](#redo-LABEL) command restarts the loop block without evaluating the conditional again. The [`continue`](#continue-BLOCK) block, if any, is not executed. If the LABEL is omitted, the command refers to the innermost enclosing loop. The `redo EXPR` form, available starting in Perl 5.18.0, allows a label name to be computed at run time, and is otherwise identical to `redo LABEL`. Programs that want to lie to themselves about what was just input normally use this command:
```
# a simpleminded Pascal comment stripper
# (warning: assumes no { or } in strings)
LINE: while (<STDIN>) {
    while (s|({.*}.*){.*}|$1 |) {}
    s|{.*}| |;
    if (s|{.*| |) {
        my $front = $_;
        while (<STDIN>) {
            if (/}/) { # end of comment?
                s|^|$front\{|;
                redo LINE;
            }
        }
    }
    print;
}
```
[`redo`](#redo-LABEL) cannot return a value from a block that typically returns a value, such as `eval {}`, `sub {}`, or `do {}`. It will perform its flow control behavior, which precludes any return value. It should not be used to exit a [`grep`](#grep-BLOCK-LIST) or [`map`](#map-BLOCK-LIST) operation.
Note that a block by itself is semantically identical to a loop that executes once. Thus [`redo`](#redo-LABEL) inside such a block will effectively turn it into a looping construct.
See also [`continue`](#continue-BLOCK) for an illustration of how [`last`](#last-LABEL), [`next`](#next-LABEL), and [`redo`](#redo-LABEL) work.
Unlike most named operators, this has the same precedence as assignment. It is also exempt from the looks-like-a-function rule, so `redo ("foo")."bar"` will cause "bar" to be part of the argument to [`redo`](#redo-LABEL).
ref EXPR ref Examines the value of EXPR, expecting it to be a reference, and returns a string giving information about the reference and the type of referent. If EXPR is not specified, [`$_`](perlvar#%24_) will be used.
If the operand is not a reference, then the empty string will be returned. An empty string will only be returned in this situation. `ref` is often useful to just test whether a value is a reference, which can be done by comparing the result to the empty string. It is a common mistake to use the result of `ref` directly as a truth value: this goes wrong because `0` (which is false) can be returned for a reference.
If the operand is a reference to a blessed object, then the name of the class into which the referent is blessed will be returned. `ref` doesn't care what the physical type of the referent is; blessing takes precedence over such concerns. Beware that exact comparison of `ref` results against a class name doesn't perform a class membership test: a class's members also include objects blessed into subclasses, for which `ref` will return the name of the subclass. Also beware that class names can clash with the built-in type names (described below).
If the operand is a reference to an unblessed object, then the return value indicates the type of object. If the unblessed referent is not a scalar, then the return value will be one of the strings `ARRAY`, `HASH`, `CODE`, `FORMAT`, or `IO`, indicating only which kind of object it is. If the unblessed referent is a scalar, then the return value will be one of the strings `SCALAR`, `VSTRING`, `REF`, `GLOB`, `LVALUE`, or `REGEXP`, depending on the kind of value the scalar currently has. But note that `qr//` scalars are created already blessed, so `ref qr/.../` will likely return `Regexp`. Beware that these built-in type names can also be used as class names, so `ref` returning one of these names doesn't unambiguously indicate that the referent is of the kind to which the name refers.
The ambiguity between built-in type names and class names significantly limits the utility of `ref`. For unambiguous information, use [`Scalar::Util::blessed()`](Scalar::Util#blessed) for information about blessing, and [`Scalar::Util::reftype()`](Scalar::Util#reftype) for information about physical types. Use [the `isa` method](universal#%24obj-%3Eisa%28-TYPE-%29) for class membership tests, though one must be sure of blessedness before attempting a method call. Alternatively, the [`isa` operator](perlop#Class-Instance-Operator) can test class membership without checking blessedness first.
See also <perlref> and <perlobj>.
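A few illustrative calls (the class name `My::Class` is made up for the example):

```
my $aref = [ 1, 2, 3 ];
print ref $aref;                    # "ARRAY"
print ref \$aref;                   # "REF"  (a reference to a reference)
print ref bless({}, 'My::Class');   # "My::Class"
print ref "not a reference";        # ""     (the empty string)
```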
rename OLDNAME,NEWNAME Changes the name of a file; an existing file NEWNAME will be clobbered. Returns true for success; on failure returns false and sets [`$!`](perlvar#%24%21).
Behavior of this function varies wildly depending on your system implementation. For example, it will usually not work across file system boundaries, even though the system *mv* command sometimes compensates for this. Other restrictions include whether it works on directories, open files, or pre-existing files. Check <perlport> and either the [rename(2)](http://man.he.net/man2/rename) manpage or equivalent system documentation for details.
For a platform independent [`move`](File::Copy#move) function look at the <File::Copy> module.
Portability issues: ["rename" in perlport](perlport#rename).
require VERSION
require EXPR require Demands a version of Perl specified by VERSION, or demands some semantics specified by EXPR or by [`$_`](perlvar#%24_) if EXPR is not supplied.
VERSION may be either a literal such as v5.24.1, which will be compared to [`$^V`](perlvar#%24%5EV) (or `$PERL_VERSION` in [English](english)), or a numeric argument of the form 5.024001, which will be compared to [`$]`](perlvar#%24%5D). An exception is raised if VERSION is greater than the version of the current Perl interpreter. Compare with [`use`](#use-Module-VERSION-LIST), which can do a similar check at compile time.
Specifying VERSION as a numeric argument of the form 5.024001 should generally be avoided, as it is an older and less readable syntax than v5.24.1. Before perl 5.8.0 (released in 2002), the more verbose numeric form was the only supported syntax, which is why you might see it in older code.
```
require v5.24.1; # run time version check
require 5.24.1; # ditto
require 5.024_001; # ditto; older syntax compatible
                   # with perl 5.6
```
Otherwise, [`require`](#require-VERSION) demands that a library file be included if it hasn't already been included. The file is included via the do-FILE mechanism, which is essentially just a variety of [`eval`](#eval-EXPR) with the caveat that lexical variables in the invoking script will be invisible to the included code. If it were implemented in pure Perl, it would have semantics similar to the following:
```
use Carp 'croak';
use version;

sub require {
    my ($filename) = @_;

    if ( my $version = eval { version->parse($filename) } ) {
        if ( $version > $^V ) {
            my $vn = $version->normal;
            croak "Perl $vn required--this is only $^V, stopped";
        }
        return 1;
    }

    if (exists $INC{$filename}) {
        return 1 if $INC{$filename};
        croak "Compilation failed in require";
    }

    foreach my $prefix (@INC) {
        if (ref($prefix)) {
            #... do other stuff - see text below ....
        }
        # (see text below about possible appending of .pmc
        # suffix to $filename)
        my $realfilename = "$prefix/$filename";
        next if ! -e $realfilename || -d _ || -b _;
        $INC{$filename} = $realfilename;
        my $result = do($realfilename);
                     # but run in caller's namespace
        if (!defined $result) {
            $INC{$filename} = undef;
            croak $@ ? "$@Compilation failed in require"
                     : "Can't locate $filename: $!\n";
        }
        if (!$result) {
            delete $INC{$filename};
            croak "$filename did not return true value";
        }
        $! = 0;
        return $result;
    }

    croak "Can't locate $filename in \@INC ...";
}
```
Note that the file will not be included twice under the same specified name.
The file must return true as the last statement to indicate successful execution of any initialization code, so it's customary to end such a file with `1;` unless you're sure it'll return true otherwise. But it's better just to put the `1;`, in case you add more statements.
If EXPR is a bareword, [`require`](#require-VERSION) assumes a *.pm* extension and replaces `::` with `/` in the filename for you, to make it easy to load standard modules. This form of loading of modules does not risk altering your namespace; however, it will autovivify the stash for the required module.
In other words, if you try this:
```
require Foo::Bar; # a splendid bareword
```
The require function will actually look for the *Foo/Bar.pm* file in the directories specified in the [`@INC`](perlvar#%40INC) array, and it will autovivify the `Foo::Bar::` stash at compile time.
But if you try this:
```
my $class = 'Foo::Bar';
require $class; # $class is not a bareword
#or
require "Foo::Bar"; # not a bareword because of the ""
```
The require function will look for the *Foo::Bar* file in the [`@INC`](perlvar#%40INC) array and will complain about not finding *Foo::Bar* there. In this case you can do:
```
eval "require $class";
```
or you could do
```
require "Foo/Bar.pm";
```
Neither of these forms will autovivify any stashes at compile time; they have only run-time effects.
Now that you understand how [`require`](#require-VERSION) looks for files with a bareword argument, there is a little extra functionality going on behind the scenes. Before [`require`](#require-VERSION) looks for a *.pm* extension, it will first look for a similar filename with a *.pmc* extension. If this file is found, it will be loaded in place of any file ending in a *.pm* extension. This applies to both the explicit `require "Foo/Bar.pm";` form and the `require Foo::Bar;` form.
You can also insert hooks into the import facility by putting Perl code directly into the [`@INC`](perlvar#%40INC) array. There are three forms of hooks: subroutine references, array references, and blessed objects.
Subroutine references are the simplest case. When the inclusion system walks through [`@INC`](perlvar#%40INC) and encounters a subroutine, this subroutine gets called with two parameters, the first a reference to itself, and the second the name of the file to be included (e.g., *Foo/Bar.pm*). The subroutine should return either nothing or else a list of up to four values in the following order:
1. A reference to a scalar, containing any initial source code to prepend to the file or generator output.
2. A filehandle, from which the file will be read.
3. A reference to a subroutine. If there is no filehandle (previous item), then this subroutine is expected to generate one line of source code per call, writing the line into [`$_`](perlvar#%24_) and returning 1, then finally at end of file returning 0. If there is a filehandle, then the subroutine will be called to act as a simple source filter, with the line as read in [`$_`](perlvar#%24_). Again, return 1 for each valid line, and 0 after all lines have been returned. For historical reasons the subroutine will receive a meaningless argument (in fact always the numeric value zero) as `$_[0]`.
4. Optional state for the subroutine. The state is passed in as `$_[1]`.
If an empty list, [`undef`](#undef-EXPR), or nothing that matches the first 3 values above is returned, then [`require`](#require-VERSION) looks at the remaining elements of [`@INC`](perlvar#%40INC). Note that this filehandle must be a real filehandle (strictly a typeglob or reference to a typeglob, whether blessed or unblessed); tied filehandles will be ignored and processing will stop there.
If the hook is an array reference, its first element must be a subroutine reference. This subroutine is called as above, but the first parameter is the array reference. This lets you indirectly pass arguments to the subroutine.
In other words, you can write:
```
push @INC, \&my_sub;

sub my_sub {
    my ($coderef, $filename) = @_; # $coderef is \&my_sub
    ...
}
```
or:
```
push @INC, [ \&my_sub, $x, $y, ... ];

sub my_sub {
    my ($arrayref, $filename) = @_;
    # Retrieve $x, $y, ...
    my (undef, @parameters) = @$arrayref;
    ...
}
```
If the hook is an object, it must provide an `INC` method that will be called as above, the first parameter being the object itself. (Note that you must fully qualify the sub's name, as unqualified `INC` is always forced into package `main`.) Here is a typical code layout:
```
# In Foo.pm
package Foo;
sub new { ... }
sub Foo::INC {
    my ($self, $filename) = @_;
    ...
}

# In the main program
push @INC, Foo->new(...);
```
These hooks are also permitted to set the [`%INC`](perlvar#%25INC) entry corresponding to the files they have loaded. See ["%INC" in perlvar](perlvar#%25INC).
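Putting the pieces together, here is a minimal, self-contained sketch of a code-reference hook that serves a module from an in-memory string (the module name `Hello::FromHook` is made up for this example):

```
push @INC, sub {
    my ($coderef, $filename) = @_;
    return unless $filename eq 'Hello/FromHook.pm';
    my $source = "package Hello::FromHook; sub hi { 'hi' } 1;\n";
    open my $fh, '<', \$source or return;
    return $fh;            # a real filehandle from which the source is read
};

require Hello::FromHook;
print Hello::FromHook::hi(), "\n";   # prints "hi"
```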
For a yet-more-powerful import facility, see [`use`](#use-Module-VERSION-LIST) and <perlmod>.
reset EXPR reset Generally used in a [`continue`](#continue-BLOCK) block at the end of a loop to clear variables and reset `m?pattern?` searches so that they work again. The expression is interpreted as a list of single characters (hyphens allowed for ranges). All variables (scalars, arrays, and hashes) in the current package beginning with one of those letters are reset to their pristine state. If the expression is omitted, one-match searches (`m?pattern?`) are reset to match again. Only resets variables or searches in the current package. Always returns 1. Examples:
```
reset 'X'; # reset all X variables
reset 'a-z'; # reset lower case variables
reset; # just reset m?one-time? searches
```
Resetting `"A-Z"` is not recommended because you'll wipe out your [`@ARGV`](perlvar#%40ARGV) and [`@INC`](perlvar#%40INC) arrays and your [`%ENV`](perlvar#%25ENV) hash.
Resets only package variables; lexical variables are unaffected, but they clean themselves up on scope exit anyway, so you'll probably want to use them instead. See [`my`](#my-VARLIST).
return EXPR return Returns from a subroutine, [`eval`](#eval-EXPR), [`do FILE`](#do-EXPR), [`sort`](#sort-SUBNAME-LIST) block or regex eval block (but not a [`grep`](#grep-BLOCK-LIST), [`map`](#map-BLOCK-LIST), or [`do BLOCK`](#do-BLOCK) block) with the value given in EXPR. Evaluation of EXPR may be in list, scalar, or void context, depending on how the return value will be used, and the context may vary from one execution to the next (see [`wantarray`](#wantarray)). If no EXPR is given, returns an empty list in list context, the undefined value in scalar context, and (of course) nothing at all in void context.
(In the absence of an explicit [`return`](#return-EXPR), a subroutine, [`eval`](#eval-EXPR), or [`do FILE`](#do-EXPR) automatically returns the value of the last expression evaluated.)
Unlike most named operators, this is also exempt from the looks-like-a-function rule, so `return ("foo")."bar"` will cause `"bar"` to be part of the argument to [`return`](#return-EXPR).
reverse LIST In list context, returns a list value consisting of the elements of LIST in the opposite order. In scalar context, concatenates the elements of LIST and returns a string value with all characters in the opposite order.
```
print join(", ", reverse "world", "Hello"); # Hello, world
print scalar reverse "dlrow ,", "olleH"; # Hello, world
```
Used without arguments in scalar context, [`reverse`](#reverse-LIST) reverses [`$_`](perlvar#%24_).
```
$_ = "dlrow ,olleH";
print reverse; # No output, list context
print scalar reverse; # Hello, world
```
Note that reversing an array to itself (as in `@a = reverse @a`) will preserve non-existent elements whenever possible; i.e., for non-magical arrays or for tied arrays with `EXISTS` and `DELETE` methods.
This operator is also handy for inverting a hash, although there are some caveats. If a value is duplicated in the original hash, only one of those can be represented as a key in the inverted hash. Also, this has to unwind one hash and build a whole new one, which may take some time on a large hash, such as from a DBM file.
```
my %by_name = reverse %by_address; # Invert the hash
```
rewinddir DIRHANDLE Sets the current position to the beginning of the directory for the [`readdir`](#readdir-DIRHANDLE) routine on DIRHANDLE.
Portability issues: ["rewinddir" in perlport](perlport#rewinddir).
rindex STR,SUBSTR,POSITION
rindex STR,SUBSTR Works just like [`index`](#index-STR%2CSUBSTR%2CPOSITION) except that it returns the position of the *last* occurrence of SUBSTR in STR. If POSITION is specified, returns the last occurrence beginning at or before that position.
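For example, `rindex` is handy for splitting a path into directory and basename:

```
my $path  = "/usr/local/bin/perl";
my $slash = rindex($path, "/");           # 14, position of the last "/"
my $base  = substr($path, $slash + 1);    # "perl"
```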
rmdir FILENAME rmdir Deletes the directory specified by FILENAME if that directory is empty. If it succeeds it returns true; otherwise it returns false and sets [`$!`](perlvar#%24%21) (errno). If FILENAME is omitted, uses [`$_`](perlvar#%24_).
To remove a directory tree recursively (`rm -rf` on Unix) look at the [`rmtree`](File::Path#rmtree%28-%24dir-%29) function of the <File::Path> module.
s/// The substitution operator. See ["Regexp Quote-Like Operators" in perlop](perlop#Regexp-Quote-Like-Operators).
say FILEHANDLE LIST
say FILEHANDLE
say LIST say Just like [`print`](#print-FILEHANDLE-LIST), but implicitly appends a newline at the end of the LIST instead of any value [`$\`](perlvar#%24%5C) might have. To use FILEHANDLE without a LIST to print the contents of [`$_`](perlvar#%24_) to it, you must use a bareword filehandle like `FH`, not an indirect one like `$fh`.
[`say`](#say-FILEHANDLE-LIST) is available only if the [`"say"` feature](feature#The-%27say%27-feature) is enabled or if it is prefixed with `CORE::`. The [`"say"` feature](feature#The-%27say%27-feature) is enabled automatically with a `use v5.10` (or higher) declaration in the current scope.
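For example:

```
use v5.10;             # enables the "say" feature
say "Hello, world!";   # like print, but with a trailing newline
say for 1 .. 3;        # with no arguments, prints $_; here 1, 2 and 3, one per line
```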
scalar EXPR Forces EXPR to be interpreted in scalar context and returns the value of EXPR.
```
my @counts = ( scalar @a, scalar @b, scalar @c );
```
There is no equivalent operator to force an expression to be interpreted in list context because in practice, this is never needed. If you really wanted to do so, however, you could use the construction `@{[ (some expression) ]}`, but usually a simple `(some expression)` suffices.
Because [`scalar`](#scalar-EXPR) is a unary operator, if you accidentally use a parenthesized list for the EXPR, this behaves as a scalar comma expression, evaluating all but the last element in void context and returning the final element evaluated in scalar context. This is seldom what you want.
The following single statement:
```
print uc(scalar(foo(), $bar)), $baz;
```
is the moral equivalent of these two:
```
foo();
print(uc($bar), $baz);
```
See <perlop> for more details on unary operators and the comma operator, and <perldata> for details on evaluating a hash in scalar context.
seek FILEHANDLE,POSITION,WHENCE Sets FILEHANDLE's position, just like the [fseek(3)](http://man.he.net/man3/fseek) call of C `stdio`. FILEHANDLE may be an expression whose value gives the name of the filehandle. The values for WHENCE are `0` to set the new position *in bytes* to POSITION; `1` to set it to the current position plus POSITION; and `2` to set it to EOF plus POSITION, typically negative. For WHENCE you may use the constants `SEEK_SET`, `SEEK_CUR`, and `SEEK_END` (start of the file, current position, end of the file) from the [Fcntl](fcntl) module. Returns `1` on success, false otherwise.
Note the emphasis on bytes: even if the filehandle has been set to operate on characters (for example using the `:encoding(UTF-8)` I/O layer), the [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`tell`](#tell-FILEHANDLE), and [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE) family of functions use byte offsets, not character offsets, because seeking to a character offset would be very slow in a UTF-8 file.
If you want to position the file for [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) or [`syswrite`](#syswrite-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), don't use [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), because buffering makes its effect on the file's read-write position unpredictable and non-portable. Use [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE) instead.
Due to the rules and rigors of ANSI C, on some systems you have to do a seek whenever you switch between reading and writing. Amongst other things, this may have the effect of calling stdio's [clearerr(3)](http://man.he.net/man3/clearerr). A WHENCE of `1` (`SEEK_CUR`) is useful for not moving the file position:
```
seek($fh, 0, 1);
```
This is also useful for applications emulating `tail -f`. Once you hit EOF on your read and then sleep for a while, you (probably) have to stick in a dummy [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE) to reset things. The [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE) doesn't change the position, but it *does* clear the end-of-file condition on the handle, so that the next `readline FILE` makes Perl try again to read something. (We hope.)
If that doesn't work (some I/O implementations are particularly cantankerous), you might need something like this:
```
for (;;) {
    for ($curpos = tell($fh); $_ = readline($fh);
         $curpos = tell($fh)) {
        # search for some stuff and put it into files
    }
    sleep($for_a_while);
    seek($fh, $curpos, 0);
}
```
seekdir DIRHANDLE,POS Sets the current position for the [`readdir`](#readdir-DIRHANDLE) routine on DIRHANDLE. POS must be a value returned by [`telldir`](#telldir-DIRHANDLE). [`seekdir`](#seekdir-DIRHANDLE%2CPOS) also has the same caveats about possible directory compaction as the corresponding system library routine.
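A minimal sketch showing `telldir` and `seekdir` used together:

```
opendir(my $dh, ".") or die "Can't opendir .: $!";
my $pos   = telldir($dh);   # remember the current position
my $first = readdir($dh);
seekdir($dh, $pos);         # rewind to the saved position
my $again = readdir($dh);   # the same entry as $first
closedir($dh);
```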
select FILEHANDLE select Returns the currently selected filehandle. If FILEHANDLE is supplied, sets the new current default filehandle for output. This has two effects: first, a [`write`](#write-FILEHANDLE) or a [`print`](#print-FILEHANDLE-LIST) without a filehandle defaults to this FILEHANDLE. Second, references to variables related to output will refer to this output channel.
For example, to set the top-of-form format for more than one output channel, you might do the following:
```
select(REPORT1);
$^ = 'report1_top';
select(REPORT2);
$^ = 'report2_top';
```
FILEHANDLE may be an expression whose value gives the name of the actual filehandle. Thus:
```
my $oldfh = select(STDERR); $| = 1; select($oldfh);
```
Some programmers may prefer to think of filehandles as objects with methods, preferring to write the last example as:
```
STDERR->autoflush(1);
```
(Prior to Perl version 5.14, you have to `use IO::Handle;` explicitly first.)
Portability issues: ["select" in perlport](perlport#select).
select RBITS,WBITS,EBITS,TIMEOUT This calls the [select(2)](http://man.he.net/man2/select) syscall with the bit masks specified, which can be constructed using [`fileno`](#fileno-FILEHANDLE) and [`vec`](#vec-EXPR%2COFFSET%2CBITS), along these lines:
```
my $rin = my $win = my $ein = '';
vec($rin, fileno(STDIN), 1) = 1;
vec($win, fileno(STDOUT), 1) = 1;
$ein = $rin | $win;
```
If you want to select on many filehandles, you may wish to write a subroutine like this:
```
sub fhbits {
    my @fhlist = @_;
    my $bits = "";
    for my $fh (@fhlist) {
        vec($bits, fileno($fh), 1) = 1;
    }
    return $bits;
}
my $rin = fhbits(\*STDIN, $tty, $mysock);
```
The usual idiom is:
```
my ($nfound, $timeleft) =
select(my $rout = $rin, my $wout = $win, my $eout = $ein,
$timeout);
```
or to block until something becomes ready just do this
```
my $nfound =
select(my $rout = $rin, my $wout = $win, my $eout = $ein, undef);
```
Most systems do not bother to return anything useful in `$timeleft`, so calling [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT) in scalar context just returns `$nfound`.
Any of the bit masks can also be [`undef`](#undef-EXPR). The timeout, if specified, is in seconds, which may be fractional. Note: not all implementations are capable of returning the `$timeleft`. If not, they always return `$timeleft` equal to the supplied `$timeout`.
You can effect a sleep of 250 milliseconds this way:
```
select(undef, undef, undef, 0.25);
```
Note that whether [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT) gets restarted after signals (say, SIGALRM) is implementation-dependent. See also <perlport> for notes on the portability of [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT).
On error, [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT) behaves just like [select(2)](http://man.he.net/man2/select): it returns `-1` and sets [`$!`](perlvar#%24%21).
On some Unixes, [select(2)](http://man.he.net/man2/select) may report a socket file descriptor as "ready for reading" even when no data is available, and thus any subsequent [`read`](#read-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) would block. This can be avoided if you always use `O_NONBLOCK` on the socket. See [select(2)](http://man.he.net/man2/select) and [fcntl(2)](http://man.he.net/man2/fcntl) for further details.
The standard [`IO::Select`](IO::Select) module provides a user-friendlier interface to [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT), mostly because it does all the bit-mask work for you.
**WARNING**: One should not attempt to mix buffered I/O (like [`read`](#read-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) or [`readline`](#readline-EXPR)) with [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT), except as permitted by POSIX, and even then only on POSIX systems. You have to use [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) instead.
Portability issues: ["select" in perlport](perlport#select).
semctl ID,SEMNUM,CMD,ARG Calls the System V IPC function [semctl(2)](http://man.he.net/man2/semctl). You'll probably have to say
```
use IPC::SysV;
```
first to get the correct constant definitions. If CMD is IPC\_STAT or GETALL, then ARG must be a variable that will hold the returned semid\_ds structure or semaphore value array. Returns like [`ioctl`](#ioctl-FILEHANDLE%2CFUNCTION%2CSCALAR): the undefined value for error, "`0 but true`" for zero, or the actual return value otherwise. The ARG must consist of a vector of native short integers, which may be created with `pack("s!",(0)x$nsem)`. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and [`IPC::Semaphore`](IPC::Semaphore).
Portability issues: ["semctl" in perlport](perlport#semctl).
semget KEY,NSEMS,FLAGS Calls the System V IPC function [semget(2)](http://man.he.net/man2/semget). Returns the semaphore id, or the undefined value on error. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and [`IPC::Semaphore`](IPC::Semaphore).
Portability issues: ["semget" in perlport](perlport#semget).
semop KEY,OPSTRING Calls the System V IPC function [semop(2)](http://man.he.net/man2/semop) for semaphore operations such as signalling and waiting. OPSTRING must be a packed array of semop structures. Each semop structure can be generated with `pack("s!3", $semnum, $semop, $semflag)`. The length of OPSTRING implies the number of semaphore operations. Returns true if successful, false on error. As an example, the following code waits on semaphore $semnum of semaphore id $semid:
```
my $semop = pack("s!3", $semnum, -1, 0);
die "Semaphore trouble: $!\n" unless semop($semid, $semop);
```
To signal the semaphore, replace `-1` with `1`. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and [`IPC::Semaphore`](IPC::Semaphore).
Portability issues: ["semop" in perlport](perlport#semop).
send SOCKET,MSG,FLAGS,TO
send SOCKET,MSG,FLAGS Sends a message on a socket. Attempts to send the scalar MSG to the SOCKET filehandle. Takes the same flags as the system call of the same name. On unconnected sockets, you must specify a destination to *send to*, in which case it does a [sendto(2)](http://man.he.net/man2/sendto) syscall. Returns the number of characters sent, or the undefined value on error. The [sendmsg(2)](http://man.he.net/man2/sendmsg) syscall is currently unimplemented. See ["UDP: Message Passing" in perlipc](perlipc#UDP%3A-Message-Passing) for examples.
Note that if the socket has been marked as `:utf8`, `send` will throw an exception. The `:encoding(...)` layer implicitly introduces the `:utf8` layer. See [`binmode`](#binmode-FILEHANDLE%2C-LAYER).
setpgrp PID,PGRP Sets the current process group for the specified PID, `0` for the current process. Raises an exception when used on a machine that doesn't implement POSIX [setpgid(2)](http://man.he.net/man2/setpgid) or BSD [setpgrp(2)](http://man.he.net/man2/setpgrp). If the arguments are omitted, it defaults to `0,0`. Note that the BSD 4.2 version of [`setpgrp`](#setpgrp-PID%2CPGRP) does not accept any arguments, so only `setpgrp(0,0)` is portable. See also [`POSIX::setsid()`](posix#setsid).
Portability issues: ["setpgrp" in perlport](perlport#setpgrp).
setpriority WHICH,WHO,PRIORITY Sets the current priority for a process, a process group, or a user. (See [setpriority(2)](http://man.he.net/man2/setpriority).) Raises an exception when used on a machine that doesn't implement [setpriority(2)](http://man.he.net/man2/setpriority).
`WHICH` can be any of `PRIO_PROCESS`, `PRIO_PGRP` or `PRIO_USER` imported from ["RESOURCE CONSTANTS" in POSIX](posix#RESOURCE-CONSTANTS).
Portability issues: ["setpriority" in perlport](perlport#setpriority).
setsockopt SOCKET,LEVEL,OPTNAME,OPTVAL Sets the socket option requested. Returns [`undef`](#undef-EXPR) on error. Use integer constants provided by the [`Socket`](socket) module for LEVEL and OPTNAME. Values for LEVEL can also be obtained from getprotobyname. OPTVAL might either be a packed string or an integer. An integer OPTVAL is shorthand for `pack("i", OPTVAL)`.
An example disabling Nagle's algorithm on a socket:
```
use Socket qw(IPPROTO_TCP TCP_NODELAY);
setsockopt($socket, IPPROTO_TCP, TCP_NODELAY, 1);
```
Portability issues: ["setsockopt" in perlport](perlport#setsockopt).
shift ARRAY shift Shifts the first value of the array off and returns it, shortening the array by 1 and moving everything down. If there are no elements in the array, returns the undefined value. If ARRAY is omitted, shifts the [`@_`](perlvar#%40_) array within the lexical scope of subroutines and formats, and the [`@ARGV`](perlvar#%40ARGV) array outside a subroutine and also within the lexical scopes established by the `eval STRING`, `BEGIN {}`, `INIT {}`, `CHECK {}`, `UNITCHECK {}`, and `END {}` constructs.
Starting with Perl 5.14, an experimental feature allowed [`shift`](#shift-ARRAY) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
See also [`unshift`](#unshift-ARRAY%2CLIST), [`push`](#push-ARRAY%2CLIST), and [`pop`](#pop-ARRAY). [`shift`](#shift-ARRAY) and [`unshift`](#unshift-ARRAY%2CLIST) do the same thing to the left end of an array that [`pop`](#pop-ARRAY) and [`push`](#push-ARRAY%2CLIST) do to the right end.
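For example:

```
my @queue = ("first", "second", "third");
my $next  = shift @queue;    # "first"; @queue is now ("second", "third")

sub greet {
    my $name = shift;        # with no argument, shifts @_ inside a subroutine
    print "Hello, $name\n";
}
greet("world");              # prints "Hello, world"
```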
shmctl ID,CMD,ARG Calls the System V IPC function shmctl. You'll probably have to say
```
use IPC::SysV;
```
first to get the correct constant definitions. If CMD is `IPC_STAT`, then ARG must be a variable that will hold the returned `shmid_ds` structure. Returns like ioctl: [`undef`](#undef-EXPR) for error; "`0` but true" for zero; and the actual return value otherwise. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV).
Portability issues: ["shmctl" in perlport](perlport#shmctl).
shmget KEY,SIZE,FLAGS Calls the System V IPC function shmget. Returns the shared memory segment id, or [`undef`](#undef-EXPR) on error. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV).
Portability issues: ["shmget" in perlport](perlport#shmget).
shmread ID,VAR,POS,SIZE
shmwrite ID,STRING,POS,SIZE Reads or writes the System V shared memory segment ID starting at position POS for size SIZE by attaching to it, copying in/out, and detaching from it. When reading, VAR must be a variable that will hold the data read. When writing, if STRING is too long, only SIZE bytes are used; if STRING is too short, nulls are written to fill out SIZE bytes. Return true if successful, false on error. [`shmread`](#shmread-ID%2CVAR%2CPOS%2CSIZE) taints the variable. See also ["SysV IPC" in perlipc](perlipc#SysV-IPC) and the documentation for [`IPC::SysV`](IPC::SysV) and the [`IPC::Shareable`](IPC::Shareable) module from CPAN.
Portability issues: ["shmread" in perlport](perlport#shmread) and ["shmwrite" in perlport](perlport#shmwrite).
shutdown SOCKET,HOW Shuts down a socket connection in the manner indicated by HOW, which has the same interpretation as in the syscall of the same name.
```
shutdown($socket, 0); # I/we have stopped reading data
shutdown($socket, 1); # I/we have stopped writing data
shutdown($socket, 2); # I/we have stopped using this socket
```
This is useful with sockets when you want to tell the other side you're done writing but not done reading, or vice versa. It's also a more insistent form of close because it also disables the file descriptor in any forked copies in other processes.
Returns `1` for success; on error, returns [`undef`](#undef-EXPR) if the first argument is not a valid filehandle, or returns `0` and sets [`$!`](perlvar#%24%21) for any other failure.
sin EXPR sin Returns the sine of EXPR (expressed in radians). If EXPR is omitted, returns sine of [`$_`](perlvar#%24_).
For the inverse sine operation, you may use the `Math::Trig::asin` function, or use this relation:
```
sub asin { atan2($_[0], sqrt(1 - $_[0] * $_[0])) }
```
sleep EXPR sleep Causes the script to sleep for (integer) EXPR seconds, or forever if no argument is given. Returns the integer number of seconds actually slept.
EXPR should be a positive integer. If called with a negative integer, [`sleep`](#sleep-EXPR) does not sleep but instead emits a warning, sets $! (`errno`), and returns zero.
`sleep 0` is permitted, but a function call to the underlying platform implementation still occurs, with any side effects that it may have. `sleep 0` is therefore not exactly identical to not sleeping at all.
May be interrupted if the process receives a signal such as `SIGALRM`.
```
eval {
    local $SIG{ALRM} = sub { die "Alarm!\n" };
    sleep;
};
die $@ unless $@ eq "Alarm!\n";
```
You probably cannot mix [`alarm`](#alarm-SECONDS) and [`sleep`](#sleep-EXPR) calls, because [`sleep`](#sleep-EXPR) is often implemented using [`alarm`](#alarm-SECONDS).
On some older systems, it may sleep up to a full second less than what you requested, depending on how it counts seconds. Most modern systems always sleep the full amount. They may appear to sleep longer than that, however, because your process might not be scheduled right away in a busy multitasking system.
For delays of finer granularity than one second, the <Time::HiRes> module (from CPAN, and starting from Perl 5.8 part of the standard distribution) provides [`usleep`](Time::HiRes#usleep-%28-%24useconds-%29). You may also use Perl's four-argument version of [`select`](#select-RBITS%2CWBITS%2CEBITS%2CTIMEOUT) leaving the first three arguments undefined, or you might be able to use the [`syscall`](#syscall-NUMBER%2C-LIST) interface to access [setitimer(2)](http://man.he.net/man2/setitimer) if your system supports it. See <perlfaq8> for details.
See also the [POSIX](posix) module's [`pause`](posix#pause) function.
socket SOCKET,DOMAIN,TYPE,PROTOCOL Opens a socket of the specified kind and attaches it to filehandle SOCKET. DOMAIN, TYPE, and PROTOCOL are specified the same as for the syscall of the same name. You should `use Socket` first to get the proper definitions imported. See the examples in ["Sockets: Client/Server Communication" in perlipc](perlipc#Sockets%3A-Client%2FServer-Communication).
On systems that support a close-on-exec flag on files, the flag will be set for the newly opened file descriptor, as determined by the value of [`$^F`](perlvar#%24%5EF). See ["$^F" in perlvar](perlvar#%24%5EF).
socketpair SOCKET1,SOCKET2,DOMAIN,TYPE,PROTOCOL Creates an unnamed pair of sockets in the specified domain, of the specified type. DOMAIN, TYPE, and PROTOCOL are specified the same as for the syscall of the same name. If unimplemented, raises an exception. Returns true if successful.
On systems that support a close-on-exec flag on files, the flag will be set for the newly opened file descriptors, as determined by the value of [`$^F`](perlvar#%24%5EF). See ["$^F" in perlvar](perlvar#%24%5EF).
Some systems define [`pipe`](#pipe-READHANDLE%2CWRITEHANDLE) in terms of [`socketpair`](#socketpair-SOCKET1%2CSOCKET2%2CDOMAIN%2CTYPE%2CPROTOCOL), in which a call to `pipe($rdr, $wtr)` is essentially:
```
use Socket;
socketpair(my $rdr, my $wtr, AF_UNIX, SOCK_STREAM, PF_UNSPEC);
shutdown($rdr, 1); # no more writing for reader
shutdown($wtr, 0); # no more reading for writer
```
See <perlipc> for an example of socketpair use. Perl 5.8 and later will emulate socketpair using IP sockets to localhost if your system implements sockets but not socketpair.
Portability issues: ["socketpair" in perlport](perlport#socketpair).
sort SUBNAME LIST
sort BLOCK LIST
sort LIST In list context, this sorts the LIST and returns the sorted list value. In scalar context, the behaviour of [`sort`](#sort-SUBNAME-LIST) is undefined.
If SUBNAME or BLOCK is omitted, [`sort`](#sort-SUBNAME-LIST)s in standard string comparison order. If SUBNAME is specified, it gives the name of a subroutine that returns an integer less than, equal to, or greater than `0`, depending on how the elements of the list are to be ordered. (The `<=>` and `cmp` operators are extremely useful in such routines.) SUBNAME may be a scalar variable name (unsubscripted), in which case the value provides the name of (or a reference to) the actual subroutine to use. In place of a SUBNAME, you can provide a BLOCK as an anonymous, in-line sort subroutine.
If the subroutine's prototype is `($$)`, the elements to be compared are passed by reference in [`@_`](perlvar#%40_), as for a normal subroutine. This is slower than unprototyped subroutines, where the elements to be compared are passed into the subroutine as the package global variables `$a` and `$b` (see example below).
If the subroutine is an XSUB, the elements to be compared are pushed on to the stack, the way arguments are usually passed to XSUBs. `$a` and `$b` are not set.
The values to be compared are always passed by reference and should not be modified.
You also cannot exit out of the sort block or subroutine using any of the loop control operators described in <perlsyn> or with [`goto`](#goto-LABEL).
When [`use locale`](locale) (but not `use locale ':not_characters'`) is in effect, `sort LIST` sorts LIST according to the current collation locale. See <perllocale>.
[`sort`](#sort-SUBNAME-LIST) returns aliases into the original list, much as a for loop's index variable aliases the list elements. That is, modifying an element of a list returned by [`sort`](#sort-SUBNAME-LIST) (for example, in a `foreach`, [`map`](#map-BLOCK-LIST) or [`grep`](#grep-BLOCK-LIST)) actually modifies the element in the original list. This is usually something to be avoided when writing clear code.
Historically Perl has varied in whether sorting is stable by default. If stability matters, it can be controlled explicitly by using the <sort> pragma.
Examples:
```
# sort lexically
my @articles = sort @files;
# same thing, but with explicit sort routine
my @articles = sort {$a cmp $b} @files;
# now case-insensitively
my @articles = sort {fc($a) cmp fc($b)} @files;
# same thing in reversed order
my @articles = sort {$b cmp $a} @files;
# sort numerically ascending
my @articles = sort {$a <=> $b} @files;
# sort numerically descending
my @articles = sort {$b <=> $a} @files;
# this sorts the %age hash by value instead of key
# using an in-line function
my @eldest = sort { $age{$b} <=> $age{$a} } keys %age;
# sort using explicit subroutine name
sub byage {
    $age{$a} <=> $age{$b}; # presuming numeric
}
my @sortedclass = sort byage @class;
sub backwards { $b cmp $a }
my @harry = qw(dog cat x Cain Abel);
my @george = qw(gone chased yz Punished Axed);
print sort @harry;
# prints AbelCaincatdogx
print sort backwards @harry;
# prints xdogcatCainAbel
print sort @george, 'to', @harry;
# prints AbelAxedCainPunishedcatchaseddoggonetoxyz
# inefficiently sort by descending numeric compare using
# the first integer after the first = sign, or the
# whole record case-insensitively otherwise
my @new = sort {
    ($b =~ /=(\d+)/)[0] <=> ($a =~ /=(\d+)/)[0]
        ||
    fc($a) cmp fc($b)
} @old;
# same thing, but much more efficiently;
# we'll build auxiliary indices instead
# for speed
my (@nums, @caps);
for (@old) {
    push @nums, ( /=(\d+)/ ? $1 : undef );
    push @caps, fc($_);
}

my @new = @old[ sort {
    $nums[$b] <=> $nums[$a]
        ||
    $caps[$a] cmp $caps[$b]
} 0 .. $#old ];
# same thing, but without any temps
my @new = map  { $_->[0] }
          sort { $b->[1] <=> $a->[1]
                     ||
                 $a->[2] cmp $b->[2]
          }
          map  { [ $_, /=(\d+)/, fc($_) ] } @old;
# using a prototype allows you to use any comparison subroutine
# as a sort subroutine (including other package's subroutines)
package Other;
sub backwards ($$) { $_[1] cmp $_[0]; }  # $a and $b are
                                         # not set here
package main;
my @new = sort Other::backwards @old;
## using a prototype with function signature
use feature 'signatures';
sub function_with_signature :prototype($$) ($one, $two) {
    return $one <=> $two
}
my @new = sort function_with_signature @old;
# guarantee stability
use sort 'stable';
my @new = sort { substr($a, 3, 5) cmp substr($b, 3, 5) } @old;
```
Warning: syntactical care is required when sorting the list returned from a function. If you want to sort the list returned by the function call `find_records(@key)`, you can use:
```
my @contact = sort { $a cmp $b } find_records @key;
my @contact = sort +find_records(@key);
my @contact = sort &find_records(@key);
my @contact = sort(find_records(@key));
```
If instead you want to sort the array `@key` with the comparison routine `find_records()` then you can use:
```
my @contact = sort { find_records() } @key;
my @contact = sort find_records(@key);
my @contact = sort(find_records @key);
my @contact = sort(find_records (@key));
```
`$a` and `$b` are set as package globals in the package the sort() is called from. That means `$main::a` and `$main::b` (or `$::a` and `$::b`) in the `main` package, `$FooPack::a` and `$FooPack::b` in the `FooPack` package, etc. If the sort block is in scope of a `my` or `state` declaration of `$a` and/or `$b`, you *must* spell out the full name of the variables in the sort block:
```
package main;
my $a = "C"; # DANGER, Will Robinson, DANGER !!!
print sort { $a cmp $b } qw(A C E G B D F H);
# WRONG
sub badlexi { $a cmp $b }
print sort badlexi qw(A C E G B D F H);
# WRONG
# the above prints BACFEDGH or some other incorrect ordering
print sort { $::a cmp $::b } qw(A C E G B D F H);
# OK
print sort { our $a cmp our $b } qw(A C E G B D F H);
# also OK
print sort { our ($a, $b); $a cmp $b } qw(A C E G B D F H);
# also OK
sub lexi { our $a cmp our $b }
print sort lexi qw(A C E G B D F H);
# also OK
# the above print ABCDEFGH
```
With proper care you may mix package and my (or state) `$a` and/or `$b`:
```
my $a = {
    tiny   => -2,
    small  => -1,
    normal => 0,
    big    => 1,
    huge   => 2
};

say sort { $a->{our $a} <=> $a->{our $b} }
    qw{ huge normal tiny small big};
# prints tinysmallnormalbighuge
```
`$a` and `$b` are implicitly local to the sort() execution and regain their former values upon completing the sort.
Sort subroutines written using `$a` and `$b` are bound to their calling package. It is possible, but of limited interest, to define them in a different package, since the subroutine must still refer to the calling package's `$a` and `$b` :
```
package Foo;
sub lexi { $Bar::a cmp $Bar::b }
package Bar;
... sort Foo::lexi ...
```
Use the prototyped versions (see above) for a more generic alternative.
The comparison function is required to behave. If it returns inconsistent results (sometimes saying `$x[1]` is less than `$x[2]` and sometimes saying the opposite, for example) the results are not well-defined.
Because `<=>` returns [`undef`](#undef-EXPR) when either operand is `NaN` (not-a-number), be careful when sorting with a comparison function like `$a <=> $b` any lists that might contain a `NaN`. The following example takes advantage of the fact that `NaN != NaN` to eliminate any `NaN`s from the input list.
```
my @result = sort { $a <=> $b } grep { $_ == $_ } @input;
```
In this version of *perl*, the `sort` function is implemented via the mergesort algorithm.
splice ARRAY,OFFSET,LENGTH,LIST
splice ARRAY,OFFSET,LENGTH
splice ARRAY,OFFSET
splice ARRAY Removes the elements designated by OFFSET and LENGTH from an array, and replaces them with the elements of LIST, if any. In list context, returns the elements removed from the array. In scalar context, returns the last element removed, or [`undef`](#undef-EXPR) if no elements are removed. The array grows or shrinks as necessary. If OFFSET is negative then it starts that far from the end of the array. If LENGTH is omitted, removes everything from OFFSET onward. If LENGTH is negative, removes the elements from OFFSET onward except for -LENGTH elements at the end of the array. If both OFFSET and LENGTH are omitted, removes everything. If OFFSET is past the end of the array and a LENGTH was provided, Perl issues a warning, and splices at the end of the array.
The following equivalences hold (assuming `$#a >= $i`)
```
push(@a,$x,$y) splice(@a,@a,0,$x,$y)
pop(@a) splice(@a,-1)
shift(@a) splice(@a,0,1)
unshift(@a,$x,$y) splice(@a,0,0,$x,$y)
$a[$i] = $y splice(@a,$i,1,$y)
```
[`splice`](#splice-ARRAY%2COFFSET%2CLENGTH%2CLIST) can be used, for example, to implement n-ary queue processing:
```
sub nary_print {
    my $n = shift;
    while (my @next_n = splice @_, 0, $n) {
        say join q{ -- }, @next_n;
    }
}
nary_print(3, qw(a b c d e f g h));
# prints:
# a -- b -- c
# d -- e -- f
# g -- h
```
Starting with Perl 5.14, an experimental feature allowed [`splice`](#splice-ARRAY%2COFFSET%2CLENGTH%2CLIST) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
split /PATTERN/,EXPR,LIMIT
split /PATTERN/,EXPR
split /PATTERN/ split Splits the string EXPR into a list of strings and returns the list in list context, or the size of the list in scalar context. (Prior to Perl 5.11, it also overwrote `@_` with the list in void and scalar context. If you target old perls, beware.)
If only PATTERN is given, EXPR defaults to [`$_`](perlvar#%24_).
Anything in EXPR that matches PATTERN is taken to be a separator that separates the EXPR into substrings (called "*fields*") that do **not** include the separator. Note that a separator may be longer than one character or even have no characters at all (the empty string, which is a zero-width match).
The PATTERN need not be constant; an expression may be used to specify a pattern that varies at runtime.
If PATTERN matches the empty string, the EXPR is split at the match position (between characters). As an example, the following:
```
my @x = split(/b/, "abc"); # ("a", "c")
```
uses the `b` in `'abc'` as a separator to produce the list ("a", "c"). However, this:
```
my @x = split(//, "abc"); # ("a", "b", "c")
```
uses empty string matches as separators; thus, the empty string may be used to split EXPR into a list of its component characters.
As a special case for [`split`](#split-%2FPATTERN%2F%2CEXPR%2CLIMIT), the empty pattern given in [match operator](perlop#m%2FPATTERN%2Fmsixpodualngc) syntax (`//`) specifically matches the empty string, which is contrary to its usual interpretation as the last successful match.
If PATTERN is `/^/`, then it is treated as if it used the [multiline modifier](perlreref#OPERATORS) (`/^/m`), since it isn't much use otherwise.
`/m` and any of the other pattern modifiers valid for `qr` (summarized in ["qr/STRING/msixpodualn" in perlop](perlop#qr%2FSTRING%2Fmsixpodualn)) may be specified explicitly.
As another special case, [`split`](#split-%2FPATTERN%2F%2CEXPR%2CLIMIT) emulates the default behavior of the command line tool **awk** when the PATTERN is either omitted or a string composed of a single space character (such as `' '` or `"\x20"`, but not e.g. `/ /`). In this case, any leading whitespace in EXPR is removed before splitting occurs, and the PATTERN is instead treated as if it were `/\s+/`; in particular, this means that *any* contiguous whitespace (not just a single space character) is used as a separator.
```
my @x = split(" ", " Quick brown fox\n");
# ("Quick", "brown", "fox")
my @x = split(" ", "RED\tGREEN\tBLUE");
# ("RED", "GREEN", "BLUE")
```
Using split in this fashion is very similar to how [`qw//`](#qw%2FSTRING%2F) works.
However, this special treatment can be avoided by specifying the pattern `/ /` instead of the string `" "`, thereby allowing only a single space character to be a separator. In earlier Perls this special case was restricted to the use of a plain `" "` as the pattern argument to split; in Perl 5.18.0 and later this special case is triggered by any expression which evaluates to the simple string `" "`.
As of Perl 5.28, this special-cased whitespace splitting works as expected in the scope of [`"use feature 'unicode_strings'"`](feature#The-%27unicode_strings%27-feature). In previous versions, and outside the scope of that feature, it exhibits ["The "Unicode Bug"" in perlunicode](perlunicode#The-%22Unicode-Bug%22): characters that are whitespace according to Unicode rules but not according to ASCII rules can be treated as part of fields rather than as field separators, depending on the string's internal encoding.
If omitted, PATTERN defaults to a single space, `" "`, triggering the previously described *awk* emulation.
If LIMIT is specified and positive, it represents the maximum number of fields into which the EXPR may be split; in other words, LIMIT is one greater than the maximum number of times EXPR may be split. Thus, the LIMIT value `1` means that EXPR may be split a maximum of zero times, producing a maximum of one field (namely, the entire value of EXPR). For instance:
```
my @x = split(//, "abc", 1); # ("abc")
my @x = split(//, "abc", 2); # ("a", "bc")
my @x = split(//, "abc", 3); # ("a", "b", "c")
my @x = split(//, "abc", 4); # ("a", "b", "c")
```
If LIMIT is negative, it is treated as if it were instead arbitrarily large; as many fields as possible are produced.
If LIMIT is omitted (or, equivalently, zero), then it is usually treated as if it were instead negative but with the exception that trailing empty fields are stripped (empty leading fields are always preserved); if all fields are empty, then all fields are considered to be trailing (and are thus stripped in this case). Thus, the following:
```
my @x = split(/,/, "a,b,c,,,"); # ("a", "b", "c")
```
produces only a three element list.
```
my @x = split(/,/, "a,b,c,,,", -1); # ("a", "b", "c", "", "", "")
```
produces a six element list.
In time-critical applications, it is worthwhile to avoid splitting into more fields than necessary. Thus, when assigning to a list, if LIMIT is omitted (or zero), then LIMIT is treated as though it were one larger than the number of variables in the list; for the following, LIMIT is implicitly 3:
```
my ($login, $passwd) = split(/:/);
```
Note that splitting an EXPR that evaluates to the empty string always produces zero fields, regardless of the LIMIT specified.
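For example (a minimal illustration):

```
my @x = split(/,/, "");     # ()
my @x = split(/,/, "", -1); # () as well; an empty EXPR yields no fields
```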
An empty leading field is produced when there is a positive-width match at the beginning of EXPR. For instance:
```
my @x = split(/ /, " abc"); # ("", "abc")
```
splits into two elements. However, a zero-width match at the beginning of EXPR never produces an empty field, so that:
```
my @x = split(//, " abc"); # (" ", "a", "b", "c")
```
splits into four elements instead of five.
An empty trailing field, on the other hand, is produced when there is a match at the end of EXPR, regardless of the length of the match (of course, unless a non-zero LIMIT is given explicitly, such fields are removed, as in the last example). Thus:
```
my @x = split(//, " abc", -1); # (" ", "a", "b", "c", "")
```
If the PATTERN contains [capturing groups](perlretut#Grouping-things-and-hierarchical-matching), then for each separator, an additional field is produced for each substring captured by a group (in the order in which the groups are specified, as per [backreferences](perlretut#Backreferences)); if any group does not match, then it captures the [`undef`](#undef-EXPR) value instead of a substring. Also, note that any such additional field is produced whenever there is a separator (that is, whenever a split occurs), and such an additional field does **not** count towards the LIMIT. Consider the following expressions evaluated in list context (each returned list is provided in the associated comment):
```
my @x = split(/-|,/ , "1-10,20", 3);
# ("1", "10", "20")
my @x = split(/(-|,)/ , "1-10,20", 3);
# ("1", "-", "10", ",", "20")
my @x = split(/-|(,)/ , "1-10,20", 3);
# ("1", undef, "10", ",", "20")
my @x = split(/(-)|,/ , "1-10,20", 3);
# ("1", "-", "10", undef, "20")
my @x = split(/(-)|(,)/, "1-10,20", 3);
# ("1", "-", undef, "10", undef, ",", "20")
```
sprintf FORMAT, LIST Returns a string formatted by the usual [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST) conventions of the C library function [`sprintf`](#sprintf-FORMAT%2C-LIST). See below for more details and see [sprintf(3)](http://man.he.net/man3/sprintf) or [printf(3)](http://man.he.net/man3/printf) on your system for an explanation of the general principles.
For example:
```
# Format number with up to 8 leading zeroes
my $result = sprintf("%08d", $number);
# Round number to 3 digits after decimal point
my $rounded = sprintf("%.3f", $number);
```
Perl does its own [`sprintf`](#sprintf-FORMAT%2C-LIST) formatting: it emulates the C function [sprintf(3)](http://man.he.net/man3/sprintf), but doesn't use it except for floating-point numbers, and even then only standard modifiers are allowed. Non-standard extensions in your local [sprintf(3)](http://man.he.net/man3/sprintf) are therefore unavailable from Perl.
Unlike [`printf`](#printf-FILEHANDLE-FORMAT%2C-LIST), [`sprintf`](#sprintf-FORMAT%2C-LIST) does not do what you probably mean when you pass it an array as your first argument. The array is given scalar context, and instead of using the 0th element of the array as the format, Perl will use the count of elements in the array as the format, which is almost never useful.
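For example (an illustrative sketch; the values are hypothetical):

```
my @args = ("%s and %s", "salt", "pepper");
my $oops = sprintf(@args);                        # format is "3", result is "3"
my $ok   = sprintf($args[0], @args[1 .. $#args]); # "salt and pepper"
```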
Perl's [`sprintf`](#sprintf-FORMAT%2C-LIST) permits the following universally-known conversions:
```
%% a percent sign
%c a character with the given number
%s a string
%d a signed integer, in decimal
%u an unsigned integer, in decimal
%o an unsigned integer, in octal
%x an unsigned integer, in hexadecimal
%e a floating-point number, in scientific notation
%f a floating-point number, in fixed decimal notation
%g a floating-point number, in %e or %f notation
```
In addition, Perl permits the following widely-supported conversions:
```
%X like %x, but using upper-case letters
%E like %e, but using an upper-case "E"
%G like %g, but with an upper-case "E" (if applicable)
%b an unsigned integer, in binary
%B like %b, but using an upper-case "B" with the # flag
%p a pointer (outputs the Perl value's address in hexadecimal)
%n special: *stores* the number of characters output so far
into the next argument in the parameter list
%a hexadecimal floating point
%A like %a, but using upper-case letters
```
Finally, for backward (and we do mean "backward") compatibility, Perl permits these unnecessary but widely-supported conversions:
```
%i a synonym for %d
%D a synonym for %ld
%U a synonym for %lu
%O a synonym for %lo
%F a synonym for %f
```
Note that the number of exponent digits in the scientific notation produced by `%e`, `%E`, `%g` and `%G` for numbers whose exponent has an absolute value less than 100 is system-dependent: it may be three or less (zero-padded as necessary). In other words, 1.23 times ten to the 99th may be either "1.23e99" or "1.23e099". Similarly for `%a` and `%A`: the number of exponent digits and of hexadecimal digits may vary; in particular, the "long doubles" Perl configuration option may cause surprises.
Between the `%` and the format letter, you may specify several additional attributes controlling the interpretation of the format. In order, these are:
format parameter index An explicit format parameter index, such as `2$`. By default sprintf will format the next unused argument in the list, but this allows you to take the arguments out of order:
```
printf '%2$d %1$d', 12, 34; # prints "34 12"
printf '%3$d %d %1$d', 1, 2, 3; # prints "3 1 1"
```
flags one or more of:
```
space prefix non-negative number with a space
+ prefix non-negative number with a plus sign
- left-justify within the field
0 use zeros, not spaces, to right-justify
# ensure the leading "0" for any octal,
prefix non-zero hexadecimal with "0x" or "0X",
prefix non-zero binary with "0b" or "0B"
```
For example:
```
printf '<% d>', 12; # prints "< 12>"
printf '<% d>', 0; # prints "< 0>"
printf '<% d>', -12; # prints "<-12>"
printf '<%+d>', 12; # prints "<+12>"
printf '<%+d>', 0; # prints "<+0>"
printf '<%+d>', -12; # prints "<-12>"
printf '<%6s>', 12; # prints "< 12>"
printf '<%-6s>', 12; # prints "<12 >"
printf '<%06s>', 12; # prints "<000012>"
printf '<%#o>', 12; # prints "<014>"
printf '<%#x>', 12; # prints "<0xc>"
printf '<%#X>', 12; # prints "<0XC>"
printf '<%#b>', 12; # prints "<0b1100>"
printf '<%#B>', 12; # prints "<0B1100>"
```
When a space and a plus sign are given as the flags at once, the space is ignored.
```
printf '<%+ d>', 12; # prints "<+12>"
printf '<% +d>', 12; # prints "<+12>"
```
When the # flag and a precision are given in the %o conversion, the precision is incremented if it's necessary for the leading "0".
```
printf '<%#.5o>', 012; # prints "<00012>"
printf '<%#.5o>', 012345; # prints "<012345>"
printf '<%#.0o>', 0; # prints "<0>"
```
vector flag This flag tells Perl to interpret the supplied string as a vector of integers, one for each character in the string. Perl applies the format to each integer in turn, then joins the resulting strings with a separator (a dot `.` by default). This can be useful for displaying ordinal values of characters in arbitrary strings:
```
printf "%vd", "AB\x{100}"; # prints "65.66.256"
printf "version is v%vd\n", $^V; # Perl's version
```
Put an asterisk `*` before the `v` to override the string to use to separate the numbers:
```
printf "address is %*vX\n", ":", $addr; # IPv6 address
printf "bits are %0*v8b\n", " ", $bits; # random bitstring
```
You can also explicitly specify the argument number to use for the join string using something like `*2$v`; for example:
```
printf '%*4$vX %*4$vX %*4$vX', # 3 IPv6 addresses
@addr[1..3], ":";
```
(minimum) width Arguments are usually formatted to be only as wide as required to display the given value. You can override the width by putting a number here, or get the width from the next argument (with `*`) or from a specified argument (e.g., with `*2$`):
```
printf "<%s>", "a"; # prints "<a>"
printf "<%6s>", "a"; # prints "< a>"
printf "<%*s>", 6, "a"; # prints "< a>"
printf '<%*2$s>', "a", 6; # prints "< a>"
printf "<%2s>", "long"; # prints "<long>" (does not truncate)
```
If a field width obtained through `*` is negative, it has the same effect as the `-` flag: left-justification.
precision, or maximum width You can specify a precision (for numeric conversions) or a maximum width (for string conversions) by specifying a `.` followed by a number. For floating-point formats except `g` and `G`, this specifies how many places right of the decimal point to show (the default being 6). For example:
```
# these examples are subject to system-specific variation
printf '<%f>', 1; # prints "<1.000000>"
printf '<%.1f>', 1; # prints "<1.0>"
printf '<%.0f>', 1; # prints "<1>"
printf '<%e>', 10; # prints "<1.000000e+01>"
printf '<%.1e>', 10; # prints "<1.0e+01>"
```
For "g" and "G", this specifies the maximum number of significant digits to show; for example:
```
# These examples are subject to system-specific variation.
printf '<%g>', 1; # prints "<1>"
printf '<%.10g>', 1; # prints "<1>"
printf '<%g>', 100; # prints "<100>"
printf '<%.1g>', 100; # prints "<1e+02>"
printf '<%.2g>', 100.01; # prints "<1e+02>"
printf '<%.5g>', 100.01; # prints "<100.01>"
printf '<%.4g>', 100.01; # prints "<100>"
printf '<%.1g>', 0.0111; # prints "<0.01>"
printf '<%.2g>', 0.0111; # prints "<0.011>"
printf '<%.3g>', 0.0111; # prints "<0.0111>"
```
For integer conversions, specifying a precision implies that the output of the number itself should be zero-padded to this width, in which case the 0 flag is ignored:
```
printf '<%.6d>', 1; # prints "<000001>"
printf '<%+.6d>', 1; # prints "<+000001>"
printf '<%-10.6d>', 1; # prints "<000001 >"
printf '<%10.6d>', 1; # prints "< 000001>"
printf '<%010.6d>', 1; # prints "< 000001>"
printf '<%+10.6d>', 1; # prints "< +000001>"
printf '<%.6x>', 1; # prints "<000001>"
printf '<%#.6x>', 1; # prints "<0x000001>"
printf '<%-10.6x>', 1; # prints "<000001 >"
printf '<%10.6x>', 1; # prints "< 000001>"
printf '<%010.6x>', 1; # prints "< 000001>"
printf '<%#10.6x>', 1; # prints "< 0x000001>"
```
For string conversions, specifying a precision truncates the string to fit the specified width:
```
printf '<%.5s>', "truncated"; # prints "<trunc>"
printf '<%10.5s>', "truncated"; # prints "< trunc>"
```
You can also get the precision from the next argument using `.*`, or from a specified argument (e.g., with `.*2$`):
```
printf '<%.6x>', 1; # prints "<000001>"
printf '<%.*x>', 6, 1; # prints "<000001>"
printf '<%.*2$x>', 1, 6; # prints "<000001>"
printf '<%6.*2$x>', 1, 4; # prints "< 0001>"
```
If a precision obtained through `*` is negative, it counts as having no precision at all.
```
printf '<%.*s>', 7, "string"; # prints "<string>"
printf '<%.*s>', 3, "string"; # prints "<str>"
printf '<%.*s>', 0, "string"; # prints "<>"
printf '<%.*s>', -1, "string"; # prints "<string>"
printf '<%.*d>', 1, 0; # prints "<0>"
printf '<%.*d>', 0, 0; # prints "<>"
printf '<%.*d>', -1, 0; # prints "<0>"
```
size For numeric conversions, you can specify the size to interpret the number as using `l`, `h`, `V`, `q`, `L`, or `ll`. For integer conversions (`d u o x X b i D U O`), numbers are usually assumed to be whatever the default integer size is on your platform (usually 32 or 64 bits), but you can override this to use instead one of the standard C types, as supported by the compiler used to build Perl:
```
hh interpret integer as C type "char" or "unsigned
char" on Perl 5.14 or later
h interpret integer as C type "short" or
"unsigned short"
j interpret integer as C type "intmax_t" on Perl
5.14 or later; and prior to Perl 5.30, only with
a C99 compiler (unportable)
l interpret integer as C type "long" or
"unsigned long"
q, L, or ll interpret integer as C type "long long",
"unsigned long long", or "quad" (typically
64-bit integers)
t interpret integer as C type "ptrdiff_t" on Perl
5.14 or later
z interpret integer as C types "size_t" or
"ssize_t" on Perl 5.14 or later
```
Note that, in general, using the `l` modifier (for example, writing `"%ld"` or `"%lu"` instead of `"%d"` and `"%u"`) is unnecessary in Perl code. Moreover, it may be harmful, for example on 64-bit Windows, where a `long` is 32 bits.
As of 5.14, none of these raises an exception if they are not supported on your platform. However, if warnings are enabled, a warning of the [`printf`](warnings) warning class is issued on an unsupported conversion flag. Should you instead prefer an exception, do this:
```
use warnings FATAL => "printf";
```
If you would like to know about a version dependency before you start running the program, put something like this at its top:
```
use v5.14; # for hh/j/t/z/ printf modifiers
```
You can find out whether your Perl supports quads via [Config](config):
```
use Config;
if ($Config{use64bitint} eq "define"
|| $Config{longsize} >= 8) {
print "Nice quads!\n";
}
```
For floating-point conversions (`e f g E F G`), numbers are usually assumed to be the default floating-point size on your platform (double or long double), but you can force "long double" with `q`, `L`, or `ll` if your platform supports them. You can find out whether your Perl supports long doubles via [Config](config):
```
use Config;
print "long doubles\n" if $Config{d_longdbl} eq "define";
```
You can find out whether Perl considers "long double" to be the default floating-point size to use on your platform via [Config](config):
```
use Config;
if ($Config{uselongdouble} eq "define") {
print "long doubles by default\n";
}
```
It can also be that long doubles and doubles are the same thing:
```
use Config;
($Config{doublesize} == $Config{longdblsize}) &&
print "doubles are long doubles\n";
```
The size specifier `V` has no effect for Perl code, but is supported for compatibility with XS code. It means "use the standard size for a Perl integer or floating-point number", which is the default.
order of arguments Normally, [`sprintf`](#sprintf-FORMAT%2C-LIST) takes the next unused argument as the value to format for each format specification. If the format specification uses `*` to require additional arguments, these are consumed from the argument list in the order they appear in the format specification *before* the value to format. Where an argument is specified by an explicit index, this does not affect the normal order for the arguments, even when the explicitly specified index would have been the next argument.
So:
```
printf "<%*.*s>", $a, $b, $c;
```
uses `$a` for the width, `$b` for the precision, and `$c` as the value to format; while:
```
printf '<%*1$.*s>', $a, $b;
```
would use `$a` for the width and precision, and `$b` as the value to format.
Here are some more examples; be aware that when using an explicit index, the `$` may need escaping:
```
printf "%2\$d %d\n", 12, 34; # will print "34 12\n"
printf "%2\$d %d %d\n", 12, 34; # will print "34 12 34\n"
printf "%3\$d %d %d\n", 12, 34, 56; # will print "56 12 34\n"
printf "%2\$*3\$d %d\n", 12, 34, 3; # will print " 34 12\n"
printf "%*1\$.*f\n", 4, 5, 10; # will print "5.0000\n"
```
If [`use locale`](locale) (including `use locale ':not_characters'`) is in effect and [`POSIX::setlocale`](posix#setlocale) has been called, the character used for the decimal separator in formatted floating-point numbers is affected by the `LC_NUMERIC` locale. See <perllocale> and [POSIX](posix).
sqrt EXPR sqrt Return the positive square root of EXPR. If EXPR is omitted, uses [`$_`](perlvar#%24_). Works only for non-negative operands unless you've loaded the [`Math::Complex`](Math::Complex) module.
```
use Math::Complex;
print sqrt(-4); # prints 2i
```
srand EXPR srand Sets and returns the random number seed for the [`rand`](#rand-EXPR) operator.
The point of the function is to "seed" the [`rand`](#rand-EXPR) function so that [`rand`](#rand-EXPR) can produce a different sequence each time you run your program. When called with a parameter, [`srand`](#srand-EXPR) uses that for the seed; otherwise it (semi-)randomly chooses a seed. In either case, starting with Perl 5.14, it returns the seed. To signal that your code will work *only* on Perls of a recent vintage:
```
use v5.14; # so srand returns the seed
```
If [`srand`](#srand-EXPR) is not called explicitly, it is called implicitly without a parameter at the first use of the [`rand`](#rand-EXPR) operator. However, there are a few situations where programs are likely to want to call [`srand`](#srand-EXPR). One is for generating predictable results, generally for testing or debugging. There, you use `srand($seed)`, with the same `$seed` each time. Another case is that you may want to call [`srand`](#srand-EXPR) after a [`fork`](#fork) to avoid child processes sharing the same seed value as the parent (and consequently each other).
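For example, seeding with a fixed value makes the sequence reproducible (an illustrative sketch):

```
srand(42);                          # fixed seed for a repeatable test run
my @first  = map { rand } 1 .. 3;
srand(42);                          # same seed again ...
my @second = map { rand } 1 .. 3;   # ... same sequence as @first
```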
Do **not** call `srand()` (i.e., without an argument) more than once per process. The internal state of the random number generator should contain more entropy than can be provided by any seed, so calling [`srand`](#srand-EXPR) again actually *loses* randomness.
Most implementations of [`srand`](#srand-EXPR) take an integer and will silently truncate decimal numbers. This means `srand(42)` will usually produce the same results as `srand(42.1)`. To be safe, always pass [`srand`](#srand-EXPR) an integer.
A typical use of the returned seed is for a test program which has too many combinations to test comprehensively in the time available to it each run. It can test a random subset each time, and should there be a failure, log the seed used for that run so that it can later be used to reproduce the same results.
**[`rand`](#rand-EXPR) is not cryptographically secure. You should not rely on it in security-sensitive situations.** As of this writing, a number of third-party CPAN modules offer random number generators intended by their authors to be cryptographically secure, including: <Data::Entropy>, <Crypt::Random>, <Math::Random::Secure>, and <Math::TrulyRandom>.
stat FILEHANDLE
stat EXPR
stat DIRHANDLE stat Returns a 13-element list giving the status info for a file, either the file opened via FILEHANDLE or DIRHANDLE, or named by EXPR. If EXPR is omitted, it stats [`$_`](perlvar#%24_) (not `_`!). Returns the empty list if [`stat`](#stat-FILEHANDLE) fails. Typically used as follows:
```
my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
$atime,$mtime,$ctime,$blksize,$blocks)
= stat($filename);
```
Not all fields are supported on all filesystem types. Here are the meanings of the fields:
```
0 dev device number of filesystem
1 ino inode number
2 mode file mode (type and permissions)
3 nlink number of (hard) links to the file
4 uid numeric user ID of file's owner
5 gid numeric group ID of file's owner
6 rdev the device identifier (special files only)
7 size total size of file, in bytes
8 atime last access time in seconds since the epoch
9 mtime last modify time in seconds since the epoch
10 ctime inode change time in seconds since the epoch (*)
11 blksize preferred I/O size in bytes for interacting with the
file (may vary from file to file)
12 blocks actual number of system-specific blocks allocated
on disk (often, but not always, 512 bytes each)
```
(The epoch was at 00:00 January 1, 1970 GMT.)
(\*) Not all fields are supported on all filesystem types. Notably, the ctime field is non-portable. In particular, you cannot expect it to be a "creation time"; see ["Files and Filesystems" in perlport](perlport#Files-and-Filesystems) for details.
If [`stat`](#stat-FILEHANDLE) is passed the special filehandle consisting of an underline, no stat is done, but the current contents of the stat structure from the last [`stat`](#stat-FILEHANDLE), [`lstat`](#lstat-FILEHANDLE), or filetest are returned. Example:
```
if (-x $file && ((my $d) = stat(_)) && $d < 0) {
print "$file is executable NFS file\n";
}
```
(This works only on machines for which the device number is negative under NFS.)
On some platforms inode numbers are of a type larger than perl knows how to handle as integer numerical values. If necessary, an inode number will be returned as a decimal string in order to preserve the entire value. If used in a numeric context, this will be converted to a floating-point numerical value, with rounding, a fate that is best avoided. Therefore, you should prefer to compare inode numbers using `eq` rather than `==`. `eq` will work fine on inode numbers that are represented numerically, as well as those represented as strings.
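For example (an illustrative sketch; `$file_a` and `$file_b` are placeholders):

```
my $ino_a = (stat($file_a))[1];
my $ino_b = (stat($file_b))[1];
print "same inode\n" if $ino_a eq $ino_b;  # eq is safe even for huge inode numbers
```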
Because the mode contains both the file type and its permissions, you should mask off the file type portion and (s)printf using a `"%o"` if you want to see the real permissions.
```
my $mode = (stat($filename))[2];
printf "Permissions are %04o\n", $mode & 07777;
```
In scalar context, [`stat`](#stat-FILEHANDLE) returns a boolean value indicating success or failure, and, if successful, sets the information associated with the special filehandle `_`.
The <File::stat> module provides a convenient, by-name access mechanism:
```
use File::stat;
my $sb = stat($filename);
printf "File is %s, size is %s, perm %04o, mtime %s\n",
$filename, $sb->size, $sb->mode & 07777,
scalar localtime $sb->mtime;
```
You can import symbolic mode constants (`S_IF*`) and functions (`S_IS*`) from the [Fcntl](fcntl) module:
```
use Fcntl ':mode';
my $mode = (stat($filename))[2];
my $user_rwx = ($mode & S_IRWXU) >> 6;
my $group_read = ($mode & S_IRGRP) >> 3;
my $other_execute = $mode & S_IXOTH;
printf "Permissions are %04o\n", S_IMODE($mode), "\n";
my $is_setuid = $mode & S_ISUID;
my $is_directory = S_ISDIR($mode);
```
You could write the last two using the `-u` and `-d` operators. Commonly available `S_IF*` constants are:
```
# Permissions: read, write, execute, for user, group, others.
S_IRWXU S_IRUSR S_IWUSR S_IXUSR
S_IRWXG S_IRGRP S_IWGRP S_IXGRP
S_IRWXO S_IROTH S_IWOTH S_IXOTH
# Setuid/Setgid/Stickiness/SaveText.
# Note that the exact meaning of these is system-dependent.
S_ISUID S_ISGID S_ISVTX S_ISTXT
# File types. Not all are necessarily available on
# your system.
S_IFREG S_IFDIR S_IFLNK S_IFBLK S_IFCHR
S_IFIFO S_IFSOCK S_IFWHT S_ENFMT
# The following are compatibility aliases for S_IRUSR,
# S_IWUSR, and S_IXUSR.
S_IREAD S_IWRITE S_IEXEC
```
and the `S_IF*` functions are
```
S_IMODE($mode) the part of $mode containing the permission
bits and the setuid/setgid/sticky bits
S_IFMT($mode) the part of $mode containing the file type
which can be bit-anded with (for example)
S_IFREG or with the following functions
# The operators -f, -d, -l, -b, -c, -p, and -S.
S_ISREG($mode) S_ISDIR($mode) S_ISLNK($mode)
S_ISBLK($mode) S_ISCHR($mode) S_ISFIFO($mode) S_ISSOCK($mode)
# No direct -X operator counterpart, but for the first one
# the -g operator is often equivalent. The ENFMT stands for
# record flocking enforcement, a platform-dependent feature.
S_ISENFMT($mode) S_ISWHT($mode)
```
See your native [chmod(2)](http://man.he.net/man2/chmod) and [stat(2)](http://man.he.net/man2/stat) documentation for more details about the `S_*` constants. To get status info for a symbolic link instead of the target file behind the link, use the [`lstat`](#lstat-FILEHANDLE) function.
Portability issues: ["stat" in perlport](perlport#stat).
state VARLIST
state TYPE VARLIST
state VARLIST : ATTRS
state TYPE VARLIST : ATTRS [`state`](#state-VARLIST) declares a lexically scoped variable, just like [`my`](#my-VARLIST). However, those variables will never be reinitialized, contrary to lexical variables that are reinitialized each time their enclosing block is entered. See ["Persistent Private Variables" in perlsub](perlsub#Persistent-Private-Variables) for details.
If more than one variable is listed, the list must be placed in parentheses. With a parenthesised list, [`undef`](#undef-EXPR) can be used as a dummy placeholder. However, since initialization of state variables in such lists is currently not possible, this would serve no purpose.
Redeclaring a variable in the same scope or statement will "shadow" the previous declaration, creating a new instance and preventing access to the previous one. This is usually undesired and, if warnings are enabled, will result in a warning in the `shadow` category.
[`state`](#state-VARLIST) is available only if the [`"state"` feature](feature#The-%27state%27-feature) is enabled or if it is prefixed with `CORE::`. The [`"state"` feature](feature#The-%27state%27-feature) is enabled automatically with a `use v5.10` (or higher) declaration in the current scope.
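For example, a minimal counter whose variable persists across calls (assumes the `state` feature is enabled as described above):

```
use feature 'state';
sub counter {
    state $count = 0;   # initialized only on the first call
    return ++$count;
}
print counter() for 1 .. 3;   # prints "123"
```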
study SCALAR study At this time, `study` does nothing. This may change in the future.
Prior to Perl version 5.16, it would create an inverted index of all characters that occurred in the given SCALAR (or [`$_`](perlvar#%24_) if unspecified). When matching a pattern, the rarest character from the pattern would be looked up in this index. Rarity was based on some static frequency tables constructed from some C programs and English text.
sub NAME BLOCK
sub NAME (PROTO) BLOCK
sub NAME : ATTRS BLOCK
sub NAME (PROTO) : ATTRS BLOCK This is a subroutine definition, not a real function *per se*. Without a BLOCK it's just a forward declaration. Without a NAME, it's an anonymous function declaration, and as such it does return a value: the CODE ref of the closure just created.
See <perlsub> and <perlref> for details about subroutines and references; see <attributes> and <Attribute::Handlers> for more information about attributes.
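For example (a minimal illustration):

```
sub greet { print "hello\n" }            # named definition; used for its effect
my $code = sub { print "hi there\n" };   # anonymous; returns a CODE ref
$code->();                               # call through the reference
```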
\_\_SUB\_\_ A special token that returns a reference to the current subroutine, or [`undef`](#undef-EXPR) outside of a subroutine.
The behaviour of [`__SUB__`](#__SUB__) within a regex code block (such as `/(?{...})/`) is subject to change.
This token is only available under `use v5.16` or the [`"current_sub"` feature](feature#The-%27current_sub%27-feature). See <feature>.
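For example, `__SUB__` lets an anonymous subroutine recurse without naming itself (a minimal sketch):

```
use v5.16;   # enables the "current_sub" feature
my $fact = sub {
    my $n = shift;
    return $n < 2 ? 1 : $n * __SUB__->($n - 1);
};
print $fact->(5), "\n";   # prints 120
```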
substr EXPR,OFFSET,LENGTH,REPLACEMENT
substr EXPR,OFFSET,LENGTH
substr EXPR,OFFSET Extracts a substring out of EXPR and returns it. First character is at offset zero. If OFFSET is negative, starts that far back from the end of the string. If LENGTH is omitted, returns everything through the end of the string. If LENGTH is negative, leaves that many characters off the end of the string.
```
my $s = "The black cat climbed the green tree";
my $color = substr $s, 4, 5; # black
my $middle = substr $s, 4, -11; # black cat climbed the
my $end = substr $s, 14; # climbed the green tree
my $tail = substr $s, -4; # tree
my $z = substr $s, -4, 2; # tr
```
You can use the [`substr`](#substr-EXPR%2COFFSET%2CLENGTH%2CREPLACEMENT) function as an lvalue, in which case EXPR must itself be an lvalue. If you assign something shorter than LENGTH, the string will shrink, and if you assign something longer than LENGTH, the string will grow to accommodate it. To keep the string the same length, you may need to pad or chop your value using [`sprintf`](#sprintf-FORMAT%2C-LIST).
If OFFSET and LENGTH specify a substring that is partly outside the string, only the part within the string is returned. If the substring is beyond either end of the string, [`substr`](#substr-EXPR%2COFFSET%2CLENGTH%2CREPLACEMENT) returns the undefined value and produces a warning. When used as an lvalue, specifying a substring that is entirely outside the string raises an exception. Here's an example showing the behavior for boundary cases:
```
my $name = 'fred';
substr($name, 4) = 'dy'; # $name is now 'freddy'
my $null = substr $name, 6, 2; # returns "" (no warning)
my $oops = substr $name, 7; # returns undef, with warning
substr($name, 7) = 'gap'; # raises an exception
```
An alternative to using [`substr`](#substr-EXPR%2COFFSET%2CLENGTH%2CREPLACEMENT) as an lvalue is to specify the replacement string as the 4th argument. This allows you to replace parts of the EXPR and return what was there before in one operation, just as you can with [`splice`](#splice-ARRAY%2COFFSET%2CLENGTH%2CLIST).
```
my $s = "The black cat climbed the green tree";
my $z = substr $s, 14, 7, "jumped from"; # climbed
# $s is now "The black cat jumped from the green tree"
```
Note that the lvalue returned by the three-argument version of [`substr`](#substr-EXPR%2COFFSET%2CLENGTH%2CREPLACEMENT) acts as a 'magic bullet'; each time it is assigned to, it remembers which part of the original string is being modified; for example:
```
my $x = '1234';
for (substr($x,1,2)) {
$_ = 'a'; print $x,"\n"; # prints 1a4
$_ = 'xyz'; print $x,"\n"; # prints 1xyz4
$x = '56789';
$_ = 'pq'; print $x,"\n"; # prints 5pq9
}
```
With negative offsets, it remembers its position from the end of the string when the target string is modified:
```
my $x = '1234';
for (substr($x, -3, 2)) {
$_ = 'a'; print $x,"\n"; # prints 1a4, as above
$x = 'abcdefg';
print $_,"\n"; # prints f
}
```
Prior to Perl version 5.10, the result of using an lvalue multiple times was unspecified. Prior to 5.16, the result with negative offsets was unspecified.
symlink OLDFILE,NEWFILE Creates a new filename symbolically linked to the old filename. Returns `1` for success, `0` otherwise. On systems that don't support symbolic links, raises an exception. To check for that, use eval:
```
my $symlink_exists = eval { symlink("",""); 1 };
```
Portability issues: ["symlink" in perlport](perlport#symlink).
syscall NUMBER, LIST Calls the system call specified as the first element of the list, passing the remaining elements as arguments to the system call. If unimplemented, raises an exception. The arguments are interpreted as follows: if a given argument is numeric, the argument is passed as an int. If not, the pointer to the string value is passed. You are responsible for making sure a string is pre-extended long enough to receive any result that might be written into it. You can't use a string literal (or other read-only string) as an argument to [`syscall`](#syscall-NUMBER%2C-LIST) because Perl has to assume that any string pointer might be written through. If your integer arguments are not literals and have never been interpreted in a numeric context, you may need to add `0` to them to force them to look like numbers. This emulates the [`syswrite`](#syswrite-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) function (or vice versa):
```
require 'syscall.ph'; # may need to run h2ph
my $s = "hi there\n";
syscall(SYS_write(), fileno(STDOUT), $s, length $s);
```
Note that Perl supports passing only up to 14 arguments to your syscall, which in practice should usually suffice.
Syscall returns whatever value was returned by the system call it calls. If the system call fails, [`syscall`](#syscall-NUMBER%2C-LIST) returns `-1` and sets [`$!`](perlvar#%24%21) (errno). Note that some system calls *can* legitimately return `-1`. The proper way to handle such calls is to assign `$! = 0` before the call, then check the value of [`$!`](perlvar#%24%21) if [`syscall`](#syscall-NUMBER%2C-LIST) returns `-1`.
There's a problem with `syscall(SYS_pipe())`: it returns the file number of the read end of the pipe it creates, but there is no way to retrieve the file number of the other end. You can avoid this problem by using [`pipe`](#pipe-READHANDLE%2CWRITEHANDLE) instead.
Portability issues: ["syscall" in perlport](perlport#syscall).
sysopen FILEHANDLE,FILENAME,MODE
sysopen FILEHANDLE,FILENAME,MODE,PERMS Opens the file whose filename is given by FILENAME, and associates it with FILEHANDLE. If FILEHANDLE is an expression, its value is used as the real filehandle wanted; an undefined scalar will be suitably autovivified. This function calls the underlying operating system's [open(2)](http://man.he.net/man2/open) function with the parameters FILENAME, MODE, and PERMS.
Returns true on success and [`undef`](#undef-EXPR) otherwise.
[PerlIO](perlio) layers will be applied to the handle the same way they would in an [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) call that does not specify layers. That is, the current value of [`${^OPEN}`](perlvar#%24%7B%5EOPEN%7D) as set by the <open> pragma in a lexical scope, or the `-C` commandline option or `PERL_UNICODE` environment variable in the main program scope, falling back to the platform defaults as described in ["Defaults and how to override them" in PerlIO](perlio#Defaults-and-how-to-override-them). If you want to remove any layers that may transform the byte stream, use [`binmode`](#binmode-FILEHANDLE%2C-LAYER) after opening it.
The possible values and flag bits of the MODE parameter are system-dependent; they are available via the standard module [`Fcntl`](fcntl). See the documentation of your operating system's [open(2)](http://man.he.net/man2/open) syscall to see which values and flag bits are available. You may combine several flags using the `|`-operator.
Some of the most common values are `O_RDONLY` for opening the file in read-only mode, `O_WRONLY` for opening the file in write-only mode, and `O_RDWR` for opening the file in read-write mode.
For historical reasons, some values work on almost every system supported by Perl: 0 means read-only, 1 means write-only, and 2 means read/write. We know that these values do *not* work under OS/390; you probably don't want to use them in new code.
If the file named by FILENAME does not exist and the [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) call creates it (typically because MODE includes the `O_CREAT` flag), then the value of PERMS specifies the permissions of the newly created file. If you omit the PERMS argument to [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE), Perl uses the octal value `0666`. These permission values need to be in octal, and are modified by your process's current [`umask`](#umask-EXPR).
In many systems the `O_EXCL` flag is available for opening files in exclusive mode. This is **not** locking: exclusiveness means here that if the file already exists, [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE) fails. `O_EXCL` may not work on network filesystems, and has no effect unless the `O_CREAT` flag is set as well. Setting `O_CREAT|O_EXCL` prevents the file from being opened if it is a symbolic link. It does not protect against symbolic links in the file's path.
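For example, a minimal sketch of creating a file exclusively (`$path` here is just a placeholder):

```
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);
sysopen(my $fh, $path, O_WRONLY | O_CREAT | O_EXCL, 0666)
    or die "Can't create $path: $!";
print {$fh} "fresh file\n";
close($fh) or die "Can't close $path: $!";
```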
Sometimes you may want to truncate an already-existing file. This can be done using the `O_TRUNC` flag. The behavior of `O_TRUNC` with `O_RDONLY` is undefined.
You should seldom if ever use `0644` as an argument to [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE), because that takes away the user's option to have a more permissive umask. Better to omit it. See [`umask`](#umask-EXPR) for more on this.
This function has no direct relation to the usage of [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`syswrite`](#syswrite-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), or [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE). A handle opened with this function can be used with buffered IO just as one opened with [`open`](#open-FILEHANDLE%2CMODE%2CEXPR) can be used with unbuffered IO.
Note that under Perls older than 5.8.0, [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE) depends on the [fdopen(3)](http://man.he.net/man3/fdopen) C library function. On many Unix systems, [fdopen(3)](http://man.he.net/man3/fdopen) is known to fail when file descriptors exceed a certain value, typically 255. If you need more file descriptors than that, consider using the [`POSIX::open`](posix#open) function. For Perls 5.8.0 and later, PerlIO is (most often) the default.
See <perlopentut> for a kinder, gentler explanation of opening files.
Portability issues: ["sysopen" in perlport](perlport#sysopen).
sysread FILEHANDLE,SCALAR,LENGTH,OFFSET
sysread FILEHANDLE,SCALAR,LENGTH Attempts to read LENGTH bytes of data into variable SCALAR from the specified FILEHANDLE, using [read(2)](http://man.he.net/man2/read). It bypasses any [PerlIO](perlio) layers including buffered IO (but is affected by the presence of the `:utf8` layer as described later), so mixing this with other kinds of reads, [`print`](#print-FILEHANDLE-LIST), [`write`](#write-FILEHANDLE), [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`tell`](#tell-FILEHANDLE), or [`eof`](#eof-FILEHANDLE) can cause confusion because the `:perlio` or `:crlf` layers usually buffer data. Returns the number of bytes actually read, `0` at end of file, or undef if there was an error (in the latter case [`$!`](perlvar#%24%21) is also set). SCALAR will be grown or shrunk so that the last byte actually read is the last byte of the scalar after the read.
An OFFSET may be specified to place the read data at some place in the string other than the beginning. A negative OFFSET specifies placement at that many characters counting backwards from the end of the string. A positive OFFSET greater than the length of SCALAR results in the string being padded to the required size with `"\0"` bytes before the result of the read is appended.
There is no syseof() function, which is ok, since [`eof`](#eof-FILEHANDLE) doesn't work well on device files (like ttys) anyway. Use [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) and check for a return value of 0 to decide whether you're done.
Note that if the filehandle has been marked as `:utf8`, `sysread` will throw an exception. The `:encoding(...)` layer implicitly introduces the `:utf8` layer. See [`binmode`](#binmode-FILEHANDLE%2C-LAYER), [`open`](#open-FILEHANDLE%2CMODE%2CEXPR), and the <open> pragma.
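For example, a minimal sketch that reads a file in fixed-size chunks (`$path` is a placeholder; the `:raw` layer avoids the `:utf8` restriction mentioned above):

```
open(my $fh, '<:raw', $path) or die "Can't open $path: $!";
my $data = '';
while (1) {
    my $got = sysread($fh, my $chunk, 8192);
    die "sysread failed: $!" unless defined $got;
    last if $got == 0;   # end of file
    $data .= $chunk;
}
```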
sysseek FILEHANDLE,POSITION,WHENCE Sets FILEHANDLE's system position *in bytes* using [lseek(2)](http://man.he.net/man2/lseek). FILEHANDLE may be an expression whose value gives the name of the filehandle. The values for WHENCE are `0` to set the new position to POSITION; `1` to set it to the current position plus POSITION; and `2` to set it to EOF plus POSITION, typically negative.
Note the emphasis on bytes: even if the filehandle has been set to operate on characters (for example using the `:encoding(UTF-8)` I/O layer), the [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`tell`](#tell-FILEHANDLE), and [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE) family of functions use byte offsets, not character offsets, because seeking to a character offset would be very slow in a UTF-8 file.
[`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE) bypasses normal buffered IO, so mixing it with reads other than [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET) (for example [`readline`](#readline-EXPR) or [`read`](#read-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET)), [`print`](#print-FILEHANDLE-LIST), [`write`](#write-FILEHANDLE), [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`tell`](#tell-FILEHANDLE), or [`eof`](#eof-FILEHANDLE) may cause confusion.
For WHENCE, you may also use the constants `SEEK_SET`, `SEEK_CUR`, and `SEEK_END` (start of the file, current position, end of the file) from the [Fcntl](fcntl) module. Use of the constants is also more portable than relying on 0, 1, and 2. For example to define a "systell" function:
```
use Fcntl 'SEEK_CUR';
sub systell { sysseek($_[0], 0, SEEK_CUR) }
```
Returns the new position, or the undefined value on failure. A position of zero is returned as the string `"0 but true"`; thus [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE) returns true on success and false on failure, yet you can still easily determine the new position.
system LIST
system PROGRAM LIST Does exactly the same thing as [`exec`](#exec-LIST), except that a fork is done first and the parent process waits for the child process to exit. Note that argument processing varies depending on the number of arguments. If there is more than one argument in LIST, or if LIST is an array with more than one value, starts the program given by the first element of the list with arguments given by the rest of the list. If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is `/bin/sh -c` on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to `execvp`, which is more efficient. On Windows, only the `system PROGRAM LIST` syntax will reliably avoid using the shell; `system LIST`, even with more than one element, will fall back to the shell if the first spawn fails.
Perl will attempt to flush all files opened for output before any operation that may do a fork, but this may not be supported on some platforms (see <perlport>). To be safe, you may need to set [`$|`](perlvar#%24%7C) (`$AUTOFLUSH` in [English](english)) or call the `autoflush` method of [`IO::Handle`](IO::Handle#METHODS) on any open handles.
The return value is the exit status of the program as returned by the [`wait`](#wait) call. To get the actual exit value, shift right by eight (see below). See also [`exec`](#exec-LIST). This is *not* what you want to use to capture the output from a command; for that you should use merely backticks or [`qx//`](#qx%2FSTRING%2F), as described in ["`STRING`" in perlop](perlop#%60STRING%60). Return value of -1 indicates a failure to start the program or an error of the [wait(2)](http://man.he.net/man2/wait) system call (inspect [`$!`](perlvar#%24%21) for the reason).
If you'd like to make [`system`](#system-LIST) (and many other bits of Perl) die on error, have a look at the <autodie> pragma.
Like [`exec`](#exec-LIST), [`system`](#system-LIST) allows you to lie to a program about its name if you use the `system PROGRAM LIST` syntax. Again, see [`exec`](#exec-LIST).
Since `SIGINT` and `SIGQUIT` are ignored during the execution of [`system`](#system-LIST), if you expect your program to terminate on receipt of these signals you will need to arrange to do so yourself based on the return value.
```
my @args = ("command", "arg1", "arg2");
system(@args) == 0
or die "system @args failed: $?";
```
If you'd like to manually inspect [`system`](#system-LIST)'s failure, you can check all possible failure modes by inspecting [`$?`](perlvar#%24%3F) like this:
```
if ($? == -1) {
print "failed to execute: $!\n";
}
elsif ($? & 127) {
printf "child died with signal %d, %s coredump\n",
($? & 127), ($? & 128) ? 'with' : 'without';
}
else {
printf "child exited with value %d\n", $? >> 8;
}
```
Alternatively, you may inspect the value of [`${^CHILD_ERROR_NATIVE}`](perlvar#%24%7B%5ECHILD_ERROR_NATIVE%7D) with the [`W*()`](posix#WIFEXITED) calls from the [POSIX](posix) module.
When [`system`](#system-LIST)'s arguments are executed indirectly by the shell, results and return codes are subject to its quirks. See ["`STRING`" in perlop](perlop#%60STRING%60) and [`exec`](#exec-LIST) for details.
Since [`system`](#system-LIST) does a [`fork`](#fork) and [`wait`](#wait) it may affect a `SIGCHLD` handler. See <perlipc> for details.
Portability issues: ["system" in perlport](perlport#system).
syswrite FILEHANDLE,SCALAR,LENGTH,OFFSET
syswrite FILEHANDLE,SCALAR,LENGTH
syswrite FILEHANDLE,SCALAR Attempts to write LENGTH bytes of data from variable SCALAR to the specified FILEHANDLE, using [write(2)](http://man.he.net/man2/write). If LENGTH is not specified, writes the whole SCALAR. It bypasses any [PerlIO](perlio) layers including buffered IO (but is affected by the presence of the `:utf8` layer as described later), so mixing this with reads (other than `sysread()`), [`print`](#print-FILEHANDLE-LIST), [`write`](#write-FILEHANDLE), [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`tell`](#tell-FILEHANDLE), or [`eof`](#eof-FILEHANDLE) may cause confusion because the `:perlio` and `:crlf` layers usually buffer data. Returns the number of bytes actually written, or [`undef`](#undef-EXPR) if there was an error (in this case the errno variable [`$!`](perlvar#%24%21) is also set). If the LENGTH is greater than the data available in the SCALAR after the OFFSET, only as much data as is available will be written.
An OFFSET may be specified to write the data from some part of the string other than the beginning. A negative OFFSET specifies writing that many characters counting backwards from the end of the string. If SCALAR is of length zero, you can only use an OFFSET of 0.
**WARNING**: If the filehandle is marked `:utf8`, `syswrite` will raise an exception. The `:encoding(...)` layer implicitly introduces the `:utf8` layer. Alternately, if the handle is not marked with an encoding but you attempt to write characters with code points over 255, raises an exception. See [`binmode`](#binmode-FILEHANDLE%2C-LAYER), [`open`](#open-FILEHANDLE%2CMODE%2CEXPR), and the <open> pragma.
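For example, a minimal sketch that writes a buffer completely, allowing for partial writes (assumes `$fh` is a handle without an encoding layer, e.g. opened with `:raw` or via [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE)):

```
my $buf = "some bytes\n";
my $off = 0;
while ($off < length $buf) {
    my $wrote = syswrite($fh, $buf, length($buf) - $off, $off);
    die "syswrite failed: $!" unless defined $wrote;
    $off += $wrote;
}
```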
tell FILEHANDLE tell Returns the current position *in bytes* for FILEHANDLE, or -1 on error. FILEHANDLE may be an expression whose value gives the name of the actual filehandle. If FILEHANDLE is omitted, assumes the file last read.
Note the emphasis on bytes: even if the filehandle has been set to operate on characters (for example using the `:encoding(UTF-8)` I/O layer), the [`seek`](#seek-FILEHANDLE%2CPOSITION%2CWHENCE), [`tell`](#tell-FILEHANDLE), and [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE) family of functions use byte offsets, not character offsets, because seeking to a character offset would be very slow in a UTF-8 file.
The return value of [`tell`](#tell-FILEHANDLE) for standard streams such as STDIN depends on the operating system: it may return -1 or something else. [`tell`](#tell-FILEHANDLE) on pipes, fifos, and sockets usually returns -1.
There is no `systell` function. Use [`sysseek($fh, 0, 1)`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE) for that.
Do not use [`tell`](#tell-FILEHANDLE) (or other buffered I/O operations) on a filehandle that has been manipulated by [`sysread`](#sysread-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), [`syswrite`](#syswrite-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET), or [`sysseek`](#sysseek-FILEHANDLE%2CPOSITION%2CWHENCE). Those functions ignore the buffering, while [`tell`](#tell-FILEHANDLE) does not.
telldir DIRHANDLE Returns the current position of the [`readdir`](#readdir-DIRHANDLE) routines on DIRHANDLE. Value may be given to [`seekdir`](#seekdir-DIRHANDLE%2CPOS) to access a particular location in a directory. [`telldir`](#telldir-DIRHANDLE) has the same caveats about possible directory compaction as the corresponding system library routine.
tie VARIABLE,CLASSNAME,LIST This function binds a variable to a package class that will provide the implementation for the variable. VARIABLE is the name of the variable to be enchanted. CLASSNAME is the name of a class implementing objects of correct type. Any additional arguments are passed to the appropriate constructor method of the class (meaning `TIESCALAR`, `TIEHANDLE`, `TIEARRAY`, or `TIEHASH`). Typically these are arguments such as might be passed to the [dbm\_open(3)](http://man.he.net/man3/dbm_open) function of C. The object returned by the constructor is also returned by the [`tie`](#tie-VARIABLE%2CCLASSNAME%2CLIST) function, which would be useful if you want to access other methods in CLASSNAME.
Note that functions such as [`keys`](#keys-HASH) and [`values`](#values-HASH) may return huge lists when used on large objects, like DBM files. You may prefer to use the [`each`](#each-HASH) function to iterate over such. Example:
```
# print out history file offsets
use NDBM_File;
tie(my %HIST, 'NDBM_File', '/usr/lib/news/history', 1, 0);
while (my ($key,$val) = each %HIST) {
print $key, ' = ', unpack('L', $val), "\n";
}
```
A class implementing a hash should have the following methods:
```
TIEHASH classname, LIST
FETCH this, key
STORE this, key, value
DELETE this, key
CLEAR this
EXISTS this, key
FIRSTKEY this
NEXTKEY this, lastkey
SCALAR this
DESTROY this
UNTIE this
```
A class implementing an ordinary array should have the following methods:
```
TIEARRAY classname, LIST
FETCH this, key
STORE this, key, value
FETCHSIZE this
STORESIZE this, count
CLEAR this
PUSH this, LIST
POP this
SHIFT this
UNSHIFT this, LIST
SPLICE this, offset, length, LIST
EXTEND this, count
DELETE this, key
EXISTS this, key
DESTROY this
UNTIE this
```
A class implementing a filehandle should have the following methods:
```
TIEHANDLE classname, LIST
READ this, scalar, length, offset
READLINE this
GETC this
WRITE this, scalar, length, offset
PRINT this, LIST
PRINTF this, format, LIST
BINMODE this
EOF this
FILENO this
SEEK this, position, whence
TELL this
OPEN this, mode, LIST
CLOSE this
DESTROY this
UNTIE this
```
A class implementing a scalar should have the following methods:
```
TIESCALAR classname, LIST
FETCH this
STORE this, value
DESTROY this
UNTIE this
```
Not all methods indicated above need be implemented. See <perltie>, <Tie::Hash>, <Tie::Array>, <Tie::Scalar>, and <Tie::Handle>.
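For example, a minimal (illustrative) scalar tie that uppercases everything stored in it:

```
package UpCase;
sub TIESCALAR { my ($class) = @_; my $val = ''; return bless \$val, $class }
sub FETCH     { my ($self) = @_; return $$self }
sub STORE     { my ($self, $new) = @_; $$self = uc $new }   # store uppercased

package main;
tie my $shout, 'UpCase';
$shout = 'quiet, please';
print $shout, "\n";   # prints "QUIET, PLEASE"
```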
Unlike [`dbmopen`](#dbmopen-HASH%2CDBNAME%2CMASK), the [`tie`](#tie-VARIABLE%2CCLASSNAME%2CLIST) function will not [`use`](#use-Module-VERSION-LIST) or [`require`](#require-VERSION) a module for you; you need to do that explicitly yourself. See [DB\_File](db_file) or the [Config](config) module for interesting [`tie`](#tie-VARIABLE%2CCLASSNAME%2CLIST) implementations.
For further details see <perltie>, [`tied`](#tied-VARIABLE).
tied VARIABLE Returns a reference to the object underlying VARIABLE (the same value that was originally returned by the [`tie`](#tie-VARIABLE%2CCLASSNAME%2CLIST) call that bound the variable to a package.) Returns the undefined value if VARIABLE isn't tied to a package.
time Returns the number of non-leap seconds since whatever time the system considers to be the epoch, suitable for feeding to [`gmtime`](#gmtime-EXPR) and [`localtime`](#localtime-EXPR). On most systems the epoch is 00:00:00 UTC, January 1, 1970; a prominent exception being Mac OS Classic which uses 00:00:00, January 1, 1904 in the current local time zone for its epoch.
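For example (a minimal illustration with one-second resolution):

```
my $start = time;
sleep 2;
printf "took about %d seconds\n", time - $start;
```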
For measuring time in better granularity than one second, use the <Time::HiRes> module from Perl 5.8 onwards (or from CPAN before then), or, if you have [gettimeofday(2)](http://man.he.net/man2/gettimeofday), you may be able to use the [`syscall`](#syscall-NUMBER%2C-LIST) interface of Perl. See <perlfaq8> for details.
For date and time processing look at the many related modules on CPAN. For a comprehensive date and time representation look at the [DateTime](datetime) module.
times Returns a four-element list giving the user and system times in seconds for this process and any exited children of this process.
```
my ($user,$system,$cuser,$csystem) = times;
```
In scalar context, [`times`](#times) returns `$user`.
Children's times are only included for terminated children.
Portability issues: ["times" in perlport](perlport#times).
tr/// The transliteration operator. Same as [`y///`](#y%2F%2F%2F). See ["Quote-Like Operators" in perlop](perlop#Quote-Like-Operators).
truncate FILEHANDLE,LENGTH
truncate EXPR,LENGTH Truncates the file opened on FILEHANDLE, or named by EXPR, to the specified length. Raises an exception if truncate isn't implemented on your system. Returns true if successful, [`undef`](#undef-EXPR) on error.
The behavior is undefined if LENGTH is greater than the length of the file.
The position in the file of FILEHANDLE is left unchanged. You may want to call [seek](#seek-FILEHANDLE%2CPOSITION%2CWHENCE) before writing to the file.
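For example, a minimal sketch of emptying an existing file (`$logfile` is a placeholder):

```
open(my $fh, '+<', $logfile) or die "Can't open $logfile: $!";
truncate($fh, 0)             or die "Can't truncate $logfile: $!";
seek($fh, 0, 0);   # rewind before writing again
```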
Portability issues: ["truncate" in perlport](perlport#truncate).
uc EXPR uc Returns an uppercased version of EXPR. If EXPR is omitted, uses [`$_`](perlvar#%24_).
```
my $str = uc("Perl is GREAT"); # "PERL IS GREAT"
```
This function behaves the same way under various pragmas, such as in a locale, as [`lc`](#lc-EXPR) does.
If you want titlecase mapping on initial letters see [`ucfirst`](#ucfirst-EXPR) instead.
**Note:** This is the internal function implementing the [`\U`](perlop#Quote-and-Quote-like-Operators) escape in double-quoted strings.
```
my $str = "Perl is \Ugreat\E"; # "Perl is GREAT"
```
ucfirst EXPR ucfirst Returns the value of EXPR with the first character in uppercase (titlecase in Unicode). This is the internal function implementing the `\u` escape in double-quoted strings.
If EXPR is omitted, uses [`$_`](perlvar#%24_).
This function behaves the same way under various pragmas, such as in a locale, as [`lc`](#lc-EXPR) does.
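For example:

```
my $str = ucfirst("perl is great");   # "Perl is great"
```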
umask EXPR umask Sets the umask for the process to EXPR and returns the previous value. If EXPR is omitted, merely returns the current umask.
The Unix permission `rwxr-x---` is represented as three sets of three bits, or three octal digits: `0750` (the leading 0 indicates octal and isn't one of the digits). The [`umask`](#umask-EXPR) value is such a number representing disabled permissions bits. The permission (or "mode") values you pass [`mkdir`](#mkdir-FILENAME%2CMODE) or [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE) are modified by your umask, so even if you tell [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE) to create a file with permissions `0777`, if your umask is `0022`, then the file will actually be created with permissions `0755`. If your [`umask`](#umask-EXPR) were `0027` (group can't write; others can't read, write, or execute), then passing [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE) `0666` would create a file with mode `0640` (because `0666 &~ 027` is `0640`).
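For example, a minimal sketch showing the arithmetic (`$path` is a placeholder):

```
use Fcntl qw(O_WRONLY O_CREAT);
umask 0022;
sysopen(my $fh, $path, O_WRONLY | O_CREAT, 0666)
    or die "Can't create $path: $!";
# 0666 & ~0022 is 0644, so the file is created rw-r--r--
```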
Here's some advice: supply a creation mode of `0666` for regular files (in [`sysopen`](#sysopen-FILEHANDLE%2CFILENAME%2CMODE)) and one of `0777` for directories (in [`mkdir`](#mkdir-FILENAME%2CMODE)) and executable files. This gives users the freedom of choice: if they want protected files, they might choose process umasks of `022`, `027`, or even the particularly antisocial mask of `077`. Programs should rarely if ever make policy decisions better left to the user. The exception to this is when writing files that should be kept private: mail files, web browser cookies, *.rhosts* files, and so on.
If [umask(2)](http://man.he.net/man2/umask) is not implemented on your system and you are trying to restrict access for *yourself* (i.e., `(EXPR & 0700) > 0`), raises an exception. If [umask(2)](http://man.he.net/man2/umask) is not implemented and you are not trying to restrict access for yourself, returns [`undef`](#undef-EXPR).
Remember that a umask is a number, usually given in octal; it is *not* a string of octal digits. See also [`oct`](#oct-EXPR), if all you have is a string.
Portability issues: ["umask" in perlport](perlport#umask).
undef EXPR undef Undefines the value of EXPR, which must be an lvalue. Use only on a scalar value, an array (using `@`), a hash (using `%`), a subroutine (using `&`), or a typeglob (using `*`). Saying `undef $hash{$key}` will probably not do what you expect on most predefined variables or DBM list values, so don't do that; see [`delete`](#delete-EXPR). Always returns the undefined value. You can omit the EXPR, in which case nothing is undefined, but you still get an undefined value that you could, for instance, return from a subroutine, assign to a variable, or pass as a parameter. Examples:
```
undef $foo;
undef $bar{'blurfl'}; # Compare to: delete $bar{'blurfl'};
undef @ary;
undef %hash;
undef &mysub;
undef *xyz; # destroys $xyz, @xyz, %xyz, &xyz, etc.
return (wantarray ? (undef, $errmsg) : undef) if $they_blew_it;
select undef, undef, undef, 0.25;
my ($x, $y, undef, $z) = foo(); # Ignore third value returned
```
Note that this is a unary operator, not a list operator.
unlink LIST unlink Deletes a list of files. On success, it returns the number of files it successfully deleted. On failure, it returns false and sets [`$!`](perlvar#%24%21) (errno):
```
my $unlinked = unlink 'a', 'b', 'c';
unlink @goners;
unlink glob "*.bak";
```
On error, [`unlink`](#unlink-LIST) will not tell you which files it could not remove. If you want to know which files you could not remove, try them one at a time:
```
foreach my $file ( @goners ) {
unlink $file or warn "Could not unlink $file: $!";
}
```
Note: [`unlink`](#unlink-LIST) will not attempt to delete directories unless you are superuser and the **-U** flag is supplied to Perl. Even if these conditions are met, be warned that unlinking a directory can inflict damage on your filesystem. Finally, using [`unlink`](#unlink-LIST) on directories is not supported on many operating systems. Use [`rmdir`](#rmdir-FILENAME) instead.
If LIST is omitted, [`unlink`](#unlink-LIST) uses [`$_`](perlvar#%24_).
unpack TEMPLATE,EXPR
unpack TEMPLATE [`unpack`](#unpack-TEMPLATE%2CEXPR) does the reverse of [`pack`](#pack-TEMPLATE%2CLIST): it takes a string and expands it out into a list of values. (In scalar context, it returns merely the first value produced.)
If EXPR is omitted, unpacks the [`$_`](perlvar#%24_) string. See <perlpacktut> for an introduction to this function.
The string is broken into chunks described by the TEMPLATE. Each chunk is converted separately to a value. Typically, either the string is a result of [`pack`](#pack-TEMPLATE%2CLIST), or the characters of the string represent a C structure of some kind.
The TEMPLATE has the same format as in the [`pack`](#pack-TEMPLATE%2CLIST) function. Here's a subroutine that does substring:
```
sub substr {
my ($what, $where, $howmuch) = @_;
unpack("x$where a$howmuch", $what);
}
```
and then there's
```
sub ordinal { unpack("W",$_[0]); } # same as ord()
```
In addition to fields allowed in [`pack`](#pack-TEMPLATE%2CLIST), you may prefix a field with a %<number> to indicate that you want a <number>-bit checksum of the items instead of the items themselves. Default is a 16-bit checksum. The checksum is calculated by summing numeric values of expanded values (for string fields the sum of `ord($char)` is taken; for bit fields the sum of zeroes and ones).
For example, the following computes the same number as the System V sum program:
```
my $checksum = do {
local $/; # slurp!
unpack("%32W*", readline) % 65535;
};
```
The following efficiently counts the number of set bits in a bit vector:
```
my $setbits = unpack("%32b*", $selectmask);
```
The `p` and `P` formats should be used with care. Since Perl has no way of checking whether the value passed to [`unpack`](#unpack-TEMPLATE%2CEXPR) corresponds to a valid memory location, passing a pointer value that's not known to be valid is likely to have disastrous consequences.
If there are more pack codes or if the repeat count of a field or a group is larger than what the remainder of the input string allows, the result is not well defined: the repeat count may be decreased, or [`unpack`](#unpack-TEMPLATE%2CEXPR) may produce empty strings or zeros, or it may raise an exception. If the input string is longer than one described by the TEMPLATE, the remainder of that input string is ignored.
See [`pack`](#pack-TEMPLATE%2CLIST) for more examples and notes.
unshift ARRAY,LIST Does the opposite of a [`shift`](#shift-ARRAY). Or the opposite of a [`push`](#push-ARRAY%2CLIST), depending on how you look at it. Prepends list to the front of the array and returns the new number of elements in the array.
```
unshift(@ARGV, '-e') unless $ARGV[0] =~ /^-/;
```
Note the LIST is prepended whole, not one element at a time, so the prepended elements stay in the same order. Use [`reverse`](#reverse-LIST) to do the reverse.
Starting with Perl 5.14, an experimental feature allowed [`unshift`](#unshift-ARRAY%2CLIST) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
untie VARIABLE Breaks the binding between a variable and a package. (See [tie](#tie-VARIABLE%2CCLASSNAME%2CLIST).) Has no effect if the variable is not tied.
use Module VERSION LIST
use Module VERSION
use Module LIST
use Module Imports some semantics into the current package from the named module, generally by aliasing certain subroutine or variable names into your package. It is exactly equivalent to
```
BEGIN { require Module; Module->import( LIST ); }
```
except that Module *must* be a bareword. The importation can be made conditional by using the <if> module.
The `BEGIN` forces the [`require`](#require-VERSION) and [`import`](#import-LIST) to happen at compile time. The [`require`](#require-VERSION) makes sure the module is loaded into memory if it hasn't been yet. The [`import`](#import-LIST) is not a builtin; it's just an ordinary static method call into the `Module` package to tell the module to import the list of features back into the current package. The module can implement its [`import`](#import-LIST) method any way it likes, though most modules just choose to derive their [`import`](#import-LIST) method via inheritance from the `Exporter` class that is defined in the [`Exporter`](exporter) module. See [Exporter](exporter). If no [`import`](#import-LIST) method can be found, then the call is skipped, even if there is an AUTOLOAD method.
If you do not want to call the package's [`import`](#import-LIST) method (for instance, to stop your namespace from being altered), explicitly supply the empty list:
```
use Module ();
```
That is exactly equivalent to
```
BEGIN { require Module }
```
If the VERSION argument is present between Module and LIST, then the [`use`](#use-Module-VERSION-LIST) will call the `VERSION` method in class Module with the given version as an argument:
```
use Module 12.34;
```
is equivalent to:
```
BEGIN { require Module; Module->VERSION(12.34) }
```
The [default `VERSION` method](universal#VERSION-%28-%5B-REQUIRE-%5D-%29), inherited from the [`UNIVERSAL`](universal) class, croaks if the given version is larger than the value of the variable `$Module::VERSION`.
The VERSION argument cannot be an arbitrary expression. It only counts as a VERSION argument if it is a version number literal, starting with either a digit or `v` followed by a digit. Anything that doesn't look like a version literal will be parsed as the start of the LIST. Nevertheless, many attempts to use an arbitrary expression as a VERSION argument will appear to work, because [Exporter](exporter)'s `import` method handles numeric arguments specially, performing version checks rather than treating them as things to export.
Again, there is a distinction between omitting LIST ([`import`](#import-LIST) called with no arguments) and an explicit empty LIST `()` ([`import`](#import-LIST) not called). Note that there is no comma after VERSION!
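For example, using the same placeholder `Module` as above:

```
use Module v1.2.3 qw(frobnicate);  # v1.2.3 is a VERSION argument
use Module 1.23 'frobnicate';      # 1.23 is a VERSION argument (no comma!)
use Module '1.23', 'frobnicate';   # '1.23' is a string, so it is parsed as
                                   # the first element of LIST instead
```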
Because this is a wide-open interface, pragmas (compiler directives) are also implemented this way. Some of the currently implemented pragmas are:
```
use constant;
use diagnostics;
use integer;
use sigtrap qw(SEGV BUS);
use strict qw(subs vars refs);
use subs qw(afunc blurfl);
use warnings qw(all);
use sort qw(stable);
```
Some of these pseudo-modules import semantics into the current block scope (like [`strict`](strict) or [`integer`](integer)), unlike ordinary modules, which import symbols into the current package (and which remain in effect through the end of the file).
Because [`use`](#use-Module-VERSION-LIST) takes effect at compile time, it doesn't respect the ordinary flow control of the code being compiled. In particular, putting a [`use`](#use-Module-VERSION-LIST) inside the false branch of a conditional doesn't prevent it from being processed. If a module or pragma only needs to be loaded conditionally, this can be done using the <if> pragma:
```
use if $] < 5.008, "utf8";
use if WANT_WARNINGS, warnings => qw(all);
```
There's a corresponding [`no`](#no-MODULE-VERSION-LIST) declaration that unimports meanings imported by [`use`](#use-Module-VERSION-LIST), i.e., it calls `Module->unimport(LIST)` instead of [`import`](#import-LIST). It behaves just as [`import`](#import-LIST) does with VERSION, an omitted or empty LIST, or no unimport method being found.
```
no integer;
no strict 'refs';
no warnings;
```
See <perlmodlib> for a list of standard modules and pragmas. See [perlrun](perlrun#-m%5B-%5Dmodule) for the `-M` and `-m` command-line options to Perl that give [`use`](#use-Module-VERSION-LIST) functionality from the command-line.
use VERSION Lexically enables all features available in the requested version as defined by the <feature> pragma, disabling any features not in the requested version's feature bundle. See <feature>.
VERSION may be either a v-string such as v5.24.1, which will be compared to [`$^V`](perlvar#%24%5EV) (aka $PERL\_VERSION), or a numeric argument of the form 5.024001, which will be compared to [`$]`](perlvar#%24%5D). An exception is raised if VERSION is greater than the version of the current Perl interpreter; Perl will not attempt to parse the rest of the file. Compare with [`require`](#require-VERSION), which can do a similar check at run time.
If the specified Perl version is 5.12 or higher, strictures are enabled lexically as with [`use strict`](strict). Similarly, if the specified Perl version is 5.35.0 or higher, <warnings> are enabled. Later use of `use VERSION` will override all behavior of a previous `use VERSION`, possibly removing the `strict`, `warnings`, and `feature` added by it. `use VERSION` does not load the *feature.pm*, *strict.pm*, or *warnings.pm* files.
In the current implementation, any explicit use of `use strict` or `no strict` overrides `use VERSION`, even if it comes before it. However, this may be subject to change in a future release of Perl, so new code should not rely on this fact. It is recommended that a `use VERSION` declaration be the first significant statement within a file (possibly after a `package` statement or any amount of whitespace or comment), so that its effects happen first, and other pragmata are applied after it.
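A typical file preamble following this recommendation might look like this (the package and module names are illustrative):

```
package My::App;

use v5.36;                # first significant statement: strict, warnings,
                          # and the 5.36 feature bundle take effect here
use List::Util qw(sum);   # other modules and pragmas come after it
```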
Specifying VERSION as a numeric argument of the form 5.024001 should generally be avoided, as it is an older and less readable syntax than v5.24.1. Before Perl 5.8.0, released in 2002, the more verbose numeric form was the only supported syntax, which is why you may still see it in older code.
```
use v5.24.1; # compile time version check
use 5.24.1; # ditto
use 5.024_001; # ditto; older syntax compatible with perl 5.6
```
This is often useful if you need to check the current Perl version before [`use`](#use-Module-VERSION-LIST)ing library modules that won't work with older versions of Perl. (We try not to do this more than we have to.)
Symmetrically, `no VERSION` allows you to specify that you want a version of Perl older than the specified one. Historically this was added during early designs of the Raku language (formerly "Perl 6"), so that a Perl 5 program could begin
```
no 6;
```
to declare that it is not a Perl 6 program. As the two languages have different implementations, file naming conventions, and other infrastructure, this feature is now little used in practice and should be avoided in newly-written code.
Care should be taken when using the `no VERSION` form, as it is *only* meant to be used to assert that the running Perl is of an earlier version than its argument and *not* to undo the feature-enabling side effects of `use VERSION`.
utime LIST Changes the access and modification times on each file of a list of files. The first two elements of the list must be the NUMERIC access and modification times, in that order. Returns the number of files successfully changed. The inode change time of each file is set to the current time. For example, this code has the same effect as the Unix [touch(1)](http://man.he.net/man1/touch) command when the files *already exist* and belong to the user running the program:
```
#!/usr/bin/perl
my $atime = my $mtime = time;
utime $atime, $mtime, @ARGV;
```
Since Perl 5.8.0, if the first two elements of the list are [`undef`](#undef-EXPR), the [utime(2)](http://man.he.net/man2/utime) syscall from your C library is called with a null second argument. On most systems, this will set the file's access and modification times to the current time (i.e., equivalent to the example above) and will work even on files you don't own provided you have write permission:
```
for my $file (@ARGV) {
utime(undef, undef, $file)
|| warn "Couldn't touch $file: $!";
}
```
Under NFS this will use the time of the NFS server, not the time of the local machine. If there is a time synchronization problem, the NFS server and local machine will have different times. The Unix [touch(1)](http://man.he.net/man1/touch) command will in fact normally use this form instead of the one shown in the first example.
Passing only one of the first two elements as [`undef`](#undef-EXPR) is equivalent to passing a 0 and will not have the effect described when both are [`undef`](#undef-EXPR). This also triggers an uninitialized warning.
On systems that support [futimes(2)](http://man.he.net/man2/futimes), you may pass filehandles among the files. On systems that don't support [futimes(2)](http://man.he.net/man2/futimes), passing filehandles raises an exception. Filehandles must be passed as globs or glob references to be recognized; barewords are considered filenames.
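For example, on a system with [futimes(2)](http://man.he.net/man2/futimes) support, a lexical filehandle (which is a glob reference) can be touched directly; the filename here is illustrative:

```
open my $fh, '>>', 'some.log' or die "Can't open some.log: $!";
utime undef, undef, $fh
    or warn "Couldn't touch some.log via its filehandle: $!";
```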
Portability issues: ["utime" in perlport](perlport#utime).
values HASH
values ARRAY In list context, returns a list consisting of all the values of the named hash. In Perl 5.12 or later only, will also return a list of the values of an array; prior to that release, attempting to use an array argument will produce a syntax error. In scalar context, returns the number of values.
Hash entries are returned in an apparently random order. The actual random order is specific to a given hash; the exact same series of operations on two hashes may result in a different order for each hash. Any insertion into the hash may change the order, as will any deletion, with the exception that the most recent key returned by [`each`](#each-HASH) or [`keys`](#keys-HASH) may be deleted without changing the order. So long as a given hash is unmodified you may rely on [`keys`](#keys-HASH), [`values`](#values-HASH) and [`each`](#each-HASH) to repeatedly return the same order as each other. See ["Algorithmic Complexity Attacks" in perlsec](perlsec#Algorithmic-Complexity-Attacks) for details on why hash order is randomized. Aside from the guarantees provided here the exact details of Perl's hash algorithm and the hash traversal order are subject to change in any release of Perl. Tied hashes may behave differently to Perl's hashes with respect to changes in order on insertion and deletion of items.
As a side effect, calling [`values`](#values-HASH) resets the HASH or ARRAY's internal iterator (see [`each`](#each-HASH)) before yielding the values. In particular, calling [`values`](#values-HASH) in void context resets the iterator with no other overhead.
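For example, to make sure a subsequent [`each`](#each-HASH) loop starts from the beginning:

```
my %hash = (a => 1, b => 2, c => 3);
# an earlier, abandoned each() loop may have left the iterator mid-way
values %hash;    # void context: resets the iterator, nothing else
while (my ($k, $v) = each %hash) {
    print "$k => $v\n";
}
```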
Apart from resetting the iterator, `values @array` in list context is the same as plain `@array`. (We recommend that you use void context `keys @array` for this, but reasoned that taking `values @array` out would require more documentation than leaving it in.)
Note that the values are not copied, which means modifying them will modify the contents of the hash:
```
for (values %hash) { s/foo/bar/g } # modifies %hash values
for (@hash{keys %hash}) { s/foo/bar/g } # same
```
Starting with Perl 5.14, an experimental feature allowed [`values`](#values-HASH) to take a scalar expression. This experiment has been deemed unsuccessful, and was removed as of Perl 5.24.
To avoid confusing would-be users of your code who are running earlier versions of Perl with mysterious syntax errors, put this sort of thing at the top of your file to signal that your code will work *only* on Perls of a recent vintage:
```
use v5.12; # so keys/values/each work on arrays
```
See also [`keys`](#keys-HASH), [`each`](#each-HASH), and [`sort`](#sort-SUBNAME-LIST).
vec EXPR,OFFSET,BITS Treats the string in EXPR as a bit vector made up of elements of width BITS and returns the value of the element specified by OFFSET as an unsigned integer. BITS therefore specifies the number of bits that are reserved for each element in the bit vector. This must be a power of two from 1 to 32 (or 64, if your platform supports that).
If BITS is 8, "elements" coincide with bytes of the input string.
If BITS is 16 or more, bytes of the input string are grouped into chunks of size BITS/8, and each group is converted to a number as with [`pack`](#pack-TEMPLATE%2CLIST)/[`unpack`](#unpack-TEMPLATE%2CEXPR) with big-endian formats `n`/`N` (and analogously for BITS==64). See [`pack`](#pack-TEMPLATE%2CLIST) for details.
If BITS is 4 or less, the string is broken into bytes, then the bits of each byte are broken into 8/BITS groups. Bits of a byte are numbered in a little-endian-ish way, as in `0x01`, `0x02`, `0x04`, `0x08`, `0x10`, `0x20`, `0x40`, `0x80`. For example, breaking the single input byte `chr(0x36)` into two groups gives a list `(0x6, 0x3)`; breaking it into 4 groups gives `(0x2, 0x1, 0x3, 0x0)`.
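Continuing the `chr(0x36)` example, the grouping order can be checked directly:

```
my $byte = chr(0x36);                                        # 0b0011_0110
print join(" ", vec($byte, 0, 4), vec($byte, 1, 4)), "\n";   # "6 3"
print join(" ", map { vec($byte, $_, 2) } 0 .. 3), "\n";     # "2 1 3 0"
```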
[`vec`](#vec-EXPR%2COFFSET%2CBITS) may also be assigned to, in which case parentheses are needed to give the expression the correct precedence as in
```
vec($image, $max_x * $x + $y, 8) = 3;
```
If the selected element is outside the string, the value 0 is returned. If an element off the end of the string is written to, Perl will first extend the string with sufficiently many zero bytes. It is an error to try to write off the beginning of the string (i.e., negative OFFSET).
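For example:

```
my $v = '';
print vec($v, 5, 8), "\n";   # 0: reading beyond the end just gives 0
vec($v, 5, 8) = 0xFF;        # writing extends $v with zero bytes first
print length($v), "\n";      # 6: five "\0" bytes followed by "\xFF"
```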
If the string happens to be encoded as UTF-8 internally (and thus has the UTF8 flag set), [`vec`](#vec-EXPR%2COFFSET%2CBITS) tries to convert it to use a one-byte-per-character internal representation. However, if the string contains characters with values of 256 or higher, a fatal error will occur.
Strings created with [`vec`](#vec-EXPR%2COFFSET%2CBITS) can also be manipulated with the logical operators `|`, `&`, `^`, and `~`. These operators will assume a bit vector operation is desired when both operands are strings. See ["Bitwise String Operators" in perlop](perlop#Bitwise-String-Operators).
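For example, two small bit vectors can be combined without unpacking them:

```
my ($x, $y) = ('', '');
vec($x, 0, 8) = 0b1100;
vec($y, 0, 8) = 0b1010;
my $union        = $x | $y;            # bit vector OR (both operands are strings)
my $intersection = $x & $y;            # bit vector AND
print vec($union, 0, 8), "\n";         # 14 (0b1110)
print vec($intersection, 0, 8), "\n";  # 8  (0b1000)
```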
The following code will build up an ASCII string saying `'PerlPerlPerl'`. The comments show the string after each step. Note that this code works in the same way on big-endian or little-endian machines.
```
my $foo = '';
vec($foo, 0, 32) = 0x5065726C; # 'Perl'
# $foo eq "Perl" eq "\x50\x65\x72\x6C", 32 bits
print vec($foo, 0, 8); # prints 80 == 0x50 == ord('P')
vec($foo, 2, 16) = 0x5065; # 'PerlPe'
vec($foo, 3, 16) = 0x726C; # 'PerlPerl'
vec($foo, 8, 8) = 0x50; # 'PerlPerlP'
vec($foo, 9, 8) = 0x65; # 'PerlPerlPe'
vec($foo, 20, 4) = 2; # 'PerlPerlPe' . "\x02"
vec($foo, 21, 4) = 7; # 'PerlPerlPer'
# 'r' is "\x72"
vec($foo, 45, 2) = 3; # 'PerlPerlPer' . "\x0c"
vec($foo, 93, 1) = 1; # 'PerlPerlPer' . "\x2c"
vec($foo, 94, 1) = 1; # 'PerlPerlPerl'
# 'l' is "\x6c"
```
To transform a bit vector into a string or list of 0's and 1's, use these:
```
my $bits = unpack("b*", $vector);
my @bits = split(//, unpack("b*", $vector));
```
If you know the exact length in bits, it can be used in place of the `*`.
Here is an example to illustrate how the bits actually fall in place:
```
#!/usr/bin/perl -wl
print <<'EOT';
0 1 2 3
unpack("V",$_) 01234567890123456789012345678901
------------------------------------------------------------------
EOT
for $w (0..3) {
$width = 2**$w;
for ($shift=0; $shift < $width; ++$shift) {
for ($off=0; $off < 32/$width; ++$off) {
$str = pack("B*", "0"x32);
$bits = (1<<$shift);
vec($str, $off, $width) = $bits;
$res = unpack("b*",$str);
$val = unpack("V", $str);
write;
}
}
}
format STDOUT =
vec($_,@#,@#) = @<< == @######### @>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
$off, $width, $bits, $val, $res
.
__END__
```
Regardless of the machine architecture on which it runs, the example above should print the following table:
```
0 1 2 3
unpack("V",$_) 01234567890123456789012345678901
------------------------------------------------------------------
vec($_, 0, 1) = 1 == 1 10000000000000000000000000000000
vec($_, 1, 1) = 1 == 2 01000000000000000000000000000000
vec($_, 2, 1) = 1 == 4 00100000000000000000000000000000
vec($_, 3, 1) = 1 == 8 00010000000000000000000000000000
vec($_, 4, 1) = 1 == 16 00001000000000000000000000000000
vec($_, 5, 1) = 1 == 32 00000100000000000000000000000000
vec($_, 6, 1) = 1 == 64 00000010000000000000000000000000
vec($_, 7, 1) = 1 == 128 00000001000000000000000000000000
vec($_, 8, 1) = 1 == 256 00000000100000000000000000000000
vec($_, 9, 1) = 1 == 512 00000000010000000000000000000000
vec($_,10, 1) = 1 == 1024 00000000001000000000000000000000
vec($_,11, 1) = 1 == 2048 00000000000100000000000000000000
vec($_,12, 1) = 1 == 4096 00000000000010000000000000000000
vec($_,13, 1) = 1 == 8192 00000000000001000000000000000000
vec($_,14, 1) = 1 == 16384 00000000000000100000000000000000
vec($_,15, 1) = 1 == 32768 00000000000000010000000000000000
vec($_,16, 1) = 1 == 65536 00000000000000001000000000000000
vec($_,17, 1) = 1 == 131072 00000000000000000100000000000000
vec($_,18, 1) = 1 == 262144 00000000000000000010000000000000
vec($_,19, 1) = 1 == 524288 00000000000000000001000000000000
vec($_,20, 1) = 1 == 1048576 00000000000000000000100000000000
vec($_,21, 1) = 1 == 2097152 00000000000000000000010000000000
vec($_,22, 1) = 1 == 4194304 00000000000000000000001000000000
vec($_,23, 1) = 1 == 8388608 00000000000000000000000100000000
vec($_,24, 1) = 1 == 16777216 00000000000000000000000010000000
vec($_,25, 1) = 1 == 33554432 00000000000000000000000001000000
vec($_,26, 1) = 1 == 67108864 00000000000000000000000000100000
vec($_,27, 1) = 1 == 134217728 00000000000000000000000000010000
vec($_,28, 1) = 1 == 268435456 00000000000000000000000000001000
vec($_,29, 1) = 1 == 536870912 00000000000000000000000000000100
vec($_,30, 1) = 1 == 1073741824 00000000000000000000000000000010
vec($_,31, 1) = 1 == 2147483648 00000000000000000000000000000001
vec($_, 0, 2) = 1 == 1 10000000000000000000000000000000
vec($_, 1, 2) = 1 == 4 00100000000000000000000000000000
vec($_, 2, 2) = 1 == 16 00001000000000000000000000000000
vec($_, 3, 2) = 1 == 64 00000010000000000000000000000000
vec($_, 4, 2) = 1 == 256 00000000100000000000000000000000
vec($_, 5, 2) = 1 == 1024 00000000001000000000000000000000
vec($_, 6, 2) = 1 == 4096 00000000000010000000000000000000
vec($_, 7, 2) = 1 == 16384 00000000000000100000000000000000
vec($_, 8, 2) = 1 == 65536 00000000000000001000000000000000
vec($_, 9, 2) = 1 == 262144 00000000000000000010000000000000
vec($_,10, 2) = 1 == 1048576 00000000000000000000100000000000
vec($_,11, 2) = 1 == 4194304 00000000000000000000001000000000
vec($_,12, 2) = 1 == 16777216 00000000000000000000000010000000
vec($_,13, 2) = 1 == 67108864 00000000000000000000000000100000
vec($_,14, 2) = 1 == 268435456 00000000000000000000000000001000
vec($_,15, 2) = 1 == 1073741824 00000000000000000000000000000010
vec($_, 0, 2) = 2 == 2 01000000000000000000000000000000
vec($_, 1, 2) = 2 == 8 00010000000000000000000000000000
vec($_, 2, 2) = 2 == 32 00000100000000000000000000000000
vec($_, 3, 2) = 2 == 128 00000001000000000000000000000000
vec($_, 4, 2) = 2 == 512 00000000010000000000000000000000
vec($_, 5, 2) = 2 == 2048 00000000000100000000000000000000
vec($_, 6, 2) = 2 == 8192 00000000000001000000000000000000
vec($_, 7, 2) = 2 == 32768 00000000000000010000000000000000
vec($_, 8, 2) = 2 == 131072 00000000000000000100000000000000
vec($_, 9, 2) = 2 == 524288 00000000000000000001000000000000
vec($_,10, 2) = 2 == 2097152 00000000000000000000010000000000
vec($_,11, 2) = 2 == 8388608 00000000000000000000000100000000
vec($_,12, 2) = 2 == 33554432 00000000000000000000000001000000
vec($_,13, 2) = 2 == 134217728 00000000000000000000000000010000
vec($_,14, 2) = 2 == 536870912 00000000000000000000000000000100
vec($_,15, 2) = 2 == 2147483648 00000000000000000000000000000001
vec($_, 0, 4) = 1 == 1 10000000000000000000000000000000
vec($_, 1, 4) = 1 == 16 00001000000000000000000000000000
vec($_, 2, 4) = 1 == 256 00000000100000000000000000000000
vec($_, 3, 4) = 1 == 4096 00000000000010000000000000000000
vec($_, 4, 4) = 1 == 65536 00000000000000001000000000000000
vec($_, 5, 4) = 1 == 1048576 00000000000000000000100000000000
vec($_, 6, 4) = 1 == 16777216 00000000000000000000000010000000
vec($_, 7, 4) = 1 == 268435456 00000000000000000000000000001000
vec($_, 0, 4) = 2 == 2 01000000000000000000000000000000
vec($_, 1, 4) = 2 == 32 00000100000000000000000000000000
vec($_, 2, 4) = 2 == 512 00000000010000000000000000000000
vec($_, 3, 4) = 2 == 8192 00000000000001000000000000000000
vec($_, 4, 4) = 2 == 131072 00000000000000000100000000000000
vec($_, 5, 4) = 2 == 2097152 00000000000000000000010000000000
vec($_, 6, 4) = 2 == 33554432 00000000000000000000000001000000
vec($_, 7, 4) = 2 == 536870912 00000000000000000000000000000100
vec($_, 0, 4) = 4 == 4 00100000000000000000000000000000
vec($_, 1, 4) = 4 == 64 00000010000000000000000000000000
vec($_, 2, 4) = 4 == 1024 00000000001000000000000000000000
vec($_, 3, 4) = 4 == 16384 00000000000000100000000000000000
vec($_, 4, 4) = 4 == 262144 00000000000000000010000000000000
vec($_, 5, 4) = 4 == 4194304 00000000000000000000001000000000
vec($_, 6, 4) = 4 == 67108864 00000000000000000000000000100000
vec($_, 7, 4) = 4 == 1073741824 00000000000000000000000000000010
vec($_, 0, 4) = 8 == 8 00010000000000000000000000000000
vec($_, 1, 4) = 8 == 128 00000001000000000000000000000000
vec($_, 2, 4) = 8 == 2048 00000000000100000000000000000000
vec($_, 3, 4) = 8 == 32768 00000000000000010000000000000000
vec($_, 4, 4) = 8 == 524288 00000000000000000001000000000000
vec($_, 5, 4) = 8 == 8388608 00000000000000000000000100000000
vec($_, 6, 4) = 8 == 134217728 00000000000000000000000000010000
vec($_, 7, 4) = 8 == 2147483648 00000000000000000000000000000001
vec($_, 0, 8) = 1 == 1 10000000000000000000000000000000
vec($_, 1, 8) = 1 == 256 00000000100000000000000000000000
vec($_, 2, 8) = 1 == 65536 00000000000000001000000000000000
vec($_, 3, 8) = 1 == 16777216 00000000000000000000000010000000
vec($_, 0, 8) = 2 == 2 01000000000000000000000000000000
vec($_, 1, 8) = 2 == 512 00000000010000000000000000000000
vec($_, 2, 8) = 2 == 131072 00000000000000000100000000000000
vec($_, 3, 8) = 2 == 33554432 00000000000000000000000001000000
vec($_, 0, 8) = 4 == 4 00100000000000000000000000000000
vec($_, 1, 8) = 4 == 1024 00000000001000000000000000000000
vec($_, 2, 8) = 4 == 262144 00000000000000000010000000000000
vec($_, 3, 8) = 4 == 67108864 00000000000000000000000000100000
vec($_, 0, 8) = 8 == 8 00010000000000000000000000000000
vec($_, 1, 8) = 8 == 2048 00000000000100000000000000000000
vec($_, 2, 8) = 8 == 524288 00000000000000000001000000000000
vec($_, 3, 8) = 8 == 134217728 00000000000000000000000000010000
vec($_, 0, 8) = 16 == 16 00001000000000000000000000000000
vec($_, 1, 8) = 16 == 4096 00000000000010000000000000000000
vec($_, 2, 8) = 16 == 1048576 00000000000000000000100000000000
vec($_, 3, 8) = 16 == 268435456 00000000000000000000000000001000
vec($_, 0, 8) = 32 == 32 00000100000000000000000000000000
vec($_, 1, 8) = 32 == 8192 00000000000001000000000000000000
vec($_, 2, 8) = 32 == 2097152 00000000000000000000010000000000
vec($_, 3, 8) = 32 == 536870912 00000000000000000000000000000100
vec($_, 0, 8) = 64 == 64 00000010000000000000000000000000
vec($_, 1, 8) = 64 == 16384 00000000000000100000000000000000
vec($_, 2, 8) = 64 == 4194304 00000000000000000000001000000000
vec($_, 3, 8) = 64 == 1073741824 00000000000000000000000000000010
vec($_, 0, 8) = 128 == 128 00000001000000000000000000000000
vec($_, 1, 8) = 128 == 32768 00000000000000010000000000000000
vec($_, 2, 8) = 128 == 8388608 00000000000000000000000100000000
vec($_, 3, 8) = 128 == 2147483648 00000000000000000000000000000001
```
wait Behaves like [wait(2)](http://man.he.net/man2/wait) on your system: it waits for a child process to terminate and returns the pid of the deceased process, or `-1` if there are no child processes. The status is returned in [`$?`](perlvar#%24%3F) and [`${^CHILD_ERROR_NATIVE}`](perlvar#%24%7B%5ECHILD_ERROR_NATIVE%7D). Note that a return value of `-1` could mean that child processes are being automatically reaped, as described in <perlipc>.
If you use [`wait`](#wait) in your handler for [`$SIG{CHLD}`](perlvar#%25SIG), it may accidentally wait for the child created by [`qx`](#qx%2FSTRING%2F) or [`system`](#system-LIST). See <perlipc> for details.
Portability issues: ["wait" in perlport](perlport#wait).
waitpid PID,FLAGS Waits for a particular child process to terminate and returns the pid of the deceased process, or `-1` if there is no such child process. A non-blocking wait (with [WNOHANG](posix#WNOHANG) in FLAGS) can return 0 if there are child processes matching PID but none have terminated yet. The status is returned in [`$?`](perlvar#%24%3F) and [`${^CHILD_ERROR_NATIVE}`](perlvar#%24%7B%5ECHILD_ERROR_NATIVE%7D).
A PID of `0` indicates to wait for any child process whose process group ID is equal to that of the current process. A PID of less than `-1` indicates to wait for any child process whose process group ID is equal to -PID. A PID of `-1` indicates to wait for any child process.
If you say
```
use POSIX ":sys_wait_h";
my $kid;
do {
$kid = waitpid(-1, WNOHANG);
} while $kid > 0;
```
or
```
1 while waitpid(-1, WNOHANG) > 0;
```
then you can do a non-blocking wait for all pending zombie processes (see ["WAIT" in POSIX](posix#WAIT)). Non-blocking wait is available on machines supporting either the [waitpid(2)](http://man.he.net/man2/waitpid) or [wait4(2)](http://man.he.net/man2/wait4) syscalls. However, waiting for a particular pid with FLAGS of `0` is implemented everywhere. (Perl emulates the system call by remembering the status values of processes that have exited but have not been harvested by the Perl script yet.)
Note that on some systems, a return value of `-1` could mean that child processes are being automatically reaped. See <perlipc> for details, and for other examples.
Portability issues: ["waitpid" in perlport](perlport#waitpid).
wantarray Returns true if the context of the currently executing subroutine or [`eval`](#eval-EXPR) is looking for a list value. Returns false if the context is looking for a scalar. Returns the undefined value if the context is looking for no value (void context).
```
return unless defined wantarray; # don't bother doing more
my @a = complex_calculation();
return wantarray ? @a : "@a";
```
[`wantarray`](#wantarray)'s result is unspecified in the top level of a file, in a `BEGIN`, `UNITCHECK`, `CHECK`, `INIT` or `END` block, or in a `DESTROY` method.
This function should have been named wantlist() instead.
warn LIST Emits a warning, usually by printing it to `STDERR`. `warn` interprets its operand LIST in the same way as `die`, but is slightly different in what it defaults to when LIST is empty or makes an empty string. If it is empty and [`$@`](perlvar#%24%40) already contains an exception value then that value is used after appending `"\t...caught"`. If it is empty and `$@` is also empty then the string `"Warning: Something's wrong"` is used.
By default, the exception derived from the operand LIST is stringified and printed to `STDERR`. This behaviour can be altered by installing a [`$SIG{__WARN__}`](perlvar#%25SIG) handler. If there is such a handler then no message is automatically printed; it is the handler's responsibility to deal with the exception as it sees fit (like, for instance, converting it into a [`die`](#die-LIST)). Most handlers must therefore arrange to actually display the warnings that they are not prepared to deal with, by calling [`warn`](#warn-LIST) again in the handler. Note that this is quite safe and will not produce an endless loop, since `__WARN__` hooks are not called from inside one.
You will find this behavior is slightly different from that of [`$SIG{__DIE__}`](perlvar#%25SIG) handlers (which don't suppress the error text, but can instead call [`die`](#die-LIST) again to change it).
Using a `__WARN__` handler provides a powerful way to silence all warnings (even the so-called mandatory ones). An example:
```
# wipe out *all* compile-time warnings
BEGIN { $SIG{'__WARN__'} = sub { warn $_[0] if $DOWARN } }
my $foo = 10;
my $foo = 20; # no warning about duplicate my $foo,
# but hey, you asked for it!
# no compile-time or run-time warnings before here
$DOWARN = 1;
# run-time warnings enabled after here
warn "\$foo is alive and $foo!"; # does show up
```
See <perlvar> for details on setting [`%SIG`](perlvar#%25SIG) entries and for more examples. See the [Carp](carp) module for other kinds of warnings using its `carp` and `cluck` functions.
write FILEHANDLE
write EXPR write Writes a formatted record (possibly multi-line) to the specified FILEHANDLE, using the format associated with that file. By default the format for a file is the one having the same name as the filehandle, but the format for the current output channel (see the [`select`](#select-FILEHANDLE) function) may be set explicitly by assigning the name of the format to the [`$~`](perlvar#%24~) variable.
Top of form processing is handled automatically: if there is insufficient room on the current page for the formatted record, the page is advanced by writing a form feed and a special top-of-page format is used to format the new page header before the record is written. By default, the top-of-page format is the name of the filehandle with `_TOP` appended, or `top` in the current package if the former does not exist. This would be a problem with autovivified filehandles, but it may be dynamically set to the format of your choice by assigning the name to the [`$^`](perlvar#%24%5E) variable while that filehandle is selected. The number of lines remaining on the current page is in variable [`$-`](perlvar#%24-), which can be set to `0` to force a new page.
If FILEHANDLE is unspecified, output goes to the current default output channel, which starts out as STDOUT but may be changed by the [`select`](#select-FILEHANDLE) operator. If the FILEHANDLE is an EXPR, then the expression is evaluated and the resulting string is used to look up the name of the FILEHANDLE at run time. For more on formats, see <perlform>.
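A minimal sketch, with illustrative field names (see <perlform> for the picture-line syntax):

```
our ($name, $amount) = ('widgets', 42);

format STDOUT =
@<<<<<<<<<<<<<<< @>>>>>
$name,           $amount
.

write;   # prints one record: "widgets" left-justified, 42 right-justified
```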
Note that write is *not* the opposite of [`read`](#read-FILEHANDLE%2CSCALAR%2CLENGTH%2COFFSET). Unfortunately.
y/// The transliteration operator. Same as [`tr///`](#tr%2F%2F%2F). See ["Quote-Like Operators" in perlop](perlop#Quote-Like-Operators).
###
Non-function Keywords by Cross-reference
#### perldata
\_\_DATA\_\_
\_\_END\_\_ These keywords are documented in ["Special Literals" in perldata](perldata#Special-Literals).
#### perlmod
BEGIN CHECK END INIT UNITCHECK These compile phase keywords are documented in ["BEGIN, UNITCHECK, CHECK, INIT and END" in perlmod](perlmod#BEGIN%2C-UNITCHECK%2C-CHECK%2C-INIT-and-END).
#### perlobj
DESTROY This method keyword is documented in ["Destructors" in perlobj](perlobj#Destructors).
#### perlop
and cmp eq ge gt isa le lt ne not or x xor These operators are documented in <perlop>.
#### perlsub
AUTOLOAD This keyword is documented in ["Autoloading" in perlsub](perlsub#Autoloading).
#### perlsyn
else elsif for foreach if unless until while These flow-control keywords are documented in ["Compound Statements" in perlsyn](perlsyn#Compound-Statements).
elseif The "else if" keyword is spelled `elsif` in Perl. There's no `elif` or `else if` either. It does parse `elseif`, but only to warn you about not using it.
See the documentation for flow-control keywords in ["Compound Statements" in perlsyn](perlsyn#Compound-Statements).
default given when These flow-control keywords related to the experimental switch feature are documented in ["Switch Statements" in perlsyn](perlsyn#Switch-Statements).
try catch finally These flow-control keywords related to the experimental `try` feature are documented in ["Try Catch Exception Handling" in perlsyn](perlsyn#Try-Catch-Exception-Handling).
defer This flow-control keyword related to the experimental `defer` feature is documented in ["defer blocks" in perlsyn](perlsyn#defer-blocks).
| programming_docs |
perl Net::FTP Net::FTP
========
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
+ [Overview](#Overview)
+ [Class Methods](#Class-Methods)
+ [Object Methods](#Object-Methods)
+ [Methods for the Adventurous](#Methods-for-the-Adventurous)
+ [The dataconn Class](#The-dataconn-Class)
+ [Unimplemented](#Unimplemented)
* [EXAMPLES](#EXAMPLES)
* [EXPORTS](#EXPORTS)
* [KNOWN BUGS](#KNOWN-BUGS)
+ [Reporting Bugs](#Reporting-Bugs)
* [SEE ALSO](#SEE-ALSO)
* [ACKNOWLEDGEMENTS](#ACKNOWLEDGEMENTS)
* [AUTHOR](#AUTHOR)
* [COPYRIGHT](#COPYRIGHT)
* [LICENCE](#LICENCE)
* [VERSION](#VERSION)
* [DATE](#DATE)
* [HISTORY](#HISTORY)
NAME
----
Net::FTP - FTP Client class
SYNOPSIS
--------
```
use Net::FTP;
$ftp = Net::FTP->new("some.host.name", Debug => 0)
or die "Cannot connect to some.host.name: $@";
$ftp->login("anonymous",'-anonymous@')
or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub")
or die "Cannot change working directory ", $ftp->message;
$ftp->get("that.file")
or die "get failed ", $ftp->message;
$ftp->quit;
```
DESCRIPTION
-----------
`Net::FTP` is a class implementing a simple FTP client in Perl as described in RFC959. It provides wrappers for the commonly used subset of the RFC959 commands. If <IO::Socket::IP> or <IO::Socket::INET6> is installed it also provides support for IPv6 as defined in RFC2428. And with <IO::Socket::SSL> installed it provides support for implicit FTPS and explicit FTPS as defined in RFC4217.
The Net::FTP class is a subclass of Net::Cmd and (depending on availability) of IO::Socket::IP, IO::Socket::INET6 or IO::Socket::INET.
### Overview
FTP stands for File Transfer Protocol. It is a way of transferring files between networked machines. The protocol defines a client (whose commands are provided by this module) and a server (not implemented in this module). Communication is always initiated by the client, and the server responds with a message and a status code (and sometimes with data).
The FTP protocol allows files to be sent to or fetched from the server. Each transfer involves a **local file** (on the client) and a **remote file** (on the server). In this module, the same file name will be used for both local and remote if only one is specified. This means that transferring remote file `/path/to/file` will try to put that file in `/path/to/file` locally, unless you specify a local file name.
The protocol also defines several standard **translations** which the file can undergo during transfer. These are ASCII, EBCDIC, binary, and byte. ASCII is the default type, and indicates that the sender of files will translate the ends of lines to a standard representation which the receiver will then translate back into their local representation. EBCDIC indicates the file being transferred is in EBCDIC format. Binary (also known as image) format sends the data as a contiguous bit stream. Byte format transfers the data as bytes, the values of which remain the same regardless of differences in byte size between the two machines (in theory - in practice you should only use this if you really know what you're doing). This class does not support the EBCDIC or byte formats, and will default to binary instead if they are attempted.
###
Class Methods
`new([$host][, %options])`
This is the constructor for a new Net::FTP object. `$host` is the name of the remote host to which an FTP connection is required.
`$host` is optional. If `$host` is not given then it may instead be passed as the `Host` option described below.
`%options` are passed in a hash like fashion, using key and value pairs. Possible options are:
**Host** - FTP host to connect to. It may be a single scalar, as defined for the `PeerAddr` option in <IO::Socket::INET>, or a reference to an array with hosts to try in turn. The ["host"](#host) method will return the value which was used to connect to the host.
**Firewall** - The name of a machine which acts as an FTP firewall. This can be overridden by an environment variable `FTP_FIREWALL`. If specified, and the given host cannot be directly connected to, then the connection is made to the firewall machine and the string `@hostname` is appended to the login identifier. This kind of setup is also referred to as an ftp proxy.
**FirewallType** - The type of firewall running on the machine indicated by **Firewall**. This can be overridden by an environment variable `FTP_FIREWALL_TYPE`. For a list of permissible types, see the description of ftp\_firewall\_type in <Net::Config>.
**BlockSize** - This is the block size that Net::FTP will use when doing transfers. (defaults to 10240)
**Port** - The port number to connect to on the remote machine for the FTP connection
**SSL** - If the connection should be done from start with SSL, contrary to later upgrade with `starttls`.
**SSL\_\*** - SSL arguments which will be applied when upgrading the control or data connection to SSL. You can use SSL arguments as documented in <IO::Socket::SSL>, but it will usually use the right arguments already.
**Timeout** - Set a timeout value in seconds (defaults to 120)
**Debug** - debug level (see the debug method in <Net::Cmd>)
**Passive** - If set to a non-zero value then all data transfers will be done using passive mode. If set to zero then data transfers will be done using active mode. If the machine is connected to the Internet directly, both passive and active mode should work equally well. Behind most firewall and NAT configurations passive mode has a better chance of working. However, in some rare firewall configurations, active mode actually works when passive mode doesn't. Some really old FTP servers might not implement passive transfers. If not specified, then the transfer mode is set by the environment variable `FTP_PASSIVE` or if that one is not set by the settings done by the *libnetcfg* utility. If none of these apply then passive mode is used.
**Hash** - If given a reference to a file handle (e.g., `\*STDERR`), print hash marks (#) on that filehandle every 1024 bytes. This simply invokes the `hash()` method for you, so that hash marks are displayed for all transfers. You can, of course, call `hash()` explicitly whenever you'd like.
**LocalAddr** - Local address to use for all socket connections. This argument will be passed to the super class, i.e. <IO::Socket::INET> or <IO::Socket::IP>.
**Domain** - Domain to use, i.e. AF\_INET or AF\_INET6. This argument will be passed to the IO::Socket super class. This can be used to enforce IPv4 even with <IO::Socket::IP> which would default to IPv6. **Family** is accepted as alternative name for **Domain**.
If the constructor fails, undef will be returned and an error message will be in $@.
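For example, combining a few of the options above (the hostname is illustrative):

```
use Net::FTP;

my $ftp = Net::FTP->new('ftp.example.com',
                        Passive => 1,
                        Timeout => 60,
                        Debug   => 0)
    or die "Cannot connect: $@";
```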
###
Object Methods
Unless otherwise stated all methods return either a *true* or *false* value, with *true* meaning that the operation was a success. When a method states that it returns a value, failure will be returned as *undef* or an empty list.
`Net::FTP` inherits from `Net::Cmd` so methods defined in `Net::Cmd` may be used to send commands to the remote FTP server in addition to the methods documented here.
`login([$login[, $password[, $account]]])`
Log into the remote FTP server with the given login information. If no arguments are given then the `Net::FTP` uses the `Net::Netrc` package to lookup the login information for the connected host. If no information is found then a login of *anonymous* is used. If no password is given and the login is *anonymous* then *anonymous@* will be used for password.
If the connection is via a firewall then the `authorize` method will be called with no arguments.
`starttls()`
Upgrade existing plain connection to SSL. The SSL arguments have to be given in `new` already because they are needed for data connections too.
`stoptls()`
Downgrade existing SSL connection back to plain. This is needed to work with some FTP helpers at firewalls, which need to see the PORT and PASV commands and responses to dynamically open the necessary ports. In this case `starttls` is usually only done to protect the authorization.
`prot($level)`
Set what type of data channel protection the client and server will be using. Only `$level`s "C" (clear) and "P" (private) are supported.
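A sketch of explicit FTPS using the three methods above (hostname and credentials are illustrative; the default SSL arguments are usually sufficient, otherwise pass `SSL_*` options to `new`):

```
use Net::FTP;

my $ftp = Net::FTP->new('ftp.example.com') or die "Cannot connect: $@";
$ftp->starttls                or die "starttls failed: ", $ftp->message;
$ftp->login('user', 'secret') or die "Cannot login: ",    $ftp->message;
$ftp->prot('P');              # protect the data channel as well
```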
`host()`
Returns the value used by the constructor, and passed to the IO::Socket super class to connect to the host.
`account($acct)`
Set a string identifying the user's account.
`authorize([$auth[, $resp]])`
This is a protocol used by some firewall ftp proxies. It is used to authorise the user to send data out. If both arguments are not specified then `authorize` uses `Net::Netrc` to do a lookup.
`site($args)`
Send a SITE command to the remote server and wait for a response.
Returns most significant digit of the response code.
`ascii()`
Transfer file in ASCII. CRLF translation will be done if required
`binary()`
Transfer file in binary mode. No transformation will be done.
**Hint**: If both server and client machines use the same line ending for text files, then it will be faster to transfer all files in binary mode.
`type([$type])`
Set or get if files will be transferred in ASCII or binary mode.
`rename($oldname, $newname)`
Rename a file on the remote FTP server from `$oldname` to `$newname`. This is done by sending the RNFR and RNTO commands.
`delete($filename)`
Send a request to the server to delete `$filename`.
`cwd([$dir])`
Attempt to change directory to the directory given in `$dir`. If `$dir` is `".."`, the FTP `CDUP` command is used to attempt to move up one directory. If no directory is given then an attempt is made to change the directory to the root directory.
`cdup()`
Change directory to the parent of the current directory.
`passive([$passive])`
Set or get if data connections will be initiated in passive mode.
`pwd()`
Returns the full pathname of the current directory.
`restart($where)`
Set the byte offset at which to begin the next data transfer. Net::FTP simply records this value and uses it during the next data transfer. For this reason this method will not return an error, but setting it may cause a subsequent data transfer to fail.
`rmdir($dir[, $recurse])`
Remove the directory with the name `$dir`. If `$recurse` is *true* then `rmdir` will attempt to delete everything inside the directory.
`mkdir($dir[, $recurse])`
Create a new directory with the name `$dir`. If `$recurse` is *true* then `mkdir` will attempt to create all the directories in the given path.
Returns the full pathname to the new directory.
`alloc($size[, $record_size])`
The alloc command allows you to give the ftp server a hint about the size of the file about to be transferred using the ALLO ftp command. Some storage systems use this to make intelligent decisions about how to store the file. The `$size` argument represents the size of the file in bytes. The `$record_size` argument indicates a maximum record or page size for files sent with a record or page structure.
The size of the file will be determined, and sent to the server automatically for normal files so that this method need only be called if you are transferring data from a socket, named pipe, or other stream not associated with a normal file.
`ls([$dir])`
Get a directory listing of `$dir`, or the current directory.
In an array context, returns a list of lines returned from the server. In a scalar context, returns a reference to a list.
`dir([$dir])`
Get a directory listing of `$dir`, or the current directory in long format.
In an array context, returns a list of lines returned from the server. In a scalar context, returns a reference to a list.
`get($remote_file[, $local_file[, $where]])`
Get `$remote_file` from the server and store locally. `$local_file` may be a filename or a filehandle. If not specified, the file will be stored in the current directory with the same leafname as the remote file.
If `$where` is given then the first `$where` bytes of the file will not be transferred, and the remaining bytes will be appended to the local file if it already exists.
Returns `$local_file`, or the generated local file name if `$local_file` is not given. If an error was encountered undef is returned.
`put($local_file[, $remote_file])`
Put a file on the remote server. `$local_file` may be a name or a filehandle. If `$local_file` is a filehandle then `$remote_file` must be specified. If `$remote_file` is not specified then the file will be stored in the current directory with the same leafname as `$local_file`.
Returns `$remote_file`, or the generated remote filename if `$remote_file` is not given.
**NOTE**: If for some reason the transfer does not complete and an error is returned, then the contents that had been transferred will not be removed automatically.
`put_unique($local_file[, $remote_file])`
Same as put but uses the `STOU` command.
Returns the name of the file on the server.
`append($local_file[, $remote_file])`
Same as put but appends to the file on the remote server.
Returns `$remote_file`, or the generated remote filename if `$remote_file` is not given.
`unique_name()`
Returns the name of the last file stored on the server using the `STOU` command.
`mdtm($file)`
Returns the *modification time* of the given file
`size($file)`
Returns the size in bytes for the given file as stored on the remote server.
**NOTE**: The size reported is the size of the stored file on the remote server. If the file is subsequently transferred from the server in ASCII mode and the remote server and local machine have different ideas about "End Of Line" then the size of file on the local machine after transfer may be different.
`supported($cmd)`
Returns TRUE if the remote server supports the given command.
`hash([$filehandle_glob_ref[, $bytes_per_hash_mark]])`
Called without parameters, or with the first argument false, hash marks are suppressed. If the first argument is true but not a reference to a file handle glob, then \\*STDERR is used. The second argument is the number of bytes per hash mark printed, and defaults to 1024. In all cases the return value is a reference to an array of two: the filehandle glob reference and the bytes per hash mark.
`feature($name)`
Determine if the server supports the specified feature. The return value is a list of lines the server responded with to describe the options that it supports for the given feature. If the feature is unsupported then the empty list is returned.
```
if ($ftp->feature( 'MDTM' )) {
# Do something
}
if (grep { /\bTLS\b/ } $ftp->feature('AUTH')) {
# Server supports TLS
}
```
The following methods can return different results depending on how they are called. If the user explicitly calls either of the `pasv` or `port` methods then these methods will return a *true* or *false* value. If the user does not call either of these methods then the result will be a reference to a `Net::FTP::dataconn` based object.
`nlst([$dir])`
Send an `NLST` command to the server, with an optional parameter.
`list([$dir])`
Same as `nlst` but using the `LIST` command
`retr($file)`
Begin the retrieval of a file called `$file` from the remote server.
`stor($file)`
Tell the server that you wish to store a file. `$file` is the name of the new file that should be created.
`stou($file)`
Same as `stor` but using the `STOU` command. The name of the unique file which was created on the server will be available via the `unique_name` method after the data connection has been closed.
`appe($file)`
Tell the server that we want to append some data to the end of a file called `$file`. If this file does not exist then create it.
If for some reason you want to have complete control over the data connection, this includes generating it and calling the `response` method when required, then the user can use these methods to do so.
However calling these methods only affects the use of the methods above that can return a data connection. They have no effect on methods `get`, `put`, `put_unique` and those that do not require data connections.
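For example, given a connected and logged-in `$ftp`, a sketch of driving the data connection by hand (the filename is illustrative); see <Net::FTP::dataconn> for the `read` and `close` methods used here:

```
my $data = $ftp->retr('remote.txt')
    or die "retr failed: ", $ftp->message;
my $buf;
while ($data->read($buf, 1024)) {
    print $buf;
}
$data->close;   # closes the data connection and checks the server's reply
```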
`port([$port])`
`eprt([$port])`
Send a `PORT` (IPv4) or `EPRT` (IPv6) command to the server. If `$port` is specified then it is sent to the server. If not, then a listen socket is created and the correct information sent to the server.
`pasv()`
`epsv()`
Tell the server to go into passive mode (`pasv` for IPv4, `epsv` for IPv6). Returns the text that represents the port on which the server is listening, this text is in a suitable form to send to another ftp server using the `port` or `eprt` method.
The following methods can be used to transfer files between two remote servers, providing that these two servers can connect directly to each other.
`pasv_xfer($src_file, $dest_server[, $dest_file ])`
This method will do a file transfer between two remote ftp servers. If `$dest_file` is omitted then the leaf name of `$src_file` will be used.
`pasv_xfer_unique($src_file, $dest_server[, $dest_file ])`
Like `pasv_xfer` but the file is stored on the remote server using the STOU command.
`pasv_wait($non_pasv_server)`
This method can be used to wait for a transfer to complete between a passive server and a non-passive server. The method should be called on the passive server with the `Net::FTP` object for the non-passive server passed as an argument.
`abort()`
Abort the current data transfer.
`quit()`
Send the QUIT command to the remote FTP server and close the socket connection.
###
Methods for the Adventurous
`quot($cmd[, $args])`
Send a command, that Net::FTP does not directly support, to the remote server and wait for a response.
Returns most significant digit of the response code.
**WARNING** This call should only be used on commands that do not require data connections. Misuse of this method can hang the connection.
`can_inet6()`
Returns whether we can use IPv6.
`can_ssl()`
Returns whether we can use SSL.
###
The dataconn Class
Some of the methods defined in `Net::FTP` return an object which will be derived from the `Net::FTP::dataconn` class. See <Net::FTP::dataconn> for more details.
### Unimplemented
The following RFC959 commands have not been implemented:
`SMNT` Mount a different file system structure without changing login or accounting information.
`HELP` Ask the server for "helpful information" (that's what the RFC says) on the commands it accepts.
`MODE` Specifies transfer mode (stream, block or compressed) for file to be transferred.
`SYST` Request remote server system identification.
`STAT` Request remote server status.
`STRU` Specifies file structure for file to be transferred.
`REIN` Reinitialize the connection, flushing all I/O and account information.
EXAMPLES
--------
For an example of the use of Net::FTP see
<https://www.csh.rit.edu/~adam/Progs/>
`autoftp` is a program that can retrieve, send, or list files via the FTP protocol in a non-interactive manner.
EXPORTS
-------
*None*.
KNOWN BUGS
----------
See <https://rt.cpan.org/Dist/Display.html?Status=Active&Queue=libnet>.
###
Reporting Bugs
When reporting bugs/problems please include as much information as possible. It may be difficult for me to reproduce the problem as almost every setup is different.
A small script which yields the problem will probably be of help. It would also be useful if this script was run with the extra options `Debug => 1` passed to the constructor, and the output sent with the bug report. If you cannot include a small script then please include a Debug trace from a run of your program which does yield the problem.
SEE ALSO
--------
<Net::Netrc>, <Net::Cmd>, <IO::Socket::SSL>;
[ftp(1)](http://man.he.net/man1/ftp), [ftpd(8)](http://man.he.net/man8/ftpd);
<https://www.ietf.org/rfc/rfc959.txt>, <https://www.ietf.org/rfc/rfc2428.txt>, <https://www.ietf.org/rfc/rfc4217.txt>.
ACKNOWLEDGEMENTS
----------------
Henry Gabryjelski <[[email protected]](mailto:[email protected])> - for the suggestion of creating directories recursively.
Nathan Torkington <[[email protected]](mailto:[email protected])> - for some input on the documentation.
Roderick Schertler <[[email protected]](mailto:[email protected])> - for various inputs
AUTHOR
------
Graham Barr <[[email protected]](mailto:[email protected])>.
Steve Hay <[[email protected]](mailto:[email protected])> is now maintaining libnet as of version 1.22\_02.
COPYRIGHT
---------
Copyright (C) 1995-2004 Graham Barr. All rights reserved.
Copyright (C) 2013-2017, 2020 Steve Hay. All rights reserved.
LICENCE
-------
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself, i.e. under the terms of either the GNU General Public License or the Artistic License, as specified in the *LICENCE* file.
VERSION
-------
Version 3.14
DATE
----
23 Dec 2020
HISTORY
-------
See the *Changes* file.
| programming_docs |
perl perlreref perlreref
=========
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
+ [OPERATORS](#OPERATORS)
+ [SYNTAX](#SYNTAX)
+ [ESCAPE SEQUENCES](#ESCAPE-SEQUENCES)
+ [CHARACTER CLASSES](#CHARACTER-CLASSES)
+ [ANCHORS](#ANCHORS)
+ [QUANTIFIERS](#QUANTIFIERS)
+ [EXTENDED CONSTRUCTS](#EXTENDED-CONSTRUCTS)
+ [VARIABLES](#VARIABLES)
+ [FUNCTIONS](#FUNCTIONS)
+ [TERMINOLOGY](#TERMINOLOGY)
- [Titlecase](#Titlecase)
- [Foldcase](#Foldcase)
* [AUTHOR](#AUTHOR)
* [SEE ALSO](#SEE-ALSO)
* [THANKS](#THANKS)
NAME
----
perlreref - Perl Regular Expressions Reference
DESCRIPTION
-----------
This is a quick reference to Perl's regular expressions. For full information see <perlre> and <perlop>, as well as the ["SEE ALSO"](#SEE-ALSO) section in this document.
### OPERATORS
`=~` determines to which variable the regex is applied. In its absence, $\_ is used.
```
$var =~ /foo/;
```
`!~` determines to which variable the regex is applied, and negates the result of the match; it returns false if the match succeeds, and true if it fails.
```
$var !~ /foo/;
```
`m/pattern/msixpogcdualn` searches a string for a pattern match, applying the given options.
```
m Multiline mode - ^ and $ match internal lines
s match as a Single line - . matches \n
i case-Insensitive
x eXtended legibility - free whitespace and comments
p Preserve a copy of the matched string -
${^PREMATCH}, ${^MATCH}, ${^POSTMATCH} will be defined.
o compile pattern Once
g Global - all occurrences
c don't reset pos on failed matches when using /g
a restrict \d, \s, \w and [:posix:] to match ASCII only
aa (two a's) also /i matches exclude ASCII/non-ASCII
l match according to current locale
u match according to Unicode rules
d match according to native rules unless something indicates
Unicode
n Non-capture mode. Don't let () fill in $1, $2, etc...
```
If 'pattern' is an empty string, the last *successfully* matched regex is used. Delimiters other than '/' may be used for both this operator and the following ones. The leading `m` can be omitted if the delimiter is '/'.
`qr/pattern/msixpodualn` lets you store a regex in a variable, or pass one around. Modifiers as for `m//`, and are stored within the regex.
`s/pattern/replacement/msixpogcedual` substitutes matches of 'pattern' with 'replacement'. Modifiers as for `m//`, with two additions:
```
e Evaluate 'replacement' as an expression
r Return substitution and leave the original string untouched.
```
'e' may be specified multiple times. 'replacement' is interpreted as a double quoted string unless a single-quote (`'`) is the delimiter.
`m?pattern?` is like `m/pattern/` but matches only once. No alternate delimiters can be used. Must be reset with reset().
### SYNTAX
```
\ Escapes the character immediately following it
. Matches any single character except a newline (unless /s is
used)
^ Matches at the beginning of the string (or line, if /m is used)
$ Matches at the end of the string (or line, if /m is used)
* Matches the preceding element 0 or more times
+ Matches the preceding element 1 or more times
? Matches the preceding element 0 or 1 times
{...} Specifies a range of occurrences for the element preceding it
[...] Matches any one of the characters contained within the brackets
(...) Groups subexpressions for capturing to $1, $2...
(?:...) Groups subexpressions without capturing (cluster)
| Matches either the subexpression preceding or following it
\g1 or \g{1}, \g2 ... Matches the text from the Nth group
\1, \2, \3 ... Matches the text from the Nth group
\g-1 or \g{-1}, \g-2 ... Matches the text from the Nth previous group
\g{name} Named backreference
\k<name> Named backreference
\k'name' Named backreference
(?P=name) Named backreference (python syntax)
```
###
ESCAPE SEQUENCES
These work as in normal strings.
```
\a Alarm (beep)
\e Escape
\f Formfeed
\n Newline
\r Carriage return
\t Tab
\037 Char whose ordinal is the 3 octal digits, max \777
\o{2307} Char whose ordinal is the octal number, unrestricted
\x7f Char whose ordinal is the 2 hex digits, max \xFF
\x{263a} Char whose ordinal is the hex number, unrestricted
\cx Control-x
\N{name} A named Unicode character or character sequence
\N{U+263D} A Unicode character by hex ordinal
\l Lowercase next character
\u Titlecase next character
\L Lowercase until \E
\U Uppercase until \E
\F Foldcase until \E
\Q Disable pattern metacharacters until \E
\E End modification
```
For Titlecase, see ["Titlecase"](#Titlecase).
This one works differently from normal strings:
```
\b An assertion, not backspace, except in a character class
```
###
CHARACTER CLASSES
```
[amy] Match 'a', 'm' or 'y'
[f-j] Dash specifies "range"
[f-j-] Dash escaped or at start or end means 'dash'
[^f-j] Caret indicates "match any character _except_ these"
```
The following sequences (except `\N`) work within or without a character class. The first six are locale aware, all are Unicode aware. See <perllocale> and <perlunicode> for details.
```
\d A digit
\D A nondigit
\w A word character
\W A non-word character
\s A whitespace character
\S A non-whitespace character
\h A horizontal whitespace
\H A non horizontal whitespace
\N A non newline (when not followed by '{NAME}';
not valid in a character class; equivalent to [^\n]; it's
like '.' without /s modifier)
\v A vertical whitespace
\V A non vertical whitespace
\R A generic newline (?>\v|\x0D\x0A)
\pP Match P-named (Unicode) property
\p{...} Match Unicode property with name longer than 1 character
\PP Match non-P
\P{...} Match lack of Unicode property with name longer than 1 char
\X Match Unicode extended grapheme cluster
```
POSIX character classes and their Unicode and Perl equivalents:
```
             ASCII-           Full-
 POSIX       range            range         backslash
 [[:...:]]   \p{...}          \p{...}       sequence   Description
 -----------------------------------------------------------------------
 alnum       PosixAlnum       XPosixAlnum              'alpha' plus 'digit'
 alpha       PosixAlpha       XPosixAlpha              Alphabetic characters
 ascii       ASCII                                     Any ASCII character
 blank       PosixBlank       XPosixBlank      \h      Horizontal whitespace;
                                                         full-range also
                                                         written as
                                                         \p{HorizSpace} (GNU
                                                         extension)
 cntrl       PosixCntrl       XPosixCntrl              Control characters
 digit       PosixDigit       XPosixDigit      \d      Decimal digits
 graph       PosixGraph       XPosixGraph              'alnum' plus 'punct'
 lower       PosixLower       XPosixLower              Lowercase characters
 print       PosixPrint       XPosixPrint              'graph' plus 'space',
                                                         but not any Controls
 punct       PosixPunct       XPosixPunct              Punctuation and Symbols
                                                         in ASCII-range; just
                                                         punct outside it
 space       PosixSpace       XPosixSpace      \s      Whitespace
 upper       PosixUpper       XPosixUpper              Uppercase characters
 word        PosixWord        XPosixWord       \w      'alnum' + Unicode marks
                                                         + connectors, like
                                                         '_' (Perl extension)
 xdigit      ASCII_Hex_Digit  XPosixXDigit             Hexadecimal digit,
                                                         ASCII-range is
                                                         [0-9A-Fa-f]
```
Also, various synonyms like `\p{Alpha}` for `\p{XPosixAlpha}`; all listed in ["Properties accessible through \p{} and \P{}" in perluniprops](perluniprops#Properties-accessible-through-%5Cp%7B%7D-and-%5CP%7B%7D)
Within a character class:
```
POSIX traditional Unicode
[:digit:] \d \p{Digit}
[:^digit:] \D \P{Digit}
```
### ANCHORS
All are zero-width assertions.
```
^ Match string start (or line, if /m is used)
$ Match string end (or line, if /m is used) or before newline
\b{} Match boundary of type specified within the braces
\B{} Match wherever \b{} doesn't match
\b Match word boundary (between \w and \W)
\B Match except at word boundary (between \w and \w or \W and \W)
\A Match string start (regardless of /m)
\Z Match string end (before optional newline)
\z Match absolute string end
\G Match where previous m//g left off
\K Keep the stuff left of the \K, don't include it in $&
```
### QUANTIFIERS
Quantifiers are greedy by default and match the **longest** leftmost.
```
Maximal Minimal Possessive Allowed range
------- ------- ---------- -------------
{n,m} {n,m}? {n,m}+ Must occur at least n times
but no more than m times
{n,} {n,}? {n,}+ Must occur at least n times
{,n} {,n}? {,n}+ Must occur at most n times
{n} {n}? {n}+ Must occur exactly n times
* *? *+ 0 or more times (same as {0,})
+ +? ++ 1 or more times (same as {1,})
? ?? ?+ 0 or 1 time (same as {0,1})
```
The possessive forms (new in Perl 5.10) prevent backtracking: what gets matched by a pattern with a possessive quantifier will not be backtracked into, even if that causes the whole match to fail.
###
EXTENDED CONSTRUCTS
```
(?#text) A comment
(?:...) Groups subexpressions without capturing (cluster)
(?pimsx-imsx:...) Enable/disable option (as per m// modifiers)
(?=...) Zero-width positive lookahead assertion
(*pla:...) Same, starting in 5.32; experimentally in 5.28
(*positive_lookahead:...) Same, same versions as *pla
(?!...) Zero-width negative lookahead assertion
(*nla:...) Same, starting in 5.32; experimentally in 5.28
(*negative_lookahead:...) Same, same versions as *nla
(?<=...) Zero-width positive lookbehind assertion
(*plb:...) Same, starting in 5.32; experimentally in 5.28
(*positive_lookbehind:...) Same, same versions as *plb
(?<!...) Zero-width negative lookbehind assertion
(*nlb:...) Same, starting in 5.32; experimentally in 5.28
(*negative_lookbehind:...) Same, same versions as *nlb
(?>...) Grab what we can, prohibit backtracking
(*atomic:...) Same, starting in 5.32; experimentally in 5.28
(?|...) Branch reset
(?<name>...) Named capture
(?'name'...) Named capture
(?P<name>...) Named capture (python syntax)
(?[...]) Extended bracketed character class
(?{ code }) Embedded code, return value becomes $^R
(??{ code }) Dynamic regex, return value used as regex
(?N) Recurse into subpattern number N
(?-N), (?+N) Recurse into Nth previous/next subpattern
(?R), (?0) Recurse at the beginning of the whole pattern
(?&name) Recurse into a named subpattern
(?P>name) Recurse into a named subpattern (python syntax)
(?(cond)yes|no)
(?(cond)yes) Conditional expression, where "(cond)" can be:
(?=pat) lookahead; also (*pla:pat)
(*positive_lookahead:pat)
(?!pat) negative lookahead; also (*nla:pat)
(*negative_lookahead:pat)
(?<=pat) lookbehind; also (*plb:pat)
(*positive_lookbehind:pat)
(?<!pat) negative lookbehind; also (*nlb:pat)
(*negative_lookbehind:pat)
(N) subpattern N has matched something
(<name>) named subpattern has matched something
('name') named subpattern has matched something
(?{code}) code condition
(R) true if recursing
(RN) true if recursing into Nth subpattern
(R&name) true if recursing into named subpattern
(DEFINE) always false, no no-pattern allowed
```
### VARIABLES
```
$_ Default variable for operators to use
$` Everything prior to matched string
$& Entire matched string
$' Everything after the matched string
${^PREMATCH} Everything prior to matched string
${^MATCH} Entire matched string
${^POSTMATCH} Everything after the matched string
```
Note to those still using Perl 5.18 or earlier: The use of `$``, `$&` or `$'` will slow down **all** regex use within your program. Consult <perlvar> for `@-` to see equivalent expressions that won't cause slow down. See also <Devel::SawAmpersand>. Starting with Perl 5.10, you can also use the equivalent variables `${^PREMATCH}`, `${^MATCH}` and `${^POSTMATCH}`, but for them to be defined, you have to specify the `/p` (preserve) modifier on your regular expression. In Perl 5.20, the use of `$``, `$&` and `$'` makes no speed difference.
```
$1, $2 ... hold the Xth captured expr
$+ Last parenthesized pattern match
$^N Holds the most recently closed capture
$^R Holds the result of the last (?{...}) expr
@- Offsets of starts of groups. $-[0] holds start of whole match
@+ Offsets of ends of groups. $+[0] holds end of whole match
%+ Named capture groups
%- Named capture groups, as array refs
```
Captured groups are numbered according to their *opening* paren.
### FUNCTIONS
```
lc Lowercase a string
lcfirst Lowercase first char of a string
uc Uppercase a string
ucfirst Titlecase first char of a string
fc Foldcase a string
pos Return or set current match position
quotemeta Quote metacharacters
reset Reset m?pattern? status
study Analyze string for optimizing matching
split Use a regex to split a string into parts
```
The first five of these are like the escape sequences `\L`, `\l`, `\U`, `\u`, and `\F`. For Titlecase, see ["Titlecase"](#Titlecase); For Foldcase, see ["Foldcase"](#Foldcase).
### TERMINOLOGY
#### Titlecase
Unicode concept which most often is equal to uppercase, but for certain characters like the German "sharp s" there is a difference.
#### Foldcase
Unicode form that is useful when comparing strings regardless of case, as certain characters have complex one-to-many case mappings. Primarily a variant of lowercase.
AUTHOR
------
Iain Truskett. Updated by the Perl 5 Porters.
This document may be distributed under the same terms as Perl itself.
SEE ALSO
---------
* <perlretut> for a tutorial on regular expressions.
* <perlrequick> for a rapid tutorial.
* <perlre> for more details.
* <perlvar> for details on the variables.
* <perlop> for details on the operators.
* <perlfunc> for details on the functions.
* <perlfaq6> for FAQs on regular expressions.
* <perlrebackslash> for a reference on backslash sequences.
* <perlrecharclass> for a reference on character classes.
* The <re> module to alter behaviour and aid debugging.
* ["Debugging Regular Expressions" in perldebug](perldebug#Debugging-Regular-Expressions)
* <perluniintro>, <perlunicode>, <charnames> and <perllocale> for details on regexes and internationalisation.
* *Mastering Regular Expressions* by Jeffrey Friedl (<http://oreilly.com/catalog/9780596528126/>) for a thorough grounding and reference on the topic.
THANKS
------
David P.C. Wollmann, Richard Soderberg, Sean M. Burke, Tom Christiansen, Jim Cromie, and Jeffrey Goff for useful advice.
perl Encode::Unicode::UTF7 Encode::Unicode::UTF7
=====================
CONTENTS
--------
* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [ABSTRACT](#ABSTRACT)
* [In Practice](#In-Practice)
* [SEE ALSO](#SEE-ALSO)
NAME
----
Encode::Unicode::UTF7 -- UTF-7 encoding
SYNOPSIS
--------
```
use Encode qw/encode decode/;
$utf7 = encode("UTF-7", $utf8);
$utf8 = decode("UTF-7", $utf7);
```
ABSTRACT
--------
This module implements the UTF-7 encoding documented in RFC 2152. UTF-7, as its name suggests, is a 7-bit re-encoded version of UTF-16BE. It is designed to be MTA-safe and was expected to become a standard way to exchange Unicode text via mail. But with the advent of UTF-8 and 8-bit compliant MTAs, UTF-7 is hardly ever used.
For that reason, UTF-7 was not supported by Encode until version 1.95. But Unicode::String, a module by Gisle Aas which adds Unicode support to non-UTF-8-savvy perls, did support UTF-7, so UTF-7 support was added so that Encode could supersede Unicode::String 100%.
In Practice
------------
When you want to encode Unicode for mails and web pages, however, do not use UTF-7 unless you are sure your recipients and readers can handle it. Very few MUAs and WWW browsers support it these days (only Mozilla seems to). For general cases, use UTF-8 for the message body and MIME-Header for headers instead.
SEE ALSO
---------
[Encode](encode), <Encode::Unicode>, <Unicode::String>
RFC 2152 <http://www.ietf.org/rfc/rfc2152.txt>
perl Test2::EventFacet::Hub Test2::EventFacet::Hub
======================
CONTENTS
--------
* [NAME](#NAME)
* [DESCRIPTION](#DESCRIPTION)
* [FACET FIELDS](#FACET-FIELDS)
* [SOURCE](#SOURCE)
* [MAINTAINERS](#MAINTAINERS)
* [AUTHORS](#AUTHORS)
* [COPYRIGHT](#COPYRIGHT)
NAME
----
Test2::EventFacet::Hub - Facet for the hubs an event passes through.
DESCRIPTION
-----------
These are a record of the hubs an event passes through. Most recent hub is the first one in the list.
FACET FIELDS
-------------
* `$string = $trace->{details}`, `$string = $trace->details()` - The hub class or subclass.
* `$int = $trace->{pid}`, `$int = $trace->pid()` - PID of the hub this event was sent to.
* `$int = $trace->{tid}`, `$int = $trace->tid()` - The thread ID of the hub the event was sent to.
* `$hid = $trace->{hid}`, `$hid = $trace->hid()` - The ID of the hub that the event was sent to.
* `$huuid = $trace->{huuid}`, `$huuid = $trace->huuid()` - The UUID of the hub that the event was sent to.
* `$int = $trace->{nested}`, `$int = $trace->nested()` - How deeply nested the hub was.
* `$bool = $trace->{buffered}`, `$bool = $trace->buffered()` - True if the event was buffered and not sent to the formatter independent of a parent (this should never be set when nested is `0` or `undef`).
SOURCE
------
The source code repository for Test2 can be found at *http://github.com/Test-More/test-more/*.
MAINTAINERS
-----------
Chad Granum <[email protected]>
AUTHORS
-------
Chad Granum <[email protected]>
COPYRIGHT
---------
Copyright 2020 Chad Granum <[email protected]>.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See *http://dev.perl.org/licenses/*
perl TAP::Parser::Iterator::Stream TAP::Parser::Iterator::Stream
=============================
CONTENTS
--------
* [NAME](#NAME)
* [VERSION](#VERSION)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
* [METHODS](#METHODS)
+ [Class Methods](#Class-Methods)
- [new](#new)
+ [Instance Methods](#Instance-Methods)
- [next](#next)
- [next\_raw](#next_raw)
- [wait](#wait)
- [exit](#exit)
* [ATTRIBUTION](#ATTRIBUTION)
* [SEE ALSO](#SEE-ALSO)
NAME
----
TAP::Parser::Iterator::Stream - Iterator for filehandle-based TAP sources
VERSION
-------
Version 3.44
SYNOPSIS
--------
```
use TAP::Parser::Iterator::Stream;
open( TEST, 'test.tap' );
my $it = TAP::Parser::Iterator::Stream->new(\*TEST);
my $line = $it->next;
```
DESCRIPTION
-----------
This is a simple iterator wrapper for reading from filehandles, used by <TAP::Parser>. Unless you're writing a plugin or subclassing, you probably won't need to use this module directly.
METHODS
-------
###
Class Methods
#### `new`
Create an iterator. Expects one argument containing a filehandle.
###
Instance Methods
#### `next`
Iterate through it, of course.
#### `next_raw`
Iterate raw input without applying any fixes for quirky input syntax.
#### `wait`
Get the wait status for this iterator. Always returns zero.
#### `exit`
Get the exit status for this iterator. Always returns zero.
ATTRIBUTION
-----------
Originally ripped off from <Test::Harness>.
SEE ALSO
---------
<TAP::Object>, <TAP::Parser>, <TAP::Parser::Iterator>.
| programming_docs |
scikit_learn scikit-learn scikit-learn
============
[Classification](https://scikit-learn.org/1.1/supervised_learning.html#supervised-learning)
Identifying which category an object belongs to.
**Applications:** Spam detection, image recognition. **Algorithms:** [SVM](modules/svm#svm-classification), [nearest neighbors](modules/neighbors#classification), [random forest](modules/ensemble#forest), and [more...](https://scikit-learn.org/1.1/supervised_learning.html#supervised-learning)
[Regression](https://scikit-learn.org/1.1/supervised_learning.html#supervised-learning)
Predicting a continuous-valued attribute associated with an object.
**Applications:** Drug response, Stock prices. **Algorithms:** [SVR](modules/svm#svm-regression), [nearest neighbors](modules/neighbors#regression), [random forest](modules/ensemble#forest), and [more...](https://scikit-learn.org/1.1/supervised_learning.html#supervised-learning)
[Clustering](modules/clustering#clustering)
Automatic grouping of similar objects into sets.
**Applications:** Customer segmentation, Grouping experiment outcomes **Algorithms:** [k-Means](modules/clustering#k-means), [spectral clustering](modules/clustering#spectral-clustering), [mean-shift](modules/clustering#mean-shift), and [more...](modules/clustering#clustering)
[Dimensionality reduction](modules/decomposition#decompositions)
Reducing the number of random variables to consider.
**Applications:** Visualization, Increased efficiency **Algorithms:** [PCA](modules/decomposition#pca), [feature selection](modules/feature_selection#feature-selection), [non-negative matrix factorization](modules/decomposition#nmf), and [more...](modules/decomposition#decompositions)
[Model selection](https://scikit-learn.org/1.1/model_selection.html#model-selection)
Comparing, validating and choosing parameters and models.
**Applications:** Improved accuracy via parameter tuning **Algorithms:** [grid search](modules/grid_search#grid-search), [cross validation](modules/cross_validation#cross-validation), [metrics](modules/model_evaluation#model-evaluation), and [more...](https://scikit-learn.org/1.1/model_selection.html)
[Preprocessing](modules/preprocessing#preprocessing)
Feature extraction and normalization.
**Applications:** Transforming input data such as text for use with machine learning algorithms. **Algorithms:** [preprocessing](modules/preprocessing#preprocessing), [feature extraction](modules/feature_extraction#feature-extraction), and [more...](modules/preprocessing#preprocessing)
scikit_learn 7. Dataset loading utilities 7. Dataset loading utilities
============================
The `sklearn.datasets` package embeds some small toy datasets as introduced in the [Getting Started](tutorial/basic/tutorial#loading-example-dataset) section.
This package also features helpers to fetch larger datasets commonly used by the machine learning community to benchmark algorithms on data that comes from the ‘real world’.
To evaluate the impact of the scale of the dataset (`n_samples` and `n_features`) while controlling the statistical properties of the data (typically the correlation and informativeness of the features), it is also possible to generate synthetic data.
**General dataset API.** There are three main kinds of dataset interfaces that can be used to get datasets depending on the desired type of dataset.
**The dataset loaders.** They can be used to load small standard datasets, described in the [Toy datasets](https://scikit-learn.org/1.1/datasets/toy_dataset.html#toy-datasets) section.
**The dataset fetchers.** They can be used to download and load larger datasets, described in the [Real world datasets](https://scikit-learn.org/1.1/datasets/real_world.html#real-world-datasets) section.
Both loaders and fetchers functions return a [`Bunch`](modules/generated/sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch") object holding at least two items: an array of shape `n_samples` \* `n_features` with key `data` (except for 20newsgroups) and a numpy array of length `n_samples`, containing the target values, with key `target`.
The Bunch object is a dictionary that exposes its keys as attributes. For more information about Bunch object, see [`Bunch`](modules/generated/sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch").
It’s also possible for almost all of these functions to constrain the output to be a tuple containing only the data and the target, by setting the `return_X_y` parameter to `True`.
The datasets also contain a full description in their `DESCR` attribute and some contain `feature_names` and `target_names`. See the dataset descriptions below for details.
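A minimal sketch of these conventions, using the bundled iris loader (the exact shapes shown assume the standard iris data shipped with scikit-learn):

```
>>> from sklearn.datasets import load_iris
>>> bunch = load_iris()                  # a Bunch: keys are exposed as attributes
>>> bunch.data.shape                     # (n_samples, n_features) array under 'data'
(150, 4)
>>> bunch.target.shape                   # target values under 'target'
(150,)
>>> bunch.target_names
array(['setosa', 'versicolor', 'virginica'], dtype='<U10')
>>> X, y = load_iris(return_X_y=True)    # restrict the output to (data, target)
>>> X.shape, y.shape
((150, 4), (150,))
```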
**The dataset generation functions.** They can be used to generate controlled synthetic datasets, described in the [Generated datasets](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators) section.
These functions return a tuple `(X, y)` consisting of a `n_samples` \* `n_features` numpy array `X` and an array of length `n_samples` containing the targets `y`.
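For instance, a short sketch with one of these generators (here `make_classification`; the parameter values are arbitrary choices for illustration):

```
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=200, n_features=10,
...                            n_informative=4, random_state=0)
>>> X.shape
(200, 10)
>>> y.shape
(200,)
```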
In addition, there are also miscellaneous tools to load datasets of other formats or from other locations, described in the [Loading other datasets](https://scikit-learn.org/1.1/datasets/loading_other_datasets.html#loading-other-datasets) section.
* [7.1. Toy datasets](https://scikit-learn.org/1.1/datasets/toy_dataset.html)
+ [7.1.1. Boston house prices dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#boston-house-prices-dataset)
+ [7.1.2. Iris plants dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#iris-plants-dataset)
+ [7.1.3. Diabetes dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#diabetes-dataset)
+ [7.1.4. Optical recognition of handwritten digits dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#optical-recognition-of-handwritten-digits-dataset)
+ [7.1.5. Linnerrud dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#linnerrud-dataset)
+ [7.1.6. Wine recognition dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#wine-recognition-dataset)
+ [7.1.7. Breast cancer wisconsin (diagnostic) dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#breast-cancer-wisconsin-diagnostic-dataset)
* [7.2. Real world datasets](https://scikit-learn.org/1.1/datasets/real_world.html)
+ [7.2.1. The Olivetti faces dataset](https://scikit-learn.org/1.1/datasets/real_world.html#the-olivetti-faces-dataset)
+ [7.2.2. The 20 newsgroups text dataset](https://scikit-learn.org/1.1/datasets/real_world.html#the-20-newsgroups-text-dataset)
+ [7.2.3. The Labeled Faces in the Wild face recognition dataset](https://scikit-learn.org/1.1/datasets/real_world.html#the-labeled-faces-in-the-wild-face-recognition-dataset)
+ [7.2.4. Forest covertypes](https://scikit-learn.org/1.1/datasets/real_world.html#forest-covertypes)
+ [7.2.5. RCV1 dataset](https://scikit-learn.org/1.1/datasets/real_world.html#rcv1-dataset)
+ [7.2.6. Kddcup 99 dataset](https://scikit-learn.org/1.1/datasets/real_world.html#kddcup-99-dataset)
+ [7.2.7. California Housing dataset](https://scikit-learn.org/1.1/datasets/real_world.html#california-housing-dataset)
* [7.3. Generated datasets](https://scikit-learn.org/1.1/datasets/sample_generators.html)
+ [7.3.1. Generators for classification and clustering](https://scikit-learn.org/1.1/datasets/sample_generators.html#generators-for-classification-and-clustering)
+ [7.3.2. Generators for regression](https://scikit-learn.org/1.1/datasets/sample_generators.html#generators-for-regression)
+ [7.3.3. Generators for manifold learning](https://scikit-learn.org/1.1/datasets/sample_generators.html#generators-for-manifold-learning)
+ [7.3.4. Generators for decomposition](https://scikit-learn.org/1.1/datasets/sample_generators.html#generators-for-decomposition)
* [7.4. Loading other datasets](https://scikit-learn.org/1.1/datasets/loading_other_datasets.html)
+ [7.4.1. Sample images](https://scikit-learn.org/1.1/datasets/loading_other_datasets.html#sample-images)
+ [7.4.2. Datasets in svmlight / libsvm format](https://scikit-learn.org/1.1/datasets/loading_other_datasets.html#datasets-in-svmlight-libsvm-format)
+ [7.4.3. Downloading datasets from the openml.org repository](https://scikit-learn.org/1.1/datasets/loading_other_datasets.html#downloading-datasets-from-the-openml-org-repository)
+ [7.4.4. Loading from external datasets](https://scikit-learn.org/1.1/datasets/loading_other_datasets.html#loading-from-external-datasets)
scikit_learn An introduction to machine learning with scikit-learn An introduction to machine learning with scikit-learn
=====================================================
Machine learning: the problem setting
-------------------------------------
In general, a learning problem considers a set of n [samples](https://en.wikipedia.org/wiki/Sample_(statistics)) of data and then tries to predict properties of unknown data. If each sample is more than a single number and, for instance, a multi-dimensional entry (aka [multivariate](https://en.wikipedia.org/wiki/Multivariate_random_variable) data), it is said to have several attributes or **features**.
Learning problems fall into a few categories:
* [supervised learning](https://en.wikipedia.org/wiki/Supervised_learning), in which the data comes with additional attributes that we want to predict ([Click here](https://scikit-learn.org/1.1/supervised_learning.html#supervised-learning) to go to the scikit-learn supervised learning page).This problem can be either:
+ [classification](https://en.wikipedia.org/wiki/Classification_in_machine_learning): samples belong to two or more classes and we want to learn from already labeled data how to predict the class of unlabeled data. An example of a classification problem would be handwritten digit recognition, in which the aim is to assign each input vector to one of a finite number of discrete categories. Another way to think of classification is as a discrete (as opposed to continuous) form of supervised learning where one has a limited number of categories and for each of the n samples provided, one is to try to label them with the correct category or class.
+ [regression](https://en.wikipedia.org/wiki/Regression_analysis): if the desired output consists of one or more continuous variables, then the task is called *regression*. An example of a regression problem would be the prediction of the length of a salmon as a function of its age and weight.
* [unsupervised learning](https://en.wikipedia.org/wiki/Unsupervised_learning), in which the training data consists of a set of input vectors x without any corresponding target values. The goal in such problems may be to discover groups of similar examples within the data, where it is called [clustering](https://en.wikipedia.org/wiki/Cluster_analysis), or to determine the distribution of data within the input space, known as [density estimation](https://en.wikipedia.org/wiki/Density_estimation), or to project the data from a high-dimensional space down to two or three dimensions for the purpose of *visualization* ([Click here](https://scikit-learn.org/1.1/unsupervised_learning.html#unsupervised-learning) to go to the Scikit-Learn unsupervised learning page).
Loading an example dataset
--------------------------
`scikit-learn` comes with a few standard datasets, for instance the [iris](https://en.wikipedia.org/wiki/Iris_flower_data_set) and [digits](https://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits) datasets for classification and the [diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html) for regression.
In the following, we start a Python interpreter from our shell and then load the `iris` and `digits` datasets. Our notational convention is that `$` denotes the shell prompt while `>>>` denotes the Python interpreter prompt:
```
$ python
>>> from sklearn import datasets
>>> iris = datasets.load_iris()
>>> digits = datasets.load_digits()
```
A dataset is a dictionary-like object that holds all the data and some metadata about the data. This data is stored in the `.data` member, which is a `n_samples, n_features` array. In the case of supervised problems, one or more response variables are stored in the `.target` member. More details on the different datasets can be found in the [dedicated section](../../datasets#datasets).
For instance, in the case of the digits dataset, `digits.data` gives access to the features that can be used to classify the digits samples:
```
>>> print(digits.data)
[[ 0. 0. 5. ... 0. 0. 0.]
[ 0. 0. 0. ... 10. 0. 0.]
[ 0. 0. 0. ... 16. 9. 0.]
...
[ 0. 0. 1. ... 6. 0. 0.]
[ 0. 0. 2. ... 12. 0. 0.]
[ 0. 0. 10. ... 12. 1. 0.]]
```
and `digits.target` gives the ground truth for the digit dataset, that is the number corresponding to each digit image that we are trying to learn:
```
>>> digits.target
array([0, 1, 2, ..., 8, 9, 8])
```
Learning and predicting
-----------------------
In the case of the digits dataset, the task is to predict, given an image, which digit it represents. We are given samples of each of the 10 possible classes (the digits zero through nine) on which we *fit* an [estimator](https://en.wikipedia.org/wiki/Estimator) to be able to *predict* the classes to which unseen samples belong.
In scikit-learn, an estimator for classification is a Python object that implements the methods `fit(X, y)` and `predict(T)`.
An example of an estimator is the class `sklearn.svm.SVC`, which implements [support vector classification](https://en.wikipedia.org/wiki/Support_vector_machine). The estimator’s constructor takes as arguments the model’s parameters.
For now, we will consider the estimator as a black box:
```
>>> from sklearn import svm
>>> clf = svm.SVC(gamma=0.001, C=100.)
```
The `clf` (for classifier) estimator instance is first fitted to the model; that is, it must *learn* from the model. This is done by passing our training set to the `fit` method. For the training set, we’ll use all the images from our dataset, except for the last image, which we’ll reserve for our predicting. We select the training set with the `[:-1]` Python syntax, which produces a new array that contains all but the last item from `digits.data`:
```
>>> clf.fit(digits.data[:-1], digits.target[:-1])
SVC(C=100.0, gamma=0.001)
```
Now you can *predict* new values. In this case, you’ll predict using the last image from `digits.data`. By predicting, you’ll determine the image from the training set that best matches the last image.
```
>>> clf.predict(digits.data[-1:])
array([8])
```
The corresponding image is:
As you can see, it is a challenging task: after all, the images are of poor resolution. Do you agree with the classifier?
A complete example of this classification problem is available as an example that you can run and study: [Recognizing hand-written digits](../../auto_examples/classification/plot_digits_classification#sphx-glr-auto-examples-classification-plot-digits-classification-py).
Conventions
-----------
scikit-learn estimators follow certain rules to make their behavior more predictable. These are described in more detail in the [Glossary of Common Terms and API Elements](https://scikit-learn.org/1.1/glossary.html#glossary).
### Type casting
Unless otherwise specified, input will be cast to `float64`:
```
>>> import numpy as np
>>> from sklearn import kernel_approximation
>>> rng = np.random.RandomState(0)
>>> X = rng.rand(10, 2000)
>>> X = np.array(X, dtype='float32')
>>> X.dtype
dtype('float32')
>>> transformer = kernel_approximation.RBFSampler()
>>> X_new = transformer.fit_transform(X)
>>> X_new.dtype
dtype('float64')
```
In this example, `X` is `float32`, which is cast to `float64` by `fit_transform(X)`.
Regression targets are cast to `float64` and classification targets are maintained:
```
>>> from sklearn import datasets
>>> from sklearn.svm import SVC
>>> iris = datasets.load_iris()
>>> clf = SVC()
>>> clf.fit(iris.data, iris.target)
SVC()
>>> list(clf.predict(iris.data[:3]))
[0, 0, 0]
>>> clf.fit(iris.data, iris.target_names[iris.target])
SVC()
>>> list(clf.predict(iris.data[:3]))
['setosa', 'setosa', 'setosa']
```
Here, the first `predict()` returns an integer array, since `iris.target` (an integer array) was used in `fit`. The second `predict()` returns a string array, since `iris.target_names` was used for fitting.
### Refitting and updating parameters
Hyper-parameters of an estimator can be updated after it has been constructed via the [set\_params()](https://scikit-learn.org/1.1/glossary.html#term-set_params) method. Calling `fit()` more than once will overwrite what was learned by any previous `fit()`:
```
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.svm import SVC
>>> X, y = load_iris(return_X_y=True)
>>> clf = SVC()
>>> clf.set_params(kernel='linear').fit(X, y)
SVC(kernel='linear')
>>> clf.predict(X[:5])
array([0, 0, 0, 0, 0])
>>> clf.set_params(kernel='rbf').fit(X, y)
SVC()
>>> clf.predict(X[:5])
array([0, 0, 0, 0, 0])
```
Here, the default kernel `rbf` is first changed to `linear` via [`SVC.set_params()`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC.set_params "sklearn.svm.SVC.set_params") after the estimator has been constructed, and changed back to `rbf` to refit the estimator and to make a second prediction.
### Multiclass vs. multilabel fitting
When using [`multiclass classifiers`](../../modules/classes#module-sklearn.multiclass "sklearn.multiclass"), the learning and prediction task that is performed is dependent on the format of the target data fit upon:
```
>>> from sklearn.svm import SVC
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.preprocessing import LabelBinarizer
>>> X = [[1, 2], [2, 4], [4, 5], [3, 2], [3, 1]]
>>> y = [0, 0, 1, 1, 2]
>>> classif = OneVsRestClassifier(estimator=SVC(random_state=0))
>>> classif.fit(X, y).predict(X)
array([0, 0, 1, 1, 2])
```
In the above case, the classifier is fit on a 1d array of multiclass labels and the `predict()` method therefore provides corresponding multiclass predictions. It is also possible to fit upon a 2d array of binary label indicators:
```
>>> y = LabelBinarizer().fit_transform(y)
>>> classif.fit(X, y).predict(X)
array([[1, 0, 0],
[1, 0, 0],
[0, 1, 0],
[0, 0, 0],
[0, 0, 0]])
```
Here, the classifier is `fit()` on a 2d binary label representation of `y`, using the [`LabelBinarizer`](../../modules/generated/sklearn.preprocessing.labelbinarizer#sklearn.preprocessing.LabelBinarizer "sklearn.preprocessing.LabelBinarizer"). In this case `predict()` returns a 2d array representing the corresponding multilabel predictions.
Note that the fourth and fifth instances returned all zeroes, indicating that they matched none of the three labels `fit` upon. With multilabel outputs, it is similarly possible for an instance to be assigned multiple labels:
```
>>> from sklearn.preprocessing import MultiLabelBinarizer
>>> y = [[0, 1], [0, 2], [1, 3], [0, 2, 3], [2, 4]]
>>> y = MultiLabelBinarizer().fit_transform(y)
>>> classif.fit(X, y).predict(X)
array([[1, 1, 0, 0, 0],
[1, 0, 1, 0, 0],
[0, 1, 0, 1, 0],
[1, 0, 1, 0, 0],
[1, 0, 1, 0, 0]])
```
In this case, the classifier is fit upon instances each assigned multiple labels. The [`MultiLabelBinarizer`](../../modules/generated/sklearn.preprocessing.multilabelbinarizer#sklearn.preprocessing.MultiLabelBinarizer "sklearn.preprocessing.MultiLabelBinarizer") is used to binarize the 2d array of multilabels to `fit` upon. As a result, `predict()` returns a 2d array with multiple predicted labels for each instance.
scikit_learn A tutorial on statistical-learning for scientific data processing A tutorial on statistical-learning for scientific data processing
=================================================================
* [Statistical learning: the setting and the estimator object in scikit-learn](settings)
+ [Datasets](settings#datasets)
+ [Estimators objects](settings#estimators-objects)
* [Supervised learning: predicting an output variable from high-dimensional observations](supervised_learning)
+ [Nearest neighbor and the curse of dimensionality](supervised_learning#nearest-neighbor-and-the-curse-of-dimensionality)
+ [Linear model: from regression to sparsity](supervised_learning#linear-model-from-regression-to-sparsity)
+ [Support vector machines (SVMs)](supervised_learning#support-vector-machines-svms)
* [Model selection: choosing estimators and their parameters](model_selection)
+ [Score, and cross-validated scores](model_selection#score-and-cross-validated-scores)
+ [Cross-validation generators](model_selection#cross-validation-generators)
+ [Grid-search and cross-validated estimators](model_selection#grid-search-and-cross-validated-estimators)
* [Unsupervised learning: seeking representations of the data](unsupervised_learning)
+ [Clustering: grouping observations together](unsupervised_learning#clustering-grouping-observations-together)
+ [Decompositions: from a signal to components and loadings](unsupervised_learning#decompositions-from-a-signal-to-components-and-loadings)
* [Putting it all together](putting_together)
+ [Pipelining](putting_together#pipelining)
+ [Face recognition with eigenfaces](putting_together#face-recognition-with-eigenfaces)
+ [Open problem: Stock Market Structure](putting_together#open-problem-stock-market-structure)
| programming_docs |
scikit_learn Unsupervised learning: seeking representations of the data Unsupervised learning: seeking representations of the data
==========================================================
Clustering: grouping observations together
------------------------------------------
### K-means clustering
Note that there exist a lot of different clustering criteria and associated algorithms. The simplest clustering algorithm is [K-means](../../modules/clustering#k-means).
```
>>> from sklearn import cluster, datasets
>>> X_iris, y_iris = datasets.load_iris(return_X_y=True)
>>> k_means = cluster.KMeans(n_clusters=3)
>>> k_means.fit(X_iris)
KMeans(n_clusters=3)
>>> print(k_means.labels_[::10])
[1 1 1 1 1 0 0 0 0 0 2 2 2 2 2]
>>> print(y_iris[::10])
[0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
```
Warning
There is absolutely no guarantee of recovering a ground truth. First, choosing the right number of clusters is hard. Second, the algorithm is sensitive to initialization, and can fall into local minima, although scikit-learn employs several tricks to mitigate this issue.
**Bad initialization**
**8 clusters**
**Ground truth**
**Don’t over-interpret clustering results**
### Hierarchical agglomerative clustering: Ward
A [Hierarchical clustering](../../modules/clustering#hierarchical-clustering) method is a type of cluster analysis that aims to build a hierarchy of clusters. In general, the various approaches of this technique are either:
* **Agglomerative** - bottom-up approaches: each observation starts in its own cluster, and clusters are iteratively merged in such a way as to minimize a *linkage* criterion. This approach is particularly interesting when the clusters of interest are made of only a few observations. When the number of clusters is large, it is much more computationally efficient than k-means. A minimal usage sketch follows this list.
* **Divisive** - top-down approaches: all observations start in one cluster, which is iteratively split as one moves down the hierarchy. For estimating large numbers of clusters, this approach is both slow (due to all observations starting as one cluster, which it splits recursively) and statistically ill-posed.
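As a minimal sketch of the agglomerative (bottom-up) variant, here without any connectivity constraints (the iris data and the choice of 3 clusters are just for illustration):

```
>>> from sklearn import cluster, datasets
>>> X_iris, y_iris = datasets.load_iris(return_X_y=True)
>>> agglo = cluster.AgglomerativeClustering(n_clusters=3, linkage='ward')
>>> agglo.fit(X_iris)
AgglomerativeClustering(n_clusters=3)
>>> agglo.labels_.shape                  # one cluster label per observation
(150,)
```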
#### Connectivity-constrained clustering
With agglomerative clustering, it is possible to specify which samples can be clustered together by giving a connectivity graph. Graphs in scikit-learn are represented by their adjacency matrix. Often, a sparse matrix is used. This can be useful, for instance, to retrieve connected regions (sometimes also referred to as connected components) when clustering an image.
```
>>> from skimage.data import coins
>>> from scipy.ndimage import gaussian_filter
>>> from skimage.transform import rescale
>>> rescaled_coins = rescale(
... gaussian_filter(coins(), sigma=2),
... 0.2, mode='reflect', anti_aliasing=False, multichannel=False
... )
>>> X = np.reshape(rescaled_coins, (-1, 1))
```
We need a vectorized version of the image, which is what the `np.reshape` call above produces; `'rescaled_coins'` is a down-scaled version of the coins image, used to speed up the process. Next, define the graph structure of the data, with pixels connected to their neighbors:
```
>>> from sklearn.feature_extraction import grid_to_graph
>>> connectivity = grid_to_graph(*rescaled_coins.shape)
```
With this connectivity graph, run Ward agglomerative clustering on the image:
```
>>> n_clusters = 27 # number of regions
>>> from sklearn.cluster import AgglomerativeClustering
>>> ward = AgglomerativeClustering(n_clusters=n_clusters, linkage='ward',
... connectivity=connectivity)
>>> ward.fit(X)
AgglomerativeClustering(connectivity=..., n_clusters=27)
>>> label = np.reshape(ward.labels_, rescaled_coins.shape)
```
#### Feature agglomeration
We have seen that sparsity could be used to mitigate the curse of dimensionality, *i.e.* an insufficient number of observations compared to the number of features. Another approach is to merge together similar features: **feature agglomeration**. This approach can be implemented by clustering in the feature direction, in other words, clustering the transposed data.
```
>>> digits = datasets.load_digits()
>>> images = digits.images
>>> X = np.reshape(images, (len(images), -1))
>>> connectivity = grid_to_graph(*images[0].shape)
>>> agglo = cluster.FeatureAgglomeration(connectivity=connectivity,
... n_clusters=32)
>>> agglo.fit(X)
FeatureAgglomeration(connectivity=..., n_clusters=32)
>>> X_reduced = agglo.transform(X)
>>> X_approx = agglo.inverse_transform(X_reduced)
>>> images_approx = np.reshape(X_approx, images.shape)
```
Decompositions: from a signal to components and loadings
--------------------------------------------------------
### Principal component analysis: PCA
[Principal component analysis (PCA)](../../modules/decomposition#pca) selects the successive components that explain the maximum variance in the signal.
The point cloud spanned by the observations above is very flat in one direction: one of the three univariate features can almost be exactly computed using the other two. PCA finds the directions in which the data is not *flat*.
When used to *transform* data, PCA can reduce the dimensionality of the data by projecting on a principal subspace.
```
>>> # Create a signal with only 2 useful dimensions
>>> x1 = np.random.normal(size=100)
>>> x2 = np.random.normal(size=100)
>>> x3 = x1 + x2
>>> X = np.c_[x1, x2, x3]
>>> from sklearn import decomposition
>>> pca = decomposition.PCA()
>>> pca.fit(X)
PCA()
>>> print(pca.explained_variance_)
[ 2.18565811e+00 1.19346747e+00 8.43026679e-32]
>>> # As we can see, only the 2 first components are useful
>>> pca.n_components = 2
>>> X_reduced = pca.fit_transform(X)
>>> X_reduced.shape
(100, 2)
```
### Independent Component Analysis: ICA
[Independent component analysis (ICA)](../../modules/decomposition#ica) selects components so that the distribution of their loadings carries a maximum amount of independent information. It is able to recover **non-Gaussian** independent signals:
```
>>> # Generate sample data
>>> import numpy as np
>>> from scipy import signal
>>> time = np.linspace(0, 10, 2000)
>>> s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
>>> s2 = np.sign(np.sin(3 * time)) # Signal 2 : square signal
>>> s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: saw tooth signal
>>> S = np.c_[s1, s2, s3]
>>> S += 0.2 * np.random.normal(size=S.shape) # Add noise
>>> S /= S.std(axis=0) # Standardize data
>>> # Mix data
>>> A = np.array([[1, 1, 1], [0.5, 2, 1], [1.5, 1, 2]]) # Mixing matrix
>>> X = np.dot(S, A.T) # Generate observations
>>> # Compute ICA
>>> ica = decomposition.FastICA()
>>> S_ = ica.fit_transform(X) # Get the estimated sources
>>> A_ = ica.mixing_.T
>>> np.allclose(X, np.dot(S_, A_) + ica.mean_)
True
```
scikit_learn Putting it all together Putting it all together
=======================
Pipelining
----------
We have seen that some estimators can transform data and that some estimators can predict variables. We can also create combined estimators:
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
# Define a pipeline to search for the best combination of PCA truncation
# and classifier regularization.
pca = PCA()
# Define a Standard Scaler to normalize inputs
scaler = StandardScaler()
# set the tolerance to a large value to make the example faster
logistic = LogisticRegression(max_iter=10000, tol=0.1)
pipe = Pipeline(steps=[("scaler", scaler), ("pca", pca), ("logistic", logistic)])
X_digits, y_digits = datasets.load_digits(return_X_y=True)
# Parameters of pipelines can be set using '__' separated parameter names:
param_grid = {
"pca__n_components": [5, 15, 30, 45, 60],
"logistic__C": np.logspace(-4, 4, 4),
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_digits, y_digits)
print("Best parameter (CV score=%0.3f):" % search.best_score_)
print(search.best_params_)
# Plot the PCA spectrum
pca.fit(X_digits)
fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True, figsize=(6, 6))
ax0.plot(
np.arange(1, pca.n_components_ + 1), pca.explained_variance_ratio_, "+", linewidth=2
)
ax0.set_ylabel("PCA explained variance ratio")
ax0.axvline(
search.best_estimator_.named_steps["pca"].n_components,
linestyle=":",
label="n_components chosen",
)
```
Face recognition with eigenfaces
--------------------------------
The dataset used in this example is a preprocessed excerpt of the “Labeled Faces in the Wild”, also known as [LFW](http://vis-www.cs.umass.edu/lfw/):
<http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz> (233MB)
```
"""
===================================================
Faces recognition example using eigenfaces and SVMs
===================================================
The dataset used in this example is a preprocessed excerpt of the
"Labeled Faces in the Wild", aka LFW_:
http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz (233MB)
.. _LFW: http://vis-www.cs.umass.edu/lfw/
"""
# %%
from time import time
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
from sklearn.datasets import fetch_lfw_people
from sklearn.metrics import classification_report
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.utils.fixes import loguniform
# %%
# Download the data, if not already on disk and load it as numpy arrays
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape
# for machine learning we use the data directly (as relative pixel
# positions info is ignored by this model)
X = lfw_people.data
n_features = X.shape[1]
# the label to predict is the id of the person
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]
print("Total dataset size:")
print("n_samples: %d" % n_samples)
print("n_features: %d" % n_features)
print("n_classes: %d" % n_classes)
# %%
# Split into a training set and a test set, keeping 25% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# %%
# Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled
# dataset): unsupervised feature extraction / dimensionality reduction
n_components = 150
print(
"Extracting the top %d eigenfaces from %d faces" % (n_components, X_train.shape[0])
)
t0 = time()
pca = PCA(n_components=n_components, svd_solver="randomized", whiten=True).fit(X_train)
print("done in %0.3fs" % (time() - t0))
eigenfaces = pca.components_.reshape((n_components, h, w))
print("Projecting the input data on the eigenfaces orthonormal basis")
t0 = time()
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print("done in %0.3fs" % (time() - t0))
# %%
# Train a SVM classification model
print("Fitting the classifier to the training set")
t0 = time()
param_grid = {
"C": loguniform(1e3, 1e5),
"gamma": loguniform(1e-4, 1e-1),
}
clf = RandomizedSearchCV(
SVC(kernel="rbf", class_weight="balanced"), param_grid, n_iter=10
)
clf = clf.fit(X_train_pca, y_train)
print("done in %0.3fs" % (time() - t0))
print("Best estimator found by grid search:")
print(clf.best_estimator_)
# %%
# Quantitative evaluation of the model quality on the test set
print("Predicting people's names on the test set")
t0 = time()
y_pred = clf.predict(X_test_pca)
print("done in %0.3fs" % (time() - t0))
print(classification_report(y_test, y_pred, target_names=target_names))
ConfusionMatrixDisplay.from_estimator(
clf, X_test_pca, y_test, display_labels=target_names, xticks_rotation="vertical"
)
plt.tight_layout()
plt.show()
# %%
# Qualitative evaluation of the predictions using matplotlib
def plot_gallery(images, titles, h, w, n_row=3, n_col=4):
"""Helper function to plot a gallery of portraits"""
plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))
plt.subplots_adjust(bottom=0, left=0.01, right=0.99, top=0.90, hspace=0.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.title(titles[i], size=12)
plt.xticks(())
plt.yticks(())
# %%
# plot the result of the prediction on a portion of the test set
def title(y_pred, y_test, target_names, i):
pred_name = target_names[y_pred[i]].rsplit(" ", 1)[-1]
true_name = target_names[y_test[i]].rsplit(" ", 1)[-1]
return "predicted: %s\ntrue: %s" % (pred_name, true_name)
prediction_titles = [
title(y_pred, y_test, target_names, i) for i in range(y_pred.shape[0])
]
plot_gallery(X_test, prediction_titles, h, w)
# %%
# plot the gallery of the most significative eigenfaces
eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w)
plt.show()
# %%
# Face recognition problem would be much more effectively solved by training
# convolutional neural networks but this family of models is outside of the scope of
# the scikit-learn library. Interested readers should instead try to use pytorch or
# tensorflow to implement such models.
```
**Prediction**
**Eigenfaces**
Expected results for the top 5 most represented people in the dataset:
```
precision recall f1-score support
Gerhard_Schroeder 0.91 0.75 0.82 28
Donald_Rumsfeld 0.84 0.82 0.83 33
Tony_Blair 0.65 0.82 0.73 34
Colin_Powell 0.78 0.88 0.83 58
George_W_Bush 0.93 0.86 0.90 129
avg / total 0.86 0.84 0.85 282
```
Open problem: Stock Market Structure
------------------------------------
Can we predict the variation in stock prices for Google over a given time frame?
[Learning a graph structure](../../auto_examples/applications/plot_stock_market#stock-market)
scikit_learn Statistical learning: the setting and the estimator object in scikit-learn Statistical learning: the setting and the estimator object in scikit-learn
==========================================================================
Datasets
--------
Scikit-learn deals with learning information from one or more datasets that are represented as 2D arrays. They can be understood as a list of multi-dimensional observations. We say that the first axis of these arrays is the **samples** axis, while the second is the **features** axis.
When the data is not initially in the `(n_samples, n_features)` shape, it needs to be preprocessed in order to be used by scikit-learn.
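A sketch of such a preprocessing step, assuming the digits dataset used elsewhere in these tutorials: each 8x8 image is flattened into a feature vector of length 64.

```
>>> from sklearn import datasets
>>> digits = datasets.load_digits()
>>> digits.images.shape                  # 1797 images of 8x8 pixels
(1797, 8, 8)
>>> data = digits.images.reshape((digits.images.shape[0], -1))
>>> data.shape                           # now in (n_samples, n_features) shape
(1797, 64)
```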
Estimators objects
------------------
**Fitting data**: the main API implemented by scikit-learn is that of the `estimator`. An estimator is any object that learns from data; it may be a classification, regression or clustering algorithm or a *transformer* that extracts/filters useful features from raw data.
All estimator objects expose a `fit` method that takes a dataset (usually a 2-d array):
```
>>> estimator.fit(data)
```
**Estimator parameters**: All the parameters of an estimator can be set when it is instantiated or by modifying the corresponding attribute:
```
>>> estimator = Estimator(param1=1, param2=2)
>>> estimator.param1
1
```
**Estimated parameters**: When data is fitted with an estimator, parameters are estimated from the data at hand. All the estimated parameters are attributes of the estimator object ending by an underscore:
```
>>> estimator.estimated_param_
```
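To make these conventions concrete, here is a small sketch with a real estimator (`KMeans`; the toy data and parameter values are arbitrary):

```
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
>>> estimator = KMeans(n_clusters=2, random_state=0)   # parameters set at instantiation
>>> estimator.n_clusters
2
>>> fitted = estimator.fit(X)            # fit() returns the estimator itself
>>> estimator.cluster_centers_.shape     # estimated parameters end with an underscore
(2, 2)
```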
scikit_learn Model selection: choosing estimators and their parameters Model selection: choosing estimators and their parameters
=========================================================
Score, and cross-validated scores
---------------------------------
As we have seen, every estimator exposes a `score` method that can judge the quality of the fit (or the prediction) on new data. **Bigger is better**.
```
>>> from sklearn import datasets, svm
>>> X_digits, y_digits = datasets.load_digits(return_X_y=True)
>>> svc = svm.SVC(C=1, kernel='linear')
>>> svc.fit(X_digits[:-100], y_digits[:-100]).score(X_digits[-100:], y_digits[-100:])
0.98
```
To get a better measure of prediction accuracy (which we can use as a proxy for goodness of fit of the model), we can successively split the data in *folds* that we use for training and testing:
```
>>> import numpy as np
>>> X_folds = np.array_split(X_digits, 3)
>>> y_folds = np.array_split(y_digits, 3)
>>> scores = list()
>>> for k in range(3):
... # We use 'list' to copy, in order to 'pop' later on
... X_train = list(X_folds)
... X_test = X_train.pop(k)
... X_train = np.concatenate(X_train)
... y_train = list(y_folds)
... y_test = y_train.pop(k)
... y_train = np.concatenate(y_train)
... scores.append(svc.fit(X_train, y_train).score(X_test, y_test))
>>> print(scores)
[0.934..., 0.956..., 0.939...]
```
This is called a [`KFold`](../../modules/generated/sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") cross-validation.
Cross-validation generators
---------------------------
Scikit-learn has a collection of classes which can be used to generate lists of train/test indices for popular cross-validation strategies.
They expose a `split` method which accepts the input dataset to be split and yields the train/test set indices for each iteration of the chosen cross-validation strategy.
The following example shows how the `split` method is used.
```
>>> from sklearn.model_selection import KFold, cross_val_score
>>> X = ["a", "a", "a", "b", "b", "c", "c", "c", "c", "c"]
>>> k_fold = KFold(n_splits=5)
>>> for train_indices, test_indices in k_fold.split(X):
... print('Train: %s | test: %s' % (train_indices, test_indices))
Train: [2 3 4 5 6 7 8 9] | test: [0 1]
Train: [0 1 4 5 6 7 8 9] | test: [2 3]
Train: [0 1 2 3 6 7 8 9] | test: [4 5]
Train: [0 1 2 3 4 5 8 9] | test: [6 7]
Train: [0 1 2 3 4 5 6 7] | test: [8 9]
```
The cross-validation can then be performed easily:
```
>>> [svc.fit(X_digits[train], y_digits[train]).score(X_digits[test], y_digits[test])
... for train, test in k_fold.split(X_digits)]
[0.963..., 0.922..., 0.963..., 0.963..., 0.930...]
```
The cross-validation score can be directly calculated using the [`cross_val_score`](../../modules/generated/sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score") helper. Given an estimator, the cross-validation object and the input dataset, the [`cross_val_score`](../../modules/generated/sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score") splits the data repeatedly into a training and a testing set, trains the estimator using the training set and computes the scores based on the testing set for each iteration of cross-validation.
By default the estimator’s `score` method is used to compute the individual scores.
Refer to the [metrics module](../../modules/metrics#metrics) to learn more about the available scoring methods.
```
>>> cross_val_score(svc, X_digits, y_digits, cv=k_fold, n_jobs=-1)
array([0.96388889, 0.92222222, 0.9637883 , 0.9637883 , 0.93036212])
```
`n_jobs=-1` means that the computation will be dispatched across all the CPUs of the computer.
Alternatively, the `scoring` argument can be provided to specify an alternative scoring method.
```
>>> cross_val_score(svc, X_digits, y_digits, cv=k_fold,
... scoring='precision_macro')
array([0.96578289, 0.92708922, 0.96681476, 0.96362897, 0.93192644])
```
**Cross-validation generators**
| | | |
| --- | --- | --- |
| [`KFold`](../../modules/generated/sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") **(n\_splits, shuffle, random\_state)** | [`StratifiedKFold`](../../modules/generated/sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") **(n\_splits, shuffle, random\_state)** | [`GroupKFold`](../../modules/generated/sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold") **(n\_splits)** |
| Splits the data into K folds, trains on K-1 and then tests on the left-out fold. | Same as K-Fold but preserves the class distribution within each fold. | Ensures that the same group is not in both the testing and training sets (see the usage sketch after these tables). |
| | | |
| --- | --- | --- |
| [`ShuffleSplit`](../../modules/generated/sklearn.model_selection.shufflesplit#sklearn.model_selection.ShuffleSplit "sklearn.model_selection.ShuffleSplit") **(n\_splits, test\_size, train\_size, random\_state)** | [`StratifiedShuffleSplit`](../../modules/generated/sklearn.model_selection.stratifiedshufflesplit#sklearn.model_selection.StratifiedShuffleSplit "sklearn.model_selection.StratifiedShuffleSplit") | [`GroupShuffleSplit`](../../modules/generated/sklearn.model_selection.groupshufflesplit#sklearn.model_selection.GroupShuffleSplit "sklearn.model_selection.GroupShuffleSplit") |
| Generates train/test indices based on random permutation. | Same as shuffle split but preserves the class distribution within each iteration. | Ensures that the same group is not in both testing and training sets. |
| | | |
| --- | --- | --- |
| [`LeaveOneGroupOut`](../../modules/generated/sklearn.model_selection.leaveonegroupout#sklearn.model_selection.LeaveOneGroupOut "sklearn.model_selection.LeaveOneGroupOut") **()** | [`LeavePGroupsOut`](../../modules/generated/sklearn.model_selection.leavepgroupsout#sklearn.model_selection.LeavePGroupsOut "sklearn.model_selection.LeavePGroupsOut") **(n\_groups)** | [`LeaveOneOut`](../../modules/generated/sklearn.model_selection.leaveoneout#sklearn.model_selection.LeaveOneOut "sklearn.model_selection.LeaveOneOut") **()** |
| Takes a group array to group observations. | Leave P groups out. | Leave one observation out. |
| | |
| --- | --- |
| [`LeavePOut`](../../modules/generated/sklearn.model_selection.leavepout#sklearn.model_selection.LeavePOut "sklearn.model_selection.LeavePOut") **(p)** | [`PredefinedSplit`](../../modules/generated/sklearn.model_selection.predefinedsplit#sklearn.model_selection.PredefinedSplit "sklearn.model_selection.PredefinedSplit") |
| Leave P observations out. | Generates train/test indices based on predefined splits. |
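As a brief, illustrative sketch (not part of the original text), the stratified and group-aware splitters from the tables above follow the same `split` protocol as `KFold`, with the class labels or a `groups` array passed as extra arguments:
```
>>> # Illustrative sketch: StratifiedKFold keeps the class balance per fold,
>>> # GroupKFold keeps each group entirely in either train or test.
>>> import numpy as np
>>> from sklearn.model_selection import StratifiedKFold, GroupKFold
>>> X_toy = np.arange(6).reshape(6, 1)
>>> y_toy = np.array([0, 0, 0, 1, 1, 1])
>>> groups = np.array([0, 0, 1, 1, 2, 2])
>>> skf = StratifiedKFold(n_splits=3)
>>> all(sorted(y_toy[test]) == [0, 1] for _, test in skf.split(X_toy, y_toy))
True
>>> gkf = GroupKFold(n_splits=3)
>>> all(set(groups[test]).isdisjoint(groups[train])
... for train, test in gkf.split(X_toy, y_toy, groups=groups))
True
```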
Grid-search and cross-validated estimators
------------------------------------------
### Grid-search
scikit-learn provides an object that, given data, computes the score during the fit of an estimator on a parameter grid and chooses the parameters to maximize the cross-validation score. This object takes an estimator during the construction and exposes an estimator API:
```
>>> from sklearn.model_selection import GridSearchCV, cross_val_score
>>> Cs = np.logspace(-6, -1, 10)
>>> clf = GridSearchCV(estimator=svc, param_grid=dict(C=Cs),
... n_jobs=-1)
>>> clf.fit(X_digits[:1000], y_digits[:1000])
GridSearchCV(cv=None,...
>>> clf.best_score_
0.925...
>>> clf.best_estimator_.C
0.0077...
>>> # Prediction performance on test set is not as good as on train set
>>> clf.score(X_digits[1000:], y_digits[1000:])
0.943...
```
By default, [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") uses 5-fold cross-validation. However, if it detects that a classifier is passed, rather than a regressor, it uses a stratified 5-fold strategy.
Warning
You cannot nest objects with parallel computing (`n_jobs` different from 1).
### Cross-validated estimators
Cross-validation to set a parameter can be done more efficiently on an algorithm-by-algorithm basis. This is why, for certain estimators, scikit-learn exposes [Cross-validation: evaluating estimator performance](../../modules/cross_validation#cross-validation) estimators that set their parameter automatically by cross-validation:
```
>>> from sklearn import linear_model, datasets
>>> lasso = linear_model.LassoCV()
>>> X_diabetes, y_diabetes = datasets.load_diabetes(return_X_y=True)
>>> lasso.fit(X_diabetes, y_diabetes)
LassoCV()
>>> # The estimator chose its lambda automatically:
>>> lasso.alpha_
0.00375...
```
These estimators are used like their non-CV counterparts and are named after them, with ‘CV’ appended.
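For example (a sketch, not part of the original text, reusing the diabetes data loaded above), `RidgeCV` follows the same pattern as `LassoCV`: it is given a grid of candidate `alphas`, selects one during `fit`, and exposes it as `alpha_`:
```
>>> # Sketch: RidgeCV picks its regularization strength from the given grid.
>>> import numpy as np
>>> ridge_cv = linear_model.RidgeCV(alphas=np.logspace(-6, 6, 13))
>>> _ = ridge_cv.fit(X_diabetes, y_diabetes)
>>> bool(ridge_cv.alpha_ in np.logspace(-6, 6, 13))
True
```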
scikit_learn Supervised learning: predicting an output variable from high-dimensional observations Supervised learning: predicting an output variable from high-dimensional observations
=====================================================================================
Nearest neighbor and the curse of dimensionality
------------------------------------------------
### k-Nearest neighbors classifier
The simplest possible classifier is the [nearest neighbor](https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm): given a new observation `X_test`, find in the training set (i.e. the data used to train the estimator) the observation with the closest feature vector. (Please see the [Nearest Neighbors section](../../modules/neighbors#neighbors) of the online Scikit-learn documentation for more information about this type of classifier.)
**KNN (k nearest neighbors) classification example**:
```
>>> import numpy as np
>>> from sklearn import datasets
>>> iris_X, iris_y = datasets.load_iris(return_X_y=True)
>>> # Split the iris data into train and test sets
>>> # A random permutation, to split the data randomly
>>> np.random.seed(0)
>>> indices = np.random.permutation(len(iris_X))
>>> iris_X_train = iris_X[indices[:-10]]
>>> iris_y_train = iris_y[indices[:-10]]
>>> iris_X_test = iris_X[indices[-10:]]
>>> iris_y_test = iris_y[indices[-10:]]
>>> # Create and fit a nearest-neighbor classifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> knn = KNeighborsClassifier()
>>> knn.fit(iris_X_train, iris_y_train)
KNeighborsClassifier()
>>> knn.predict(iris_X_test)
array([1, 2, 1, 0, 0, 0, 2, 1, 2, 0])
>>> iris_y_test
array([1, 1, 1, 0, 0, 0, 2, 1, 2, 0])
```
### The curse of dimensionality
For an estimator to be effective, you need the distance between neighboring points to be less than some value \(d\), which depends on the problem. In one dimension, this requires on average \(n \sim 1/d\) points. In the context of the above \(k\)-NN example, if the data is described by just one feature with values ranging from 0 to 1 and with \(n\) training observations, then new data will be no further away than \(1/n\). Therefore, the nearest neighbor decision rule will be efficient as soon as \(1/n\) is small compared to the scale of between-class feature variations.
If the number of features is \(p\), you now require \(n \sim 1/d^p\) points. Let’s say that we require 10 points in one dimension: now \(10^p\) points are required in \(p\) dimensions to pave the \([0, 1]\) space. As \(p\) becomes large, the number of training points required for a good estimator grows exponentially.
For example, if each point is just a single number (8 bytes), then an effective \(k\)-NN estimator in a paltry \(p \sim 20\) dimensions would require more training data than the current estimated size of the entire internet (±1000 Exabytes or so).
This is called the [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality) and is a core problem that machine learning addresses.
Linear model: from regression to sparsity
-----------------------------------------
### Linear regression
[`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression"), in its simplest form, fits a linear model to the data set by adjusting a set of parameters in order to make the sum of the squared residuals of the model as small as possible.
Linear models: \(y = X\beta + \epsilon\)
* \(X\): data
* \(y\): target variable
* \(\beta\): Coefficients
* \(\epsilon\): Observation noise
```
>>> from sklearn import datasets, linear_model
>>> diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
>>> diabetes_X_train = diabetes_X[:-20]
>>> diabetes_X_test = diabetes_X[-20:]
>>> diabetes_y_train = diabetes_y[:-20]
>>> diabetes_y_test = diabetes_y[-20:]
>>> regr = linear_model.LinearRegression()
>>> regr.fit(diabetes_X_train, diabetes_y_train)
LinearRegression()
>>> print(regr.coef_)
[ 0.30349955 -237.63931533 510.53060544 327.73698041 -814.13170937
492.81458798 102.84845219 184.60648906 743.51961675 76.09517222]
>>> # The mean square error
>>> np.mean((regr.predict(diabetes_X_test) - diabetes_y_test)**2)
2004.5...
>>> # Explained variance score: 1 is perfect prediction
>>> # and 0 means that there is no linear relationship
>>> # between X and y.
>>> regr.score(diabetes_X_test, diabetes_y_test)
0.585...
```
### Shrinkage
If there are few data points per dimension, noise in the observations induces high variance:
```
>>> X = np.c_[ .5, 1].T
>>> y = [.5, 1]
>>> test = np.c_[ 0, 2].T
>>> regr = linear_model.LinearRegression()
>>> import matplotlib.pyplot as plt
>>> plt.figure()
<...>
>>> np.random.seed(0)
>>> for _ in range(6):
... this_X = .1 * np.random.normal(size=(2, 1)) + X
... regr.fit(this_X, y)
... plt.plot(test, regr.predict(test))
... plt.scatter(this_X, y, s=3)
LinearRegression...
```
A solution in high-dimensional statistical learning is to *shrink* the regression coefficients to zero: any two randomly chosen sets of observations are likely to be uncorrelated. This is called [`Ridge`](../../modules/generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge") regression:
```
>>> regr = linear_model.Ridge(alpha=.1)
>>> plt.figure()
<...>
>>> np.random.seed(0)
>>> for _ in range(6):
... this_X = .1 * np.random.normal(size=(2, 1)) + X
... regr.fit(this_X, y)
... plt.plot(test, regr.predict(test))
... plt.scatter(this_X, y, s=3)
Ridge...
```
This is an example of **bias/variance tradeoff**: the larger the ridge `alpha` parameter, the higher the bias and the lower the variance.
We can choose `alpha` to minimize the left-out error, this time using the diabetes dataset rather than our synthetic data:
```
>>> alphas = np.logspace(-4, -1, 6)
>>> print([regr.set_params(alpha=alpha)
... .fit(diabetes_X_train, diabetes_y_train)
... .score(diabetes_X_test, diabetes_y_test)
... for alpha in alphas])
[0.585..., 0.585..., 0.5854..., 0.5855..., 0.583..., 0.570...]
```
Note
Capturing noise in the fitted parameters that prevents the model from generalizing to new data is called [overfitting](https://en.wikipedia.org/wiki/Overfitting). The bias introduced by ridge regression is called [regularization](https://en.wikipedia.org/wiki/Regularization_%28machine_learning%29).
### Sparsity
**Fitting only features 1 and 2**
Note
A representation of the full diabetes dataset would involve 11 dimensions (10 feature dimensions and one for the target variable). It is hard to develop an intuition for such a representation, but it may be useful to keep in mind that it would be a fairly *empty* space.
We can see that, although feature 2 has a strong coefficient on the full model, it conveys little information about `y` when considered together with feature 1.
To improve the conditioning of the problem (i.e. to mitigate [the curse of dimensionality](#curse-of-dimensionality)), it would be interesting to select only the informative features and set non-informative ones, like feature 2, to 0. Ridge regression will decrease their contribution, but not set them to zero. Another penalization approach, called [Lasso](../../modules/linear_model#lasso) (least absolute shrinkage and selection operator), can set some coefficients to zero. Such methods are called **sparse methods**, and sparsity can be seen as an application of Occam’s razor: *prefer simpler models*.
```
>>> regr = linear_model.Lasso()
>>> scores = [regr.set_params(alpha=alpha)
... .fit(diabetes_X_train, diabetes_y_train)
... .score(diabetes_X_test, diabetes_y_test)
... for alpha in alphas]
>>> best_alpha = alphas[scores.index(max(scores))]
>>> regr.alpha = best_alpha
>>> regr.fit(diabetes_X_train, diabetes_y_train)
Lasso(alpha=0.025118864315095794)
>>> print(regr.coef_)
[ 0. -212.4... 517.2... 313.7... -160.8...
-0. -187.1... 69.3... 508.6... 71.8... ]
```
### Classification
For classification, as in the [iris](https://en.wikipedia.org/wiki/Iris_flower_data_set) labeling task, linear regression is not the right approach, as it gives too much weight to data far from the decision frontier. A linear approach is to fit a sigmoid or **logistic** function:
\[y = \textrm{sigmoid}(X\beta - \textrm{offset}) + \epsilon = \frac{1}{1 + \textrm{exp}(- X\beta + \textrm{offset})} + \epsilon\]
```
>>> log = linear_model.LogisticRegression(C=1e5)
>>> log.fit(iris_X_train, iris_y_train)
LogisticRegression(C=100000.0)
```
This is known as [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression").
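As a small sketch (not part of the original text), the fitted logistic model returns class probabilities through `predict_proba`; for the 10 iris test samples and 3 classes, each row sums to one:
```
>>> # Sketch: the logistic model outputs one probability per class.
>>> proba = log.predict_proba(iris_X_test)
>>> proba.shape
(10, 3)
>>> bool(np.allclose(proba.sum(axis=1), 1.0))
True
```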
Support vector machines (SVMs)
------------------------------
### Linear SVMs
[Support Vector Machines](../../modules/svm#svm) belong to the discriminant model family: they try to find a combination of samples to build a plane maximizing the margin between the two classes. Regularization is set by the `C` parameter: a small value for `C` means the margin is calculated using many or all of the observations around the separating line (more regularization); a large value for `C` means the margin is calculated on observations close to the separating line (less regularization).
**Unregularized SVM**
**Regularized SVM (default)**
SVMs can be used in regression ([`SVR`](../../modules/generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"), Support Vector Regression) or in classification ([`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), Support Vector Classification).
```
>>> from sklearn import svm
>>> svc = svm.SVC(kernel='linear')
>>> svc.fit(iris_X_train, iris_y_train)
SVC(kernel='linear')
```
Warning
**Normalizing data**
For many estimators, including SVMs, scaling each feature to unit standard deviation is important for obtaining good predictions.
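One way to do this (an illustrative sketch, not part of the original text) is to chain a `StandardScaler` with the SVM in a pipeline, so that the scaling fitted on the training data is applied consistently at prediction time:
```
>>> # Sketch: standardize features, then fit the linear SVM, in one estimator.
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> scaled_svc = make_pipeline(StandardScaler(), svm.SVC(kernel='linear'))
>>> scaled_svc.fit(iris_X_train, iris_y_train)
Pipeline(steps=[('standardscaler', StandardScaler()),...
```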
### Using kernels
Classes are not always linearly separable in feature space. The solution is to build a decision function that is not linear but may be polynomial instead. This is done using the *kernel trick* that can be seen as creating a decision energy by positioning *kernels* on observations:
#### Linear kernel
```
>>> svc = svm.SVC(kernel='linear')
```
#### Polynomial kernel
```
>>> svc = svm.SVC(kernel='poly',
... degree=3)
>>> # degree: polynomial degree
```
#### RBF kernel (Radial Basis Function)
```
>>> svc = svm.SVC(kernel='rbf')
>>> # gamma: inverse of size of
>>> # radial kernel
```
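As a quick comparison (a sketch, not part of the original text, reusing the iris split from the k-NN example), each kernel can be fitted and scored on the held-out samples:
```
>>> # Sketch: fit each kernel on the iris training split and check that the
>>> # held-out accuracy is reasonable.
>>> for kernel in ('linear', 'poly', 'rbf'):
... svc = svm.SVC(kernel=kernel)
... print(kernel, svc.fit(iris_X_train, iris_y_train).score(iris_X_test, iris_y_test) >= 0.8)
linear True
poly True
rbf True
```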
scikit_learn Examples Examples
========
Release Highlights
------------------
These examples illustrate the main features of the releases of scikit-learn.
[Release Highlights for scikit-learn 1.1](release_highlights/plot_release_highlights_1_1_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-1-0-py)
[Release Highlights for scikit-learn 1.0](release_highlights/plot_release_highlights_1_0_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-0-0-py)
[Release Highlights for scikit-learn 0.24](release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Release Highlights for scikit-learn 0.23](release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[Release Highlights for scikit-learn 0.22](release_highlights/plot_release_highlights_0_22_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-22-0-py)
Biclustering
------------
Examples concerning the `sklearn.cluster.bicluster` module.
[A demo of the Spectral Biclustering algorithm](bicluster/plot_spectral_biclustering#sphx-glr-auto-examples-bicluster-plot-spectral-biclustering-py)
[A demo of the Spectral Co-Clustering algorithm](bicluster/plot_spectral_coclustering#sphx-glr-auto-examples-bicluster-plot-spectral-coclustering-py)
[Biclustering documents with the Spectral Co-clustering algorithm](bicluster/plot_bicluster_newsgroups#sphx-glr-auto-examples-bicluster-plot-bicluster-newsgroups-py)
Calibration
-----------
Examples illustrating the calibration of predicted probabilities of classifiers.
[Comparison of Calibration of Classifiers](calibration/plot_compare_calibration#sphx-glr-auto-examples-calibration-plot-compare-calibration-py)
[Probability Calibration curves](calibration/plot_calibration_curve#sphx-glr-auto-examples-calibration-plot-calibration-curve-py)
[Probability Calibration for 3-class classification](calibration/plot_calibration_multiclass#sphx-glr-auto-examples-calibration-plot-calibration-multiclass-py)
[Probability calibration of classifiers](calibration/plot_calibration#sphx-glr-auto-examples-calibration-plot-calibration-py)
Classification
--------------
General examples about classification algorithms.
[Classifier comparison](classification/plot_classifier_comparison#sphx-glr-auto-examples-classification-plot-classifier-comparison-py)
[Linear and Quadratic Discriminant Analysis with covariance ellipsoid](classification/plot_lda_qda#sphx-glr-auto-examples-classification-plot-lda-qda-py)
[Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification](classification/plot_lda#sphx-glr-auto-examples-classification-plot-lda-py)
[Plot classification probability](classification/plot_classification_probability#sphx-glr-auto-examples-classification-plot-classification-probability-py)
[Recognizing hand-written digits](classification/plot_digits_classification#sphx-glr-auto-examples-classification-plot-digits-classification-py)
Clustering
----------
Examples concerning the [`sklearn.cluster`](../modules/classes#module-sklearn.cluster "sklearn.cluster") module.
[A demo of K-Means clustering on the handwritten digits data](cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[A demo of structured Ward hierarchical clustering on an image of coins](cluster/plot_coin_ward_segmentation#sphx-glr-auto-examples-cluster-plot-coin-ward-segmentation-py)
[A demo of the mean-shift clustering algorithm](cluster/plot_mean_shift#sphx-glr-auto-examples-cluster-plot-mean-shift-py)
[Adjustment for chance in clustering performance evaluation](cluster/plot_adjusted_for_chance_measures#sphx-glr-auto-examples-cluster-plot-adjusted-for-chance-measures-py)
[Agglomerative clustering with and without structure](cluster/plot_agglomerative_clustering#sphx-glr-auto-examples-cluster-plot-agglomerative-clustering-py)
[Agglomerative clustering with different metrics](cluster/plot_agglomerative_clustering_metrics#sphx-glr-auto-examples-cluster-plot-agglomerative-clustering-metrics-py)
[An example of K-Means++ initialization](cluster/plot_kmeans_plusplus#sphx-glr-auto-examples-cluster-plot-kmeans-plusplus-py)
[Bisecting K-Means and Regular K-Means Performance Comparison](cluster/plot_bisect_kmeans#sphx-glr-auto-examples-cluster-plot-bisect-kmeans-py)
[Color Quantization using K-Means](cluster/plot_color_quantization#sphx-glr-auto-examples-cluster-plot-color-quantization-py)
[Compare BIRCH and MiniBatchKMeans](cluster/plot_birch_vs_minibatchkmeans#sphx-glr-auto-examples-cluster-plot-birch-vs-minibatchkmeans-py)
[Comparing different clustering algorithms on toy datasets](cluster/plot_cluster_comparison#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py)
[Comparing different hierarchical linkage methods on toy datasets](cluster/plot_linkage_comparison#sphx-glr-auto-examples-cluster-plot-linkage-comparison-py)
[Comparison of the K-Means and MiniBatchKMeans clustering algorithms](cluster/plot_mini_batch_kmeans#sphx-glr-auto-examples-cluster-plot-mini-batch-kmeans-py)
[Demo of DBSCAN clustering algorithm](cluster/plot_dbscan#sphx-glr-auto-examples-cluster-plot-dbscan-py)
[Demo of OPTICS clustering algorithm](cluster/plot_optics#sphx-glr-auto-examples-cluster-plot-optics-py)
[Demo of affinity propagation clustering algorithm](cluster/plot_affinity_propagation#sphx-glr-auto-examples-cluster-plot-affinity-propagation-py)
[Demonstration of k-means assumptions](cluster/plot_kmeans_assumptions#sphx-glr-auto-examples-cluster-plot-kmeans-assumptions-py)
[Empirical evaluation of the impact of k-means initialization](cluster/plot_kmeans_stability_low_dim_dense#sphx-glr-auto-examples-cluster-plot-kmeans-stability-low-dim-dense-py)
[Feature agglomeration](cluster/plot_digits_agglomeration#sphx-glr-auto-examples-cluster-plot-digits-agglomeration-py)
[Feature agglomeration vs. univariate selection](cluster/plot_feature_agglomeration_vs_univariate_selection#sphx-glr-auto-examples-cluster-plot-feature-agglomeration-vs-univariate-selection-py)
[Hierarchical clustering: structured vs unstructured ward](cluster/plot_ward_structured_vs_unstructured#sphx-glr-auto-examples-cluster-plot-ward-structured-vs-unstructured-py)
[Inductive Clustering](cluster/plot_inductive_clustering#sphx-glr-auto-examples-cluster-plot-inductive-clustering-py)
[K-means Clustering](cluster/plot_cluster_iris#sphx-glr-auto-examples-cluster-plot-cluster-iris-py)
[Online learning of a dictionary of parts of faces](cluster/plot_dict_face_patches#sphx-glr-auto-examples-cluster-plot-dict-face-patches-py)
[Plot Hierarchical Clustering Dendrogram](cluster/plot_agglomerative_dendrogram#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py)
[Segmenting the picture of greek coins in regions](cluster/plot_coin_segmentation#sphx-glr-auto-examples-cluster-plot-coin-segmentation-py)
[Selecting the number of clusters with silhouette analysis on KMeans clustering](cluster/plot_kmeans_silhouette_analysis#sphx-glr-auto-examples-cluster-plot-kmeans-silhouette-analysis-py)
[Spectral clustering for image segmentation](cluster/plot_segmentation_toy#sphx-glr-auto-examples-cluster-plot-segmentation-toy-py)
[Various Agglomerative Clustering on a 2D embedding of digits](cluster/plot_digits_linkage#sphx-glr-auto-examples-cluster-plot-digits-linkage-py)
[Vector Quantization Example](cluster/plot_face_compress#sphx-glr-auto-examples-cluster-plot-face-compress-py)
Covariance estimation
---------------------
Examples concerning the [`sklearn.covariance`](../modules/classes#module-sklearn.covariance "sklearn.covariance") module.
[Ledoit-Wolf vs OAS estimation](covariance/plot_lw_vs_oas#sphx-glr-auto-examples-covariance-plot-lw-vs-oas-py)
[Robust covariance estimation and Mahalanobis distances relevance](covariance/plot_mahalanobis_distances#sphx-glr-auto-examples-covariance-plot-mahalanobis-distances-py)
[Robust vs Empirical covariance estimate](covariance/plot_robust_vs_empirical_covariance#sphx-glr-auto-examples-covariance-plot-robust-vs-empirical-covariance-py)
[Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood](covariance/plot_covariance_estimation#sphx-glr-auto-examples-covariance-plot-covariance-estimation-py)
[Sparse inverse covariance estimation](covariance/plot_sparse_cov#sphx-glr-auto-examples-covariance-plot-sparse-cov-py)
Cross decomposition
-------------------
Examples concerning the [`sklearn.cross_decomposition`](../modules/classes#module-sklearn.cross_decomposition "sklearn.cross_decomposition") module.
[Compare cross decomposition methods](cross_decomposition/plot_compare_cross_decomposition#sphx-glr-auto-examples-cross-decomposition-plot-compare-cross-decomposition-py)
[Principal Component Regression vs Partial Least Squares Regression](cross_decomposition/plot_pcr_vs_pls#sphx-glr-auto-examples-cross-decomposition-plot-pcr-vs-pls-py)
Dataset examples
----------------
Examples concerning the [`sklearn.datasets`](../modules/classes#module-sklearn.datasets "sklearn.datasets") module.
[Plot randomly generated classification dataset](datasets/plot_random_dataset#sphx-glr-auto-examples-datasets-plot-random-dataset-py)
[Plot randomly generated multilabel dataset](datasets/plot_random_multilabel_dataset#sphx-glr-auto-examples-datasets-plot-random-multilabel-dataset-py)
[The Digit Dataset](datasets/plot_digits_last_image#sphx-glr-auto-examples-datasets-plot-digits-last-image-py)
[The Iris Dataset](datasets/plot_iris_dataset#sphx-glr-auto-examples-datasets-plot-iris-dataset-py)
Decision Trees
--------------
Examples concerning the [`sklearn.tree`](../modules/classes#module-sklearn.tree "sklearn.tree") module.
[Decision Tree Regression](tree/plot_tree_regression#sphx-glr-auto-examples-tree-plot-tree-regression-py)
[Multi-output Decision Tree Regression](tree/plot_tree_regression_multioutput#sphx-glr-auto-examples-tree-plot-tree-regression-multioutput-py)
[Plot the decision surface of decision trees trained on the iris dataset](tree/plot_iris_dtc#sphx-glr-auto-examples-tree-plot-iris-dtc-py)
[Post pruning decision trees with cost complexity pruning](tree/plot_cost_complexity_pruning#sphx-glr-auto-examples-tree-plot-cost-complexity-pruning-py)
[Understanding the decision tree structure](tree/plot_unveil_tree_structure#sphx-glr-auto-examples-tree-plot-unveil-tree-structure-py)
Decomposition
-------------
Examples concerning the [`sklearn.decomposition`](../modules/classes#module-sklearn.decomposition "sklearn.decomposition") module.
[Beta-divergence loss functions](decomposition/plot_beta_divergence#sphx-glr-auto-examples-decomposition-plot-beta-divergence-py)
[Blind source separation using FastICA](decomposition/plot_ica_blind_source_separation#sphx-glr-auto-examples-decomposition-plot-ica-blind-source-separation-py)
[Comparison of LDA and PCA 2D projection of Iris dataset](decomposition/plot_pca_vs_lda#sphx-glr-auto-examples-decomposition-plot-pca-vs-lda-py)
[Faces dataset decompositions](decomposition/plot_faces_decomposition#sphx-glr-auto-examples-decomposition-plot-faces-decomposition-py)
[Factor Analysis (with rotation) to visualize patterns](decomposition/plot_varimax_fa#sphx-glr-auto-examples-decomposition-plot-varimax-fa-py)
[FastICA on 2D point clouds](decomposition/plot_ica_vs_pca#sphx-glr-auto-examples-decomposition-plot-ica-vs-pca-py)
[Image denoising using dictionary learning](decomposition/plot_image_denoising#sphx-glr-auto-examples-decomposition-plot-image-denoising-py)
[Incremental PCA](decomposition/plot_incremental_pca#sphx-glr-auto-examples-decomposition-plot-incremental-pca-py)
[Kernel PCA](decomposition/plot_kernel_pca#sphx-glr-auto-examples-decomposition-plot-kernel-pca-py)
[Model selection with Probabilistic PCA and Factor Analysis (FA)](decomposition/plot_pca_vs_fa_model_selection#sphx-glr-auto-examples-decomposition-plot-pca-vs-fa-model-selection-py)
[PCA example with Iris Data-set](decomposition/plot_pca_iris#sphx-glr-auto-examples-decomposition-plot-pca-iris-py)
[Principal components analysis (PCA)](decomposition/plot_pca_3d#sphx-glr-auto-examples-decomposition-plot-pca-3d-py)
[Sparse coding with a precomputed dictionary](decomposition/plot_sparse_coding#sphx-glr-auto-examples-decomposition-plot-sparse-coding-py)
Ensemble methods
----------------
Examples concerning the [`sklearn.ensemble`](../modules/classes#module-sklearn.ensemble "sklearn.ensemble") module.
[Categorical Feature Support in Gradient Boosting](ensemble/plot_gradient_boosting_categorical#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-categorical-py)
[Combine predictors using stacking](ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
[Comparing random forests and the multi-output meta estimator](ensemble/plot_random_forest_regression_multioutput#sphx-glr-auto-examples-ensemble-plot-random-forest-regression-multioutput-py)
[Decision Tree Regression with AdaBoost](ensemble/plot_adaboost_regression#sphx-glr-auto-examples-ensemble-plot-adaboost-regression-py)
[Discrete versus Real AdaBoost](ensemble/plot_adaboost_hastie_10_2#sphx-glr-auto-examples-ensemble-plot-adaboost-hastie-10-2-py)
[Early stopping of Gradient Boosting](ensemble/plot_gradient_boosting_early_stopping#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-early-stopping-py)
[Feature importances with a forest of trees](ensemble/plot_forest_importances#sphx-glr-auto-examples-ensemble-plot-forest-importances-py)
[Feature transformations with ensembles of trees](ensemble/plot_feature_transformation#sphx-glr-auto-examples-ensemble-plot-feature-transformation-py)
[Gradient Boosting Out-of-Bag estimates](ensemble/plot_gradient_boosting_oob#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-oob-py)
[Gradient Boosting regression](ensemble/plot_gradient_boosting_regression#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-regression-py)
[Gradient Boosting regularization](ensemble/plot_gradient_boosting_regularization#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-regularization-py)
[Hashing feature transformation using Totally Random Trees](ensemble/plot_random_forest_embedding#sphx-glr-auto-examples-ensemble-plot-random-forest-embedding-py)
[IsolationForest example](ensemble/plot_isolation_forest#sphx-glr-auto-examples-ensemble-plot-isolation-forest-py)
[Monotonic Constraints](ensemble/plot_monotonic_constraints#sphx-glr-auto-examples-ensemble-plot-monotonic-constraints-py)
[Multi-class AdaBoosted Decision Trees](ensemble/plot_adaboost_multiclass#sphx-glr-auto-examples-ensemble-plot-adaboost-multiclass-py)
[OOB Errors for Random Forests](ensemble/plot_ensemble_oob#sphx-glr-auto-examples-ensemble-plot-ensemble-oob-py)
[Pixel importances with a parallel forest of trees](ensemble/plot_forest_importances_faces#sphx-glr-auto-examples-ensemble-plot-forest-importances-faces-py)
[Plot class probabilities calculated by the VotingClassifier](ensemble/plot_voting_probas#sphx-glr-auto-examples-ensemble-plot-voting-probas-py)
[Plot individual and voting regression predictions](ensemble/plot_voting_regressor#sphx-glr-auto-examples-ensemble-plot-voting-regressor-py)
[Plot the decision boundaries of a VotingClassifier](ensemble/plot_voting_decision_regions#sphx-glr-auto-examples-ensemble-plot-voting-decision-regions-py)
[Plot the decision surfaces of ensembles of trees on the iris dataset](ensemble/plot_forest_iris#sphx-glr-auto-examples-ensemble-plot-forest-iris-py)
[Prediction Intervals for Gradient Boosting Regression](ensemble/plot_gradient_boosting_quantile#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-quantile-py)
[Single estimator versus bagging: bias-variance decomposition](ensemble/plot_bias_variance#sphx-glr-auto-examples-ensemble-plot-bias-variance-py)
[Two-class AdaBoost](ensemble/plot_adaboost_twoclass#sphx-glr-auto-examples-ensemble-plot-adaboost-twoclass-py)
Examples based on real world datasets
-------------------------------------
Applications to real world problems with some medium sized datasets or interactive user interface.
[Compressive sensing: tomography reconstruction with L1 prior (Lasso)](applications/plot_tomography_l1_reconstruction#sphx-glr-auto-examples-applications-plot-tomography-l1-reconstruction-py)
[Faces recognition example using eigenfaces and SVMs](applications/plot_face_recognition#sphx-glr-auto-examples-applications-plot-face-recognition-py)
[Image denoising using kernel PCA](applications/plot_digits_denoising#sphx-glr-auto-examples-applications-plot-digits-denoising-py)
[Libsvm GUI](applications/svm_gui#sphx-glr-auto-examples-applications-svm-gui-py)
[Model Complexity Influence](applications/plot_model_complexity_influence#sphx-glr-auto-examples-applications-plot-model-complexity-influence-py)
[Out-of-core classification of text documents](applications/plot_out_of_core_classification#sphx-glr-auto-examples-applications-plot-out-of-core-classification-py)
[Outlier detection on a real data set](applications/plot_outlier_detection_wine#sphx-glr-auto-examples-applications-plot-outlier-detection-wine-py)
[Prediction Latency](applications/plot_prediction_latency#sphx-glr-auto-examples-applications-plot-prediction-latency-py)
[Species distribution modeling](applications/plot_species_distribution_modeling#sphx-glr-auto-examples-applications-plot-species-distribution-modeling-py)
[Time-related feature engineering](applications/plot_cyclical_feature_engineering#sphx-glr-auto-examples-applications-plot-cyclical-feature-engineering-py)
[Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation](applications/plot_topics_extraction_with_nmf_lda#sphx-glr-auto-examples-applications-plot-topics-extraction-with-nmf-lda-py)
[Visualizing the stock market structure](applications/plot_stock_market#sphx-glr-auto-examples-applications-plot-stock-market-py)
[Wikipedia principal eigenvector](applications/wikipedia_principal_eigenvector#sphx-glr-auto-examples-applications-wikipedia-principal-eigenvector-py)
Feature Selection
-----------------
Examples concerning the [`sklearn.feature_selection`](../modules/classes#module-sklearn.feature_selection "sklearn.feature_selection") module.
[Comparison of F-test and mutual information](feature_selection/plot_f_test_vs_mi#sphx-glr-auto-examples-feature-selection-plot-f-test-vs-mi-py)
[Model-based and sequential feature selection](feature_selection/plot_select_from_model_diabetes#sphx-glr-auto-examples-feature-selection-plot-select-from-model-diabetes-py)
[Pipeline ANOVA SVM](feature_selection/plot_feature_selection_pipeline#sphx-glr-auto-examples-feature-selection-plot-feature-selection-pipeline-py)
[Recursive feature elimination](feature_selection/plot_rfe_digits#sphx-glr-auto-examples-feature-selection-plot-rfe-digits-py)
[Recursive feature elimination with cross-validation](feature_selection/plot_rfe_with_cross_validation#sphx-glr-auto-examples-feature-selection-plot-rfe-with-cross-validation-py)
[Univariate Feature Selection](feature_selection/plot_feature_selection#sphx-glr-auto-examples-feature-selection-plot-feature-selection-py)
Gaussian Mixture Models
-----------------------
Examples concerning the [`sklearn.mixture`](../modules/classes#module-sklearn.mixture "sklearn.mixture") module.
[Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture](mixture/plot_concentration_prior#sphx-glr-auto-examples-mixture-plot-concentration-prior-py)
[Density Estimation for a Gaussian mixture](mixture/plot_gmm_pdf#sphx-glr-auto-examples-mixture-plot-gmm-pdf-py)
[GMM Initialization Methods](mixture/plot_gmm_init#sphx-glr-auto-examples-mixture-plot-gmm-init-py)
[GMM covariances](mixture/plot_gmm_covariances#sphx-glr-auto-examples-mixture-plot-gmm-covariances-py)
[Gaussian Mixture Model Ellipsoids](mixture/plot_gmm#sphx-glr-auto-examples-mixture-plot-gmm-py)
[Gaussian Mixture Model Selection](mixture/plot_gmm_selection#sphx-glr-auto-examples-mixture-plot-gmm-selection-py)
[Gaussian Mixture Model Sine Curve](mixture/plot_gmm_sin#sphx-glr-auto-examples-mixture-plot-gmm-sin-py)
Gaussian Process for Machine Learning
-------------------------------------
Examples concerning the [`sklearn.gaussian_process`](../modules/classes#module-sklearn.gaussian_process "sklearn.gaussian_process") module.
[Comparison of kernel ridge and Gaussian process regression](gaussian_process/plot_compare_gpr_krr#sphx-glr-auto-examples-gaussian-process-plot-compare-gpr-krr-py)
[Gaussian Processes regression: basic introductory example](gaussian_process/plot_gpr_noisy_targets#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-targets-py)
[Gaussian process classification (GPC) on iris dataset](gaussian_process/plot_gpc_iris#sphx-glr-auto-examples-gaussian-process-plot-gpc-iris-py)
[Gaussian process regression (GPR) on Mauna Loa CO2 data](gaussian_process/plot_gpr_co2#sphx-glr-auto-examples-gaussian-process-plot-gpr-co2-py)
[Gaussian process regression (GPR) with noise-level estimation](gaussian_process/plot_gpr_noisy#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-py)
[Gaussian processes on discrete data structures](gaussian_process/plot_gpr_on_structured_data#sphx-glr-auto-examples-gaussian-process-plot-gpr-on-structured-data-py)
[Illustration of Gaussian process classification (GPC) on the XOR dataset](gaussian_process/plot_gpc_xor#sphx-glr-auto-examples-gaussian-process-plot-gpc-xor-py)
[Illustration of prior and posterior Gaussian process for different kernels](gaussian_process/plot_gpr_prior_posterior#sphx-glr-auto-examples-gaussian-process-plot-gpr-prior-posterior-py)
[Iso-probability lines for Gaussian Processes classification (GPC)](gaussian_process/plot_gpc_isoprobability#sphx-glr-auto-examples-gaussian-process-plot-gpc-isoprobability-py)
[Probabilistic predictions with Gaussian process classification (GPC)](gaussian_process/plot_gpc#sphx-glr-auto-examples-gaussian-process-plot-gpc-py)
Generalized Linear Models
-------------------------
Examples concerning the [`sklearn.linear_model`](../modules/classes#module-sklearn.linear_model "sklearn.linear_model") module.
[Comparing Linear Bayesian Regressors](linear_model/plot_ard#sphx-glr-auto-examples-linear-model-plot-ard-py)
[Comparing various online solvers](linear_model/plot_sgd_comparison#sphx-glr-auto-examples-linear-model-plot-sgd-comparison-py)
[Curve Fitting with Bayesian Ridge Regression](linear_model/plot_bayesian_ridge_curvefit#sphx-glr-auto-examples-linear-model-plot-bayesian-ridge-curvefit-py)
[Early stopping of Stochastic Gradient Descent](linear_model/plot_sgd_early_stopping#sphx-glr-auto-examples-linear-model-plot-sgd-early-stopping-py)
[Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples](linear_model/plot_elastic_net_precomputed_gram_matrix_with_weighted_samples#sphx-glr-auto-examples-linear-model-plot-elastic-net-precomputed-gram-matrix-with-weighted-samples-py)
[HuberRegressor vs Ridge on dataset with strong outliers](linear_model/plot_huber_vs_ridge#sphx-glr-auto-examples-linear-model-plot-huber-vs-ridge-py)
[Joint feature selection with multi-task Lasso](linear_model/plot_multi_task_lasso_support#sphx-glr-auto-examples-linear-model-plot-multi-task-lasso-support-py)
[L1 Penalty and Sparsity in Logistic Regression](linear_model/plot_logistic_l1_l2_sparsity#sphx-glr-auto-examples-linear-model-plot-logistic-l1-l2-sparsity-py)
[Lasso and Elastic Net](linear_model/plot_lasso_coordinate_descent_path#sphx-glr-auto-examples-linear-model-plot-lasso-coordinate-descent-path-py)
[Lasso and Elastic Net for Sparse Signals](linear_model/plot_lasso_and_elasticnet#sphx-glr-auto-examples-linear-model-plot-lasso-and-elasticnet-py)
[Lasso model selection via information criteria](linear_model/plot_lasso_lars_ic#sphx-glr-auto-examples-linear-model-plot-lasso-lars-ic-py)
[Lasso model selection: AIC-BIC / cross-validation](linear_model/plot_lasso_model_selection#sphx-glr-auto-examples-linear-model-plot-lasso-model-selection-py)
[Lasso on dense and sparse data](linear_model/plot_lasso_dense_vs_sparse_data#sphx-glr-auto-examples-linear-model-plot-lasso-dense-vs-sparse-data-py)
[Lasso path using LARS](linear_model/plot_lasso_lars#sphx-glr-auto-examples-linear-model-plot-lasso-lars-py)
[Linear Regression Example](linear_model/plot_ols#sphx-glr-auto-examples-linear-model-plot-ols-py)
[Logistic Regression 3-class Classifier](linear_model/plot_iris_logistic#sphx-glr-auto-examples-linear-model-plot-iris-logistic-py)
[Logistic function](linear_model/plot_logistic#sphx-glr-auto-examples-linear-model-plot-logistic-py)
[MNIST classification using multinomial logistic + L1](linear_model/plot_sparse_logistic_regression_mnist#sphx-glr-auto-examples-linear-model-plot-sparse-logistic-regression-mnist-py)
[Multiclass sparse logistic regression on 20newgroups](linear_model/plot_sparse_logistic_regression_20newsgroups#sphx-glr-auto-examples-linear-model-plot-sparse-logistic-regression-20newsgroups-py)
[Non-negative least squares](linear_model/plot_nnls#sphx-glr-auto-examples-linear-model-plot-nnls-py)
[One-Class SVM versus One-Class SVM using Stochastic Gradient Descent](linear_model/plot_sgdocsvm_vs_ocsvm#sphx-glr-auto-examples-linear-model-plot-sgdocsvm-vs-ocsvm-py)
[Ordinary Least Squares and Ridge Regression Variance](linear_model/plot_ols_ridge_variance#sphx-glr-auto-examples-linear-model-plot-ols-ridge-variance-py)
[Orthogonal Matching Pursuit](linear_model/plot_omp#sphx-glr-auto-examples-linear-model-plot-omp-py)
[Plot Ridge coefficients as a function of the L2 regularization](linear_model/plot_ridge_coeffs#sphx-glr-auto-examples-linear-model-plot-ridge-coeffs-py)
[Plot Ridge coefficients as a function of the regularization](linear_model/plot_ridge_path#sphx-glr-auto-examples-linear-model-plot-ridge-path-py)
[Plot multi-class SGD on the iris dataset](linear_model/plot_sgd_iris#sphx-glr-auto-examples-linear-model-plot-sgd-iris-py)
[Plot multinomial and One-vs-Rest Logistic Regression](linear_model/plot_logistic_multinomial#sphx-glr-auto-examples-linear-model-plot-logistic-multinomial-py)
[Poisson regression and non-normal loss](linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Polynomial and Spline interpolation](linear_model/plot_polynomial_interpolation#sphx-glr-auto-examples-linear-model-plot-polynomial-interpolation-py)
[Quantile regression](linear_model/plot_quantile_regression#sphx-glr-auto-examples-linear-model-plot-quantile-regression-py)
[Regularization path of L1- Logistic Regression](linear_model/plot_logistic_path#sphx-glr-auto-examples-linear-model-plot-logistic-path-py)
[Robust linear estimator fitting](linear_model/plot_robust_fit#sphx-glr-auto-examples-linear-model-plot-robust-fit-py)
[Robust linear model estimation using RANSAC](linear_model/plot_ransac#sphx-glr-auto-examples-linear-model-plot-ransac-py)
[SGD: Maximum margin separating hyperplane](linear_model/plot_sgd_separating_hyperplane#sphx-glr-auto-examples-linear-model-plot-sgd-separating-hyperplane-py)
[SGD: Penalties](linear_model/plot_sgd_penalties#sphx-glr-auto-examples-linear-model-plot-sgd-penalties-py)
[SGD: Weighted samples](linear_model/plot_sgd_weighted_samples#sphx-glr-auto-examples-linear-model-plot-sgd-weighted-samples-py)
[SGD: convex loss functions](linear_model/plot_sgd_loss_functions#sphx-glr-auto-examples-linear-model-plot-sgd-loss-functions-py)
[Sparsity Example: Fitting only features 1 and 2](linear_model/plot_ols_3d#sphx-glr-auto-examples-linear-model-plot-ols-3d-py)
[Theil-Sen Regression](linear_model/plot_theilsen#sphx-glr-auto-examples-linear-model-plot-theilsen-py)
[Tweedie regression on insurance claims](linear_model/plot_tweedie_regression_insurance_claims#sphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py)
Inspection
----------
Examples related to the [`sklearn.inspection`](../modules/classes#module-sklearn.inspection "sklearn.inspection") module.
[Common pitfalls in the interpretation of coefficients of linear models](inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py)
[Partial Dependence and Individual Conditional Expectation Plots](inspection/plot_partial_dependence#sphx-glr-auto-examples-inspection-plot-partial-dependence-py)
[Permutation Importance vs Random Forest Feature Importance (MDI)](inspection/plot_permutation_importance#sphx-glr-auto-examples-inspection-plot-permutation-importance-py)
[Permutation Importance with Multicollinear or Correlated Features](inspection/plot_permutation_importance_multicollinear#sphx-glr-auto-examples-inspection-plot-permutation-importance-multicollinear-py)
Kernel Approximation
--------------------
Examples concerning the [`sklearn.kernel_approximation`](../modules/classes#module-sklearn.kernel_approximation "sklearn.kernel_approximation") module.
[Scalable learning with polynomial kernel approximation](kernel_approximation/plot_scalable_poly_kernels#sphx-glr-auto-examples-kernel-approximation-plot-scalable-poly-kernels-py)
Manifold learning
-----------------
Examples concerning the [`sklearn.manifold`](../modules/classes#module-sklearn.manifold "sklearn.manifold") module.
[Comparison of Manifold Learning methods](manifold/plot_compare_methods#sphx-glr-auto-examples-manifold-plot-compare-methods-py)
[Manifold Learning methods on a severed sphere](manifold/plot_manifold_sphere#sphx-glr-auto-examples-manifold-plot-manifold-sphere-py)
[Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…](manifold/plot_lle_digits#sphx-glr-auto-examples-manifold-plot-lle-digits-py)
[Multi-dimensional scaling](manifold/plot_mds#sphx-glr-auto-examples-manifold-plot-mds-py)
[Swiss Roll And Swiss-Hole Reduction](manifold/plot_swissroll#sphx-glr-auto-examples-manifold-plot-swissroll-py)
[t-SNE: The effect of various perplexity values on the shape](manifold/plot_t_sne_perplexity#sphx-glr-auto-examples-manifold-plot-t-sne-perplexity-py)
Miscellaneous
-------------
Miscellaneous and introductory examples for scikit-learn.
[Advanced Plotting With Partial Dependence](miscellaneous/plot_partial_dependence_visualization_api#sphx-glr-auto-examples-miscellaneous-plot-partial-dependence-visualization-api-py)
[Compact estimator representations](miscellaneous/plot_changed_only_pprint_parameter#sphx-glr-auto-examples-miscellaneous-plot-changed-only-pprint-parameter-py)
[Comparing anomaly detection algorithms for outlier detection on toy datasets](miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py)
[Comparison of kernel ridge regression and SVR](miscellaneous/plot_kernel_ridge_regression#sphx-glr-auto-examples-miscellaneous-plot-kernel-ridge-regression-py)
[Displaying Pipelines](miscellaneous/plot_pipeline_display#sphx-glr-auto-examples-miscellaneous-plot-pipeline-display-py)
[Evaluation of outlier detection estimators](miscellaneous/plot_outlier_detection_bench#sphx-glr-auto-examples-miscellaneous-plot-outlier-detection-bench-py)
[Explicit feature map approximation for RBF kernels](miscellaneous/plot_kernel_approximation#sphx-glr-auto-examples-miscellaneous-plot-kernel-approximation-py)
[Face completion with a multi-output estimators](miscellaneous/plot_multioutput_face_completion#sphx-glr-auto-examples-miscellaneous-plot-multioutput-face-completion-py)
[Isotonic Regression](miscellaneous/plot_isotonic_regression#sphx-glr-auto-examples-miscellaneous-plot-isotonic-regression-py)
[Multilabel classification](miscellaneous/plot_multilabel#sphx-glr-auto-examples-miscellaneous-plot-multilabel-py)
[ROC Curve with Visualization API](miscellaneous/plot_roc_curve_visualization_api#sphx-glr-auto-examples-miscellaneous-plot-roc-curve-visualization-api-py)
[The Johnson-Lindenstrauss bound for embedding with random projections](miscellaneous/plot_johnson_lindenstrauss_bound#sphx-glr-auto-examples-miscellaneous-plot-johnson-lindenstrauss-bound-py)
[Visualizations with Display Objects](miscellaneous/plot_display_object_visualization#sphx-glr-auto-examples-miscellaneous-plot-display-object-visualization-py)
Missing Value Imputation
------------------------
Examples concerning the [`sklearn.impute`](../modules/classes#module-sklearn.impute "sklearn.impute") module.
[Imputing missing values before building an estimator](impute/plot_missing_values#sphx-glr-auto-examples-impute-plot-missing-values-py)
[Imputing missing values with variants of IterativeImputer](impute/plot_iterative_imputer_variants_comparison#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py)
Model Selection
---------------
Examples related to the [`sklearn.model_selection`](../modules/classes#module-sklearn.model_selection "sklearn.model_selection") module.
[Balance model complexity and cross-validated score](model_selection/plot_grid_search_refit_callable#sphx-glr-auto-examples-model-selection-plot-grid-search-refit-callable-py)
[Comparing randomized search and grid search for hyperparameter estimation](model_selection/plot_randomized_search#sphx-glr-auto-examples-model-selection-plot-randomized-search-py)
[Comparison between grid search and successive halving](model_selection/plot_successive_halving_heatmap#sphx-glr-auto-examples-model-selection-plot-successive-halving-heatmap-py)
[Confusion matrix](model_selection/plot_confusion_matrix#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py)
[Custom refit strategy of a grid search with cross-validation](model_selection/plot_grid_search_digits#sphx-glr-auto-examples-model-selection-plot-grid-search-digits-py)
[Demonstration of multi-metric evaluation on cross\_val\_score and GridSearchCV](model_selection/plot_multi_metric_evaluation#sphx-glr-auto-examples-model-selection-plot-multi-metric-evaluation-py)
[Detection error tradeoff (DET) curve](model_selection/plot_det#sphx-glr-auto-examples-model-selection-plot-det-py)
[Nested versus non-nested cross-validation](model_selection/plot_nested_cross_validation_iris#sphx-glr-auto-examples-model-selection-plot-nested-cross-validation-iris-py)
[Plotting Cross-Validated Predictions](model_selection/plot_cv_predict#sphx-glr-auto-examples-model-selection-plot-cv-predict-py)
[Plotting Learning Curves](model_selection/plot_learning_curve#sphx-glr-auto-examples-model-selection-plot-learning-curve-py)
[Plotting Validation Curves](model_selection/plot_validation_curve#sphx-glr-auto-examples-model-selection-plot-validation-curve-py)
[Precision-Recall](model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
[Receiver Operating Characteristic (ROC)](model_selection/plot_roc#sphx-glr-auto-examples-model-selection-plot-roc-py)
[Receiver Operating Characteristic (ROC) with cross validation](model_selection/plot_roc_crossval#sphx-glr-auto-examples-model-selection-plot-roc-crossval-py)
[Sample pipeline for text feature extraction and evaluation](model_selection/grid_search_text_feature_extraction#sphx-glr-auto-examples-model-selection-grid-search-text-feature-extraction-py)
[Statistical comparison of models using grid search](model_selection/plot_grid_search_stats#sphx-glr-auto-examples-model-selection-plot-grid-search-stats-py)
[Successive Halving Iterations](model_selection/plot_successive_halving_iterations#sphx-glr-auto-examples-model-selection-plot-successive-halving-iterations-py)
[Test with permutations the significance of a classification score](model_selection/plot_permutation_tests_for_classification#sphx-glr-auto-examples-model-selection-plot-permutation-tests-for-classification-py)
[Train error vs Test error](model_selection/plot_train_error_vs_test_error#sphx-glr-auto-examples-model-selection-plot-train-error-vs-test-error-py)
[Underfitting vs. Overfitting](model_selection/plot_underfitting_overfitting#sphx-glr-auto-examples-model-selection-plot-underfitting-overfitting-py)
[Visualizing cross-validation behavior in scikit-learn](model_selection/plot_cv_indices#sphx-glr-auto-examples-model-selection-plot-cv-indices-py)
Multioutput methods
-------------------
Examples concerning the [`sklearn.multioutput`](../modules/classes#module-sklearn.multioutput "sklearn.multioutput") module.
[Classifier Chain](multioutput/plot_classifier_chain_yeast#sphx-glr-auto-examples-multioutput-plot-classifier-chain-yeast-py)
Nearest Neighbors
-----------------
Examples concerning the [`sklearn.neighbors`](../modules/classes#module-sklearn.neighbors "sklearn.neighbors") module.
[Approximate nearest neighbors in TSNE](neighbors/approximate_nearest_neighbors#sphx-glr-auto-examples-neighbors-approximate-nearest-neighbors-py)
[Caching nearest neighbors](neighbors/plot_caching_nearest_neighbors#sphx-glr-auto-examples-neighbors-plot-caching-nearest-neighbors-py)
[Comparing Nearest Neighbors with and without Neighborhood Components Analysis](neighbors/plot_nca_classification#sphx-glr-auto-examples-neighbors-plot-nca-classification-py)
[Dimensionality Reduction with Neighborhood Components Analysis](neighbors/plot_nca_dim_reduction#sphx-glr-auto-examples-neighbors-plot-nca-dim-reduction-py)
[Kernel Density Estimate of Species Distributions](neighbors/plot_species_kde#sphx-glr-auto-examples-neighbors-plot-species-kde-py)
[Kernel Density Estimation](neighbors/plot_digits_kde_sampling#sphx-glr-auto-examples-neighbors-plot-digits-kde-sampling-py)
[Nearest Centroid Classification](neighbors/plot_nearest_centroid#sphx-glr-auto-examples-neighbors-plot-nearest-centroid-py)
[Nearest Neighbors Classification](neighbors/plot_classification#sphx-glr-auto-examples-neighbors-plot-classification-py)
[Nearest Neighbors regression](neighbors/plot_regression#sphx-glr-auto-examples-neighbors-plot-regression-py)
[Neighborhood Components Analysis Illustration](neighbors/plot_nca_illustration#sphx-glr-auto-examples-neighbors-plot-nca-illustration-py)
[Novelty detection with Local Outlier Factor (LOF)](neighbors/plot_lof_novelty_detection#sphx-glr-auto-examples-neighbors-plot-lof-novelty-detection-py)
[Outlier detection with Local Outlier Factor (LOF)](neighbors/plot_lof_outlier_detection#sphx-glr-auto-examples-neighbors-plot-lof-outlier-detection-py)
[Simple 1D Kernel Density Estimation](neighbors/plot_kde_1d#sphx-glr-auto-examples-neighbors-plot-kde-1d-py)
Neural Networks
---------------
Examples concerning the [`sklearn.neural_network`](../modules/classes#module-sklearn.neural_network "sklearn.neural_network") module.
[Compare Stochastic learning strategies for MLPClassifier](neural_networks/plot_mlp_training_curves#sphx-glr-auto-examples-neural-networks-plot-mlp-training-curves-py)
[Restricted Boltzmann Machine features for digit classification](neural_networks/plot_rbm_logistic_classification#sphx-glr-auto-examples-neural-networks-plot-rbm-logistic-classification-py)
[Varying regularization in Multi-layer Perceptron](neural_networks/plot_mlp_alpha#sphx-glr-auto-examples-neural-networks-plot-mlp-alpha-py)
[Visualization of MLP weights on MNIST](neural_networks/plot_mnist_filters#sphx-glr-auto-examples-neural-networks-plot-mnist-filters-py)
Pipelines and composite estimators
----------------------------------
Examples of how to compose transformers and pipelines from other estimators. See the [User Guide](../modules/compose#combining-estimators).
[Column Transformer with Heterogeneous Data Sources](compose/plot_column_transformer#sphx-glr-auto-examples-compose-plot-column-transformer-py)
[Column Transformer with Mixed Types](compose/plot_column_transformer_mixed_types#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py)
[Concatenating multiple feature extraction methods](compose/plot_feature_union#sphx-glr-auto-examples-compose-plot-feature-union-py)
[Effect of transforming the targets in regression model](compose/plot_transformed_target#sphx-glr-auto-examples-compose-plot-transformed-target-py)
[Pipelining: chaining a PCA and a logistic regression](compose/plot_digits_pipe#sphx-glr-auto-examples-compose-plot-digits-pipe-py)
[Selecting dimensionality reduction with Pipeline and GridSearchCV](compose/plot_compare_reduction#sphx-glr-auto-examples-compose-plot-compare-reduction-py)
Preprocessing
-------------
Examples concerning the [`sklearn.preprocessing`](../modules/classes#module-sklearn.preprocessing "sklearn.preprocessing") module.
[Compare the effect of different scalers on data with outliers](preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py)
[Demonstrating the different strategies of KBinsDiscretizer](preprocessing/plot_discretization_strategies#sphx-glr-auto-examples-preprocessing-plot-discretization-strategies-py)
[Feature discretization](preprocessing/plot_discretization_classification#sphx-glr-auto-examples-preprocessing-plot-discretization-classification-py)
[Importance of Feature Scaling](preprocessing/plot_scaling_importance#sphx-glr-auto-examples-preprocessing-plot-scaling-importance-py)
[Map data to a normal distribution](preprocessing/plot_map_data_to_normal#sphx-glr-auto-examples-preprocessing-plot-map-data-to-normal-py)
[Using KBinsDiscretizer to discretize continuous features](preprocessing/plot_discretization#sphx-glr-auto-examples-preprocessing-plot-discretization-py)
Semi Supervised Classification
------------------------------
Examples concerning the [`sklearn.semi_supervised`](../modules/classes#module-sklearn.semi_supervised "sklearn.semi_supervised") module.
[Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset](semi_supervised/plot_semi_supervised_versus_svm_iris#sphx-glr-auto-examples-semi-supervised-plot-semi-supervised-versus-svm-iris-py)
[Effect of varying threshold for self-training](semi_supervised/plot_self_training_varying_threshold#sphx-glr-auto-examples-semi-supervised-plot-self-training-varying-threshold-py)
[Label Propagation digits active learning](semi_supervised/plot_label_propagation_digits_active_learning#sphx-glr-auto-examples-semi-supervised-plot-label-propagation-digits-active-learning-py)
[Label Propagation digits: Demonstrating performance](semi_supervised/plot_label_propagation_digits#sphx-glr-auto-examples-semi-supervised-plot-label-propagation-digits-py)
[Label Propagation learning a complex structure](semi_supervised/plot_label_propagation_structure#sphx-glr-auto-examples-semi-supervised-plot-label-propagation-structure-py)
[Semi-supervised Classification on a Text Dataset](semi_supervised/plot_semi_supervised_newsgroups#sphx-glr-auto-examples-semi-supervised-plot-semi-supervised-newsgroups-py)
Support Vector Machines
-----------------------
Examples concerning the [`sklearn.svm`](../modules/classes#module-sklearn.svm "sklearn.svm") module.
[Non-linear SVM](svm/plot_svm_nonlinear#sphx-glr-auto-examples-svm-plot-svm-nonlinear-py)
[One-class SVM with non-linear kernel (RBF)](svm/plot_oneclass#sphx-glr-auto-examples-svm-plot-oneclass-py)
[Plot different SVM classifiers in the iris dataset](svm/plot_iris_svc#sphx-glr-auto-examples-svm-plot-iris-svc-py)
[Plot the support vectors in LinearSVC](svm/plot_linearsvc_support_vectors#sphx-glr-auto-examples-svm-plot-linearsvc-support-vectors-py)
[RBF SVM parameters](svm/plot_rbf_parameters#sphx-glr-auto-examples-svm-plot-rbf-parameters-py)
[SVM Margins Example](svm/plot_svm_margin#sphx-glr-auto-examples-svm-plot-svm-margin-py)
[SVM Tie Breaking Example](svm/plot_svm_tie_breaking#sphx-glr-auto-examples-svm-plot-svm-tie-breaking-py)
[SVM with custom kernel](svm/plot_custom_kernel#sphx-glr-auto-examples-svm-plot-custom-kernel-py)
[SVM-Anova: SVM with univariate feature selection](svm/plot_svm_anova#sphx-glr-auto-examples-svm-plot-svm-anova-py)
[SVM-Kernels](svm/plot_svm_kernels#sphx-glr-auto-examples-svm-plot-svm-kernels-py)
[SVM: Maximum margin separating hyperplane](svm/plot_separating_hyperplane#sphx-glr-auto-examples-svm-plot-separating-hyperplane-py)
[SVM: Separating hyperplane for unbalanced classes](svm/plot_separating_hyperplane_unbalanced#sphx-glr-auto-examples-svm-plot-separating-hyperplane-unbalanced-py)
[SVM: Weighted samples](svm/plot_weighted_samples#sphx-glr-auto-examples-svm-plot-weighted-samples-py)
[Scaling the regularization parameter for SVCs](svm/plot_svm_scale_c#sphx-glr-auto-examples-svm-plot-svm-scale-c-py)
[Support Vector Regression (SVR) using linear and non-linear kernels](svm/plot_svm_regression#sphx-glr-auto-examples-svm-plot-svm-regression-py)
Tutorial exercises
------------------
Exercises for the tutorials.
[Cross-validation on Digits Dataset Exercise](exercises/plot_cv_digits#sphx-glr-auto-examples-exercises-plot-cv-digits-py)
[Cross-validation on diabetes Dataset Exercise](exercises/plot_cv_diabetes#sphx-glr-auto-examples-exercises-plot-cv-diabetes-py)
[Digits Classification Exercise](exercises/plot_digits_classification_exercise#sphx-glr-auto-examples-exercises-plot-digits-classification-exercise-py)
[SVM Exercise](exercises/plot_iris_exercise#sphx-glr-auto-examples-exercises-plot-iris-exercise-py)
Working with text documents
---------------------------
Examples concerning the [`sklearn.feature_extraction.text`](../modules/classes#module-sklearn.feature_extraction.text "sklearn.feature_extraction.text") module.
[Classification of text documents using sparse features](text/plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py)
[Clustering text documents using k-means](text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
[FeatureHasher and DictVectorizer Comparison](text/plot_hashing_vs_dict_vectorizer#sphx-glr-auto-examples-text-plot-hashing-vs-dict-vectorizer-py)
[`Download all examples in Python source code: auto_examples_python.zip`](https://scikit-learn.org/1.1/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip)
[`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip`](https://scikit-learn.org/1.1/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip)
scikit_learn A demo of the Spectral Co-Clustering algorithm
A demo of the Spectral Co-Clustering algorithm
==============================================
This example demonstrates how to generate a dataset and bicluster it using the Spectral Co-Clustering algorithm.
The dataset is generated using the `make_biclusters` function, which creates a matrix of small values and implants biclusters with large values. The rows and columns are then shuffled and passed to the Spectral Co-Clustering algorithm. Rearranging the shuffled matrix to make biclusters contiguous shows how accurately the algorithm found the biclusters.
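As a quick orientation (this sketch is an addition and not part of the example code below), the `rows` and `columns` arrays returned by `make_biclusters` are boolean indicator matrices with one row per bicluster, and `consensus_score` compares two such sets of biclusters:
```
# A minimal sketch: make_biclusters returns boolean indicator arrays, one row
# per bicluster; comparing the true biclusters with themselves scores 1.0.
from sklearn.datasets import make_biclusters
from sklearn.metrics import consensus_score
data, rows, columns = make_biclusters(
    shape=(30, 20), n_clusters=3, noise=0.5, random_state=0
)
print(data.shape, rows.shape, columns.shape)  # (30, 20) (3, 30) (3, 20)
print(consensus_score((rows, columns), (rows, columns)))  # 1.0
```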
(Figures: the original dataset, the shuffled dataset, and the matrix rearranged to show the biclusters.)
```
consensus score: 1.000
```
```
# Author: Kemal Eren <[email protected]>
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import make_biclusters
from sklearn.cluster import SpectralCoclustering
from sklearn.metrics import consensus_score
data, rows, columns = make_biclusters(
shape=(300, 300), n_clusters=5, noise=5, shuffle=False, random_state=0
)
plt.matshow(data, cmap=plt.cm.Blues)
plt.title("Original dataset")
# shuffle clusters
rng = np.random.RandomState(0)
row_idx = rng.permutation(data.shape[0])
col_idx = rng.permutation(data.shape[1])
data = data[row_idx][:, col_idx]
plt.matshow(data, cmap=plt.cm.Blues)
plt.title("Shuffled dataset")
model = SpectralCoclustering(n_clusters=5, random_state=0)
model.fit(data)
score = consensus_score(model.biclusters_, (rows[:, row_idx], columns[:, col_idx]))
print("consensus score: {:.3f}".format(score))
fit_data = data[np.argsort(model.row_labels_)]
fit_data = fit_data[:, np.argsort(model.column_labels_)]
plt.matshow(fit_data, cmap=plt.cm.Blues)
plt.title("After biclustering; rearranged to show biclusters")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.290 seconds)
[`Download Python source code: plot_spectral_coclustering.py`](https://scikit-learn.org/1.1/_downloads/bc8849a7cb8ea7a8dc7237431b95a1cc/plot_spectral_coclustering.py)
[`Download Jupyter notebook: plot_spectral_coclustering.ipynb`](https://scikit-learn.org/1.1/_downloads/ee8e74bb66ae2967f890e19f28090b37/plot_spectral_coclustering.ipynb)
scikit_learn A demo of the Spectral Biclustering algorithm
A demo of the Spectral Biclustering algorithm
=============================================
This example demonstrates how to generate a checkerboard dataset and bicluster it using the Spectral Biclustering algorithm.
The data is generated with the `make_checkerboard` function, then shuffled and passed to the Spectral Biclustering algorithm. The rows and columns of the shuffled matrix are rearranged to show the biclusters found by the algorithm.
The outer product of the row and column label vectors shows a representation of the checkerboard structure.
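As a small aside (not part of the example code below), the outer product produces a checkerboard image because it is constant within each block of matching row and column labels; the toy label vectors here are made up for illustration:
```
# A minimal sketch with made-up labels: the outer product of the sorted row
# and column label vectors is piecewise constant, i.e. a checkerboard of blocks.
import numpy as np
row_labels = np.array([0, 0, 1, 1, 2, 2])
col_labels = np.array([0, 0, 1, 1])
print(np.outer(row_labels + 1, col_labels + 1))
```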
(Figures: the original checkerboard dataset, the shuffled dataset, the rearranged matrix, and the checkerboard structure of the rearranged data.)
```
consensus score: 1.0
```
```
# Author: Kemal Eren <[email protected]>
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import make_checkerboard
from sklearn.cluster import SpectralBiclustering
from sklearn.metrics import consensus_score
n_clusters = (4, 3)
data, rows, columns = make_checkerboard(
shape=(300, 300), n_clusters=n_clusters, noise=10, shuffle=False, random_state=0
)
plt.matshow(data, cmap=plt.cm.Blues)
plt.title("Original dataset")
# shuffle clusters
rng = np.random.RandomState(0)
row_idx = rng.permutation(data.shape[0])
col_idx = rng.permutation(data.shape[1])
data = data[row_idx][:, col_idx]
plt.matshow(data, cmap=plt.cm.Blues)
plt.title("Shuffled dataset")
model = SpectralBiclustering(n_clusters=n_clusters, method="log", random_state=0)
model.fit(data)
score = consensus_score(model.biclusters_, (rows[:, row_idx], columns[:, col_idx]))
print("consensus score: {:.1f}".format(score))
fit_data = data[np.argsort(model.row_labels_)]
fit_data = fit_data[:, np.argsort(model.column_labels_)]
plt.matshow(fit_data, cmap=plt.cm.Blues)
plt.title("After biclustering; rearranged to show biclusters")
plt.matshow(
np.outer(np.sort(model.row_labels_) + 1, np.sort(model.column_labels_) + 1),
cmap=plt.cm.Blues,
)
plt.title("Checkerboard structure of rearranged data")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.446 seconds)
[`Download Python source code: plot_spectral_biclustering.py`](https://scikit-learn.org/1.1/_downloads/ac19db97f4bbd077ccffef2736ed5f3d/plot_spectral_biclustering.py)
[`Download Jupyter notebook: plot_spectral_biclustering.ipynb`](https://scikit-learn.org/1.1/_downloads/6b00e458f3e282f1cc421f077b2fcad1/plot_spectral_biclustering.ipynb)
scikit_learn Biclustering documents with the Spectral Co-clustering algorithm
Biclustering documents with the Spectral Co-clustering algorithm
================================================================
This example demonstrates the Spectral Co-clustering algorithm on the twenty newsgroups dataset. The ‘comp.os.ms-windows.misc’ category is excluded because it contains many posts containing nothing but data.
The TF-IDF vectorized posts form a word frequency matrix, which is then biclustered using Dhillon’s Spectral Co-Clustering algorithm. The resulting document-word biclusters indicate subsets of words that are used more often in those subsets of documents.
For a few of the best biclusters, the most common document categories and the ten most important words are printed. The best biclusters are determined by their normalized cut. The best words are determined by comparing their sums inside and outside the bicluster.
For comparison, the documents are also clustered using MiniBatchKMeans. The document clusters derived from the biclusters achieve a better V-measure than clusters found by MiniBatchKMeans.
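For reference, the V-measure used in that comparison is a clustering metric that ignores the actual label values and only looks at how well the groupings match; a toy illustration (the labelings below are made up and unrelated to the run above):
```
# A minimal sketch of the V-measure with toy labelings (not the real data).
from sklearn.metrics.cluster import v_measure_score
true_labels = [0, 0, 1, 1]
print(v_measure_score(true_labels, [1, 1, 0, 0]))  # 1.0 -- a pure label permutation
print(v_measure_score(true_labels, [0, 1, 0, 1]))  # 0.0 -- clusters carry no class information
```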
```
Vectorizing...
Coclustering...
Done in 1.25s. V-measure: 0.4431
MiniBatchKMeans...
Done in 3.16s. V-measure: 0.3177
Best biclusters:
----------------
bicluster 0 : 1961 documents, 4388 words
categories : 23% talk.politics.guns, 18% talk.politics.misc, 17% sci.med
words : gun, geb, guns, banks, gordon, clinton, pitt, cdt, surrender, veal
bicluster 1 : 1269 documents, 3558 words
categories : 27% soc.religion.christian, 25% talk.politics.mideast, 24% alt.atheism
words : god, jesus, christians, sin, objective, kent, belief, christ, faith, moral
bicluster 2 : 2201 documents, 2747 words
categories : 18% comp.sys.mac.hardware, 17% comp.sys.ibm.pc.hardware, 16% comp.graphics
words : voltage, board, dsp, packages, receiver, stereo, shipping, package, compression, image
bicluster 3 : 1773 documents, 2620 words
categories : 27% rec.motorcycles, 23% rec.autos, 13% misc.forsale
words : bike, car, dod, engine, motorcycle, ride, honda, bikes, helmet, bmw
bicluster 4 : 201 documents, 1175 words
categories : 81% talk.politics.mideast, 10% alt.atheism, 7% soc.religion.christian
words : turkish, armenia, armenian, armenians, turks, petch, sera, zuma, argic, gvg47
```
```
from collections import defaultdict
import operator
from time import time
import numpy as np
from sklearn.cluster import SpectralCoclustering
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.cluster import v_measure_score
def number_normalizer(tokens):
"""Map all numeric tokens to a placeholder.
For many applications, tokens that begin with a number are not directly
useful, but the fact that such a token exists can be relevant. By applying
this form of dimensionality reduction, some methods may perform better.
"""
return ("#NUMBER" if token[0].isdigit() else token for token in tokens)
class NumberNormalizingVectorizer(TfidfVectorizer):
def build_tokenizer(self):
tokenize = super().build_tokenizer()
return lambda doc: list(number_normalizer(tokenize(doc)))
# exclude 'comp.os.ms-windows.misc'
categories = [
"alt.atheism",
"comp.graphics",
"comp.sys.ibm.pc.hardware",
"comp.sys.mac.hardware",
"comp.windows.x",
"misc.forsale",
"rec.autos",
"rec.motorcycles",
"rec.sport.baseball",
"rec.sport.hockey",
"sci.crypt",
"sci.electronics",
"sci.med",
"sci.space",
"soc.religion.christian",
"talk.politics.guns",
"talk.politics.mideast",
"talk.politics.misc",
"talk.religion.misc",
]
newsgroups = fetch_20newsgroups(categories=categories)
y_true = newsgroups.target
vectorizer = NumberNormalizingVectorizer(stop_words="english", min_df=5)
cocluster = SpectralCoclustering(
n_clusters=len(categories), svd_method="arpack", random_state=0
)
kmeans = MiniBatchKMeans(n_clusters=len(categories), batch_size=20000, random_state=0)
print("Vectorizing...")
X = vectorizer.fit_transform(newsgroups.data)
print("Coclustering...")
start_time = time()
cocluster.fit(X)
y_cocluster = cocluster.row_labels_
print(
"Done in {:.2f}s. V-measure: {:.4f}".format(
time() - start_time, v_measure_score(y_cocluster, y_true)
)
)
print("MiniBatchKMeans...")
start_time = time()
y_kmeans = kmeans.fit_predict(X)
print(
"Done in {:.2f}s. V-measure: {:.4f}".format(
time() - start_time, v_measure_score(y_kmeans, y_true)
)
)
feature_names = vectorizer.get_feature_names_out()
document_names = list(newsgroups.target_names[i] for i in newsgroups.target)
def bicluster_ncut(i):
rows, cols = cocluster.get_indices(i)
if not (np.any(rows) and np.any(cols)):
import sys
return sys.float_info.max
row_complement = np.nonzero(np.logical_not(cocluster.rows_[i]))[0]
col_complement = np.nonzero(np.logical_not(cocluster.columns_[i]))[0]
# Note: the following is identical to X[rows[:, np.newaxis],
# cols].sum() but much faster in scipy <= 0.16
weight = X[rows][:, cols].sum()
cut = X[row_complement][:, cols].sum() + X[rows][:, col_complement].sum()
return cut / weight
def most_common(d):
"""Items of a defaultdict(int) with the highest values.
Like Counter.most_common in Python >=2.7.
"""
return sorted(d.items(), key=operator.itemgetter(1), reverse=True)
bicluster_ncuts = list(bicluster_ncut(i) for i in range(len(newsgroups.target_names)))
best_idx = np.argsort(bicluster_ncuts)[:5]
print()
print("Best biclusters:")
print("----------------")
for idx, cluster in enumerate(best_idx):
n_rows, n_cols = cocluster.get_shape(cluster)
cluster_docs, cluster_words = cocluster.get_indices(cluster)
if not len(cluster_docs) or not len(cluster_words):
continue
# categories
counter = defaultdict(int)
for i in cluster_docs:
counter[document_names[i]] += 1
cat_string = ", ".join(
"{:.0f}% {}".format(float(c) / n_rows * 100, name)
for name, c in most_common(counter)[:3]
)
# words
out_of_cluster_docs = cocluster.row_labels_ != cluster
out_of_cluster_docs = np.where(out_of_cluster_docs)[0]
word_col = X[:, cluster_words]
word_scores = np.array(
word_col[cluster_docs, :].sum(axis=0)
- word_col[out_of_cluster_docs, :].sum(axis=0)
)
word_scores = word_scores.ravel()
important_words = list(
feature_names[cluster_words[i]] for i in word_scores.argsort()[:-11:-1]
)
print("bicluster {} : {} documents, {} words".format(idx, n_rows, n_cols))
print("categories : {}".format(cat_string))
print("words : {}\n".format(", ".join(important_words)))
```
**Total running time of the script:** ( 0 minutes 13.238 seconds)
[`Download Python source code: plot_bicluster_newsgroups.py`](https://scikit-learn.org/1.1/_downloads/e68419b513284db108081422c73a5667/plot_bicluster_newsgroups.py)
[`Download Jupyter notebook: plot_bicluster_newsgroups.ipynb`](https://scikit-learn.org/1.1/_downloads/3f7191b01d0103d1886c959ed7687c4d/plot_bicluster_newsgroups.ipynb)
scikit_learn Recognizing hand-written digits
Recognizing hand-written digits
===============================
This example shows how scikit-learn can be used to recognize images of hand-written digits, from 0-9.
```
# Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>
# License: BSD 3 clause
# Standard scientific Python imports
import matplotlib.pyplot as plt
# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics
from sklearn.model_selection import train_test_split
```
Digits dataset
--------------
The digits dataset consists of 8x8 pixel images of digits. The `images` attribute of the dataset stores 8x8 arrays of grayscale values for each image. We will use these arrays to visualize the first 4 images. The `target` attribute of the dataset stores the digit each image represents and this is included in the title of the 4 plots below.
Note: if we were working from image files (e.g., ‘png’ files), we would load them using [`matplotlib.pyplot.imread`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imread.html#matplotlib.pyplot.imread "(in Matplotlib v3.6.0)").
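For instance, a minimal sketch of that alternative path (the file name `digit.png` is hypothetical and not shipped with this example):
```
# A minimal sketch, assuming a hypothetical image file "digit.png";
# plt.imread returns the pixel values as a NumPy array.
import matplotlib.pyplot as plt
image = plt.imread("digit.png")
print(image.shape, image.dtype)
```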
```
digits = datasets.load_digits()
_, axes = plt.subplots(nrows=1, ncols=4, figsize=(10, 3))
for ax, image, label in zip(axes, digits.images, digits.target):
ax.set_axis_off()
ax.imshow(image, cmap=plt.cm.gray_r, interpolation="nearest")
ax.set_title("Training: %i" % label)
```
Classification
--------------
To apply a classifier on this data, we need to flatten the images, turning each 2-D array of grayscale values from shape `(8, 8)` into shape `(64,)`. Subsequently, the entire dataset will be of shape `(n_samples, n_features)`, where `n_samples` is the number of images and `n_features` is the total number of pixels in each image.
We can then split the data into train and test subsets and fit a support vector classifier on the train samples. The fitted classifier can subsequently be used to predict the value of the digit for the samples in the test subset.
```
# flatten the images
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Create a classifier: a support vector classifier
clf = svm.SVC(gamma=0.001)
# Split data into 50% train and 50% test subsets
X_train, X_test, y_train, y_test = train_test_split(
data, digits.target, test_size=0.5, shuffle=False
)
# Learn the digits on the train subset
clf.fit(X_train, y_train)
# Predict the value of the digit on the test subset
predicted = clf.predict(X_test)
```
Below we visualize the first 4 test samples and show their predicted digit value in the title.
```
_, axes = plt.subplots(nrows=1, ncols=4, figsize=(10, 3))
for ax, image, prediction in zip(axes, X_test, predicted):
ax.set_axis_off()
image = image.reshape(8, 8)
ax.imshow(image, cmap=plt.cm.gray_r, interpolation="nearest")
ax.set_title(f"Prediction: {prediction}")
```
[`classification_report`](../../modules/generated/sklearn.metrics.classification_report#sklearn.metrics.classification_report "sklearn.metrics.classification_report") builds a text report showing the main classification metrics.
```
print(
f"Classification report for classifier {clf}:\n"
f"{metrics.classification_report(y_test, predicted)}\n"
)
```
```
Classification report for classifier SVC(gamma=0.001):
precision recall f1-score support
0 1.00 0.99 0.99 88
1 0.99 0.97 0.98 91
2 0.99 0.99 0.99 86
3 0.98 0.87 0.92 91
4 0.99 0.96 0.97 92
5 0.95 0.97 0.96 91
6 0.99 0.99 0.99 91
7 0.96 0.99 0.97 89
8 0.94 1.00 0.97 88
9 0.93 0.98 0.95 92
accuracy 0.97 899
macro avg 0.97 0.97 0.97 899
weighted avg 0.97 0.97 0.97 899
```
We can also plot a [confusion matrix](../../modules/model_evaluation#confusion-matrix) of the true digit values and the predicted digit values.
```
disp = metrics.ConfusionMatrixDisplay.from_predictions(y_test, predicted)
disp.figure_.suptitle("Confusion Matrix")
print(f"Confusion matrix:\n{disp.confusion_matrix}")
plt.show()
```
```
Confusion matrix:
[[87 0 0 0 1 0 0 0 0 0]
[ 0 88 1 0 0 0 0 0 1 1]
[ 0 0 85 1 0 0 0 0 0 0]
[ 0 0 0 79 0 3 0 4 5 0]
[ 0 0 0 0 88 0 0 0 0 4]
[ 0 0 0 0 0 88 1 0 0 2]
[ 0 1 0 0 0 0 90 0 0 0]
[ 0 0 0 0 0 1 0 88 0 0]
[ 0 0 0 0 0 0 0 0 88 0]
[ 0 0 0 1 0 1 0 0 0 90]]
```
**Total running time of the script:** ( 0 minutes 0.357 seconds)
[`Download Python source code: plot_digits_classification.py`](https://scikit-learn.org/1.1/_downloads/1a55101a8e49ab5d3213dadb31332045/plot_digits_classification.py)
[`Download Jupyter notebook: plot_digits_classification.ipynb`](https://scikit-learn.org/1.1/_downloads/eb87d6211b2c0a7c2dc460a9e28b1f6a/plot_digits_classification.ipynb)
scikit_learn Linear and Quadratic Discriminant Analysis with covariance ellipsoid
Linear and Quadratic Discriminant Analysis with covariance ellipsoid
====================================================================
This example plots the covariance ellipsoids of each class and the decision boundary learned by LDA and QDA. The ellipsoids display twice the standard deviation for each class. With LDA, the standard deviation is the same for all classes, while with QDA each class has its own standard deviation.
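That difference is visible directly on the fitted estimators; the following sketch uses made-up data and is separate from the example code below:
```
# A minimal sketch on toy data: LDA stores one covariance matrix shared by all
# classes, while QDA stores one covariance matrix per class.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = (X[:, 0] > 0).astype(int)
lda = LinearDiscriminantAnalysis(solver="svd", store_covariance=True).fit(X, y)
qda = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)
print(lda.covariance_.shape)                           # (2, 2): shared by all classes
print(len(qda.covariance_), qda.covariance_[0].shape)  # 2 matrices, one per class
```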
Colormap
--------
```
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import colors
cmap = colors.LinearSegmentedColormap(
"red_blue_classes",
{
"red": [(0, 1, 1), (1, 0.7, 0.7)],
"green": [(0, 0.7, 0.7), (1, 0.7, 0.7)],
"blue": [(0, 0.7, 0.7), (1, 1, 1)],
},
)
plt.cm.register_cmap(cmap=cmap)
```
Datasets generation functions
-----------------------------
```
import numpy as np
def dataset_fixed_cov():
"""Generate 2 Gaussians samples with the same covariance matrix"""
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0.0, -0.23], [0.83, 0.23]])
X = np.r_[
np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C) + np.array([1, 1]),
]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
def dataset_cov():
"""Generate 2 Gaussians samples with different covariance matrices"""
n, dim = 300, 2
np.random.seed(0)
C = np.array([[0.0, -1.0], [2.5, 0.7]]) * 2.0
X = np.r_[
np.dot(np.random.randn(n, dim), C),
np.dot(np.random.randn(n, dim), C.T) + np.array([1, 4]),
]
y = np.hstack((np.zeros(n), np.ones(n)))
return X, y
```
Plot functions
--------------
```
from scipy import linalg
def plot_data(lda, X, y, y_pred, fig_index):
splot = plt.subplot(2, 2, fig_index)
if fig_index == 1:
plt.title("Linear Discriminant Analysis")
plt.ylabel("Data with\n fixed covariance")
elif fig_index == 2:
plt.title("Quadratic Discriminant Analysis")
elif fig_index == 3:
plt.ylabel("Data with\n varying covariances")
tp = y == y_pred # True Positive
tp0, tp1 = tp[y == 0], tp[y == 1]
X0, X1 = X[y == 0], X[y == 1]
X0_tp, X0_fp = X0[tp0], X0[~tp0]
X1_tp, X1_fp = X1[tp1], X1[~tp1]
# class 0: dots
plt.scatter(X0_tp[:, 0], X0_tp[:, 1], marker=".", color="red")
plt.scatter(X0_fp[:, 0], X0_fp[:, 1], marker="x", s=20, color="#990000") # dark red
# class 1: dots
plt.scatter(X1_tp[:, 0], X1_tp[:, 1], marker=".", color="blue")
plt.scatter(
X1_fp[:, 0], X1_fp[:, 1], marker="x", s=20, color="#000099"
) # dark blue
# class 0 and 1 : areas
nx, ny = 200, 100
x_min, x_max = plt.xlim()
y_min, y_max = plt.ylim()
xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx), np.linspace(y_min, y_max, ny))
Z = lda.predict_proba(np.c_[xx.ravel(), yy.ravel()])
Z = Z[:, 1].reshape(xx.shape)
plt.pcolormesh(
xx, yy, Z, cmap="red_blue_classes", norm=colors.Normalize(0.0, 1.0), zorder=0
)
plt.contour(xx, yy, Z, [0.5], linewidths=2.0, colors="white")
# means
plt.plot(
lda.means_[0][0],
lda.means_[0][1],
"*",
color="yellow",
markersize=15,
markeredgecolor="grey",
)
plt.plot(
lda.means_[1][0],
lda.means_[1][1],
"*",
color="yellow",
markersize=15,
markeredgecolor="grey",
)
return splot
def plot_ellipse(splot, mean, cov, color):
v, w = linalg.eigh(cov)
u = w[0] / linalg.norm(w[0])
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
# filled Gaussian at 2 standard deviation
ell = mpl.patches.Ellipse(
mean,
2 * v[0] ** 0.5,
2 * v[1] ** 0.5,
180 + angle,
facecolor=color,
edgecolor="black",
linewidth=2,
)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.2)
splot.add_artist(ell)
splot.set_xticks(())
splot.set_yticks(())
def plot_lda_cov(lda, splot):
plot_ellipse(splot, lda.means_[0], lda.covariance_, "red")
plot_ellipse(splot, lda.means_[1], lda.covariance_, "blue")
def plot_qda_cov(qda, splot):
plot_ellipse(splot, qda.means_[0], qda.covariance_[0], "red")
plot_ellipse(splot, qda.means_[1], qda.covariance_[1], "blue")
```
Plot
----
```
plt.figure(figsize=(10, 8), facecolor="white")
plt.suptitle(
"Linear Discriminant Analysis vs Quadratic Discriminant Analysis",
y=0.98,
fontsize=15,
)
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
for i, (X, y) in enumerate([dataset_fixed_cov(), dataset_cov()]):
# Linear Discriminant Analysis
lda = LinearDiscriminantAnalysis(solver="svd", store_covariance=True)
y_pred = lda.fit(X, y).predict(X)
splot = plot_data(lda, X, y, y_pred, fig_index=2 * i + 1)
plot_lda_cov(lda, splot)
plt.axis("tight")
# Quadratic Discriminant Analysis
qda = QuadraticDiscriminantAnalysis(store_covariance=True)
y_pred = qda.fit(X, y).predict(X)
splot = plot_data(qda, X, y, y_pred, fig_index=2 * i + 2)
plot_qda_cov(qda, splot)
plt.axis("tight")
plt.tight_layout()
plt.subplots_adjust(top=0.92)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.270 seconds)
[`Download Python source code: plot_lda_qda.py`](https://scikit-learn.org/1.1/_downloads/d7c704916c145b9b383b87c04245efda/plot_lda_qda.py)
[`Download Jupyter notebook: plot_lda_qda.ipynb`](https://scikit-learn.org/1.1/_downloads/1a2ab00bbfd4eb80e0afca13d83e2a14/plot_lda_qda.ipynb)
| programming_docs |
scikit_learn Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification
Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification
===========================================================================
This example illustrates how the Ledoit-Wolf and Oracle Approximating Shrinkage (OAS) estimators of covariance can improve classification.
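Both estimators shrink the empirical covariance towards a scaled identity matrix and pick the amount of shrinkage analytically; a small self-contained sketch on random data (an addition, not part of the example code below):
```
# A minimal sketch on toy data: both estimators choose a shrinkage coefficient
# between 0 (empirical covariance) and 1 (scaled identity) automatically.
import numpy as np
from sklearn.covariance import LedoitWolf, OAS
rng = np.random.RandomState(0)
X = rng.randn(30, 10)  # few samples relative to the number of features
print("Ledoit-Wolf shrinkage:", LedoitWolf().fit(X).shrinkage_)
print("OAS shrinkage:", OAS().fit(X).shrinkage_)
```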
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.covariance import OAS
n_train = 20 # samples for training
n_test = 200 # samples for testing
n_averages = 50 # how often to repeat classification
n_features_max = 75 # maximum number of features
step = 4 # step size for the calculation
def generate_data(n_samples, n_features):
"""Generate random blob-ish data with noisy features.
This returns an array of input data with shape `(n_samples, n_features)`
and an array of `n_samples` target labels.
Only one feature contains discriminative information, the other features
contain only noise.
"""
X, y = make_blobs(n_samples=n_samples, n_features=1, centers=[[-2], [2]])
# add non-discriminative features
if n_features > 1:
X = np.hstack([X, np.random.randn(n_samples, n_features - 1)])
return X, y
acc_clf1, acc_clf2, acc_clf3 = [], [], []
n_features_range = range(1, n_features_max + 1, step)
for n_features in n_features_range:
score_clf1, score_clf2, score_clf3 = 0, 0, 0
for _ in range(n_averages):
X, y = generate_data(n_train, n_features)
clf1 = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
clf2 = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=None).fit(X, y)
oa = OAS(store_precision=False, assume_centered=False)
clf3 = LinearDiscriminantAnalysis(solver="lsqr", covariance_estimator=oa).fit(
X, y
)
X, y = generate_data(n_test, n_features)
score_clf1 += clf1.score(X, y)
score_clf2 += clf2.score(X, y)
score_clf3 += clf3.score(X, y)
acc_clf1.append(score_clf1 / n_averages)
acc_clf2.append(score_clf2 / n_averages)
acc_clf3.append(score_clf3 / n_averages)
features_samples_ratio = np.array(n_features_range) / n_train
plt.plot(
features_samples_ratio,
acc_clf1,
linewidth=2,
label="Linear Discriminant Analysis with Ledoit Wolf",
color="navy",
linestyle="dashed",
)
plt.plot(
features_samples_ratio,
acc_clf2,
linewidth=2,
label="Linear Discriminant Analysis",
color="gold",
linestyle="solid",
)
plt.plot(
features_samples_ratio,
acc_clf3,
linewidth=2,
label="Linear Discriminant Analysis with OAS",
color="red",
linestyle="dotted",
)
plt.xlabel("n_features / n_samples")
plt.ylabel("Classification accuracy")
plt.legend(loc=3, prop={"size": 12})
plt.suptitle(
"Linear Discriminant Analysis vs. "
+ "\n"
+ "Shrinkage Linear Discriminant Analysis vs. "
+ "\n"
+ "OAS Linear Discriminant Analysis (1 discriminative feature)"
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.476 seconds)
[`Download Python source code: plot_lda.py`](https://scikit-learn.org/1.1/_downloads/14f620cd922ca2c9a39ae5784034dd0d/plot_lda.py)
[`Download Jupyter notebook: plot_lda.ipynb`](https://scikit-learn.org/1.1/_downloads/acc912c1f80e1cb0e32675b5f7686075/plot_lda.ipynb)
scikit_learn Classifier comparison
Classifier comparison
=====================
A comparison of several classifiers in scikit-learn on synthetic datasets. The point of this example is to illustrate the nature of decision boundaries of different classifiers. This should be taken with a grain of salt, as the intuition conveyed by these examples does not necessarily carry over to real datasets.
Particularly in high-dimensional spaces, data can more easily be separated linearly and the simplicity of classifiers such as naive Bayes and linear SVMs might lead to better generalization than is achieved by other classifiers.
The plots show training points in solid colors and testing points semi-transparent. The lower right shows the classification accuracy on the test set.

```
# Code source: Gaël Varoquaux
# Andreas Müller
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.inspection import DecisionBoundaryDisplay
names = [
"Nearest Neighbors",
"Linear SVM",
"RBF SVM",
"Gaussian Process",
"Decision Tree",
"Random Forest",
"Neural Net",
"AdaBoost",
"Naive Bayes",
"QDA",
]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0)),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, max_iter=1000),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis(),
]
X, y = make_classification(
n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1
)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [
make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable,
]
figure = plt.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=42
)
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(["#FF0000", "#0000FF"])
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
if ds_cnt == 0:
ax.set_title("Input data")
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k")
# Plot the testing points
ax.scatter(
X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k"
)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
DecisionBoundaryDisplay.from_estimator(
clf, X, cmap=cm, alpha=0.8, ax=ax, eps=0.5
)
# Plot the training points
ax.scatter(
X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k"
)
# Plot the testing points
ax.scatter(
X_test[:, 0],
X_test[:, 1],
c=y_test,
cmap=cm_bright,
edgecolors="k",
alpha=0.6,
)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
if ds_cnt == 0:
ax.set_title(name)
ax.text(
x_max - 0.3,
y_min + 0.3,
("%.2f" % score).lstrip("0"),
size=15,
horizontalalignment="right",
)
i += 1
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 2.103 seconds)
[`Download Python source code: plot_classifier_comparison.py`](https://scikit-learn.org/1.1/_downloads/2da0534ab0e0c8241033bcc2d912e419/plot_classifier_comparison.py)
[`Download Jupyter notebook: plot_classifier_comparison.ipynb`](https://scikit-learn.org/1.1/_downloads/3438aba177365cb595921cf18806dfa7/plot_classifier_comparison.ipynb)
scikit_learn Plot classification probability
Plot classification probability
===============================
Plot the classification probability for different classifiers. We use a 3 class dataset, and we classify it with a Support Vector classifier, L1 and L2 penalized logistic regression with either a One-Vs-Rest or multinomial setting, and Gaussian process classification.
Linear SVC is not a probabilistic classifier by default but it has a built-in calibration option enabled in this example (`probability=True`).
The logistic regression with One-Vs-Rest is not a multiclass classifier out of the box. As a result it has more trouble in separating class 2 and 3 than the other estimators.
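As a side note (not used in the example code below), another way to obtain class probabilities from a linear SVM is to wrap the estimator in `CalibratedClassifierCV` instead of relying on `probability=True`:
```
# A minimal sketch: probability estimates for a linear SVM via cross-validated
# calibration, as an alternative to SVC(probability=True).
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC
X, y = load_iris(return_X_y=True)
clf = CalibratedClassifierCV(LinearSVC(max_iter=10000)).fit(X, y)
print(clf.predict_proba(X[:3]).round(3))
```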
```
Accuracy (train) for L1 logistic: 83.3%
Accuracy (train) for L2 logistic (Multinomial): 82.7%
Accuracy (train) for L2 logistic (OvR): 79.3%
Accuracy (train) for Linear SVC: 82.0%
Accuracy (train) for GPC: 82.7%
```
```
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, 0:2] # we only take the first two features for visualization
y = iris.target
n_features = X.shape[1]
C = 10
kernel = 1.0 * RBF([1.0, 1.0]) # for GPC
# Create different classifiers.
classifiers = {
"L1 logistic": LogisticRegression(
C=C, penalty="l1", solver="saga", multi_class="multinomial", max_iter=10000
),
"L2 logistic (Multinomial)": LogisticRegression(
C=C, penalty="l2", solver="saga", multi_class="multinomial", max_iter=10000
),
"L2 logistic (OvR)": LogisticRegression(
C=C, penalty="l2", solver="saga", multi_class="ovr", max_iter=10000
),
"Linear SVC": SVC(kernel="linear", C=C, probability=True, random_state=0),
"GPC": GaussianProcessClassifier(kernel),
}
n_classifiers = len(classifiers)
plt.figure(figsize=(3 * 2, n_classifiers * 2))
plt.subplots_adjust(bottom=0.2, top=0.95)
xx = np.linspace(3, 9, 100)
yy = np.linspace(1, 5, 100).T
xx, yy = np.meshgrid(xx, yy)
Xfull = np.c_[xx.ravel(), yy.ravel()]
for index, (name, classifier) in enumerate(classifiers.items()):
classifier.fit(X, y)
y_pred = classifier.predict(X)
accuracy = accuracy_score(y, y_pred)
print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
# View probabilities:
probas = classifier.predict_proba(Xfull)
n_classes = np.unique(y_pred).size
for k in range(n_classes):
plt.subplot(n_classifiers, n_classes, index * n_classes + k + 1)
plt.title("Class %d" % k)
if k == 0:
plt.ylabel(name)
imshow_handle = plt.imshow(
probas[:, k].reshape((100, 100)), extent=(3, 9, 1, 5), origin="lower"
)
plt.xticks(())
plt.yticks(())
idx = y_pred == k
if idx.any():
plt.scatter(X[idx, 0], X[idx, 1], marker="o", c="w", edgecolor="k")
ax = plt.axes([0.15, 0.04, 0.7, 0.05])
plt.title("Probability")
plt.colorbar(imshow_handle, cax=ax, orientation="horizontal")
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.000 seconds)
[`Download Python source code: plot_classification_probability.py`](https://scikit-learn.org/1.1/_downloads/f42d0af747657c8328f34a4238d49800/plot_classification_probability.py)
[`Download Jupyter notebook: plot_classification_probability.ipynb`](https://scikit-learn.org/1.1/_downloads/07960f9087d379e9d0da6350d6ee3f41/plot_classification_probability.ipynb)
scikit_learn Plot the decision surface of decision trees trained on the iris dataset
Plot the decision surface of decision trees trained on the iris dataset
=======================================================================
Plot the decision surface of a decision tree trained on pairs of features of the iris dataset.
See [decision tree](../../modules/tree#tree) for more information on the estimator.
For each pair of iris features, the decision tree learns decision boundaries made of combinations of simple thresholding rules inferred from the training samples.
We also show the tree structure of a model built on all of the features.
First load the copy of the Iris dataset shipped with scikit-learn:
```
from sklearn.datasets import load_iris
iris = load_iris()
```
Display the decision functions of trees trained on all pairs of features.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import DecisionBoundaryDisplay
# Parameters
n_classes = 3
plot_colors = "ryb"
plot_step = 0.02
for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]):
# We only take the two corresponding features
X = iris.data[:, pair]
y = iris.target
# Train
clf = DecisionTreeClassifier().fit(X, y)
# Plot the decision boundary
ax = plt.subplot(2, 3, pairidx + 1)
plt.tight_layout(h_pad=0.5, w_pad=0.5, pad=2.5)
DecisionBoundaryDisplay.from_estimator(
clf,
X,
cmap=plt.cm.RdYlBu,
response_method="predict",
ax=ax,
xlabel=iris.feature_names[pair[0]],
ylabel=iris.feature_names[pair[1]],
)
# Plot the training points
for i, color in zip(range(n_classes), plot_colors):
idx = np.where(y == i)
plt.scatter(
X[idx, 0],
X[idx, 1],
c=color,
label=iris.target_names[i],
cmap=plt.cm.RdYlBu,
edgecolor="black",
s=15,
)
plt.suptitle("Decision surface of decision trees trained on pairs of features")
plt.legend(loc="lower right", borderpad=0, handletextpad=0)
_ = plt.axis("tight")
```
Display the structure of a single decision tree trained on all the features together.
```
from sklearn.tree import plot_tree
plt.figure()
clf = DecisionTreeClassifier().fit(iris.data, iris.target)
plot_tree(clf, filled=True)
plt.title("Decision tree trained on all the iris features")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.636 seconds)
[`Download Python source code: plot_iris_dtc.py`](https://scikit-learn.org/1.1/_downloads/00ae629d652473137a3905a5e08ea815/plot_iris_dtc.py)
[`Download Jupyter notebook: plot_iris_dtc.ipynb`](https://scikit-learn.org/1.1/_downloads/bc4cacb86f284cd0b3913166a69c9fb2/plot_iris_dtc.ipynb)
scikit_learn Decision Tree Regression
Decision Tree Regression
========================
A 1D regression with a decision tree.
A [decision tree](../../modules/tree#tree) is used to fit a sine curve with additional noisy observations. As a result, it learns local linear regressions approximating the sine curve.
We can see that if the maximum depth of the tree (controlled by the `max_depth` parameter) is set too high, the tree learns overly fine details of the training data and fits the noise, i.e. it overfits.
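One way to choose `max_depth` instead of judging the fit by eye is cross-validation; the following sketch is an addition to the example, reuses the same kind of noisy sine data generated below, and the depth values tried are arbitrary:
```
# A minimal sketch (not part of the original example): comparing tree depths
# with shuffled 5-fold cross-validation on the same kind of noisy sine data.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for depth in (2, 5, 8):
    scores = cross_val_score(DecisionTreeRegressor(max_depth=depth), X, y, cv=cv)
    print(f"max_depth={depth}: mean R^2 = {scores.mean():.3f}")
```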
```
# Import the necessary modules and libraries
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_1.fit(X, y)
regr_2.fit(X, y)
# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
# Plot the results
plt.figure()
plt.scatter(X, y, s=20, edgecolor="black", c="darkorange", label="data")
plt.plot(X_test, y_1, color="cornflowerblue", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, color="yellowgreen", label="max_depth=5", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.074 seconds)
[`Download Python source code: plot_tree_regression.py`](https://scikit-learn.org/1.1/_downloads/1fda803e152cdabd5ad21330d93e1258/plot_tree_regression.py)
[`Download Jupyter notebook: plot_tree_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/ad1c1da0830674700828e658b74f6cf6/plot_tree_regression.ipynb)
scikit_learn Post pruning decision trees with cost complexity pruning
Post pruning decision trees with cost complexity pruning
========================================================
The [`DecisionTreeClassifier`](../../modules/generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") provides parameters such as `min_samples_leaf` and `max_depth` to prevent a tree from overfitting. Cost complexity pruning provides another option to control the size of a tree. In [`DecisionTreeClassifier`](../../modules/generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier"), this pruning technique is parameterized by the cost complexity parameter, `ccp_alpha`. Greater values of `ccp_alpha` increase the number of nodes pruned. Here we only show the effect of `ccp_alpha` on regularizing the trees and how to choose a `ccp_alpha` based on validation scores.
See also [Minimal Cost-Complexity Pruning](../../modules/tree#minimal-cost-complexity-pruning) for details on pruning.
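In practice, a common way to pick `ccp_alpha` is to cross-validate over the candidate values returned by the pruning path; the following sketch is an addition and uses `GridSearchCV` rather than the manual train/test comparison carried out in this example:
```
# A minimal sketch (not part of the example below): choose ccp_alpha by
# cross-validated grid search over the candidates from the pruning path.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
X, y = load_breast_cancer(return_X_y=True)
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"ccp_alpha": path.ccp_alphas},
    cv=5,
).fit(X, y)
print(search.best_params_)
```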
```
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
```
Total impurity of leaves vs effective alphas of pruned tree
-----------------------------------------------------------
Minimal cost complexity pruning recursively finds the node with the “weakest link”. The weakest link is characterized by an effective alpha, where the nodes with the smallest effective alpha are pruned first. To get an idea of what values of `ccp_alpha` could be appropriate, scikit-learn provides [`DecisionTreeClassifier.cost_complexity_pruning_path`](../../modules/generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier.cost_complexity_pruning_path "sklearn.tree.DecisionTreeClassifier.cost_complexity_pruning_path") that returns the effective alphas and the corresponding total leaf impurities at each step of the pruning process. As alpha increases, more of the tree is pruned, which increases the total impurity of its leaves.
```
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0)
path = clf.cost_complexity_pruning_path(X_train, y_train)
ccp_alphas, impurities = path.ccp_alphas, path.impurities
```
In the following plot, the maximum effective alpha value is removed, because it is the trivial tree with only one node.
```
fig, ax = plt.subplots()
ax.plot(ccp_alphas[:-1], impurities[:-1], marker="o", drawstyle="steps-post")
ax.set_xlabel("effective alpha")
ax.set_ylabel("total impurity of leaves")
ax.set_title("Total Impurity vs effective alpha for training set")
```
```
Text(0.5, 1.0, 'Total Impurity vs effective alpha for training set')
```
Next, we train a decision tree using the effective alphas. The last value in `ccp_alphas` is the alpha value that prunes the whole tree, leaving the tree, `clfs[-1]`, with one node.
```
clfs = []
for ccp_alpha in ccp_alphas:
clf = DecisionTreeClassifier(random_state=0, ccp_alpha=ccp_alpha)
clf.fit(X_train, y_train)
clfs.append(clf)
print(
"Number of nodes in the last tree is: {} with ccp_alpha: {}".format(
clfs[-1].tree_.node_count, ccp_alphas[-1]
)
)
```
```
Number of nodes in the last tree is: 1 with ccp_alpha: 0.3272984419327777
```
For the remainder of this example, we remove the last element in `clfs` and `ccp_alphas`, because it is the trivial tree with only one node. Here we show that the number of nodes and tree depth decreases as alpha increases.
```
clfs = clfs[:-1]
ccp_alphas = ccp_alphas[:-1]
node_counts = [clf.tree_.node_count for clf in clfs]
depth = [clf.tree_.max_depth for clf in clfs]
fig, ax = plt.subplots(2, 1)
ax[0].plot(ccp_alphas, node_counts, marker="o", drawstyle="steps-post")
ax[0].set_xlabel("alpha")
ax[0].set_ylabel("number of nodes")
ax[0].set_title("Number of nodes vs alpha")
ax[1].plot(ccp_alphas, depth, marker="o", drawstyle="steps-post")
ax[1].set_xlabel("alpha")
ax[1].set_ylabel("depth of tree")
ax[1].set_title("Depth vs alpha")
fig.tight_layout()
```
Accuracy vs alpha for training and testing sets
-----------------------------------------------
With `ccp_alpha` set to zero and the other parameters of [`DecisionTreeClassifier`](../../modules/generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") left at their defaults, the tree overfits, leading to 100% training accuracy and 88% testing accuracy. As alpha increases, more of the tree is pruned, creating a decision tree that generalizes better. In this example, setting `ccp_alpha=0.015` maximizes the testing accuracy.
```
train_scores = [clf.score(X_train, y_train) for clf in clfs]
test_scores = [clf.score(X_test, y_test) for clf in clfs]
fig, ax = plt.subplots()
ax.set_xlabel("alpha")
ax.set_ylabel("accuracy")
ax.set_title("Accuracy vs alpha for training and testing sets")
ax.plot(ccp_alphas, train_scores, marker="o", label="train", drawstyle="steps-post")
ax.plot(ccp_alphas, test_scores, marker="o", label="test", drawstyle="steps-post")
ax.legend()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.315 seconds)
[`Download Python source code: plot_cost_complexity_pruning.py`](https://scikit-learn.org/1.1/_downloads/93c278871f99c92a918dac30ee44a6a3/plot_cost_complexity_pruning.py)
[`Download Jupyter notebook: plot_cost_complexity_pruning.ipynb`](https://scikit-learn.org/1.1/_downloads/29998264311e172e4afe243096ca2c93/plot_cost_complexity_pruning.ipynb)
| programming_docs |
scikit_learn Understanding the decision tree structure
Understanding the decision tree structure
=========================================
The decision tree structure can be analysed to gain further insight on the relation between the features and the target to predict. In this example, we show how to retrieve:
* the binary tree structure;
* the depth of each node and whether or not it’s a leaf;
* the nodes that were reached by a sample using the `decision_path` method;
* the leaf that was reached by a sample using the apply method;
* the rules that were used to predict a sample;
* the decision path shared by a group of samples.
```
import numpy as np
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
```
Train tree classifier
---------------------
First, we fit a [`DecisionTreeClassifier`](../../modules/generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") using the [`load_iris`](../../modules/generated/sklearn.datasets.load_iris#sklearn.datasets.load_iris "sklearn.datasets.load_iris") dataset.
```
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0)
clf.fit(X_train, y_train)
```
```
DecisionTreeClassifier(max_leaf_nodes=3, random_state=0)
```
Tree structure
--------------
The decision tree classifier has an attribute called `tree_` which allows access to low-level attributes such as `node_count`, the total number of nodes, and `max_depth`, the maximal depth of the tree. It also stores the entire binary tree structure, represented as a number of parallel arrays. The i-th element of each array holds information about node `i`. Node 0 is the tree’s root. Some of the arrays only apply to either leaves or split nodes; in that case the values for nodes of the other type are arbitrary. For example, the arrays `feature` and `threshold` only apply to split nodes, so their values for leaf nodes are arbitrary.
Among these arrays, we have:
* `children_left[i]`: id of the left child of node `i` or -1 if leaf node
* `children_right[i]`: id of the right child of node `i` or -1 if leaf node
* `feature[i]`: feature used for splitting node `i`
* `threshold[i]`: threshold value at node `i`
* `n_node_samples[i]`: the number of training samples reaching node `i`
* `impurity[i]`: the impurity at node `i`
Using the arrays, we can traverse the tree structure to compute various properties. Below, we will compute the depth of each node and whether or not it is a leaf.
```
n_nodes = clf.tree_.node_count
children_left = clf.tree_.children_left
children_right = clf.tree_.children_right
feature = clf.tree_.feature
threshold = clf.tree_.threshold
node_depth = np.zeros(shape=n_nodes, dtype=np.int64)
is_leaves = np.zeros(shape=n_nodes, dtype=bool)
stack = [(0, 0)] # start with the root node id (0) and its depth (0)
while len(stack) > 0:
# `pop` ensures each node is only visited once
node_id, depth = stack.pop()
node_depth[node_id] = depth
# If the left and right child of a node is not the same we have a split
# node
is_split_node = children_left[node_id] != children_right[node_id]
# If a split node, append left and right children and depth to `stack`
# so we can loop through them
if is_split_node:
stack.append((children_left[node_id], depth + 1))
stack.append((children_right[node_id], depth + 1))
else:
is_leaves[node_id] = True
print(
"The binary tree structure has {n} nodes and has "
"the following tree structure:\n".format(n=n_nodes)
)
for i in range(n_nodes):
if is_leaves[i]:
print(
"{space}node={node} is a leaf node.".format(
space=node_depth[i] * "\t", node=i
)
)
else:
print(
"{space}node={node} is a split node: "
"go to node {left} if X[:, {feature}] <= {threshold} "
"else to node {right}.".format(
space=node_depth[i] * "\t",
node=i,
left=children_left[i],
feature=feature[i],
threshold=threshold[i],
right=children_right[i],
)
)
```
```
The binary tree structure has 5 nodes and has the following tree structure:
node=0 is a split node: go to node 1 if X[:, 3] <= 0.800000011920929 else to node 2.
node=1 is a leaf node.
node=2 is a split node: go to node 3 if X[:, 2] <= 4.950000047683716 else to node 4.
node=3 is a leaf node.
node=4 is a leaf node.
```
We can compare the above output to the plot of the decision tree.
```
tree.plot_tree(clf)
plt.show()
```
Decision path
-------------
We can also retrieve the decision path of samples of interest. The `decision_path` method outputs an indicator matrix that allows us to retrieve the nodes the samples of interest traverse through. A non zero element in the indicator matrix at position `(i, j)` indicates that the sample `i` goes through the node `j`. Or, for one sample `i`, the positions of the non zero elements in row `i` of the indicator matrix designate the ids of the nodes that sample goes through.
The leaf ids reached by samples of interest can be obtained with the `apply` method. This returns an array of the node ids of the leaves reached by each sample of interest. Using the leaf ids and the `decision_path` we can obtain the splitting conditions that were used to predict a sample or a group of samples. First, let’s do it for one sample. Note that `node_indicator` is a sparse matrix.
```
node_indicator = clf.decision_path(X_test)
leaf_id = clf.apply(X_test)
sample_id = 0
# obtain ids of the nodes `sample_id` goes through, i.e., row `sample_id`
node_index = node_indicator.indices[
node_indicator.indptr[sample_id] : node_indicator.indptr[sample_id + 1]
]
print("Rules used to predict sample {id}:\n".format(id=sample_id))
for node_id in node_index:
# continue to the next node if it is a leaf node
if leaf_id[sample_id] == node_id:
continue
# check if value of the split feature for sample 0 is below threshold
if X_test[sample_id, feature[node_id]] <= threshold[node_id]:
threshold_sign = "<="
else:
threshold_sign = ">"
print(
"decision node {node} : (X_test[{sample}, {feature}] = {value}) "
"{inequality} {threshold})".format(
node=node_id,
sample=sample_id,
feature=feature[node_id],
value=X_test[sample_id, feature[node_id]],
inequality=threshold_sign,
threshold=threshold[node_id],
)
)
```
```
Rules used to predict sample 0:
decision node 0 : (X_test[0, 3] = 2.4) > 0.800000011920929)
decision node 2 : (X_test[0, 2] = 5.1) > 4.950000047683716)
```
For a group of samples, we can determine the common nodes the samples go through.
```
sample_ids = [0, 1]
# boolean array indicating the nodes both samples go through
common_nodes = node_indicator.toarray()[sample_ids].sum(axis=0) == len(sample_ids)
# obtain node ids using position in array
common_node_id = np.arange(n_nodes)[common_nodes]
print(
"\nThe following samples {samples} share the node(s) {nodes} in the tree.".format(
samples=sample_ids, nodes=common_node_id
)
)
print("This is {prop}% of all nodes.".format(prop=100 * len(common_node_id) / n_nodes))
```
```
The following samples [0, 1] share the node(s) [0 2] in the tree.
This is 40.0% of all nodes.
```
**Total running time of the script:** ( 0 minutes 0.090 seconds)
[`Download Python source code: plot_unveil_tree_structure.py`](https://scikit-learn.org/1.1/_downloads/21a6ff17ef2837fe1cd49e63223a368d/plot_unveil_tree_structure.py)
[`Download Jupyter notebook: plot_unveil_tree_structure.ipynb`](https://scikit-learn.org/1.1/_downloads/f7a387851c5762610f4e8197e52bbbca/plot_unveil_tree_structure.ipynb)
scikit_learn Multi-output Decision Tree Regression
Multi-output Decision Tree Regression
=====================================
An example to illustrate multi-output regression with decision tree.
A [decision tree](../../modules/tree#tree) is used to predict simultaneously the noisy x and y observations of a circle given a single underlying feature. As a result, it learns local linear regressions approximating the circle.
We can see that if the maximum depth of the tree (controlled by the `max_depth` parameter) is set too high, the decision trees learn too fine details of the training data and learn from the noise, i.e. they overfit.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(200 * rng.rand(100, 1) - 100, axis=0)
y = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T
y[::5, :] += 0.5 - rng.rand(20, 2)
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_3 = DecisionTreeRegressor(max_depth=8)
regr_1.fit(X, y)
regr_2.fit(X, y)
regr_3.fit(X, y)
# Predict
X_test = np.arange(-100.0, 100.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
y_3 = regr_3.predict(X_test)
# Plot the results
plt.figure()
s = 25
plt.scatter(y[:, 0], y[:, 1], c="navy", s=s, edgecolor="black", label="data")
plt.scatter(
y_1[:, 0],
y_1[:, 1],
c="cornflowerblue",
s=s,
edgecolor="black",
label="max_depth=2",
)
plt.scatter(y_2[:, 0], y_2[:, 1], c="red", s=s, edgecolor="black", label="max_depth=5")
plt.scatter(
y_3[:, 0], y_3[:, 1], c="orange", s=s, edgecolor="black", label="max_depth=8"
)
plt.xlim([-6, 6])
plt.ylim([-6, 6])
plt.xlabel("target 1")
plt.ylabel("target 2")
plt.title("Multi-output Decision Tree Regression")
plt.legend(loc="best")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.223 seconds)
[`Download Python source code: plot_tree_regression_multioutput.py`](https://scikit-learn.org/1.1/_downloads/8a87659782dab72f4bb6ef792517234c/plot_tree_regression_multioutput.py)
[`Download Jupyter notebook: plot_tree_regression_multioutput.ipynb`](https://scikit-learn.org/1.1/_downloads/aeb8d70a5cf3f742129926fb473c9bca/plot_tree_regression_multioutput.ipynb)
scikit_learn Plot the decision surfaces of ensembles of trees on the iris dataset
Plot the decision surfaces of ensembles of trees on the iris dataset
====================================================================
Plot the decision surfaces of forests of randomized trees trained on pairs of features of the iris dataset.
This plot compares the decision surfaces learned by a decision tree classifier (first column), by a random forest classifier (second column), by an extra- trees classifier (third column) and by an AdaBoost classifier (fourth column).
In the first row, the classifiers are built using the sepal width and the sepal length features only, on the second row using the petal length and sepal length only, and on the third row using the petal width and the petal length only.
In descending order of quality, when trained (outside of this example) on all 4 features using 30 estimators and scored using 10-fold cross-validation, we see:
```
ExtraTreesClassifier() # 0.95 score
RandomForestClassifier() # 0.94 score
AdaBoost(DecisionTree(max_depth=3)) # 0.94 score
DecisionTree(max_depth=None) # 0.94 score
```
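These cross-validation scores could be reproduced with a short sketch along the following lines; the settings below (30 estimators, `cv=10`) mirror the description above but are otherwise an illustrative assumption rather than part of this example's script:
```
from sklearn.datasets import load_iris
from sklearn.ensemble import (
    AdaBoostClassifier,
    ExtraTreesClassifier,
    RandomForestClassifier,
)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X_iris, y_iris = load_iris(return_X_y=True)
candidates = {
    "ExtraTrees": ExtraTreesClassifier(n_estimators=30),
    "RandomForest": RandomForestClassifier(n_estimators=30),
    "AdaBoost(DecisionTree(max_depth=3))": AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=3), n_estimators=30
    ),
    "DecisionTree(max_depth=None)": DecisionTreeClassifier(max_depth=None),
}
# 10-fold cross-validation on all 4 iris features
for name, clf in candidates.items():
    scores = cross_val_score(clf, X_iris, y_iris, cv=10)
    print(f"{name}: mean={scores.mean():.2f}, std={scores.std():.2f}")
```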
Increasing `max_depth` for AdaBoost lowers the standard deviation of the scores (but the average score does not improve).
See the console’s output for further details about each model.
In this example you might try to:
1. vary the `max_depth` for the `DecisionTreeClassifier` and `AdaBoostClassifier`, perhaps try `max_depth=3` for the `DecisionTreeClassifier` or `max_depth=None` for `AdaBoostClassifier`
2. vary `n_estimators`
It is worth noting that RandomForests and ExtraTrees can be fitted in parallel on many cores as each tree is built independently of the others. AdaBoost’s estimators are built sequentially and so cannot use multiple cores.
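As an aside (not shown in the script below), this parallelism is exposed through the `n_jobs` parameter of the forest estimators, for example:
```
from sklearn.ensemble import RandomForestClassifier

# n_jobs=-1 lets joblib grow the trees on all available cores; each tree is
# built independently of the others, so the work parallelizes naturally.
parallel_forest = RandomForestClassifier(n_estimators=30, n_jobs=-1)
```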
```
DecisionTree with features [0, 1] has a score of 0.9266666666666666
RandomForest with 30 estimators with features [0, 1] has a score of 0.9266666666666666
ExtraTrees with 30 estimators with features [0, 1] has a score of 0.9266666666666666
AdaBoost with 30 estimators with features [0, 1] has a score of 0.8533333333333334
DecisionTree with features [0, 2] has a score of 0.9933333333333333
RandomForest with 30 estimators with features [0, 2] has a score of 0.9933333333333333
ExtraTrees with 30 estimators with features [0, 2] has a score of 0.9933333333333333
AdaBoost with 30 estimators with features [0, 2] has a score of 0.9933333333333333
DecisionTree with features [2, 3] has a score of 0.9933333333333333
RandomForest with 30 estimators with features [2, 3] has a score of 0.9933333333333333
ExtraTrees with 30 estimators with features [2, 3] has a score of 0.9933333333333333
AdaBoost with 30 estimators with features [2, 3] has a score of 0.9933333333333333
```
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_iris
from sklearn.ensemble import (
RandomForestClassifier,
ExtraTreesClassifier,
AdaBoostClassifier,
)
from sklearn.tree import DecisionTreeClassifier
# Parameters
n_classes = 3
n_estimators = 30
cmap = plt.cm.RdYlBu
plot_step = 0.02 # fine step width for decision surface contours
plot_step_coarser = 0.5 # step widths for coarse classifier guesses
RANDOM_SEED = 13 # fix the seed on each iteration
# Load data
iris = load_iris()
plot_idx = 1
models = [
DecisionTreeClassifier(max_depth=None),
RandomForestClassifier(n_estimators=n_estimators),
ExtraTreesClassifier(n_estimators=n_estimators),
AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=n_estimators),
]
for pair in ([0, 1], [0, 2], [2, 3]):
for model in models:
# We only take the two corresponding features
X = iris.data[:, pair]
y = iris.target
# Shuffle
idx = np.arange(X.shape[0])
np.random.seed(RANDOM_SEED)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
# Standardize
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
# Train
model.fit(X, y)
scores = model.score(X, y)
# Create a title for each column and the console by using str() and
# slicing away useless parts of the string
model_title = str(type(model)).split(".")[-1][:-2][: -len("Classifier")]
model_details = model_title
if hasattr(model, "estimators_"):
model_details += " with {} estimators".format(len(model.estimators_))
print(model_details + " with features", pair, "has a score of", scores)
plt.subplot(3, 4, plot_idx)
if plot_idx <= len(models):
# Add a title at the top of each column
plt.title(model_title, fontsize=9)
# Now plot the decision boundary using a fine mesh as input to a
# filled contour plot
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(
np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)
)
# Plot either a single DecisionTreeClassifier or alpha blend the
# decision surfaces of the ensemble of classifiers
if isinstance(model, DecisionTreeClassifier):
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap)
else:
# Choose alpha blend level with respect to the number
# of estimators
# that are in use (noting that AdaBoost can use fewer estimators
# than its maximum if it achieves a good enough fit early on)
estimator_alpha = 1.0 / len(model.estimators_)
for tree in model.estimators_:
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap)
# Build a coarser grid to plot a set of ensemble classifications
# to show how these are different to what we see in the decision
# surfaces. These points are regularly space and do not have a
# black outline
xx_coarser, yy_coarser = np.meshgrid(
np.arange(x_min, x_max, plot_step_coarser),
np.arange(y_min, y_max, plot_step_coarser),
)
Z_points_coarser = model.predict(
np.c_[xx_coarser.ravel(), yy_coarser.ravel()]
).reshape(xx_coarser.shape)
cs_points = plt.scatter(
xx_coarser,
yy_coarser,
s=15,
c=Z_points_coarser,
cmap=cmap,
edgecolors="none",
)
# Plot the training points, these are clustered together and have a
# black outline
plt.scatter(
X[:, 0],
X[:, 1],
c=y,
cmap=ListedColormap(["r", "y", "b"]),
edgecolor="k",
s=20,
)
plot_idx += 1 # move on to the next plot in sequence
plt.suptitle("Classifiers on feature subsets of the Iris dataset", fontsize=12)
plt.axis("tight")
plt.tight_layout(h_pad=0.2, w_pad=0.2, pad=2.5)
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.822 seconds)
[`Download Python source code: plot_forest_iris.py`](https://scikit-learn.org/1.1/_downloads/7b2d436e94933f77577bba1962762f33/plot_forest_iris.py)
[`Download Jupyter notebook: plot_forest_iris.ipynb`](https://scikit-learn.org/1.1/_downloads/985f759e90739932d90206b8295630d6/plot_forest_iris.ipynb)
scikit_learn Plot the decision boundaries of a VotingClassifier
Plot the decision boundaries of a VotingClassifier
==================================================
Plot the decision boundaries of a [`VotingClassifier`](../../modules/generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") for two features of the Iris dataset.
Plot the class probabilities of the first sample in a toy dataset predicted by three different classifiers and averaged by the [`VotingClassifier`](../../modules/generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier").
First, three exemplary classifiers are initialized ([`DecisionTreeClassifier`](../../modules/generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier"), [`KNeighborsClassifier`](../../modules/generated/sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier"), and [`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC")) and used to initialize a soft-voting [`VotingClassifier`](../../modules/generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") with weights `[2, 1, 2]`, which means that the predicted probabilities of the [`DecisionTreeClassifier`](../../modules/generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") and the [`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") each count twice as much as those of the [`KNeighborsClassifier`](../../modules/generated/sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier") when the averaged probability is calculated.
```
from itertools import product
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.inspection import DecisionBoundaryDisplay
# Loading some example data
iris = datasets.load_iris()
X = iris.data[:, [0, 2]]
y = iris.target
# Training classifiers
clf1 = DecisionTreeClassifier(max_depth=4)
clf2 = KNeighborsClassifier(n_neighbors=7)
clf3 = SVC(gamma=0.1, kernel="rbf", probability=True)
eclf = VotingClassifier(
estimators=[("dt", clf1), ("knn", clf2), ("svc", clf3)],
voting="soft",
weights=[2, 1, 2],
)
clf1.fit(X, y)
clf2.fit(X, y)
clf3.fit(X, y)
eclf.fit(X, y)
# Plotting decision regions
f, axarr = plt.subplots(2, 2, sharex="col", sharey="row", figsize=(10, 8))
for idx, clf, tt in zip(
product([0, 1], [0, 1]),
[clf1, clf2, clf3, eclf],
["Decision Tree (depth=4)", "KNN (k=7)", "Kernel SVM", "Soft Voting"],
):
DecisionBoundaryDisplay.from_estimator(
clf, X, alpha=0.4, ax=axarr[idx[0], idx[1]], response_method="predict"
)
axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, s=20, edgecolor="k")
axarr[idx[0], idx[1]].set_title(tt)
plt.show()
```
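As a quick numerical check of the weighting described above (this snippet is not part of the original example), the soft-voting probabilities can be recomputed by hand from the ensemble's fitted base estimators:
```
import numpy as np

# Weighted average of the base classifiers' class probabilities with the
# weights [2, 1, 2]; the printed deviation from eclf.predict_proba should be ~0.
probas = [est.predict_proba(X) for est in eclf.estimators_]
manual_avg = np.average(probas, axis=0, weights=[2, 1, 2])
print(np.abs(manual_avg - eclf.predict_proba(X)).max())
```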
**Total running time of the script:** ( 0 minutes 0.458 seconds)
[`Download Python source code: plot_voting_decision_regions.py`](https://scikit-learn.org/1.1/_downloads/12b6dbb270865986bd1c9bbf7ce24cb0/plot_voting_decision_regions.py)
[`Download Jupyter notebook: plot_voting_decision_regions.ipynb`](https://scikit-learn.org/1.1/_downloads/905bd7d135e7fe5fdec55e4f0aa77420/plot_voting_decision_regions.ipynb)
scikit_learn Decision Tree Regression with AdaBoost
Decision Tree Regression with AdaBoost
======================================
A decision tree is boosted using the AdaBoost.R2 [[1]](#id2) algorithm on a 1D sinusoidal dataset with a small amount of Gaussian noise. A regressor built from 299 boosts (300 decision trees) is compared with a single decision tree regressor. As the number of boosts is increased, the regressor can fit more detail.
Preparing the data
------------------
First, we prepare dummy data with a sinusoidal relationship and some Gaussian noise.
```
# Author: Noel Dawe <[email protected]>
#
# License: BSD 3 clause
import numpy as np
rng = np.random.RandomState(1)
X = np.linspace(0, 6, 100)[:, np.newaxis]
y = np.sin(X).ravel() + np.sin(6 * X).ravel() + rng.normal(0, 0.1, X.shape[0])
```
Training and prediction with DecisionTree and AdaBoost Regressors
-----------------------------------------------------------------
Now, we define the classifiers and fit them to the data. Then we predict on that same data to see how well they could fit it. The first regressor is a `DecisionTreeRegressor` with `max_depth=4`. The second regressor is an `AdaBoostRegressor` with a `DecisionTreeRegressor` of `max_depth=4` as base learner and will be built with `n_estimators=300` of those base learners.
```
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
regr_1 = DecisionTreeRegressor(max_depth=4)
regr_2 = AdaBoostRegressor(
DecisionTreeRegressor(max_depth=4), n_estimators=300, random_state=rng
)
regr_1.fit(X, y)
regr_2.fit(X, y)
y_1 = regr_1.predict(X)
y_2 = regr_2.predict(X)
```
Plotting the results
--------------------
Finally, we plot how well our two regressors, single decision tree regressor and AdaBoost regressor, could fit the data.
```
import matplotlib.pyplot as plt
import seaborn as sns
colors = sns.color_palette("colorblind")
plt.figure()
plt.scatter(X, y, color=colors[0], label="training samples")
plt.plot(X, y_1, color=colors[1], label="n_estimators=1", linewidth=2)
plt.plot(X, y_2, color=colors[2], label="n_estimators=300", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Boosted Decision Tree Regression")
plt.legend()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.344 seconds)
[`Download Python source code: plot_adaboost_regression.py`](https://scikit-learn.org/1.1/_downloads/2da78c80da33b4e0d313b0a90b923ec8/plot_adaboost_regression.py)
[`Download Jupyter notebook: plot_adaboost_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/38e826c9e3778d7de78b2fc671fd7903/plot_adaboost_regression.ipynb)
scikit_learn Comparing random forests and the multi-output meta estimator
Comparing random forests and the multi-output meta estimator
============================================================
An example to compare multi-output regression with random forest and the [multioutput.MultiOutputRegressor](../../modules/multiclass#multiclass) meta-estimator.
This example illustrates the use of the [multioutput.MultiOutputRegressor](../../modules/multiclass#multiclass) meta-estimator to perform multi-output regression. A random forest regressor is used, which supports multi-output regression natively, so the results can be compared.
The random forest regressor will only ever predict values within the range of observations or closer to zero for each of the targets. As a result the predictions are biased towards the centre of the circle.
Using a single underlying feature the model learns both the x and y coordinate as output.
```
# Author: Tim Head <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(200 * rng.rand(600, 1) - 100, axis=0)
y = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T
y += 0.5 - rng.rand(*y.shape)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=400, test_size=200, random_state=4
)
max_depth = 30
regr_multirf = MultiOutputRegressor(
RandomForestRegressor(n_estimators=100, max_depth=max_depth, random_state=0)
)
regr_multirf.fit(X_train, y_train)
regr_rf = RandomForestRegressor(n_estimators=100, max_depth=max_depth, random_state=2)
regr_rf.fit(X_train, y_train)
# Predict on new data
y_multirf = regr_multirf.predict(X_test)
y_rf = regr_rf.predict(X_test)
# Plot the results
plt.figure()
s = 50
a = 0.4
plt.scatter(
y_test[:, 0],
y_test[:, 1],
edgecolor="k",
c="navy",
s=s,
marker="s",
alpha=a,
label="Data",
)
plt.scatter(
y_multirf[:, 0],
y_multirf[:, 1],
edgecolor="k",
c="cornflowerblue",
s=s,
alpha=a,
label="Multi RF score=%.2f" % regr_multirf.score(X_test, y_test),
)
plt.scatter(
y_rf[:, 0],
y_rf[:, 1],
edgecolor="k",
c="c",
s=s,
marker="^",
alpha=a,
label="RF score=%.2f" % regr_rf.score(X_test, y_test),
)
plt.xlim([-6, 6])
plt.ylim([-6, 6])
plt.xlabel("target 1")
plt.ylabel("target 2")
plt.title("Comparing random forests and the multi-output meta estimator")
plt.legend()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.429 seconds)
[`Download Python source code: plot_random_forest_regression_multioutput.py`](https://scikit-learn.org/1.1/_downloads/d54072eca33dc111fb8f7a73aedcb488/plot_random_forest_regression_multioutput.py)
[`Download Jupyter notebook: plot_random_forest_regression_multioutput.ipynb`](https://scikit-learn.org/1.1/_downloads/11cbd2b41092ecf5ba8083dfb7bae25a/plot_random_forest_regression_multioutput.ipynb)
scikit_learn Categorical Feature Support in Gradient Boosting
Categorical Feature Support in Gradient Boosting
================================================
In this example, we will compare the training times and prediction performances of [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") with different encoding strategies for categorical features. In particular, we will evaluate:
* dropping the categorical features
* using a [`OneHotEncoder`](../../modules/generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder")
* using an [`OrdinalEncoder`](../../modules/generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") and treating categories as ordered, equidistant quantities
* using an [`OrdinalEncoder`](../../modules/generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") and relying on the [native category support](../../modules/ensemble#categorical-support-gbdt) of the [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") estimator.
We will work with the Ames Iowa Housing dataset, which consists of numerical and categorical features, where the houses’ sale price is the target.
Load Ames Housing dataset
-------------------------
First, we load the Ames Housing data as a pandas dataframe. The features are either categorical or numerical:
```
from sklearn.datasets import fetch_openml
X, y = fetch_openml(data_id=42165, as_frame=True, return_X_y=True)
# Select only a subset of features of X to make the example faster to run
categorical_columns_subset = [
"BldgType",
"GarageFinish",
"LotConfig",
"Functional",
"MasVnrType",
"HouseStyle",
"FireplaceQu",
"ExterCond",
"ExterQual",
"PoolQC",
]
numerical_columns_subset = [
"3SsnPorch",
"Fireplaces",
"BsmtHalfBath",
"HalfBath",
"GarageCars",
"TotRmsAbvGrd",
"BsmtFinSF1",
"BsmtFinSF2",
"GrLivArea",
"ScreenPorch",
]
X = X[categorical_columns_subset + numerical_columns_subset]
X[categorical_columns_subset] = X[categorical_columns_subset].astype("category")
n_categorical_features = X.select_dtypes(include="category").shape[1]
n_numerical_features = X.select_dtypes(include="number").shape[1]
print(f"Number of samples: {X.shape[0]}")
print(f"Number of features: {X.shape[1]}")
print(f"Number of categorical features: {n_categorical_features}")
print(f"Number of numerical features: {n_numerical_features}")
```
```
Number of samples: 1460
Number of features: 20
Number of categorical features: 10
Number of numerical features: 10
```
Gradient boosting estimator with dropped categorical features
-------------------------------------------------------------
As a baseline, we create an estimator where the categorical features are dropped:
```
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.compose import make_column_transformer
from sklearn.compose import make_column_selector
dropper = make_column_transformer(
("drop", make_column_selector(dtype_include="category")), remainder="passthrough"
)
hist_dropped = make_pipeline(dropper, HistGradientBoostingRegressor(random_state=42))
```
Gradient boosting estimator with one-hot encoding
-------------------------------------------------
Next, we create a pipeline that will one-hot encode the categorical features and let the remaining numerical data pass through:
```
from sklearn.preprocessing import OneHotEncoder
one_hot_encoder = make_column_transformer(
(
OneHotEncoder(sparse=False, handle_unknown="ignore"),
make_column_selector(dtype_include="category"),
),
remainder="passthrough",
)
hist_one_hot = make_pipeline(
one_hot_encoder, HistGradientBoostingRegressor(random_state=42)
)
```
Gradient boosting estimator with ordinal encoding
-------------------------------------------------
Next, we create a pipeline that will treat categorical features as if they were ordered quantities, i.e. the categories will be encoded as 0, 1, 2, etc., and treated as continuous features.
```
from sklearn.preprocessing import OrdinalEncoder
import numpy as np
ordinal_encoder = make_column_transformer(
(
OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=np.nan),
make_column_selector(dtype_include="category"),
),
remainder="passthrough",
)
hist_ordinal = make_pipeline(
ordinal_encoder, HistGradientBoostingRegressor(random_state=42)
)
```
Gradient boosting estimator with native categorical support
-----------------------------------------------------------
We now create a [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") estimator that will natively handle categorical features. This estimator will not treat categorical features as ordered quantities.
Since the [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") requires category values to be encoded in `[0, n_unique_categories - 1]`, we still rely on an [`OrdinalEncoder`](../../modules/generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") to pre-process the data.
The main difference between this pipeline and the previous one is that in this one, we let the [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") know which features are categorical.
```
# The ordinal encoder will first output the categorical features, and then the
# continuous (passed-through) features
categorical_mask = [True] * n_categorical_features + [False] * n_numerical_features
hist_native = make_pipeline(
ordinal_encoder,
HistGradientBoostingRegressor(
random_state=42, categorical_features=categorical_mask
),
)
```
Model comparison
----------------
Finally, we evaluate the models using cross-validation. Here we compare the models’ performance in terms of [`mean_absolute_percentage_error`](../../modules/generated/sklearn.metrics.mean_absolute_percentage_error#sklearn.metrics.mean_absolute_percentage_error "sklearn.metrics.mean_absolute_percentage_error") and fit times.
```
from sklearn.model_selection import cross_validate
import matplotlib.pyplot as plt
scoring = "neg_mean_absolute_percentage_error"
n_cv_folds = 3
dropped_result = cross_validate(hist_dropped, X, y, cv=n_cv_folds, scoring=scoring)
one_hot_result = cross_validate(hist_one_hot, X, y, cv=n_cv_folds, scoring=scoring)
ordinal_result = cross_validate(hist_ordinal, X, y, cv=n_cv_folds, scoring=scoring)
native_result = cross_validate(hist_native, X, y, cv=n_cv_folds, scoring=scoring)
def plot_results(figure_title):
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 8))
plot_info = [
("fit_time", "Fit times (s)", ax1, None),
("test_score", "Mean Absolute Percentage Error", ax2, None),
]
x, width = np.arange(4), 0.9
for key, title, ax, y_limit in plot_info:
items = [
dropped_result[key],
one_hot_result[key],
ordinal_result[key],
native_result[key],
]
mape_cv_mean = [np.mean(np.abs(item)) for item in items]
mape_cv_std = [np.std(item) for item in items]
ax.bar(
x=x,
height=mape_cv_mean,
width=width,
yerr=mape_cv_std,
color=["C0", "C1", "C2", "C3"],
)
ax.set(
xlabel="Model",
title=title,
xticks=x,
xticklabels=["Dropped", "One Hot", "Ordinal", "Native"],
ylim=y_limit,
)
fig.suptitle(figure_title)
plot_results("Gradient Boosting on Ames Housing")
```
We see that the model with one-hot-encoded data is by far the slowest. This is to be expected, since one-hot-encoding creates one additional feature per category value (for each categorical feature), and thus more split points need to be considered during fitting. In theory, we expect the native handling of categorical features to be slightly slower than treating categories as ordered quantities (‘Ordinal’), since native handling requires [sorting categories](../../modules/ensemble#categorical-support-gbdt). Fitting times should however be close when the number of categories is small, although this may not always be reflected in practice.
In terms of prediction performance, dropping the categorical features leads to poorer performance. The three models that use categorical features have comparable error rates, with a slight edge for the native handling.
Limiting the number of splits
-----------------------------
In general, one can expect poorer predictions from one-hot-encoded data, especially when the tree depths or the number of nodes are limited: with one-hot-encoded data, one needs more split points, i.e. more depth, in order to recover an equivalent split that could be obtained in one single split point with native handling.
This is also true when categories are treated as ordinal quantities: if categories are `A..F` and the best split is `ACF - BDE` the one-hot-encoder model will need 3 split points (one per category in the left node), and the ordinal non-native model will need 4 splits: 1 split to isolate `A`, 1 split to isolate `F`, and 2 splits to isolate `C` from `BCDE`.
How strongly the models’ performances differ in practice will depend on the dataset and on the flexibility of the trees.
To see this, let us re-run the same analysis with under-fitting models where we artificially limit the total number of splits by both limiting the number of trees and the depth of each tree.
```
for pipe in (hist_dropped, hist_one_hot, hist_ordinal, hist_native):
pipe.set_params(
histgradientboostingregressor__max_depth=3,
histgradientboostingregressor__max_iter=15,
)
dropped_result = cross_validate(hist_dropped, X, y, cv=n_cv_folds, scoring=scoring)
one_hot_result = cross_validate(hist_one_hot, X, y, cv=n_cv_folds, scoring=scoring)
ordinal_result = cross_validate(hist_ordinal, X, y, cv=n_cv_folds, scoring=scoring)
native_result = cross_validate(hist_native, X, y, cv=n_cv_folds, scoring=scoring)
plot_results("Gradient Boosting on Ames Housing (few and small trees)")
plt.show()
```
The results for these under-fitting models confirm our previous intuition: the native category handling strategy performs the best when the splitting budget is constrained. The two other strategies (one-hot encoding and treating categories as ordinal values) lead to error values comparable to the baseline model that just dropped the categorical features altogether.
**Total running time of the script:** ( 0 minutes 11.462 seconds)
[`Download Python source code: plot_gradient_boosting_categorical.py`](https://scikit-learn.org/1.1/_downloads/acc6f0183d4b7293ae5914724f55bc28/plot_gradient_boosting_categorical.py)
[`Download Jupyter notebook: plot_gradient_boosting_categorical.ipynb`](https://scikit-learn.org/1.1/_downloads/cd5de29451c4f8624f47d18def81839c/plot_gradient_boosting_categorical.ipynb)
scikit_learn Feature importances with a forest of trees
Feature importances with a forest of trees
==========================================
This example shows the use of a forest of trees to evaluate the importance of features on an artificial classification task. The blue bars are the feature importances of the forest, along with their inter-trees variability represented by the error bars.
As expected, the plot suggests that 3 features are informative, while the remaining are not.
```
import matplotlib.pyplot as plt
```
Data generation and model fitting
---------------------------------
We generate a synthetic dataset with only 3 informative features. We will explicitly not shuffle the dataset to ensure that the informative features will correspond to the first three columns of X. In addition, we will split our dataset into training and testing subsets.
```
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(
n_samples=1000,
n_features=10,
n_informative=3,
n_redundant=0,
n_repeated=0,
n_classes=2,
random_state=0,
shuffle=False,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
```
A random forest classifier will be fitted to compute the feature importances.
```
from sklearn.ensemble import RandomForestClassifier
feature_names = [f"feature {i}" for i in range(X.shape[1])]
forest = RandomForestClassifier(random_state=0)
forest.fit(X_train, y_train)
```
```
RandomForestClassifier(random_state=0)
```
Feature importance based on mean decrease in impurity
-----------------------------------------------------
Feature importances are provided by the fitted attribute `feature_importances_` and they are computed as the mean and standard deviation of accumulation of the impurity decrease within each tree.
Warning
Impurity-based feature importances can be misleading for **high cardinality** features (many unique values). See [Permutation feature importance](../../modules/permutation_importance#permutation-importance) as an alternative below.
```
import time
import numpy as np
start_time = time.time()
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
elapsed_time = time.time() - start_time
print(f"Elapsed time to compute the importances: {elapsed_time:.3f} seconds")
```
```
Elapsed time to compute the importances: 0.007 seconds
```
Let’s plot the impurity-based importance.
```
import pandas as pd
forest_importances = pd.Series(importances, index=feature_names)
fig, ax = plt.subplots()
forest_importances.plot.bar(yerr=std, ax=ax)
ax.set_title("Feature importances using MDI")
ax.set_ylabel("Mean decrease in impurity")
fig.tight_layout()
```
We observe that, as expected, the first three features are found to be important.
Feature importance based on feature permutation
-----------------------------------------------
Permutation feature importance overcomes limitations of the impurity-based feature importance: it does not have a bias toward high-cardinality features and can be computed on a left-out test set.
```
from sklearn.inspection import permutation_importance
start_time = time.time()
result = permutation_importance(
forest, X_test, y_test, n_repeats=10, random_state=42, n_jobs=2
)
elapsed_time = time.time() - start_time
print(f"Elapsed time to compute the importances: {elapsed_time:.3f} seconds")
forest_importances = pd.Series(result.importances_mean, index=feature_names)
```
```
Elapsed time to compute the importances: 0.572 seconds
```
The computation for full permutation importance is more costly. Features are shuffled n times and the model is re-evaluated to estimate their importance. Please see [Permutation feature importance](../../modules/permutation_importance#permutation-importance) for more details. We can now plot the importance ranking.
```
fig, ax = plt.subplots()
forest_importances.plot.bar(yerr=result.importances_std, ax=ax)
ax.set_title("Feature importances using permutation on full model")
ax.set_ylabel("Mean accuracy decrease")
fig.tight_layout()
plt.show()
```
The same features are detected as most important using both methods, although the relative importances vary. As seen on the plots, MDI is less likely than permutation importance to fully omit a feature.
**Total running time of the script:** ( 0 minutes 0.925 seconds)
[`Download Python source code: plot_forest_importances.py`](https://scikit-learn.org/1.1/_downloads/74c6ab6570af7f9379b15af6a1323943/plot_forest_importances.py)
[`Download Jupyter notebook: plot_forest_importances.ipynb`](https://scikit-learn.org/1.1/_downloads/5e3b69cd18b43dd31a46a35ccba413e7/plot_forest_importances.ipynb)
scikit_learn Single estimator versus bagging: bias-variance decomposition
Single estimator versus bagging: bias-variance decomposition
============================================================
This example illustrates and compares the bias-variance decomposition of the expected mean squared error of a single estimator against a bagging ensemble.
In regression, the expected mean squared error of an estimator can be decomposed in terms of bias, variance and noise. On average over datasets of the regression problem, the bias term measures the average amount by which the predictions of the estimator differ from the predictions of the best possible estimator for the problem (i.e., the Bayes model). The variance term measures the variability of the predictions of the estimator when fit over different random instances of the same problem. Each problem instance is noted “LS”, for “Learning Sample”, in the following. Finally, the noise measures the irreducible part of the error which is due to the variability in the data.
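Written out, the pointwise decomposition that the figures and the script below estimate is (with LS a random training set, f the best possible model, and the hatted y the estimator fit on LS; expectations are taken over LS and the noisy target y):
```
\mathbb{E}_{LS, y}\big[(y - \hat{y}_{LS}(x))^2\big]
  = \underbrace{(f(x) - \mathbb{E}_{LS}[\hat{y}_{LS}(x)])^2}_{\mathrm{bias}^2(x)}
  + \underbrace{\mathbb{E}_{LS}\big[(\hat{y}_{LS}(x) - \mathbb{E}_{LS}[\hat{y}_{LS}(x)])^2\big]}_{\mathrm{variance}(x)}
  + \underbrace{\mathbb{E}\big[(y - f(x))^2\big]}_{\mathrm{noise}(x)}
```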
The upper left figure illustrates the predictions (in dark red) of a single decision tree trained over a random dataset LS (the blue dots) of a toy 1d regression problem. It also illustrates the predictions (in light red) of other single decision trees trained over other (and different) randomly drawn instances LS of the problem. Intuitively, the variance term here corresponds to the width of the beam of predictions (in light red) of the individual estimators. The larger the variance, the more sensitive are the predictions for `x` to small changes in the training set. The bias term corresponds to the difference between the average prediction of the estimator (in cyan) and the best possible model (in dark blue). On this problem, we can thus observe that the bias is quite low (both the cyan and the blue curves are close to each other) while the variance is large (the red beam is rather wide).
The lower left figure plots the pointwise decomposition of the expected mean squared error of a single decision tree. It confirms that the bias term (in blue) is low while the variance is large (in green). It also illustrates the noise part of the error which, as expected, appears to be constant and around `0.01`.
The right figures correspond to the same plots but using instead a bagging ensemble of decision trees. In both figures, we can observe that the bias term is larger than in the previous case. In the upper right figure, the difference between the average prediction (in cyan) and the best possible model is larger (e.g., notice the offset around `x=2`). In the lower right figure, the bias curve is also slightly higher than in the lower left figure. In terms of variance however, the beam of predictions is narrower, which suggests that the variance is lower. Indeed, as the lower right figure confirms, the variance term (in green) is lower than for single decision trees. Overall, the bias-variance decomposition is therefore no longer the same. The tradeoff is better for bagging: averaging several decision trees fit on bootstrap copies of the dataset slightly increases the bias term but allows for a larger reduction of the variance, which results in a lower overall mean squared error (compare the red curves in the lower figures). The script output also confirms this intuition. The total error of the bagging ensemble is lower than the total error of a single decision tree, and this difference indeed mainly stems from a reduced variance.
For further details on bias-variance decomposition, see section 7.3 of [[1]](#id2).
References
----------
```
Tree: 0.0255 (error) = 0.0003 (bias^2) + 0.0152 (var) + 0.0098 (noise)
Bagging(Tree): 0.0196 (error) = 0.0004 (bias^2) + 0.0092 (var) + 0.0098 (noise)
```
```
# Author: Gilles Louppe <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
# Settings
n_repeat = 50 # Number of iterations for computing expectations
n_train = 50 # Size of the training set
n_test = 1000 # Size of the test set
noise = 0.1 # Standard deviation of the noise
np.random.seed(0)
# Change this for exploring the bias-variance decomposition of other
# estimators. This should work well for estimators with high variance (e.g.,
# decision trees or KNN), but poorly for estimators with low variance (e.g.,
# linear models).
estimators = [
("Tree", DecisionTreeRegressor()),
("Bagging(Tree)", BaggingRegressor(DecisionTreeRegressor())),
]
n_estimators = len(estimators)
# Generate data
def f(x):
x = x.ravel()
return np.exp(-(x**2)) + 1.5 * np.exp(-((x - 2) ** 2))
def generate(n_samples, noise, n_repeat=1):
X = np.random.rand(n_samples) * 10 - 5
X = np.sort(X)
if n_repeat == 1:
y = f(X) + np.random.normal(0.0, noise, n_samples)
else:
y = np.zeros((n_samples, n_repeat))
for i in range(n_repeat):
y[:, i] = f(X) + np.random.normal(0.0, noise, n_samples)
X = X.reshape((n_samples, 1))
return X, y
X_train = []
y_train = []
for i in range(n_repeat):
X, y = generate(n_samples=n_train, noise=noise)
X_train.append(X)
y_train.append(y)
X_test, y_test = generate(n_samples=n_test, noise=noise, n_repeat=n_repeat)
plt.figure(figsize=(10, 8))
# Loop over estimators to compare
for n, (name, estimator) in enumerate(estimators):
# Compute predictions
y_predict = np.zeros((n_test, n_repeat))
for i in range(n_repeat):
estimator.fit(X_train[i], y_train[i])
y_predict[:, i] = estimator.predict(X_test)
# Bias^2 + Variance + Noise decomposition of the mean squared error
y_error = np.zeros(n_test)
for i in range(n_repeat):
for j in range(n_repeat):
y_error += (y_test[:, j] - y_predict[:, i]) ** 2
y_error /= n_repeat * n_repeat
y_noise = np.var(y_test, axis=1)
y_bias = (f(X_test) - np.mean(y_predict, axis=1)) ** 2
y_var = np.var(y_predict, axis=1)
print(
"{0}: {1:.4f} (error) = {2:.4f} (bias^2) "
" + {3:.4f} (var) + {4:.4f} (noise)".format(
name, np.mean(y_error), np.mean(y_bias), np.mean(y_var), np.mean(y_noise)
)
)
# Plot figures
plt.subplot(2, n_estimators, n + 1)
plt.plot(X_test, f(X_test), "b", label="$f(x)$")
plt.plot(X_train[0], y_train[0], ".b", label="LS ~ $y = f(x)+noise$")
for i in range(n_repeat):
if i == 0:
plt.plot(X_test, y_predict[:, i], "r", label=r"$\^y(x)$")
else:
plt.plot(X_test, y_predict[:, i], "r", alpha=0.05)
plt.plot(X_test, np.mean(y_predict, axis=1), "c", label=r"$\mathbb{E}_{LS} \^y(x)$")
plt.xlim([-5, 5])
plt.title(name)
if n == n_estimators - 1:
plt.legend(loc=(1.1, 0.5))
plt.subplot(2, n_estimators, n_estimators + n + 1)
plt.plot(X_test, y_error, "r", label="$error(x)$")
plt.plot(X_test, y_bias, "b", label="$bias^2(x)$"),
plt.plot(X_test, y_var, "g", label="$variance(x)$"),
plt.plot(X_test, y_noise, "c", label="$noise(x)$")
plt.xlim([-5, 5])
plt.ylim([0, 0.1])
if n == n_estimators - 1:
plt.legend(loc=(1.1, 0.5))
plt.subplots_adjust(right=0.75)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.908 seconds)
[`Download Python source code: plot_bias_variance.py`](https://scikit-learn.org/1.1/_downloads/4fe7ce5d502ee21f3c344f775829354a/plot_bias_variance.py)
[`Download Jupyter notebook: plot_bias_variance.ipynb`](https://scikit-learn.org/1.1/_downloads/8d09950dfaf03c48cddf217f4acf8b65/plot_bias_variance.ipynb)
scikit_learn Two-class AdaBoost
Two-class AdaBoost
==================
This example fits an AdaBoosted decision stump on a non-linearly separable classification dataset composed of two “Gaussian quantiles” clusters (see [`sklearn.datasets.make_gaussian_quantiles`](../../modules/generated/sklearn.datasets.make_gaussian_quantiles#sklearn.datasets.make_gaussian_quantiles "sklearn.datasets.make_gaussian_quantiles")) and plots the decision boundary and decision scores. The distributions of decision scores are shown separately for samples of class A and B. The predicted class label for each sample is determined by the sign of the decision score. Samples with decision scores greater than zero are classified as B, and are otherwise classified as A. The magnitude of a decision score determines the degree of likeness with the predicted class label. Additionally, a new dataset could be constructed containing a desired purity of class B, for example, by only selecting samples with a decision score above some value.
```
# Author: Noel Dawe <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
from sklearn.inspection import DecisionBoundaryDisplay
# Construct dataset
X1, y1 = make_gaussian_quantiles(
cov=2.0, n_samples=200, n_features=2, n_classes=2, random_state=1
)
X2, y2 = make_gaussian_quantiles(
mean=(3, 3), cov=1.5, n_samples=300, n_features=2, n_classes=2, random_state=1
)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, -y2 + 1))
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), algorithm="SAMME", n_estimators=200
)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(10, 5))
# Plot the decision boundaries
ax = plt.subplot(121)
disp = DecisionBoundaryDisplay.from_estimator(
bdt,
X,
cmap=plt.cm.Paired,
response_method="predict",
ax=ax,
xlabel="x",
ylabel="y",
)
x_min, x_max = disp.xx0.min(), disp.xx0.max()
y_min, y_max = disp.xx1.min(), disp.xx1.max()
plt.axis("tight")
# Plot the training points
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(
X[idx, 0],
X[idx, 1],
c=c,
cmap=plt.cm.Paired,
s=20,
edgecolor="k",
label="Class %s" % n,
)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc="upper right")
plt.title("Decision Boundary")
# Plot the two-class decision scores
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(
twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label="Class %s" % n,
alpha=0.5,
edgecolor="k",
)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc="upper right")
plt.ylabel("Samples")
plt.xlabel("Score")
plt.title("Decision Scores")
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
```
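Following up on the remark in the introduction about constructing a purer subset of class B, here is a minimal sketch using the fitted `bdt` (the threshold value of 0.5 is an arbitrary assumption):
```
import numpy as np

# Keep only samples whose decision score exceeds the chosen threshold; raising
# the threshold yields a purer subset of predicted class B (label 1).
threshold = 0.5
scores = bdt.decision_function(X)
X_high_purity_B = X[scores > threshold]
purity = np.mean(y[scores > threshold] == 1)
print(f"{len(X_high_purity_B)} samples selected, class B purity: {purity:.0%}")
```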
**Total running time of the script:** ( 0 minutes 0.448 seconds)
[`Download Python source code: plot_adaboost_twoclass.py`](https://scikit-learn.org/1.1/_downloads/f0440afe8f6b99019c4fd181a9f57c59/plot_adaboost_twoclass.py)
[`Download Jupyter notebook: plot_adaboost_twoclass.ipynb`](https://scikit-learn.org/1.1/_downloads/c3a0ceabf65f8c894895af414e685ee9/plot_adaboost_twoclass.ipynb)
scikit_learn Plot individual and voting regression predictions
Plot individual and voting regression predictions
=================================================
A voting regressor is an ensemble meta-estimator that fits several base regressors, each on the whole dataset. Then it averages the individual predictions to form a final prediction. We will use three different regressors to predict the data: [`GradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor"), [`RandomForestRegressor`](../../modules/generated/sklearn.ensemble.randomforestregressor#sklearn.ensemble.RandomForestRegressor "sklearn.ensemble.RandomForestRegressor"), and [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression"). Then the above 3 regressors will be used for the [`VotingRegressor`](../../modules/generated/sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor "sklearn.ensemble.VotingRegressor").
Finally, we will plot the predictions made by all models for comparison.
We will work with the diabetes dataset which consists of 10 features collected from a cohort of diabetes patients. The target is a quantitative measure of disease progression one year after baseline.
```
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import VotingRegressor
```
Training classifiers
--------------------
First, we will load the diabetes dataset and initiate a gradient boosting regressor, a random forest regressor and a linear regression. Next, we will use the 3 regressors to build the voting regressor:
```
X, y = load_diabetes(return_X_y=True)
# Train classifiers
reg1 = GradientBoostingRegressor(random_state=1)
reg2 = RandomForestRegressor(random_state=1)
reg3 = LinearRegression()
reg1.fit(X, y)
reg2.fit(X, y)
reg3.fit(X, y)
ereg = VotingRegressor([("gb", reg1), ("rf", reg2), ("lr", reg3)])
ereg.fit(X, y)
```
```
VotingRegressor(estimators=[('gb', GradientBoostingRegressor(random_state=1)),
('rf', RandomForestRegressor(random_state=1)),
('lr', LinearRegression())])
```
Making predictions
------------------
Now we will use each of the regressors to make the first 20 predictions.
```
xt = X[:20]
pred1 = reg1.predict(xt)
pred2 = reg2.predict(xt)
pred3 = reg3.predict(xt)
pred4 = ereg.predict(xt)
```
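Since this `VotingRegressor` was built without weights, its output is simply the arithmetic mean of the individual predictions, which can be checked directly (this check is not part of the original example):
```
import numpy as np

# The unweighted ensemble prediction equals the mean of the base regressors'
# predictions, so the printed difference should be numerically zero.
print(np.abs(pred4 - np.mean([pred1, pred2, pred3], axis=0)).max())
```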
Plot the results
----------------
Finally, we will visualize the 20 predictions. The red stars show the average prediction made by [`VotingRegressor`](../../modules/generated/sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor "sklearn.ensemble.VotingRegressor").
```
plt.figure()
plt.plot(pred1, "gd", label="GradientBoostingRegressor")
plt.plot(pred2, "b^", label="RandomForestRegressor")
plt.plot(pred3, "ys", label="LinearRegression")
plt.plot(pred4, "r*", ms=10, label="VotingRegressor")
plt.tick_params(axis="x", which="both", bottom=False, top=False, labelbottom=False)
plt.ylabel("predicted")
plt.xlabel("training samples")
plt.legend(loc="best")
plt.title("Regressor predictions and their average")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.655 seconds)
[`Download Python source code: plot_voting_regressor.py`](https://scikit-learn.org/1.1/_downloads/41218639b706e8575551bfabbe93c69f/plot_voting_regressor.py)
[`Download Jupyter notebook: plot_voting_regressor.ipynb`](https://scikit-learn.org/1.1/_downloads/5bb69fbfba6a6762ece8fe57d5044636/plot_voting_regressor.ipynb)
scikit_learn Combine predictors using stacking
Combine predictors using stacking
=================================
Stacking refers to a method to blend estimators. In this strategy, some estimators are individually fitted on some training data while a final estimator is trained using the stacked predictions of these base estimators.
In this example, we illustrate the use case in which different regressors are stacked together and a final linear penalized regressor is used to output the prediction. We compare the performance of each individual regressor with the stacking strategy. Stacking slightly improves the overall performance.
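As a minimal, self-contained sketch of the idea (the toy data and estimator choices here are illustrative assumptions, not the pipelines built below):
```
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import cross_val_score

# Toy regression problem: the base estimators' out-of-fold predictions are
# stacked and fed to the final RidgeCV estimator.
X_toy, y_toy = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
stacker = StackingRegressor(
    estimators=[("lasso", LassoCV()), ("rf", RandomForestRegressor(random_state=0))],
    final_estimator=RidgeCV(),
)
print(cross_val_score(stacker, X_toy, y_toy, cv=3).mean())
```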
```
# Authors: Guillaume Lemaitre <[email protected]>
# Maria Telenczuk <https://github.com/maikia>
# License: BSD 3 clause
```
Download the dataset
--------------------
We will use the [Ames Housing](http://jse.amstat.org/v19n3/decock.pdf) dataset, which was first compiled by Dean De Cock and became better known after it was used in a Kaggle challenge. It is a set of 1460 residential homes in Ames, Iowa, each described by 80 features. We will use it to predict the final logarithmic price of the houses. In this example we will use only the 20 most interesting features, chosen using GradientBoostingRegressor(), and limit the number of entries (here we won’t go into the details on how to select the most interesting features).
The Ames housing dataset is not shipped with scikit-learn and therefore we will fetch it from [OpenML](https://www.openml.org/d/42165).
```
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.utils import shuffle
def load_ames_housing():
df = fetch_openml(name="house_prices", as_frame=True)
X = df.data
y = df.target
features = [
"YrSold",
"HeatingQC",
"Street",
"YearRemodAdd",
"Heating",
"MasVnrType",
"BsmtUnfSF",
"Foundation",
"MasVnrArea",
"MSSubClass",
"ExterQual",
"Condition2",
"GarageCars",
"GarageType",
"OverallQual",
"TotalBsmtSF",
"BsmtFinSF1",
"HouseStyle",
"MiscFeature",
"MoSold",
]
X = X[features]
X, y = shuffle(X, y, random_state=0)
X = X[:600]
y = y[:600]
return X, np.log(y)
X, y = load_ames_housing()
```
Make pipeline to preprocess the data
------------------------------------
Before we can use the Ames dataset, we still need to do some preprocessing. First, we will select the categorical and numerical columns of the dataset to construct the first step of the pipeline.
```
from sklearn.compose import make_column_selector
cat_selector = make_column_selector(dtype_include=object)
num_selector = make_column_selector(dtype_include=np.number)
cat_selector(X)
```
```
['HeatingQC', 'Street', 'Heating', 'MasVnrType', 'Foundation', 'ExterQual', 'Condition2', 'GarageType', 'HouseStyle', 'MiscFeature']
```
```
num_selector(X)
```
```
['YrSold', 'YearRemodAdd', 'BsmtUnfSF', 'MasVnrArea', 'MSSubClass', 'GarageCars', 'OverallQual', 'TotalBsmtSF', 'BsmtFinSF1', 'MoSold']
```
Then, we will need to design preprocessing pipelines, which depend on the ending regressor. If the ending regressor is a linear model, one needs to one-hot encode the categories. If the ending regressor is a tree-based model, an ordinal encoder will be sufficient. Besides, numerical values need to be standardized for a linear model, while the raw numerical data can be treated as is by a tree-based model. However, both models need an imputer to handle missing values.
We will first design the pipeline required for the tree-based models.
```
from sklearn.compose import make_column_transformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder
cat_tree_processor = OrdinalEncoder(
handle_unknown="use_encoded_value", unknown_value=-1
)
num_tree_processor = SimpleImputer(strategy="mean", add_indicator=True)
tree_preprocessor = make_column_transformer(
(num_tree_processor, num_selector), (cat_tree_processor, cat_selector)
)
tree_preprocessor
```
```
ColumnTransformer(transformers=[('simpleimputer',
SimpleImputer(add_indicator=True),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8bf7c0>),
('ordinalencoder',
OrdinalEncoder(handle_unknown='use_encoded_value',
unknown_value=-1),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8d3f70>)])
```
Now, we define the preprocessor used when the final regressor is a linear model.
```
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
cat_linear_processor = OneHotEncoder(handle_unknown="ignore")
num_linear_processor = make_pipeline(
StandardScaler(), SimpleImputer(strategy="mean", add_indicator=True)
)
linear_preprocessor = make_column_transformer(
(num_linear_processor, num_selector), (cat_linear_processor, cat_selector)
)
linear_preprocessor
```
```
ColumnTransformer(transformers=[('pipeline',
Pipeline(steps=[('standardscaler',
StandardScaler()),
('simpleimputer',
SimpleImputer(add_indicator=True))]),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8bf7c0>),
('onehotencoder',
OneHotEncoder(handle_unknown='ignore'),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8d3f70>)])
```
Stack of predictors on a single data set
----------------------------------------
It is sometimes tedious to find the model that performs best on a given dataset. Stacking provides an alternative by combining the outputs of several learners, without the need to choose a specific model. The performance of stacking is usually close to that of the best model, and it can sometimes outperform the prediction performance of each individual model.
Here, we combine 3 learners (linear and non-linear) and use a ridge regressor to combine their outputs.
Note
Although we will make new pipelines with the processors which we wrote in the previous section for the 3 learners, the final estimator [`RidgeCV()`](../../modules/generated/sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV") does not need preprocessing of the data as it will be fed with the already preprocessed output from the 3 learners.
```
from sklearn.linear_model import LassoCV
lasso_pipeline = make_pipeline(linear_preprocessor, LassoCV())
lasso_pipeline
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('pipeline',
Pipeline(steps=[('standardscaler',
StandardScaler()),
('simpleimputer',
SimpleImputer(add_indicator=True))]),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8bf7c0>),
('onehotencoder',
OneHotEncoder(handle_unknown='ignore'),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8d3f70>)])),
('lassocv', LassoCV())])
```
```
from sklearn.ensemble import RandomForestRegressor
rf_pipeline = make_pipeline(tree_preprocessor, RandomForestRegressor(random_state=42))
rf_pipeline
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('simpleimputer',
SimpleImputer(add_indicator=True),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8bf7c0>),
('ordinalencoder',
OrdinalEncoder(handle_unknown='use_encoded_value',
unknown_value=-1),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8d3f70>)])),
('randomforestregressor',
RandomForestRegressor(random_state=42))])
```
```
from sklearn.ensemble import HistGradientBoostingRegressor
gbdt_pipeline = make_pipeline(
tree_preprocessor, HistGradientBoostingRegressor(random_state=0)
)
gbdt_pipeline
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('simpleimputer',
SimpleImputer(add_indicator=True),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8bf7c0>),
('ordinalencoder',
OrdinalEncoder(handle_unknown='use_encoded_value',
unknown_value=-1),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8d3f70>)])),
('histgradientboostingregressor',
HistGradientBoostingRegressor(random_state=0))])
```
```
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import RidgeCV
estimators = [
("Random Forest", rf_pipeline),
("Lasso", lasso_pipeline),
("Gradient Boosting", gbdt_pipeline),
]
stacking_regressor = StackingRegressor(estimators=estimators, final_estimator=RidgeCV())
stacking_regressor
```
```
StackingRegressor(estimators=[('Random Forest',
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('simpleimputer',
SimpleImputer(add_indicator=True),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8bf7c0>),
('ordinalencoder',
OrdinalEncoder(handle_unknown='use_encoded_value',
unknown_value=-1),
<sklearn.compose...
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8bf7c0>),
('ordinalencoder',
OrdinalEncoder(handle_unknown='use_encoded_value',
unknown_value=-1),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7f8d3f70>)])),
('histgradientboostingregressor',
HistGradientBoostingRegressor(random_state=0))]))],
final_estimator=RidgeCV())
```
Measure and plot the results
----------------------------
Now we can use the Ames housing dataset to make predictions. We check the performance of each individual predictor as well as that of the stack of regressors.
The function `plot_regression_results` is used to plot the predicted and true targets.
```
import time
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_validate, cross_val_predict
def plot_regression_results(ax, y_true, y_pred, title, scores, elapsed_time):
"""Scatter plot of the predicted vs true targets."""
ax.plot(
[y_true.min(), y_true.max()], [y_true.min(), y_true.max()], "--r", linewidth=2
)
ax.scatter(y_true, y_pred, alpha=0.2)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines["left"].set_position(("outward", 10))
ax.spines["bottom"].set_position(("outward", 10))
ax.set_xlim([y_true.min(), y_true.max()])
ax.set_ylim([y_true.min(), y_true.max()])
ax.set_xlabel("Measured")
ax.set_ylabel("Predicted")
extra = plt.Rectangle(
(0, 0), 0, 0, fc="w", fill=False, edgecolor="none", linewidth=0
)
ax.legend([extra], [scores], loc="upper left")
title = title + "\n Evaluation in {:.2f} seconds".format(elapsed_time)
ax.set_title(title)
fig, axs = plt.subplots(2, 2, figsize=(9, 7))
axs = np.ravel(axs)
for ax, (name, est) in zip(
axs, estimators + [("Stacking Regressor", stacking_regressor)]
):
start_time = time.time()
score = cross_validate(
est, X, y, scoring=["r2", "neg_mean_absolute_error"], n_jobs=2, verbose=0
)
elapsed_time = time.time() - start_time
y_pred = cross_val_predict(est, X, y, n_jobs=2, verbose=0)
plot_regression_results(
ax,
y,
y_pred,
name,
(r"$R^2={:.2f} \pm {:.2f}$" + "\n" + r"$MAE={:.2f} \pm {:.2f}$").format(
np.mean(score["test_r2"]),
np.std(score["test_r2"]),
-np.mean(score["test_neg_mean_absolute_error"]),
np.std(score["test_neg_mean_absolute_error"]),
),
elapsed_time,
)
plt.suptitle("Single predictors versus stacked predictors")
plt.tight_layout()
plt.subplots_adjust(top=0.9)
plt.show()
```
The stacked regressor will combine the strengths of the different regressors. However, we also see that training the stacked regressor is much more computationally expensive.
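To get a rough, standalone feel for that extra cost (this snippet is an illustration added here, not part of the original example), one can simply time a single `fit` of each pipeline and of the stacking ensemble on the data loaded above; the absolute numbers depend on the hardware:
```
import time

# Illustrative timing sketch: refit each pipeline and the stacking ensemble once.
# `estimators`, `stacking_regressor`, `X` and `y` are reused from the code above.
for name, est in estimators + [("Stacking Regressor", stacking_regressor)]:
    tic = time.time()
    est.fit(X, y)
    print(f"{name}: fitted in {time.time() - tic:.1f} s")
```
The stacking ensemble is expected to be the slowest entry, since fitting it involves cross-validated predictions of every base pipeline in addition to their final refits.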
**Total running time of the script:** ( 0 minutes 18.317 seconds)
[`Download Python source code: plot_stack_predictors.py`](https://scikit-learn.org/1.1/_downloads/c6ccb1a9c5f82321f082e9767a2706f3/plot_stack_predictors.py)
[`Download Jupyter notebook: plot_stack_predictors.ipynb`](https://scikit-learn.org/1.1/_downloads/3c9b7bcd0b16f172ac12ffad61f3b5f0/plot_stack_predictors.ipynb)
scikit_learn Gradient Boosting Out-of-Bag estimates Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-gradient-boosting-oob-py) to download the full example code or to run this example in your browser via Binder
Gradient Boosting Out-of-Bag estimates
======================================
Out-of-bag (OOB) estimates can be a useful heuristic to estimate the “optimal” number of boosting iterations. OOB estimates are almost identical to cross-validation estimates but they can be computed on-the-fly without the need for repeated model fitting. OOB estimates are only available for Stochastic Gradient Boosting (i.e. `subsample < 1.0`); the estimates are derived from the improvement in loss based on the examples not included in the bootstrap sample (the so-called out-of-bag examples). The OOB estimator is a pessimistic estimator of the true test loss, but remains a fairly good approximation for a small number of trees.
The figure shows the cumulative sum of the negative OOB improvements as a function of the boosting iteration. As you can see, it tracks the test loss for the first hundred iterations but then diverges in a pessimistic way. The figure also shows the performance of 3-fold cross-validation, which usually gives a better estimate of the test loss but is computationally more demanding.
```
Accuracy: 0.6820
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
```
```
# Author: Peter Prettenhofer <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn.model_selection import KFold
from sklearn.model_selection import train_test_split
from scipy.special import expit
# Generate data (adapted from G. Ridgeway's gbm example)
n_samples = 1000
random_state = np.random.RandomState(13)
x1 = random_state.uniform(size=n_samples)
x2 = random_state.uniform(size=n_samples)
x3 = random_state.randint(0, 4, size=n_samples)
p = expit(np.sin(3 * x1) - 4 * x2 + x3)
y = random_state.binomial(1, p, size=n_samples)
X = np.c_[x1, x2, x3]
X = X.astype(np.float32)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=9)
# Fit classifier with out-of-bag estimates
params = {
"n_estimators": 1200,
"max_depth": 3,
"subsample": 0.5,
"learning_rate": 0.01,
"min_samples_leaf": 1,
"random_state": 3,
}
clf = ensemble.GradientBoostingClassifier(**params)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print("Accuracy: {:.4f}".format(acc))
n_estimators = params["n_estimators"]
x = np.arange(n_estimators) + 1
def heldout_score(clf, X_test, y_test):
"""compute deviance scores on ``X_test`` and ``y_test``."""
score = np.zeros((n_estimators,), dtype=np.float64)
for i, y_pred in enumerate(clf.staged_decision_function(X_test)):
score[i] = clf.loss_(y_test, y_pred)
return score
def cv_estimate(n_splits=None):
cv = KFold(n_splits=n_splits)
cv_clf = ensemble.GradientBoostingClassifier(**params)
val_scores = np.zeros((n_estimators,), dtype=np.float64)
for train, test in cv.split(X_train, y_train):
cv_clf.fit(X_train[train], y_train[train])
val_scores += heldout_score(cv_clf, X_train[test], y_train[test])
val_scores /= n_splits
return val_scores
# Estimate best n_estimator using cross-validation
cv_score = cv_estimate(3)
# Compute best n_estimator for test data
test_score = heldout_score(clf, X_test, y_test)
# negative cumulative sum of oob improvements
cumsum = -np.cumsum(clf.oob_improvement_)
# min loss according to OOB
oob_best_iter = x[np.argmin(cumsum)]
# min loss according to test (normalize such that first loss is 0)
test_score -= test_score[0]
test_best_iter = x[np.argmin(test_score)]
# min loss according to cv (normalize such that first loss is 0)
cv_score -= cv_score[0]
cv_best_iter = x[np.argmin(cv_score)]
# color brew for the three curves
oob_color = list(map(lambda x: x / 256.0, (190, 174, 212)))
test_color = list(map(lambda x: x / 256.0, (127, 201, 127)))
cv_color = list(map(lambda x: x / 256.0, (253, 192, 134)))
# plot curves and vertical lines for best iterations
plt.plot(x, cumsum, label="OOB loss", color=oob_color)
plt.plot(x, test_score, label="Test loss", color=test_color)
plt.plot(x, cv_score, label="CV loss", color=cv_color)
plt.axvline(x=oob_best_iter, color=oob_color)
plt.axvline(x=test_best_iter, color=test_color)
plt.axvline(x=cv_best_iter, color=cv_color)
# add three vertical lines to xticks
xticks = plt.xticks()
xticks_pos = np.array(
xticks[0].tolist() + [oob_best_iter, cv_best_iter, test_best_iter]
)
xticks_label = np.array(list(map(lambda t: int(t), xticks[0])) + ["OOB", "CV", "Test"])
ind = np.argsort(xticks_pos)
xticks_pos = xticks_pos[ind]
xticks_label = xticks_label[ind]
plt.xticks(xticks_pos, xticks_label)
plt.legend(loc="upper right")
plt.ylabel("normalized loss")
plt.xlabel("number of iterations")
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.049 seconds)
[`Download Python source code: plot_gradient_boosting_oob.py`](https://scikit-learn.org/1.1/_downloads/a0f093cdc82e6c383a734fc9beaf8e24/plot_gradient_boosting_oob.py)
[`Download Jupyter notebook: plot_gradient_boosting_oob.ipynb`](https://scikit-learn.org/1.1/_downloads/d45483409febd18772e1fe41233d258e/plot_gradient_boosting_oob.ipynb)
scikit_learn Discrete versus Real AdaBoost Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-adaboost-hastie-10-2-py) to download the full example code or to run this example in your browser via Binder
Discrete versus Real AdaBoost
=============================
This notebook is based on Figure 10.2 from Hastie et al 2009 [[1]](#id3) and illustrates the difference in performance between the discrete SAMME [[2]](#id4) boosting algorithm and real SAMME.R boosting algorithm. Both algorithms are evaluated on a binary classification task where the target Y is a non-linear function of 10 input features.
Discrete SAMME AdaBoost adapts based on errors in predicted class labels whereas real SAMME.R uses the predicted class probabilities.
Preparing the data and baseline models
--------------------------------------
We start by generating the binary classification dataset used in Hastie et al. 2009, Example 10.2.
```
# Authors: Peter Prettenhofer <[email protected]>,
# Noel Dawe <[email protected]>
#
# License: BSD 3 clause
from sklearn import datasets
X, y = datasets.make_hastie_10_2(n_samples=12_000, random_state=1)
```
Now, we set the hyperparameters for our AdaBoost classifiers. Be aware that a learning rate of 1.0 may not be optimal for both SAMME and SAMME.R.
```
n_estimators = 400
learning_rate = 1.0
```
We split the data into a training and a test set. Then, we train our baseline classifiers, a `DecisionTreeClassifier` with `max_depth=9` and a “stump” `DecisionTreeClassifier` with `max_depth=1`, and compute their test errors.
```
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=2_000, shuffle=False
)
dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
dt_stump.fit(X_train, y_train)
dt_stump_err = 1.0 - dt_stump.score(X_test, y_test)
dt = DecisionTreeClassifier(max_depth=9, min_samples_leaf=1)
dt.fit(X_train, y_train)
dt_err = 1.0 - dt.score(X_test, y_test)
```
Adaboost with discrete SAMME and real SAMME.R
---------------------------------------------
We now define the discrete and real AdaBoost classifiers and fit them to the training set.
```
from sklearn.ensemble import AdaBoostClassifier
ada_discrete = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME",
)
ada_discrete.fit(X_train, y_train)
```
```
AdaBoostClassifier(algorithm='SAMME',
base_estimator=DecisionTreeClassifier(max_depth=1),
n_estimators=400)
```
```
ada_real = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME.R",
)
ada_real.fit(X_train, y_train)
```
```
AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
n_estimators=400)
```
Now, let’s compute the test error of the discrete and real AdaBoost classifiers for each new stump in `n_estimators` added to the ensemble.
```
import numpy as np
from sklearn.metrics import zero_one_loss
ada_discrete_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_test)):
ada_discrete_err[i] = zero_one_loss(y_pred, y_test)
ada_discrete_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_train)):
ada_discrete_err_train[i] = zero_one_loss(y_pred, y_train)
ada_real_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_test)):
ada_real_err[i] = zero_one_loss(y_pred, y_test)
ada_real_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_train)):
ada_real_err_train[i] = zero_one_loss(y_pred, y_train)
```
Plotting the results
--------------------
Finally, we plot the train and test errors of our baselines and of the discrete and real AdaBoost classifiers.
```
import matplotlib.pyplot as plt
import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, n_estimators], [dt_stump_err] * 2, "k-", label="Decision Stump Error")
ax.plot([1, n_estimators], [dt_err] * 2, "k--", label="Decision Tree Error")
colors = sns.color_palette("colorblind")
ax.plot(
np.arange(n_estimators) + 1,
ada_discrete_err,
label="Discrete AdaBoost Test Error",
color=colors[0],
)
ax.plot(
np.arange(n_estimators) + 1,
ada_discrete_err_train,
label="Discrete AdaBoost Train Error",
color=colors[1],
)
ax.plot(
np.arange(n_estimators) + 1,
ada_real_err,
label="Real AdaBoost Test Error",
color=colors[2],
)
ax.plot(
np.arange(n_estimators) + 1,
ada_real_err_train,
label="Real AdaBoost Train Error",
color=colors[4],
)
ax.set_ylim((0.0, 0.5))
ax.set_xlabel("Number of weak learners")
ax.set_ylabel("error rate")
leg = ax.legend(loc="upper right", fancybox=True)
leg.get_frame().set_alpha(0.7)
plt.show()
```
Concluding remarks
------------------
We observe that the error rate for both train and test sets of real AdaBoost is lower than that of discrete AdaBoost.
**Total running time of the script:** ( 0 minutes 12.721 seconds)
[`Download Python source code: plot_adaboost_hastie_10_2.py`](https://scikit-learn.org/1.1/_downloads/2c8a162a0e436f4ca9af35453585fc81/plot_adaboost_hastie_10_2.py)
[`Download Jupyter notebook: plot_adaboost_hastie_10_2.ipynb`](https://scikit-learn.org/1.1/_downloads/97c9b8aba1989fb600a73f3afb354726/plot_adaboost_hastie_10_2.ipynb)
scikit_learn Monotonic Constraints Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-monotonic-constraints-py) to download the full example code or to run this example in your browser via Binder
Monotonic Constraints
=====================
This example illustrates the effect of monotonic constraints on a gradient boosting estimator.
We build an artificial dataset where the target value is in general positively correlated with the first feature (with some random and non-random variations), and in general negatively correlated with the second feature.
By imposing a positive (increasing) or negative (decreasing) constraint on the features during the learning process, the estimator is able to properly follow the general trend instead of being subject to the variations.
This example was inspired by the [XGBoost documentation](https://xgboost.readthedocs.io/en/latest/tutorials/monotonic.html).
```
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.RandomState(0)
n_samples = 5000
f_0 = rng.rand(n_samples) # positive correlation with y
f_1 = rng.rand(n_samples) # negative correlation with y
X = np.c_[f_0, f_1]
noise = rng.normal(loc=0.0, scale=0.01, size=n_samples)
y = 5 * f_0 + np.sin(10 * np.pi * f_0) - 5 * f_1 - np.cos(10 * np.pi * f_1) + noise
fig, ax = plt.subplots()
# Without any constraint
gbdt = HistGradientBoostingRegressor()
gbdt.fit(X, y)
disp = PartialDependenceDisplay.from_estimator(
gbdt,
X,
features=[0, 1],
line_kw={"linewidth": 4, "label": "unconstrained", "color": "tab:blue"},
ax=ax,
)
# With positive and negative constraints
gbdt = HistGradientBoostingRegressor(monotonic_cst=[1, -1])
gbdt.fit(X, y)
PartialDependenceDisplay.from_estimator(
gbdt,
X,
features=[0, 1],
feature_names=(
"First feature\nPositive constraint",
"Second feature\nNegtive constraint",
),
line_kw={"linewidth": 4, "label": "constrained", "color": "tab:orange"},
ax=disp.axes_,
)
for f_idx in (0, 1):
disp.axes_[0, f_idx].plot(
X[:, f_idx], y, "o", alpha=0.3, zorder=-1, color="tab:green"
)
disp.axes_[0, f_idx].set_ylim(-6, 6)
plt.legend()
fig.suptitle("Monotonic constraints illustration")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.537 seconds)
[`Download Python source code: plot_monotonic_constraints.py`](https://scikit-learn.org/1.1/_downloads/9e22207e9bd6485b95f32783b59d9a80/plot_monotonic_constraints.py)
[`Download Jupyter notebook: plot_monotonic_constraints.ipynb`](https://scikit-learn.org/1.1/_downloads/215c560d29193ab9b0a495609bc74802/plot_monotonic_constraints.ipynb)
scikit_learn Multi-class AdaBoosted Decision Trees Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-adaboost-multiclass-py) to download the full example code or to run this example in your browser via Binder
Multi-class AdaBoosted Decision Trees
=====================================
This example reproduces Figure 1 of Zhu et al [[1]](#id3) and shows how boosting can improve prediction accuracy on a multi-class problem. The classification dataset is constructed by taking a ten-dimensional standard normal distribution and defining three classes separated by nested concentric ten-dimensional spheres such that roughly equal numbers of samples are in each class (quantiles of the \(\chi^2\) distribution).
The performance of the SAMME and SAMME.R [[1]](#id3) algorithms is compared. SAMME.R uses the probability estimates to update the additive model, while SAMME uses the classifications only. As the example illustrates, the SAMME.R algorithm typically converges faster than SAMME, achieving a lower test error with fewer boosting iterations. The error of each algorithm on the test set after each boosting iteration is shown on the left, the classification error on the test set of each tree is shown in the middle, and the boost weight of each tree is shown on the right. All trees have a weight of one in the SAMME.R algorithm and therefore are not shown.
```
# Author: Noel Dawe <[email protected]>
#
# License: BSD 3 clause
import matplotlib.pyplot as plt
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
X, y = make_gaussian_quantiles(
n_samples=13000, n_features=10, n_classes=3, random_state=1
)
n_split = 3000
X_train, X_test = X[:n_split], X[n_split:]
y_train, y_test = y[:n_split], y[n_split:]
bdt_real = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=2), n_estimators=300, learning_rate=1
)
bdt_discrete = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=2),
n_estimators=300,
learning_rate=1.5,
algorithm="SAMME",
)
bdt_real.fit(X_train, y_train)
bdt_discrete.fit(X_train, y_train)
real_test_errors = []
discrete_test_errors = []
for real_test_predict, discrete_train_predict in zip(
bdt_real.staged_predict(X_test), bdt_discrete.staged_predict(X_test)
):
real_test_errors.append(1.0 - accuracy_score(real_test_predict, y_test))
discrete_test_errors.append(1.0 - accuracy_score(discrete_train_predict, y_test))
n_trees_discrete = len(bdt_discrete)
n_trees_real = len(bdt_real)
# Boosting might terminate early, but the following arrays are always
# n_estimators long. We crop them to the actual number of trees here:
discrete_estimator_errors = bdt_discrete.estimator_errors_[:n_trees_discrete]
real_estimator_errors = bdt_real.estimator_errors_[:n_trees_real]
discrete_estimator_weights = bdt_discrete.estimator_weights_[:n_trees_discrete]
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.plot(range(1, n_trees_discrete + 1), discrete_test_errors, c="black", label="SAMME")
plt.plot(
range(1, n_trees_real + 1),
real_test_errors,
c="black",
linestyle="dashed",
label="SAMME.R",
)
plt.legend()
plt.ylim(0.18, 0.62)
plt.ylabel("Test Error")
plt.xlabel("Number of Trees")
plt.subplot(132)
plt.plot(
range(1, n_trees_discrete + 1),
discrete_estimator_errors,
"b",
label="SAMME",
alpha=0.5,
)
plt.plot(
range(1, n_trees_real + 1), real_estimator_errors, "r", label="SAMME.R", alpha=0.5
)
plt.legend()
plt.ylabel("Error")
plt.xlabel("Number of Trees")
plt.ylim((0.2, max(real_estimator_errors.max(), discrete_estimator_errors.max()) * 1.2))
plt.xlim((-20, len(bdt_discrete) + 20))
plt.subplot(133)
plt.plot(range(1, n_trees_discrete + 1), discrete_estimator_weights, "b", label="SAMME")
plt.legend()
plt.ylabel("Weight")
plt.xlabel("Number of Trees")
plt.ylim((0, discrete_estimator_weights.max() * 1.2))
plt.xlim((-20, n_trees_discrete + 20))
# prevent overlapping y-axis labels
plt.subplots_adjust(wspace=0.25)
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.283 seconds)
[`Download Python source code: plot_adaboost_multiclass.py`](https://scikit-learn.org/1.1/_downloads/4e46f015ab8300f262e6e8775bcdcf8a/plot_adaboost_multiclass.py)
[`Download Jupyter notebook: plot_adaboost_multiclass.ipynb`](https://scikit-learn.org/1.1/_downloads/607c99671400a5055ef516d1aabd00c1/plot_adaboost_multiclass.ipynb)
scikit_learn OOB Errors for Random Forests Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-ensemble-oob-py) to download the full example code or to run this example in your browser via Binder
OOB Errors for Random Forests
=============================
The `RandomForestClassifier` is trained using *bootstrap aggregation*, where each new tree is fit from a bootstrap sample of the training observations \(z\_i = (x\_i, y\_i)\). The *out-of-bag* (OOB) error is the average error for each \(z\_i\) calculated using predictions from the trees that do not contain \(z\_i\) in their respective bootstrap sample. This allows the `RandomForestClassifier` to be fit and validated whilst being trained [[1]](#id2).
The example below demonstrates how the OOB error can be measured at the addition of each new tree during training. The resulting plot allows a practitioner to approximate a suitable value of `n_estimators` at which the error stabilizes.
```
# Author: Kian Ho <[email protected]>
# Gilles Louppe <[email protected]>
# Andreas Mueller <[email protected]>
#
# License: BSD 3 Clause
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
RANDOM_STATE = 123
# Generate a binary classification dataset.
X, y = make_classification(
n_samples=500,
n_features=25,
n_clusters_per_class=1,
n_informative=15,
random_state=RANDOM_STATE,
)
# NOTE: Setting the `warm_start` construction parameter to `True` disables
# support for parallelized ensembles but is necessary for tracking the OOB
# error trajectory during training.
ensemble_clfs = [
(
"RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(
warm_start=True,
oob_score=True,
max_features="sqrt",
random_state=RANDOM_STATE,
),
),
(
"RandomForestClassifier, max_features='log2'",
RandomForestClassifier(
warm_start=True,
max_features="log2",
oob_score=True,
random_state=RANDOM_STATE,
),
),
(
"RandomForestClassifier, max_features=None",
RandomForestClassifier(
warm_start=True,
max_features=None,
oob_score=True,
random_state=RANDOM_STATE,
),
),
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 15
max_estimators = 150
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1, 5):
clf.set_params(n_estimators=i)
clf.fit(X, y)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.102 seconds)
[`Download Python source code: plot_ensemble_oob.py`](https://scikit-learn.org/1.1/_downloads/75191b2eb3b4aa13066927321dd3fdcf/plot_ensemble_oob.py)
[`Download Jupyter notebook: plot_ensemble_oob.ipynb`](https://scikit-learn.org/1.1/_downloads/6c50dbd9c6dc52f3da913f8d8f82274d/plot_ensemble_oob.ipynb)
scikit_learn Hashing feature transformation using Totally Random Trees Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-random-forest-embedding-py) to download the full example code or to run this example in your browser via Binder
Hashing feature transformation using Totally Random Trees
=========================================================
RandomTreesEmbedding provides a way to map data to a very high-dimensional, sparse representation, which might be beneficial for classification. The mapping is completely unsupervised and very efficient.
This example visualizes the partitions given by several trees and shows how the transformation can also be used for non-linear dimensionality reduction or non-linear classification.
Neighboring points often share the same leaf of a tree and therefore share large parts of their hashed representation. This makes it possible to separate two concentric circles simply based on the principal components of the transformed data with truncated SVD.
In high-dimensional spaces, linear classifiers often achieve excellent accuracy. For sparse binary data, BernoulliNB is particularly well-suited. The bottom row compares the decision boundary obtained by BernoulliNB in the transformed space with an ExtraTreesClassifier forest learned on the original data.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.ensemble import RandomTreesEmbedding, ExtraTreesClassifier
from sklearn.decomposition import TruncatedSVD
from sklearn.naive_bayes import BernoulliNB
# make a synthetic dataset
X, y = make_circles(factor=0.5, random_state=0, noise=0.05)
# use RandomTreesEmbedding to transform data
hasher = RandomTreesEmbedding(n_estimators=10, random_state=0, max_depth=3)
X_transformed = hasher.fit_transform(X)
# Visualize result after dimensionality reduction using truncated SVD
svd = TruncatedSVD(n_components=2)
X_reduced = svd.fit_transform(X_transformed)
# Learn a Naive Bayes classifier on the transformed data
nb = BernoulliNB()
nb.fit(X_transformed, y)
# Learn an ExtraTreesClassifier for comparison
trees = ExtraTreesClassifier(max_depth=3, n_estimators=10, random_state=0)
trees.fit(X, y)
# scatter plot of original and reduced data
fig = plt.figure(figsize=(9, 8))
ax = plt.subplot(221)
ax.scatter(X[:, 0], X[:, 1], c=y, s=50, edgecolor="k")
ax.set_title("Original Data (2d)")
ax.set_xticks(())
ax.set_yticks(())
ax = plt.subplot(222)
ax.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, s=50, edgecolor="k")
ax.set_title(
"Truncated SVD reduction (2d) of transformed data (%dd)" % X_transformed.shape[1]
)
ax.set_xticks(())
ax.set_yticks(())
# Plot the decision in original space. For that, we will assign a color
# to each point in the mesh [x_min, x_max]x[y_min, y_max].
h = 0.01
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# transform grid using RandomTreesEmbedding
transformed_grid = hasher.transform(np.c_[xx.ravel(), yy.ravel()])
y_grid_pred = nb.predict_proba(transformed_grid)[:, 1]
ax = plt.subplot(223)
ax.set_title("Naive Bayes on Transformed data")
ax.pcolormesh(xx, yy, y_grid_pred.reshape(xx.shape))
ax.scatter(X[:, 0], X[:, 1], c=y, s=50, edgecolor="k")
ax.set_ylim(-1.4, 1.4)
ax.set_xlim(-1.4, 1.4)
ax.set_xticks(())
ax.set_yticks(())
# transform grid using ExtraTreesClassifier
y_grid_pred = trees.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
ax = plt.subplot(224)
ax.set_title("ExtraTrees predictions")
ax.pcolormesh(xx, yy, y_grid_pred.reshape(xx.shape))
ax.scatter(X[:, 0], X[:, 1], c=y, s=50, edgecolor="k")
ax.set_ylim(-1.4, 1.4)
ax.set_xlim(-1.4, 1.4)
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.333 seconds)
[`Download Python source code: plot_random_forest_embedding.py`](https://scikit-learn.org/1.1/_downloads/58eb7f19c96d77148277d974aa5c112e/plot_random_forest_embedding.py)
[`Download Jupyter notebook: plot_random_forest_embedding.ipynb`](https://scikit-learn.org/1.1/_downloads/b22a06b90f4e4dff100e395092c650be/plot_random_forest_embedding.ipynb)
scikit_learn Early stopping of Gradient Boosting Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-gradient-boosting-early-stopping-py) to download the full example code or to run this example in your browser via Binder
Early stopping of Gradient Boosting
===================================
Gradient boosting is an ensembling technique where several weak learners (regression trees) are combined to yield a powerful single model, in an iterative fashion.
Early stopping support in Gradient Boosting enables us to find the smallest number of iterations that is sufficient to build a model that generalizes well to unseen data.
The concept of early stopping is simple. We specify a `validation_fraction` which denotes the fraction of the whole dataset that will be kept aside from training to assess the validation loss of the model. The gradient boosting model is trained using the training set and evaluated using the validation set. Each time an additional stage of regression trees is added, the validation set is used to score the model. This continues until the scores of the model in the last `n_iter_no_change` stages do not improve by at least `tol`. After that, the model is considered to have converged and the further addition of stages is “stopped early”.
The number of stages of the final model is available at the attribute `n_estimators_`.
This example illustrates how early stopping can be used in the [`GradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") model to achieve almost the same accuracy as a model built without early stopping, while using many fewer estimators. This can significantly reduce training time, memory usage and prediction latency.
```
# Authors: Vighnesh Birodkar <[email protected]>
# Raghav RV <[email protected]>
# License: BSD 3 clause
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import datasets
from sklearn.model_selection import train_test_split
data_list = [
datasets.load_iris(return_X_y=True),
datasets.make_classification(n_samples=800, random_state=0),
datasets.make_hastie_10_2(n_samples=2000, random_state=0),
]
names = ["Iris Data", "Classification Data", "Hastie Data"]
n_gb = []
score_gb = []
time_gb = []
n_gbes = []
score_gbes = []
time_gbes = []
n_estimators = 200
for X, y in data_list:
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0
)
    # We specify that if the scores don't improve by at least 0.01 for the last
    # 5 stages (n_iter_no_change=5), we stop fitting additional stages
gbes = ensemble.GradientBoostingClassifier(
n_estimators=n_estimators,
validation_fraction=0.2,
n_iter_no_change=5,
tol=0.01,
random_state=0,
)
gb = ensemble.GradientBoostingClassifier(n_estimators=n_estimators, random_state=0)
start = time.time()
gb.fit(X_train, y_train)
time_gb.append(time.time() - start)
start = time.time()
gbes.fit(X_train, y_train)
time_gbes.append(time.time() - start)
score_gb.append(gb.score(X_test, y_test))
score_gbes.append(gbes.score(X_test, y_test))
n_gb.append(gb.n_estimators_)
n_gbes.append(gbes.n_estimators_)
bar_width = 0.2
n = len(data_list)
index = np.arange(0, n * bar_width, bar_width) * 2.5
index = index[0:n]
```
Compare scores with and without early stopping
----------------------------------------------
```
plt.figure(figsize=(9, 5))
bar1 = plt.bar(
index, score_gb, bar_width, label="Without early stopping", color="crimson"
)
bar2 = plt.bar(
index + bar_width, score_gbes, bar_width, label="With early stopping", color="coral"
)
plt.xticks(index + bar_width, names)
plt.yticks(np.arange(0, 1.3, 0.1))
def autolabel(rects, n_estimators):
"""
Attach a text label above each bar displaying n_estimators of each model
"""
for i, rect in enumerate(rects):
plt.text(
rect.get_x() + rect.get_width() / 2.0,
1.05 * rect.get_height(),
"n_est=%d" % n_estimators[i],
ha="center",
va="bottom",
)
autolabel(bar1, n_gb)
autolabel(bar2, n_gbes)
plt.ylim([0, 1.3])
plt.legend(loc="best")
plt.grid(True)
plt.xlabel("Datasets")
plt.ylabel("Test score")
plt.show()
```
Compare fit times with and without early stopping
-------------------------------------------------
```
plt.figure(figsize=(9, 5))
bar1 = plt.bar(
index, time_gb, bar_width, label="Without early stopping", color="crimson"
)
bar2 = plt.bar(
index + bar_width, time_gbes, bar_width, label="With early stopping", color="coral"
)
max_y = np.amax(np.maximum(time_gb, time_gbes))
plt.xticks(index + bar_width, names)
plt.yticks(np.linspace(0, 1.3 * max_y, 13))
autolabel(bar1, n_gb)
autolabel(bar2, n_gbes)
plt.ylim([0, 1.3 * max_y])
plt.legend(loc="best")
plt.grid(True)
plt.xlabel("Datasets")
plt.ylabel("Fit Time")
plt.show()
```
**Total running time of the script:** ( 0 minutes 2.364 seconds)
[`Download Python source code: plot_gradient_boosting_early_stopping.py`](https://scikit-learn.org/1.1/_downloads/be911e971b87fe80b6899069dbcfb737/plot_gradient_boosting_early_stopping.py)
[`Download Jupyter notebook: plot_gradient_boosting_early_stopping.ipynb`](https://scikit-learn.org/1.1/_downloads/8452fc8dfe9850cfdaa1b758e5a2748b/plot_gradient_boosting_early_stopping.ipynb)
scikit_learn Prediction Intervals for Gradient Boosting Regression Note
Click [here](#sphx-glr-download-auto-examples-ensemble-plot-gradient-boosting-quantile-py) to download the full example code or to run this example in your browser via Binder
Prediction Intervals for Gradient Boosting Regression
=====================================================
This example shows how quantile regression can be used to create prediction intervals.
Generate some data for a synthetic regression problem by applying the function f to uniformly sampled random inputs.
```
import numpy as np
from sklearn.model_selection import train_test_split
def f(x):
"""The function to predict."""
return x * np.sin(x)
rng = np.random.RandomState(42)
X = np.atleast_2d(rng.uniform(0, 10.0, size=1000)).T
expected_y = f(X).ravel()
```
To make the problem interesting, we generate observations of the target y as the sum of a deterministic term computed by the function f and a random noise term that follows a centered [log-normal](https://en.wikipedia.org/wiki/Log-normal_distribution). To make this even more interesting we consider the case where the amplitude of the noise depends on the input variable x (heteroscedastic noise).
The lognormal distribution is non-symmetric and long tailed: observing large outliers is likely but it is impossible to observe small outliers.
```
sigma = 0.5 + X.ravel() / 10
noise = rng.lognormal(sigma=sigma) - np.exp(sigma**2 / 2)
y = expected_y + noise
```
Split into train, test datasets:
```
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
```
Fitting non-linear quantile and least squares regressors
--------------------------------------------------------
Fit gradient boosting models trained with the quantile loss and alpha=0.05, 0.5, 0.95.
The models obtained for alpha=0.05 and alpha=0.95 produce a 90% confidence interval (95% - 5% = 90%).
The model trained with alpha=0.5 produces a regression of the median: on average, there should be the same number of target observations above and below the predicted values.
```
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_pinball_loss, mean_squared_error
all_models = {}
common_params = dict(
learning_rate=0.05,
n_estimators=200,
max_depth=2,
min_samples_leaf=9,
min_samples_split=9,
)
for alpha in [0.05, 0.5, 0.95]:
gbr = GradientBoostingRegressor(loss="quantile", alpha=alpha, **common_params)
all_models["q %1.2f" % alpha] = gbr.fit(X_train, y_train)
```
Notice that [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") is much faster than [`GradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") starting with intermediate datasets (`n_samples >= 10_000`), which is not the case for the present example.
For the sake of comparison, we also fit a baseline model trained with the usual (mean) squared error (MSE).
```
gbr_ls = GradientBoostingRegressor(loss="squared_error", **common_params)
all_models["mse"] = gbr_ls.fit(X_train, y_train)
```
Create an evenly spaced evaluation set of input values spanning the [0, 10] range.
```
xx = np.atleast_2d(np.linspace(0, 10, 1000)).T
```
Plot the true conditional mean function f, the predictions of the conditional mean (loss equals squared error), the conditional median and the conditional 90% interval (from 5th to 95th conditional percentiles).
```
import matplotlib.pyplot as plt
y_pred = all_models["mse"].predict(xx)
y_lower = all_models["q 0.05"].predict(xx)
y_upper = all_models["q 0.95"].predict(xx)
y_med = all_models["q 0.50"].predict(xx)
fig = plt.figure(figsize=(10, 10))
plt.plot(xx, f(xx), "g:", linewidth=3, label=r"$f(x) = x\,\sin(x)$")
plt.plot(X_test, y_test, "b.", markersize=10, label="Test observations")
plt.plot(xx, y_med, "r-", label="Predicted median")
plt.plot(xx, y_pred, "r-", label="Predicted mean")
plt.plot(xx, y_upper, "k-")
plt.plot(xx, y_lower, "k-")
plt.fill_between(
xx.ravel(), y_lower, y_upper, alpha=0.4, label="Predicted 90% interval"
)
plt.xlabel("$x$")
plt.ylabel("$f(x)$")
plt.ylim(-10, 25)
plt.legend(loc="upper left")
plt.show()
```
Comparing the predicted median with the predicted mean, we note that the median is on average below the mean as the noise is skewed towards high values (large outliers). The median estimate also seems to be smoother because of its natural robustness to outliers.
Also observe that the inductive bias of gradient boosting trees is unfortunately preventing our 0.05 quantile from fully capturing the sinusoidal shape of the signal, in particular around x=8. Tuning hyper-parameters can reduce this effect, as shown in the last part of this notebook.
Analysis of the error metrics
-----------------------------
Measure the models with `mean_squared_error` and `mean_pinball_loss` metrics on the training dataset.
```
import pandas as pd
def highlight_min(x):
x_min = x.min()
return ["font-weight: bold" if v == x_min else "" for v in x]
results = []
for name, gbr in sorted(all_models.items()):
metrics = {"model": name}
y_pred = gbr.predict(X_train)
for alpha in [0.05, 0.5, 0.95]:
metrics["pbl=%1.2f" % alpha] = mean_pinball_loss(y_train, y_pred, alpha=alpha)
metrics["MSE"] = mean_squared_error(y_train, y_pred)
results.append(metrics)
pd.DataFrame(results).set_index("model").style.apply(highlight_min)
```
| model | pbl=0.05 | pbl=0.50 | pbl=0.95 | MSE |
| --- | --- | --- | --- | --- |
| mse | 0.715413 | 0.715413 | 0.715413 | 7.750348 |
| q 0.05 | 0.127128 | 1.253445 | 2.379763 | 18.933253 |
| q 0.50 | 0.305438 | 0.622811 | 0.940184 | 9.827917 |
| q 0.95 | 3.909909 | 2.145957 | 0.382005 | 28.667219 |
Each column shows all models evaluated with the same metric. The minimum number in a column should be obtained when the model is trained and measured with the same metric. This should always be the case on the training set if the training converged.
Note that because the target distribution is asymmetric, the expected conditional mean and conditional median are significantly different, and therefore one cannot use the squared error model to get a good estimation of the conditional median, nor the converse.
If the target distribution were symmetric and had no outliers (e.g. with a Gaussian noise), then the median estimator and the least squares estimator would have yielded similar predictions.
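As a small, self-contained numerical check (added here for illustration, not part of the original example), the gap between mean and median for an asymmetric distribution can be seen directly on a lognormal sample:
```
import numpy as np

# Illustration only: for a lognormal sample the mean (about exp(sigma**2 / 2))
# clearly exceeds the median (about 1), which is why a squared-error (mean)
# model cannot be used to estimate the conditional median of such a target.
rng = np.random.RandomState(0)
sample = rng.lognormal(sigma=1.0, size=100_000)
print(f"mean={sample.mean():.2f} median={np.median(sample):.2f}")  # roughly 1.65 vs 1.00
```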
We then compute the same metrics on the test set.
```
results = []
for name, gbr in sorted(all_models.items()):
metrics = {"model": name}
y_pred = gbr.predict(X_test)
for alpha in [0.05, 0.5, 0.95]:
metrics["pbl=%1.2f" % alpha] = mean_pinball_loss(y_test, y_pred, alpha=alpha)
metrics["MSE"] = mean_squared_error(y_test, y_pred)
results.append(metrics)
pd.DataFrame(results).set_index("model").style.apply(highlight_min)
```
| model | pbl=0.05 | pbl=0.50 | pbl=0.95 | MSE |
| --- | --- | --- | --- | --- |
| mse | 0.917281 | 0.767498 | 0.617715 | 6.692901 |
| q 0.05 | 0.144204 | 1.245961 | 2.347717 | 15.648026 |
| q 0.50 | 0.412021 | 0.607752 | 0.803483 | 5.874771 |
| q 0.95 | 4.354394 | 2.355445 | 0.356497 | 34.852774 |
Errors are higher, meaning the models slightly overfitted the data. It still shows that the best test metric is obtained when the model is trained by minimizing this same metric.
Note that the conditional median estimator is competitive with the squared error estimator in terms of MSE on the test set: this can be explained by the fact the squared error estimator is very sensitive to large outliers which can cause significant overfitting. This can be seen on the right hand side of the previous plot. The conditional median estimator is biased (underestimation for this asymmetric noise) but is also naturally robust to outliers and overfits less.
Calibration of the confidence interval
--------------------------------------
We can also evaluate the ability of the two extreme quantile estimators to produce a well-calibrated conditional 90%-confidence interval.
To do this we can compute the fraction of observations that fall between the predictions:
```
def coverage_fraction(y, y_low, y_high):
return np.mean(np.logical_and(y >= y_low, y <= y_high))
coverage_fraction(
y_train,
all_models["q 0.05"].predict(X_train),
all_models["q 0.95"].predict(X_train),
)
```
```
0.9
```
On the training set the calibration is very close to the expected coverage value for a 90% confidence interval.
```
coverage_fraction(
y_test, all_models["q 0.05"].predict(X_test), all_models["q 0.95"].predict(X_test)
)
```
```
0.868
```
On the test set, the estimated confidence interval is slightly too narrow. Note, however, that we would need to wrap those metrics in a cross-validation loop to assess their variability under data resampling.
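A minimal sketch of what such a loop could look like (an assumption-laden illustration reusing `X`, `y`, `all_models` and `coverage_fraction` defined earlier in this example, not a prescribed recipe):
```
from sklearn.base import clone
from sklearn.model_selection import KFold

coverages = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Refit both extreme quantile models on each training fold.
    lo = clone(all_models["q 0.05"]).fit(X[train_idx], y[train_idx])
    hi = clone(all_models["q 0.95"]).fit(X[train_idx], y[train_idx])
    coverages.append(
        coverage_fraction(
            y[test_idx], lo.predict(X[test_idx]), hi.predict(X[test_idx])
        )
    )
print(np.mean(coverages), np.std(coverages))
```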
Tuning the hyper-parameters of the quantile regressors
------------------------------------------------------
In the plot above, we observed that the 5th percentile regressor seems to underfit and could not adapt to the sinusoidal shape of the signal.
The hyper-parameters of the model were approximately hand-tuned for the median regressor and there is no reason that the same hyper-parameters are suitable for the 5th percentile regressor.
To confirm this hypothesis, we tune the hyper-parameters of a new regressor of the 5th percentile by selecting the best model parameters by cross-validation on the pinball loss with alpha=0.05:
```
from sklearn.experimental import enable_halving_search_cv # noqa
from sklearn.model_selection import HalvingRandomSearchCV
from sklearn.metrics import make_scorer
from pprint import pprint
param_grid = dict(
learning_rate=[0.05, 0.1, 0.2],
max_depth=[2, 5, 10],
min_samples_leaf=[1, 5, 10, 20],
min_samples_split=[5, 10, 20, 30, 50],
)
alpha = 0.05
neg_mean_pinball_loss_05p_scorer = make_scorer(
mean_pinball_loss,
alpha=alpha,
greater_is_better=False, # maximize the negative loss
)
gbr = GradientBoostingRegressor(loss="quantile", alpha=alpha, random_state=0)
search_05p = HalvingRandomSearchCV(
gbr,
param_grid,
resource="n_estimators",
max_resources=250,
min_resources=50,
scoring=neg_mean_pinball_loss_05p_scorer,
n_jobs=2,
random_state=0,
).fit(X_train, y_train)
pprint(search_05p.best_params_)
```
```
{'learning_rate': 0.2,
'max_depth': 2,
'min_samples_leaf': 20,
'min_samples_split': 10,
'n_estimators': 150}
```
We observe that the hyper-parameters that were hand-tuned for the median regressor are in the same range as the hyper-parameters suitable for the 5th percentile regressor.
Let’s now tune the hyper-parameters for the 95th percentile regressor. We need to redefine the `scoring` metric used to select the best model, along with adjusting the alpha parameter of the inner gradient boosting estimator itself:
```
from sklearn.base import clone
alpha = 0.95
neg_mean_pinball_loss_95p_scorer = make_scorer(
mean_pinball_loss,
alpha=alpha,
greater_is_better=False, # maximize the negative loss
)
search_95p = clone(search_05p).set_params(
estimator__alpha=alpha,
scoring=neg_mean_pinball_loss_95p_scorer,
)
search_95p.fit(X_train, y_train)
pprint(search_95p.best_params_)
```
```
{'learning_rate': 0.05,
'max_depth': 2,
'min_samples_leaf': 5,
'min_samples_split': 20,
'n_estimators': 150}
```
The result shows that the hyper-parameters for the 95th percentile regressor identified by the search procedure are roughly in the same range as the hand-tuned hyper-parameters for the median regressor and the hyper-parameters identified by the search procedure for the 5th percentile regressor. However, the hyper-parameter searches did lead to an improved 90% confidence interval, bounded by the predictions of those two tuned quantile regressors. Note that the prediction of the upper 95th percentile has a much coarser shape than the prediction of the lower 5th percentile because of the outliers:
```
y_lower = search_05p.predict(xx)
y_upper = search_95p.predict(xx)
fig = plt.figure(figsize=(10, 10))
plt.plot(xx, f(xx), "g:", linewidth=3, label=r"$f(x) = x\,\sin(x)$")
plt.plot(X_test, y_test, "b.", markersize=10, label="Test observations")
plt.plot(xx, y_upper, "k-")
plt.plot(xx, y_lower, "k-")
plt.fill_between(
xx.ravel(), y_lower, y_upper, alpha=0.4, label="Predicted 90% interval"
)
plt.xlabel("$x$")
plt.ylabel("$f(x)$")
plt.ylim(-10, 25)
plt.legend(loc="upper left")
plt.title("Prediction with tuned hyper-parameters")
plt.show()
```
The plot looks qualitatively better than for the untuned models, especially for the shape of the lower quantile.
We now quantitatively evaluate the joint-calibration of the pair of estimators:
```
coverage_fraction(y_train, search_05p.predict(X_train), search_95p.predict(X_train))
```
```
0.9026666666666666
```
```
coverage_fraction(y_test, search_05p.predict(X_test), search_95p.predict(X_test))
```
```
0.796
```
The calibration of the tuned pair is sadly not better on the test set: the width of the estimated confidence interval is still too narrow.
Again, we would need to wrap this study in a cross-validation loop to better assess the variability of those estimates.
**Total running time of the script:** ( 0 minutes 8.406 seconds)
[`Download Python source code: plot_gradient_boosting_quantile.py`](https://scikit-learn.org/1.1/_downloads/2f3ef774a6d7e52e1e6b7ccbb75d25f0/plot_gradient_boosting_quantile.py)
[`Download Jupyter notebook: plot_gradient_boosting_quantile.ipynb`](https://scikit-learn.org/1.1/_downloads/b5ac5dfd67b0aab146fcb9faaac8480c/plot_gradient_boosting_quantile.ipynb)
scikit_learn Gradient Boosting regression
Gradient Boosting regression
============================
This example demonstrates Gradient Boosting to produce a predictive model from an ensemble of weak predictive models. Gradient boosting can be used for regression and classification problems. Here, we will train a model to tackle a diabetes regression task. We will obtain the results from [`GradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") with least squares loss and 500 regression trees of depth 4.
Note: For larger datasets (n\_samples >= 10000), please refer to [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor").
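As a hedged aside (not part of the timed example below), the histogram-based estimator is essentially a drop-in replacement; a sketch with comparable settings:
```
from sklearn.ensemble import HistGradientBoostingRegressor

# Sketch only: `max_iter` plays the role of `n_estimators` here.
hist_reg = HistGradientBoostingRegressor(max_iter=500, max_depth=4, learning_rate=0.01)
# hist_reg.fit(X_train, y_train) could then replace the fit call shown further below.
```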
```
# Author: Peter Prettenhofer <[email protected]>
# Maria Telenczuk <https://github.com/maikia>
# Katrina Ni <https://github.com/nilichen>
#
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, ensemble
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
```
Load the data
-------------
First we need to load the data.
```
diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target
```
Data preprocessing
------------------
Next, we will split our dataset to use 90% for training and leave the rest for testing. We will also set the regression model parameters. You can play with these parameters to see how the results change.
`n_estimators` : the number of boosting stages that will be performed. Later, we will plot deviance against boosting iterations.
`max_depth` : limits the number of nodes in the tree. The best value depends on the interaction of the input variables.
`min_samples_split` : the minimum number of samples required to split an internal node.
`learning_rate` : how much the contribution of each tree will shrink.
`loss` : loss function to optimize. The least squares function is used in this case; however, there are many other options (see [`GradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") ).
```
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.1, random_state=13
)
params = {
"n_estimators": 500,
"max_depth": 4,
"min_samples_split": 5,
"learning_rate": 0.01,
"loss": "squared_error",
}
```
Fit regression model
--------------------
Now we will initialize the gradient boosting regressor and fit it to our training data. Let’s also look at the mean squared error on the test data.
```
reg = ensemble.GradientBoostingRegressor(**params)
reg.fit(X_train, y_train)
mse = mean_squared_error(y_test, reg.predict(X_test))
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
```
```
The mean squared error (MSE) on test set: 3025.7877
```
Plot training deviance
----------------------
Finally, we will visualize the results. To do that we will first compute the test set deviance and then plot it against boosting iterations.
```
test_score = np.zeros((params["n_estimators"],), dtype=np.float64)
for i, y_pred in enumerate(reg.staged_predict(X_test)):
test_score[i] = reg.loss_(y_test, y_pred)
fig = plt.figure(figsize=(6, 6))
plt.subplot(1, 1, 1)
plt.title("Deviance")
plt.plot(
np.arange(params["n_estimators"]) + 1,
reg.train_score_,
"b-",
label="Training Set Deviance",
)
plt.plot(
np.arange(params["n_estimators"]) + 1, test_score, "r-", label="Test Set Deviance"
)
plt.legend(loc="upper right")
plt.xlabel("Boosting Iterations")
plt.ylabel("Deviance")
fig.tight_layout()
plt.show()
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
```
Plot feature importance
-----------------------
Warning
Careful, impurity-based feature importances can be misleading for **high cardinality** features (many unique values). As an alternative, the permutation importances of `reg` can be computed on a held out test set. See [Permutation feature importance](../../modules/permutation_importance#permutation-importance) for more details.
For this example, the impurity-based and permutation methods identify the same 2 strongly predictive features but not in the same order. The third most predictive feature, “bp”, is also the same for the 2 methods. The remaining features are less predictive and the error bars of the permutation plot show that they overlap with 0.
```
feature_importance = reg.feature_importances_
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + 0.5
fig = plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.barh(pos, feature_importance[sorted_idx], align="center")
plt.yticks(pos, np.array(diabetes.feature_names)[sorted_idx])
plt.title("Feature Importance (MDI)")
result = permutation_importance(
reg, X_test, y_test, n_repeats=10, random_state=42, n_jobs=2
)
sorted_idx = result.importances_mean.argsort()
plt.subplot(1, 2, 2)
plt.boxplot(
result.importances[sorted_idx].T,
vert=False,
labels=np.array(diabetes.feature_names)[sorted_idx],
)
plt.title("Permutation Importance (test set)")
fig.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.848 seconds)
[`Download Python source code: plot_gradient_boosting_regression.py`](https://scikit-learn.org/1.1/_downloads/e0186b37c52cdb964f7759aac5fbb9b9/plot_gradient_boosting_regression.py)
[`Download Jupyter notebook: plot_gradient_boosting_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/c536eb92f539255e80e2b3ef5200e7a1/plot_gradient_boosting_regression.ipynb)
scikit_learn Feature transformations with ensembles of trees
Feature transformations with ensembles of trees
===============================================
Transform your features into a higher dimensional, sparse space. Then train a linear model on these features.
First fit an ensemble of trees (totally random trees, a random forest, or gradient boosted trees) on the training set. Then each leaf of each tree in the ensemble is assigned a fixed arbitrary feature index in a new feature space. These leaf indices are then encoded in a one-hot fashion.
Each sample goes through the decisions of each tree of the ensemble and ends up in one leaf per tree. The sample is encoded by setting feature values for these leaves to 1 and the other feature values to 0.
The resulting transformer has then learned a supervised, sparse, high-dimensional categorical embedding of the data.
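The leaf-to-feature mapping can be illustrated on toy data; a minimal self-contained sketch (separate from the example code below):
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

X_toy, y_toy = make_classification(n_samples=100, random_state=0)
forest_toy = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0)
forest_toy.fit(X_toy, y_toy)

# apply() returns, for each sample, the index of the leaf it lands in
# for every tree: shape (n_samples, n_estimators).
leaves = forest_toy.apply(X_toy)

# One-hot encoding these leaf indices yields the sparse embedding.
embedding = OneHotEncoder().fit_transform(leaves)
print(leaves.shape, embedding.shape)
```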
```
# Author: Tim Head <[email protected]>
#
# License: BSD 3 clause
```
First, we will create a large dataset and split it into three sets:
* a set to train the ensemble methods which are later used as a feature engineering transformer;
* a set to train the linear model;
* a set to test the linear model.
It is important to split the data in such a way as to avoid overfitting by data leakage.
```
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=80_000, random_state=10)
X_full_train, X_test, y_full_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=10
)
X_train_ensemble, X_train_linear, y_train_ensemble, y_train_linear = train_test_split(
X_full_train, y_full_train, test_size=0.5, random_state=10
)
```
For each of the ensemble methods, we will use 10 estimators and a maximum depth of 3 levels.
```
n_estimators = 10
max_depth = 3
```
First, we will start by training the random forest and gradient boosting on the separate training set.
```
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
random_forest = RandomForestClassifier(
n_estimators=n_estimators, max_depth=max_depth, random_state=10
)
random_forest.fit(X_train_ensemble, y_train_ensemble)
gradient_boosting = GradientBoostingClassifier(
n_estimators=n_estimators, max_depth=max_depth, random_state=10
)
_ = gradient_boosting.fit(X_train_ensemble, y_train_ensemble)
```
Notice that [`HistGradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") is much faster than [`GradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") starting with intermediate datasets (`n_samples >= 10_000`), which is not the case of the present example.
The [`RandomTreesEmbedding`](../../modules/generated/sklearn.ensemble.randomtreesembedding#sklearn.ensemble.RandomTreesEmbedding "sklearn.ensemble.RandomTreesEmbedding") is an unsupervised method and thus does not need to be trained independently.
```
from sklearn.ensemble import RandomTreesEmbedding
random_tree_embedding = RandomTreesEmbedding(
n_estimators=n_estimators, max_depth=max_depth, random_state=0
)
```
Now, we will create three pipelines that will use the above embedding as a preprocessing stage.
The random trees embedding can be directly pipelined with the logistic regression because it is a standard scikit-learn transformer.
```
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
rt_model = make_pipeline(random_tree_embedding, LogisticRegression(max_iter=1000))
rt_model.fit(X_train_linear, y_train_linear)
```
```
Pipeline(steps=[('randomtreesembedding',
RandomTreesEmbedding(max_depth=3, n_estimators=10,
random_state=0)),
('logisticregression', LogisticRegression(max_iter=1000))])
```
Then, we can pipeline random forest or gradient boosting with a logistic regression. However, the feature transformation will happen by calling the method `apply`. The pipeline in scikit-learn expects a call to `transform`. Therefore, we wrapped the call to `apply` within a `FunctionTransformer`.
```
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import OneHotEncoder
def rf_apply(X, model):
return model.apply(X)
rf_leaves_yielder = FunctionTransformer(rf_apply, kw_args={"model": random_forest})
rf_model = make_pipeline(
rf_leaves_yielder,
OneHotEncoder(handle_unknown="ignore"),
LogisticRegression(max_iter=1000),
)
rf_model.fit(X_train_linear, y_train_linear)
```
```
Pipeline(steps=[('functiontransformer',
FunctionTransformer(func=<function rf_apply at 0x7f6e7ef0df70>,
kw_args={'model': RandomForestClassifier(max_depth=3,
n_estimators=10,
random_state=10)})),
('onehotencoder', OneHotEncoder(handle_unknown='ignore')),
('logisticregression', LogisticRegression(max_iter=1000))])
```
```
def gbdt_apply(X, model):
return model.apply(X)[:, :, 0]
gbdt_leaves_yielder = FunctionTransformer(
gbdt_apply, kw_args={"model": gradient_boosting}
)
gbdt_model = make_pipeline(
gbdt_leaves_yielder,
OneHotEncoder(handle_unknown="ignore"),
LogisticRegression(max_iter=1000),
)
gbdt_model.fit(X_train_linear, y_train_linear)
```
```
Pipeline(steps=[('functiontransformer',
FunctionTransformer(func=<function gbdt_apply at 0x7f6e7ffbcd30>,
kw_args={'model': GradientBoostingClassifier(n_estimators=10,
random_state=10)})),
('onehotencoder', OneHotEncoder(handle_unknown='ignore')),
('logisticregression', LogisticRegression(max_iter=1000))])
```
We can finally show the different ROC curves for all the models.
```
import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay
fig, ax = plt.subplots()
models = [
("RT embedding -> LR", rt_model),
("RF", random_forest),
("RF embedding -> LR", rf_model),
("GBDT", gradient_boosting),
("GBDT embedding -> LR", gbdt_model),
]
model_displays = {}
for name, pipeline in models:
model_displays[name] = RocCurveDisplay.from_estimator(
pipeline, X_test, y_test, ax=ax, name=name
)
_ = ax.set_title("ROC curve")
```
```
fig, ax = plt.subplots()
for name, pipeline in models:
model_displays[name].plot(ax=ax)
ax.set_xlim(0, 0.2)
ax.set_ylim(0.8, 1)
_ = ax.set_title("ROC curve (zoomed in at top left)")
```
**Total running time of the script:** ( 0 minutes 2.458 seconds)
[`Download Python source code: plot_feature_transformation.py`](https://scikit-learn.org/1.1/_downloads/3a10dcfbc1a4bf1349c7101a429aa47b/plot_feature_transformation.py)
[`Download Jupyter notebook: plot_feature_transformation.ipynb`](https://scikit-learn.org/1.1/_downloads/bd48035d82fe0719ccbd66ac2192f65b/plot_feature_transformation.ipynb)
scikit_learn Pixel importances with a parallel forest of trees
Pixel importances with a parallel forest of trees
=================================================
This example shows the use of a forest of trees to evaluate the impurity based importance of the pixels in an image classification task on the faces dataset. The hotter the pixel, the more important it is.
The code below also illustrates how the construction and the computation of the predictions can be parallelized within multiple jobs.
Loading the data and model fitting
----------------------------------
First, we load the olivetti faces dataset and limit the dataset to contain only the first five classes. Then we train a random forest on the dataset and evaluate the impurity-based feature importance. One drawback of this method is that it cannot be evaluated on a separate test set. For this example, we are interested in representing the information learned from the full dataset. Also, we’ll set the number of cores to use for the tasks.
```
from sklearn.datasets import fetch_olivetti_faces
```
We select the number of cores to use to perform parallel fitting of the forest model. `-1` means use all available cores.
```
n_jobs = -1
```
Load the faces dataset
```
data = fetch_olivetti_faces()
X, y = data.data, data.target
```
Limit the dataset to 5 classes.
```
mask = y < 5
X = X[mask]
y = y[mask]
```
A random forest classifier will be fitted to compute the feature importances.
```
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=750, n_jobs=n_jobs, random_state=42)
forest.fit(X, y)
```
```
RandomForestClassifier(n_estimators=750, n_jobs=-1, random_state=42)
```
Feature importance based on mean decrease in impurity (MDI)
-----------------------------------------------------------
Feature importances are provided by the fitted attribute `feature_importances_` and they are computed as the mean and standard deviation of accumulation of the impurity decrease within each tree.
Warning
Impurity-based feature importances can be misleading for **high cardinality** features (many unique values). See [Permutation feature importance](../../modules/permutation_importance#permutation-importance) as an alternative.
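The per-tree values behind this aggregation can be inspected directly; a small sketch, assuming the `forest` fitted above:
```
import numpy as np

# feature_importances_ is the mean over the individual trees; the spread
# across trees can be recovered from the fitted estimators.
per_tree = np.array([tree.feature_importances_ for tree in forest.estimators_])
importances_mean = per_tree.mean(axis=0)
importances_std = per_tree.std(axis=0)
```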
```
import time
import matplotlib.pyplot as plt
start_time = time.time()
img_shape = data.images[0].shape
importances = forest.feature_importances_
elapsed_time = time.time() - start_time
print(f"Elapsed time to compute the importances: {elapsed_time:.3f} seconds")
imp_reshaped = importances.reshape(img_shape)
plt.matshow(imp_reshaped, cmap=plt.cm.hot)
plt.title("Pixel importances using impurity values")
plt.colorbar()
plt.show()
```
```
Elapsed time to compute the importances: 0.158 seconds
```
Can you still recognize a face?
The limitations of MDI are not a problem for this dataset because:
1. All features are (ordered) numeric and will thus not suffer the cardinality bias
2. We are only interested in representing the knowledge acquired by the forest on the training set.
If these two conditions are not met, it is recommended to instead use the [`permutation_importance`](../../modules/generated/sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance").
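For completeness, a hedged sketch of that alternative on this dataset (reusing `forest`, `X`, `y`, `n_jobs` and `img_shape` from above; here it is evaluated on the training data, since that is what this example is interested in):
```
from sklearn.inspection import permutation_importance

result = permutation_importance(
    forest, X, y, n_repeats=5, random_state=42, n_jobs=n_jobs
)
perm_importances = result.importances_mean.reshape(img_shape)
```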
**Total running time of the script:** ( 0 minutes 1.181 seconds)
[`Download Python source code: plot_forest_importances_faces.py`](https://scikit-learn.org/1.1/_downloads/8c4948cc8b7a7cdf93f443595cf74cfb/plot_forest_importances_faces.py)
[`Download Jupyter notebook: plot_forest_importances_faces.ipynb`](https://scikit-learn.org/1.1/_downloads/e47f5d9d36b71035e08a801a543acbb3/plot_forest_importances_faces.ipynb)
scikit_learn Gradient Boosting regularization
Gradient Boosting regularization
================================
Illustration of the effect of different regularization strategies for Gradient Boosting. The example is taken from Hastie et al 2009 [[1]](#id2).
The loss function used is binomial deviance. Regularization via shrinkage (`learning_rate < 1.0`) improves performance considerably. In combination with shrinkage, stochastic gradient boosting (`subsample < 1.0`) can produce more accurate models by reducing the variance via bagging. Subsampling without shrinkage usually does poorly. Another strategy to reduce the variance is by subsampling the features analogous to the random splits in Random Forests (via the `max_features` parameter).
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
warnings.warn(msg, category=FutureWarning)
```
```
# Author: Peter Prettenhofer <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import datasets
from sklearn.model_selection import train_test_split
X, y = datasets.make_hastie_10_2(n_samples=4000, random_state=1)
# map labels from {-1, 1} to {0, 1}
labels, y = np.unique(y, return_inverse=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.8, random_state=0)
original_params = {
"n_estimators": 400,
"max_leaf_nodes": 4,
"max_depth": None,
"random_state": 2,
"min_samples_split": 5,
}
plt.figure()
for label, color, setting in [
("No shrinkage", "orange", {"learning_rate": 1.0, "subsample": 1.0}),
("learning_rate=0.2", "turquoise", {"learning_rate": 0.2, "subsample": 1.0}),
("subsample=0.5", "blue", {"learning_rate": 1.0, "subsample": 0.5}),
(
"learning_rate=0.2, subsample=0.5",
"gray",
{"learning_rate": 0.2, "subsample": 0.5},
),
(
"learning_rate=0.2, max_features=2",
"magenta",
{"learning_rate": 0.2, "max_features": 2},
),
]:
params = dict(original_params)
params.update(setting)
clf = ensemble.GradientBoostingClassifier(**params)
clf.fit(X_train, y_train)
# compute test set deviance
test_deviance = np.zeros((params["n_estimators"],), dtype=np.float64)
for i, y_pred in enumerate(clf.staged_decision_function(X_test)):
# clf.loss_ assumes that y_test[i] in {0, 1}
test_deviance[i] = clf.loss_(y_test, y_pred)
plt.plot(
(np.arange(test_deviance.shape[0]) + 1)[::5],
test_deviance[::5],
"-",
color=color,
label=label,
)
plt.legend(loc="upper left")
plt.xlabel("Boosting Iterations")
plt.ylabel("Test Set Deviance")
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.429 seconds)
[`Download Python source code: plot_gradient_boosting_regularization.py`](https://scikit-learn.org/1.1/_downloads/e641093af989b69bc2b89b130bcf320f/plot_gradient_boosting_regularization.py)
[`Download Jupyter notebook: plot_gradient_boosting_regularization.ipynb`](https://scikit-learn.org/1.1/_downloads/0a90f2b8e2dadb7d37ca67b3f7adb656/plot_gradient_boosting_regularization.ipynb)
scikit_learn Plot class probabilities calculated by the VotingClassifier
Plot class probabilities calculated by the VotingClassifier
===========================================================
Plot the class probabilities of the first sample in a toy dataset predicted by three different classifiers and averaged by the [`VotingClassifier`](../../modules/generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier").
First, three exemplary classifiers are initialized ([`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression"), [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB"), and [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier")) and used to initialize a soft-voting [`VotingClassifier`](../../modules/generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") with weights `[1, 1, 5]`, which means that the predicted probabilities of the [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") count 5 times as much as those of the other classifiers when the averaged probability is calculated.
To visualize the probability weighting, we fit each classifier on the training set and plot the predicted class probabilities for the first sample in this example dataset.
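The soft-voting average itself is just a weighted mean of the individual probability vectors; a small illustrative sketch with made-up numbers (not the values produced by the classifiers below):
```
import numpy as np

# Hypothetical class probabilities for one sample from the three classifiers,
# in the same order as the `estimators` list of the VotingClassifier.
p1 = np.array([0.8, 0.2])
p2 = np.array([0.7, 0.3])
p3 = np.array([0.9, 0.1])

# Soft voting with weights [1, 1, 5] averages them as (1*p1 + 1*p2 + 5*p3) / 7.
p_avg = np.average(np.vstack([p1, p2, p3]), axis=0, weights=[1, 1, 5])
```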
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
clf1 = LogisticRegression(max_iter=1000, random_state=123)
clf2 = RandomForestClassifier(n_estimators=100, random_state=123)
clf3 = GaussianNB()
X = np.array([[-1.0, -1.0], [-1.2, -1.4], [-3.4, -2.2], [1.1, 1.2]])
y = np.array([1, 1, 2, 2])
eclf = VotingClassifier(
estimators=[("lr", clf1), ("rf", clf2), ("gnb", clf3)],
voting="soft",
weights=[1, 1, 5],
)
# predict class probabilities for all classifiers
probas = [c.fit(X, y).predict_proba(X) for c in (clf1, clf2, clf3, eclf)]
# get class probabilities for the first sample in the dataset
class1_1 = [pr[0, 0] for pr in probas]
class2_1 = [pr[0, 1] for pr in probas]
# plotting
N = 4 # number of groups
ind = np.arange(N) # group positions
width = 0.35 # bar width
fig, ax = plt.subplots()
# bars for classifier 1-3
p1 = ax.bar(ind, np.hstack(([class1_1[:-1], [0]])), width, color="green", edgecolor="k")
p2 = ax.bar(
ind + width,
np.hstack(([class2_1[:-1], [0]])),
width,
color="lightgreen",
edgecolor="k",
)
# bars for VotingClassifier
p3 = ax.bar(ind, [0, 0, 0, class1_1[-1]], width, color="blue", edgecolor="k")
p4 = ax.bar(
ind + width, [0, 0, 0, class2_1[-1]], width, color="steelblue", edgecolor="k"
)
# plot annotations
plt.axvline(2.8, color="k", linestyle="dashed")
ax.set_xticks(ind + width)
ax.set_xticklabels(
[
"LogisticRegression\nweight 1",
"GaussianNB\nweight 1",
"RandomForestClassifier\nweight 5",
"VotingClassifier\n(average probabilities)",
],
rotation=40,
ha="right",
)
plt.ylim([0, 1])
plt.title("Class probabilities for sample 1 by different classifiers")
plt.legend([p1[0], p2[0]], ["class 1", "class 2"], loc="upper left")
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.268 seconds)
[`Download Python source code: plot_voting_probas.py`](https://scikit-learn.org/1.1/_downloads/93cd12369459b2e432d0a2665e19ef8a/plot_voting_probas.py)
[`Download Jupyter notebook: plot_voting_probas.ipynb`](https://scikit-learn.org/1.1/_downloads/7011de1f31ecdc52f138d7e582a6a455/plot_voting_probas.ipynb)
scikit_learn IsolationForest example
IsolationForest example
=======================
An example using [`IsolationForest`](../../modules/generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest") for anomaly detection.
The IsolationForest ‘isolates’ observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature.
Since recursive partitioning can be represented by a tree structure, the number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node.
This path length, averaged over a forest of such random trees, is a measure of normality and our decision function.
Random partitioning produces noticeably shorter paths for anomalies. Hence, when a forest of random trees collectively produces shorter path lengths for particular samples, they are highly likely to be anomalies.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import IsolationForest
rng = np.random.RandomState(42)
# Generate train data
X = 0.3 * rng.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
# Generate some regular novel observations
X = 0.3 * rng.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))
# fit the model
clf = IsolationForest(max_samples=100, random_state=rng)
clf.fit(X_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
y_pred_outliers = clf.predict(X_outliers)
# plot the line, the samples, and the nearest vectors to the plane
xx, yy = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title("IsolationForest")
plt.contourf(xx, yy, Z, cmap=plt.cm.Blues_r)
b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c="white", s=20, edgecolor="k")
b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c="green", s=20, edgecolor="k")
c = plt.scatter(X_outliers[:, 0], X_outliers[:, 1], c="red", s=20, edgecolor="k")
plt.axis("tight")
plt.xlim((-5, 5))
plt.ylim((-5, 5))
plt.legend(
[b1, b2, c],
["training observations", "new regular observations", "new abnormal observations"],
loc="upper left",
)
plt.show()
```
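As a side note (a small sketch reusing the fitted `clf` above), `predict` simply thresholds `decision_function` at zero: negative scores are labelled as anomalies (`-1`), non-negative ones as regular observations (`+1`).
```
scores = clf.decision_function(X_outliers)
labels = clf.predict(X_outliers)
# Negative scores correspond to predicted anomalies (label -1).
print(np.c_[scores[:5], labels[:5]])
```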
**Total running time of the script:** ( 0 minutes 0.337 seconds)
[`Download Python source code: plot_isolation_forest.py`](https://scikit-learn.org/1.1/_downloads/2108844cb1b17bae9a6b4c0b0fb3b211/plot_isolation_forest.py)
[`Download Jupyter notebook: plot_isolation_forest.ipynb`](https://scikit-learn.org/1.1/_downloads/f39c19ddd9f1c49a604c054eff707568/plot_isolation_forest.ipynb)
scikit_learn Color Quantization using K-Means
Color Quantization using K-Means
================================
Performs a pixel-wise Vector Quantization (VQ) of an image of the summer palace (China), reducing the number of colors required to show the image from 96,615 unique colors to 64, while preserving the overall appearance quality.
In this example, pixels are represented in a 3D-space and K-means is used to find 64 color clusters. In the image processing literature, the codebook obtained from K-means (the cluster centers) is called the color palette. Using a single byte, up to 256 colors can be addressed, whereas an RGB encoding requires 3 bytes per pixel. The GIF file format, for example, uses such a palette.
For comparison, a quantized image using a random codebook (colors picked up randomly) is also shown.
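The storage argument can be made concrete with a bit of arithmetic; a rough sketch (assuming the 427 x 640 sample image and a 64-color palette):
```
# Back-of-the-envelope storage comparison (sketch only).
w, h, n_colors = 427, 640, 64

raw_bytes = w * h * 3                      # 3 bytes (R, G, B) per pixel
palette_bytes = w * h * 1 + n_colors * 3   # 1-byte index per pixel + the palette
print(raw_bytes, palette_bytes, raw_bytes / palette_bytes)  # roughly a 3x reduction
```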
```
Fitting model on a small sub-sample of the data
done in 0.113s.
Predicting color indices on the full image (k-means)
done in 0.041s.
Predicting color indices on the full image (random)
done in 0.064s.
```
```
# Authors: Robert Layton <[email protected]>
# Olivier Grisel <[email protected]>
# Mathieu Blondel <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin
from sklearn.datasets import load_sample_image
from sklearn.utils import shuffle
from time import time
n_colors = 64
# Load the Summer Palace photo
china = load_sample_image("china.jpg")
# Convert to floats instead of the default 8 bits integer coding. Dividing by
# 255 is important so that plt.imshow works well on float data (which needs to
# be in the range [0-1])
china = np.array(china, dtype=np.float64) / 255
# Load Image and transform to a 2D numpy array.
w, h, d = original_shape = tuple(china.shape)
assert d == 3
image_array = np.reshape(china, (w * h, d))
print("Fitting model on a small sub-sample of the data")
t0 = time()
image_array_sample = shuffle(image_array, random_state=0, n_samples=1_000)
kmeans = KMeans(n_clusters=n_colors, random_state=0).fit(image_array_sample)
print(f"done in {time() - t0:0.3f}s.")
# Get labels for all points
print("Predicting color indices on the full image (k-means)")
t0 = time()
labels = kmeans.predict(image_array)
print(f"done in {time() - t0:0.3f}s.")
codebook_random = shuffle(image_array, random_state=0, n_samples=n_colors)
print("Predicting color indices on the full image (random)")
t0 = time()
labels_random = pairwise_distances_argmin(codebook_random, image_array, axis=0)
print(f"done in {time() - t0:0.3f}s.")
def recreate_image(codebook, labels, w, h):
"""Recreate the (compressed) image from the code book & labels"""
return codebook[labels].reshape(w, h, -1)
# Display all results, alongside original image
plt.figure(1)
plt.clf()
plt.axis("off")
plt.title("Original image (96,615 colors)")
plt.imshow(china)
plt.figure(2)
plt.clf()
plt.axis("off")
plt.title(f"Quantized image ({n_colors} colors, K-Means)")
plt.imshow(recreate_image(kmeans.cluster_centers_, labels, w, h))
plt.figure(3)
plt.clf()
plt.axis("off")
plt.title(f"Quantized image ({n_colors} colors, Random)")
plt.imshow(recreate_image(codebook_random, labels_random, w, h))
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.551 seconds)
[`Download Python source code: plot_color_quantization.py`](https://scikit-learn.org/1.1/_downloads/d0e47fc5f3661efb101abfd4d9461afe/plot_color_quantization.py)
[`Download Jupyter notebook: plot_color_quantization.ipynb`](https://scikit-learn.org/1.1/_downloads/82ec115874a062f9e8fa17efc63384c0/plot_color_quantization.ipynb)
scikit_learn Plot Hierarchical Clustering Dendrogram
Plot Hierarchical Clustering Dendrogram
=======================================
This example plots the corresponding dendrogram of a hierarchical clustering using AgglomerativeClustering and the dendrogram method available in scipy.
```
import numpy as np
from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram
from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering
def plot_dendrogram(model, **kwargs):
# Create linkage matrix and then plot the dendrogram
# create the counts of samples under each node
counts = np.zeros(model.children_.shape[0])
n_samples = len(model.labels_)
for i, merge in enumerate(model.children_):
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
current_count += 1 # leaf node
else:
current_count += counts[child_idx - n_samples]
counts[i] = current_count
linkage_matrix = np.column_stack(
[model.children_, model.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
dendrogram(linkage_matrix, **kwargs)
iris = load_iris()
X = iris.data
# setting distance_threshold=0 ensures we compute the full tree.
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None)
model = model.fit(X)
plt.title("Hierarchical Clustering Dendrogram")
# plot the top three levels of the dendrogram
plot_dendrogram(model, truncate_mode="level", p=3)
plt.xlabel("Number of points in node (or index of point if no parenthesis).")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.077 seconds)
[`Download Python source code: plot_agglomerative_dendrogram.py`](https://scikit-learn.org/1.1/_downloads/4cb9ca0dda94124c7cb99dcaae983dab/plot_agglomerative_dendrogram.py)
[`Download Jupyter notebook: plot_agglomerative_dendrogram.ipynb`](https://scikit-learn.org/1.1/_downloads/72d877fe6a0a97bfa8950a086a14a445/plot_agglomerative_dendrogram.ipynb)
scikit_learn Agglomerative clustering with different metrics
Agglomerative clustering with different metrics
===============================================
Demonstrates the effect of different metrics on the hierarchical clustering.
The example is engineered to show the effect of the choice of different metrics. It is applied to waveforms, which can be seen as high-dimensional vectors. Indeed, the difference between metrics is usually more pronounced in high dimension (in particular for euclidean and cityblock).
We generate data from three groups of waveforms. Two of the waveforms (waveform 1 and waveform 2) are proportional to one another. The cosine distance is invariant to a scaling of the data; as a result, it cannot distinguish these two waveforms. Thus even with no noise, clustering using this distance will not separate out waveform 1 and 2.
We add observation noise to these waveforms. We generate very sparse noise: only 6% of the time points contain noise. As a result, the l1 norm of this noise (i.e. the “cityblock” distance) is much smaller than its l2 norm (the “euclidean” distance). This can be seen on the inter-class distance matrices: the values on the diagonal, which characterize the spread of the class, are much bigger for the Euclidean distance than for the cityblock distance.
When we apply clustering to the data, we find that the clustering reflects what was in the distance matrices. Indeed, for the Euclidean distance, the classes are ill-separated because of the noise, and thus the clustering does not separate the waveforms. For the cityblock distance, the separation is good and the waveform classes are recovered. Finally, the cosine distance does not separate waveform 1 and 2 at all, thus the clustering puts them in the same cluster.
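The scale-invariance of the cosine distance can be verified directly; a minimal self-contained sketch:
```
import numpy as np
from sklearn.metrics import pairwise_distances

v = np.random.RandomState(0).rand(1, 50)
# Scaling a waveform leaves its cosine distance to the original unchanged (~0),
# while the euclidean distance grows with the scale factor.
print(pairwise_distances(np.vstack([v, 3 * v]), metric="cosine")[0, 1])
print(pairwise_distances(np.vstack([v, 3 * v]), metric="euclidean")[0, 1])
```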
```
# Author: Gael Varoquaux
# License: BSD 3-Clause or CC-0
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import pairwise_distances
np.random.seed(0)
# Generate waveform data
n_features = 2000
t = np.pi * np.linspace(0, 1, n_features)
def sqr(x):
return np.sign(np.cos(x))
X = list()
y = list()
for i, (phi, a) in enumerate([(0.5, 0.15), (0.5, 0.6), (0.3, 0.2)]):
for _ in range(30):
phase_noise = 0.01 * np.random.normal()
amplitude_noise = 0.04 * np.random.normal()
additional_noise = 1 - 2 * np.random.rand(n_features)
# Make the noise sparse
additional_noise[np.abs(additional_noise) < 0.997] = 0
X.append(
12
* (
(a + amplitude_noise) * (sqr(6 * (t + phi + phase_noise)))
+ additional_noise
)
)
y.append(i)
X = np.array(X)
y = np.array(y)
n_clusters = 3
labels = ("Waveform 1", "Waveform 2", "Waveform 3")
# Plot the ground-truth labelling
plt.figure()
plt.axes([0, 0, 1, 1])
for l, c, n in zip(range(n_clusters), "rgb", labels):
lines = plt.plot(X[y == l].T, c=c, alpha=0.5)
lines[0].set_label(n)
plt.legend(loc="best")
plt.axis("tight")
plt.axis("off")
plt.suptitle("Ground truth", size=20)
# Plot the distances
for index, metric in enumerate(["cosine", "euclidean", "cityblock"]):
avg_dist = np.zeros((n_clusters, n_clusters))
plt.figure(figsize=(5, 4.5))
for i in range(n_clusters):
for j in range(n_clusters):
avg_dist[i, j] = pairwise_distances(
X[y == i], X[y == j], metric=metric
).mean()
avg_dist /= avg_dist.max()
for i in range(n_clusters):
for j in range(n_clusters):
plt.text(
i,
j,
"%5.3f" % avg_dist[i, j],
verticalalignment="center",
horizontalalignment="center",
)
plt.imshow(avg_dist, interpolation="nearest", cmap=plt.cm.gnuplot2, vmin=0)
plt.xticks(range(n_clusters), labels, rotation=45)
plt.yticks(range(n_clusters), labels)
plt.colorbar()
plt.suptitle("Interclass %s distances" % metric, size=18)
plt.tight_layout()
# Plot clustering results
for index, metric in enumerate(["cosine", "euclidean", "cityblock"]):
model = AgglomerativeClustering(
n_clusters=n_clusters, linkage="average", affinity=metric
)
model.fit(X)
plt.figure()
plt.axes([0, 0, 1, 1])
for l, c in zip(np.arange(model.n_clusters), "rgbk"):
plt.plot(X[model.labels_ == l].T, c=c, alpha=0.5)
plt.axis("tight")
plt.axis("off")
plt.suptitle("AgglomerativeClustering(affinity=%s)" % metric, size=20)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.935 seconds)
[`Download Python source code: plot_agglomerative_clustering_metrics.py`](https://scikit-learn.org/1.1/_downloads/bc88e7ec572d6d2d2ff19cf0d75265c9/plot_agglomerative_clustering_metrics.py)
[`Download Jupyter notebook: plot_agglomerative_clustering_metrics.ipynb`](https://scikit-learn.org/1.1/_downloads/e7d5500e87a046a110ca0daebd702588/plot_agglomerative_clustering_metrics.ipynb)
scikit_learn An example of K-Means++ initialization
An example of K-Means++ initialization
======================================
An example to show the output of the [`sklearn.cluster.kmeans_plusplus`](../../modules/generated/sklearn.cluster.kmeans_plusplus#sklearn.cluster.kmeans_plusplus "sklearn.cluster.kmeans_plusplus") function for generating initial seeds for clustering.
K-Means++ is used as the default initialization for [K-means](../../modules/clustering#k-means).
```
from sklearn.cluster import kmeans_plusplus
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
# Generate sample data
n_samples = 4000
n_components = 4
X, y_true = make_blobs(
n_samples=n_samples, centers=n_components, cluster_std=0.60, random_state=0
)
X = X[:, ::-1]
# Calculate seeds from kmeans++
centers_init, indices = kmeans_plusplus(X, n_clusters=4, random_state=0)
# Plot init seeds along side sample data
plt.figure(1)
colors = ["#4EACC5", "#FF9C34", "#4E9A06", "m"]
for k, col in enumerate(colors):
cluster_data = y_true == k
plt.scatter(X[cluster_data, 0], X[cluster_data, 1], c=col, marker=".", s=10)
plt.scatter(centers_init[:, 0], centers_init[:, 1], c="b", s=50)
plt.title("K-Means++ Initialization")
plt.xticks([])
plt.yticks([])
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.060 seconds)
[`Download Python source code: plot_kmeans_plusplus.py`](https://scikit-learn.org/1.1/_downloads/fa03fd57e0f1a2cd66f3693283f7a6b3/plot_kmeans_plusplus.py)
[`Download Jupyter notebook: plot_kmeans_plusplus.ipynb`](https://scikit-learn.org/1.1/_downloads/77b640d8771f7ecdeb2dbc948ebce90a/plot_kmeans_plusplus.ipynb)
scikit_learn Demo of affinity propagation clustering algorithm
Demo of affinity propagation clustering algorithm
=================================================
Reference: Brendan J. Frey and Delbert Dueck, “Clustering by Passing Messages Between Data Points”, Science Feb. 2007
```
from sklearn.cluster import AffinityPropagation
from sklearn import metrics
from sklearn.datasets import make_blobs
```
Generate sample data
--------------------
```
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(
n_samples=300, centers=centers, cluster_std=0.5, random_state=0
)
```
Compute Affinity Propagation
----------------------------
```
af = AffinityPropagation(preference=-50, random_state=0).fit(X)
cluster_centers_indices = af.cluster_centers_indices_
labels = af.labels_
n_clusters_ = len(cluster_centers_indices)
print("Estimated number of clusters: %d" % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f" % metrics.adjusted_rand_score(labels_true, labels))
print(
"Adjusted Mutual Information: %0.3f"
% metrics.adjusted_mutual_info_score(labels_true, labels)
)
print(
"Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels, metric="sqeuclidean")
)
```
```
Estimated number of clusters: 3
Homogeneity: 0.872
Completeness: 0.872
V-measure: 0.872
Adjusted Rand Index: 0.912
Adjusted Mutual Information: 0.871
Silhouette Coefficient: 0.753
```
Plot result
-----------
```
import matplotlib.pyplot as plt
from itertools import cycle
plt.close("all")
plt.figure(1)
plt.clf()
colors = cycle("bgrcmykbgrcmykbgrcmykbgrcmyk")
for k, col in zip(range(n_clusters_), colors):
class_members = labels == k
cluster_center = X[cluster_centers_indices[k]]
plt.plot(X[class_members, 0], X[class_members, 1], col + ".")
plt.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=14,
)
for x in X[class_members]:
plt.plot([cluster_center[0], x[0]], [cluster_center[1], x[1]], col)
plt.title("Estimated number of clusters: %d" % n_clusters_)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.272 seconds)
[`Download Python source code: plot_affinity_propagation.py`](https://scikit-learn.org/1.1/_downloads/2f930acda654766f8ba0fee08887bb41/plot_affinity_propagation.py)
[`Download Jupyter notebook: plot_affinity_propagation.ipynb`](https://scikit-learn.org/1.1/_downloads/91999ecc168932f9034d0cbc1cc248fa/plot_affinity_propagation.ipynb)
scikit_learn Compare BIRCH and MiniBatchKMeans
Compare BIRCH and MiniBatchKMeans
=================================
This example compares the timing of BIRCH (with and without the global clustering step) and MiniBatchKMeans on a synthetic dataset having 25,000 samples and 2 features generated using make\_blobs.
Both `MiniBatchKMeans` and `BIRCH` are very scalable algorithms and could run efficiently on hundreds of thousands or even millions of datapoints. We chose to limit the dataset size of this example in the interest of keeping our Continuous Integration resource usage reasonable but the interested reader might enjoy editing this script to rerun it with a larger value for `n_samples`.
If `n_clusters` is set to None, the data is reduced from 25,000 samples to a set of 158 clusters. This can be viewed as a preprocessing step before the final (global) clustering step that further reduces these 158 clusters to 100 clusters.
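A hedged sketch of that reduce-then-cluster idea on toy data (separate from the timed comparison below):
```
from sklearn.cluster import AgglomerativeClustering, Birch
from sklearn.datasets import make_blobs

# Toy data standing in for the 25,000-sample dataset built below.
X_toy, _ = make_blobs(n_samples=2000, centers=20, random_state=0)

# First reduce the samples to BIRCH subcluster centroids...
reducer = Birch(threshold=1.7, n_clusters=None).fit(X_toy)
centroids = reducer.subcluster_centers_

# ...then run a (cheaper) global clustering on those centroids only.
global_labels = AgglomerativeClustering(n_clusters=10).fit_predict(centroids)
```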

```
BIRCH without global clustering as the final step took 0.57 seconds
n_clusters : 158
BIRCH with global clustering as the final step took 0.56 seconds
n_clusters : 100
Time taken to run MiniBatchKMeans 0.20 seconds
```
```
# Authors: Manoj Kumar <[email protected]
# Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
from joblib import cpu_count
from itertools import cycle
from time import time
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from sklearn.cluster import Birch, MiniBatchKMeans
from sklearn.datasets import make_blobs
# Generate centers for the blobs so that it forms a 10 X 10 grid.
xx = np.linspace(-22, 22, 10)
yy = np.linspace(-22, 22, 10)
xx, yy = np.meshgrid(xx, yy)
n_centers = np.hstack((np.ravel(xx)[:, np.newaxis], np.ravel(yy)[:, np.newaxis]))
# Generate blobs to do a comparison between MiniBatchKMeans and BIRCH.
X, y = make_blobs(n_samples=25000, centers=n_centers, random_state=0)
# Use all colors that matplotlib provides by default.
colors_ = cycle(colors.cnames.keys())
fig = plt.figure(figsize=(12, 4))
fig.subplots_adjust(left=0.04, right=0.98, bottom=0.1, top=0.9)
# Compute clustering with BIRCH with and without the final clustering step
# and plot.
birch_models = [
Birch(threshold=1.7, n_clusters=None),
Birch(threshold=1.7, n_clusters=100),
]
final_step = ["without global clustering", "with global clustering"]
for ind, (birch_model, info) in enumerate(zip(birch_models, final_step)):
t = time()
birch_model.fit(X)
print("BIRCH %s as the final step took %0.2f seconds" % (info, (time() - t)))
# Plot result
labels = birch_model.labels_
centroids = birch_model.subcluster_centers_
n_clusters = np.unique(labels).size
print("n_clusters : %d" % n_clusters)
ax = fig.add_subplot(1, 3, ind + 1)
for this_centroid, k, col in zip(centroids, range(n_clusters), colors_):
mask = labels == k
ax.scatter(X[mask, 0], X[mask, 1], c="w", edgecolor=col, marker=".", alpha=0.5)
if birch_model.n_clusters is None:
ax.scatter(this_centroid[0], this_centroid[1], marker="+", c="k", s=25)
ax.set_ylim([-25, 25])
ax.set_xlim([-25, 25])
ax.set_autoscaley_on(False)
ax.set_title("BIRCH %s" % info)
# Compute clustering with MiniBatchKMeans.
mbk = MiniBatchKMeans(
init="k-means++",
n_clusters=100,
batch_size=256 * cpu_count(),
n_init=10,
max_no_improvement=10,
verbose=0,
random_state=0,
)
t0 = time()
mbk.fit(X)
t_mini_batch = time() - t0
print("Time taken to run MiniBatchKMeans %0.2f seconds" % t_mini_batch)
mbk_means_labels_unique = np.unique(mbk.labels_)
ax = fig.add_subplot(1, 3, 3)
for this_centroid, k, col in zip(mbk.cluster_centers_, range(n_clusters), colors_):
mask = mbk.labels_ == k
ax.scatter(X[mask, 0], X[mask, 1], marker=".", c="w", edgecolor=col, alpha=0.5)
ax.scatter(this_centroid[0], this_centroid[1], marker="+", c="k", s=25)
ax.set_xlim([-25, 25])
ax.set_ylim([-25, 25])
ax.set_title("MiniBatchKMeans")
ax.set_autoscaley_on(False)
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.419 seconds)
[`Download Python source code: plot_birch_vs_minibatchkmeans.py`](https://scikit-learn.org/1.1/_downloads/e3c29fcee17ffb4a67e11b147b8a86bb/plot_birch_vs_minibatchkmeans.py)
[`Download Jupyter notebook: plot_birch_vs_minibatchkmeans.ipynb`](https://scikit-learn.org/1.1/_downloads/a373b9fdc21005d9a66ecf3df90eb49a/plot_birch_vs_minibatchkmeans.ipynb)
scikit_learn Empirical evaluation of the impact of k-means initialization
Empirical evaluation of the impact of k-means initialization
============================================================
Evaluate the ability of k-means initialization strategies to make the algorithm convergence robust, as measured by the relative standard deviation of the inertia of the clustering (i.e. the sum of squared distances to the nearest cluster center).
The first plot shows the best inertia reached for each combination of the model (`KMeans` or `MiniBatchKMeans`) and the init method (`init="random"` or `init="kmeans++"`) for increasing values of the `n_init` parameter that controls the number of initializations.
The second plot demonstrates a single run of the `MiniBatchKMeans` estimator using `init="random"` and `n_init=1`. This run leads to a bad convergence (local optimum), with estimated centers stuck between ground-truth clusters.
The dataset used for evaluation is a 2D grid of widely spaced isotropic Gaussian clusters.
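As a minimal, self-contained sketch of the quantity being measured (this snippet is not part of the original example; the dataset, seeds and cluster count are arbitrary illustrations), the following compares the spread of the final inertia for a single initialization versus ten initializations:

```
# Illustrative sketch only: relative spread of the final inertia for two
# values of n_init on a synthetic blob dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=9, cluster_std=0.5, random_state=0)
for n_init in (1, 10):
    inertias = [
        KMeans(n_clusters=9, init="random", n_init=n_init, random_state=seed)
        .fit(X)
        .inertia_
        for seed in range(5)
    ]
    # Keeping the best of several random initializations shrinks the relative
    # standard deviation of the final inertia.
    print("n_init=%d  relative std of inertia: %.3f"
          % (n_init, np.std(inertias) / np.mean(inertias)))
```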
```
Evaluation of KMeans with k-means++ init
Evaluation of KMeans with random init
Evaluation of MiniBatchKMeans with k-means++ init
Evaluation of MiniBatchKMeans with random init
/home/runner/work/scikit-learn/scikit-learn/examples/cluster/plot_kmeans_stability_low_dim_dense.py:118: UserWarning: marker is redundantly defined by the 'marker' keyword argument and the fmt string "o" (-> marker='o'). The keyword argument will take precedence.
plt.plot(X[my_members, 0], X[my_members, 1], "o", marker=".", c=color)
```
```
# Author: Olivier Grisel <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.utils import shuffle
from sklearn.utils import check_random_state
from sklearn.cluster import MiniBatchKMeans
from sklearn.cluster import KMeans
random_state = np.random.RandomState(0)
# Number of run (with randomly generated dataset) for each strategy so as
# to be able to compute an estimate of the standard deviation
n_runs = 5
# k-means models can do several random inits so as to be able to trade
# CPU time for convergence robustness
n_init_range = np.array([1, 5, 10, 15, 20])
# Datasets generation parameters
n_samples_per_center = 100
grid_size = 3
scale = 0.1
n_clusters = grid_size**2
def make_data(random_state, n_samples_per_center, grid_size, scale):
random_state = check_random_state(random_state)
centers = np.array([[i, j] for i in range(grid_size) for j in range(grid_size)])
n_clusters_true, n_features = centers.shape
noise = random_state.normal(
scale=scale, size=(n_samples_per_center, centers.shape[1])
)
X = np.concatenate([c + noise for c in centers])
y = np.concatenate([[i] * n_samples_per_center for i in range(n_clusters_true)])
return shuffle(X, y, random_state=random_state)
# Part 1: Quantitative evaluation of various init methods
plt.figure()
plots = []
legends = []
cases = [
(KMeans, "k-means++", {}),
(KMeans, "random", {}),
(MiniBatchKMeans, "k-means++", {"max_no_improvement": 3}),
(MiniBatchKMeans, "random", {"max_no_improvement": 3, "init_size": 500}),
]
for factory, init, params in cases:
print("Evaluation of %s with %s init" % (factory.__name__, init))
inertia = np.empty((len(n_init_range), n_runs))
for run_id in range(n_runs):
X, y = make_data(run_id, n_samples_per_center, grid_size, scale)
for i, n_init in enumerate(n_init_range):
km = factory(
n_clusters=n_clusters,
init=init,
random_state=run_id,
n_init=n_init,
**params,
).fit(X)
inertia[i, run_id] = km.inertia_
p = plt.errorbar(n_init_range, inertia.mean(axis=1), inertia.std(axis=1))
plots.append(p[0])
legends.append("%s with %s init" % (factory.__name__, init))
plt.xlabel("n_init")
plt.ylabel("inertia")
plt.legend(plots, legends)
plt.title("Mean inertia for various k-means init across %d runs" % n_runs)
# Part 2: Qualitative visual inspection of the convergence
X, y = make_data(random_state, n_samples_per_center, grid_size, scale)
km = MiniBatchKMeans(
n_clusters=n_clusters, init="random", n_init=1, random_state=random_state
).fit(X)
plt.figure()
for k in range(n_clusters):
my_members = km.labels_ == k
color = cm.nipy_spectral(float(k) / n_clusters, 1)
plt.plot(X[my_members, 0], X[my_members, 1], "o", marker=".", c=color)
cluster_center = km.cluster_centers_[k]
plt.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=color,
markeredgecolor="k",
markersize=6,
)
plt.title(
"Example cluster allocation with a single random init\nwith MiniBatchKMeans"
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.165 seconds)
[`Download Python source code: plot_kmeans_stability_low_dim_dense.py`](https://scikit-learn.org/1.1/_downloads/4b8d8f0d50e5aa937ac9571a35eadc28/plot_kmeans_stability_low_dim_dense.py)
[`Download Jupyter notebook: plot_kmeans_stability_low_dim_dense.ipynb`](https://scikit-learn.org/1.1/_downloads/7e9cf82b8b60275dd7851470d151af5f/plot_kmeans_stability_low_dim_dense.ipynb)
scikit_learn Various Agglomerative Clustering on a 2D embedding of digits Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-digits-linkage-py) to download the full example code or to run this example in your browser via Binder
Various Agglomerative Clustering on a 2D embedding of digits
============================================================
An illustration of various linkage options for agglomerative clustering on a 2D embedding of the digits dataset.
The goal of this example is to show intuitively how the metrics behave, and not to find good clusters for the digits. This is why the example works on a 2D embedding.
What this example shows us is the “rich get richer” behavior of agglomerative clustering, which tends to create uneven cluster sizes.
This behavior is pronounced for the average linkage strategy, which ends up with a couple of clusters containing few datapoints.
The case of single linkage is even more pathological, with one very large cluster covering most digits, an intermediate-size (clean) cluster containing most of the zero digits, and all other clusters drawn from noise points around the fringes.
The other linkage strategies lead to more evenly distributed clusters, which are therefore likely to be less sensitive to a random resampling of the dataset.
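The uneven cluster sizes discussed above can be quantified directly. The following is a minimal sketch, separate from the example code further down; it recomputes the spectral embedding so it is self-contained, and the exact counts will vary between runs:

```
# Illustrative sketch only: sorted cluster sizes per linkage on the same
# 2D spectral embedding of the digits used by the example.
import numpy as np
from sklearn import datasets, manifold
from sklearn.cluster import AgglomerativeClustering

X, _ = datasets.load_digits(return_X_y=True)
X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)
for linkage in ("ward", "average", "complete", "single"):
    labels = AgglomerativeClustering(linkage=linkage, n_clusters=10).fit_predict(X_red)
    # np.bincount gives the size of each of the 10 clusters; single and average
    # linkage typically produce one dominant cluster and several tiny ones.
    print(linkage, np.sort(np.bincount(labels))[::-1])
```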
```
Computing embedding
Done.
ward : 0.05s
average : 0.04s
complete : 0.04s
single : 0.02s
```
```
# Authors: Gael Varoquaux
# License: BSD 3 clause (C) INRIA 2014
from time import time
import numpy as np
from matplotlib import pyplot as plt
from sklearn import manifold, datasets
digits = datasets.load_digits()
X, y = digits.data, digits.target
n_samples, n_features = X.shape
np.random.seed(0)
# ----------------------------------------------------------------------
# Visualize the clustering
def plot_clustering(X_red, labels, title=None):
x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0)
X_red = (X_red - x_min) / (x_max - x_min)
plt.figure(figsize=(6, 4))
for digit in digits.target_names:
plt.scatter(
*X_red[y == digit].T,
marker=f"${digit}$",
s=50,
c=plt.cm.nipy_spectral(labels[y == digit] / 10),
alpha=0.5,
)
plt.xticks([])
plt.yticks([])
if title is not None:
plt.title(title, size=17)
plt.axis("off")
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
# ----------------------------------------------------------------------
# 2D embedding of the digits dataset
print("Computing embedding")
X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)
print("Done.")
from sklearn.cluster import AgglomerativeClustering
for linkage in ("ward", "average", "complete", "single"):
clustering = AgglomerativeClustering(linkage=linkage, n_clusters=10)
t0 = time()
clustering.fit(X_red)
print("%s :\t%.2fs" % (linkage, time() - t0))
plot_clustering(X_red, clustering.labels_, "%s linkage" % linkage)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.288 seconds)
[`Download Python source code: plot_digits_linkage.py`](https://scikit-learn.org/1.1/_downloads/2b34fe5e4c2bfccb9b7dbff3e93ff741/plot_digits_linkage.py)
[`Download Jupyter notebook: plot_digits_linkage.ipynb`](https://scikit-learn.org/1.1/_downloads/21ba44171c2107e5285a530bdf3dd0f6/plot_digits_linkage.ipynb)
scikit_learn Comparing different hierarchical linkage methods on toy datasets Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-linkage-comparison-py) to download the full example code or to run this example in your browser via Binder
Comparing different hierarchical linkage methods on toy datasets
================================================================
This example shows characteristics of different linkage methods for hierarchical clustering on datasets that are “interesting” but still in 2D.
The main observations to make are:
* single linkage is fast, and can perform well on non-globular data, but it performs poorly in the presence of noise.
* average and complete linkage perform well on cleanly separated globular clusters, but have mixed results otherwise.
* Ward is the most effective method for noisy data.
While these examples give some intuition about the algorithms, this intuition might not apply to very high dimensional data.
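As a small, illustrative sketch of the first two observations (not part of the example below; the noise levels are arbitrary choices), single linkage can be compared with Ward on clean and noisy versions of the two-moons dataset:

```
# Illustrative sketch only: agreement with the true moon labels for single
# and Ward linkage, with and without extra noise.
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_moons
from sklearn.metrics import adjusted_rand_score

for noise in (0.05, 0.15):
    X, y = make_moons(n_samples=500, noise=noise, random_state=0)
    for linkage in ("single", "ward"):
        labels = AgglomerativeClustering(n_clusters=2, linkage=linkage).fit_predict(X)
        # Single linkage typically separates the clean moons almost perfectly
        # but degrades quickly as the noise level grows.
        print("noise=%.2f  %7s linkage  ARI=%.2f"
              % (noise, linkage, adjusted_rand_score(y, labels)))
```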
```
import time
import warnings
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cluster, datasets
from sklearn.preprocessing import StandardScaler
from itertools import cycle, islice
np.random.seed(0)
```
Generate datasets. We choose a size large enough to see the scalability of the algorithms, but not so large that the running times become excessive.
```
n_samples = 1500
noisy_circles = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05)
noisy_moons = datasets.make_moons(n_samples=n_samples, noise=0.05)
blobs = datasets.make_blobs(n_samples=n_samples, random_state=8)
no_structure = np.random.rand(n_samples, 2), None
# Anisotropicly distributed data
random_state = 170
X, y = datasets.make_blobs(n_samples=n_samples, random_state=random_state)
transformation = [[0.6, -0.6], [-0.4, 0.8]]
X_aniso = np.dot(X, transformation)
aniso = (X_aniso, y)
# blobs with varied variances
varied = datasets.make_blobs(
n_samples=n_samples, cluster_std=[1.0, 2.5, 0.5], random_state=random_state
)
```
Run the clustering and plot
```
# Set up cluster parameters
plt.figure(figsize=(9 * 1.3 + 2, 14.5))
plt.subplots_adjust(
left=0.02, right=0.98, bottom=0.001, top=0.96, wspace=0.05, hspace=0.01
)
plot_num = 1
default_base = {"n_neighbors": 10, "n_clusters": 3}
datasets = [
(noisy_circles, {"n_clusters": 2}),
(noisy_moons, {"n_clusters": 2}),
(varied, {"n_neighbors": 2}),
(aniso, {"n_neighbors": 2}),
(blobs, {}),
(no_structure, {}),
]
for i_dataset, (dataset, algo_params) in enumerate(datasets):
# update parameters with dataset-specific values
params = default_base.copy()
params.update(algo_params)
X, y = dataset
# normalize dataset for easier parameter selection
X = StandardScaler().fit_transform(X)
# ============
# Create cluster objects
# ============
ward = cluster.AgglomerativeClustering(
n_clusters=params["n_clusters"], linkage="ward"
)
complete = cluster.AgglomerativeClustering(
n_clusters=params["n_clusters"], linkage="complete"
)
average = cluster.AgglomerativeClustering(
n_clusters=params["n_clusters"], linkage="average"
)
single = cluster.AgglomerativeClustering(
n_clusters=params["n_clusters"], linkage="single"
)
clustering_algorithms = (
("Single Linkage", single),
("Average Linkage", average),
("Complete Linkage", complete),
("Ward Linkage", ward),
)
for name, algorithm in clustering_algorithms:
t0 = time.time()
# catch warnings related to kneighbors_graph
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="the number of connected components of the "
+ "connectivity matrix is [0-9]{1,2}"
+ " > 1. Completing it to avoid stopping the tree early.",
category=UserWarning,
)
algorithm.fit(X)
t1 = time.time()
if hasattr(algorithm, "labels_"):
y_pred = algorithm.labels_.astype(int)
else:
y_pred = algorithm.predict(X)
plt.subplot(len(datasets), len(clustering_algorithms), plot_num)
if i_dataset == 0:
plt.title(name, size=18)
colors = np.array(
list(
islice(
cycle(
[
"#377eb8",
"#ff7f00",
"#4daf4a",
"#f781bf",
"#a65628",
"#984ea3",
"#999999",
"#e41a1c",
"#dede00",
]
),
int(max(y_pred) + 1),
)
)
)
plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
plt.xticks(())
plt.yticks(())
plt.text(
0.99,
0.01,
("%.2fs" % (t1 - t0)).lstrip("0"),
transform=plt.gca().transAxes,
size=15,
horizontalalignment="right",
)
plot_num += 1
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.761 seconds)
[`Download Python source code: plot_linkage_comparison.py`](https://scikit-learn.org/1.1/_downloads/2338f6e7d44c2931a41926d4f9726d9b/plot_linkage_comparison.py)
[`Download Jupyter notebook: plot_linkage_comparison.ipynb`](https://scikit-learn.org/1.1/_downloads/be7e9c5a81790b318c3a8028ced647ff/plot_linkage_comparison.ipynb)
scikit_learn Adjustment for chance in clustering performance evaluation Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-adjusted-for-chance-measures-py) to download the full example code or to run this example in your browser via Binder
Adjustment for chance in clustering performance evaluation
==========================================================
The following plots demonstrate the impact of the number of clusters and number of samples on various clustering performance evaluation metrics.
Non-adjusted measures such as the V-Measure show a dependency between the number of clusters and the number of samples: the mean V-Measure of random labelings increases significantly as the number of clusters approaches the total number of samples used to compute the measure.
Adjusted-for-chance measures such as ARI display random variations centered around a mean score of 0.0 for any number of samples and clusters.
Hence, only adjusted measures can safely be used as a consensus index to evaluate the average stability of clustering algorithms for a given value of k on various overlapping sub-samples of the dataset.
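A minimal sketch of this effect, separate from the full example below (the cluster counts are arbitrary illustrations), scores two independent random labelings with one non-adjusted and one adjusted measure:

```
# Illustrative sketch only: the non-adjusted V-measure grows as the number of
# clusters approaches the number of samples, while ARI stays close to 0.
import numpy as np
from sklearn import metrics

rng = np.random.RandomState(0)
n_samples = 100
for n_clusters in (10, 50, 90):
    a = rng.randint(n_clusters, size=n_samples)
    b = rng.randint(n_clusters, size=n_samples)
    print(
        "n_clusters=%2d  V-measure=%.2f  ARI=%.2f"
        % (n_clusters, metrics.v_measure_score(a, b), metrics.adjusted_rand_score(a, b))
    )
```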
```
Computing adjusted_rand_score for 10 values of n_clusters and n_samples=100
done in 0.027s
Computing v_measure_score for 10 values of n_clusters and n_samples=100
done in 0.043s
Computing ami_score for 10 values of n_clusters and n_samples=100
done in 0.235s
Computing mutual_info_score for 10 values of n_clusters and n_samples=100
done in 0.036s
Computing adjusted_rand_score for 10 values of n_clusters and n_samples=1000
done in 0.037s
Computing v_measure_score for 10 values of n_clusters and n_samples=1000
done in 0.059s
Computing ami_score for 10 values of n_clusters and n_samples=1000
done in 0.158s
Computing mutual_info_score for 10 values of n_clusters and n_samples=1000
done in 0.047s
```
```
# Author: Olivier Grisel <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from time import time
from sklearn import metrics
def uniform_labelings_scores(
score_func, n_samples, n_clusters_range, fixed_n_classes=None, n_runs=5, seed=42
):
"""Compute score for 2 random uniform cluster labelings.
Both random labelings have the same number of clusters for each possible
value in ``n_clusters_range``.
When fixed_n_classes is not None the first labeling is considered a ground
truth class assignment with fixed number of classes.
"""
random_labels = np.random.RandomState(seed).randint
scores = np.zeros((len(n_clusters_range), n_runs))
if fixed_n_classes is not None:
labels_a = random_labels(low=0, high=fixed_n_classes, size=n_samples)
for i, k in enumerate(n_clusters_range):
for j in range(n_runs):
if fixed_n_classes is None:
labels_a = random_labels(low=0, high=k, size=n_samples)
labels_b = random_labels(low=0, high=k, size=n_samples)
scores[i, j] = score_func(labels_a, labels_b)
return scores
def ami_score(U, V):
return metrics.adjusted_mutual_info_score(U, V)
score_funcs = [
metrics.adjusted_rand_score,
metrics.v_measure_score,
ami_score,
metrics.mutual_info_score,
]
# 2 independent random clusterings with equal cluster number
n_samples = 100
n_clusters_range = np.linspace(2, n_samples, 10).astype(int)
plt.figure(1)
plots = []
names = []
for score_func in score_funcs:
print(
"Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples)
)
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range)
print("done in %0.3fs" % (time() - t0))
plots.append(
plt.errorbar(n_clusters_range, np.median(scores, axis=1), scores.std(axis=1))[0]
)
names.append(score_func.__name__)
plt.title(
"Clustering measures for 2 random uniform labelings\nwith equal number of clusters"
)
plt.xlabel("Number of clusters (Number of samples is fixed to %d)" % n_samples)
plt.ylabel("Score value")
plt.legend(plots, names)
plt.ylim(bottom=-0.05, top=1.05)
# Random labeling with varying n_clusters against ground class labels
# with fixed number of clusters
n_samples = 1000
n_clusters_range = np.linspace(2, 100, 10).astype(int)
n_classes = 10
plt.figure(2)
plots = []
names = []
for score_func in score_funcs:
print(
"Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples)
)
t0 = time()
scores = uniform_labelings_scores(
score_func, n_samples, n_clusters_range, fixed_n_classes=n_classes
)
print("done in %0.3fs" % (time() - t0))
plots.append(
plt.errorbar(n_clusters_range, scores.mean(axis=1), scores.std(axis=1))[0]
)
names.append(score_func.__name__)
plt.title(
"Clustering measures for random uniform labeling\n"
"against reference assignment with %d classes" % n_classes
)
plt.xlabel("Number of clusters (Number of samples is fixed to %d)" % n_samples)
plt.ylabel("Score value")
plt.ylim(bottom=-0.05, top=1.05)
plt.legend(plots, names)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.802 seconds)
[`Download Python source code: plot_adjusted_for_chance_measures.py`](https://scikit-learn.org/1.1/_downloads/f641388d3d28570ff40709693f9cb7ca/plot_adjusted_for_chance_measures.py)
[`Download Jupyter notebook: plot_adjusted_for_chance_measures.ipynb`](https://scikit-learn.org/1.1/_downloads/7c06490f380b1e20e9558c6c5fde70ed/plot_adjusted_for_chance_measures.ipynb)
scikit_learn Demonstration of k-means assumptions Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-kmeans-assumptions-py) to download the full example code or to run this example in your browser via Binder
Demonstration of k-means assumptions
====================================
This example is meant to illustrate situations where k-means will produce unintuitive and possibly unexpected clusters. In the first three plots, the input data does not conform to some implicit assumption that k-means makes and undesirable clusters are produced as a result. In the last plot, k-means returns intuitive clusters despite unevenly sized blobs.
```
# Author: Phil Roth <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
plt.figure(figsize=(12, 12))
n_samples = 1500
random_state = 170
X, y = make_blobs(n_samples=n_samples, random_state=random_state)
# Incorrect number of clusters
y_pred = KMeans(n_clusters=2, random_state=random_state).fit_predict(X)
plt.subplot(221)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.title("Incorrect Number of Blobs")
# Anisotropicly distributed data
transformation = [[0.60834549, -0.63667341], [-0.40887718, 0.85253229]]
X_aniso = np.dot(X, transformation)
y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X_aniso)
plt.subplot(222)
plt.scatter(X_aniso[:, 0], X_aniso[:, 1], c=y_pred)
plt.title("Anisotropicly Distributed Blobs")
# Different variance
X_varied, y_varied = make_blobs(
n_samples=n_samples, cluster_std=[1.0, 2.5, 0.5], random_state=random_state
)
y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X_varied)
plt.subplot(223)
plt.scatter(X_varied[:, 0], X_varied[:, 1], c=y_pred)
plt.title("Unequal Variance")
# Unevenly sized blobs
X_filtered = np.vstack((X[y == 0][:500], X[y == 1][:100], X[y == 2][:10]))
y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X_filtered)
plt.subplot(224)
plt.scatter(X_filtered[:, 0], X_filtered[:, 1], c=y_pred)
plt.title("Unevenly Sized Blobs")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.341 seconds)
[`Download Python source code: plot_kmeans_assumptions.py`](https://scikit-learn.org/1.1/_downloads/5d2d581a4569eb0718dbdb8abf7cbbdf/plot_kmeans_assumptions.py)
[`Download Jupyter notebook: plot_kmeans_assumptions.ipynb`](https://scikit-learn.org/1.1/_downloads/b05e6cdf6d51481f37bf29b0bb92995e/plot_kmeans_assumptions.ipynb)
scikit_learn A demo of the mean-shift clustering algorithm Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-mean-shift-py) to download the full example code or to run this example in your browser via Binder
A demo of the mean-shift clustering algorithm
=============================================
Reference:
Dorin Comaniciu and Peter Meer, “Mean Shift: A robust approach toward feature space analysis”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002. pp. 603-619.
```
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs
```
Generate sample data
--------------------
```
centers = [[1, 1], [-1, -1], [1, -1]]
X, _ = make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)
```
Compute clustering with MeanShift
---------------------------------
```
# The following bandwidth can be automatically detected using
bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
print("number of estimated clusters : %d" % n_clusters_)
```
```
number of estimated clusters : 3
```
Plot result
-----------
```
import matplotlib.pyplot as plt
from itertools import cycle
plt.figure(1)
plt.clf()
colors = cycle("bgrcmykbgrcmykbgrcmykbgrcmyk")
for k, col in zip(range(n_clusters_), colors):
my_members = labels == k
cluster_center = cluster_centers[k]
plt.plot(X[my_members, 0], X[my_members, 1], col + ".")
plt.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=14,
)
plt.title("Estimated number of clusters: %d" % n_clusters_)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.388 seconds)
[`Download Python source code: plot_mean_shift.py`](https://scikit-learn.org/1.1/_downloads/585914164dc34984e397f8d3d61849a5/plot_mean_shift.py)
[`Download Jupyter notebook: plot_mean_shift.ipynb`](https://scikit-learn.org/1.1/_downloads/43f17c8225b2c8e15e22a7edf7adb8b5/plot_mean_shift.ipynb)
scikit_learn Online learning of a dictionary of parts of faces Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-dict-face-patches-py) to download the full example code or to run this example in your browser via Binder
Online learning of a dictionary of parts of faces
=================================================
This example uses a large dataset of faces to learn a set of 20 x 20 image patches that constitute faces.
From the programming standpoint, it is interesting because it shows how to use the online API of scikit-learn to process a very large dataset in chunks. We proceed by loading one image at a time and extracting 50 random patches from it. Once we have accumulated 500 of these patches (using 10 images), we run the [`partial_fit`](../../modules/generated/sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans.partial_fit "sklearn.cluster.MiniBatchKMeans.partial_fit") method of the online KMeans object, MiniBatchKMeans.
The verbose setting on the MiniBatchKMeans enables us to see that some clusters are reassigned during the successive calls to partial_fit. This happens because the number of patches they represent has become too low, and it is then better to choose a new random cluster.
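The chunked `partial_fit` pattern described above can be sketched on synthetic data as follows (this is an illustration only; the chunk size, dimensionality and cluster count are arbitrary and do not come from the example):

```
# Illustrative sketch only: accumulate fixed-size chunks and update the model
# incrementally with partial_fit.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(0)
kmeans = MiniBatchKMeans(n_clusters=8, random_state=0)
buffer = []
for i in range(100):
    chunk = rng.randn(50, 20)  # stand-in for 50 patches extracted from one image
    buffer.append(chunk)
    if len(buffer) == 10:  # every 10 "images", i.e. 500 accumulated samples
        kmeans.partial_fit(np.concatenate(buffer, axis=0))
        buffer = []
print(kmeans.cluster_centers_.shape)  # (8, 20)
```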
Load the data
-------------
```
from sklearn import datasets
faces = datasets.fetch_olivetti_faces()
```
```
downloading Olivetti faces from https://ndownloader.figshare.com/files/5976027 to /home/runner/scikit_learn_data
```
Learn the dictionary of images
------------------------------
```
import time
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d
print("Learning the dictionary... ")
rng = np.random.RandomState(0)
kmeans = MiniBatchKMeans(n_clusters=81, random_state=rng, verbose=True)
patch_size = (20, 20)
buffer = []
t0 = time.time()
# The online learning part: cycle over the whole dataset 6 times
index = 0
for _ in range(6):
for img in faces.images:
data = extract_patches_2d(img, patch_size, max_patches=50, random_state=rng)
data = np.reshape(data, (len(data), -1))
buffer.append(data)
index += 1
if index % 10 == 0:
data = np.concatenate(buffer, axis=0)
data -= np.mean(data, axis=0)
data /= np.std(data, axis=0)
kmeans.partial_fit(data)
buffer = []
if index % 100 == 0:
print("Partial fit of %4i out of %i" % (index, 6 * len(faces.images)))
dt = time.time() - t0
print("done in %.2fs." % dt)
```
```
Learning the dictionary...
[MiniBatchKMeans] Reassigning 6 cluster centers.
[MiniBatchKMeans] Reassigning 3 cluster centers.
Partial fit of 100 out of 2400
[MiniBatchKMeans] Reassigning 3 cluster centers.
[MiniBatchKMeans] Reassigning 2 cluster centers.
Partial fit of 200 out of 2400
[MiniBatchKMeans] Reassigning 1 cluster centers.
[MiniBatchKMeans] Reassigning 1 cluster centers.
Partial fit of 300 out of 2400
Partial fit of 400 out of 2400
Partial fit of 500 out of 2400
Partial fit of 600 out of 2400
Partial fit of 700 out of 2400
Partial fit of 800 out of 2400
Partial fit of 900 out of 2400
Partial fit of 1000 out of 2400
Partial fit of 1100 out of 2400
Partial fit of 1200 out of 2400
Partial fit of 1300 out of 2400
Partial fit of 1400 out of 2400
Partial fit of 1500 out of 2400
Partial fit of 1600 out of 2400
Partial fit of 1700 out of 2400
Partial fit of 1800 out of 2400
Partial fit of 1900 out of 2400
Partial fit of 2000 out of 2400
Partial fit of 2100 out of 2400
Partial fit of 2200 out of 2400
Partial fit of 2300 out of 2400
Partial fit of 2400 out of 2400
done in 0.91s.
```
Plot the results
----------------
```
import matplotlib.pyplot as plt
plt.figure(figsize=(4.2, 4))
for i, patch in enumerate(kmeans.cluster_centers_):
plt.subplot(9, 9, i + 1)
plt.imshow(patch.reshape(patch_size), cmap=plt.cm.gray, interpolation="nearest")
plt.xticks(())
plt.yticks(())
plt.suptitle(
"Patches of faces\nTrain time %.1fs on %d patches" % (dt, 8 * len(faces.images)),
fontsize=16,
)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.747 seconds)
[`Download Python source code: plot_dict_face_patches.py`](https://scikit-learn.org/1.1/_downloads/7f205ae026a8c21fcab1e6a86cfadb7d/plot_dict_face_patches.py)
[`Download Jupyter notebook: plot_dict_face_patches.ipynb`](https://scikit-learn.org/1.1/_downloads/0af0092c704518874f82d38d725bb97f/plot_dict_face_patches.ipynb)
scikit_learn Feature agglomeration Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-digits-agglomeration-py) to download the full example code or to run this example in your browser via Binder
Feature agglomeration
=====================
These images show how similar features are merged together using feature agglomeration.
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, cluster
from sklearn.feature_extraction.image import grid_to_graph
digits = datasets.load_digits()
images = digits.images
X = np.reshape(images, (len(images), -1))
connectivity = grid_to_graph(*images[0].shape)
agglo = cluster.FeatureAgglomeration(connectivity=connectivity, n_clusters=32)
agglo.fit(X)
X_reduced = agglo.transform(X)
X_restored = agglo.inverse_transform(X_reduced)
images_restored = np.reshape(X_restored, images.shape)
plt.figure(1, figsize=(4, 3.5))
plt.clf()
plt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.91)
for i in range(4):
plt.subplot(3, 4, i + 1)
plt.imshow(images[i], cmap=plt.cm.gray, vmax=16, interpolation="nearest")
plt.xticks(())
plt.yticks(())
if i == 1:
plt.title("Original data")
plt.subplot(3, 4, 4 + i + 1)
plt.imshow(images_restored[i], cmap=plt.cm.gray, vmax=16, interpolation="nearest")
if i == 1:
plt.title("Agglomerated data")
plt.xticks(())
plt.yticks(())
plt.subplot(3, 4, 10)
plt.imshow(
np.reshape(agglo.labels_, images[0].shape),
interpolation="nearest",
cmap=plt.cm.nipy_spectral,
)
plt.xticks(())
plt.yticks(())
plt.title("Labels")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.141 seconds)
[`Download Python source code: plot_digits_agglomeration.py`](https://scikit-learn.org/1.1/_downloads/2be83f6fd16a57d87ecf24c2aec45229/plot_digits_agglomeration.py)
[`Download Jupyter notebook: plot_digits_agglomeration.ipynb`](https://scikit-learn.org/1.1/_downloads/ec1f4d8025c9e2e1a227310276945765/plot_digits_agglomeration.ipynb)
scikit_learn A demo of structured Ward hierarchical clustering on an image of coins Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-coin-ward-segmentation-py) to download the full example code or to run this example in your browser via Binder
A demo of structured Ward hierarchical clustering on an image of coins
======================================================================
Compute the segmentation of a 2D image with Ward hierarchical clustering. The clustering is spatially constrained in order for each segmented region to be in one piece.
```
# Author : Vincent Michel, 2010
# Alexandre Gramfort, 2011
# License: BSD 3 clause
```
Generate data
-------------
```
from skimage.data import coins
orig_coins = coins()
```
Resize it to 20% of the original size to speed up the processing. Applying a Gaussian filter for smoothing prior to down-scaling reduces aliasing artifacts.
```
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import rescale
smoothened_coins = gaussian_filter(orig_coins, sigma=2)
rescaled_coins = rescale(
smoothened_coins,
0.2,
mode="reflect",
anti_aliasing=False,
)
X = np.reshape(rescaled_coins, (-1, 1))
```
Define structure of the data
----------------------------
Pixels are connected to their neighbors.
```
from sklearn.feature_extraction.image import grid_to_graph
connectivity = grid_to_graph(*rescaled_coins.shape)
```
Compute clustering
------------------
```
import time as time
from sklearn.cluster import AgglomerativeClustering
print("Compute structured hierarchical clustering...")
st = time.time()
n_clusters = 27 # number of regions
ward = AgglomerativeClustering(
n_clusters=n_clusters, linkage="ward", connectivity=connectivity
)
ward.fit(X)
label = np.reshape(ward.labels_, rescaled_coins.shape)
print(f"Elapsed time: {time.time() - st:.3f}s")
print(f"Number of pixels: {label.size}")
print(f"Number of clusters: {np.unique(label).size}")
```
```
Compute structured hierarchical clustering...
Elapsed time: 0.127s
Number of pixels: 4697
Number of clusters: 27
```
Plot the results on an image
----------------------------
Agglomerative clustering is able to segment each coin; however, we have had to use an `n_clusters` value larger than the number of coins because the segmentation also finds large regions in the background.
```
import matplotlib.pyplot as plt
plt.figure(figsize=(5, 5))
plt.imshow(rescaled_coins, cmap=plt.cm.gray)
for l in range(n_clusters):
plt.contour(
label == l,
colors=[
plt.cm.nipy_spectral(l / float(n_clusters)),
],
)
plt.axis("off")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.426 seconds)
[`Download Python source code: plot_coin_ward_segmentation.py`](https://scikit-learn.org/1.1/_downloads/5eeecece5c41d6edcf4555b5e7c34350/plot_coin_ward_segmentation.py)
[`Download Jupyter notebook: plot_coin_ward_segmentation.ipynb`](https://scikit-learn.org/1.1/_downloads/c8c7c5458b84586cacd8498015126bc4/plot_coin_ward_segmentation.ipynb)
| programming_docs |
scikit_learn Comparison of the K-Means and MiniBatchKMeans clustering algorithms Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-mini-batch-kmeans-py) to download the full example code or to run this example in your browser via Binder
Comparison of the K-Means and MiniBatchKMeans clustering algorithms
===================================================================
We want to compare the performance of the MiniBatchKMeans and KMeans: the MiniBatchKMeans is faster, but gives slightly different results (see [Mini Batch K-Means](../../modules/clustering#mini-batch-kmeans)).
We will cluster a set of data, first with KMeans and then with MiniBatchKMeans, and plot the results. We will also plot the points that are labelled differently between the two algorithms.
Generate the data
-----------------
We start by generating the blobs of data to be clustered.
```
import numpy as np
from sklearn.datasets import make_blobs
np.random.seed(0)
batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = len(centers)
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)
```
Compute clustering with KMeans
------------------------------
```
import time
from sklearn.cluster import KMeans
k_means = KMeans(init="k-means++", n_clusters=3, n_init=10)
t0 = time.time()
k_means.fit(X)
t_batch = time.time() - t0
```
Compute clustering with MiniBatchKMeans
---------------------------------------
```
from sklearn.cluster import MiniBatchKMeans
mbk = MiniBatchKMeans(
init="k-means++",
n_clusters=3,
batch_size=batch_size,
n_init=10,
max_no_improvement=10,
verbose=0,
)
t0 = time.time()
mbk.fit(X)
t_mini_batch = time.time() - t0
```
Establishing parity between clusters
------------------------------------
We want to have the same color for the same cluster from both the MiniBatchKMeans and the KMeans algorithm. Let’s pair each KMeans cluster center with its closest MiniBatchKMeans counterpart.
```
from sklearn.metrics.pairwise import pairwise_distances_argmin
k_means_cluster_centers = k_means.cluster_centers_
order = pairwise_distances_argmin(k_means.cluster_centers_, mbk.cluster_centers_)
mbk_means_cluster_centers = mbk.cluster_centers_[order]
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers)
```
Plotting the results
--------------------
```
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9)
colors = ["#4EACC5", "#FF9C34", "#4E9A06"]
# KMeans
ax = fig.add_subplot(1, 3, 1)
for k, col in zip(range(n_clusters), colors):
my_members = k_means_labels == k
cluster_center = k_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".")
ax.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=6,
)
ax.set_title("KMeans")
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8, "train time: %.2fs\ninertia: %f" % (t_batch, k_means.inertia_))
# MiniBatchKMeans
ax = fig.add_subplot(1, 3, 2)
for k, col in zip(range(n_clusters), colors):
my_members = mbk_means_labels == k
cluster_center = mbk_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".")
ax.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=6,
)
ax.set_title("MiniBatchKMeans")
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8, "train time: %.2fs\ninertia: %f" % (t_mini_batch, mbk.inertia_))
# Initialize the different array to all False
different = mbk_means_labels == 4
ax = fig.add_subplot(1, 3, 3)
for k in range(n_clusters):
different += (k_means_labels == k) != (mbk_means_labels == k)
identic = np.logical_not(different)
ax.plot(X[identic, 0], X[identic, 1], "w", markerfacecolor="#bbbbbb", marker=".")
ax.plot(X[different, 0], X[different, 1], "w", markerfacecolor="m", marker=".")
ax.set_title("Difference")
ax.set_xticks(())
ax.set_yticks(())
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.176 seconds)
[`Download Python source code: plot_mini_batch_kmeans.py`](https://scikit-learn.org/1.1/_downloads/3735f7086bbd0007cd42d2c1f2b96f47/plot_mini_batch_kmeans.py)
[`Download Jupyter notebook: plot_mini_batch_kmeans.ipynb`](https://scikit-learn.org/1.1/_downloads/1f948ff6f5face5a362672c4e36dd01e/plot_mini_batch_kmeans.ipynb)
scikit_learn Demo of DBSCAN clustering algorithm Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-dbscan-py) to download the full example code or to run this example in your browser via Binder
Demo of DBSCAN clustering algorithm
===================================
Finds core samples of high density and expands clusters from them.
```
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
```
Generate sample data
--------------------
```
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(
n_samples=750, centers=centers, cluster_std=0.4, random_state=0
)
X = StandardScaler().fit_transform(X)
```
Compute DBSCAN
--------------
```
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
print("Estimated number of clusters: %d" % n_clusters_)
print("Estimated number of noise points: %d" % n_noise_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f" % metrics.adjusted_rand_score(labels_true, labels))
print(
"Adjusted Mutual Information: %0.3f"
% metrics.adjusted_mutual_info_score(labels_true, labels)
)
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels))
```
```
Estimated number of clusters: 3
Estimated number of noise points: 18
Homogeneity: 0.953
Completeness: 0.883
V-measure: 0.917
Adjusted Rand Index: 0.952
Adjusted Mutual Information: 0.916
Silhouette Coefficient: 0.626
```
Plot result
-----------
```
import matplotlib.pyplot as plt
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = labels == k
xy = X[class_member_mask & core_samples_mask]
plt.plot(
xy[:, 0],
xy[:, 1],
"o",
markerfacecolor=tuple(col),
markeredgecolor="k",
markersize=14,
)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(
xy[:, 0],
xy[:, 1],
"o",
markerfacecolor=tuple(col),
markeredgecolor="k",
markersize=6,
)
plt.title("Estimated number of clusters: %d" % n_clusters_)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.087 seconds)
[`Download Python source code: plot_dbscan.py`](https://scikit-learn.org/1.1/_downloads/0b802e24fdcd192a452e91580f278039/plot_dbscan.py)
[`Download Jupyter notebook: plot_dbscan.ipynb`](https://scikit-learn.org/1.1/_downloads/23a5a575578f08cc0071cc070953a655/plot_dbscan.ipynb)
scikit_learn Hierarchical clustering: structured vs unstructured ward Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-ward-structured-vs-unstructured-py) to download the full example code or to run this example in your browser via Binder
Hierarchical clustering: structured vs unstructured ward
========================================================
This example builds a swiss roll dataset and runs hierarchical clustering on the positions of its points.
For more information, see [Hierarchical clustering](../../modules/clustering#hierarchical-clustering).
In a first step, the hierarchical clustering is performed without connectivity constraints on the structure and is solely based on distance, whereas in a second step the clustering is restricted to the k-Nearest Neighbors graph: it’s a hierarchical clustering with structure prior.
Some of the clusters learned without connectivity constraints do not respect the structure of the swiss roll and extend across different folds of the manifold. In contrast, when imposing connectivity constraints, the clusters form a nice parcellation of the swiss roll.
```
# Authors : Vincent Michel, 2010
# Alexandre Gramfort, 2010
# Gael Varoquaux, 2010
# License: BSD 3 clause
import time as time
# The following import is required
# for 3D projection to work with matplotlib < 3.2
import mpl_toolkits.mplot3d # noqa: F401
import numpy as np
```
Generate data
-------------
We start by generating the Swiss Roll dataset.
```
from sklearn.datasets import make_swiss_roll
n_samples = 1500
noise = 0.05
X, _ = make_swiss_roll(n_samples, noise=noise)
# Make it thinner
X[:, 1] *= 0.5
```
Compute clustering
------------------
We perform AgglomerativeClustering, a form of hierarchical clustering, without any connectivity constraints.
```
from sklearn.cluster import AgglomerativeClustering
print("Compute unstructured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(n_clusters=6, linkage="ward").fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print(f"Elapsed time: {elapsed_time:.2f}s")
print(f"Number of points: {label.size}")
```
```
Compute unstructured hierarchical clustering...
Elapsed time: 0.03s
Number of points: 1500
```
Plot result
-----------
Plotting the unstructured hierarchical clusters.
```
import matplotlib.pyplot as plt
fig1 = plt.figure()
ax1 = fig1.add_subplot(111, projection="3d", elev=7, azim=-80)
ax1.set_position([0, 0, 0.95, 1])
for l in np.unique(label):
ax1.scatter(
X[label == l, 0],
X[label == l, 1],
X[label == l, 2],
color=plt.cm.jet(float(l) / np.max(label + 1)),
s=20,
edgecolor="k",
)
_ = fig1.suptitle(f"Without connectivity constraints (time {elapsed_time:.2f}s)")
```
We are defining k-Nearest Neighbors with 10 neighbors
-----------------------------------------------------
```
from sklearn.neighbors import kneighbors_graph
connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)
```
Compute clustering
------------------
We perform AgglomerativeClustering again with connectivity constraints.
```
print("Compute structured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(
n_clusters=6, connectivity=connectivity, linkage="ward"
).fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print(f"Elapsed time: {elapsed_time:.2f}s")
print(f"Number of points: {label.size}")
```
```
Compute structured hierarchical clustering...
Elapsed time: 0.05s
Number of points: 1500
```
Plot result
-----------
Plotting the structured hierarchical clusters.
```
fig2 = plt.figure()
ax2 = fig2.add_subplot(121, projection="3d", elev=7, azim=-80)
ax2.set_position([0, 0, 0.95, 1])
for l in np.unique(label):
ax2.scatter(
X[label == l, 0],
X[label == l, 1],
X[label == l, 2],
color=plt.cm.jet(float(l) / np.max(label + 1)),
s=20,
edgecolor="k",
)
fig2.suptitle(f"With connectivity constraints (time {elapsed_time:.2f}s)")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.325 seconds)
[`Download Python source code: plot_ward_structured_vs_unstructured.py`](https://scikit-learn.org/1.1/_downloads/4b5360e26b661e574ee526b19eee216f/plot_ward_structured_vs_unstructured.py)
[`Download Jupyter notebook: plot_ward_structured_vs_unstructured.ipynb`](https://scikit-learn.org/1.1/_downloads/e1d99a2f5c5c55550ce32108208f6477/plot_ward_structured_vs_unstructured.ipynb)
scikit_learn Segmenting the picture of greek coins in regions Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-coin-segmentation-py) to download the full example code or to run this example in your browser via Binder
Segmenting the picture of greek coins in regions
================================================
This example uses [Spectral clustering](../../modules/clustering#spectral-clustering) on a graph created from voxel-to-voxel difference on an image to break this image into multiple partly-homogeneous regions.
This procedure (spectral clustering on an image) is an efficient approximate solution for finding normalized graph cuts.
There are three options to assign labels:
* ‘kmeans’ spectral clustering clusters samples in the embedding space using a kmeans algorithm
* ‘discretize’ iteratively searches for the closest partition space to the embedding space of spectral clustering.
* ‘cluster\_qr’ assigns labels using the QR factorization with pivoting that directly determines the partition in the embedding space.
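As a minimal sketch, separate from the image segmentation below, the same three label-assignment strategies are also exposed through the `SpectralClustering` estimator; the toy blob dataset here is an arbitrary illustration:

```
# Illustrative sketch only: the three assign_labels strategies on a small
# synthetic dataset, scored against the known blob labels.
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
for assign_labels in ("kmeans", "discretize", "cluster_qr"):
    model = SpectralClustering(
        n_clusters=3, assign_labels=assign_labels, random_state=0
    )
    labels = model.fit_predict(X)
    print(assign_labels, "ARI=%.2f" % adjusted_rand_score(y, labels))
```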
```
# Author: Gael Varoquaux <[email protected]>
# Brian Cheung
# Andrew Knyazev <[email protected]>
# License: BSD 3 clause
import time
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt
from skimage.data import coins
from skimage.transform import rescale
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
# load the coins as a numpy array
orig_coins = coins()
# Resize it to 20% of the original size to speed up the processing
# Applying a Gaussian filter for smoothing prior to down-scaling
# reduces aliasing artifacts.
smoothened_coins = gaussian_filter(orig_coins, sigma=2)
rescaled_coins = rescale(
smoothened_coins, 0.2, mode="reflect", anti_aliasing=False, multichannel=False
)
# Convert the image into a graph with the value of the gradient on the
# edges.
graph = image.img_to_graph(rescaled_coins)
# Take a decreasing function of the gradient: an exponential
# The smaller beta is, the more independent the segmentation is of the
# actual image. For beta=1, the segmentation is close to a voronoi
beta = 10
eps = 1e-6
graph.data = np.exp(-beta * graph.data / graph.data.std()) + eps
# The number of segmented regions to display needs to be chosen manually.
# The current version of 'spectral_clustering' does not support determining
# the number of good quality clusters automatically.
n_regions = 26
```
```
/home/runner/work/scikit-learn/scikit-learn/examples/cluster/plot_coin_segmentation.py:47: FutureWarning: `multichannel` is a deprecated argument name for `rescale`. It will be removed in version 1.0. Please use `channel_axis` instead.
rescaled_coins = rescale(
```
Compute and visualize the resulting regions
```
# Computing a few extra eigenvectors may speed up the eigen_solver.
# The spectral clustering quality may also benefit from requesting
# extra regions for segmentation.
n_regions_plus = 3
# Apply spectral clustering using the default eigen_solver='arpack'.
# Any implemented solver can be used: eigen_solver='arpack', 'lobpcg', or 'amg'.
# Choosing eigen_solver='amg' requires an extra package called 'pyamg'.
# The quality of segmentation and the speed of calculations is mostly determined
# by the choice of the solver and the value of the tolerance 'eigen_tol'.
# TODO: varying eigen_tol seems to have no effect for 'lobpcg' and 'amg' #21243.
for assign_labels in ("kmeans", "discretize", "cluster_qr"):
t0 = time.time()
labels = spectral_clustering(
graph,
n_clusters=(n_regions + n_regions_plus),
eigen_tol=1e-7,
assign_labels=assign_labels,
random_state=42,
)
t1 = time.time()
labels = labels.reshape(rescaled_coins.shape)
plt.figure(figsize=(5, 5))
plt.imshow(rescaled_coins, cmap=plt.cm.gray)
plt.xticks(())
plt.yticks(())
title = "Spectral clustering: %s, %.2fs" % (assign_labels, (t1 - t0))
print(title)
plt.title(title)
for l in range(n_regions):
colors = [plt.cm.nipy_spectral((l + 4) / float(n_regions + 4))]
plt.contour(labels == l, colors=colors)
    # To view individual segments as they appear, add plt.pause(0.5) here.
plt.show()
# TODO: After #21194 is merged and #21243 is fixed, check which eigen_solver
# is the best and set eigen_solver='arpack', 'lobpcg', or 'amg' and eigen_tol
# explicitly in this example.
```
```
Spectral clustering: kmeans, 1.93s
Spectral clustering: discretize, 1.78s
Spectral clustering: cluster_qr, 1.76s
```
**Total running time of the script:** ( 0 minutes 6.038 seconds)
[`Download Python source code: plot_coin_segmentation.py`](https://scikit-learn.org/1.1/_downloads/2e86a4838807f09bbbb529d9643d45ab/plot_coin_segmentation.py)
[`Download Jupyter notebook: plot_coin_segmentation.ipynb`](https://scikit-learn.org/1.1/_downloads/006fc185672e58b056a5c134db26935c/plot_coin_segmentation.ipynb)
scikit_learn Demo of OPTICS clustering algorithm Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-optics-py) to download the full example code or to run this example in your browser via Binder
Demo of OPTICS clustering algorithm
===================================
Finds core samples of high density and expands clusters from them. This example uses data that is generated so that the clusters have different densities. The [`OPTICS`](../../modules/generated/sklearn.cluster.optics#sklearn.cluster.OPTICS "sklearn.cluster.OPTICS") estimator is first used with its Xi cluster detection method, and then specific thresholds are set on the reachability, which corresponds to [`DBSCAN`](../../modules/generated/sklearn.cluster.dbscan#sklearn.cluster.DBSCAN "sklearn.cluster.DBSCAN"). We can see that the different clusters of OPTICS’s Xi method can be recovered with different choices of thresholds in DBSCAN.
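A minimal sketch of this workflow, separate from the example code below (the synthetic data and `eps` values are arbitrary illustrations): OPTICS is fit once, and DBSCAN-like labelings at several `eps` values are then derived without refitting:

```
# Illustrative sketch only: fit OPTICS once, then extract DBSCAN-like
# clusterings at different eps thresholds from the stored reachability.
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) * s + c for s, c in ((0.2, (0, 0)), (0.3, (5, 5)))])

clust = OPTICS(min_samples=20, xi=0.05).fit(X)
print("Xi clusters:", np.unique(clust.labels_))
for eps in (0.5, 2.0):
    labels = cluster_optics_dbscan(
        reachability=clust.reachability_,
        core_distances=clust.core_distances_,
        ordering=clust.ordering_,
        eps=eps,
    )
    print("eps=%.1f clusters:" % eps, np.unique(labels))
```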
```
# Authors: Shane Grigsby <[email protected]>
# Adrin Jalali <[email protected]>
# License: BSD 3 clause
from sklearn.cluster import OPTICS, cluster_optics_dbscan
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import numpy as np
# Generate sample data
np.random.seed(0)
n_points_per_cluster = 250
C1 = [-5, -2] + 0.8 * np.random.randn(n_points_per_cluster, 2)
C2 = [4, -1] + 0.1 * np.random.randn(n_points_per_cluster, 2)
C3 = [1, -2] + 0.2 * np.random.randn(n_points_per_cluster, 2)
C4 = [-2, 3] + 0.3 * np.random.randn(n_points_per_cluster, 2)
C5 = [3, -2] + 1.6 * np.random.randn(n_points_per_cluster, 2)
C6 = [5, 6] + 2 * np.random.randn(n_points_per_cluster, 2)
X = np.vstack((C1, C2, C3, C4, C5, C6))
clust = OPTICS(min_samples=50, xi=0.05, min_cluster_size=0.05)
# Run the fit
clust.fit(X)
labels_050 = cluster_optics_dbscan(
reachability=clust.reachability_,
core_distances=clust.core_distances_,
ordering=clust.ordering_,
eps=0.5,
)
labels_200 = cluster_optics_dbscan(
reachability=clust.reachability_,
core_distances=clust.core_distances_,
ordering=clust.ordering_,
eps=2,
)
space = np.arange(len(X))
reachability = clust.reachability_[clust.ordering_]
labels = clust.labels_[clust.ordering_]
plt.figure(figsize=(10, 7))
G = gridspec.GridSpec(2, 3)
ax1 = plt.subplot(G[0, :])
ax2 = plt.subplot(G[1, 0])
ax3 = plt.subplot(G[1, 1])
ax4 = plt.subplot(G[1, 2])
# Reachability plot
colors = ["g.", "r.", "b.", "y.", "c."]
for klass, color in zip(range(0, 5), colors):
Xk = space[labels == klass]
Rk = reachability[labels == klass]
ax1.plot(Xk, Rk, color, alpha=0.3)
ax1.plot(space[labels == -1], reachability[labels == -1], "k.", alpha=0.3)
ax1.plot(space, np.full_like(space, 2.0, dtype=float), "k-", alpha=0.5)
ax1.plot(space, np.full_like(space, 0.5, dtype=float), "k-.", alpha=0.5)
ax1.set_ylabel("Reachability (epsilon distance)")
ax1.set_title("Reachability Plot")
# OPTICS
colors = ["g.", "r.", "b.", "y.", "c."]
for klass, color in zip(range(0, 5), colors):
Xk = X[clust.labels_ == klass]
ax2.plot(Xk[:, 0], Xk[:, 1], color, alpha=0.3)
ax2.plot(X[clust.labels_ == -1, 0], X[clust.labels_ == -1, 1], "k+", alpha=0.1)
ax2.set_title("Automatic Clustering\nOPTICS")
# DBSCAN at 0.5
colors = ["g", "greenyellow", "olive", "r", "b", "c"]
for klass, color in zip(range(0, 6), colors):
Xk = X[labels_050 == klass]
ax3.plot(Xk[:, 0], Xk[:, 1], color, alpha=0.3, marker=".")
ax3.plot(X[labels_050 == -1, 0], X[labels_050 == -1, 1], "k+", alpha=0.1)
ax3.set_title("Clustering at 0.5 epsilon cut\nDBSCAN")
# DBSCAN at 2.
colors = ["g.", "m.", "y.", "c."]
for klass, color in zip(range(0, 4), colors):
Xk = X[labels_200 == klass]
ax4.plot(Xk[:, 0], Xk[:, 1], color, alpha=0.3)
ax4.plot(X[labels_200 == -1, 0], X[labels_200 == -1, 1], "k+", alpha=0.1)
ax4.set_title("Clustering at 2.0 epsilon cut\nDBSCAN")
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.952 seconds)
[`Download Python source code: plot_optics.py`](https://scikit-learn.org/1.1/_downloads/fe139d44b92c775a3b44dcefd61ea1bb/plot_optics.py)
[`Download Jupyter notebook: plot_optics.ipynb`](https://scikit-learn.org/1.1/_downloads/8025056f1d24411e898c2d0086371880/plot_optics.ipynb)
| programming_docs |
scikit_learn Vector Quantization Example Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-face-compress-py) to download the full example code or to run this example in your browser via Binder
Vector Quantization Example
===========================
Face, a 1024 x 768 size image of a raccoon face, is used here to illustrate how `k`-means is used for vector quantization.
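As a minimal sketch of the idea (on a toy 1D signal rather than the raccoon image used below), vector quantization replaces each value by its nearest cluster center:

```
# Illustrative sketch only: quantize a small 1D signal with k-means.
import numpy as np
from sklearn.cluster import KMeans

values = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1]).reshape(-1, 1)
k_means = KMeans(n_clusters=3, n_init=4, random_state=0).fit(values)

# Quantized signal: look up the center assigned to each sample.
quantized = k_means.cluster_centers_[k_means.labels_].ravel()
print(np.round(quantized, 2))  # roughly [1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 9.05, 9.05]
```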
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from sklearn import cluster
try: # SciPy >= 0.16 have face in misc
from scipy.misc import face
face = face(gray=True)
except ImportError:
face = sp.face(gray=True)
n_clusters = 5
np.random.seed(0)
X = face.reshape((-1, 1)) # We need an (n_sample, n_feature) array
k_means = cluster.KMeans(n_clusters=n_clusters, n_init=4)
k_means.fit(X)
values = k_means.cluster_centers_.squeeze()
labels = k_means.labels_
# create an array from labels and values
face_compressed = np.choose(labels, values)
face_compressed.shape = face.shape
vmin = face.min()
vmax = face.max()
# original face
plt.figure(1, figsize=(3, 2.2))
plt.imshow(face, cmap=plt.cm.gray, vmin=vmin, vmax=256)
# compressed face
plt.figure(2, figsize=(3, 2.2))
plt.imshow(face_compressed, cmap=plt.cm.gray, vmin=vmin, vmax=vmax)
# equal bins face
regular_values = np.linspace(0, 256, n_clusters + 1)
regular_labels = np.searchsorted(regular_values, face) - 1
regular_values = 0.5 * (regular_values[1:] + regular_values[:-1]) # mean
regular_face = np.choose(regular_labels.ravel(), regular_values, mode="clip")
regular_face.shape = face.shape
plt.figure(3, figsize=(3, 2.2))
plt.imshow(regular_face, cmap=plt.cm.gray, vmin=vmin, vmax=vmax)
# histogram
plt.figure(4, figsize=(3, 2.2))
plt.clf()
plt.axes([0.01, 0.01, 0.98, 0.98])
plt.hist(X, bins=256, color=".5", edgecolor=".5")
plt.yticks(())
plt.xticks(regular_values)
values = np.sort(values)
for center_1, center_2 in zip(values[:-1], values[1:]):
plt.axvline(0.5 * (center_1 + center_2), color="b")
for center_1, center_2 in zip(regular_values[:-1], regular_values[1:]):
plt.axvline(0.5 * (center_1 + center_2), color="b", linestyle="--")
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.731 seconds)
[`Download Python source code: plot_face_compress.py`](https://scikit-learn.org/1.1/_downloads/5bb71b0b2052531cacf3736b4d2b3a92/plot_face_compress.py)
[`Download Jupyter notebook: plot_face_compress.ipynb`](https://scikit-learn.org/1.1/_downloads/f52666c44d104a3e37802015751177fe/plot_face_compress.ipynb)
scikit_learn Feature agglomeration vs. univariate selection Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-feature-agglomeration-vs-univariate-selection-py) to download the full example code or to run this example in your browser via Binder
Feature agglomeration vs. univariate selection
==============================================
This example compares 2 dimensionality reduction strategies:
* univariate feature selection with Anova
* feature agglomeration with Ward hierarchical clustering
Both methods are compared in a regression problem using a BayesianRidge as supervised estimator.
```
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
```
```
import shutil
import tempfile
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg, ndimage
from joblib import Memory
from sklearn.feature_extraction.image import grid_to_graph
from sklearn import feature_selection
from sklearn.cluster import FeatureAgglomeration
from sklearn.linear_model import BayesianRidge
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
```
Set parameters
```
n_samples = 200
size = 40 # image size
roi_size = 15
snr = 5.0
np.random.seed(0)
```
Generate data
```
coef = np.zeros((size, size))
coef[0:roi_size, 0:roi_size] = -1.0
coef[-roi_size:, -roi_size:] = 1.0
X = np.random.randn(n_samples, size**2)
for x in X: # smooth data
x[:] = ndimage.gaussian_filter(x.reshape(size, size), sigma=1.0).ravel()
X -= X.mean(axis=0)
X /= X.std(axis=0)
y = np.dot(X, coef.ravel())
```
add noise
```
noise = np.random.randn(y.shape[0])
noise_coef = (linalg.norm(y, 2) / np.exp(snr / 20.0)) / linalg.norm(noise, 2)
y += noise_coef * noise
```
Compute the coefs of a Bayesian Ridge with GridSearch
```
cv = KFold(2) # cross-validation generator for model selection
ridge = BayesianRidge()
cachedir = tempfile.mkdtemp()
mem = Memory(location=cachedir, verbose=1)
```
Ward agglomeration followed by BayesianRidge
```
connectivity = grid_to_graph(n_x=size, n_y=size)
ward = FeatureAgglomeration(n_clusters=10, connectivity=connectivity, memory=mem)
clf = Pipeline([("ward", ward), ("ridge", ridge)])
# Select the optimal number of parcels with grid search
clf = GridSearchCV(clf, {"ward__n_clusters": [10, 20, 30]}, n_jobs=1, cv=cv)
clf.fit(X, y) # set the best parameters
coef_ = clf.best_estimator_.steps[-1][1].coef_
coef_ = clf.best_estimator_.steps[0][1].inverse_transform(coef_)
coef_agglomeration_ = coef_.reshape(size, size)
```
```
________________________________________________________________________________
[Memory] Calling sklearn.cluster._agglomerative.ward_tree...
ward_tree(array([[-0.451933, ..., -0.675318],
...,
[ 0.275706, ..., -1.085711]]), connectivity=<1600x1600 sparse matrix of type '<class 'numpy.int64'>'
with 7840 stored elements in COOrdinate format>, n_clusters=None, return_distance=False)
________________________________________________________ward_tree - 0.0s, 0.0min
________________________________________________________________________________
[Memory] Calling sklearn.cluster._agglomerative.ward_tree...
ward_tree(array([[ 0.905206, ..., 0.161245],
...,
[-0.849835, ..., -1.091621]]), connectivity=<1600x1600 sparse matrix of type '<class 'numpy.int64'>'
with 7840 stored elements in COOrdinate format>, n_clusters=None, return_distance=False)
________________________________________________________ward_tree - 0.0s, 0.0min
________________________________________________________________________________
[Memory] Calling sklearn.cluster._agglomerative.ward_tree...
ward_tree(array([[ 0.905206, ..., -0.675318],
...,
[-0.849835, ..., -1.085711]]), connectivity=<1600x1600 sparse matrix of type '<class 'numpy.int64'>'
with 7840 stored elements in COOrdinate format>, n_clusters=None, return_distance=False)
________________________________________________________ward_tree - 0.0s, 0.0min
```
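The `inverse_transform` step above maps coefficients learned on the agglomerated features back onto the full feature grid. As a minimal sketch of this roundtrip (on hypothetical random data, not the example's data):
```
import numpy as np
from sklearn.cluster import FeatureAgglomeration

rng = np.random.RandomState(0)
X_demo = rng.randn(20, 16)  # 20 samples, 16 hypothetical features
agglo = FeatureAgglomeration(n_clusters=4).fit(X_demo)
X_reduced = agglo.transform(X_demo)  # pooled features, shape (20, 4)
X_restored = agglo.inverse_transform(X_reduced)  # back to shape (20, 16)
print(X_reduced.shape, X_restored.shape)
```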
Anova univariate feature selection followed by BayesianRidge
```
f_regression = mem.cache(feature_selection.f_regression) # caching function
anova = feature_selection.SelectPercentile(f_regression)
clf = Pipeline([("anova", anova), ("ridge", ridge)])
# Select the optimal percentage of features with grid search
clf = GridSearchCV(clf, {"anova__percentile": [5, 10, 20]}, cv=cv)
clf.fit(X, y) # set the best parameters
coef_ = clf.best_estimator_.steps[-1][1].coef_
coef_ = clf.best_estimator_.steps[0][1].inverse_transform(coef_.reshape(1, -1))
coef_selection_ = coef_.reshape(size, size)
```
```
________________________________________________________________________________
[Memory] Calling sklearn.feature_selection._univariate_selection.f_regression...
f_regression(array([[-0.451933, ..., 0.275706],
...,
[-0.675318, ..., -1.085711]]),
array([ 25.267703, ..., -25.026711]))
_____________________________________________________f_regression - 0.0s, 0.0min
________________________________________________________________________________
[Memory] Calling sklearn.feature_selection._univariate_selection.f_regression...
f_regression(array([[ 0.905206, ..., -0.849835],
...,
[ 0.161245, ..., -1.091621]]),
array([ -27.447268, ..., -112.638768]))
_____________________________________________________f_regression - 0.0s, 0.0min
________________________________________________________________________________
[Memory] Calling sklearn.feature_selection._univariate_selection.f_regression...
f_regression(array([[ 0.905206, ..., -0.849835],
...,
[-0.675318, ..., -1.085711]]),
array([-27.447268, ..., -25.026711]))
_____________________________________________________f_regression - 0.0s, 0.0min
```
Invert the transformation to plot the results on an image
```
plt.close("all")
plt.figure(figsize=(7.3, 2.7))
plt.subplot(1, 3, 1)
plt.imshow(coef, interpolation="nearest", cmap=plt.cm.RdBu_r)
plt.title("True weights")
plt.subplot(1, 3, 2)
plt.imshow(coef_selection_, interpolation="nearest", cmap=plt.cm.RdBu_r)
plt.title("Feature Selection")
plt.subplot(1, 3, 3)
plt.imshow(coef_agglomeration_, interpolation="nearest", cmap=plt.cm.RdBu_r)
plt.title("Feature Agglomeration")
plt.subplots_adjust(0.04, 0.0, 0.98, 0.94, 0.16, 0.26)
plt.show()
```
Attempt to remove the temporary cachedir, but don’t worry if it fails
```
shutil.rmtree(cachedir, ignore_errors=True)
```
**Total running time of the script:** ( 0 minutes 0.443 seconds)
[`Download Python source code: plot_feature_agglomeration_vs_univariate_selection.py`](https://scikit-learn.org/1.1/_downloads/6c7cb9f528114f658d5f562073332c24/plot_feature_agglomeration_vs_univariate_selection.py)
[`Download Jupyter notebook: plot_feature_agglomeration_vs_univariate_selection.ipynb`](https://scikit-learn.org/1.1/_downloads/fd3181da9f1988c60c583c95e97389f8/plot_feature_agglomeration_vs_univariate_selection.ipynb)
scikit_learn Selecting the number of clusters with silhouette analysis on KMeans clustering Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-kmeans-silhouette-analysis-py) to download the full example code or to run this example in your browser via Binder
Selecting the number of clusters with silhouette analysis on KMeans clustering
==============================================================================
Silhouette analysis can be used to study the separation distance between the resulting clusters. The silhouette plot displays a measure of how close each point in one cluster is to points in the neighboring clusters and thus provides a way to assess parameters like number of clusters visually. This measure has a range of [-1, 1].
Silhouette coefficients (as these values are referred to) near +1 indicate that the sample is far away from the neighboring clusters. A value of 0 indicates that the sample is on or very close to the decision boundary between two neighboring clusters, and negative values indicate that those samples might have been assigned to the wrong cluster.
In this example the silhouette analysis is used to choose an optimal value for `n_clusters`. The silhouette plot shows that `n_clusters` values of 3, 5 and 6 are a bad pick for the given data, due to the presence of clusters with below-average silhouette scores and also due to wide fluctuations in the size of the silhouette plots. Silhouette analysis is more ambivalent in deciding between 2 and 4.
The thickness of the silhouette plots also visualizes the cluster sizes. The silhouette plot for cluster 0 when `n_clusters` is equal to 2 is bigger owing to the grouping of the 3 sub-clusters into one big cluster. However, when `n_clusters` is equal to 4, all the plots are of roughly similar thickness and hence of similar sizes, as can also be verified from the labelled scatter plot on the right.
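As a compact sketch of the selection criterion alone (separate from the full example whose output and code follow, and using assumed blob parameters, so the printed values will differ), the average silhouette score can be compared across candidate values of `n_clusters`:
```
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X_demo, _ = make_blobs(n_samples=500, centers=4, random_state=1)
for k in [2, 3, 4, 5, 6]:
    labels = KMeans(n_clusters=k, random_state=10).fit_predict(X_demo)
    print(f"n_clusters={k}: average silhouette = {silhouette_score(X_demo, labels):.3f}")
```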
```
For n_clusters = 2 The average silhouette_score is : 0.7049787496083262
For n_clusters = 3 The average silhouette_score is : 0.5882004012129721
For n_clusters = 4 The average silhouette_score is : 0.6505186632729437
For n_clusters = 5 The average silhouette_score is : 0.56376469026194
For n_clusters = 6 The average silhouette_score is : 0.4504666294372765
```
```
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
# Generating the sample data from make_blobs
# This particular setting has one distinct cluster and 3 clusters placed close
# together.
X, y = make_blobs(
n_samples=500,
n_features=2,
centers=4,
cluster_std=1,
center_box=(-10.0, 10.0),
shuffle=True,
random_state=1,
) # For reproducibility
range_n_clusters = [2, 3, 4, 5, 6]
for n_clusters in range_n_clusters:
# Create a subplot with 1 row and 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
print(
"For n_clusters =",
n_clusters,
"The average silhouette_score is :",
silhouette_avg,
)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(
np.arange(y_lower, y_upper),
0,
ith_cluster_silhouette_values,
facecolor=color,
edgecolor=color,
alpha=0.7,
)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(
X[:, 0], X[:, 1], marker=".", s=30, lw=0, alpha=0.7, c=colors, edgecolor="k"
)
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(
centers[:, 0],
centers[:, 1],
marker="o",
c="white",
alpha=1,
s=200,
edgecolor="k",
)
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker="$%d$" % i, alpha=1, s=50, edgecolor="k")
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(
"Silhouette analysis for KMeans clustering on sample data with n_clusters = %d"
% n_clusters,
fontsize=14,
fontweight="bold",
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.011 seconds)
[`Download Python source code: plot_kmeans_silhouette_analysis.py`](https://scikit-learn.org/1.1/_downloads/586f6cb589cefcd68d55348630efbfa0/plot_kmeans_silhouette_analysis.py)
[`Download Jupyter notebook: plot_kmeans_silhouette_analysis.ipynb`](https://scikit-learn.org/1.1/_downloads/2434f000f4405168e6285a3e410c709f/plot_kmeans_silhouette_analysis.ipynb)
scikit_learn K-means Clustering Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-cluster-iris-py) to download the full example code or to run this example in your browser via Binder
K-means Clustering
==================
The plots display, first, what a K-means algorithm would yield using three clusters. They then show the effect of a bad initialization on the classification process: by setting n\_init to only 1 (the default is 10), the number of times that the algorithm will be run with different centroid seeds is reduced. The next plot displays what using eight clusters would deliver, and finally the ground truth.
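The effect of n\_init can also be checked quantitatively; a minimal sketch (not part of the original example) compares the final inertia of a single random initialization with the best of ten:
```
from sklearn import datasets
from sklearn.cluster import KMeans

X_iris = datasets.load_iris().data
single = KMeans(n_clusters=3, init="random", n_init=1, random_state=5).fit(X_iris)
multi = KMeans(n_clusters=3, init="random", n_init=10, random_state=5).fit(X_iris)
print("inertia with n_init=1 :", single.inertia_)
print("inertia with n_init=10:", multi.inertia_)  # best of 10 runs
```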
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
# Though the following import is not directly being used, it is required
# for 3D projection to work with matplotlib < 3.2
import mpl_toolkits.mplot3d # noqa: F401
from sklearn.cluster import KMeans
from sklearn import datasets
np.random.seed(5)
iris = datasets.load_iris()
X = iris.data
y = iris.target
estimators = [
("k_means_iris_8", KMeans(n_clusters=8)),
("k_means_iris_3", KMeans(n_clusters=3)),
("k_means_iris_bad_init", KMeans(n_clusters=3, n_init=1, init="random")),
]
fignum = 1
titles = ["8 clusters", "3 clusters", "3 clusters, bad initialization"]
for name, est in estimators:
fig = plt.figure(fignum, figsize=(4, 3))
ax = fig.add_subplot(111, projection="3d", elev=48, azim=134)
ax.set_position([0, 0, 0.95, 1])
est.fit(X)
labels = est.labels_
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=labels.astype(float), edgecolor="k")
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
ax.set_title(titles[fignum - 1])
ax.dist = 12
fignum = fignum + 1
# Plot the ground truth
fig = plt.figure(fignum, figsize=(4, 3))
ax = fig.add_subplot(111, projection="3d", elev=48, azim=134)
ax.set_position([0, 0, 0.95, 1])
for name, label in [("Setosa", 0), ("Versicolour", 1), ("Virginica", 2)]:
ax.text3D(
X[y == label, 3].mean(),
X[y == label, 0].mean(),
X[y == label, 2].mean() + 2,
name,
horizontalalignment="center",
bbox=dict(alpha=0.2, edgecolor="w", facecolor="w"),
)
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(float)
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y, edgecolor="k")
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
ax.set_title("Ground Truth")
ax.dist = 12
fig.show()
```
**Total running time of the script:** ( 0 minutes 0.271 seconds)
[`Download Python source code: plot_cluster_iris.py`](https://scikit-learn.org/1.1/_downloads/a315e003c9ce53b89d5fa110538885fd/plot_cluster_iris.py)
[`Download Jupyter notebook: plot_cluster_iris.ipynb`](https://scikit-learn.org/1.1/_downloads/18eb95af29bd5554020a8428b3ceac54/plot_cluster_iris.ipynb)
scikit_learn Inductive Clustering Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-inductive-clustering-py) to download the full example code or to run this example in your browser via Binder
Inductive Clustering
====================
Clustering can be expensive, especially when our dataset contains millions of datapoints. Many clustering algorithms are not [inductive](https://scikit-learn.org/1.1/glossary.html#term-inductive) and so cannot be directly applied to new data samples without recomputing the clustering, which may be intractable. Instead, we can use clustering to then learn an inductive model with a classifier, which has several benefits:
* it allows the clusters to scale and apply to new data
* unlike re-fitting the clusters to new samples, it makes sure the labelling procedure is consistent over time
* it allows us to use the inferential capabilities of the classifier to describe or explain the clusters
This example illustrates a generic implementation of a meta-estimator which extends clustering by inducing a classifier from the cluster labels.
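The core idea can be sketched in a few lines before looking at the full meta-estimator below (hypothetical blob data; variable names are illustrative): cluster once, train a classifier on the resulting labels, then use the classifier for new samples.
```
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier

X_demo, _ = make_blobs(n_samples=300, centers=3, random_state=42)
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X_demo)  # cluster once
clf = RandomForestClassifier(random_state=42).fit(X_demo, labels)   # induce a classifier
X_new, _ = make_blobs(n_samples=5, centers=3, random_state=0)
print(clf.predict(X_new))  # cluster membership for unseen points, no re-clustering
```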
```
# Authors: Chirag Nagpal
# Christos Aridas
import matplotlib.pyplot as plt
from sklearn.base import BaseEstimator, clone
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import DecisionBoundaryDisplay
from sklearn.utils.metaestimators import available_if
from sklearn.utils.validation import check_is_fitted
N_SAMPLES = 5000
RANDOM_STATE = 42
def _classifier_has(attr):
"""Check if we can delegate a method to the underlying classifier.
First, we check the first fitted classifier if available, otherwise we
check the unfitted classifier.
"""
return lambda estimator: (
hasattr(estimator.classifier_, attr)
if hasattr(estimator, "classifier_")
else hasattr(estimator.classifier, attr)
)
class InductiveClusterer(BaseEstimator):
def __init__(self, clusterer, classifier):
self.clusterer = clusterer
self.classifier = classifier
def fit(self, X, y=None):
self.clusterer_ = clone(self.clusterer)
self.classifier_ = clone(self.classifier)
y = self.clusterer_.fit_predict(X)
self.classifier_.fit(X, y)
return self
@available_if(_classifier_has("predict"))
def predict(self, X):
check_is_fitted(self)
return self.classifier_.predict(X)
@available_if(_classifier_has("decision_function"))
def decision_function(self, X):
check_is_fitted(self)
return self.classifier_.decision_function(X)
def plot_scatter(X, color, alpha=0.5):
return plt.scatter(X[:, 0], X[:, 1], c=color, alpha=alpha, edgecolor="k")
# Generate some training data from clustering
X, y = make_blobs(
n_samples=N_SAMPLES,
cluster_std=[1.0, 1.0, 0.5],
centers=[(-5, -5), (0, 0), (5, 5)],
random_state=RANDOM_STATE,
)
# Train a clustering algorithm on the training data and get the cluster labels
clusterer = AgglomerativeClustering(n_clusters=3)
cluster_labels = clusterer.fit_predict(X)
plt.figure(figsize=(12, 4))
plt.subplot(131)
plot_scatter(X, cluster_labels)
plt.title("Ward Linkage")
# Generate new samples and plot them along with the original dataset
X_new, y_new = make_blobs(
n_samples=10, centers=[(-7, -1), (-2, 4), (3, 6)], random_state=RANDOM_STATE
)
plt.subplot(132)
plot_scatter(X, cluster_labels)
plot_scatter(X_new, "black", 1)
plt.title("Unknown instances")
# Declare the inductive learning model that will be used to
# predict cluster membership for unknown instances
classifier = RandomForestClassifier(random_state=RANDOM_STATE)
inductive_learner = InductiveClusterer(clusterer, classifier).fit(X)
probable_clusters = inductive_learner.predict(X_new)
ax = plt.subplot(133)
plot_scatter(X, cluster_labels)
plot_scatter(X_new, probable_clusters)
# Plotting decision regions
DecisionBoundaryDisplay.from_estimator(
inductive_learner, X, response_method="predict", alpha=0.4, ax=ax
)
plt.title("Classify unknown instances")
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.619 seconds)
[`Download Python source code: plot_inductive_clustering.py`](https://scikit-learn.org/1.1/_downloads/c4355efac5fa3ae540eec1deb5c097b8/plot_inductive_clustering.py)
[`Download Jupyter notebook: plot_inductive_clustering.ipynb`](https://scikit-learn.org/1.1/_downloads/768c66e612686c51be7a0d956e60a0a8/plot_inductive_clustering.ipynb)
scikit_learn Comparing different clustering algorithms on toy datasets Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-cluster-comparison-py) to download the full example code or to run this example in your browser via Binder
Comparing different clustering algorithms on toy datasets
=========================================================
This example shows characteristics of different clustering algorithms on datasets that are “interesting” but still in 2D. With the exception of the last dataset, the parameters of each of these dataset-algorithm pairs have been tuned to produce good clustering results. Some algorithms are more sensitive to parameter values than others.
The last dataset is an example of a ‘null’ situation for clustering: the data is homogeneous, and there is no good clustering. For this example, the null dataset uses the same parameters as the dataset in the row above it, which represents a mismatch in the parameter values and the data structure.
While these examples give some intuition about the algorithms, this intuition might not apply to very high dimensional data.

```
import time
import warnings
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cluster, datasets, mixture
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler
from itertools import cycle, islice
np.random.seed(0)
# ============
# Generate datasets. We choose the size big enough to see the scalability
# of the algorithms, but not too big to avoid too long running times
# ============
n_samples = 500
noisy_circles = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05)
noisy_moons = datasets.make_moons(n_samples=n_samples, noise=0.05)
blobs = datasets.make_blobs(n_samples=n_samples, random_state=8)
no_structure = np.random.rand(n_samples, 2), None
# Anisotropicly distributed data
random_state = 170
X, y = datasets.make_blobs(n_samples=n_samples, random_state=random_state)
transformation = [[0.6, -0.6], [-0.4, 0.8]]
X_aniso = np.dot(X, transformation)
aniso = (X_aniso, y)
# blobs with varied variances
varied = datasets.make_blobs(
n_samples=n_samples, cluster_std=[1.0, 2.5, 0.5], random_state=random_state
)
# ============
# Set up cluster parameters
# ============
plt.figure(figsize=(9 * 2 + 3, 13))
plt.subplots_adjust(
left=0.02, right=0.98, bottom=0.001, top=0.95, wspace=0.05, hspace=0.01
)
plot_num = 1
default_base = {
"quantile": 0.3,
"eps": 0.3,
"damping": 0.9,
"preference": -200,
"n_neighbors": 3,
"n_clusters": 3,
"min_samples": 7,
"xi": 0.05,
"min_cluster_size": 0.1,
}
datasets = [
(
noisy_circles,
{
"damping": 0.77,
"preference": -240,
"quantile": 0.2,
"n_clusters": 2,
"min_samples": 7,
"xi": 0.08,
},
),
(
noisy_moons,
{
"damping": 0.75,
"preference": -220,
"n_clusters": 2,
"min_samples": 7,
"xi": 0.1,
},
),
(
varied,
{
"eps": 0.18,
"n_neighbors": 2,
"min_samples": 7,
"xi": 0.01,
"min_cluster_size": 0.2,
},
),
(
aniso,
{
"eps": 0.15,
"n_neighbors": 2,
"min_samples": 7,
"xi": 0.1,
"min_cluster_size": 0.2,
},
),
(blobs, {"min_samples": 7, "xi": 0.1, "min_cluster_size": 0.2}),
(no_structure, {}),
]
for i_dataset, (dataset, algo_params) in enumerate(datasets):
# update parameters with dataset-specific values
params = default_base.copy()
params.update(algo_params)
X, y = dataset
# normalize dataset for easier parameter selection
X = StandardScaler().fit_transform(X)
# estimate bandwidth for mean shift
bandwidth = cluster.estimate_bandwidth(X, quantile=params["quantile"])
# connectivity matrix for structured Ward
connectivity = kneighbors_graph(
X, n_neighbors=params["n_neighbors"], include_self=False
)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
# ============
# Create cluster objects
# ============
ms = cluster.MeanShift(bandwidth=bandwidth, bin_seeding=True)
two_means = cluster.MiniBatchKMeans(n_clusters=params["n_clusters"])
ward = cluster.AgglomerativeClustering(
n_clusters=params["n_clusters"], linkage="ward", connectivity=connectivity
)
spectral = cluster.SpectralClustering(
n_clusters=params["n_clusters"],
eigen_solver="arpack",
affinity="nearest_neighbors",
)
dbscan = cluster.DBSCAN(eps=params["eps"])
optics = cluster.OPTICS(
min_samples=params["min_samples"],
xi=params["xi"],
min_cluster_size=params["min_cluster_size"],
)
affinity_propagation = cluster.AffinityPropagation(
damping=params["damping"], preference=params["preference"], random_state=0
)
average_linkage = cluster.AgglomerativeClustering(
linkage="average",
affinity="cityblock",
n_clusters=params["n_clusters"],
connectivity=connectivity,
)
birch = cluster.Birch(n_clusters=params["n_clusters"])
gmm = mixture.GaussianMixture(
n_components=params["n_clusters"], covariance_type="full"
)
clustering_algorithms = (
("MiniBatch\nKMeans", two_means),
("Affinity\nPropagation", affinity_propagation),
("MeanShift", ms),
("Spectral\nClustering", spectral),
("Ward", ward),
("Agglomerative\nClustering", average_linkage),
("DBSCAN", dbscan),
("OPTICS", optics),
("BIRCH", birch),
("Gaussian\nMixture", gmm),
)
for name, algorithm in clustering_algorithms:
t0 = time.time()
# catch warnings related to kneighbors_graph
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="the number of connected components of the "
+ "connectivity matrix is [0-9]{1,2}"
+ " > 1. Completing it to avoid stopping the tree early.",
category=UserWarning,
)
warnings.filterwarnings(
"ignore",
message="Graph is not fully connected, spectral embedding"
+ " may not work as expected.",
category=UserWarning,
)
algorithm.fit(X)
t1 = time.time()
if hasattr(algorithm, "labels_"):
y_pred = algorithm.labels_.astype(int)
else:
y_pred = algorithm.predict(X)
plt.subplot(len(datasets), len(clustering_algorithms), plot_num)
if i_dataset == 0:
plt.title(name, size=18)
colors = np.array(
list(
islice(
cycle(
[
"#377eb8",
"#ff7f00",
"#4daf4a",
"#f781bf",
"#a65628",
"#984ea3",
"#999999",
"#e41a1c",
"#dede00",
]
),
int(max(y_pred) + 1),
)
)
)
# add black color for outliers (if any)
colors = np.append(colors, ["#000000"])
plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
plt.xticks(())
plt.yticks(())
plt.text(
0.99,
0.01,
("%.2fs" % (t1 - t0)).lstrip("0"),
transform=plt.gca().transAxes,
size=15,
horizontalalignment="right",
)
plot_num += 1
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.782 seconds)
[`Download Python source code: plot_cluster_comparison.py`](https://scikit-learn.org/1.1/_downloads/d5b3a28a1dd21d46ab866e29825586b7/plot_cluster_comparison.py)
[`Download Jupyter notebook: plot_cluster_comparison.ipynb`](https://scikit-learn.org/1.1/_downloads/803ca1bcd8dd2c364836a6784144355b/plot_cluster_comparison.ipynb)
scikit_learn A demo of K-Means clustering on the handwritten digits data Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-kmeans-digits-py) to download the full example code or to run this example in your browser via Binder
A demo of K-Means clustering on the handwritten digits data
===========================================================
In this example we compare the various initialization strategies for K-means in terms of runtime and quality of the results.
As the ground truth is known here, we also apply different cluster quality metrics to judge the goodness of fit of the cluster labels to the ground truth.
Cluster quality metrics evaluated (see [Clustering performance evaluation](../../modules/clustering#clustering-evaluation) for definitions and discussions of the metrics):
| Shorthand | full name |
| --- | --- |
| homo | homogeneity score |
| compl | completeness score |
| v-meas | V measure |
| ARI | adjusted Rand index |
| AMI | adjusted mutual information |
| silhouette | silhouette coefficient |
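For reference, these shorthands correspond to functions in `sklearn.metrics`; a quick sketch on small hypothetical label arrays (the benchmark below computes them on the real clustering):
```
from sklearn import metrics

truth = [0, 0, 1, 1, 2, 2]
pred = [0, 0, 1, 2, 2, 2]
print("homo  ", metrics.homogeneity_score(truth, pred))
print("compl ", metrics.completeness_score(truth, pred))
print("v-meas", metrics.v_measure_score(truth, pred))
print("ARI   ", metrics.adjusted_rand_score(truth, pred))
print("AMI   ", metrics.adjusted_mutual_info_score(truth, pred))
# silhouette requires the data itself in addition to the labels:
# metrics.silhouette_score(X, pred)
```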
Load the dataset
----------------
We will start by loading the `digits` dataset. This dataset contains handwritten digits from 0 to 9. In the context of clustering, one would like to group images such that the handwritten digits on the image are the same.
```
import numpy as np
from sklearn.datasets import load_digits
data, labels = load_digits(return_X_y=True)
(n_samples, n_features), n_digits = data.shape, np.unique(labels).size
print(f"# digits: {n_digits}; # samples: {n_samples}; # features {n_features}")
```
```
# digits: 10; # samples: 1797; # features 64
```
Define our evaluation benchmark
-------------------------------
We will first define our evaluation benchmark. During this benchmark, we intend to compare different initialization methods for KMeans. Our benchmark will:
* create a pipeline which will scale the data using a [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler");
* train and time the pipeline fitting;
* measure the performance of the clustering obtained via different metrics.
```
from time import time
from sklearn import metrics
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
def bench_k_means(kmeans, name, data, labels):
"""Benchmark to evaluate the KMeans initialization methods.
Parameters
----------
kmeans : KMeans instance
A :class:`~sklearn.cluster.KMeans` instance with the initialization
already set.
name : str
Name given to the strategy. It will be used to show the results in a
table.
data : ndarray of shape (n_samples, n_features)
The data to cluster.
labels : ndarray of shape (n_samples,)
The labels used to compute the clustering metrics which requires some
supervision.
"""
t0 = time()
estimator = make_pipeline(StandardScaler(), kmeans).fit(data)
fit_time = time() - t0
results = [name, fit_time, estimator[-1].inertia_]
# Define the metrics which require only the true labels and estimator
# labels
clustering_metrics = [
metrics.homogeneity_score,
metrics.completeness_score,
metrics.v_measure_score,
metrics.adjusted_rand_score,
metrics.adjusted_mutual_info_score,
]
results += [m(labels, estimator[-1].labels_) for m in clustering_metrics]
# The silhouette score requires the full dataset
results += [
metrics.silhouette_score(
data,
estimator[-1].labels_,
metric="euclidean",
sample_size=300,
)
]
# Show the results
formatter_result = (
"{:9s}\t{:.3f}s\t{:.0f}\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}"
)
print(formatter_result.format(*results))
```
Run the benchmark
-----------------
We will compare three approaches:
* an initialization using `kmeans++`. This method is stochastic and we will run the initialization 4 times;
* a random initialization. This method is stochastic as well and we will run the initialization 4 times;
* an initialization based on a [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") projection. Indeed, we will use the components of the [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") to initialize KMeans. This method is deterministic and a single initialization suffices.
```
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
print(82 * "_")
print("init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette")
kmeans = KMeans(init="k-means++", n_clusters=n_digits, n_init=4, random_state=0)
bench_k_means(kmeans=kmeans, name="k-means++", data=data, labels=labels)
kmeans = KMeans(init="random", n_clusters=n_digits, n_init=4, random_state=0)
bench_k_means(kmeans=kmeans, name="random", data=data, labels=labels)
pca = PCA(n_components=n_digits).fit(data)
kmeans = KMeans(init=pca.components_, n_clusters=n_digits, n_init=1)
bench_k_means(kmeans=kmeans, name="PCA-based", data=data, labels=labels)
print(82 * "_")
```
```
__________________________________________________________________________________
init time inertia homo compl v-meas ARI AMI silhouette
k-means++ 0.042s 69662 0.680 0.719 0.699 0.570 0.695 0.181
random 0.026s 69707 0.675 0.716 0.694 0.560 0.691 0.174
PCA-based 0.011s 72686 0.636 0.658 0.647 0.521 0.643 0.142
__________________________________________________________________________________
```
Visualize the results on PCA-reduced data
-----------------------------------------
[`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") allows us to project the data from the original 64-dimensional space into a lower-dimensional space. Subsequently, we can use [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") to project into a 2-dimensional space and plot the data and the clusters in this new space.
```
import matplotlib.pyplot as plt
reduced_data = PCA(n_components=2).fit_transform(data)
kmeans = KMeans(init="k-means++", n_clusters=n_digits, n_init=4)
kmeans.fit(reduced_data)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = 0.02 # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each point in the mesh.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(
Z,
interpolation="nearest",
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect="auto",
origin="lower",
)
plt.plot(reduced_data[:, 0], reduced_data[:, 1], "k.", markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(
centroids[:, 0],
centroids[:, 1],
marker="x",
s=169,
linewidths=3,
color="w",
zorder=10,
)
plt.title(
"K-means clustering on the digits dataset (PCA-reduced data)\n"
"Centroids are marked with white cross"
)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.768 seconds)
[`Download Python source code: plot_kmeans_digits.py`](https://scikit-learn.org/1.1/_downloads/5a87b25ba023ee709595b8d02049f021/plot_kmeans_digits.py)
[`Download Jupyter notebook: plot_kmeans_digits.ipynb`](https://scikit-learn.org/1.1/_downloads/6bf322ce1724c13e6e0f8f719ebd253c/plot_kmeans_digits.ipynb)
scikit_learn Bisecting K-Means and Regular K-Means Performance Comparison Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-bisect-kmeans-py) to download the full example code or to run this example in your browser via Binder
Bisecting K-Means and Regular K-Means Performance Comparison
============================================================
This example shows differences between Regular K-Means algorithm and Bisecting K-Means.
While K-Means clusterings differ when increasing n\_clusters, Bisecting K-Means clusterings build on top of the previous ones.
This difference can visually be observed.
```
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import BisectingKMeans, KMeans
print(__doc__)
# Generate sample data
n_samples = 1000
random_state = 0
X, _ = make_blobs(n_samples=n_samples, centers=2, random_state=random_state)
# Number of cluster centers for KMeans and BisectingKMeans
n_clusters_list = [2, 3, 4, 5]
# Algorithms to compare
clustering_algorithms = {
"Bisecting K-Means": BisectingKMeans,
"K-Means": KMeans,
}
# Make subplots for each variant
fig, axs = plt.subplots(
len(clustering_algorithms), len(n_clusters_list), figsize=(15, 5)
)
axs = axs.T
for i, (algorithm_name, Algorithm) in enumerate(clustering_algorithms.items()):
for j, n_clusters in enumerate(n_clusters_list):
algo = Algorithm(n_clusters=n_clusters, random_state=random_state)
algo.fit(X)
centers = algo.cluster_centers_
axs[j, i].scatter(X[:, 0], X[:, 1], s=10, c=algo.labels_)
axs[j, i].scatter(centers[:, 0], centers[:, 1], c="r", s=20)
axs[j, i].set_title(f"{algorithm_name} : {n_clusters} clusters")
# Hide x labels and tick labels for top plots and y ticks for right plots.
for ax in axs.flat:
ax.label_outer()
ax.set_xticks([])
ax.set_yticks([])
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.344 seconds)
[`Download Python source code: plot_bisect_kmeans.py`](https://scikit-learn.org/1.1/_downloads/73962cec5f14b10630f1a505fe761ab7/plot_bisect_kmeans.py)
[`Download Jupyter notebook: plot_bisect_kmeans.ipynb`](https://scikit-learn.org/1.1/_downloads/13db5212719118ea59532c291af3a8f9/plot_bisect_kmeans.ipynb)
scikit_learn Agglomerative clustering with and without structure Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-agglomerative-clustering-py) to download the full example code or to run this example in your browser via Binder
Agglomerative clustering with and without structure
===================================================
This example shows the effect of imposing a connectivity graph to capture local structure in the data. The graph is simply the graph of 20 nearest neighbors.
Two consequences of imposing a connectivity can be seen. First, clustering with a connectivity matrix is much faster.
Second, when using a connectivity matrix, single, average and complete linkage are unstable and tend to create a few clusters that grow very quickly. Indeed, average and complete linkage fight this percolation behavior by considering all the distances between two clusters when merging them (while single linkage exaggerates the behaviour by considering only the shortest distance between clusters). The connectivity graph breaks this mechanism for average and complete linkage, making them resemble the more brittle single linkage. This effect is more pronounced for very sparse graphs (try decreasing the number of neighbors in kneighbors\_graph) and with complete linkage. In particular, having a very small number of neighbors in the graph imposes a geometry that is close to that of single linkage, which is well known to have this percolation instability.
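Trying a sparser graph amounts to rebuilding the connectivity with a smaller number of neighbors; a minimal sketch on hypothetical random data (the full example below uses its own spiral-like dataset):
```
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.RandomState(0)
X_demo = rng.randn(300, 2)
sparse_graph = kneighbors_graph(X_demo, n_neighbors=5, include_self=False)
model = AgglomerativeClustering(
    linkage="average", connectivity=sparse_graph, n_clusters=10
).fit(X_demo)
print(np.bincount(model.labels_))  # inspect how evenly the cluster sizes come out
```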
```
# Authors: Gael Varoquaux, Nelle Varoquaux
# License: BSD 3 clause
import time
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph
# Generate sample data
n_samples = 1500
np.random.seed(0)
t = 1.5 * np.pi * (1 + 3 * np.random.rand(1, n_samples))
x = t * np.cos(t)
y = t * np.sin(t)
X = np.concatenate((x, y))
X += 0.7 * np.random.randn(2, n_samples)
X = X.T
# Create a graph capturing local connectivity. Larger number of neighbors
# will give more homogeneous clusters to the cost of computation
# time. A very large number of neighbors gives more evenly distributed
# cluster sizes, but may not impose the local manifold structure of
# the data
knn_graph = kneighbors_graph(X, 30, include_self=False)
for connectivity in (None, knn_graph):
for n_clusters in (30, 3):
plt.figure(figsize=(10, 4))
for index, linkage in enumerate(("average", "complete", "ward", "single")):
plt.subplot(1, 4, index + 1)
model = AgglomerativeClustering(
linkage=linkage, connectivity=connectivity, n_clusters=n_clusters
)
t0 = time.time()
model.fit(X)
elapsed_time = time.time() - t0
plt.scatter(X[:, 0], X[:, 1], c=model.labels_, cmap=plt.cm.nipy_spectral)
plt.title(
"linkage=%s\n(time %.2fs)" % (linkage, elapsed_time),
fontdict=dict(verticalalignment="top"),
)
plt.axis("equal")
plt.axis("off")
plt.subplots_adjust(bottom=0, top=0.83, wspace=0, left=0, right=1)
plt.suptitle(
"n_cluster=%i, connectivity=%r"
% (n_clusters, connectivity is not None),
size=17,
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.573 seconds)
[`Download Python source code: plot_agglomerative_clustering.py`](https://scikit-learn.org/1.1/_downloads/473e94775f7181f54536fbb1f45b9e42/plot_agglomerative_clustering.py)
[`Download Jupyter notebook: plot_agglomerative_clustering.ipynb`](https://scikit-learn.org/1.1/_downloads/d2f474d24fdee2c16d20414456da98c4/plot_agglomerative_clustering.ipynb)
scikit_learn Spectral clustering for image segmentation Note
Click [here](#sphx-glr-download-auto-examples-cluster-plot-segmentation-toy-py) to download the full example code or to run this example in your browser via Binder
Spectral clustering for image segmentation
==========================================
In this example, an image with connected circles is generated and spectral clustering is used to separate the circles.
In these settings, the [Spectral clustering](../../modules/clustering#spectral-clustering) approach solves the problem known as ‘normalized graph cuts’: the image is seen as a graph of connected voxels, and the spectral clustering algorithm amounts to choosing graph cuts defining regions while minimizing the ratio of the gradient along the cut, and the volume of the region.
As the algorithm tries to balance the volume (ie balance the region sizes), if we take circles with different sizes, the segmentation fails.
In addition, as there is no useful information in the intensity of the image, or its gradient, we choose to perform the spectral clustering on a graph that is only weakly informed by the gradient. This is close to performing a Voronoi partition of the graph.
In addition, we use the mask of the objects to restrict the graph to the outline of the objects. In this example, we are interested in separating the objects one from the other, and not from the background.
```
# Authors: Emmanuelle Gouillart <[email protected]>
# Gael Varoquaux <[email protected]>
# License: BSD 3 clause
```
Generate the data
-----------------
```
import numpy as np
l = 100
x, y = np.indices((l, l))
center1 = (28, 24)
center2 = (40, 50)
center3 = (67, 58)
center4 = (24, 70)
radius1, radius2, radius3, radius4 = 16, 14, 15, 14
circle1 = (x - center1[0]) ** 2 + (y - center1[1]) ** 2 < radius1**2
circle2 = (x - center2[0]) ** 2 + (y - center2[1]) ** 2 < radius2**2
circle3 = (x - center3[0]) ** 2 + (y - center3[1]) ** 2 < radius3**2
circle4 = (x - center4[0]) ** 2 + (y - center4[1]) ** 2 < radius4**2
```
Plotting four circles
---------------------
```
img = circle1 + circle2 + circle3 + circle4
# We use a mask that limits to the foreground: the problem that we are
# interested in here is not separating the objects from the background,
# but separating them one from the other.
mask = img.astype(bool)
img = img.astype(float)
img += 1 + 0.2 * np.random.randn(*img.shape)
```
Convert the image into a graph with the value of the gradient on the edges.
```
from sklearn.feature_extraction import image
graph = image.img_to_graph(img, mask=mask)
```
Take a decreasing function of the gradient resulting in a segmentation that is close to a Voronoi partition
```
graph.data = np.exp(-graph.data / graph.data.std())
```
Here we perform spectral clustering using the arpack solver since amg is numerically unstable on this example. We then plot the results.
```
from sklearn.cluster import spectral_clustering
import matplotlib.pyplot as plt
labels = spectral_clustering(graph, n_clusters=4, eigen_solver="arpack")
label_im = np.full(mask.shape, -1.0)
label_im[mask] = labels
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
axs[0].matshow(img)
axs[1].matshow(label_im)
plt.show()
```
Plotting two circles
--------------------
Here we repeat the above process but only consider the first two circles we generated. Note that this results in a cleaner separation between the circles as the region sizes are easier to balance in this case.
```
img = circle1 + circle2
mask = img.astype(bool)
img = img.astype(float)
img += 1 + 0.2 * np.random.randn(*img.shape)
graph = image.img_to_graph(img, mask=mask)
graph.data = np.exp(-graph.data / graph.data.std())
labels = spectral_clustering(graph, n_clusters=2, eigen_solver="arpack")
label_im = np.full(mask.shape, -1.0)
label_im[mask] = labels
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
axs[0].matshow(img)
axs[1].matshow(label_im)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.473 seconds)
[`Download Python source code: plot_segmentation_toy.py`](https://scikit-learn.org/1.1/_downloads/9cdba5635a180cdcca7d23b9cf18ffac/plot_segmentation_toy.py)
[`Download Jupyter notebook: plot_segmentation_toy.ipynb`](https://scikit-learn.org/1.1/_downloads/e7311e6599ab6ff3129117a6d8c302ec/plot_segmentation_toy.ipynb)
scikit_learn Effect of varying threshold for self-training Note
Click [here](#sphx-glr-download-auto-examples-semi-supervised-plot-self-training-varying-threshold-py) to download the full example code or to run this example in your browser via Binder
Effect of varying threshold for self-training
=============================================
This example illustrates the effect of a varying threshold on self-training. The `breast_cancer` dataset is loaded, and labels are deleted such that only 50 out of 569 samples have labels. A `SelfTrainingClassifier` is fitted on this dataset, with varying thresholds.
The upper graph shows the amount of labeled samples that the classifier has available by the end of fit, and the accuracy of the classifier. The lower graph shows the last iteration in which a sample was labeled. All values are cross validated with 3 folds.
At low thresholds (in [0.4, 0.5]), the classifier learns from samples that were labeled with a low confidence. These low-confidence samples are likely to have incorrect predicted labels, and as a result, fitting on these incorrect labels produces a poor accuracy. Note that the classifier labels almost all of the samples, and only takes one iteration.
For very high thresholds (in [0.9, 1)) we observe that the classifier does not augment its dataset (the number of self-labeled samples is 0). As a result, the accuracy achieved with a threshold of 0.9999 is the same as a normal supervised classifier would achieve.
The optimal accuracy lies in between both of these extremes at a threshold of around 0.7.
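As a minimal sketch of the estimator and attribute used below (a single fit, no cross-validation; variable names are illustrative), one can fit a `SelfTrainingClassifier` at the suggested threshold of 0.7 and count how many samples it labeled itself via `labeled_iter_`:
```
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X_demo, y_demo = load_breast_cancer(return_X_y=True)
y_partial = y_demo.copy()
y_partial[50:] = -1  # keep only the first 50 labels, as in the example
clf = SelfTrainingClassifier(SVC(probability=True, gamma=0.001), threshold=0.7)
clf.fit(X_demo, y_partial)
print("self-labeled samples:", int(np.sum(clf.labeled_iter_ > 0)))
```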
```
# Authors: Oliver Rausch <[email protected]>
# License: BSD
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.metrics import accuracy_score
from sklearn.utils import shuffle
n_splits = 3
X, y = datasets.load_breast_cancer(return_X_y=True)
X, y = shuffle(X, y, random_state=42)
y_true = y.copy()
y[50:] = -1
total_samples = y.shape[0]
base_classifier = SVC(probability=True, gamma=0.001, random_state=42)
x_values = np.arange(0.4, 1.05, 0.05)
x_values = np.append(x_values, 0.99999)
scores = np.empty((x_values.shape[0], n_splits))
amount_labeled = np.empty((x_values.shape[0], n_splits))
amount_iterations = np.empty((x_values.shape[0], n_splits))
for i, threshold in enumerate(x_values):
self_training_clf = SelfTrainingClassifier(base_classifier, threshold=threshold)
# We need manual cross validation so that we don't treat -1 as a separate
# class when computing accuracy
skfolds = StratifiedKFold(n_splits=n_splits)
for fold, (train_index, test_index) in enumerate(skfolds.split(X, y)):
X_train = X[train_index]
y_train = y[train_index]
X_test = X[test_index]
y_test = y[test_index]
y_test_true = y_true[test_index]
self_training_clf.fit(X_train, y_train)
        # The number of labeled samples at the end of fitting
amount_labeled[i, fold] = (
total_samples
- np.unique(self_training_clf.labeled_iter_, return_counts=True)[1][0]
)
# The last iteration the classifier labeled a sample in
amount_iterations[i, fold] = np.max(self_training_clf.labeled_iter_)
y_pred = self_training_clf.predict(X_test)
scores[i, fold] = accuracy_score(y_test_true, y_pred)
ax1 = plt.subplot(211)
ax1.errorbar(
x_values, scores.mean(axis=1), yerr=scores.std(axis=1), capsize=2, color="b"
)
ax1.set_ylabel("Accuracy", color="b")
ax1.tick_params("y", colors="b")
ax2 = ax1.twinx()
ax2.errorbar(
x_values,
amount_labeled.mean(axis=1),
yerr=amount_labeled.std(axis=1),
capsize=2,
color="g",
)
ax2.set_ylim(bottom=0)
ax2.set_ylabel("Amount of labeled samples", color="g")
ax2.tick_params("y", colors="g")
ax3 = plt.subplot(212, sharex=ax1)
ax3.errorbar(
x_values,
amount_iterations.mean(axis=1),
yerr=amount_iterations.std(axis=1),
capsize=2,
color="b",
)
ax3.set_ylim(bottom=0)
ax3.set_ylabel("Amount of iterations")
ax3.set_xlabel("Threshold")
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.458 seconds)
[`Download Python source code: plot_self_training_varying_threshold.py`](https://scikit-learn.org/1.1/_downloads/966c9902a309bfbf613d6ab3cde2a9b8/plot_self_training_varying_threshold.py)
[`Download Jupyter notebook: plot_self_training_varying_threshold.ipynb`](https://scikit-learn.org/1.1/_downloads/7394345a6f70a1637fe759e076256013/plot_self_training_varying_threshold.ipynb)
scikit_learn Label Propagation learning a complex structure Note
Click [here](#sphx-glr-download-auto-examples-semi-supervised-plot-label-propagation-structure-py) to download the full example code or to run this example in your browser via Binder
Label Propagation learning a complex structure
==============================================
Example of LabelPropagation learning a complex internal structure to demonstrate “manifold learning”. The outer circle should be labeled “red” and the inner circle “blue”. Because both label groups lie inside their own distinct shape, we can see that the labels propagate correctly around the circle.
```
# Authors: Clay Woolam <[email protected]>
# Andreas Mueller <[email protected]>
# License: BSD
```
We generate a dataset with two concentric circles. In addition, a label is associated with each sample of the dataset that is: 0 (belonging to the outer circle), 1 (belonging to the inner circle), and -1 (unknown). Here, all labels but two are tagged as unknown.
```
import numpy as np
from sklearn.datasets import make_circles
n_samples = 200
X, y = make_circles(n_samples=n_samples, shuffle=False)
outer, inner = 0, 1
labels = np.full(n_samples, -1.0)
labels[0] = outer
labels[-1] = inner
```
Plot raw data
```
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 4))
plt.scatter(
X[labels == outer, 0],
X[labels == outer, 1],
color="navy",
marker="s",
lw=0,
label="outer labeled",
s=10,
)
plt.scatter(
X[labels == inner, 0],
X[labels == inner, 1],
color="c",
marker="s",
lw=0,
label="inner labeled",
s=10,
)
plt.scatter(
X[labels == -1, 0],
X[labels == -1, 1],
color="darkorange",
marker=".",
label="unlabeled",
)
plt.legend(scatterpoints=1, shadow=False, loc="upper right")
plt.title("Raw data (2 classes=outer and inner)")
```
```
Text(0.5, 1.0, 'Raw data (2 classes=outer and inner)')
```
The aim of [`LabelSpreading`](../../modules/generated/sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading "sklearn.semi_supervised.LabelSpreading") is to associate a label with each sample for which the label is initially unknown.
```
from sklearn.semi_supervised import LabelSpreading
label_spread = LabelSpreading(kernel="knn", alpha=0.8)
label_spread.fit(X, labels)
```
```
LabelSpreading(alpha=0.8, kernel='knn')
```
Now, we can check which labels have been associated with each sample when the label was unknown.
```
output_labels = label_spread.transduction_
output_label_array = np.asarray(output_labels)
outer_numbers = np.where(output_label_array == outer)[0]
inner_numbers = np.where(output_label_array == inner)[0]
plt.figure(figsize=(4, 4))
plt.scatter(
X[outer_numbers, 0],
X[outer_numbers, 1],
color="navy",
marker="s",
lw=0,
s=10,
label="outer learned",
)
plt.scatter(
X[inner_numbers, 0],
X[inner_numbers, 1],
color="c",
marker="s",
lw=0,
s=10,
label="inner learned",
)
plt.legend(scatterpoints=1, shadow=False, loc="upper right")
plt.title("Labels learned with Label Spreading (KNN)")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.124 seconds)
[`Download Python source code: plot_label_propagation_structure.py`](https://scikit-learn.org/1.1/_downloads/c6e2877780eeb2421a441896c8ec77b7/plot_label_propagation_structure.py)
[`Download Jupyter notebook: plot_label_propagation_structure.ipynb`](https://scikit-learn.org/1.1/_downloads/9c824e9beef1b72c9f1ad3f39de0bf57/plot_label_propagation_structure.ipynb)
scikit_learn Label Propagation digits: Demonstrating performance Note
Click [here](#sphx-glr-download-auto-examples-semi-supervised-plot-label-propagation-digits-py) to download the full example code or to run this example in your browser via Binder
Label Propagation digits: Demonstrating performance
===================================================
This example demonstrates the power of semi-supervised learning by training a Label Spreading model to classify handwritten digits with sets of very few labels.
The handwritten digit dataset has 1797 total points. The model will be trained using all points, but only 40 will be labeled. Results in the form of a confusion matrix and a series of metrics over each class will be very good.
At the end, the top 10 most uncertain predictions will be shown.
```
# Authors: Clay Woolam <[email protected]>
# License: BSD
```
Data generation
---------------
We use the digits dataset. We only use a subset of randomly selected samples.
```
from sklearn import datasets
import numpy as np
digits = datasets.load_digits()
rng = np.random.RandomState(2)
indices = np.arange(len(digits.data))
rng.shuffle(indices)
```
We selected 340 samples of which only 40 will be associated with a known label. Therefore, we store the indices of the 300 other samples for which we are not supposed to know their labels.
```
X = digits.data[indices[:340]]
y = digits.target[indices[:340]]
images = digits.images[indices[:340]]
n_total_samples = len(y)
n_labeled_points = 40
indices = np.arange(n_total_samples)
unlabeled_set = indices[n_labeled_points:]
```
Mask the labels of the unlabeled set with -1
```
y_train = np.copy(y)
y_train[unlabeled_set] = -1
```
Semi-supervised learning
------------------------
We fit a [`LabelSpreading`](../../modules/generated/sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading "sklearn.semi_supervised.LabelSpreading") and use it to predict the unknown labels.
```
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import classification_report
lp_model = LabelSpreading(gamma=0.25, max_iter=20)
lp_model.fit(X, y_train)
predicted_labels = lp_model.transduction_[unlabeled_set]
true_labels = y[unlabeled_set]
print(
"Label Spreading model: %d labeled & %d unlabeled points (%d total)"
% (n_labeled_points, n_total_samples - n_labeled_points, n_total_samples)
)
```
```
Label Spreading model: 40 labeled & 300 unlabeled points (340 total)
```
Classification report
```
print(classification_report(true_labels, predicted_labels))
```
```
precision recall f1-score support
0 1.00 1.00 1.00 27
1 0.82 1.00 0.90 37
2 1.00 0.86 0.92 28
3 1.00 0.80 0.89 35
4 0.92 1.00 0.96 24
5 0.74 0.94 0.83 34
6 0.89 0.96 0.92 25
7 0.94 0.89 0.91 35
8 1.00 0.68 0.81 31
9 0.81 0.88 0.84 24
accuracy 0.90 300
macro avg 0.91 0.90 0.90 300
weighted avg 0.91 0.90 0.90 300
```
Confusion matrix
```
from sklearn.metrics import ConfusionMatrixDisplay
ConfusionMatrixDisplay.from_predictions(
true_labels, predicted_labels, labels=lp_model.classes_
)
```
```
<sklearn.metrics._plot.confusion_matrix.ConfusionMatrixDisplay object at 0x7f6e59c01df0>
```
Plot the most uncertain predictions
-----------------------------------
Here, we will pick and show the 10 most uncertain predictions.
```
from scipy import stats
pred_entropies = stats.distributions.entropy(lp_model.label_distributions_.T)
```
Pick the top 10 most uncertain labels
```
uncertainty_index = np.argsort(pred_entropies)[-10:]
```
Plot
```
import matplotlib.pyplot as plt
f = plt.figure(figsize=(7, 5))
for index, image_index in enumerate(uncertainty_index):
image = images[image_index]
sub = f.add_subplot(2, 5, index + 1)
sub.imshow(image, cmap=plt.cm.gray_r)
plt.xticks([])
plt.yticks([])
sub.set_title(
"predict: %i\ntrue: %i" % (lp_model.transduction_[image_index], y[image_index])
)
f.suptitle("Learning with small amount of labeled data")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.318 seconds)
[`Download Python source code: plot_label_propagation_digits.py`](https://scikit-learn.org/1.1/_downloads/fb13b4879e1cc9657e76544444ca7197/plot_label_propagation_digits.py)
[`Download Jupyter notebook: plot_label_propagation_digits.ipynb`](https://scikit-learn.org/1.1/_downloads/22bab9f5303a6f1be42d14efb0f90b40/plot_label_propagation_digits.ipynb)
scikit_learn Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset Note
Click [here](#sphx-glr-download-auto-examples-semi-supervised-plot-semi-supervised-versus-svm-iris-py) to download the full example code or to run this example in your browser via Binder
Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset
===============================================================================
A comparison for the decision boundaries generated on the iris dataset by Label Spreading, Self-training and SVM.
This example demonstrates that Label Spreading and Self-training can learn good boundaries even when small amounts of labeled data are available.
Note that Self-training with 100% of the data is omitted as it is functionally identical to training the SVC on 100% of the data.
```
# Authors: Clay Woolam <[email protected]>
# Oliver Rausch <[email protected]>
# License: BSD
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.semi_supervised import LabelSpreading
from sklearn.semi_supervised import SelfTrainingClassifier
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
# step size in the mesh
h = 0.02
rng = np.random.RandomState(0)
y_rand = rng.rand(y.shape[0])
y_30 = np.copy(y)
y_30[y_rand < 0.3] = -1 # set random samples to be unlabeled
y_50 = np.copy(y)
y_50[y_rand < 0.5] = -1
# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
ls30 = (LabelSpreading().fit(X, y_30), y_30, "Label Spreading 30% data")
ls50 = (LabelSpreading().fit(X, y_50), y_50, "Label Spreading 50% data")
ls100 = (LabelSpreading().fit(X, y), y, "Label Spreading 100% data")
# the base classifier for self-training is identical to the SVC
base_classifier = SVC(kernel="rbf", gamma=0.5, probability=True)
st30 = (
SelfTrainingClassifier(base_classifier).fit(X, y_30),
y_30,
"Self-training 30% data",
)
st50 = (
SelfTrainingClassifier(base_classifier).fit(X, y_50),
y_50,
"Self-training 50% data",
)
rbf_svc = (SVC(kernel="rbf", gamma=0.5).fit(X, y), y, "SVC with rbf kernel")
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
color_map = {-1: (1, 1, 1), 0: (0, 0, 0.9), 1: (1, 0, 0), 2: (0.8, 0.6, 0)}
classifiers = (ls30, st30, ls50, st50, ls100, rbf_svc)
for i, (clf, y_train, title) in enumerate(classifiers):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(3, 2, i + 1)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("off")
# Plot also the training points
colors = [color_map[y] for y in y_train]
plt.scatter(X[:, 0], X[:, 1], c=colors, edgecolors="black")
plt.title(title)
plt.suptitle("Unlabeled points are colored white", y=0.1)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.940 seconds)
[`Download Python source code: plot_semi_supervised_versus_svm_iris.py`](https://scikit-learn.org/1.1/_downloads/8219ffabb9724762c36d14d22f80a0d5/plot_semi_supervised_versus_svm_iris.py)
[`Download Jupyter notebook: plot_semi_supervised_versus_svm_iris.ipynb`](https://scikit-learn.org/1.1/_downloads/28056df7ed8b04d495f832aaab1b8c3e/plot_semi_supervised_versus_svm_iris.ipynb)
scikit_learn Semi-supervised Classification on a Text Dataset Note
Click [here](#sphx-glr-download-auto-examples-semi-supervised-plot-semi-supervised-newsgroups-py) to download the full example code or to run this example in your browser via Binder
Semi-supervised Classification on a Text Dataset
================================================
In this example, semi-supervised classifiers are trained on the 20 newsgroups dataset (which will be automatically downloaded).
You can adjust the number of categories by giving their names to the dataset loader or setting them to `None` to get all 20 of them.
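As a hedged illustration of that loader parameter (not part of the example code below, which uses five fixed categories), a minimal call might look like this; passing `categories=None` downloads all 20 groups:
```
from sklearn.datasets import fetch_20newsgroups

# Illustrative only: two categories here, or categories=None for all 20 groups.
news = fetch_20newsgroups(subset="train", categories=["alt.atheism", "comp.graphics"])
print(len(news.data), "documents,", len(news.target_names), "categories")
```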
```
2823 documents
5 categories
Supervised SGDClassifier on 100% of the data:
Number of training samples: 2117
Unlabeled samples in training set: 0
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
Micro-averaged F1 score on test set: 0.902
----------
Supervised SGDClassifier on 20% of the training data:
Number of training samples: 460
Unlabeled samples in training set: 0
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
Micro-averaged F1 score on test set: 0.773
----------
SelfTrainingClassifier on 20% of the training data (rest is unlabeled):
Number of training samples: 2117
Unlabeled samples in training set: 1657
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 1, added 1088 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 2, added 185 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 3, added 53 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 4, added 23 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 5, added 11 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 6, added 11 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 7, added 3 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 8, added 6 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 9, added 4 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
End of iteration 10, added 2 new labels.
/home/runner/work/scikit-learn/scikit-learn/sklearn/linear_model/_stochastic_gradient.py:173: FutureWarning: The loss 'log' was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
warnings.warn(
Micro-averaged F1 score on test set: 0.843
----------
LabelSpreading on 20% of the data (rest is unlabeled):
Number of training samples: 2117
Unlabeled samples in training set: 1657
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/validation.py:727: FutureWarning: np.matrix usage is deprecated in 1.0 and will raise a TypeError in 1.2. Please convert to a numpy array with np.asarray. For more information see: https://numpy.org/doc/stable/reference/generated/numpy.matrix.html
warnings.warn(
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/validation.py:727: FutureWarning: np.matrix usage is deprecated in 1.0 and will raise a TypeError in 1.2. Please convert to a numpy array with np.asarray. For more information see: https://numpy.org/doc/stable/reference/generated/numpy.matrix.html
warnings.warn(
Micro-averaged F1 score on test set: 0.671
----------
```
```
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.preprocessing import FunctionTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import f1_score
# Loading dataset containing first five categories
data = fetch_20newsgroups(
subset="train",
categories=[
"alt.atheism",
"comp.graphics",
"comp.os.ms-windows.misc",
"comp.sys.ibm.pc.hardware",
"comp.sys.mac.hardware",
],
)
print("%d documents" % len(data.filenames))
print("%d categories" % len(data.target_names))
print()
# Parameters
sdg_params = dict(alpha=1e-5, penalty="l2", loss="log")
vectorizer_params = dict(ngram_range=(1, 2), min_df=5, max_df=0.8)
# Supervised Pipeline
pipeline = Pipeline(
[
("vect", CountVectorizer(**vectorizer_params)),
("tfidf", TfidfTransformer()),
("clf", SGDClassifier(**sdg_params)),
]
)
# SelfTraining Pipeline
st_pipeline = Pipeline(
[
("vect", CountVectorizer(**vectorizer_params)),
("tfidf", TfidfTransformer()),
("clf", SelfTrainingClassifier(SGDClassifier(**sdg_params), verbose=True)),
]
)
# LabelSpreading Pipeline
ls_pipeline = Pipeline(
[
("vect", CountVectorizer(**vectorizer_params)),
("tfidf", TfidfTransformer()),
        # LabelSpreading does not support sparse matrices
("todense", FunctionTransformer(lambda x: x.todense())),
("clf", LabelSpreading()),
]
)
def eval_and_print_metrics(clf, X_train, y_train, X_test, y_test):
print("Number of training samples:", len(X_train))
print("Unlabeled samples in training set:", sum(1 for x in y_train if x == -1))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(
"Micro-averaged F1 score on test set: %0.3f"
% f1_score(y_test, y_pred, average="micro")
)
print("-" * 10)
print()
if __name__ == "__main__":
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
print("Supervised SGDClassifier on 100% of the data:")
eval_and_print_metrics(pipeline, X_train, y_train, X_test, y_test)
# select a mask of 20% of the train dataset
y_mask = np.random.rand(len(y_train)) < 0.2
# X_20 and y_20 are the subset of the train dataset indicated by the mask
X_20, y_20 = map(
list, zip(*((x, y) for x, y, m in zip(X_train, y_train, y_mask) if m))
)
print("Supervised SGDClassifier on 20% of the training data:")
eval_and_print_metrics(pipeline, X_20, y_20, X_test, y_test)
# set the non-masked subset to be unlabeled
y_train[~y_mask] = -1
print("SelfTrainingClassifier on 20% of the training data (rest is unlabeled):")
eval_and_print_metrics(st_pipeline, X_train, y_train, X_test, y_test)
print("LabelSpreading on 20% of the data (rest is unlabeled):")
eval_and_print_metrics(ls_pipeline, X_train, y_train, X_test, y_test)
```
**Total running time of the script:** ( 0 minutes 7.354 seconds)
[`Download Python source code: plot_semi_supervised_newsgroups.py`](https://scikit-learn.org/1.1/_downloads/7f9c06d88a8d544a3815452dacaa0548/plot_semi_supervised_newsgroups.py)
[`Download Jupyter notebook: plot_semi_supervised_newsgroups.ipynb`](https://scikit-learn.org/1.1/_downloads/ebb639877592982f79ada0c577699e8d/plot_semi_supervised_newsgroups.ipynb)
scikit_learn Label Propagation digits active learning Note
Click [here](#sphx-glr-download-auto-examples-semi-supervised-plot-label-propagation-digits-active-learning-py) to download the full example code or to run this example in your browser via Binder
Label Propagation digits active learning
========================================
Demonstrates an active learning technique to learn handwritten digits using label propagation.
We start by training a label propagation model with only 10 labeled points, then we select the top five most uncertain points to label. Next, we train with 15 labeled points (original 10 + 5 new ones). We repeat this process four times to have a model trained with 30 labeled examples. Note you can increase this to label more than 30 by changing `max_iterations`. Labeling more than 30 can be useful to get a sense for the speed of convergence of this active learning technique.
A plot will appear showing the top 5 most uncertain digits for each iteration of training. These may or may not contain mistakes, but we will train the next model with their true labels.
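A condensed sketch of the loop described above (illustrative variable names; the complete example code follows the output below): fit Label Spreading on the currently labeled points, rank the unlabeled points by the entropy of their transduced label distributions, and hand the five most uncertain ones to the "oracle" for labeling.
```
# Condensed, illustrative sketch of the active-learning loop; see the full example below.
import numpy as np
from scipy import stats
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)
X, y = X[:330], y[:330]
labeled = np.arange(10)                # start with a small labeled pool
unlabeled = np.arange(10, len(y))

for _ in range(5):
    y_train = np.full_like(y, -1)      # -1 marks unlabeled samples
    y_train[labeled] = y[labeled]
    model = LabelSpreading(gamma=0.25, max_iter=20).fit(X, y_train)
    # Entropy of the transduced label distributions measures model uncertainty.
    entropies = stats.entropy(model.label_distributions_.T)
    query = unlabeled[np.argsort(entropies[unlabeled])[-5:]]
    labeled = np.concatenate([labeled, query])      # the "oracle" supplies y[query]
    unlabeled = np.setdiff1d(unlabeled, query)
```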
```
Iteration 0 ______________________________________________________________________
Label Spreading model: 40 labeled & 290 unlabeled (330 total)
precision recall f1-score support
0 1.00 1.00 1.00 22
1 0.78 0.69 0.73 26
2 0.93 0.93 0.93 29
3 1.00 0.89 0.94 27
4 0.92 0.96 0.94 23
5 0.96 0.70 0.81 33
6 0.97 0.97 0.97 35
7 0.94 0.91 0.92 33
8 0.62 0.89 0.74 28
9 0.73 0.79 0.76 34
accuracy 0.87 290
macro avg 0.89 0.87 0.87 290
weighted avg 0.88 0.87 0.87 290
Confusion matrix
[[22 0 0 0 0 0 0 0 0 0]
[ 0 18 2 0 0 0 1 0 5 0]
[ 0 0 27 0 0 0 0 0 2 0]
[ 0 0 0 24 0 0 0 0 3 0]
[ 0 1 0 0 22 0 0 0 0 0]
[ 0 0 0 0 0 23 0 0 0 10]
[ 0 1 0 0 0 0 34 0 0 0]
[ 0 0 0 0 0 0 0 30 3 0]
[ 0 3 0 0 0 0 0 0 25 0]
[ 0 0 0 0 2 1 0 2 2 27]]
Iteration 1 ______________________________________________________________________
Label Spreading model: 45 labeled & 285 unlabeled (330 total)
precision recall f1-score support
0 1.00 1.00 1.00 22
1 0.79 1.00 0.88 22
2 1.00 0.93 0.96 29
3 1.00 1.00 1.00 26
4 0.92 0.96 0.94 23
5 0.96 0.70 0.81 33
6 1.00 0.97 0.99 35
7 0.94 0.91 0.92 33
8 0.77 0.86 0.81 28
9 0.73 0.79 0.76 34
accuracy 0.90 285
macro avg 0.91 0.91 0.91 285
weighted avg 0.91 0.90 0.90 285
Confusion matrix
[[22 0 0 0 0 0 0 0 0 0]
[ 0 22 0 0 0 0 0 0 0 0]
[ 0 0 27 0 0 0 0 0 2 0]
[ 0 0 0 26 0 0 0 0 0 0]
[ 0 1 0 0 22 0 0 0 0 0]
[ 0 0 0 0 0 23 0 0 0 10]
[ 0 1 0 0 0 0 34 0 0 0]
[ 0 0 0 0 0 0 0 30 3 0]
[ 0 4 0 0 0 0 0 0 24 0]
[ 0 0 0 0 2 1 0 2 2 27]]
Iteration 2 ______________________________________________________________________
Label Spreading model: 50 labeled & 280 unlabeled (330 total)
precision recall f1-score support
0 1.00 1.00 1.00 22
1 0.85 1.00 0.92 22
2 1.00 1.00 1.00 28
3 1.00 1.00 1.00 26
4 0.87 1.00 0.93 20
5 0.96 0.70 0.81 33
6 1.00 0.97 0.99 35
7 0.94 1.00 0.97 32
8 0.92 0.86 0.89 28
9 0.73 0.79 0.76 34
accuracy 0.92 280
macro avg 0.93 0.93 0.93 280
weighted avg 0.93 0.92 0.92 280
Confusion matrix
[[22 0 0 0 0 0 0 0 0 0]
[ 0 22 0 0 0 0 0 0 0 0]
[ 0 0 28 0 0 0 0 0 0 0]
[ 0 0 0 26 0 0 0 0 0 0]
[ 0 0 0 0 20 0 0 0 0 0]
[ 0 0 0 0 0 23 0 0 0 10]
[ 0 1 0 0 0 0 34 0 0 0]
[ 0 0 0 0 0 0 0 32 0 0]
[ 0 3 0 0 1 0 0 0 24 0]
[ 0 0 0 0 2 1 0 2 2 27]]
Iteration 3 ______________________________________________________________________
Label Spreading model: 55 labeled & 275 unlabeled (330 total)
precision recall f1-score support
0 1.00 1.00 1.00 22
1 0.85 1.00 0.92 22
2 1.00 1.00 1.00 27
3 1.00 1.00 1.00 26
4 0.87 1.00 0.93 20
5 0.96 0.87 0.92 31
6 1.00 0.97 0.99 35
7 1.00 1.00 1.00 31
8 0.92 0.86 0.89 28
9 0.88 0.85 0.86 33
accuracy 0.95 275
macro avg 0.95 0.95 0.95 275
weighted avg 0.95 0.95 0.95 275
Confusion matrix
[[22 0 0 0 0 0 0 0 0 0]
[ 0 22 0 0 0 0 0 0 0 0]
[ 0 0 27 0 0 0 0 0 0 0]
[ 0 0 0 26 0 0 0 0 0 0]
[ 0 0 0 0 20 0 0 0 0 0]
[ 0 0 0 0 0 27 0 0 0 4]
[ 0 1 0 0 0 0 34 0 0 0]
[ 0 0 0 0 0 0 0 31 0 0]
[ 0 3 0 0 1 0 0 0 24 0]
[ 0 0 0 0 2 1 0 0 2 28]]
Iteration 4 ______________________________________________________________________
Label Spreading model: 60 labeled & 270 unlabeled (330 total)
precision recall f1-score support
0 1.00 1.00 1.00 22
1 0.96 1.00 0.98 22
2 1.00 0.96 0.98 27
3 0.96 1.00 0.98 25
4 0.86 1.00 0.93 19
5 0.96 0.87 0.92 31
6 1.00 0.97 0.99 35
7 1.00 1.00 1.00 31
8 0.92 0.96 0.94 25
9 0.88 0.85 0.86 33
accuracy 0.96 270
macro avg 0.95 0.96 0.96 270
weighted avg 0.96 0.96 0.96 270
Confusion matrix
[[22 0 0 0 0 0 0 0 0 0]
[ 0 22 0 0 0 0 0 0 0 0]
[ 0 0 26 1 0 0 0 0 0 0]
[ 0 0 0 25 0 0 0 0 0 0]
[ 0 0 0 0 19 0 0 0 0 0]
[ 0 0 0 0 0 27 0 0 0 4]
[ 0 1 0 0 0 0 34 0 0 0]
[ 0 0 0 0 0 0 0 31 0 0]
[ 0 0 0 0 1 0 0 0 24 0]
[ 0 0 0 0 2 1 0 0 2 28]]
```
```
# Authors: Clay Woolam <[email protected]>
# License: BSD
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn import datasets
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import classification_report, confusion_matrix
digits = datasets.load_digits()
rng = np.random.RandomState(0)
indices = np.arange(len(digits.data))
rng.shuffle(indices)
X = digits.data[indices[:330]]
y = digits.target[indices[:330]]
images = digits.images[indices[:330]]
n_total_samples = len(y)
n_labeled_points = 40
max_iterations = 5
unlabeled_indices = np.arange(n_total_samples)[n_labeled_points:]
f = plt.figure()
for i in range(max_iterations):
if len(unlabeled_indices) == 0:
print("No unlabeled items left to label.")
break
y_train = np.copy(y)
y_train[unlabeled_indices] = -1
lp_model = LabelSpreading(gamma=0.25, max_iter=20)
lp_model.fit(X, y_train)
predicted_labels = lp_model.transduction_[unlabeled_indices]
true_labels = y[unlabeled_indices]
cm = confusion_matrix(true_labels, predicted_labels, labels=lp_model.classes_)
print("Iteration %i %s" % (i, 70 * "_"))
print(
"Label Spreading model: %d labeled & %d unlabeled (%d total)"
% (n_labeled_points, n_total_samples - n_labeled_points, n_total_samples)
)
print(classification_report(true_labels, predicted_labels))
print("Confusion matrix")
print(cm)
# compute the entropies of transduced label distributions
pred_entropies = stats.distributions.entropy(lp_model.label_distributions_.T)
# select up to 5 digit examples that the classifier is most uncertain about
uncertainty_index = np.argsort(pred_entropies)[::-1]
uncertainty_index = uncertainty_index[
np.in1d(uncertainty_index, unlabeled_indices)
][:5]
# keep track of indices that we get labels for
delete_indices = np.array([], dtype=int)
# for more than 5 iterations, visualize the gain only on the first 5
if i < 5:
f.text(
0.05,
(1 - (i + 1) * 0.183),
"model %d\n\nfit with\n%d labels" % ((i + 1), i * 5 + 10),
size=10,
)
for index, image_index in enumerate(uncertainty_index):
image = images[image_index]
# for more than 5 iterations, visualize the gain only on the first 5
if i < 5:
sub = f.add_subplot(5, 5, index + 1 + (5 * i))
sub.imshow(image, cmap=plt.cm.gray_r, interpolation="none")
sub.set_title(
"predict: %i\ntrue: %i"
% (lp_model.transduction_[image_index], y[image_index]),
size=10,
)
sub.axis("off")
# labeling 5 points, remote from labeled set
(delete_index,) = np.where(unlabeled_indices == image_index)
delete_indices = np.concatenate((delete_indices, delete_index))
unlabeled_indices = np.delete(unlabeled_indices, delete_indices)
n_labeled_points += len(uncertainty_index)
f.suptitle(
"Active learning with Label Propagation.\nRows show 5 most "
"uncertain labels to learn with the next model.",
y=1.15,
)
plt.subplots_adjust(left=0.2, bottom=0.03, right=0.9, top=0.9, wspace=0.2, hspace=0.85)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.387 seconds)
[`Download Python source code: plot_label_propagation_digits_active_learning.py`](https://scikit-learn.org/1.1/_downloads/0dd5e0a7942f1446accf7dffd47aed28/plot_label_propagation_digits_active_learning.py)
[`Download Jupyter notebook: plot_label_propagation_digits_active_learning.ipynb`](https://scikit-learn.org/1.1/_downloads/a72c1f35af8f7695645ce87422b177bf/plot_label_propagation_digits_active_learning.ipynb)
scikit_learn Scalable learning with polynomial kernel approximation Note
Click [here](#sphx-glr-download-auto-examples-kernel-approximation-plot-scalable-poly-kernels-py) to download the full example code or to run this example in your browser via Binder
Scalable learning with polynomial kernel approximation
======================================================
This example illustrates the use of `PolynomialCountSketch` to efficiently generate polynomial kernel feature-space approximations. This is used to train linear classifiers that approximate the accuracy of kernelized ones.
We use the Covtype dataset [2], trying to reproduce the experiments from the original Tensor Sketch paper [1], i.e. the algorithm implemented by [`PolynomialCountSketch`](../../modules/generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch").
First, we compute the accuracy of a linear classifier on the original features. Then, we train linear classifiers on different numbers of features (`n_components`) generated by [`PolynomialCountSketch`](../../modules/generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch"), approximating the accuracy of a kernelized classifier in a scalable manner.
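Before the experiment itself, a quick hedged sketch (toy random data, not part of the original example) of what the transformer does: inner products of the sketched features approximate the exact polynomial kernel.
```
import numpy as np
from sklearn.kernel_approximation import PolynomialCountSketch
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.RandomState(0)
X_toy = rng.randn(100, 10)

# Exact degree-2 polynomial kernel with gamma=1 and coef0=0 ...
K_exact = polynomial_kernel(X_toy, degree=2, gamma=1.0, coef0=0)
# ... versus inner products of the sketched feature map.
ps = PolynomialCountSketch(degree=2, gamma=1.0, coef0=0, n_components=1000, random_state=0)
X_sketched = ps.fit_transform(X_toy)
K_approx = X_sketched @ X_sketched.T

print("mean absolute deviation:", np.abs(K_exact - K_approx).mean())
```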
```
# Author: Daniel Lopez-Sanchez <[email protected]>
# License: BSD 3 clause
```
Preparing the data
------------------
Load the Covtype dataset, which contains 581,012 samples with 54 features each, distributed among 7 classes. The goal of this dataset is to predict forest cover type from cartographic variables only (no remotely sensed data). After loading, we transform it into a binary classification problem to match the version of the dataset on the LIBSVM webpage [2], which was the one used in [1].
```
from sklearn.datasets import fetch_covtype
X, y = fetch_covtype(return_X_y=True)
y[y != 2] = 0
y[y == 2] = 1 # We will try to separate class 2 from the other 6 classes.
```
Partitioning the data
---------------------
Here we select 5,000 samples for training and 10,000 for testing. To actually reproduce the results in the original Tensor Sketch paper, select 100,000 for training.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=5_000, test_size=10_000, random_state=42
)
```
Feature normalization
---------------------
Now scale features to the range [0, 1] to match the format of the dataset in the LIBSVM webpage, and then normalize to unit length as done in the original Tensor Sketch paper [1].
```
from sklearn.preprocessing import MinMaxScaler, Normalizer
from sklearn.pipeline import make_pipeline
mm = make_pipeline(MinMaxScaler(), Normalizer())
X_train = mm.fit_transform(X_train)
X_test = mm.transform(X_test)
```
Establishing a baseline model
-----------------------------
As a baseline, train a linear SVM on the original features and print the accuracy. We also measure and store accuracies and training times to plot them later.
```
import time
from sklearn.svm import LinearSVC
results = {}
lsvm = LinearSVC()
start = time.time()
lsvm.fit(X_train, y_train)
lsvm_time = time.time() - start
lsvm_score = 100 * lsvm.score(X_test, y_test)
results["LSVM"] = {"time": lsvm_time, "score": lsvm_score}
print(f"Linear SVM score on raw features: {lsvm_score:.2f}%")
```
```
Linear SVM score on raw features: 75.62%
```
Establishing the kernel approximation model
-------------------------------------------
Then we train linear SVMs on the features generated by [`PolynomialCountSketch`](../../modules/generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch") with different values for `n_components`, showing that these kernel feature approximations improve the accuracy of linear classification. In typical application scenarios, `n_components` should be larger than the number of features in the input representation in order to achieve an improvement with respect to linear classification. As a rule of thumb, the optimum of evaluation score / run time cost is typically achieved at around `n_components` = 10 \* `n_features`, though this might depend on the specific dataset being handled. Note that, since the original samples have 54 features, the explicit feature map of the polynomial kernel of degree four would have approximately 8.5 million features (precisely, 54^4). Thanks to [`PolynomialCountSketch`](../../modules/generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch"), we can condense most of the discriminative information of that feature space into a much more compact representation. While we run the experiment only a single time (`n_runs` = 1) in this example, in practice one should repeat the experiment several times to compensate for the stochastic nature of [`PolynomialCountSketch`](../../modules/generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch").
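The dimensionality claim above is easy to verify with a back-of-the-envelope check (not part of the original example):
```
n_features, degree, n_components = 54, 4, 2000
print(n_features**degree)                  # 8503056 explicit degree-4 monomial features
print(n_features**degree // n_components)  # ~4251x fewer dimensions at n_components=2000
```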
```
from sklearn.kernel_approximation import PolynomialCountSketch
n_runs = 1
N_COMPONENTS = [250, 500, 1000, 2000]
for n_components in N_COMPONENTS:
ps_lsvm_time = 0
ps_lsvm_score = 0
for _ in range(n_runs):
pipeline = make_pipeline(
PolynomialCountSketch(n_components=n_components, degree=4),
LinearSVC(),
)
start = time.time()
pipeline.fit(X_train, y_train)
ps_lsvm_time += time.time() - start
ps_lsvm_score += 100 * pipeline.score(X_test, y_test)
ps_lsvm_time /= n_runs
ps_lsvm_score /= n_runs
results[f"LSVM + PS({n_components})"] = {
"time": ps_lsvm_time,
"score": ps_lsvm_score,
}
print(
f"Linear SVM score on {n_components} PolynomialCountSketch "
+ f"features: {ps_lsvm_score:.2f}%"
)
```
```
Linear SVM score on 250 PolynomialCountSketch features: 76.55%
Linear SVM score on 500 PolynomialCountSketch features: 76.92%
Linear SVM score on 1000 PolynomialCountSketch features: 77.79%
Linear SVM score on 2000 PolynomialCountSketch features: 78.59%
```
Establishing the kernelized SVM model
-------------------------------------
Train a kernelized SVM to see how well [`PolynomialCountSketch`](../../modules/generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch") is approximating the performance of the kernel. This, of course, may take some time, as the SVC class has relatively poor scalability. This is the reason why kernel approximators are so useful:
```
from sklearn.svm import SVC
ksvm = SVC(C=500.0, kernel="poly", degree=4, coef0=0, gamma=1.0)
start = time.time()
ksvm.fit(X_train, y_train)
ksvm_time = time.time() - start
ksvm_score = 100 * ksvm.score(X_test, y_test)
results["KSVM"] = {"time": ksvm_time, "score": ksvm_score}
print(f"Kernel-SVM score on raw features: {ksvm_score:.2f}%")
```
```
Kernel-SVM score on raw features: 79.78%
```
Comparing the results
---------------------
Finally, plot the results of the different methods against their training times. As we can see, the kernelized SVM achieves a higher accuracy, but its training time is much larger and, most importantly, will grow much faster if the number of training samples increases.
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(7, 7))
ax.scatter(
[
results["LSVM"]["time"],
],
[
results["LSVM"]["score"],
],
label="Linear SVM",
c="green",
marker="^",
)
ax.scatter(
[
results["LSVM + PS(250)"]["time"],
],
[
results["LSVM + PS(250)"]["score"],
],
label="Linear SVM + PolynomialCountSketch",
c="blue",
)
for n_components in N_COMPONENTS:
ax.scatter(
[
results[f"LSVM + PS({n_components})"]["time"],
],
[
results[f"LSVM + PS({n_components})"]["score"],
],
c="blue",
)
ax.annotate(
f"n_comp.={n_components}",
(
results[f"LSVM + PS({n_components})"]["time"],
results[f"LSVM + PS({n_components})"]["score"],
),
xytext=(-30, 10),
textcoords="offset pixels",
)
ax.scatter(
[
results["KSVM"]["time"],
],
[
results["KSVM"]["score"],
],
label="Kernel SVM",
c="red",
marker="x",
)
ax.set_xlabel("Training time (s)")
ax.set_ylabel("Accuracy (%)")
ax.legend()
plt.show()
```
### References
[1] Pham, Ninh and Rasmus Pagh. “Fast and scalable polynomial kernels via explicit feature maps.” KDD ‘13 (2013). <https://doi.org/10.1145/2487575.2487591>
[2] LIBSVM binary datasets repository <https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html>
**Total running time of the script:** ( 0 minutes 12.357 seconds)
[`Download Python source code: plot_scalable_poly_kernels.py`](https://scikit-learn.org/1.1/_downloads/c557c992950d3a2cf0cc4280c9dbf39b/plot_scalable_poly_kernels.py)
[`Download Jupyter notebook: plot_scalable_poly_kernels.ipynb`](https://scikit-learn.org/1.1/_downloads/cb5b0b55b4ddb01e9ad80e6e28417c64/plot_scalable_poly_kernels.ipynb)
scikit_learn Probability calibration of classifiers Note
Click [here](#sphx-glr-download-auto-examples-calibration-plot-calibration-py) to download the full example code or to run this example in your browser via Binder
Probability calibration of classifiers
======================================
When performing classification you often want to predict not only the class label, but also the associated probability. This probability gives you some kind of confidence in the prediction. However, not all classifiers provide well-calibrated probabilities: some are over-confident while others are under-confident. Thus, a separate calibration of predicted probabilities is often desirable as a postprocessing step. This example illustrates two different methods for this calibration and evaluates the quality of the returned probabilities using the Brier score (see <https://en.wikipedia.org/wiki/Brier_score>).
Compared are the estimated probabilities of a Gaussian naive Bayes classifier without calibration, with sigmoid calibration, and with non-parametric isotonic calibration. One can observe that only the non-parametric model is able to provide a probability calibration that returns probabilities close to the expected 0.5 for most of the samples belonging to the middle cluster with heterogeneous labels. This results in a significantly improved Brier score.
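As a reminder of what the Brier score measures, it is simply the mean squared difference between the predicted probability of the positive class and the actual outcome; a tiny hand-computed sketch with made-up numbers:
```
import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([0, 1, 1, 0])
p_pred = np.array([0.1, 0.9, 0.8, 0.3])

manual = np.mean((p_pred - y_true) ** 2)         # (0.01 + 0.01 + 0.04 + 0.09) / 4 = 0.0375
print(manual, brier_score_loss(y_true, p_pred))  # both print 0.0375
```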
```
# Authors:
# Mathieu Blondel <[email protected]>
# Alexandre Gramfort <[email protected]>
# Balazs Kegl <[email protected]>
# Jan Hendrik Metzen <[email protected]>
# License: BSD Style.
```
Generate synthetic dataset
--------------------------
```
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
n_samples = 50000
n_bins = 3 # use 3 bins for calibration_curve as we have 3 clusters here
# Generate 3 blobs with 2 classes where the second blob contains
# half positive samples and half negative samples. Probability in this
# blob is therefore 0.5.
centers = [(-5, -5), (0, 0), (5, 5)]
X, y = make_blobs(n_samples=n_samples, centers=centers, shuffle=False, random_state=42)
y[: n_samples // 2] = 0
y[n_samples // 2 :] = 1
sample_weight = np.random.RandomState(42).rand(y.shape[0])
# split train, test for calibration
X_train, X_test, y_train, y_test, sw_train, sw_test = train_test_split(
X, y, sample_weight, test_size=0.9, random_state=42
)
```
Gaussian Naive-Bayes
--------------------
```
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss
from sklearn.naive_bayes import GaussianNB
# With no calibration
clf = GaussianNB()
clf.fit(X_train, y_train) # GaussianNB itself does not support sample-weights
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
# With isotonic calibration
clf_isotonic = CalibratedClassifierCV(clf, cv=2, method="isotonic")
clf_isotonic.fit(X_train, y_train, sample_weight=sw_train)
prob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]
# With sigmoid calibration
clf_sigmoid = CalibratedClassifierCV(clf, cv=2, method="sigmoid")
clf_sigmoid.fit(X_train, y_train, sample_weight=sw_train)
prob_pos_sigmoid = clf_sigmoid.predict_proba(X_test)[:, 1]
print("Brier score losses: (the smaller the better)")
clf_score = brier_score_loss(y_test, prob_pos_clf, sample_weight=sw_test)
print("No calibration: %1.3f" % clf_score)
clf_isotonic_score = brier_score_loss(y_test, prob_pos_isotonic, sample_weight=sw_test)
print("With isotonic calibration: %1.3f" % clf_isotonic_score)
clf_sigmoid_score = brier_score_loss(y_test, prob_pos_sigmoid, sample_weight=sw_test)
print("With sigmoid calibration: %1.3f" % clf_sigmoid_score)
```
```
Brier score losses: (the smaller the better)
No calibration: 0.104
With isotonic calibration: 0.084
With sigmoid calibration: 0.109
```
Plot data and the predicted probabilities
-----------------------------------------
```
from matplotlib import cm
import matplotlib.pyplot as plt
plt.figure()
y_unique = np.unique(y)
colors = cm.rainbow(np.linspace(0.0, 1.0, y_unique.size))
for this_y, color in zip(y_unique, colors):
this_X = X_train[y_train == this_y]
this_sw = sw_train[y_train == this_y]
plt.scatter(
this_X[:, 0],
this_X[:, 1],
s=this_sw * 50,
c=color[np.newaxis, :],
alpha=0.5,
edgecolor="k",
label="Class %s" % this_y,
)
plt.legend(loc="best")
plt.title("Data")
plt.figure()
order = np.lexsort((prob_pos_clf,))
plt.plot(prob_pos_clf[order], "r", label="No calibration (%1.3f)" % clf_score)
plt.plot(
prob_pos_isotonic[order],
"g",
linewidth=3,
label="Isotonic calibration (%1.3f)" % clf_isotonic_score,
)
plt.plot(
prob_pos_sigmoid[order],
"b",
linewidth=3,
label="Sigmoid calibration (%1.3f)" % clf_sigmoid_score,
)
plt.plot(
np.linspace(0, y_test.size, 51)[1::2],
y_test[order].reshape(25, -1).mean(1),
"k",
linewidth=3,
label=r"Empirical",
)
plt.ylim([-0.05, 1.05])
plt.xlabel("Instances sorted according to predicted probability (uncalibrated GNB)")
plt.ylabel("P(y=1)")
plt.legend(loc="upper left")
plt.title("Gaussian naive Bayes probabilities")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.303 seconds)
[`Download Python source code: plot_calibration.py`](https://scikit-learn.org/1.1/_downloads/0b39f715b5e32f01df3d212b6d822b82/plot_calibration.py)
[`Download Jupyter notebook: plot_calibration.ipynb`](https://scikit-learn.org/1.1/_downloads/0c15970ac17183d2bf864a9563081aeb/plot_calibration.ipynb)
scikit_learn Probability Calibration for 3-class classification Note
Click [here](#sphx-glr-download-auto-examples-calibration-plot-calibration-multiclass-py) to download the full example code or to run this example in your browser via Binder
Probability Calibration for 3-class classification
==================================================
This example illustrates how sigmoid [calibration](../../modules/calibration#calibration) changes predicted probabilities for a 3-class classification problem. Illustrated is the standard 2-simplex, where the three corners correspond to the three classes. Arrows point from the probability vectors predicted by an uncalibrated classifier to the probability vectors predicted by the same classifier after sigmoid calibration on a hold-out validation set. Colors indicate the true class of an instance (red: class 1, green: class 2, blue: class 3).
Data
----
Below, we generate a classification dataset with 2000 samples, 2 features and 3 target classes. We then split the data as follows:
* train: 600 samples (for training the classifier)
* valid: 400 samples (for calibrating predicted probabilities)
* test: 1000 samples
Note that we also create `X_train_valid` and `y_train_valid`, which consist of both the train and valid subsets. This is used when we only want to train the classifier but not calibrate the predicted probabilities.
```
# Author: Jan Hendrik Metzen <[email protected]>
# License: BSD Style.
import numpy as np
from sklearn.datasets import make_blobs
np.random.seed(0)
X, y = make_blobs(
n_samples=2000, n_features=2, centers=3, random_state=42, cluster_std=5.0
)
X_train, y_train = X[:600], y[:600]
X_valid, y_valid = X[600:1000], y[600:1000]
X_train_valid, y_train_valid = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]
```
Fitting and calibration
-----------------------
First, we will train a [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") with 25 base estimators (trees) on the concatenated train and validation data (1000 samples). This is the uncalibrated classifier.
```
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train_valid, y_train_valid)
```
```
RandomForestClassifier(n_estimators=25)
```
To train the calibrated classifier, we start with the same [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") but train it using only the train data subset (600 samples) then calibrate, with `method='sigmoid'`, using the valid data subset (400 samples) in a 2-stage process.
```
from sklearn.calibration import CalibratedClassifierCV
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train, y_train)
cal_clf = CalibratedClassifierCV(clf, method="sigmoid", cv="prefit")
cal_clf.fit(X_valid, y_valid)
```
```
CalibratedClassifierCV(base_estimator=RandomForestClassifier(n_estimators=25),
cv='prefit')
```
Compare probabilities
---------------------
Below we plot a 2-simplex with arrows showing the change in predicted probabilities of the test samples.
```
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
colors = ["r", "g", "b"]
clf_probs = clf.predict_proba(X_test)
cal_clf_probs = cal_clf.predict_proba(X_test)
# Plot arrows
for i in range(clf_probs.shape[0]):
plt.arrow(
clf_probs[i, 0],
clf_probs[i, 1],
cal_clf_probs[i, 0] - clf_probs[i, 0],
cal_clf_probs[i, 1] - clf_probs[i, 1],
color=colors[y_test[i]],
head_width=1e-2,
)
# Plot perfect predictions, at each vertex
plt.plot([1.0], [0.0], "ro", ms=20, label="Class 1")
plt.plot([0.0], [1.0], "go", ms=20, label="Class 2")
plt.plot([0.0], [0.0], "bo", ms=20, label="Class 3")
# Plot boundaries of unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], "k", label="Simplex")
# Annotate 6 points around the simplex, and the mid point inside the simplex
plt.annotate(
r"($\frac{1}{3}$, $\frac{1}{3}$, $\frac{1}{3}$)",
xy=(1.0 / 3, 1.0 / 3),
xytext=(1.0 / 3, 0.23),
xycoords="data",
arrowprops=dict(facecolor="black", shrink=0.05),
horizontalalignment="center",
verticalalignment="center",
)
plt.plot([1.0 / 3], [1.0 / 3], "ko", ms=5)
plt.annotate(
r"($\frac{1}{2}$, $0$, $\frac{1}{2}$)",
xy=(0.5, 0.0),
xytext=(0.5, 0.1),
xycoords="data",
arrowprops=dict(facecolor="black", shrink=0.05),
horizontalalignment="center",
verticalalignment="center",
)
plt.annotate(
r"($0$, $\frac{1}{2}$, $\frac{1}{2}$)",
xy=(0.0, 0.5),
xytext=(0.1, 0.5),
xycoords="data",
arrowprops=dict(facecolor="black", shrink=0.05),
horizontalalignment="center",
verticalalignment="center",
)
plt.annotate(
r"($\frac{1}{2}$, $\frac{1}{2}$, $0$)",
xy=(0.5, 0.5),
xytext=(0.6, 0.6),
xycoords="data",
arrowprops=dict(facecolor="black", shrink=0.05),
horizontalalignment="center",
verticalalignment="center",
)
plt.annotate(
r"($0$, $0$, $1$)",
xy=(0, 0),
xytext=(0.1, 0.1),
xycoords="data",
arrowprops=dict(facecolor="black", shrink=0.05),
horizontalalignment="center",
verticalalignment="center",
)
plt.annotate(
r"($1$, $0$, $0$)",
xy=(1, 0),
xytext=(1, 0.1),
xycoords="data",
arrowprops=dict(facecolor="black", shrink=0.05),
horizontalalignment="center",
verticalalignment="center",
)
plt.annotate(
r"($0$, $1$, $0$)",
xy=(0, 1),
xytext=(0.1, 1),
xycoords="data",
arrowprops=dict(facecolor="black", shrink=0.05),
horizontalalignment="center",
verticalalignment="center",
)
# Add grid
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], "k", alpha=0.2)
plt.plot([0, 0 + (1 - x) / 2], [x, x + (1 - x) / 2], "k", alpha=0.2)
plt.plot([x, x + (1 - x) / 2], [0, 0 + (1 - x) / 2], "k", alpha=0.2)
plt.title("Change of predicted probabilities on test samples after sigmoid calibration")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
_ = plt.legend(loc="best")
```
In the figure above, each vertex of the simplex represents a perfectly predicted class (e.g., 1, 0, 0). The mid point inside the simplex represents predicting the three classes with equal probability (i.e., 1/3, 1/3, 1/3). Each arrow starts at the uncalibrated probabilities and ends with the arrowhead at the calibrated probability. The color of the arrow represents the true class of that test sample.
The uncalibrated classifier is overly confident in its predictions and incurs a large [log loss](../../modules/model_evaluation#log-loss). The calibrated classifier incurs a lower [log loss](../../modules/model_evaluation#log-loss) due to two factors. First, notice in the figure above that the arrows generally point away from the edges of the simplex, where the probability of one class is 0. Second, a large proportion of the arrows point towards the true class, e.g., green arrows (samples where the true class is ‘green’) generally point towards the green vertex. This results in fewer over-confident predicted probabilities near 0 and, at the same time, an increase in the predicted probabilities of the correct class. Thus, the calibrated classifier produces more accurate predicted probabilities that incur a lower [log loss](../../modules/model_evaluation#log-loss).
We can show this objectively by comparing the [log loss](../../modules/model_evaluation#log-loss) of the uncalibrated and calibrated classifiers on the predictions of the 1000 test samples. Note that an alternative would have been to increase the number of base estimators (trees) of the [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") which would have resulted in a similar decrease in [log loss](../../modules/model_evaluation#log-loss).
```
from sklearn.metrics import log_loss
score = log_loss(y_test, clf_probs)
cal_score = log_loss(y_test, cal_clf_probs)
print("Log-loss of")
print(f" * uncalibrated classifier: {score:.3f}")
print(f" * calibrated classifier: {cal_score:.3f}")
```
```
Log-loss of
* uncalibrated classifier: 1.290
* calibrated classifier: 0.549
```
Finally we generate a grid of possible uncalibrated probabilities over the 2-simplex, compute the corresponding calibrated probabilities and plot arrows for each. The arrows are colored according to the highest uncalibrated probability. This illustrates the learned calibration map:
```
plt.figure(figsize=(10, 10))
# Generate grid of probability values
p1d = np.linspace(0, 1, 20)
p0, p1 = np.meshgrid(p1d, p1d)
p2 = 1 - p0 - p1
p = np.c_[p0.ravel(), p1.ravel(), p2.ravel()]
p = p[p[:, 2] >= 0]
# Use the three class-wise calibrators to compute calibrated probabilities
calibrated_classifier = cal_clf.calibrated_classifiers_[0]
prediction = np.vstack(
[
calibrator.predict(this_p)
for calibrator, this_p in zip(calibrated_classifier.calibrators, p.T)
]
).T
# Re-normalize the calibrated predictions to make sure they stay inside the
# simplex. This same renormalization step is performed internally by the
# predict method of CalibratedClassifierCV on multiclass problems.
prediction /= prediction.sum(axis=1)[:, None]
# Plot changes in predicted probabilities induced by the calibrators
for i in range(prediction.shape[0]):
plt.arrow(
p[i, 0],
p[i, 1],
prediction[i, 0] - p[i, 0],
prediction[i, 1] - p[i, 1],
head_width=1e-2,
color=colors[np.argmax(p[i])],
)
# Plot the boundaries of the unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], "k", label="Simplex")
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], "k", alpha=0.2)
plt.plot([0, 0 + (1 - x) / 2], [x, x + (1 - x) / 2], "k", alpha=0.2)
plt.plot([x, x + (1 - x) / 2], [0, 0 + (1 - x) / 2], "k", alpha=0.2)
plt.title("Learned sigmoid calibration map")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.304 seconds)
[`Download Python source code: plot_calibration_multiclass.py`](https://scikit-learn.org/1.1/_downloads/f4a2350e7cc794cdb19840052e96a1e7/plot_calibration_multiclass.py)
[`Download Jupyter notebook: plot_calibration_multiclass.ipynb`](https://scikit-learn.org/1.1/_downloads/b367e30cc681ed484e0148f4ce9eccb0/plot_calibration_multiclass.ipynb)
scikit_learn Probability Calibration curves Note
Click [here](#sphx-glr-download-auto-examples-calibration-plot-calibration-curve-py) to download the full example code or to run this example in your browser via Binder
Probability Calibration curves
==============================
When performing classification one often wants to predict not only the class label, but also the associated probability. This probability gives some kind of confidence on the prediction. This example demonstrates how to visualize how well calibrated the predicted probabilities are using calibration curves, also known as reliability diagrams. Calibration of an uncalibrated classifier will also be demonstrated.
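The reliability-diagram data can also be computed directly with `calibration_curve`, which bins the predicted probabilities and returns, per bin, the observed fraction of positives and the mean predicted probability. A minimal sketch on synthetic, perfectly calibrated scores (illustrative only, not part of the example below):
```
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.RandomState(0)
p_pred = rng.uniform(size=2000)
# Outcomes drawn so that they actually follow the predicted probabilities.
y_true = (rng.uniform(size=2000) < p_pred).astype(int)

frac_pos, mean_pred = calibration_curve(y_true, p_pred, n_bins=10)
for fp, mp in zip(frac_pos, mean_pred):
    print(f"mean predicted {mp:.2f} -> observed fraction of positives {fp:.2f}")
```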
```
# Author: Alexandre Gramfort <[email protected]>
# Jan Hendrik Metzen <[email protected]>
# License: BSD 3 clause.
```
Dataset
-------
We will use a synthetic binary classification dataset with 100,000 samples and 20 features. Of the 20 features, only 2 are informative, 10 are redundant (random combinations of the informative features) and the remaining 8 are uninformative (random numbers). Of the 100,000 samples, 1,000 will be used for model fitting and the rest for testing.
```
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(
n_samples=100_000, n_features=20, n_informative=2, n_redundant=10, random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.99, random_state=42
)
```
Calibration curves
------------------
### Gaussian Naive Bayes
First, we will compare:
* [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") (used as baseline since very often, properly regularized logistic regression is well calibrated by default thanks to the use of the log-loss)
* Uncalibrated [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB")
* [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB") with isotonic and sigmoid calibration (see [User Guide](../../modules/calibration#calibration))
Calibration curves for all 4 conditions are plotted below, with the average predicted probability for each bin on the x-axis and the fraction of positive classes in each bin on the y-axis.
```
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
from sklearn.calibration import CalibratedClassifierCV, CalibrationDisplay
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
lr = LogisticRegression(C=1.0)
gnb = GaussianNB()
gnb_isotonic = CalibratedClassifierCV(gnb, cv=2, method="isotonic")
gnb_sigmoid = CalibratedClassifierCV(gnb, cv=2, method="sigmoid")
clf_list = [
(lr, "Logistic"),
(gnb, "Naive Bayes"),
(gnb_isotonic, "Naive Bayes + Isotonic"),
(gnb_sigmoid, "Naive Bayes + Sigmoid"),
]
```
```
fig = plt.figure(figsize=(10, 10))
gs = GridSpec(4, 2)
colors = plt.cm.get_cmap("Dark2")
ax_calibration_curve = fig.add_subplot(gs[:2, :2])
calibration_displays = {}
for i, (clf, name) in enumerate(clf_list):
clf.fit(X_train, y_train)
display = CalibrationDisplay.from_estimator(
clf,
X_test,
y_test,
n_bins=10,
name=name,
ax=ax_calibration_curve,
color=colors(i),
)
calibration_displays[name] = display
ax_calibration_curve.grid()
ax_calibration_curve.set_title("Calibration plots (Naive Bayes)")
# Add histogram
grid_positions = [(2, 0), (2, 1), (3, 0), (3, 1)]
for i, (_, name) in enumerate(clf_list):
row, col = grid_positions[i]
ax = fig.add_subplot(gs[row, col])
ax.hist(
calibration_displays[name].y_prob,
range=(0, 1),
bins=10,
label=name,
color=colors(i),
)
ax.set(title=name, xlabel="Mean predicted probability", ylabel="Count")
plt.tight_layout()
plt.show()
```
Uncalibrated [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB") is poorly calibrated because of the redundant features which violate the assumption of feature-independence and result in an overly confident classifier, which is indicated by the typical transposed-sigmoid curve. Calibration of the probabilities of [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB") with [Isotonic regression](../../modules/isotonic#isotonic) can fix this issue as can be seen from the nearly diagonal calibration curve. Sigmoid calibration also improves calibration slightly, albeit not as strongly as the non-parametric isotonic regression. This can be attributed to the fact that we have plenty of calibration data such that the greater flexibility of the non-parametric model can be exploited.
Below we will make a quantitative analysis considering several classification metrics: [Brier score loss](../../modules/model_evaluation#brier-score-loss), [Log loss](../../modules/model_evaluation#log-loss), [precision, recall, F1 score](../../modules/model_evaluation#precision-recall-f-measure-metrics) and [ROC AUC](../../modules/model_evaluation#roc-metrics).
```
from collections import defaultdict
import pandas as pd
from sklearn.metrics import (
precision_score,
recall_score,
f1_score,
brier_score_loss,
log_loss,
roc_auc_score,
)
scores = defaultdict(list)
for i, (clf, name) in enumerate(clf_list):
clf.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)
y_pred = clf.predict(X_test)
scores["Classifier"].append(name)
for metric in [brier_score_loss, log_loss]:
score_name = metric.__name__.replace("_", " ").replace("score", "").capitalize()
scores[score_name].append(metric(y_test, y_prob[:, 1]))
for metric in [precision_score, recall_score, f1_score, roc_auc_score]:
score_name = metric.__name__.replace("_", " ").replace("score", "").capitalize()
scores[score_name].append(metric(y_test, y_pred))
score_df = pd.DataFrame(scores).set_index("Classifier")
score_df.round(decimals=3)
score_df
```
| | Brier loss | Log loss | Precision | Recall | F1 | Roc auc |
| --- | --- | --- | --- | --- | --- | --- |
| Classifier | | | | | | |
| Logistic | 0.098921 | 0.323178 | 0.872009 | 0.851408 | 0.861586 | 0.863157 |
| Naive Bayes | 0.117608 | 0.782247 | 0.857400 | 0.875941 | 0.866571 | 0.865055 |
| Naive Bayes + Isotonic | 0.098332 | 0.368412 | 0.883065 | 0.836224 | 0.859007 | 0.862690 |
| Naive Bayes + Sigmoid | 0.108880 | 0.368896 | 0.861106 | 0.871277 | 0.866161 | 0.865300 |
Notice that although calibration improves the [Brier score loss](../../modules/model_evaluation#brier-score-loss) (a metric composed of calibration term and refinement term) and [Log loss](../../modules/model_evaluation#log-loss), it does not significantly alter the prediction accuracy measures (precision, recall and F1 score). This is because calibration should not significantly change prediction probabilities at the location of the decision threshold (at x = 0.5 on the graph). Calibration should, however, make the predicted probabilities more accurate and thus more useful for making allocation decisions under uncertainty. Further, ROC AUC should not change at all because calibration is a monotonic transformation. Indeed, no rank metrics are affected by calibration.
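That last point can be checked directly: any strictly increasing transformation of the scores preserves their ranking, so the ROC AUC is unchanged. A small sketch with made-up scores (illustrative only):
```
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=1000)
scores = rng.normal(size=1000) + y_true        # noisy scores, higher on average for positives

# A sigmoid is strictly increasing, so the ranking of the samples is unchanged.
remapped = 1 / (1 + np.exp(-2 * scores))
print(roc_auc_score(y_true, scores), roc_auc_score(y_true, remapped))  # identical values
```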
### Linear support vector classifier
Next, we will compare:
* [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") (baseline)
* Uncalibrated [`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"). Since SVC does not output probabilities by default, we naively scale the output of the [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function) into [0, 1] by applying min-max scaling.
* [`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") with isotonic and sigmoid calibration (see [User Guide](../../modules/calibration#calibration))
```
import numpy as np
from sklearn.svm import LinearSVC
class NaivelyCalibratedLinearSVC(LinearSVC):
"""LinearSVC with `predict_proba` method that naively scales
`decision_function` output for binary classification."""
def fit(self, X, y):
super().fit(X, y)
df = self.decision_function(X)
self.df_min_ = df.min()
self.df_max_ = df.max()
def predict_proba(self, X):
"""Min-max scale output of `decision_function` to [0, 1]."""
df = self.decision_function(X)
calibrated_df = (df - self.df_min_) / (self.df_max_ - self.df_min_)
proba_pos_class = np.clip(calibrated_df, 0, 1)
proba_neg_class = 1 - proba_pos_class
proba = np.c_[proba_neg_class, proba_pos_class]
return proba
```
```
lr = LogisticRegression(C=1.0)
svc = NaivelyCalibratedLinearSVC(max_iter=10_000)
svc_isotonic = CalibratedClassifierCV(svc, cv=2, method="isotonic")
svc_sigmoid = CalibratedClassifierCV(svc, cv=2, method="sigmoid")
clf_list = [
(lr, "Logistic"),
(svc, "SVC"),
(svc_isotonic, "SVC + Isotonic"),
(svc_sigmoid, "SVC + Sigmoid"),
]
```
```
fig = plt.figure(figsize=(10, 10))
gs = GridSpec(4, 2)
ax_calibration_curve = fig.add_subplot(gs[:2, :2])
calibration_displays = {}
for i, (clf, name) in enumerate(clf_list):
clf.fit(X_train, y_train)
display = CalibrationDisplay.from_estimator(
clf,
X_test,
y_test,
n_bins=10,
name=name,
ax=ax_calibration_curve,
color=colors(i),
)
calibration_displays[name] = display
ax_calibration_curve.grid()
ax_calibration_curve.set_title("Calibration plots (SVC)")
# Add histogram
grid_positions = [(2, 0), (2, 1), (3, 0), (3, 1)]
for i, (_, name) in enumerate(clf_list):
row, col = grid_positions[i]
ax = fig.add_subplot(gs[row, col])
ax.hist(
calibration_displays[name].y_prob,
range=(0, 1),
bins=10,
label=name,
color=colors(i),
)
ax.set(title=name, xlabel="Mean predicted probability", ylabel="Count")
plt.tight_layout()
plt.show()
```
[`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") shows the opposite behavior to [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB"); the calibration curve has a sigmoid shape, which is typical for an under-confident classifier. In the case of [`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"), this is caused by the margin property of the hinge loss, which focuses on samples that are close to the decision boundary (support vectors). Samples that are far away from the decision boundary do not impact the hinge loss. It thus makes sense that [`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") does not try to separate samples in the high confidence regions. This leads to flatter calibration curves near 0 and 1 and is empirically shown with a variety of datasets in Niculescu-Mizil & Caruana [[1]](#id2).
Both kinds of calibration (sigmoid and isotonic) can fix this issue and yield similar results.
As before, we show the [Brier score loss](../../modules/model_evaluation#brier-score-loss), [Log loss](../../modules/model_evaluation#log-loss), [precision, recall, F1 score](../../modules/model_evaluation#precision-recall-f-measure-metrics) and [ROC AUC](../../modules/model_evaluation#roc-metrics).
```
scores = defaultdict(list)
for i, (clf, name) in enumerate(clf_list):
clf.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)
y_pred = clf.predict(X_test)
scores["Classifier"].append(name)
for metric in [brier_score_loss, log_loss]:
score_name = metric.__name__.replace("_", " ").replace("score", "").capitalize()
scores[score_name].append(metric(y_test, y_prob[:, 1]))
for metric in [precision_score, recall_score, f1_score, roc_auc_score]:
score_name = metric.__name__.replace("_", " ").replace("score", "").capitalize()
scores[score_name].append(metric(y_test, y_pred))
score_df = pd.DataFrame(scores).set_index("Classifier")
score_df.round(decimals=3)
score_df
```
| | Brier loss | Log loss | Precision | Recall | F1 | Roc auc |
| --- | --- | --- | --- | --- | --- | --- |
| Classifier | | | | | | |
| Logistic | 0.098921 | 0.323178 | 0.872009 | 0.851408 | 0.861586 | 0.863157 |
| SVC | 0.144944 | 0.465647 | 0.872201 | 0.851772 | 0.861865 | 0.863420 |
| SVC + Isotonic | 0.099827 | 0.374535 | 0.853032 | 0.878041 | 0.865356 | 0.863306 |
| SVC + Sigmoid | 0.098760 | 0.321306 | 0.873703 | 0.848723 | 0.861032 | 0.862957 |
As with [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB") above, calibration improves both [Brier score loss](../../modules/model_evaluation#brier-score-loss) and [Log loss](../../modules/model_evaluation#log-loss) but does not alter the prediction accuracy measures (precision, recall and F1 score) much.
Summary
-------
Parametric sigmoid calibration can deal with situations where the calibration curve of the base classifier is sigmoid (e.g., for [`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC")) but not where it is transposed-sigmoid (e.g., [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB")). Non-parametric isotonic calibration can deal with both situations but may require more data to produce good results.
References
----------
**Total running time of the script:** ( 0 minutes 2.626 seconds)
[`Download Python source code: plot_calibration_curve.py`](https://scikit-learn.org/1.1/_downloads/85db957603c93bd3e0a4265ea6565b13/plot_calibration_curve.py)
[`Download Jupyter notebook: plot_calibration_curve.ipynb`](https://scikit-learn.org/1.1/_downloads/6d4f620ec6653356eb970c2a6ed62081/plot_calibration_curve.ipynb)
scikit_learn Comparison of Calibration of Classifiers Note
Click [here](#sphx-glr-download-auto-examples-calibration-plot-compare-calibration-py) to download the full example code or to run this example in your browser via Binder
Comparison of Calibration of Classifiers
========================================
Well calibrated classifiers are probabilistic classifiers for which the output of [predict\_proba](https://scikit-learn.org/1.1/glossary.html#term-predict_proba) can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify the samples such that for the samples to which it gave a [predict\_proba](https://scikit-learn.org/1.1/glossary.html#term-predict_proba) value close to 0.8, approximately 80% actually belong to the positive class.
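As a rough sanity check of this definition, one could compare the empirical fraction of positives among the samples that received a predicted probability near 0.8. The sketch below is purely illustrative and assumes a hypothetical fitted binary classifier `clf` and held-out data `X_test`, `y_test`:

```
import numpy as np

# Hypothetical fitted classifier `clf` and held-out data `X_test`, `y_test`
y_prob = clf.predict_proba(X_test)[:, 1]

# Samples whose predicted probability of the positive class is close to 0.8
mask = (y_prob > 0.75) & (y_prob < 0.85)

# For a well calibrated classifier, roughly 80% of these samples are positive
print(f"Fraction of positives in this bin: {y_test[mask].mean():.2f}")
```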
In this example we will compare the calibration of four different models: [Logistic regression](../../modules/linear_model#logistic-regression), [Gaussian Naive Bayes](../../modules/naive_bayes#gaussian-naive-bayes), [Random Forest Classifier](../../modules/ensemble#forest) and [Linear SVM](../../modules/svm#svm-classification).
Author: Jan Hendrik Metzen <[jhm@informatik.uni-bremen.de](mailto:jhm%40informatik.uni-bremen.de)> License: BSD 3 clause.
Dataset
-------
We will use a synthetic binary classification dataset with 100,000 samples and 20 features. Of the 20 features, only 2 are informative, 2 are redundant (random combinations of the informative features) and the remaining 16 are uninformative (random numbers). Of the 100,000 samples, 100 will be used for model fitting and the remaining for testing.
```
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(
n_samples=100_000, n_features=20, n_informative=2, n_redundant=2, random_state=42
)
train_samples = 100 # Samples used for training the models
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
shuffle=False,
test_size=100_000 - train_samples,
)
```
Calibration curves
------------------
Below, we train each of the four models with the small training dataset, then plot calibration curves (also known as reliability diagrams) using predicted probabilities of the test dataset. Calibration curves are created by binning predicted probabilities, then plotting the mean predicted probability in each bin against the observed frequency (‘fraction of positives’). Below the calibration curve, we plot a histogram showing the distribution of the predicted probabilities or more specifically, the number of samples in each predicted probability bin.
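For reference, this binning can also be computed directly with `calibration_curve`, which is the same procedure that `CalibrationDisplay` performs. A minimal sketch, assuming a classifier `clf` that has already been fitted on the training set:

```
from sklearn.calibration import calibration_curve

# Assuming `clf` has already been fitted on (X_train, y_train)
y_prob = clf.predict_proba(X_test)[:, 1]

# `prob_true`: fraction of positives in each bin (y-axis of the curve)
# `prob_pred`: mean predicted probability in each bin (x-axis of the curve)
prob_true, prob_pred = calibration_curve(y_test, y_prob, n_bins=10)
```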
```
import numpy as np
from sklearn.svm import LinearSVC
class NaivelyCalibratedLinearSVC(LinearSVC):
"""LinearSVC with `predict_proba` method that naively scales
`decision_function` output."""
def fit(self, X, y):
super().fit(X, y)
df = self.decision_function(X)
self.df_min_ = df.min()
self.df_max_ = df.max()
def predict_proba(self, X):
"""Min-max scale output of `decision_function` to [0,1]."""
df = self.decision_function(X)
calibrated_df = (df - self.df_min_) / (self.df_max_ - self.df_min_)
proba_pos_class = np.clip(calibrated_df, 0, 1)
proba_neg_class = 1 - proba_pos_class
proba = np.c_[proba_neg_class, proba_pos_class]
return proba
```
```
from sklearn.calibration import CalibrationDisplay
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
svc = NaivelyCalibratedLinearSVC(C=1.0)
rfc = RandomForestClassifier()
clf_list = [
(lr, "Logistic"),
(gnb, "Naive Bayes"),
(svc, "SVC"),
(rfc, "Random forest"),
]
```
```
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
fig = plt.figure(figsize=(10, 10))
gs = GridSpec(4, 2)
colors = plt.cm.get_cmap("Dark2")
ax_calibration_curve = fig.add_subplot(gs[:2, :2])
calibration_displays = {}
for i, (clf, name) in enumerate(clf_list):
clf.fit(X_train, y_train)
display = CalibrationDisplay.from_estimator(
clf,
X_test,
y_test,
n_bins=10,
name=name,
ax=ax_calibration_curve,
color=colors(i),
)
calibration_displays[name] = display
ax_calibration_curve.grid()
ax_calibration_curve.set_title("Calibration plots")
# Add histogram
grid_positions = [(2, 0), (2, 1), (3, 0), (3, 1)]
for i, (_, name) in enumerate(clf_list):
row, col = grid_positions[i]
ax = fig.add_subplot(gs[row, col])
ax.hist(
calibration_displays[name].y_prob,
range=(0, 1),
bins=10,
label=name,
color=colors(i),
)
ax.set(title=name, xlabel="Mean predicted probability", ylabel="Count")
plt.tight_layout()
plt.show()
```
[`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") returns well calibrated predictions as it directly optimizes log-loss. In contrast, the other methods return biased probabilities, with different biases for each method:
* [`GaussianNB`](../../modules/generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB") tends to push probabilities to 0 or 1 (see histogram). This is mainly because the naive Bayes equation only provides a correct estimate of the probabilities when the assumption that features are conditionally independent holds [[2]](#id6). However, features tend to be positively correlated, as is the case with this dataset, which contains 2 features generated as random linear combinations of the informative features. These correlated features are effectively being ‘counted twice’, resulting in pushing the predicted probabilities towards 0 and 1 [[3]](#id7).
* [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") shows the opposite behavior: the histograms show peaks at approx. 0.2 and 0.9 probability, while probabilities close to 0 or 1 are very rare. An explanation for this is given by Niculescu-Mizil and Caruana [[1]](#id5): “Methods such as bagging and random forests that average predictions from a base set of models can have difficulty making predictions near 0 and 1 because variance in the underlying base models will bias predictions that should be near zero or one away from these values. Because predictions are restricted to the interval [0,1], errors caused by variance tend to be one-sided near zero and one. For example, if a model should predict p = 0 for a case, the only way bagging can achieve this is if all bagged trees predict zero. If we add noise to the trees that bagging is averaging over, this noise will cause some trees to predict values larger than 0 for this case, thus moving the average prediction of the bagged ensemble away from 0. We observe this effect most strongly with random forests because the base-level trees trained with random forests have relatively high variance due to feature subsetting.” As a result, the calibration curve shows a characteristic sigmoid shape, indicating that the classifier is under-confident and could return probabilities closer to 0 or 1.
* To show the performance of [`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"), we naively scale the output of the [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function) into [0, 1] by applying min-max scaling, since SVC does not output probabilities by default. [`LinearSVC`](../../modules/generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") shows an even more sigmoid curve than the [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier"), which is typical for maximum-margin methods [[1]](#id5) as they focus on difficult to classify samples that are close to the decision boundary (the support vectors).
References
----------
**Total running time of the script:** ( 0 minutes 1.028 seconds)
[`Download Python source code: plot_compare_calibration.py`](https://scikit-learn.org/1.1/_downloads/a126bc8be59b9a8c240264570cda0bcb/plot_compare_calibration.py)
[`Download Jupyter notebook: plot_compare_calibration.ipynb`](https://scikit-learn.org/1.1/_downloads/757941223692da355c1f7de747af856d/plot_compare_calibration.ipynb)
scikit_learn Gaussian Processes regression: basic introductory example Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpr-noisy-targets-py) to download the full example code or to run this example in your browser via Binder
Gaussian Processes regression: basic introductory example
=========================================================
A simple one-dimensional regression example computed in two different ways:
1. A noise-free case
2. A noisy case with known noise-level per datapoint
In both cases, the kernel’s parameters are estimated using the maximum likelihood principle.
The figures illustrate the interpolating property of the Gaussian Process model as well as its probabilistic nature in the form of a pointwise 95% confidence interval.
Note that `alpha` is a parameter to control the strength of the Tikhonov regularization on the assumed training points’ covariance matrix.
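Concretely, `alpha` is added to the diagonal of the training covariance matrix, which can be interpreted as assuming additional Gaussian noise of variance `alpha` on the training targets. A minimal sketch of the regularized matrix, with a hypothetical kernel and training inputs chosen only for illustration:

```
import numpy as np
from sklearn.gaussian_process.kernels import RBF

# Hypothetical kernel and training inputs, for illustration only
kernel = RBF(length_scale=1.0)
X_train = np.linspace(0, 10, num=5).reshape(-1, 1)

alpha = 1e-2
# Covariance matrix effectively used by GaussianProcessRegressor(alpha=alpha) at fit time
K = kernel(X_train) + alpha * np.eye(X_train.shape[0])
```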
```
# Author: Vincent Dubourg <[email protected]>
# Jake Vanderplas <[email protected]>
# Jan Hendrik Metzen <[email protected]>
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
```
Dataset generation
------------------
We will start by generating a synthetic dataset. The true generative process is defined as \(f(x) = x \sin(x)\).
```
import numpy as np
X = np.linspace(start=0, stop=10, num=1_000).reshape(-1, 1)
y = np.squeeze(X * np.sin(X))
```
```
import matplotlib.pyplot as plt
plt.plot(X, y, label=r"$f(x) = x \sin(x)$", linestyle="dotted")
plt.legend()
plt.xlabel("$x$")
plt.ylabel("$f(x)$")
_ = plt.title("True generative process")
```
We will use this dataset in the next experiment to illustrate how Gaussian process regression works.
Example with noise-free target
------------------------------
In this first example, we will use the true generative process without adding any noise. For training the Gaussian process regression, we will select only a few samples.
```
rng = np.random.RandomState(1)
training_indices = rng.choice(np.arange(y.size), size=6, replace=False)
X_train, y_train = X[training_indices], y[training_indices]
```
Now, we fit a Gaussian process on these few training data samples. We will use a radial basis function (RBF) kernel and a constant parameter to fit the amplitude.
```
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
kernel = 1 * RBF(length_scale=1.0, length_scale_bounds=(1e-2, 1e2))
gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9)
gaussian_process.fit(X_train, y_train)
gaussian_process.kernel_
```
```
5.02**2 * RBF(length_scale=1.43)
```
After fitting our model, we see that the hyperparameters of the kernel have been optimized. Now, we will use our kernel to compute the mean prediction of the full dataset and plot the 95% confidence interval.
```
mean_prediction, std_prediction = gaussian_process.predict(X, return_std=True)
plt.plot(X, y, label=r"$f(x) = x \sin(x)$", linestyle="dotted")
plt.scatter(X_train, y_train, label="Observations")
plt.plot(X, mean_prediction, label="Mean prediction")
plt.fill_between(
X.ravel(),
mean_prediction - 1.96 * std_prediction,
mean_prediction + 1.96 * std_prediction,
alpha=0.5,
label=r"95% confidence interval",
)
plt.legend()
plt.xlabel("$x$")
plt.ylabel("$f(x)$")
_ = plt.title("Gaussian process regression on noise-free dataset")
```
We see that for a prediction made on a data point close to one from the training set, the 95% confidence interval has a small amplitude. Whenever a sample falls far from the training data, our model’s prediction is less accurate and less precise (higher uncertainty).
Example with noisy targets
--------------------------
We can repeat a similar experiment, this time adding noise to the target. This will let us see the effect of the noise on the fitted model.
We add some random Gaussian noise to the target with an arbitrary standard deviation.
```
noise_std = 0.75
y_train_noisy = y_train + rng.normal(loc=0.0, scale=noise_std, size=y_train.shape)
```
We create a similar Gaussian process model. In addition to the kernel, this time, we specify the parameter `alpha` which can be interpreted as the variance of a Gaussian noise.
```
gaussian_process = GaussianProcessRegressor(
kernel=kernel, alpha=noise_std**2, n_restarts_optimizer=9
)
gaussian_process.fit(X_train, y_train_noisy)
mean_prediction, std_prediction = gaussian_process.predict(X, return_std=True)
```
Let’s plot the mean prediction and the uncertainty region as before.
```
plt.plot(X, y, label=r"$f(x) = x \sin(x)$", linestyle="dotted")
plt.errorbar(
X_train,
y_train_noisy,
noise_std,
linestyle="None",
color="tab:blue",
marker=".",
markersize=10,
label="Observations",
)
plt.plot(X, mean_prediction, label="Mean prediction")
plt.fill_between(
X.ravel(),
mean_prediction - 1.96 * std_prediction,
mean_prediction + 1.96 * std_prediction,
color="tab:orange",
alpha=0.5,
label=r"95% confidence interval",
)
plt.legend()
plt.xlabel("$x$")
plt.ylabel("$f(x)$")
_ = plt.title("Gaussian process regression on a noisy dataset")
```
The noise affects the predictions close to the training samples: the predictive uncertainty near the training samples is larger because we explicitly model a given level of target noise that is independent of the input variable.
**Total running time of the script:** ( 0 minutes 0.440 seconds)
[`Download Python source code: plot_gpr_noisy_targets.py`](https://scikit-learn.org/1.1/_downloads/ba8fe72a3ef4d20bcdd7f2c95e635271/plot_gpr_noisy_targets.py)
[`Download Jupyter notebook: plot_gpr_noisy_targets.ipynb`](https://scikit-learn.org/1.1/_downloads/517bdba67b49ba04ad16cff789c15fba/plot_gpr_noisy_targets.ipynb)
scikit_learn Illustration of Gaussian process classification (GPC) on the XOR dataset Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpc-xor-py) to download the full example code or to run this example in your browser via Binder
Illustration of Gaussian process classification (GPC) on the XOR dataset
========================================================================
This example illustrates GPC on XOR data. Compared are a stationary, isotropic kernel (RBF) and a non-stationary kernel (DotProduct). On this particular dataset, the DotProduct kernel obtains considerably better results because the class-boundaries are linear and coincide with the coordinate axes. In general, stationary kernels often obtain better results.
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/gaussian_process/kernels.py:430: ConvergenceWarning: The optimal value found for dimension 0 of parameter k1__constant_value is close to the specified upper bound 100000.0. Increasing the bound and calling fit again may find a better value.
warnings.warn(
```
```
# Authors: Jan Hendrik Metzen <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, DotProduct
xx, yy = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
Y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)
# fit the model
plt.figure(figsize=(10, 5))
kernels = [1.0 * RBF(length_scale=1.15), 1.0 * DotProduct(sigma_0=1.0) ** 2]
for i, kernel in enumerate(kernels):
clf = GaussianProcessClassifier(kernel=kernel, warm_start=True).fit(X, Y)
# plot the decision function for each datapoint on the grid
Z = clf.predict_proba(np.vstack((xx.ravel(), yy.ravel())).T)[:, 1]
Z = Z.reshape(xx.shape)
plt.subplot(1, 2, i + 1)
image = plt.imshow(
Z,
interpolation="nearest",
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
aspect="auto",
origin="lower",
cmap=plt.cm.PuOr_r,
)
contours = plt.contour(xx, yy, Z, levels=[0.5], linewidths=2, colors=["k"])
plt.scatter(X[:, 0], X[:, 1], s=30, c=Y, cmap=plt.cm.Paired, edgecolors=(0, 0, 0))
plt.xticks(())
plt.yticks(())
plt.axis([-3, 3, -3, 3])
plt.colorbar(image)
plt.title(
"%s\n Log-Marginal-Likelihood:%.3f"
% (clf.kernel_, clf.log_marginal_likelihood(clf.kernel_.theta)),
fontsize=12,
)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.390 seconds)
[`Download Python source code: plot_gpc_xor.py`](https://scikit-learn.org/1.1/_downloads/08fc4f471ae40388eb535678346dc9d1/plot_gpc_xor.py)
[`Download Jupyter notebook: plot_gpc_xor.ipynb`](https://scikit-learn.org/1.1/_downloads/578d975e1bd2f15587da4ccbce0a7d14/plot_gpc_xor.ipynb)
scikit_learn Illustration of prior and posterior Gaussian process for different kernels Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpr-prior-posterior-py) to download the full example code or to run this example in your browser via Binder
Illustration of prior and posterior Gaussian process for different kernels
==========================================================================
This example illustrates the prior and posterior of a [`GaussianProcessRegressor`](../../modules/generated/sklearn.gaussian_process.gaussianprocessregressor#sklearn.gaussian_process.GaussianProcessRegressor "sklearn.gaussian_process.GaussianProcessRegressor") with different kernels. Mean, standard deviation, and 5 samples are shown for both prior and posterior distributions.
Here, we only give some illustration. To know more about kernels’ formulation, refer to the [User Guide](../../modules/gaussian_process#gp-kernels).
```
# Authors: Jan Hendrik Metzen <[email protected]>
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
```
Helper function
---------------
Before presenting each individual kernel available for Gaussian processes, we will define a helper function for plotting samples drawn from the Gaussian process.
This function takes a [`GaussianProcessRegressor`](../../modules/generated/sklearn.gaussian_process.gaussianprocessregressor#sklearn.gaussian_process.GaussianProcessRegressor "sklearn.gaussian_process.GaussianProcessRegressor") model and draws samples from the Gaussian process. If the model has not been fit, the samples are drawn from the prior distribution; after fitting, they are drawn from the posterior distribution.
```
import matplotlib.pyplot as plt
import numpy as np
def plot_gpr_samples(gpr_model, n_samples, ax):
"""Plot samples drawn from the Gaussian process model.
If the Gaussian process model is not trained then the drawn samples are
drawn from the prior distribution. Otherwise, the samples are drawn from
the posterior distribution. Be aware that a sample here corresponds to a
function.
Parameters
----------
gpr_model : `GaussianProcessRegressor`
A :class:`~sklearn.gaussian_process.GaussianProcessRegressor` model.
n_samples : int
The number of samples to draw from the Gaussian process distribution.
ax : matplotlib axis
The matplotlib axis where to plot the samples.
"""
x = np.linspace(0, 5, 100)
X = x.reshape(-1, 1)
y_mean, y_std = gpr_model.predict(X, return_std=True)
y_samples = gpr_model.sample_y(X, n_samples)
for idx, single_prior in enumerate(y_samples.T):
ax.plot(
x,
single_prior,
linestyle="--",
alpha=0.7,
label=f"Sampled function #{idx + 1}",
)
ax.plot(x, y_mean, color="black", label="Mean")
ax.fill_between(
x,
y_mean - y_std,
y_mean + y_std,
alpha=0.1,
color="black",
label=r"$\pm$ 1 std. dev.",
)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_ylim([-3, 3])
```
Dataset and Gaussian process generation
---------------------------------------
We will create a training dataset that we will use in the different sections.
```
rng = np.random.RandomState(4)
X_train = rng.uniform(0, 5, 10).reshape(-1, 1)
y_train = np.sin((X_train[:, 0] - 2.5) ** 2)
n_samples = 5
```
Kernel cookbook
---------------
In this section, we illustrate some samples drawn from the prior and posterior distributions of the Gaussian process with different kernels.
### Radial Basis Function kernel
```
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
kernel = 1.0 * RBF(length_scale=1.0, length_scale_bounds=(1e-1, 10.0))
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)
fig, axs = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(10, 8))
# plot prior
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[0])
axs[0].set_title("Samples from prior distribution")
# plot posterior
gpr.fit(X_train, y_train)
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[1])
axs[1].scatter(X_train[:, 0], y_train, color="red", zorder=10, label="Observations")
axs[1].legend(bbox_to_anchor=(1.05, 1.5), loc="upper left")
axs[1].set_title("Samples from posterior distribution")
fig.suptitle("Radial Basis Function kernel", fontsize=18)
plt.tight_layout()
```
```
print(f"Kernel parameters before fit:\n{kernel})")
print(
f"Kernel parameters after fit: \n{gpr.kernel_} \n"
f"Log-likelihood: {gpr.log_marginal_likelihood(gpr.kernel_.theta):.3f}"
)
```
```
Kernel parameters before fit:
1**2 * RBF(length_scale=1))
Kernel parameters after fit:
0.594**2 * RBF(length_scale=0.279)
Log-likelihood: -0.067
```
### Rational Quadratic kernel
```
from sklearn.gaussian_process.kernels import RationalQuadratic
kernel = 1.0 * RationalQuadratic(length_scale=1.0, alpha=0.1, alpha_bounds=(1e-5, 1e15))
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)
fig, axs = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(10, 8))
# plot prior
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[0])
axs[0].set_title("Samples from prior distribution")
# plot posterior
gpr.fit(X_train, y_train)
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[1])
axs[1].scatter(X_train[:, 0], y_train, color="red", zorder=10, label="Observations")
axs[1].legend(bbox_to_anchor=(1.05, 1.5), loc="upper left")
axs[1].set_title("Samples from posterior distribution")
fig.suptitle("Rational Quadratic kernel", fontsize=18)
plt.tight_layout()
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/gaussian_process/_gpr.py:432: UserWarning: Predicted variances smaller than 0. Setting those variances to 0.
warnings.warn(
/home/runner/work/scikit-learn/scikit-learn/sklearn/gaussian_process/_gpr.py:479: RuntimeWarning: covariance is not positive-semidefinite.
y_samples = rng.multivariate_normal(y_mean, y_cov, n_samples).T
```
```
print(f"Kernel parameters before fit:\n{kernel})")
print(
f"Kernel parameters after fit: \n{gpr.kernel_} \n"
f"Log-likelihood: {gpr.log_marginal_likelihood(gpr.kernel_.theta):.3f}"
)
```
```
Kernel parameters before fit:
1**2 * RationalQuadratic(alpha=0.1, length_scale=1))
Kernel parameters after fit:
0.594**2 * RationalQuadratic(alpha=8.66e+09, length_scale=0.279)
Log-likelihood: -0.054
```
### Exp-Sine-Squared kernel
```
from sklearn.gaussian_process.kernels import ExpSineSquared
kernel = 1.0 * ExpSineSquared(
length_scale=1.0,
periodicity=3.0,
length_scale_bounds=(0.1, 10.0),
periodicity_bounds=(1.0, 10.0),
)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)
fig, axs = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(10, 8))
# plot prior
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[0])
axs[0].set_title("Samples from prior distribution")
# plot posterior
gpr.fit(X_train, y_train)
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[1])
axs[1].scatter(X_train[:, 0], y_train, color="red", zorder=10, label="Observations")
axs[1].legend(bbox_to_anchor=(1.05, 1.5), loc="upper left")
axs[1].set_title("Samples from posterior distribution")
fig.suptitle("Exp-Sine-Squared kernel", fontsize=18)
plt.tight_layout()
```
```
print(f"Kernel parameters before fit:\n{kernel})")
print(
f"Kernel parameters after fit: \n{gpr.kernel_} \n"
f"Log-likelihood: {gpr.log_marginal_likelihood(gpr.kernel_.theta):.3f}"
)
```
```
Kernel parameters before fit:
1**2 * ExpSineSquared(length_scale=1, periodicity=3))
Kernel parameters after fit:
0.799**2 * ExpSineSquared(length_scale=0.791, periodicity=2.87)
Log-likelihood: 3.394
```
### Dot-product kernel
```
from sklearn.gaussian_process.kernels import ConstantKernel, DotProduct
kernel = ConstantKernel(0.1, (0.01, 10.0)) * (
DotProduct(sigma_0=1.0, sigma_0_bounds=(0.1, 10.0)) ** 2
)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)
fig, axs = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(10, 8))
# plot prior
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[0])
axs[0].set_title("Samples from prior distribution")
# plot posterior
gpr.fit(X_train, y_train)
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[1])
axs[1].scatter(X_train[:, 0], y_train, color="red", zorder=10, label="Observations")
axs[1].legend(bbox_to_anchor=(1.05, 1.5), loc="upper left")
axs[1].set_title("Samples from posterior distribution")
fig.suptitle("Dot-product kernel", fontsize=18)
plt.tight_layout()
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/gaussian_process/_gpr.py:616: ConvergenceWarning: lbfgs failed to converge (status=2):
ABNORMAL_TERMINATION_IN_LNSRCH.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
_check_optimize_result("lbfgs", opt_res)
/home/runner/work/scikit-learn/scikit-learn/sklearn/gaussian_process/_gpr.py:432: UserWarning: Predicted variances smaller than 0. Setting those variances to 0.
warnings.warn(
```
```
print(f"Kernel parameters before fit:\n{kernel})")
print(
f"Kernel parameters after fit: \n{gpr.kernel_} \n"
f"Log-likelihood: {gpr.log_marginal_likelihood(gpr.kernel_.theta):.3f}"
)
```
```
Kernel parameters before fit:
0.316**2 * DotProduct(sigma_0=1) ** 2)
Kernel parameters after fit:
3**2 * DotProduct(sigma_0=7.8) ** 2
Log-likelihood: -7173415029.706
```
### Matérn kernel
```
from sklearn.gaussian_process.kernels import Matern
kernel = 1.0 * Matern(length_scale=1.0, length_scale_bounds=(1e-1, 10.0), nu=1.5)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)
fig, axs = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(10, 8))
# plot prior
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[0])
axs[0].set_title("Samples from prior distribution")
# plot posterior
gpr.fit(X_train, y_train)
plot_gpr_samples(gpr, n_samples=n_samples, ax=axs[1])
axs[1].scatter(X_train[:, 0], y_train, color="red", zorder=10, label="Observations")
axs[1].legend(bbox_to_anchor=(1.05, 1.5), loc="upper left")
axs[1].set_title("Samples from posterior distribution")
fig.suptitle("Matérn kernel", fontsize=18)
plt.tight_layout()
```
```
print(f"Kernel parameters before fit:\n{kernel})")
print(
f"Kernel parameters after fit: \n{gpr.kernel_} \n"
f"Log-likelihood: {gpr.log_marginal_likelihood(gpr.kernel_.theta):.3f}"
)
```
```
Kernel parameters before fit:
1**2 * Matern(length_scale=1, nu=1.5))
Kernel parameters after fit:
0.609**2 * Matern(length_scale=0.484, nu=1.5)
Log-likelihood: -1.185
```
**Total running time of the script:** ( 0 minutes 1.034 seconds)
[`Download Python source code: plot_gpr_prior_posterior.py`](https://scikit-learn.org/1.1/_downloads/1bcb2039afa126da41f1cea42b4a5866/plot_gpr_prior_posterior.py)
[`Download Jupyter notebook: plot_gpr_prior_posterior.ipynb`](https://scikit-learn.org/1.1/_downloads/75a08bb798ae7156529a808a0e08e7b4/plot_gpr_prior_posterior.ipynb)
scikit_learn Gaussian process classification (GPC) on iris dataset Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpc-iris-py) to download the full example code or to run this example in your browser via Binder
Gaussian process classification (GPC) on iris dataset
=====================================================
This example illustrates the predicted probability of GPC for an isotropic and anisotropic RBF kernel on a two-dimensional version of the iris dataset. The anisotropic RBF kernel obtains slightly higher log-marginal-likelihood by assigning different length-scales to the two feature dimensions.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = np.array(iris.target, dtype=int)
h = 0.02 # step size in the mesh
kernel = 1.0 * RBF([1.0])
gpc_rbf_isotropic = GaussianProcessClassifier(kernel=kernel).fit(X, y)
kernel = 1.0 * RBF([1.0, 1.0])
gpc_rbf_anisotropic = GaussianProcessClassifier(kernel=kernel).fit(X, y)
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
titles = ["Isotropic RBF", "Anisotropic RBF"]
plt.figure(figsize=(10, 5))
for i, clf in enumerate((gpc_rbf_isotropic, gpc_rbf_anisotropic)):
# Plot the predicted probabilities. For that, we will assign a color to
    # each point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(1, 2, i + 1)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape((xx.shape[0], xx.shape[1], 3))
plt.imshow(Z, extent=(x_min, x_max, y_min, y_max), origin="lower")
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=np.array(["r", "g", "b"])[y], edgecolors=(0, 0, 0))
plt.xlabel("Sepal length")
plt.ylabel("Sepal width")
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(
"%s, LML: %.3f" % (titles[i], clf.log_marginal_likelihood(clf.kernel_.theta))
)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.116 seconds)
[`Download Python source code: plot_gpc_iris.py`](https://scikit-learn.org/1.1/_downloads/44d6b1038c2225e954af6a4f193c2a94/plot_gpc_iris.py)
[`Download Jupyter notebook: plot_gpc_iris.ipynb`](https://scikit-learn.org/1.1/_downloads/a2f99bb3ce0cbeefa9d2e44fe9daf4cd/plot_gpc_iris.ipynb)
scikit_learn Gaussian process regression (GPR) with noise-level estimation Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpr-noisy-py) to download the full example code or to run this example in your browser via Binder
Gaussian process regression (GPR) with noise-level estimation
=============================================================
This example shows the ability of the [`WhiteKernel`](../../modules/generated/sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel "sklearn.gaussian_process.kernels.WhiteKernel") to estimate the noise level in the data. Moreover, we show the importance of kernel hyperparameters initialization.
```
# Authors: Jan Hendrik Metzen <[email protected]>
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
```
Data generation
---------------
We will work in a setting where `X` will contain a single feature. We create a function that will generate the target to be predicted. We will add an option to add some noise to the generated target.
```
import numpy as np
def target_generator(X, add_noise=False):
target = 0.5 + np.sin(3 * X)
if add_noise:
rng = np.random.RandomState(1)
target += rng.normal(0, 0.3, size=target.shape)
return target.squeeze()
```
Let’s have a look at the target generator without adding any noise, to observe the signal that we would like to predict.
```
X = np.linspace(0, 5, num=30).reshape(-1, 1)
y = target_generator(X, add_noise=False)
```
```
import matplotlib.pyplot as plt
plt.plot(X, y, label="Expected signal")
plt.legend()
plt.xlabel("X")
_ = plt.ylabel("y")
```
The target transforms the input `X` using a sine function. Now, we will generate a few noisy training samples. To illustrate the noise level, we will plot the true signal together with the noisy training samples.
```
rng = np.random.RandomState(0)
X_train = rng.uniform(0, 5, size=20).reshape(-1, 1)
y_train = target_generator(X_train, add_noise=True)
```
```
plt.plot(X, y, label="Expected signal")
plt.scatter(
x=X_train[:, 0],
y=y_train,
color="black",
alpha=0.4,
label="Observations",
)
plt.legend()
plt.xlabel("X")
_ = plt.ylabel("y")
```
Optimisation of kernel hyperparameters in GPR
---------------------------------------------
Now, we will create a [`GaussianProcessRegressor`](../../modules/generated/sklearn.gaussian_process.gaussianprocessregressor#sklearn.gaussian_process.GaussianProcessRegressor "sklearn.gaussian_process.GaussianProcessRegressor") using an additive kernel that combines an [`RBF`](../../modules/generated/sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF "sklearn.gaussian_process.kernels.RBF") and a [`WhiteKernel`](../../modules/generated/sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel "sklearn.gaussian_process.kernels.WhiteKernel"). The [`WhiteKernel`](../../modules/generated/sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel "sklearn.gaussian_process.kernels.WhiteKernel") is a kernel that is able to estimate the amount of noise present in the data, while the [`RBF`](../../modules/generated/sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF "sklearn.gaussian_process.kernels.RBF") serves to fit the non-linearity between the data and the target.
However, we will show that the hyperparameter space contains several local minima. This highlights the importance of the initial hyperparameter values.
We will create a model using a kernel with a high noise level and a large length scale, which will explain all variations in the data by noise.
```
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
kernel = 1.0 * RBF(length_scale=1e1, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(
noise_level=1, noise_level_bounds=(1e-5, 1e1)
)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=0.0)
gpr.fit(X_train, y_train)
y_mean, y_std = gpr.predict(X, return_std=True)
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/gaussian_process/kernels.py:430: ConvergenceWarning: The optimal value found for dimension 0 of parameter k1__k2__length_scale is close to the specified upper bound 1000.0. Increasing the bound and calling fit again may find a better value.
warnings.warn(
```
```
plt.plot(X, y, label="Expected signal")
plt.scatter(x=X_train[:, 0], y=y_train, color="black", alpha=0.4, label="Observations")
plt.errorbar(X, y_mean, y_std)
plt.legend()
plt.xlabel("X")
plt.ylabel("y")
_ = plt.title(
f"Initial: {kernel}\nOptimum: {gpr.kernel_}\nLog-Marginal-Likelihood: "
f"{gpr.log_marginal_likelihood(gpr.kernel_.theta)}",
fontsize=8,
)
```
We see that the optimum kernel found still has a high noise level and an even larger length scale. Furthermore, we observe that the model does not provide faithful predictions.
Now, we will initialize the [`RBF`](../../modules/generated/sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF "sklearn.gaussian_process.kernels.RBF") with a smaller `length_scale` and the [`WhiteKernel`](../../modules/generated/sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel "sklearn.gaussian_process.kernels.WhiteKernel") with a smaller initial noise level and a smaller noise level lower bound.
```
kernel = 1.0 * RBF(length_scale=1e-1, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(
noise_level=1e-2, noise_level_bounds=(1e-10, 1e1)
)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=0.0)
gpr.fit(X_train, y_train)
y_mean, y_std = gpr.predict(X, return_std=True)
```
```
plt.plot(X, y, label="Expected signal")
plt.scatter(x=X_train[:, 0], y=y_train, color="black", alpha=0.4, label="Observations")
plt.errorbar(X, y_mean, y_std)
plt.legend()
plt.xlabel("X")
plt.ylabel("y")
_ = plt.title(
f"Initial: {kernel}\nOptimum: {gpr.kernel_}\nLog-Marginal-Likelihood: "
f"{gpr.log_marginal_likelihood(gpr.kernel_.theta)}",
fontsize=8,
)
```
First, we see that the model’s predictions are more precise than the previous model’s: this new model is able to estimate the noise-free functional relationship.
Looking at the kernel hyperparameters, we see that the best combination found has a smaller noise level and shorter length scale than the first model.
We can inspect the Log-Marginal-Likelihood (LML) of [`GaussianProcessRegressor`](../../modules/generated/sklearn.gaussian_process.gaussianprocessregressor#sklearn.gaussian_process.GaussianProcessRegressor "sklearn.gaussian_process.GaussianProcessRegressor") for different hyperparameters to get a sense of the local minima.
```
from matplotlib.colors import LogNorm
length_scale = np.logspace(-2, 4, num=50)
noise_level = np.logspace(-2, 1, num=50)
length_scale_grid, noise_level_grid = np.meshgrid(length_scale, noise_level)
log_marginal_likelihood = [
gpr.log_marginal_likelihood(theta=np.log([0.36, scale, noise]))
for scale, noise in zip(length_scale_grid.ravel(), noise_level_grid.ravel())
]
log_marginal_likelihood = np.reshape(
log_marginal_likelihood, newshape=noise_level_grid.shape
)
```
```
vmin, vmax = (-log_marginal_likelihood).min(), 50
level = np.around(np.logspace(np.log10(vmin), np.log10(vmax), num=50), decimals=1)
plt.contour(
length_scale_grid,
noise_level_grid,
-log_marginal_likelihood,
levels=level,
norm=LogNorm(vmin=vmin, vmax=vmax),
)
plt.colorbar()
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Length-scale")
plt.ylabel("Noise-level")
plt.title("Log-marginal-likelihood")
plt.show()
```
We see that there are two local minima that correspond to the combinations of hyperparameters previously found. Depending on the initial values for the hyperparameters, the gradient-based optimization may or may not converge to the best model. It is thus important to repeat the optimization several times for different initializations.
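In practice, such repeated restarts can be requested through the `n_restarts_optimizer` parameter of `GaussianProcessRegressor`. A short sketch reusing the kernel and training data defined above (the exact number of restarts is an arbitrary choice here):

```
from sklearn.gaussian_process import GaussianProcessRegressor

# Re-run the hyperparameter optimization from several random initializations
# drawn within the kernel's bounds; the best result is kept.
gpr_restarted = GaussianProcessRegressor(
    kernel=kernel, alpha=0.0, n_restarts_optimizer=10, random_state=0
)
gpr_restarted.fit(X_train, y_train)
print(gpr_restarted.kernel_)
```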
**Total running time of the script:** ( 0 minutes 2.431 seconds)
[`Download Python source code: plot_gpr_noisy.py`](https://scikit-learn.org/1.1/_downloads/aaca2576b757b51627adfee40c458ed4/plot_gpr_noisy.py)
[`Download Jupyter notebook: plot_gpr_noisy.ipynb`](https://scikit-learn.org/1.1/_downloads/9235188b9f885ba3542bf0529565fdfe/plot_gpr_noisy.ipynb)
scikit_learn Gaussian process regression (GPR) on Mauna Loa CO2 data Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpr-co2-py) to download the full example code or to run this example in your browser via Binder
Gaussian process regression (GPR) on Mauna Loa CO2 data
=======================================================
This example is based on Section 5.4.3 of “Gaussian Processes for Machine Learning” [[RW2006]](../../modules/gaussian_process#rw2006). It illustrates an example of complex kernel engineering and hyperparameter optimization using gradient ascent on the log-marginal-likelihood. The data consists of the monthly average atmospheric CO2 concentrations (in parts per million by volume (ppm)) collected at the Mauna Loa Observatory in Hawaii, between 1958 and 2001. The objective is to model the CO2 concentration as a function of the time \(t\) and extrapolate for years after 2001.
```
print(__doc__)
# Authors: Jan Hendrik Metzen <[email protected]>
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
```
Build the dataset
-----------------
We will derive a dataset from the Mauna Loa Observatory that collected air samples. We are interested in estimating the concentration of CO2 and extrapolating it for future years. First, we load the original dataset available in OpenML.
```
from sklearn.datasets import fetch_openml
co2 = fetch_openml(data_id=41187, as_frame=True)
co2.frame.head()
```
| | year | month | day | weight | flag | station | co2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1958.0 | 3.0 | 29.0 | 4.0 | 0.0 | MLO | 316.1 |
| 1 | 1958.0 | 4.0 | 5.0 | 6.0 | 0.0 | MLO | 317.3 |
| 2 | 1958.0 | 4.0 | 12.0 | 4.0 | 0.0 | MLO | 317.6 |
| 3 | 1958.0 | 4.0 | 19.0 | 6.0 | 0.0 | MLO | 317.5 |
| 4 | 1958.0 | 4.0 | 26.0 | 2.0 | 0.0 | MLO | 316.4 |
First, we process the original dataframe to create a date index and select only the CO2 column.
```
import pandas as pd
co2_data = co2.frame
co2_data["date"] = pd.to_datetime(co2_data[["year", "month", "day"]])
co2_data = co2_data[["date", "co2"]].set_index("date")
co2_data.head()
```
| | co2 |
| --- | --- |
| date | |
| 1958-03-29 | 316.1 |
| 1958-04-05 | 317.3 |
| 1958-04-12 | 317.6 |
| 1958-04-19 | 317.5 |
| 1958-04-26 | 316.4 |
```
co2_data.index.min(), co2_data.index.max()
```
```
(Timestamp('1958-03-29 00:00:00'), Timestamp('2001-12-29 00:00:00'))
```
We see that we get the CO2 concentration for some days between March 1958 and December 2001. We can plot this raw information to get a better understanding.
```
import matplotlib.pyplot as plt
co2_data.plot()
plt.ylabel("CO$_2$ concentration (ppm)")
_ = plt.title("Raw air samples measurements from the Mauna Loa Observatory")
```
We will preprocess the dataset by taking a monthly average and dropping the months for which no measurements were collected. Such processing will have a smoothing effect on the data.
```
co2_data = co2_data.resample("M").mean().dropna(axis="index", how="any")
co2_data.plot()
plt.ylabel("Monthly average of CO$_2$ concentration (ppm)")
_ = plt.title(
"Monthly average of air samples measurements\nfrom the Mauna Loa Observatory"
)
```
The idea in this example is to predict the CO2 concentration as a function of the date. We are also interested in extrapolating for the years after 2001.
As a first step, we will separate the data from the target to estimate. Since the data is a date, we will convert it into a numeric value.
```
X = (co2_data.index.year + co2_data.index.month / 12).to_numpy().reshape(-1, 1)
y = co2_data["co2"].to_numpy()
```
Design the proper kernel
------------------------
To design the kernel to use with our Gaussian process, we can make some assumptions regarding the data at hand. We observe several characteristics: a long-term rising trend, a pronounced seasonal variation and some smaller irregularities. We can use different appropriate kernels to capture each of these features.
First, the long-term rising trend could be fitted using a radial basis function (RBF) kernel with a large length-scale parameter. The RBF kernel with a large length-scale enforces this component to be smooth. A rising trend is not enforced, in order to leave this degree of freedom to our model. The specific length-scale and the amplitude are free hyperparameters.
```
from sklearn.gaussian_process.kernels import RBF
long_term_trend_kernel = 50.0**2 * RBF(length_scale=50.0)
```
The seasonal variation is explained by the periodic exponential sine squared kernel with a fixed periodicity of 1 year. The length-scale of this periodic component, controlling its smoothness, is a free parameter. In order to allow decaying away from exact periodicity, the product with an RBF kernel is taken. The length-scale of this RBF component controls the decay time and is a further free parameter. This type of kernel is also known as locally periodic kernel.
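Written out using the kernel definitions from the scikit-learn documentation, this locally periodic component is \(\sigma^2 \exp\!\left(-\frac{d(x_i, x_j)^2}{2\ell_{\mathrm{rbf}}^2}\right) \exp\!\left(-\frac{2\sin^2\!\left(\pi\, d(x_i, x_j)/p\right)}{\ell_{\mathrm{per}}^2}\right)\), where \(\sigma^2\) is the amplitude, \(\ell_{\mathrm{rbf}}\) controls how quickly the pattern can drift away from exact periodicity, \(\ell_{\mathrm{per}}\) controls the smoothness of the periodic part, and the periodicity \(p\) is fixed here to one year.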
```
from sklearn.gaussian_process.kernels import ExpSineSquared
seasonal_kernel = (
2.0**2
* RBF(length_scale=100.0)
* ExpSineSquared(length_scale=1.0, periodicity=1.0, periodicity_bounds="fixed")
)
```
The small irregularities are to be explained by a rational quadratic kernel component, whose length-scale and alpha parameter, which quantifies the diffuseness of the length-scales, are to be determined. A rational quadratic kernel is equivalent to an RBF kernel with several length-scales and will better accommodate the different irregularities.
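For reference, scikit-learn defines the rational quadratic kernel as \(k(x_i, x_j) = \left(1 + \frac{d(x_i, x_j)^2}{2\alpha\ell^2}\right)^{-\alpha}\), which can be seen as a scale mixture of RBF kernels with different length-scales; as \(\alpha \to \infty\) it reduces to a single RBF kernel with length-scale \(\ell\).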
```
from sklearn.gaussian_process.kernels import RationalQuadratic
irregularities_kernel = 0.5**2 * RationalQuadratic(length_scale=1.0, alpha=1.0)
```
Finally, the noise in the dataset can be accounted for with a kernel consisting of an RBF kernel contribution, which shall explain the correlated noise components such as local weather phenomena, and a white kernel contribution for the white noise. The relative amplitudes and the RBF’s length scale are further free parameters.
```
from sklearn.gaussian_process.kernels import WhiteKernel
noise_kernel = 0.1**2 * RBF(length_scale=0.1) + WhiteKernel(
noise_level=0.1**2, noise_level_bounds=(1e-5, 1e5)
)
```
Thus, our final kernel is the sum of all the previous kernels.
```
co2_kernel = (
long_term_trend_kernel + seasonal_kernel + irregularities_kernel + noise_kernel
)
co2_kernel
```
```
50**2 * RBF(length_scale=50) + 2**2 * RBF(length_scale=100) * ExpSineSquared(length_scale=1, periodicity=1) + 0.5**2 * RationalQuadratic(alpha=1, length_scale=1) + 0.1**2 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01)
```
Model fitting and extrapolation
-------------------------------
Now, we are ready to use a Gaussian process regressor and fit the available data. To follow the example from the literature, we will subtract the mean from the target. We could have used `normalize_y=True`. However, doing so would have also scaled the target (dividing `y` by its standard deviation). Thus, the hyperparameters of the different kernels would have had a different meaning since they would not have been expressed in ppm.
```
from sklearn.gaussian_process import GaussianProcessRegressor
y_mean = y.mean()
gaussian_process = GaussianProcessRegressor(kernel=co2_kernel, normalize_y=False)
gaussian_process.fit(X, y - y_mean)
```
```
GaussianProcessRegressor(kernel=50**2 * RBF(length_scale=50) + 2**2 * RBF(length_scale=100) * ExpSineSquared(length_scale=1, periodicity=1) + 0.5**2 * RationalQuadratic(alpha=1, length_scale=1) + 0.1**2 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01))
```
Now, we will use the Gaussian process to predict on:
* training data to inspect the goodness of fit;
* future data to see the extrapolation done by the model.
Thus, we create synthetic data from 1958 to the current month. In addition, we need to add the subtracted mean computed during training.
```
import datetime
import numpy as np
today = datetime.datetime.now()
current_month = today.year + today.month / 12
X_test = np.linspace(start=1958, stop=current_month, num=1_000).reshape(-1, 1)
mean_y_pred, std_y_pred = gaussian_process.predict(X_test, return_std=True)
mean_y_pred += y_mean
```
```
plt.plot(X, y, color="black", linestyle="dashed", label="Measurements")
plt.plot(X_test, mean_y_pred, color="tab:blue", alpha=0.4, label="Gaussian process")
plt.fill_between(
X_test.ravel(),
mean_y_pred - std_y_pred,
mean_y_pred + std_y_pred,
color="tab:blue",
alpha=0.2,
)
plt.legend()
plt.xlabel("Year")
plt.ylabel("Monthly average of CO$_2$ concentration (ppm)")
_ = plt.title(
"Monthly average of air samples measurements\nfrom the Mauna Loa Observatory"
)
```
Our fitted model is capable of fitting the previous data properly and extrapolating to future years with confidence.
Interpretation of kernel hyperparameters
----------------------------------------
Now, we can have a look at the hyperparameters of the kernel.
```
gaussian_process.kernel_
```
```
44.8**2 * RBF(length_scale=51.6) + 2.64**2 * RBF(length_scale=91.5) * ExpSineSquared(length_scale=1.48, periodicity=1) + 0.536**2 * RationalQuadratic(alpha=2.89, length_scale=0.968) + 0.188**2 * RBF(length_scale=0.122) + WhiteKernel(noise_level=0.0367)
```
Thus, most of the target signal, with the mean subtracted, is explained by a long-term rising trend of ~45 ppm with a length-scale of ~52 years. The periodic component has an amplitude of ~2.6 ppm, a decay time of ~90 years and a length-scale of ~1.5. The long decay time indicates that we have a component very close to a seasonal periodicity. The correlated noise has an amplitude of ~0.2 ppm with a length scale of ~0.12 years and a white-noise contribution of ~0.04 ppm. Thus, the overall noise level is very small, indicating that the data can be very well explained by the model.
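The individual fitted values can also be read programmatically rather than from the printed representation; a small sketch, assuming the fitted `gaussian_process` from above:

```
# `get_params()` exposes every (possibly nested) hyperparameter of the
# composed kernel by name; keep only the scalar values here.
for name, value in gaussian_process.kernel_.get_params().items():
    if isinstance(value, float):
        print(f"{name}: {value:.3g}")
```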
**Total running time of the script:** ( 0 minutes 9.054 seconds)
[`Download Python source code: plot_gpr_co2.py`](https://scikit-learn.org/1.1/_downloads/1b8827af01c9a70017a4739bcf2e21a8/plot_gpr_co2.py)
[`Download Jupyter notebook: plot_gpr_co2.ipynb`](https://scikit-learn.org/1.1/_downloads/91a0c94f9f7c19d59a0ad06e77512326/plot_gpr_co2.ipynb)
scikit_learn Gaussian processes on discrete data structures Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpr-on-structured-data-py) to download the full example code or to run this example in your browser via Binder
Gaussian processes on discrete data structures
==============================================
This example illustrates the use of Gaussian processes for regression and classification tasks on data that are not in fixed-length feature vector form. This is achieved through the use of kernel functions that operate directly on discrete structures such as variable-length sequences, trees, and graphs.
Specifically, here the input variables are some gene sequences stored as variable-length strings consisting of letters ‘A’, ‘T’, ‘C’, and ‘G’, while the output variables are floating point numbers and True/False labels in the regression and classification tasks, respectively.
A kernel between the gene sequences is defined using R-convolution [[1]](#id2) by integrating a binary letter-wise kernel over all pairs of letters among a pair of strings.
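As a small worked example of this construction (mirroring the `_f` method of the `SequenceKernel` class defined further below), the kernel value between two sequences is simply the sum of letter-pair similarities, with matching letters contributing 1.0 and mismatching letters contributing a baseline similarity:

```
# Toy letter-wise R-convolution kernel, for illustration only
def toy_kernel(s1, s2, baseline_similarity=0.5):
    return sum(1.0 if c1 == c2 else baseline_similarity for c1 in s1 for c2 in s2)

# "AG" vs "AT": (A,A)=1.0, (A,T)=0.5, (G,A)=0.5, (G,T)=0.5
print(toy_kernel("AG", "AT"))  # 2.5
```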
This example will generate three figures.
In the first figure, we visualize the value of the kernel, i.e. the similarity of the sequences, using a colormap. Brighter color here indicates higher similarity.
In the second figure, we show some regression result on a dataset of 6 sequences. Here we use the 1st, 2nd, 4th, and 5th sequences as the training set to make predictions on the 3rd and 6th sequences.
In the third figure, we demonstrate a classification model by training on 6 sequences and making predictions on another 5 sequences. The ground truth here is simply whether there is at least one ‘A’ in the sequence. Here the model makes four correct classifications and fails on one.
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/gaussian_process/kernels.py:420: ConvergenceWarning: The optimal value found for dimension 0 of parameter baseline_similarity is close to the specified lower bound 1e-05. Decreasing the bound and calling fit again may find a better value.
warnings.warn(
/home/runner/work/scikit-learn/scikit-learn/examples/gaussian_process/plot_gpr_on_structured_data.py:174: UserWarning: You passed a edgecolor/edgecolors ((0, 1.0, 0.3)) for an unfilled marker ('x'). Matplotlib is ignoring the edgecolor in favor of the facecolor. This behavior may change in the future.
plt.scatter(
```
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.gaussian_process.kernels import Kernel, Hyperparameter
from sklearn.gaussian_process.kernels import GenericKernelMixin
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.base import clone
class SequenceKernel(GenericKernelMixin, Kernel):
"""
A minimal (but valid) convolutional kernel for sequences of variable
lengths."""
def __init__(self, baseline_similarity=0.5, baseline_similarity_bounds=(1e-5, 1)):
self.baseline_similarity = baseline_similarity
self.baseline_similarity_bounds = baseline_similarity_bounds
@property
def hyperparameter_baseline_similarity(self):
return Hyperparameter(
"baseline_similarity", "numeric", self.baseline_similarity_bounds
)
def _f(self, s1, s2):
"""
kernel value between a pair of sequences
"""
return sum(
[1.0 if c1 == c2 else self.baseline_similarity for c1 in s1 for c2 in s2]
)
def _g(self, s1, s2):
"""
kernel derivative between a pair of sequences
"""
return sum([0.0 if c1 == c2 else 1.0 for c1 in s1 for c2 in s2])
def __call__(self, X, Y=None, eval_gradient=False):
if Y is None:
Y = X
if eval_gradient:
return (
np.array([[self._f(x, y) for y in Y] for x in X]),
np.array([[[self._g(x, y)] for y in Y] for x in X]),
)
else:
return np.array([[self._f(x, y) for y in Y] for x in X])
def diag(self, X):
return np.array([self._f(x, x) for x in X])
def is_stationary(self):
return False
def clone_with_theta(self, theta):
cloned = clone(self)
cloned.theta = theta
return cloned
kernel = SequenceKernel()
"""
Sequence similarity matrix under the kernel
===========================================
"""
X = np.array(["AGCT", "AGC", "AACT", "TAA", "AAA", "GAACA"])
K = kernel(X)
D = kernel.diag(X)
plt.figure(figsize=(8, 5))
plt.imshow(np.diag(D**-0.5).dot(K).dot(np.diag(D**-0.5)))
plt.xticks(np.arange(len(X)), X)
plt.yticks(np.arange(len(X)), X)
plt.title("Sequence similarity under the kernel")
"""
Regression
==========
"""
X = np.array(["AGCT", "AGC", "AACT", "TAA", "AAA", "GAACA"])
Y = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
training_idx = [0, 1, 3, 4]
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(X[training_idx], Y[training_idx])
plt.figure(figsize=(8, 5))
plt.bar(np.arange(len(X)), gp.predict(X), color="b", label="prediction")
plt.bar(training_idx, Y[training_idx], width=0.2, color="r", alpha=1, label="training")
plt.xticks(np.arange(len(X)), X)
plt.title("Regression on sequences")
plt.legend()
"""
Classification
==============
"""
X_train = np.array(["AGCT", "CGA", "TAAC", "TCG", "CTTT", "TGCT"])
# whether there are 'A's in the sequence
Y_train = np.array([True, True, True, False, False, False])
gp = GaussianProcessClassifier(kernel)
gp.fit(X_train, Y_train)
X_test = ["AAA", "ATAG", "CTC", "CT", "C"]
Y_test = [True, True, False, False, False]
plt.figure(figsize=(8, 5))
plt.scatter(
np.arange(len(X_train)),
[1.0 if c else -1.0 for c in Y_train],
s=100,
marker="o",
edgecolor="none",
facecolor=(1, 0.75, 0),
label="training",
)
plt.scatter(
len(X_train) + np.arange(len(X_test)),
[1.0 if c else -1.0 for c in Y_test],
s=100,
marker="o",
edgecolor="none",
facecolor="r",
label="truth",
)
plt.scatter(
len(X_train) + np.arange(len(X_test)),
[1.0 if c else -1.0 for c in gp.predict(X_test)],
s=100,
marker="x",
edgecolor=(0, 1.0, 0.3),
linewidth=2,
label="prediction",
)
plt.xticks(np.arange(len(X_train) + len(X_test)), np.concatenate((X_train, X_test)))
plt.yticks([-1, 1], [False, True])
plt.title("Classification on sequences")
plt.legend()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.190 seconds)
[`Download Python source code: plot_gpr_on_structured_data.py`](https://scikit-learn.org/1.1/_downloads/348dd747b709a747e14c8bcdddf0a9b6/plot_gpr_on_structured_data.py)
[`Download Jupyter notebook: plot_gpr_on_structured_data.ipynb`](https://scikit-learn.org/1.1/_downloads/46c19b52b5a5ab5796725eb7e0688309/plot_gpr_on_structured_data.ipynb)
scikit_learn Comparison of kernel ridge and Gaussian process regression Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-compare-gpr-krr-py) to download the full example code or to run this example in your browser via Binder
Comparison of kernel ridge and Gaussian process regression
==========================================================
This example illustrates differences between a kernel ridge regression and a Gaussian process regression.
Both kernel ridge regression and Gaussian process regression use a so-called “kernel trick” to make their models expressive enough to fit the training data. However, the machine learning problems solved by the two methods are drastically different.
Kernel ridge regression will find the target function that minimizes a loss function (the mean squared error).
Instead of finding a single target function, Gaussian process regression employs a probabilistic approach: a Gaussian posterior distribution over target functions is defined based on Bayes’ theorem. The prior probabilities on target functions are combined with a likelihood function defined by the observed training data to provide estimates of the posterior distributions.
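As a brief aside (our notation, not part of the original example), the two problems can be summarized as follows: kernel ridge solves a regularized least-squares problem in the feature space induced by the kernel, while the Gaussian process conditions a prior over functions on the observed data:
```
\hat{f}_{\mathrm{KRR}}
    = \arg\min_{f \in \mathcal{H}}
      \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2 + \alpha \, \lVert f \rVert_{\mathcal{H}}^2
\qquad \text{vs.} \qquad
p(f \mid \mathbf{X}, \mathbf{y}) \propto p(\mathbf{y} \mid f, \mathbf{X}) \, p(f)
```
where \(\mathcal{H}\) is the feature space induced by the kernel and \(p(f)\) is a Gaussian process prior whose covariance function is the chosen kernel.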
We will illustrate these differences with an example and we will also focus on tuning the kernel hyperparameters.
```
# Authors: Jan Hendrik Metzen <[email protected]>
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
```
Generating a dataset
--------------------
We create a synthetic dataset. The true generative process will take a 1-D vector and compute its sine. Note that the period of this sine is thus \(2 \pi\). We will reuse this information later in this example.
```
import numpy as np
rng = np.random.RandomState(0)
data = np.linspace(0, 30, num=1_000).reshape(-1, 1)
target = np.sin(data).ravel()
```
Now, we can imagine a scenario where we get observations from this true process. However, we will add some challenges:
* the measurements will be noisy;
* only samples from the beginning of the signal will be available.
```
training_sample_indices = rng.choice(np.arange(0, 400), size=40, replace=False)
training_data = data[training_sample_indices]
training_noisy_target = target[training_sample_indices] + 0.5 * rng.randn(
len(training_sample_indices)
)
```
Let’s plot the true signal and the noisy measurements available for training.
```
import matplotlib.pyplot as plt
plt.plot(data, target, label="True signal", linewidth=2)
plt.scatter(
training_data,
training_noisy_target,
color="black",
label="Noisy measurements",
)
plt.legend()
plt.xlabel("data")
plt.ylabel("target")
_ = plt.title(
"Illustration of the true generative process and \n"
"noisy measurements available during training"
)
```
Limitations of a simple linear model
------------------------------------
First, we would like to highlight the limitations of a linear model given our dataset. We fit a [`Ridge`](../../modules/generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge") and check the predictions of this model on our dataset.
```
from sklearn.linear_model import Ridge
ridge = Ridge().fit(training_data, training_noisy_target)
plt.plot(data, target, label="True signal", linewidth=2)
plt.scatter(
training_data,
training_noisy_target,
color="black",
label="Noisy measurements",
)
plt.plot(data, ridge.predict(data), label="Ridge regression")
plt.legend()
plt.xlabel("data")
plt.ylabel("target")
_ = plt.title("Limitation of a linear model such as ridge")
```
Such a ridge regressor underfits the data since it is not expressive enough.
Kernel methods: kernel ridge and Gaussian process
-------------------------------------------------
### Kernel ridge
We can make the previous linear model more expressive by using a so-called kernel. A kernel is an embedding from the original feature space to another one. Simply put, it is used to map our original data into a richer, more complex feature space. This new space is defined by the choice of kernel.
In our case, we know that the true generative process is a periodic function. We can use an [`ExpSineSquared`](../../modules/generated/sklearn.gaussian_process.kernels.expsinesquared#sklearn.gaussian_process.kernels.ExpSineSquared "sklearn.gaussian_process.kernels.ExpSineSquared") kernel which allows recovering the periodicity. The class [`KernelRidge`](../../modules/generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") will accept such a kernel.
Using this model together with a kernel is equivalent to embedding the data using the mapping function of the kernel and then applying a ridge regression. In practice, the data are not mapped explicitly; instead the dot product between samples in the higher dimensional feature space is computed using the “kernel trick”.
Thus, let’s use such a [`KernelRidge`](../../modules/generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge").
```
import time
from sklearn.gaussian_process.kernels import ExpSineSquared
from sklearn.kernel_ridge import KernelRidge
kernel_ridge = KernelRidge(kernel=ExpSineSquared())
start_time = time.time()
kernel_ridge.fit(training_data, training_noisy_target)
print(
f"Fitting KernelRidge with default kernel: {time.time() - start_time:.3f} seconds"
)
```
```
Fitting KernelRidge with default kernel: 0.001 seconds
```
```
plt.plot(data, target, label="True signal", linewidth=2, linestyle="dashed")
plt.scatter(
training_data,
training_noisy_target,
color="black",
label="Noisy measurements",
)
plt.plot(
data,
kernel_ridge.predict(data),
label="Kernel ridge",
linewidth=2,
linestyle="dashdot",
)
plt.legend(loc="lower right")
plt.xlabel("data")
plt.ylabel("target")
_ = plt.title(
"Kernel ridge regression with an exponential sine squared\n "
"kernel using default hyperparameters"
)
```
This fitted model is not accurate. Indeed, we did not set the parameters of the kernel and instead used the default ones. We can inspect them.
```
kernel_ridge.kernel
```
```
ExpSineSquared(length_scale=1, periodicity=1)
```
Our kernel has two parameters: the length-scale and the periodicity. For our dataset, we use `sin` as the generative process, implying a \(2 \pi\)-periodicity for the signal. Since the default value of this parameter is \(1\), it explains the high frequency observed in the predictions of our model. Similar conclusions can be drawn with the length-scale parameter. This tells us that the kernel parameters need to be tuned. We will therefore use a randomized search to tune the different parameters of the kernel ridge model: the `alpha` parameter and the kernel parameters.
```
from sklearn.model_selection import RandomizedSearchCV
from sklearn.utils.fixes import loguniform
param_distributions = {
"alpha": loguniform(1e0, 1e3),
"kernel__length_scale": loguniform(1e-2, 1e2),
"kernel__periodicity": loguniform(1e0, 1e1),
}
kernel_ridge_tuned = RandomizedSearchCV(
kernel_ridge,
param_distributions=param_distributions,
n_iter=500,
random_state=0,
)
start_time = time.time()
kernel_ridge_tuned.fit(training_data, training_noisy_target)
print(f"Time for KernelRidge fitting: {time.time() - start_time:.3f} seconds")
```
```
Time for KernelRidge fitting: 2.530 seconds
```
Fitting the model is now more computationally expensive since we have to try several combinations of hyperparameters. We can have a look at the hyperparameters found to get some intuitions.
```
kernel_ridge_tuned.best_params_
```
```
{'alpha': 1.9915849773450223, 'kernel__length_scale': 0.7986499491396728, 'kernel__periodicity': 6.607275806426108}
```
Looking at the best parameters, we see that they are different from the defaults. We also see that the periodicity is closer to the expected value: \(2 \pi\). We can now inspect the predictions of our tuned kernel ridge.
```
start_time = time.time()
predictions_kr = kernel_ridge_tuned.predict(data)
print(f"Time for KernelRidge predict: {time.time() - start_time:.3f} seconds")
```
```
Time for KernelRidge predict: 0.001 seconds
```
```
plt.plot(data, target, label="True signal", linewidth=2, linestyle="dashed")
plt.scatter(
training_data,
training_noisy_target,
color="black",
label="Noisy measurements",
)
plt.plot(
data,
predictions_kr,
label="Kernel ridge",
linewidth=2,
linestyle="dashdot",
)
plt.legend(loc="lower right")
plt.xlabel("data")
plt.ylabel("target")
_ = plt.title(
"Kernel ridge regression with an exponential sine squared\n "
"kernel using tuned hyperparameters"
)
```
We get a much more accurate model. We still observe some errors mainly due to the noise added to the dataset.
### Gaussian process regression
Now, we will use a [`GaussianProcessRegressor`](../../modules/generated/sklearn.gaussian_process.gaussianprocessregressor#sklearn.gaussian_process.GaussianProcessRegressor "sklearn.gaussian_process.GaussianProcessRegressor") to fit the same dataset. When training a Gaussian process, the hyperparameters of the kernel are optimized during the fitting process. There is no need for an external hyperparameter search. Here, we create a slightly more complex kernel than for the kernel ridge regressor: we add a [`WhiteKernel`](../../modules/generated/sklearn.gaussian_process.kernels.whitekernel#sklearn.gaussian_process.kernels.WhiteKernel "sklearn.gaussian_process.kernels.WhiteKernel") that is used to estimate the noise in the dataset.
```
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import WhiteKernel
kernel = 1.0 * ExpSineSquared(1.0, 5.0, periodicity_bounds=(1e-2, 1e1)) + WhiteKernel(
1e-1
)
gaussian_process = GaussianProcessRegressor(kernel=kernel)
start_time = time.time()
gaussian_process.fit(training_data, training_noisy_target)
print(
f"Time for GaussianProcessRegressor fitting: {time.time() - start_time:.3f} seconds"
)
```
```
Time for GaussianProcessRegressor fitting: 0.027 seconds
```
The computational cost of training a Gaussian process is much lower than that of the kernel ridge with a randomized search. We can check the parameters of the kernel that we computed.
```
gaussian_process.kernel_
```
```
0.675**2 * ExpSineSquared(length_scale=1.34, periodicity=6.57) + WhiteKernel(noise_level=0.182)
```
Indeed, we see that the parameters have been optimized. Looking at the `periodicity` parameter, we see that we found a period close to the theoretical value \(2 \pi\). We can now have a look at the predictions of our model.
```
start_time = time.time()
mean_predictions_gpr, std_predictions_gpr = gaussian_process.predict(
data,
return_std=True,
)
print(
f"Time for GaussianProcessRegressor predict: {time.time() - start_time:.3f} seconds"
)
```
```
Time for GaussianProcessRegressor predict: 0.001 seconds
```
```
plt.plot(data, target, label="True signal", linewidth=2, linestyle="dashed")
plt.scatter(
training_data,
training_noisy_target,
color="black",
label="Noisy measurements",
)
# Plot the predictions of the kernel ridge
plt.plot(
data,
predictions_kr,
label="Kernel ridge",
linewidth=2,
linestyle="dashdot",
)
# Plot the predictions of the gaussian process regressor
plt.plot(
data,
mean_predictions_gpr,
label="Gaussian process regressor",
linewidth=2,
linestyle="dotted",
)
plt.fill_between(
data.ravel(),
mean_predictions_gpr - std_predictions_gpr,
mean_predictions_gpr + std_predictions_gpr,
color="tab:green",
alpha=0.2,
)
plt.legend(loc="lower right")
plt.xlabel("data")
plt.ylabel("target")
_ = plt.title("Comparison between kernel ridge and gaussian process regressor")
```
We observe that the results of the kernel ridge and the Gaussian process regressor are close. However, the Gaussian process regressor also provides uncertainty information that is not available with a kernel ridge. Due to the probabilistic formulation of the target functions, the Gaussian process can output the standard deviation (or the covariance) together with the mean predictions of the target functions.
However, it comes at a cost: the time to compute the predictions is higher with a Gaussian process.
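As a minimal sketch (not part of the original example), the full posterior covariance can be requested instead of the standard deviation; note that `return_std` and `return_cov` cannot be requested at the same time:
```
# Minimal sketch: request the full posterior covariance matrix instead of
# the per-point standard deviation.
mean_gpr, cov_gpr = gaussian_process.predict(data, return_cov=True)
print(cov_gpr.shape)  # (n_samples, n_samples)
```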
Final conclusion
----------------
We can give a final word regarding the ability of the two models to extrapolate. Indeed, we only provided the beginning of the signal as a training set. Using a periodic kernel forces our model to repeat the pattern found on the training set. Combining this kernel information with the ability of both models to extrapolate, we observe that the models will continue to predict the sine pattern.
A Gaussian process allows kernels to be combined. Thus, we could associate the exponential sine squared kernel with a radial basis function kernel.
```
from sklearn.gaussian_process.kernels import RBF
kernel = 1.0 * ExpSineSquared(1.0, 5.0, periodicity_bounds=(1e-2, 1e1)) * RBF(
length_scale=15, length_scale_bounds="fixed"
) + WhiteKernel(1e-1)
gaussian_process = GaussianProcessRegressor(kernel=kernel)
gaussian_process.fit(training_data, training_noisy_target)
mean_predictions_gpr, std_predictions_gpr = gaussian_process.predict(
data,
return_std=True,
)
```
```
plt.plot(data, target, label="True signal", linewidth=2, linestyle="dashed")
plt.scatter(
training_data,
training_noisy_target,
color="black",
label="Noisy measurements",
)
# Plot the predictions of the kernel ridge
plt.plot(
data,
predictions_kr,
label="Kernel ridge",
linewidth=2,
linestyle="dashdot",
)
# Plot the predictions of the gaussian process regressor
plt.plot(
data,
mean_predictions_gpr,
label="Gaussian process regressor",
linewidth=2,
linestyle="dotted",
)
plt.fill_between(
data.ravel(),
mean_predictions_gpr - std_predictions_gpr,
mean_predictions_gpr + std_predictions_gpr,
color="tab:green",
alpha=0.2,
)
plt.legend(loc="lower right")
plt.xlabel("data")
plt.ylabel("target")
_ = plt.title("Effect of using a radial basis function kernel")
```
The effect of using a radial basis function kernel is to attenuate the periodicity once no samples are available in the training range. As testing samples get further away from the training ones, predictions converge towards their mean and their standard deviation also increases.
**Total running time of the script:** ( 0 minutes 3.078 seconds)
[`Download Python source code: plot_compare_gpr_krr.py`](https://scikit-learn.org/1.1/_downloads/c499f9c8abaa56c9b615349a539cb6ae/plot_compare_gpr_krr.py)
[`Download Jupyter notebook: plot_compare_gpr_krr.ipynb`](https://scikit-learn.org/1.1/_downloads/c3bc3113489c0b7c9a698b430d691ddc/plot_compare_gpr_krr.ipynb)
scikit_learn Probabilistic predictions with Gaussian process classification (GPC) Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpc-py) to download the full example code or to run this example in your browser via Binder
Probabilistic predictions with Gaussian process classification (GPC)
====================================================================
This example illustrates the predicted probability of GPC for an RBF kernel with different choices of the hyperparameters. The first figure shows the predicted probability of GPC with arbitrarily chosen hyperparameters and with the hyperparameters corresponding to the maximum log-marginal-likelihood (LML).
While the hyperparameters chosen by optimizing LML have a considerably larger LML, they perform slightly worse according to the log-loss on test data. The figure shows that this is because they exhibit a steep change of the class probabilities at the class boundaries (which is good), but have predicted probabilities close to 0.5 far away from the class boundaries (which is bad). This undesirable effect is caused by the Laplace approximation used internally by GPC.
The second figure shows the log-marginal-likelihood for different choices of the kernel’s hyperparameters, highlighting the two choices of the hyperparameters used in the first figure by black dots.
```
Log Marginal Likelihood (initial): -17.598
Log Marginal Likelihood (optimized): -3.875
Accuracy: 1.000 (initial) 1.000 (optimized)
Log-loss: 0.214 (initial) 0.319 (optimized)
```
```
# Authors: Jan Hendrik Metzen <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from sklearn.metrics import accuracy_score, log_loss
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
# Generate data
train_size = 50
rng = np.random.RandomState(0)
X = rng.uniform(0, 5, 100)[:, np.newaxis]
y = np.array(X[:, 0] > 2.5, dtype=int)
# Specify Gaussian Processes with fixed and optimized hyperparameters
gp_fix = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), optimizer=None)
gp_fix.fit(X[:train_size], y[:train_size])
gp_opt = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gp_opt.fit(X[:train_size], y[:train_size])
print(
"Log Marginal Likelihood (initial): %.3f"
% gp_fix.log_marginal_likelihood(gp_fix.kernel_.theta)
)
print(
"Log Marginal Likelihood (optimized): %.3f"
% gp_opt.log_marginal_likelihood(gp_opt.kernel_.theta)
)
print(
"Accuracy: %.3f (initial) %.3f (optimized)"
% (
accuracy_score(y[:train_size], gp_fix.predict(X[:train_size])),
accuracy_score(y[:train_size], gp_opt.predict(X[:train_size])),
)
)
print(
"Log-loss: %.3f (initial) %.3f (optimized)"
% (
log_loss(y[:train_size], gp_fix.predict_proba(X[:train_size])[:, 1]),
log_loss(y[:train_size], gp_opt.predict_proba(X[:train_size])[:, 1]),
)
)
# Plot posteriors
plt.figure()
plt.scatter(
X[:train_size, 0], y[:train_size], c="k", label="Train data", edgecolors=(0, 0, 0)
)
plt.scatter(
X[train_size:, 0], y[train_size:], c="g", label="Test data", edgecolors=(0, 0, 0)
)
X_ = np.linspace(0, 5, 100)
plt.plot(
X_,
gp_fix.predict_proba(X_[:, np.newaxis])[:, 1],
"r",
label="Initial kernel: %s" % gp_fix.kernel_,
)
plt.plot(
X_,
gp_opt.predict_proba(X_[:, np.newaxis])[:, 1],
"b",
label="Optimized kernel: %s" % gp_opt.kernel_,
)
plt.xlabel("Feature")
plt.ylabel("Class 1 probability")
plt.xlim(0, 5)
plt.ylim(-0.25, 1.5)
plt.legend(loc="best")
# Plot LML landscape
plt.figure()
theta0 = np.logspace(0, 8, 30)
theta1 = np.logspace(-1, 1, 29)
Theta0, Theta1 = np.meshgrid(theta0, theta1)
LML = [
[
gp_opt.log_marginal_likelihood(np.log([Theta0[i, j], Theta1[i, j]]))
for i in range(Theta0.shape[0])
]
for j in range(Theta0.shape[1])
]
LML = np.array(LML).T
plt.plot(
np.exp(gp_fix.kernel_.theta)[0], np.exp(gp_fix.kernel_.theta)[1], "ko", zorder=10
)
plt.plot(
np.exp(gp_opt.kernel_.theta)[0], np.exp(gp_opt.kernel_.theta)[1], "ko", zorder=10
)
plt.pcolor(Theta0, Theta1, LML)
plt.xscale("log")
plt.yscale("log")
plt.colorbar()
plt.xlabel("Magnitude")
plt.ylabel("Length-scale")
plt.title("Log-marginal-likelihood")
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.823 seconds)
[`Download Python source code: plot_gpc.py`](https://scikit-learn.org/1.1/_downloads/a4394ee28b22f6d8930db11a3e185ba8/plot_gpc.py)
[`Download Jupyter notebook: plot_gpc.ipynb`](https://scikit-learn.org/1.1/_downloads/60bf71ece62c4a5b5fe3e40007de265b/plot_gpc.ipynb)
scikit_learn Iso-probability lines for Gaussian Processes classification (GPC) Note
Click [here](#sphx-glr-download-auto-examples-gaussian-process-plot-gpc-isoprobability-py) to download the full example code or to run this example in your browser via Binder
Iso-probability lines for Gaussian Processes classification (GPC)
=================================================================
A two-dimensional classification example showing iso-probability lines for the predicted probabilities.
```
Learned kernel: 0.0256**2 * DotProduct(sigma_0=5.72) ** 2
```
```
# Author: Vincent Dubourg <[email protected]>
# Adapted to GaussianProcessClassifier:
# Jan Hendrik Metzen <[email protected]>
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct, ConstantKernel as C
# A few constants
lim = 8
def g(x):
"""The function to predict (classification will then consist in predicting
whether g(x) <= 0 or not)"""
return 5.0 - x[:, 1] - 0.5 * x[:, 0] ** 2.0
# Design of experiments
X = np.array(
[
[-4.61611719, -6.00099547],
[4.10469096, 5.32782448],
[0.00000000, -0.50000000],
[-6.17289014, -4.6984743],
[1.3109306, -6.93271427],
[-5.03823144, 3.10584743],
[-2.87600388, 6.74310541],
[5.21301203, 4.26386883],
]
)
# Observations
y = np.array(g(X) > 0, dtype=int)
# Instantiate and fit Gaussian Process Model
kernel = C(0.1, (1e-5, np.inf)) * DotProduct(sigma_0=0.1) ** 2
gp = GaussianProcessClassifier(kernel=kernel)
gp.fit(X, y)
print("Learned kernel: %s " % gp.kernel_)
# Evaluate real function and the predicted probability
res = 50
x1, x2 = np.meshgrid(np.linspace(-lim, lim, res), np.linspace(-lim, lim, res))
xx = np.vstack([x1.reshape(x1.size), x2.reshape(x2.size)]).T
y_true = g(xx)
y_prob = gp.predict_proba(xx)[:, 1]
y_true = y_true.reshape((res, res))
y_prob = y_prob.reshape((res, res))
# Plot the probabilistic classification iso-values
fig = plt.figure(1)
ax = fig.gca()
ax.axes.set_aspect("equal")
plt.xticks([])
plt.yticks([])
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
cax = plt.imshow(y_prob, cmap=cm.gray_r, alpha=0.8, extent=(-lim, lim, -lim, lim))
norm = plt.matplotlib.colors.Normalize(vmin=0.0, vmax=0.9)
cb = plt.colorbar(cax, ticks=[0.0, 0.2, 0.4, 0.6, 0.8, 1.0], norm=norm)
cb.set_label(r"${\rm \mathbb{P}}\left[\widehat{G}(\mathbf{x}) \leq 0\right]$")
plt.clim(0, 1)
plt.plot(X[y <= 0, 0], X[y <= 0, 1], "r.", markersize=12)
plt.plot(X[y > 0, 0], X[y > 0, 1], "b.", markersize=12)
plt.contour(x1, x2, y_true, [0.0], colors="k", linestyles="dashdot")
cs = plt.contour(x1, x2, y_prob, [0.666], colors="b", linestyles="solid")
plt.clabel(cs, fontsize=11)
cs = plt.contour(x1, x2, y_prob, [0.5], colors="k", linestyles="dashed")
plt.clabel(cs, fontsize=11)
cs = plt.contour(x1, x2, y_prob, [0.334], colors="r", linestyles="solid")
plt.clabel(cs, fontsize=11)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.145 seconds)
[`Download Python source code: plot_gpc_isoprobability.py`](https://scikit-learn.org/1.1/_downloads/e071b915e9660d5d0bb53ca12695c133/plot_gpc_isoprobability.py)
[`Download Jupyter notebook: plot_gpc_isoprobability.ipynb`](https://scikit-learn.org/1.1/_downloads/f927b32733adb25c5c8208225278f50c/plot_gpc_isoprobability.ipynb)
scikit_learn Column Transformer with Mixed Types Note
Click [here](#sphx-glr-download-auto-examples-compose-plot-column-transformer-mixed-types-py) to download the full example code or to run this example in your browser via Binder
Column Transformer with Mixed Types
===================================
This example illustrates how to apply different preprocessing and feature extraction pipelines to different subsets of features, using [`ColumnTransformer`](../../modules/generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer"). This is particularly handy for the case of datasets that contain heterogeneous data types, since we may want to scale the numeric features and one-hot encode the categorical ones.
In this example, the numeric data is standard-scaled after mean-imputation. The categorical data is one-hot encoded via `OneHotEncoder`, which creates a new category for missing values.
In addition, we show two different ways to dispatch the columns to the particular pre-processor: by column names and by column data types.
Finally, the preprocessing pipeline is integrated in a full prediction pipeline using [`Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline"), together with a simple classification model.
```
# Author: Pedro Morales <[email protected]>
#
# License: BSD 3 clause
```
```
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
np.random.seed(0)
```
Load data from <https://www.openml.org/d/40945>
```
X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
# Alternatively X and y can be obtained directly from the frame attribute:
# X = titanic.frame.drop('survived', axis=1)
# y = titanic.frame['survived']
```
Use `ColumnTransformer` by selecting columns by name
We will train our classifier with the following features:
Numeric Features:
* `age`: float;
* `fare`: float.
Categorical Features:
* `embarked`: categories encoded as strings `{'C', 'S', 'Q'}`;
* `sex`: categories encoded as strings `{'female', 'male'}`;
* `pclass`: ordinal integers `{1, 2, 3}`.
We create the preprocessing pipelines for both numeric and categorical data. Note that `pclass` could either be treated as a categorical or numeric feature.
```
numeric_features = ["age", "fare"]
numeric_transformer = Pipeline(
steps=[("imputer", SimpleImputer(strategy="median")), ("scaler", StandardScaler())]
)
categorical_features = ["embarked", "sex", "pclass"]
categorical_transformer = OneHotEncoder(handle_unknown="ignore")
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, numeric_features),
("cat", categorical_transformer, categorical_features),
]
)
```
Append classifier to preprocessing pipeline. Now we have a full prediction pipeline.
```
clf = Pipeline(
steps=[("preprocessor", preprocessor), ("classifier", LogisticRegression())]
)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf.fit(X_train, y_train)
print("model score: %.3f" % clf.score(X_test, y_test))
```
```
model score: 0.790
```
HTML representation of `Pipeline` (display diagram)
When the `Pipeline` is printed out in a Jupyter notebook, an HTML representation of the estimator is displayed:
```
clf
```
```
Pipeline(steps=[('preprocessor',
ColumnTransformer(transformers=[('num',
Pipeline(steps=[('imputer',
SimpleImputer(strategy='median')),
('scaler',
StandardScaler())]),
['age', 'fare']),
('cat',
OneHotEncoder(handle_unknown='ignore'),
['embarked', 'sex',
'pclass'])])),
('classifier', LogisticRegression())])
```
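As a side note (a minimal sketch, not part of the original example), the kind of representation used for estimators can be selected globally with `sklearn.set_config`:
```
# Minimal sketch: choose between the plain-text and the HTML ("diagram")
# representation of estimators shown in notebooks.
from sklearn import set_config

set_config(display="diagram")  # HTML diagram representation
# set_config(display="text")   # plain-text representation
```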
Use `ColumnTransformer` by selecting columns by data type
When dealing with a cleaned dataset, the preprocessing can be automated by using the data types of the columns to decide whether to treat a column as a numerical or categorical feature. [`sklearn.compose.make_column_selector`](../../modules/generated/sklearn.compose.make_column_selector#sklearn.compose.make_column_selector "sklearn.compose.make_column_selector") gives this possibility. First, let’s only select a subset of columns to simplify our example.
```
subset_feature = ["embarked", "sex", "pclass", "age", "fare"]
X_train, X_test = X_train[subset_feature], X_test[subset_feature]
```
Then, we introspect the information regarding each column data type.
```
X_train.info()
```
```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1047 entries, 1118 to 684
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 embarked 1045 non-null category
1 sex 1047 non-null category
2 pclass 1047 non-null float64
3 age 841 non-null float64
4 fare 1046 non-null float64
dtypes: category(2), float64(3)
memory usage: 35.0 KB
```
We can observe that the `embarked` and `sex` columns were tagged as `category` columns when loading the data with `fetch_openml`. Therefore, we can use this information to dispatch the categorical columns to the `categorical_transformer` and the remaining columns to the `numeric_transformer`.
Note
In practice, you will have to handle the column data types yourself. If you want some columns to be considered as `category`, you will have to convert them to categorical columns, as sketched below. If you are using pandas, you can refer to their documentation regarding [Categorical data](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html).
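For instance, a minimal sketch of such a conversion with pandas (using the `pclass` column of this dataset purely as an illustration) could look like:
```
# Minimal sketch: convert a column to the pandas "category" dtype so that a
# dtype-based selector dispatches it to the categorical transformer.
X_train = X_train.copy()
X_test = X_test.copy()
X_train["pclass"] = X_train["pclass"].astype("category")
X_test["pclass"] = X_test["pclass"].astype("category")
```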
```
from sklearn.compose import make_column_selector as selector
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, selector(dtype_exclude="category")),
("cat", categorical_transformer, selector(dtype_include="category")),
]
)
clf = Pipeline(
steps=[("preprocessor", preprocessor), ("classifier", LogisticRegression())]
)
clf.fit(X_train, y_train)
print("model score: %.3f" % clf.score(X_test, y_test))
clf
```
```
model score: 0.794
```
```
Pipeline(steps=[('preprocessor',
ColumnTransformer(transformers=[('num',
Pipeline(steps=[('imputer',
SimpleImputer(strategy='median')),
('scaler',
StandardScaler())]),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7e8f3ac0>),
('cat',
OneHotEncoder(handle_unknown='ignore'),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7e8f3e80>)])),
('classifier', LogisticRegression())])
```
The resulting score is not exactly the same as the one from the previous pipeline because the dtype-based selector treats the `pclass` column as a numeric feature instead of a categorical feature as previously:
```
selector(dtype_exclude="category")(X_train)
```
```
['pclass', 'age', 'fare']
```
```
selector(dtype_include="category")(X_train)
```
```
['embarked', 'sex']
```
Using the prediction pipeline in a grid search
Grid search can also be performed on the different preprocessing steps defined in the `ColumnTransformer` object, together with the classifier’s hyperparameters as part of the `Pipeline`. We will search for both the imputer strategy of the numeric preprocessing and the regularization parameter of the logistic regression using [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV").
```
param_grid = {
"preprocessor__num__imputer__strategy": ["mean", "median"],
"classifier__C": [0.1, 1.0, 10, 100],
}
grid_search = GridSearchCV(clf, param_grid, cv=10)
grid_search
```
```
GridSearchCV(cv=10,
estimator=Pipeline(steps=[('preprocessor',
ColumnTransformer(transformers=[('num',
Pipeline(steps=[('imputer',
SimpleImputer(strategy='median')),
('scaler',
StandardScaler())]),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7e8f3ac0>),
('cat',
OneHotEncoder(handle_unknown='ignore'),
<sklearn.compose._column_transformer.make_column_selector object at 0x7f6e7e8f3e80>)])),
('classifier', LogisticRegression())]),
param_grid={'classifier__C': [0.1, 1.0, 10, 100],
'preprocessor__num__imputer__strategy': ['mean',
'median']})
```
Calling `fit` triggers the cross-validated search for the best hyper-parameter combination:
```
grid_search.fit(X_train, y_train)
print("Best params:")
print(grid_search.best_params_)
```
```
Best params:
{'classifier__C': 0.1, 'preprocessor__num__imputer__strategy': 'mean'}
```
The internal cross-validation score obtained with those parameters is:
```
print(f"Internal CV score: {grid_search.best_score_:.3f}")
```
```
Internal CV score: 0.784
```
We can also introspect the top grid search results as a pandas dataframe:
```
import pandas as pd
cv_results = pd.DataFrame(grid_search.cv_results_)
cv_results = cv_results.sort_values("mean_test_score", ascending=False)
cv_results[
[
"mean_test_score",
"std_test_score",
"param_preprocessor__num__imputer__strategy",
"param_classifier__C",
]
].head(5)
```
| | mean\_test\_score | std\_test\_score | param\_preprocessor\_\_num\_\_imputer\_\_strategy | param\_classifier\_\_C |
| --- | --- | --- | --- | --- |
| 0 | 0.784167 | 0.035824 | mean | 0.1 |
| 2 | 0.780366 | 0.032722 | mean | 1.0 |
| 1 | 0.780348 | 0.037245 | median | 0.1 |
| 4 | 0.779414 | 0.033105 | mean | 10 |
| 6 | 0.779414 | 0.033105 | mean | 100 |
The best hyper-parameters have been used to re-fit a final model on the full training set. We can evaluate that final model on held-out test data that was not used for hyperparameter tuning.
```
print(
(
"best logistic regression from grid search: %.3f"
% grid_search.score(X_test, y_test)
)
)
```
```
best logistic regression from grid search: 0.794
```
**Total running time of the script:** ( 0 minutes 1.311 seconds)
[`Download Python source code: plot_column_transformer_mixed_types.py`](https://scikit-learn.org/1.1/_downloads/79c38d2f2cb1f2ef7d68e0cc7ea7b4e4/plot_column_transformer_mixed_types.py)
[`Download Jupyter notebook: plot_column_transformer_mixed_types.ipynb`](https://scikit-learn.org/1.1/_downloads/26f110ad6cff1a8a7c58b1a00d8b8b5a/plot_column_transformer_mixed_types.ipynb)
scikit_learn Effect of transforming the targets in regression model Note
Click [here](#sphx-glr-download-auto-examples-compose-plot-transformed-target-py) to download the full example code or to run this example in your browser via Binder
Effect of transforming the targets in regression model
======================================================
In this example, we give an overview of [`TransformedTargetRegressor`](../../modules/generated/sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor "sklearn.compose.TransformedTargetRegressor"). We use two examples to illustrate the benefit of transforming the targets before learning a linear regression model. The first example uses synthetic data while the second example is based on the Ames housing data set.
```
# Author: Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import RidgeCV
from sklearn.compose import TransformedTargetRegressor
from sklearn.metrics import median_absolute_error, r2_score
```
Synthetic example
-----------------
A synthetic random regression dataset is generated. The targets `y` are modified by:
1. translating all targets such that all entries are non-negative (by adding the absolute value of the lowest `y`) and
2. applying an exponential function to obtain non-linear targets which cannot be fitted using a simple linear model.
Therefore, a logarithmic (`np.log1p`) and an exponential function (`np.expm1`) will be used to transform the targets before training a linear regression model and using it for prediction.
```
X, y = make_regression(n_samples=10000, noise=100, random_state=0)
y = np.expm1((y + abs(y.min())) / 200)
y_trans = np.log1p(y)
```
Below we plot the probability density functions of the target before and after applying the logarithmic functions.
```
f, (ax0, ax1) = plt.subplots(1, 2)
ax0.hist(y, bins=100, density=True)
ax0.set_xlim([0, 2000])
ax0.set_ylabel("Probability")
ax0.set_xlabel("Target")
ax0.set_title("Target distribution")
ax1.hist(y_trans, bins=100, density=True)
ax1.set_ylabel("Probability")
ax1.set_xlabel("Target")
ax1.set_title("Transformed target distribution")
f.suptitle("Synthetic data", y=0.06, x=0.53)
f.tight_layout(rect=[0.05, 0.05, 0.95, 0.95])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
```
At first, a linear model will be applied on the original targets. Due to the non-linearity, the trained model will not be accurate at prediction time. Subsequently, a logarithmic function is used to linearize the targets, allowing better predictions even with a similar linear model, as reported by the median absolute error (MAE).
```
f, (ax0, ax1) = plt.subplots(1, 2, sharey=True)
# Use linear model
regr = RidgeCV()
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)
# Plot results
ax0.scatter(y_test, y_pred)
ax0.plot([0, 2000], [0, 2000], "--k")
ax0.set_ylabel("Target predicted")
ax0.set_xlabel("True Target")
ax0.set_title("Ridge regression \n without target transformation")
ax0.text(
100,
1750,
r"$R^2$=%.2f, MAE=%.2f"
% (r2_score(y_test, y_pred), median_absolute_error(y_test, y_pred)),
)
ax0.set_xlim([0, 2000])
ax0.set_ylim([0, 2000])
# Transform targets and use same linear model
regr_trans = TransformedTargetRegressor(
regressor=RidgeCV(), func=np.log1p, inverse_func=np.expm1
)
regr_trans.fit(X_train, y_train)
y_pred = regr_trans.predict(X_test)
ax1.scatter(y_test, y_pred)
ax1.plot([0, 2000], [0, 2000], "--k")
ax1.set_ylabel("Target predicted")
ax1.set_xlabel("True Target")
ax1.set_title("Ridge regression \n with target transformation")
ax1.text(
100,
1750,
r"$R^2$=%.2f, MAE=%.2f"
% (r2_score(y_test, y_pred), median_absolute_error(y_test, y_pred)),
)
ax1.set_xlim([0, 2000])
ax1.set_ylim([0, 2000])
f.suptitle("Synthetic data", y=0.035)
f.tight_layout(rect=[0.05, 0.05, 0.95, 0.95])
```
Real-world data set
-------------------
In a similar manner, the Ames housing data set is used to show the impact of transforming the targets before learning a model. In this example, the target to be predicted is the selling price of each house.
```
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import QuantileTransformer, quantile_transform
ames = fetch_openml(name="house_prices", as_frame=True)
# Keep only numeric columns
X = ames.data.select_dtypes(np.number)
# Remove columns with NaN or Inf values
X = X.drop(columns=["LotFrontage", "GarageYrBlt", "MasVnrArea"])
y = ames.target
y_trans = quantile_transform(
y.to_frame(), n_quantiles=900, output_distribution="normal", copy=True
).squeeze()
```
A [`QuantileTransformer`](../../modules/generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer") is used to normalize the target distribution before applying a [`RidgeCV`](../../modules/generated/sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV") model.
```
f, (ax0, ax1) = plt.subplots(1, 2)
ax0.hist(y, bins=100, density=True)
ax0.set_ylabel("Probability")
ax0.set_xlabel("Target")
ax0.text(s="Target distribution", x=1.2e5, y=9.8e-6, fontsize=12)
ax0.ticklabel_format(axis="both", style="sci", scilimits=(0, 0))
ax1.hist(y_trans, bins=100, density=True)
ax1.set_ylabel("Probability")
ax1.set_xlabel("Target")
ax1.text(s="Transformed target distribution", x=-6.8, y=0.479, fontsize=12)
f.suptitle("Ames housing data: selling price", y=0.04)
f.tight_layout(rect=[0.05, 0.05, 0.95, 0.95])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
The effect of the transformer is weaker than on the synthetic data. However, the transformation results in an increase in \(R^2\) and a large decrease in the MAE. The residual plot (predicted target - true target vs. predicted target) without target transformation takes on a curved, ‘reverse smile’ shape because the residual values vary with the value of the predicted target. With target transformation, the shape is more linear, indicating a better model fit.
```
f, (ax0, ax1) = plt.subplots(2, 2, sharey="row", figsize=(6.5, 8))
regr = RidgeCV()
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)
ax0[0].scatter(y_pred, y_test, s=8)
ax0[0].plot([0, 7e5], [0, 7e5], "--k")
ax0[0].set_ylabel("True target")
ax0[0].set_xlabel("Predicted target")
ax0[0].text(
s="Ridge regression \n without target transformation",
x=-5e4,
y=8e5,
fontsize=12,
multialignment="center",
)
ax0[0].text(
3e4,
64e4,
r"$R^2$=%.2f, MAE=%.2f"
% (r2_score(y_test, y_pred), median_absolute_error(y_test, y_pred)),
)
ax0[0].set_xlim([0, 7e5])
ax0[0].set_ylim([0, 7e5])
ax0[0].ticklabel_format(axis="both", style="sci", scilimits=(0, 0))
ax1[0].scatter(y_pred, (y_pred - y_test), s=8)
ax1[0].set_ylabel("Residual")
ax1[0].set_xlabel("Predicted target")
ax1[0].ticklabel_format(axis="both", style="sci", scilimits=(0, 0))
regr_trans = TransformedTargetRegressor(
regressor=RidgeCV(),
transformer=QuantileTransformer(n_quantiles=900, output_distribution="normal"),
)
regr_trans.fit(X_train, y_train)
y_pred = regr_trans.predict(X_test)
ax0[1].scatter(y_pred, y_test, s=8)
ax0[1].plot([0, 7e5], [0, 7e5], "--k")
ax0[1].set_ylabel("True target")
ax0[1].set_xlabel("Predicted target")
ax0[1].text(
s="Ridge regression \n with target transformation",
x=-5e4,
y=8e5,
fontsize=12,
multialignment="center",
)
ax0[1].text(
3e4,
64e4,
r"$R^2$=%.2f, MAE=%.2f"
% (r2_score(y_test, y_pred), median_absolute_error(y_test, y_pred)),
)
ax0[1].set_xlim([0, 7e5])
ax0[1].set_ylim([0, 7e5])
ax0[1].ticklabel_format(axis="both", style="sci", scilimits=(0, 0))
ax1[1].scatter(y_pred, (y_pred - y_test), s=8)
ax1[1].set_ylabel("Residual")
ax1[1].set_xlabel("Predicted target")
ax1[1].ticklabel_format(axis="both", style="sci", scilimits=(0, 0))
f.suptitle("Ames housing data: selling price", y=0.035)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.135 seconds)
[`Download Python source code: plot_transformed_target.py`](https://scikit-learn.org/1.1/_downloads/93d55b9dcb06fda6f82b4d16c9a3a70d/plot_transformed_target.py)
[`Download Jupyter notebook: plot_transformed_target.ipynb`](https://scikit-learn.org/1.1/_downloads/ea57d7ab1588de8f5bd1afc68f20de2f/plot_transformed_target.ipynb)
scikit_learn Pipelining: chaining a PCA and a logistic regression Note
Click [here](#sphx-glr-download-auto-examples-compose-plot-digits-pipe-py) to download the full example code or to run this example in your browser via Binder
Pipelining: chaining a PCA and a logistic regression
====================================================
The PCA does an unsupervised dimensionality reduction, while the logistic regression does the prediction.
We use a GridSearchCV to set the dimensionality of the PCA.
```
Best parameter (CV score=0.924):
{'logistic__C': 0.046415888336127774, 'pca__n_components': 60}
/home/runner/mambaforge/envs/testenv/lib/python3.9/site-packages/pandas/core/indexes/base.py:6982: FutureWarning: In a future version, the Index constructor will not infer numeric dtypes when passed object-dtype sequences (matching Series behavior)
return Index(sequences[0], name=names)
```
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
# Define a pipeline to search for the best combination of PCA truncation
# and classifier regularization.
pca = PCA()
# Define a Standard Scaler to normalize inputs
scaler = StandardScaler()
# set the tolerance to a large value to make the example faster
logistic = LogisticRegression(max_iter=10000, tol=0.1)
pipe = Pipeline(steps=[("scaler", scaler), ("pca", pca), ("logistic", logistic)])
X_digits, y_digits = datasets.load_digits(return_X_y=True)
# Parameters of pipelines can be set using '__' separated parameter names:
param_grid = {
"pca__n_components": [5, 15, 30, 45, 60],
"logistic__C": np.logspace(-4, 4, 4),
}
search = GridSearchCV(pipe, param_grid, n_jobs=2)
search.fit(X_digits, y_digits)
print("Best parameter (CV score=%0.3f):" % search.best_score_)
print(search.best_params_)
# Plot the PCA spectrum
pca.fit(X_digits)
fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True, figsize=(6, 6))
ax0.plot(
np.arange(1, pca.n_components_ + 1), pca.explained_variance_ratio_, "+", linewidth=2
)
ax0.set_ylabel("PCA explained variance ratio")
ax0.axvline(
search.best_estimator_.named_steps["pca"].n_components,
linestyle=":",
label="n_components chosen",
)
ax0.legend(prop=dict(size=12))
# For each number of components, find the best classifier results
results = pd.DataFrame(search.cv_results_)
components_col = "param_pca__n_components"
best_clfs = results.groupby(components_col).apply(
lambda g: g.nlargest(1, "mean_test_score")
)
best_clfs.plot(
x=components_col, y="mean_test_score", yerr="std_test_score", legend=False, ax=ax1
)
ax1.set_ylabel("Classification accuracy (val)")
ax1.set_xlabel("n_components")
plt.xlim(-1, 70)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.990 seconds)
[`Download Python source code: plot_digits_pipe.py`](https://scikit-learn.org/1.1/_downloads/ba89a400c6902f85c10199ff86947d23/plot_digits_pipe.py)
[`Download Jupyter notebook: plot_digits_pipe.ipynb`](https://scikit-learn.org/1.1/_downloads/898b30acf62919d918478efbe526195f/plot_digits_pipe.ipynb)
scikit_learn Concatenating multiple feature extraction methods Note
Click [here](#sphx-glr-download-auto-examples-compose-plot-feature-union-py) to download the full example code or to run this example in your browser via Binder
Concatenating multiple feature extraction methods
=================================================
In many real-world examples, there are many ways to extract features from a dataset. Often it is beneficial to combine several methods to obtain good performance. This example shows how to use `FeatureUnion` to combine features obtained by PCA and univariate selection.
Combining features using this transformer has the benefit that it allows cross validation and grid searches over the whole process.
The combination used in this example is not particularly helpful on this dataset and is only used to illustrate the usage of FeatureUnion.
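The full script is available through the download links below; the following is a minimal sketch of the approach, broadly consistent with the grid-search log that follows (the iris dataset and the step names are taken from that log):
```
# Minimal sketch of combining PCA and univariate selection with FeatureUnion,
# then grid-searching over the whole pipeline.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Concatenate a PCA projection and the k best univariate features.
combined_features = FeatureUnion(
    [("pca", PCA(n_components=2)), ("univ_select", SelectKBest(k=1))]
)

pipeline = Pipeline([("features", combined_features), ("svm", SVC(kernel="linear"))])

# Parameters of both feature extractors and of the classifier are searched jointly.
param_grid = {
    "features__pca__n_components": [1, 2, 3],
    "features__univ_select__k": [1, 2],
    "svm__C": [0.1, 1, 10],
}
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
```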
```
Combined space has 3 features
Fitting 5 folds for each of 18 candidates, totalling 90 fits
[CV 1/5; 1/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1
[CV 1/5; 1/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 2/5; 1/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1
[CV 2/5; 1/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 3/5; 1/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1
[CV 3/5; 1/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1;, score=0.867 total time= 0.0s
[CV 4/5; 1/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1
[CV 4/5; 1/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 5/5; 1/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1
[CV 5/5; 1/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 1/5; 2/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=1
[CV 1/5; 2/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=1;, score=0.900 total time= 0.0s
[CV 2/5; 2/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=1
[CV 2/5; 2/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=1;, score=1.000 total time= 0.0s
[CV 3/5; 2/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=1
[CV 3/5; 2/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=1;, score=0.867 total time= 0.0s
[CV 4/5; 2/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=1
[CV 4/5; 2/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=1;, score=0.933 total time= 0.0s
[CV 5/5; 2/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=1
[CV 5/5; 2/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=1;, score=1.000 total time= 0.0s
[CV 1/5; 3/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=10
[CV 1/5; 3/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=10;, score=0.933 total time= 0.0s
[CV 2/5; 3/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=10
[CV 2/5; 3/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=10;, score=1.000 total time= 0.0s
[CV 3/5; 3/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=10
[CV 3/5; 3/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=10;, score=0.900 total time= 0.0s
[CV 4/5; 3/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=10
[CV 4/5; 3/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=10;, score=0.933 total time= 0.0s
[CV 5/5; 3/18] START features__pca__n_components=1, features__univ_select__k=1, svm__C=10
[CV 5/5; 3/18] END features__pca__n_components=1, features__univ_select__k=1, svm__C=10;, score=1.000 total time= 0.0s
[CV 1/5; 4/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1
[CV 1/5; 4/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 2/5; 4/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1
[CV 2/5; 4/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1;, score=0.967 total time= 0.0s
[CV 3/5; 4/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1
[CV 3/5; 4/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 4/5; 4/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1
[CV 4/5; 4/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 5/5; 4/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1
[CV 5/5; 4/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 1/5; 5/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=1
[CV 1/5; 5/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=1;, score=0.933 total time= 0.0s
[CV 2/5; 5/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=1
[CV 2/5; 5/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=1;, score=0.967 total time= 0.0s
[CV 3/5; 5/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=1
[CV 3/5; 5/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=1;, score=0.933 total time= 0.0s
[CV 4/5; 5/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=1
[CV 4/5; 5/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=1;, score=0.933 total time= 0.0s
[CV 5/5; 5/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=1
[CV 5/5; 5/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=1;, score=1.000 total time= 0.0s
[CV 1/5; 6/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=10
[CV 1/5; 6/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=10;, score=0.967 total time= 0.0s
[CV 2/5; 6/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=10
[CV 2/5; 6/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=10;, score=0.967 total time= 0.0s
[CV 3/5; 6/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=10
[CV 3/5; 6/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=10;, score=0.933 total time= 0.0s
[CV 4/5; 6/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=10
[CV 4/5; 6/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=10;, score=0.933 total time= 0.0s
[CV 5/5; 6/18] START features__pca__n_components=1, features__univ_select__k=2, svm__C=10
[CV 5/5; 6/18] END features__pca__n_components=1, features__univ_select__k=2, svm__C=10;, score=1.000 total time= 0.0s
[CV 1/5; 7/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1
[CV 1/5; 7/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 2/5; 7/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1
[CV 2/5; 7/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 3/5; 7/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1
[CV 3/5; 7/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1;, score=0.867 total time= 0.0s
[CV 4/5; 7/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1
[CV 4/5; 7/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 5/5; 7/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1
[CV 5/5; 7/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 1/5; 8/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=1
[CV 1/5; 8/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=1;, score=0.967 total time= 0.0s
[CV 2/5; 8/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=1
[CV 2/5; 8/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=1;, score=1.000 total time= 0.0s
[CV 3/5; 8/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=1
[CV 3/5; 8/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=1;, score=0.933 total time= 0.0s
[CV 4/5; 8/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=1
[CV 4/5; 8/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=1;, score=0.933 total time= 0.0s
[CV 5/5; 8/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=1
[CV 5/5; 8/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=1;, score=1.000 total time= 0.0s
[CV 1/5; 9/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=10
[CV 1/5; 9/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=10;, score=0.967 total time= 0.0s
[CV 2/5; 9/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=10
[CV 2/5; 9/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=10;, score=0.967 total time= 0.0s
[CV 3/5; 9/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=10
[CV 3/5; 9/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=10;, score=0.900 total time= 0.0s
[CV 4/5; 9/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=10
[CV 4/5; 9/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=10;, score=0.933 total time= 0.0s
[CV 5/5; 9/18] START features__pca__n_components=2, features__univ_select__k=1, svm__C=10
[CV 5/5; 9/18] END features__pca__n_components=2, features__univ_select__k=1, svm__C=10;, score=1.000 total time= 0.0s
[CV 1/5; 10/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1
[CV 1/5; 10/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1;, score=0.967 total time= 0.0s
[CV 2/5; 10/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1
[CV 2/5; 10/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 3/5; 10/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1
[CV 3/5; 10/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 4/5; 10/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1
[CV 4/5; 10/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 5/5; 10/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1
[CV 5/5; 10/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 1/5; 11/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=1
[CV 1/5; 11/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=1;, score=0.967 total time= 0.0s
[CV 2/5; 11/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=1
[CV 2/5; 11/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=1;, score=1.000 total time= 0.0s
[CV 3/5; 11/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=1
[CV 3/5; 11/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=1;, score=0.933 total time= 0.0s
[CV 4/5; 11/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=1
[CV 4/5; 11/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=1;, score=0.967 total time= 0.0s
[CV 5/5; 11/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=1
[CV 5/5; 11/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=1;, score=1.000 total time= 0.0s
[CV 1/5; 12/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=10
[CV 1/5; 12/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=10;, score=0.967 total time= 0.0s
[CV 2/5; 12/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=10
[CV 2/5; 12/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=10;, score=1.000 total time= 0.0s
[CV 3/5; 12/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=10
[CV 3/5; 12/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=10;, score=0.900 total time= 0.0s
[CV 4/5; 12/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=10
[CV 4/5; 12/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=10;, score=0.933 total time= 0.0s
[CV 5/5; 12/18] START features__pca__n_components=2, features__univ_select__k=2, svm__C=10
[CV 5/5; 12/18] END features__pca__n_components=2, features__univ_select__k=2, svm__C=10;, score=1.000 total time= 0.0s
[CV 1/5; 13/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1
[CV 1/5; 13/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1;, score=0.967 total time= 0.0s
[CV 2/5; 13/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1
[CV 2/5; 13/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 3/5; 13/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1
[CV 3/5; 13/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 4/5; 13/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1
[CV 4/5; 13/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1;, score=0.967 total time= 0.0s
[CV 5/5; 13/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1
[CV 5/5; 13/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 1/5; 14/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=1
[CV 1/5; 14/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=1;, score=0.967 total time= 0.0s
[CV 2/5; 14/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=1
[CV 2/5; 14/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=1;, score=1.000 total time= 0.0s
[CV 3/5; 14/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=1
[CV 3/5; 14/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=1;, score=0.933 total time= 0.0s
[CV 4/5; 14/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=1
[CV 4/5; 14/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=1;, score=0.967 total time= 0.0s
[CV 5/5; 14/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=1
[CV 5/5; 14/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=1;, score=1.000 total time= 0.0s
[CV 1/5; 15/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=10
[CV 1/5; 15/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=10;, score=1.000 total time= 0.0s
[CV 2/5; 15/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=10
[CV 2/5; 15/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=10;, score=1.000 total time= 0.0s
[CV 3/5; 15/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=10
[CV 3/5; 15/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=10;, score=0.933 total time= 0.0s
[CV 4/5; 15/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=10
[CV 4/5; 15/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=10;, score=0.967 total time= 0.0s
[CV 5/5; 15/18] START features__pca__n_components=3, features__univ_select__k=1, svm__C=10
[CV 5/5; 15/18] END features__pca__n_components=3, features__univ_select__k=1, svm__C=10;, score=1.000 total time= 0.0s
[CV 1/5; 16/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1
[CV 1/5; 16/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1;, score=0.967 total time= 0.0s
[CV 2/5; 16/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1
[CV 2/5; 16/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 3/5; 16/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1
[CV 3/5; 16/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1;, score=0.933 total time= 0.0s
[CV 4/5; 16/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1
[CV 4/5; 16/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1;, score=0.967 total time= 0.0s
[CV 5/5; 16/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1
[CV 5/5; 16/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=0.1;, score=1.000 total time= 0.0s
[CV 1/5; 17/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=1
[CV 1/5; 17/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=1;, score=0.967 total time= 0.0s
[CV 2/5; 17/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=1
[CV 2/5; 17/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=1;, score=1.000 total time= 0.0s
[CV 3/5; 17/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=1
[CV 3/5; 17/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=1;, score=0.967 total time= 0.0s
[CV 4/5; 17/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=1
[CV 4/5; 17/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=1;, score=0.967 total time= 0.0s
[CV 5/5; 17/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=1
[CV 5/5; 17/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=1;, score=1.000 total time= 0.0s
[CV 1/5; 18/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=10
[CV 1/5; 18/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=10;, score=1.000 total time= 0.0s
[CV 2/5; 18/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=10
[CV 2/5; 18/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=10;, score=1.000 total time= 0.0s
[CV 3/5; 18/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=10
[CV 3/5; 18/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=10;, score=0.900 total time= 0.0s
[CV 4/5; 18/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=10
[CV 4/5; 18/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=10;, score=0.967 total time= 0.0s
[CV 5/5; 18/18] START features__pca__n_components=3, features__univ_select__k=2, svm__C=10
[CV 5/5; 18/18] END features__pca__n_components=3, features__univ_select__k=2, svm__C=10;, score=1.000 total time= 0.0s
Pipeline(steps=[('features',
FeatureUnion(transformer_list=[('pca', PCA(n_components=3)),
('univ_select',
SelectKBest(k=1))])),
('svm', SVC(C=10, kernel='linear'))])
```
```
# Author: Andreas Mueller <[email protected]>
#
# License: BSD 3 clause
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
iris = load_iris()
X, y = iris.data, iris.target
# This dataset is way too high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features were good, too?
selection = SelectKBest(k=1)
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
print("Combined space has", X_features.shape[1], "features")
svm = SVC(kernel="linear")
# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])
param_grid = dict(
features__pca__n_components=[1, 2, 3],
features__univ_select__k=[1, 2],
svm__C=[0.1, 1, 10],
)
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
```
**Total running time of the script:** ( 0 minutes 0.290 seconds)
[`Download Python source code: plot_feature_union.py`](https://scikit-learn.org/1.1/_downloads/01fdc7c95204e4a420de7cd297711693/plot_feature_union.py)
[`Download Jupyter notebook: plot_feature_union.ipynb`](https://scikit-learn.org/1.1/_downloads/1273a3baa87138f2b817bfc78fe7ecb4/plot_feature_union.ipynb)
scikit_learn Column Transformer with Heterogeneous Data Sources
Column Transformer with Heterogeneous Data Sources
==================================================
Datasets can often contain components that require different feature extraction and processing pipelines. This scenario might occur when:
1. your dataset consists of heterogeneous data types (e.g. raster images and text captions),
2. your dataset is stored in a [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame "(in pandas v1.5.1)") and different columns require different processing pipelines.
This example demonstrates how to use [`ColumnTransformer`](../../modules/generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") on a dataset containing different types of features. The choice of features is not particularly helpful, but serves to illustrate the technique.
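As a minimal sketch of the second scenario (separate from this example), `ColumnTransformer` can also select `pandas.DataFrame` columns by name. The tiny DataFrame and its column names below are made up for illustration only.

```
# Sketch only: ColumnTransformer selecting pandas columns by name.
# The DataFrame and its column names are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame(
    {"city": ["Paris", "London", "Paris"], "temperature": [21.0, 18.5, 23.1]}
)
preprocessor = ColumnTransformer(
    [
        ("onehot", OneHotEncoder(), ["city"]),         # categorical column
        ("scale", StandardScaler(), ["temperature"]),  # numeric column
    ]
)
print(preprocessor.fit_transform(df))
```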
```
# Author: Matt Terry <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from sklearn.preprocessing import FunctionTransformer
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.svm import LinearSVC
```
20 newsgroups dataset
---------------------
We will use the [20 newsgroups dataset](https://scikit-learn.org/1.1/datasets/real_world.html#newsgroups-dataset), which comprises posts from newsgroups on 20 topics. This dataset is split into train and test subsets based on messages posted before and after a specific date. We will only use posts from 2 categories to speed up running time.
```
categories = ["sci.med", "sci.space"]
X_train, y_train = fetch_20newsgroups(
random_state=1,
subset="train",
categories=categories,
remove=("footers", "quotes"),
return_X_y=True,
)
X_test, y_test = fetch_20newsgroups(
random_state=1,
subset="test",
categories=categories,
remove=("footers", "quotes"),
return_X_y=True,
)
```
Each feature comprises meta information about that post, such as the subject, and the body of the news post.
```
print(X_train[0])
```
```
From: [email protected] (fred j mccall 575-3539)
Subject: Re: Metric vs English
Article-I.D.: mksol.1993Apr6.131900.8407
Organization: Texas Instruments Inc
Lines: 31
American, perhaps, but nothing military about it. I learned (mostly)
slugs when we talked English units in high school physics and while
the teacher was an ex-Navy fighter jock the book certainly wasn't
produced by the military.
[Poundals were just too flinking small and made the math come out
funny; sort of the same reason proponents of SI give for using that.]
--
"Insisting on perfect safety is for people who don't have the balls to live
in the real world." -- Mary Shafer, NASA Ames Dryden
```
Creating transformers
---------------------
First, we would like a transformer that extracts the subject and body of each post. Since this is a stateless transformation (does not require state information from training data), we can define a function that performs the data transformation then use [`FunctionTransformer`](../../modules/generated/sklearn.preprocessing.functiontransformer#sklearn.preprocessing.FunctionTransformer "sklearn.preprocessing.FunctionTransformer") to create a scikit-learn transformer.
```
def subject_body_extractor(posts):
# construct object dtype array with two columns
# first column = 'subject' and second column = 'body'
features = np.empty(shape=(len(posts), 2), dtype=object)
for i, text in enumerate(posts):
# temporary variable `_` stores '\n\n'
headers, _, body = text.partition("\n\n")
# store body text in second column
features[i, 1] = body
prefix = "Subject:"
sub = ""
# save text after 'Subject:' in first column
for line in headers.split("\n"):
if line.startswith(prefix):
sub = line[len(prefix) :]
break
features[i, 0] = sub
return features
subject_body_transformer = FunctionTransformer(subject_body_extractor)
```
We will also create a transformer that extracts the length of the text and the number of sentences.
```
def text_stats(posts):
return [{"length": len(text), "num_sentences": text.count(".")} for text in posts]
text_stats_transformer = FunctionTransformer(text_stats)
```
Classification pipeline
-----------------------
The pipeline below extracts the subject and body from each post using `SubjectBodyExtractor`, producing a (n\_samples, 2) array. This array is then used to compute standard bag-of-words features for the subject and body as well as text length and number of sentences on the body, using `ColumnTransformer`. We combine them, with weights, then train a classifier on the combined set of features.
```
pipeline = Pipeline(
[
# Extract subject & body
("subjectbody", subject_body_transformer),
# Use ColumnTransformer to combine the subject and body features
(
"union",
ColumnTransformer(
[
# bag-of-words for subject (col 0)
("subject", TfidfVectorizer(min_df=50), 0),
# bag-of-words with decomposition for body (col 1)
(
"body_bow",
Pipeline(
[
("tfidf", TfidfVectorizer()),
("best", TruncatedSVD(n_components=50)),
]
),
1,
),
# Pipeline for pulling text stats from post's body
(
"body_stats",
Pipeline(
[
(
"stats",
text_stats_transformer,
), # returns a list of dicts
(
"vect",
DictVectorizer(),
), # list of dicts -> feature matrix
]
),
1,
),
],
# weight above ColumnTransformer features
transformer_weights={
"subject": 0.8,
"body_bow": 0.5,
"body_stats": 1.0,
},
),
),
# Use a SVC classifier on the combined features
("svc", LinearSVC(dual=False)),
],
verbose=True,
)
```
Finally, we fit our pipeline on the training data and use it to predict topics for `X_test`. Performance metrics of our pipeline are then printed.
```
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
print("Classification report:\n\n{}".format(classification_report(y_test, y_pred)))
```
```
[Pipeline] ....... (step 1 of 3) Processing subjectbody, total= 0.0s
[Pipeline] ............. (step 2 of 3) Processing union, total= 0.4s
[Pipeline] ............... (step 3 of 3) Processing svc, total= 0.0s
Classification report:
precision recall f1-score support
0 0.84 0.87 0.86 396
1 0.87 0.83 0.85 394
accuracy 0.85 790
macro avg 0.85 0.85 0.85 790
weighted avg 0.85 0.85 0.85 790
```
**Total running time of the script:** ( 0 minutes 2.388 seconds)
[`Download Python source code: plot_column_transformer.py`](https://scikit-learn.org/1.1/_downloads/3e8abcbcde21489054beb05cb87da525/plot_column_transformer.py)
[`Download Jupyter notebook: plot_column_transformer.ipynb`](https://scikit-learn.org/1.1/_downloads/15dc6d7a809edf988a7328336a25faec/plot_column_transformer.ipynb)
scikit_learn Selecting dimensionality reduction with Pipeline and GridSearchCV
Selecting dimensionality reduction with Pipeline and GridSearchCV
=================================================================
This example constructs a pipeline that does dimensionality reduction followed by prediction with a support vector classifier. It demonstrates the use of `GridSearchCV` and `Pipeline` to optimize over different classes of estimators in a single CV run – unsupervised `PCA` and `NMF` dimensionality reductions are compared to univariate feature selection during the grid search.
Additionally, `Pipeline` can be instantiated with the `memory` argument to memoize the transformers within the pipeline, avoiding fitting the same transformers over and over again.
Note that the use of `memory` to enable caching becomes interesting when the fitting of a transformer is costly.
Illustration of `Pipeline` and `GridSearchCV`
---------------------------------------------
```
# Authors: Robert McGibbon, Joel Nothman, Guillaume Lemaitre
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.decomposition import PCA, NMF
from sklearn.feature_selection import SelectKBest, chi2
pipe = Pipeline(
[
# the reduce_dim stage is populated by the param_grid
("reduce_dim", "passthrough"),
("classify", LinearSVC(dual=False, max_iter=10000)),
]
)
N_FEATURES_OPTIONS = [2, 4, 8]
C_OPTIONS = [1, 10, 100, 1000]
param_grid = [
{
"reduce_dim": [PCA(iterated_power=7), NMF()],
"reduce_dim__n_components": N_FEATURES_OPTIONS,
"classify__C": C_OPTIONS,
},
{
"reduce_dim": [SelectKBest(chi2)],
"reduce_dim__k": N_FEATURES_OPTIONS,
"classify__C": C_OPTIONS,
},
]
reducer_labels = ["PCA", "NMF", "KBest(chi2)"]
grid = GridSearchCV(pipe, n_jobs=1, param_grid=param_grid)
X, y = load_digits(return_X_y=True)
grid.fit(X, y)
mean_scores = np.array(grid.cv_results_["mean_test_score"])
# scores are in the order of param_grid iteration, which is alphabetical
mean_scores = mean_scores.reshape(len(C_OPTIONS), -1, len(N_FEATURES_OPTIONS))
# select score for best C
mean_scores = mean_scores.max(axis=0)
bar_offsets = np.arange(len(N_FEATURES_OPTIONS)) * (len(reducer_labels) + 1) + 0.5
plt.figure()
COLORS = "bgrcmyk"
for i, (label, reducer_scores) in enumerate(zip(reducer_labels, mean_scores)):
plt.bar(bar_offsets + i, reducer_scores, label=label, color=COLORS[i])
plt.title("Comparing feature reduction techniques")
plt.xlabel("Reduced number of features")
plt.xticks(bar_offsets + len(reducer_labels) / 2, N_FEATURES_OPTIONS)
plt.ylabel("Digit classification accuracy")
plt.ylim((0, 1))
plt.legend(loc="upper left")
plt.show()
```
Caching transformers within a `Pipeline`
----------------------------------------
It is sometimes worthwhile storing the state of a specific transformer since it could be used again. Using a pipeline in `GridSearchCV` triggers such situations. Therefore, we use the argument `memory` to enable caching.
Warning
Note that this example is, however, only an illustration since for this specific case fitting PCA is not necessarily slower than loading the cache. Hence, use the `memory` constructor parameter when the fitting of a transformer is costly.
```
from joblib import Memory
from shutil import rmtree
# Create a temporary folder to store the transformers of the pipeline
location = "cachedir"
memory = Memory(location=location, verbose=10)
cached_pipe = Pipeline(
[("reduce_dim", PCA()), ("classify", LinearSVC(dual=False, max_iter=10000))],
memory=memory,
)
# This time, a cached pipeline will be used within the grid search
# Delete the temporary cache before exiting
memory.clear(warn=False)
rmtree(location)
```
The `PCA` fitting is only computed at the evaluation of the first configuration of the `C` parameter of the `LinearSVC` classifier. The other configurations of `C` trigger loading of the cached `PCA` estimator data, which saves processing time. Therefore, caching the pipeline with `memory` is highly beneficial when fitting a transformer is costly.
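The snippet above stops short of actually running a grid search with the cached pipeline. A minimal sketch of what that would look like, assuming `cached_pipe`, `param_grid`, `X` and `y` from the earlier blocks are still in scope and the temporary cache has not yet been cleared:

```
# Sketch only: reuse the earlier param_grid with the cached pipeline.
# Run this before memory.clear(warn=False) / rmtree(location).
from sklearn.model_selection import GridSearchCV

grid_cached = GridSearchCV(cached_pipe, n_jobs=1, param_grid=param_grid)
grid_cached.fit(X, y)  # repeated PCA/NMF fits are served from the joblib cache
```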
**Total running time of the script:** ( 0 minutes 3.938 seconds)
[`Download Python source code: plot_compare_reduction.py`](https://scikit-learn.org/1.1/_downloads/f30499f1e20a3d5f0f83bb90cd6ff348/plot_compare_reduction.py)
[`Download Jupyter notebook: plot_compare_reduction.ipynb`](https://scikit-learn.org/1.1/_downloads/e38f4849bd47832b7b365f2fa9d31dd6/plot_compare_reduction.ipynb)
scikit_learn The Digit Dataset
The Digit Dataset
=================
This dataset is made up of 1797 8x8 images. Each image, like the one shown below, is of a hand-written digit. In order to utilize an 8x8 figure like this, we’d have to first transform it into a feature vector with length 64.
See [here](https://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits) for more information about this dataset.
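As a small illustration of that flattening step (a sketch, separate from the example code below), the 8x8 images can be reshaped into length-64 feature vectors; `load_digits` also exposes the already-flattened version as `digits.data`.

```
# Sketch: turn the 8x8 digit images into length-64 feature vectors.
from sklearn import datasets

digits = datasets.load_digits()
n_samples = len(digits.images)
X_flat = digits.images.reshape((n_samples, -1))
print(X_flat.shape)       # (1797, 64)
print(digits.data.shape)  # same data, already flattened by scikit-learn
```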
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
from sklearn import datasets
import matplotlib.pyplot as plt
# Load the digits dataset
digits = datasets.load_digits()
# Display the last digit
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation="nearest")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.049 seconds)
[`Download Python source code: plot_digits_last_image.py`](https://scikit-learn.org/1.1/_downloads/4ddbef459f3a80ccda29978b8cb0ef79/plot_digits_last_image.py)
[`Download Jupyter notebook: plot_digits_last_image.ipynb`](https://scikit-learn.org/1.1/_downloads/79791c0d96848daf4df02b5b61ced25d/plot_digits_last_image.ipynb)
scikit_learn The Iris Dataset
The Iris Dataset
================
This data set consists of the petal and sepal measurements of 3 different types of irises (Setosa, Versicolour, and Virginica), stored in a 150x4 numpy.ndarray.
The rows are the samples and the columns are: Sepal Length, Sepal Width, Petal Length and Petal Width.
The below plot uses the first two features. See [here](https://en.wikipedia.org/wiki/Iris_flower_data_set) for more information on this dataset.
[Two figures: a scatter plot of the first two (sepal) features colored by class, and a 3D scatter of the first three PCA directions.]
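To make the 150x4 layout concrete, here is a quick check, shown as a sketch separate from the plotting code below.

```
# Sketch: confirm the shape and column meanings of the iris data.
from sklearn import datasets

iris = datasets.load_iris()
print(iris.data.shape)     # (150, 4)
print(iris.feature_names)  # sepal length/width, petal length/width (in cm)
print(iris.target_names)   # ['setosa' 'versicolor' 'virginica']
```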
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
# unused but required import for doing 3d projections with matplotlib < 3.2
import mpl_toolkits.mplot3d # noqa: F401
from sklearn import datasets
from sklearn.decomposition import PCA
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
plt.figure(2, figsize=(8, 6))
plt.clf()
# Plot the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1, edgecolor="k")
plt.xlabel("Sepal length")
plt.ylabel("Sepal width")
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
# To get a better understanding of the interaction of the dimensions
# plot the first three PCA dimensions
fig = plt.figure(1, figsize=(8, 6))
ax = fig.add_subplot(111, projection="3d", elev=-150, azim=110)
X_reduced = PCA(n_components=3).fit_transform(iris.data)
ax.scatter(
X_reduced[:, 0],
X_reduced[:, 1],
X_reduced[:, 2],
c=y,
cmap=plt.cm.Set1,
edgecolor="k",
s=40,
)
ax.set_title("First three PCA directions")
ax.set_xlabel("1st eigenvector")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("2nd eigenvector")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("3rd eigenvector")
ax.w_zaxis.set_ticklabels([])
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.134 seconds)
[`Download Python source code: plot_iris_dataset.py`](https://scikit-learn.org/1.1/_downloads/5b08a262d5845e4674288edb801adf93/plot_iris_dataset.py)
[`Download Jupyter notebook: plot_iris_dataset.ipynb`](https://scikit-learn.org/1.1/_downloads/26998096b90db15754e891c733ae032c/plot_iris_dataset.ipynb)
scikit_learn Plot randomly generated classification dataset
Plot randomly generated classification dataset
==============================================
This example plots several randomly generated classification datasets. For easy visualization, all datasets have 2 features, plotted on the x and y axis. The color of each point represents its class label.
The first 4 plots use the [`make_classification`](../../modules/generated/sklearn.datasets.make_classification#sklearn.datasets.make_classification "sklearn.datasets.make_classification") with different numbers of informative features, clusters per class and classes. The final 2 plots use [`make_blobs`](../../modules/generated/sklearn.datasets.make_blobs#sklearn.datasets.make_blobs "sklearn.datasets.make_blobs") and [`make_gaussian_quantiles`](../../modules/generated/sklearn.datasets.make_gaussian_quantiles#sklearn.datasets.make_gaussian_quantiles "sklearn.datasets.make_gaussian_quantiles").
```
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.datasets import make_blobs
from sklearn.datasets import make_gaussian_quantiles
plt.figure(figsize=(8, 8))
plt.subplots_adjust(bottom=0.05, top=0.9, left=0.05, right=0.95)
plt.subplot(321)
plt.title("One informative feature, one cluster per class", fontsize="small")
X1, Y1 = make_classification(
n_features=2, n_redundant=0, n_informative=1, n_clusters_per_class=1
)
plt.scatter(X1[:, 0], X1[:, 1], marker="o", c=Y1, s=25, edgecolor="k")
plt.subplot(322)
plt.title("Two informative features, one cluster per class", fontsize="small")
X1, Y1 = make_classification(
n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1
)
plt.scatter(X1[:, 0], X1[:, 1], marker="o", c=Y1, s=25, edgecolor="k")
plt.subplot(323)
plt.title("Two informative features, two clusters per class", fontsize="small")
X2, Y2 = make_classification(n_features=2, n_redundant=0, n_informative=2)
plt.scatter(X2[:, 0], X2[:, 1], marker="o", c=Y2, s=25, edgecolor="k")
plt.subplot(324)
plt.title("Multi-class, two informative features, one cluster", fontsize="small")
X1, Y1 = make_classification(
n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1, n_classes=3
)
plt.scatter(X1[:, 0], X1[:, 1], marker="o", c=Y1, s=25, edgecolor="k")
plt.subplot(325)
plt.title("Three blobs", fontsize="small")
X1, Y1 = make_blobs(n_features=2, centers=3)
plt.scatter(X1[:, 0], X1[:, 1], marker="o", c=Y1, s=25, edgecolor="k")
plt.subplot(326)
plt.title("Gaussian divided into three quantiles", fontsize="small")
X1, Y1 = make_gaussian_quantiles(n_features=2, n_classes=3)
plt.scatter(X1[:, 0], X1[:, 1], marker="o", c=Y1, s=25, edgecolor="k")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.246 seconds)
[`Download Python source code: plot_random_dataset.py`](https://scikit-learn.org/1.1/_downloads/f7f92b18ff7c0777a7a509119e28ca5c/plot_random_dataset.py)
[`Download Jupyter notebook: plot_random_dataset.ipynb`](https://scikit-learn.org/1.1/_downloads/c940445d2436f8be049bcbd0dfdc0c49/plot_random_dataset.ipynb)
| programming_docs |
scikit_learn Plot randomly generated multilabel dataset
Plot randomly generated multilabel dataset
==========================================
This illustrates the [`make_multilabel_classification`](../../modules/generated/sklearn.datasets.make_multilabel_classification#sklearn.datasets.make_multilabel_classification "sklearn.datasets.make_multilabel_classification") dataset generator. Each sample consists of counts of two features (up to 50 in total), which are differently distributed in each of two classes.
Points are labeled as follows, where Y means the class is present:
| 1 | 2 | 3 | Color |
| --- | --- | --- | --- |
| Y | N | N | Red |
| N | Y | N | Blue |
| N | N | Y | Yellow |
| Y | Y | N | Purple |
| Y | N | Y | Orange |
| N | Y | Y | Green |
| Y | Y | Y | Brown |
A star marks the expected sample for each class; its size reflects the probability of selecting that class label.
The left and right examples highlight the `n_labels` parameter: more of the samples in the right plot have 2 or 3 labels.
Note that this two-dimensional example is very degenerate: generally the number of features would be much greater than the “document length”, while here we have much larger documents than vocabulary. Similarly, with `n_classes > n_features`, it is much less likely that a feature distinguishes a particular class.
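The label-to-color mapping in the table above can be read from the indicator encoding used by the plotting code below, which weights the three classes with 1, 2 and 4 to build an index into `COLORS`. A small sketch of that mapping (the sample rows are made up):

```
# Sketch: how a multilabel indicator row maps to a color index (weights 1, 2, 4).
import numpy as np

Y = np.array(
    [
        [1, 0, 0],  # class 1 only    -> index 1 (red)
        [1, 1, 0],  # classes 1 and 2 -> index 3 (purple)
        [0, 1, 1],  # classes 2 and 3 -> index 6 (green)
        [1, 1, 1],  # all three       -> index 7 (brown)
    ]
)
print((Y * [1, 2, 4]).sum(axis=1))  # [1 3 6 7]
```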
```
The data was generated from (random_state=757):
Class P(C) P(w0|C) P(w1|C)
red 0.42 0.51 0.49
blue 0.35 0.18 0.82
yellow 0.23 0.34 0.66
```
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_multilabel_classification as make_ml_clf
COLORS = np.array(
[
"!",
"#FF3333", # red
"#0198E1", # blue
"#BF5FFF", # purple
"#FCD116", # yellow
"#FF7216", # orange
"#4DBD33", # green
"#87421F", # brown
]
)
# Use same random seed for multiple calls to make_multilabel_classification to
# ensure same distributions
RANDOM_SEED = np.random.randint(2**10)
def plot_2d(ax, n_labels=1, n_classes=3, length=50):
X, Y, p_c, p_w_c = make_ml_clf(
n_samples=150,
n_features=2,
n_classes=n_classes,
n_labels=n_labels,
length=length,
allow_unlabeled=False,
return_distributions=True,
random_state=RANDOM_SEED,
)
ax.scatter(
X[:, 0], X[:, 1], color=COLORS.take((Y * [1, 2, 4]).sum(axis=1)), marker="."
)
ax.scatter(
p_w_c[0] * length,
p_w_c[1] * length,
marker="*",
linewidth=0.5,
edgecolor="black",
s=20 + 1500 * p_c**2,
color=COLORS.take([1, 2, 4]),
)
ax.set_xlabel("Feature 0 count")
return p_c, p_w_c
_, (ax1, ax2) = plt.subplots(1, 2, sharex="row", sharey="row", figsize=(8, 4))
plt.subplots_adjust(bottom=0.15)
p_c, p_w_c = plot_2d(ax1, n_labels=1)
ax1.set_title("n_labels=1, length=50")
ax1.set_ylabel("Feature 1 count")
plot_2d(ax2, n_labels=3)
ax2.set_title("n_labels=3, length=50")
ax2.set_xlim(left=0, auto=True)
ax2.set_ylim(bottom=0, auto=True)
plt.show()
print("The data was generated from (random_state=%d):" % RANDOM_SEED)
print("Class", "P(C)", "P(w0|C)", "P(w1|C)", sep="\t")
for k, p, p_w in zip(["red", "blue", "yellow"], p_c, p_w_c.T):
print("%s\t%0.2f\t%0.2f\t%0.2f" % (k, p, p_w[0], p_w[1]))
```
**Total running time of the script:** ( 0 minutes 0.106 seconds)
[`Download Python source code: plot_random_multilabel_dataset.py`](https://scikit-learn.org/1.1/_downloads/59fa6c0bdd3e601dd83ea28de5feeffd/plot_random_multilabel_dataset.py)
[`Download Jupyter notebook: plot_random_multilabel_dataset.ipynb`](https://scikit-learn.org/1.1/_downloads/a8ae7970b854e7e2bea48bb7688bd7b6/plot_random_multilabel_dataset.ipynb)
scikit_learn Lasso model selection via information criteria
Lasso model selection via information criteria
==============================================
This example reproduces the example of Fig. 2 of [[ZHT2007]](#zht2007). A [`LassoLarsIC`](../../modules/generated/sklearn.linear_model.lassolarsic#sklearn.linear_model.LassoLarsIC "sklearn.linear_model.LassoLarsIC") estimator is fit on a diabetes dataset and the AIC and the BIC criteria are used to select the best model.
Note
It is important to note that the optimization to find `alpha` with [`LassoLarsIC`](../../modules/generated/sklearn.linear_model.lassolarsic#sklearn.linear_model.LassoLarsIC "sklearn.linear_model.LassoLarsIC") relies on the AIC or BIC criteria that are computed in-sample, thus on the training set directly. This approach differs from the cross-validation procedure. For a comparison of the two approaches, you can refer to the following example: [Lasso model selection: AIC-BIC / cross-validation](plot_lasso_model_selection#sphx-glr-auto-examples-linear-model-plot-lasso-model-selection-py).
```
# Author: Alexandre Gramfort
# Guillaume Lemaitre
# License: BSD 3 clause
```
We will use the diabetes dataset.
```
from sklearn.datasets import load_diabetes
X, y = load_diabetes(return_X_y=True, as_frame=True)
n_samples = X.shape[0]
X.head()
```
| | age | sex | bmi | bp | s1 | s2 | s3 | s4 | s5 | s6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.038076 | 0.050680 | 0.061696 | 0.021872 | -0.044223 | -0.034821 | -0.043401 | -0.002592 | 0.019907 | -0.017646 |
| 1 | -0.001882 | -0.044642 | -0.051474 | -0.026328 | -0.008449 | -0.019163 | 0.074412 | -0.039493 | -0.068332 | -0.092204 |
| 2 | 0.085299 | 0.050680 | 0.044451 | -0.005670 | -0.045599 | -0.034194 | -0.032356 | -0.002592 | 0.002861 | -0.025930 |
| 3 | -0.089063 | -0.044642 | -0.011595 | -0.036656 | 0.012191 | 0.024991 | -0.036038 | 0.034309 | 0.022688 | -0.009362 |
| 4 | 0.005383 | -0.044642 | -0.036385 | 0.021872 | 0.003935 | 0.015596 | 0.008142 | -0.002592 | -0.031988 | -0.046641 |
Scikit-learn provides an estimator called `LassoLarsIC` that uses either Akaike’s information criterion (AIC) or the Bayesian information criterion (BIC) to select the best model. Before fitting this model, we will scale the dataset.
In the following, we are going to fit two models to compare the values reported by AIC and BIC.
```
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoLarsIC
from sklearn.pipeline import make_pipeline
lasso_lars_ic = make_pipeline(
StandardScaler(), LassoLarsIC(criterion="aic", normalize=False)
).fit(X, y)
```
To be in line with the definition in [[ZHT2007]](#zht2007), we need to rescale the AIC and the BIC. Indeed, Zou et al. ignore some constant terms compared to the original definition of AIC derived from the maximum log-likelihood of a linear model. You can refer to the [mathematical details section of the User Guide](../../modules/linear_model#lasso-lars-ic).
```
def zou_et_al_criterion_rescaling(criterion, n_samples, noise_variance):
"""Rescale the information criterion to follow the definition of Zou et al."""
return criterion - n_samples * np.log(2 * np.pi * noise_variance) - n_samples
```
```
import numpy as np
aic_criterion = zou_et_al_criterion_rescaling(
lasso_lars_ic[-1].criterion_,
n_samples,
lasso_lars_ic[-1].noise_variance_,
)
index_alpha_path_aic = np.flatnonzero(
lasso_lars_ic[-1].alphas_ == lasso_lars_ic[-1].alpha_
)[0]
```
```
lasso_lars_ic.set_params(lassolarsic__criterion="bic").fit(X, y)
bic_criterion = zou_et_al_criterion_rescaling(
lasso_lars_ic[-1].criterion_,
n_samples,
lasso_lars_ic[-1].noise_variance_,
)
index_alpha_path_bic = np.flatnonzero(
lasso_lars_ic[-1].alphas_ == lasso_lars_ic[-1].alpha_
)[0]
```
Now that we have collected the AIC and BIC, we can also check that the minima of both criteria happen at the same alpha, which simplifies the following plot.
```
index_alpha_path_aic == index_alpha_path_bic
```
```
True
```
Finally, we can plot the AIC and BIC criterion and the subsequent selected regularization parameter.
```
import matplotlib.pyplot as plt
plt.plot(aic_criterion, color="tab:blue", marker="o", label="AIC criterion")
plt.plot(bic_criterion, color="tab:orange", marker="o", label="BIC criterion")
plt.vlines(
index_alpha_path_bic,
aic_criterion.min(),
aic_criterion.max(),
color="black",
linestyle="--",
label="Selected alpha",
)
plt.legend()
plt.ylabel("Information criterion")
plt.xlabel("Lasso model sequence")
_ = plt.title("Lasso model selection via AIC and BIC")
```
**Total running time of the script:** ( 0 minutes 0.087 seconds)
[`Download Python source code: plot_lasso_lars_ic.py`](https://scikit-learn.org/1.1/_downloads/8c96d910ed5b614924f36c896b1934a6/plot_lasso_lars_ic.py)
[`Download Jupyter notebook: plot_lasso_lars_ic.ipynb`](https://scikit-learn.org/1.1/_downloads/51833337bfc73d152b44902e5baa50ff/plot_lasso_lars_ic.ipynb)
scikit_learn Regularization path of L1- Logistic Regression
Regularization path of L1- Logistic Regression
==============================================
Train l1-penalized logistic regression models on a binary classification problem derived from the Iris dataset.
The models are ordered from the most regularized to the least regularized. The 4 coefficients of the models are collected and plotted as a “regularization path”: on the left-hand side of the figure (strong regularizers), all the coefficients are exactly 0. As the regularization gets progressively looser, the coefficients take non-zero values one after the other.
Here we choose the liblinear solver because it can efficiently optimize for the Logistic Regression loss with a non-smooth, sparsity inducing l1 penalty.
Also note that we set a low value for the tolerance to make sure that the model has converged before collecting the coefficients.
We also use `warm_start=True`, which means that the coefficients of each model are reused to initialize the next model fit, speeding up the computation of the full path.
```
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
```
Load data
---------
```
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
X = X[y != 2]
y = y[y != 2]
X /= X.max() # Normalize X to speed-up convergence
```
Compute regularization path
---------------------------
```
import numpy as np
from sklearn import linear_model
from sklearn.svm import l1_min_c
cs = l1_min_c(X, y, loss="log") * np.logspace(0, 7, 16)
clf = linear_model.LogisticRegression(
penalty="l1",
solver="liblinear",
tol=1e-6,
max_iter=int(1e6),
warm_start=True,
intercept_scaling=10000.0,
)
coefs_ = []
for c in cs:
clf.set_params(C=c)
clf.fit(X, y)
coefs_.append(clf.coef_.ravel().copy())
coefs_ = np.array(coefs_)
```
Plot regularization path
------------------------
```
import matplotlib.pyplot as plt
plt.plot(np.log10(cs), coefs_, marker="o")
ymin, ymax = plt.ylim()
plt.xlabel("log(C)")
plt.ylabel("Coefficients")
plt.title("Logistic Regression Path")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.105 seconds)
[`Download Python source code: plot_logistic_path.py`](https://scikit-learn.org/1.1/_downloads/33d1c05f40549996ff7b58dfd3eb9d23/plot_logistic_path.py)
[`Download Jupyter notebook: plot_logistic_path.ipynb`](https://scikit-learn.org/1.1/_downloads/0b601219a14824c971bbf8bb797e8973/plot_logistic_path.ipynb)
scikit_learn MNIST classification using multinomial logistic + L1
MNIST classification using multinomial logistic + L1
====================================================
Here we fit a multinomial logistic regression with an L1 penalty on a subset of the MNIST digits classification task. We use the SAGA algorithm for this purpose: it is a solver that is fast when the number of samples is significantly larger than the number of features and is able to finely optimize non-smooth objective functions, which is the case with the l1 penalty. Test accuracy reaches > 0.8, while the weight vectors remain *sparse* and therefore more easily *interpretable*.
Note that the accuracy of this l1-penalized linear model is significantly below what can be reached by an l2-penalized linear model or a non-linear multi-layer perceptron model on this dataset.
```
Sparsity with L1 penalty: 74.57%
Test score with L1 penalty: 0.8253
Example run in 19.240 s
```
```
# Author: Arthur Mensch <[email protected]>
# License: BSD 3 clause
import time
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.utils import check_random_state
# Turn down for faster convergence
t0 = time.time()
train_samples = 5000
# Load data from https://www.openml.org/d/554
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
random_state = check_random_state(0)
permutation = random_state.permutation(X.shape[0])
X = X[permutation]
y = y[permutation]
X = X.reshape((X.shape[0], -1))
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=train_samples, test_size=10000
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Turn up tolerance for faster convergence
clf = LogisticRegression(C=50.0 / train_samples, penalty="l1", solver="saga", tol=0.1)
clf.fit(X_train, y_train)
sparsity = np.mean(clf.coef_ == 0) * 100
score = clf.score(X_test, y_test)
# print('Best C % .4f' % clf.C_)
print("Sparsity with L1 penalty: %.2f%%" % sparsity)
print("Test score with L1 penalty: %.4f" % score)
coef = clf.coef_.copy()
plt.figure(figsize=(10, 5))
scale = np.abs(coef).max()
for i in range(10):
l1_plot = plt.subplot(2, 5, i + 1)
l1_plot.imshow(
coef[i].reshape(28, 28),
interpolation="nearest",
cmap=plt.cm.RdBu,
vmin=-scale,
vmax=scale,
)
l1_plot.set_xticks(())
l1_plot.set_yticks(())
l1_plot.set_xlabel("Class %i" % i)
plt.suptitle("Classification vector for...")
run_time = time.time() - t0
print("Example run in %.3f s" % run_time)
plt.show()
```
**Total running time of the script:** ( 0 minutes 19.305 seconds)
[`Download Python source code: plot_sparse_logistic_regression_mnist.py`](https://scikit-learn.org/1.1/_downloads/5a847ed2e1c03e450c5f9dee339423ad/plot_sparse_logistic_regression_mnist.py)
[`Download Jupyter notebook: plot_sparse_logistic_regression_mnist.ipynb`](https://scikit-learn.org/1.1/_downloads/2a5eb3b98b9fe593b9bfeb548a54175d/plot_sparse_logistic_regression_mnist.ipynb)
scikit_learn SGD: Maximum margin separating hyperplane
SGD: Maximum margin separating hyperplane
=========================================
Plot the maximum margin separating hyperplane within a two-class separable dataset using a linear Support Vector Machines classifier trained using SGD.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_blobs
# we create 50 separable points
X, Y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
# fit the model
clf = SGDClassifier(loss="hinge", alpha=0.01, max_iter=200)
clf.fit(X, Y)
# plot the line, the points, and the nearest vectors to the plane
xx = np.linspace(-1, 5, 10)
yy = np.linspace(-1, 5, 10)
X1, X2 = np.meshgrid(xx, yy)
Z = np.empty(X1.shape)
for (i, j), val in np.ndenumerate(X1):
x1 = val
x2 = X2[i, j]
p = clf.decision_function([[x1, x2]])
Z[i, j] = p[0]
levels = [-1.0, 0.0, 1.0]
linestyles = ["dashed", "solid", "dashed"]
colors = "k"
plt.contour(X1, X2, Z, levels, colors=colors, linestyles=linestyles)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, edgecolor="black", s=20)
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.062 seconds)
[`Download Python source code: plot_sgd_separating_hyperplane.py`](https://scikit-learn.org/1.1/_downloads/600083c06dc28955779ad845ac1dde60/plot_sgd_separating_hyperplane.py)
[`Download Jupyter notebook: plot_sgd_separating_hyperplane.ipynb`](https://scikit-learn.org/1.1/_downloads/305aa8bba883838e1ca0d690e78e9fbd/plot_sgd_separating_hyperplane.ipynb)
scikit_learn Plot Ridge coefficients as a function of the regularization
Plot Ridge coefficients as a function of the regularization
===========================================================
Shows the effect of collinearity in the coefficients of an estimator.
[`Ridge`](../../modules/generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge") Regression is the estimator used in this example. Each color represents a different feature of the coefficient vector, and this is displayed as a function of the regularization parameter.
This example also shows the usefulness of applying Ridge regression to highly ill-conditioned matrices. For such matrices, a slight change in the target variable can cause huge variances in the calculated weights. In such cases, it is useful to set a certain regularization (alpha) to reduce this variation (noise).
When alpha is very large, the regularization effect dominates the squared loss function and the coefficients tend to zero. At the end of the path, as alpha tends toward zero and the solution tends towards the ordinary least squares solution, the coefficients exhibit large oscillations. In practice it is necessary to tune alpha so that a balance is maintained between the two.
```
# Author: Fabian Pedregosa -- <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
# X is the 10x10 Hilbert matrix
X = 1.0 / (np.arange(1, 11) + np.arange(0, 10)[:, np.newaxis])
y = np.ones(10)
```
Compute paths
-------------
```
n_alphas = 200
alphas = np.logspace(-10, -2, n_alphas)
coefs = []
for a in alphas:
ridge = linear_model.Ridge(alpha=a, fit_intercept=False)
ridge.fit(X, y)
coefs.append(ridge.coef_)
```
Display results
---------------
```
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale("log")
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.xlabel("alpha")
plt.ylabel("weights")
plt.title("Ridge coefficients as a function of the regularization")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.128 seconds)
[`Download Python source code: plot_ridge_path.py`](https://scikit-learn.org/1.1/_downloads/9d5a4167bc60f250de65fe21497c1eb6/plot_ridge_path.py)
[`Download Jupyter notebook: plot_ridge_path.ipynb`](https://scikit-learn.org/1.1/_downloads/6caa16249d07b4f3e57d5f3bf102b137/plot_ridge_path.ipynb)
scikit_learn Lasso model selection: AIC-BIC / cross-validation
Lasso model selection: AIC-BIC / cross-validation
=================================================
This example focuses on model selection for Lasso models that are linear models with an L1 penalty for regression problems.
Indeed, several strategies can be used to select the value of the regularization parameter: via cross-validation or using an information criterion, namely AIC or BIC.
In what follows, we will discuss the different strategies in detail.
```
# Author: Olivier Grisel
# Gael Varoquaux
# Alexandre Gramfort
# Guillaume Lemaitre
# License: BSD 3 clause
```
Dataset
-------
In this example, we will use the diabetes dataset.
```
from sklearn.datasets import load_diabetes
X, y = load_diabetes(return_X_y=True, as_frame=True)
X.head()
```
| | age | sex | bmi | bp | s1 | s2 | s3 | s4 | s5 | s6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.038076 | 0.050680 | 0.061696 | 0.021872 | -0.044223 | -0.034821 | -0.043401 | -0.002592 | 0.019907 | -0.017646 |
| 1 | -0.001882 | -0.044642 | -0.051474 | -0.026328 | -0.008449 | -0.019163 | 0.074412 | -0.039493 | -0.068332 | -0.092204 |
| 2 | 0.085299 | 0.050680 | 0.044451 | -0.005670 | -0.045599 | -0.034194 | -0.032356 | -0.002592 | 0.002861 | -0.025930 |
| 3 | -0.089063 | -0.044642 | -0.011595 | -0.036656 | 0.012191 | 0.024991 | -0.036038 | 0.034309 | 0.022688 | -0.009362 |
| 4 | 0.005383 | -0.044642 | -0.036385 | 0.021872 | 0.003935 | 0.015596 | 0.008142 | -0.002592 | -0.031988 | -0.046641 |
In addition, we add some random features to the original data to better illustrate the feature selection performed by the Lasso model.
```
import numpy as np
import pandas as pd
rng = np.random.RandomState(42)
n_random_features = 14
X_random = pd.DataFrame(
rng.randn(X.shape[0], n_random_features),
columns=[f"random_{i:02d}" for i in range(n_random_features)],
)
X = pd.concat([X, X_random], axis=1)
# Show only a subset of the columns
X[X.columns[::3]].head()
```
| | age | bp | s3 | s6 | random\_02 | random\_05 | random\_08 | random\_11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.038076 | 0.021872 | -0.043401 | -0.017646 | 0.647689 | -0.234137 | -0.469474 | -0.465730 |
| 1 | -0.001882 | -0.026328 | 0.074412 | -0.092204 | -1.012831 | -1.412304 | 0.067528 | 0.110923 |
| 2 | 0.085299 | -0.005670 | -0.032356 | -0.025930 | -0.601707 | -1.057711 | 0.208864 | 0.196861 |
| 3 | -0.089063 | -0.036656 | -0.036038 | -0.009362 | -1.478522 | 1.057122 | 0.324084 | 0.611676 |
| 4 | 0.005383 | 0.021872 | 0.008142 | -0.046641 | 0.331263 | -0.185659 | 0.812526 | 1.003533 |
Selecting Lasso via an information criterion
--------------------------------------------
[`LassoLarsIC`](../../modules/generated/sklearn.linear_model.lassolarsic#sklearn.linear_model.LassoLarsIC "sklearn.linear_model.LassoLarsIC") provides a Lasso estimator that uses the Akaike information criterion (AIC) or the Bayes information criterion (BIC) to select the optimal value of the regularization parameter alpha.
Before fitting the model, we will standardize the data with a [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler"). In addition, we will measure the time to fit and tune the hyperparameter alpha in order to compare with the cross-validation strategy.
We will first fit a Lasso model with the AIC criterion.
```
import time
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoLarsIC
from sklearn.pipeline import make_pipeline
start_time = time.time()
lasso_lars_ic = make_pipeline(
StandardScaler(), LassoLarsIC(criterion="aic", normalize=False)
).fit(X, y)
fit_time = time.time() - start_time
```
We store the AIC metric for each value of alpha used during `fit`.
```
results = pd.DataFrame(
{
"alphas": lasso_lars_ic[-1].alphas_,
"AIC criterion": lasso_lars_ic[-1].criterion_,
}
).set_index("alphas")
alpha_aic = lasso_lars_ic[-1].alpha_
```
Now, we perform the same analysis using the BIC criterion.
```
lasso_lars_ic.set_params(lassolarsic__criterion="bic").fit(X, y)
results["BIC criterion"] = lasso_lars_ic[-1].criterion_
alpha_bic = lasso_lars_ic[-1].alpha_
```
We can check which value of `alpha` leads to the minimum AIC and BIC.
```
def highlight_min(x):
x_min = x.min()
return ["font-weight: bold" if v == x_min else "" for v in x]
results.style.apply(highlight_min)
```
| | AIC criterion | BIC criterion |
| --- | --- | --- |
| alphas | | |
| 45.160030 | 5244.764779 | 5244.764779 |
| 42.300343 | 5208.250639 | 5212.341949 |
| 21.542052 | 4928.018900 | 4936.201520 |
| 15.034077 | 4869.678359 | 4881.952289 |
| 6.189631 | 4815.437362 | 4831.802601 |
| 5.329616 | 4810.423641 | 4830.880191 |
| 4.306012 | 4803.573491 | 4828.121351 |
| 4.124225 | 4804.126502 | 4832.765671 |
| 3.820705 | 4803.621645 | 4836.352124 |
| 3.750389 | 4805.012521 | 4841.834310 |
| 3.570655 | 4805.290075 | 4846.203174 |
| 3.550213 | 4807.075887 | 4852.080295 |
| 3.358295 | 4806.878051 | 4855.973770 |
| 3.259297 | 4807.706026 | 4860.893055 |
| 3.237703 | 4809.440409 | 4866.718747 |
| 2.850031 | 4805.989341 | 4867.358990 |
| 2.384338 | 4801.702266 | 4867.163224 |
| 2.296575 | 4802.594754 | 4872.147022 |
| 2.031555 | 4801.236720 | 4874.880298 |
| 1.618263 | 4798.484109 | 4876.218997 |
| 1.526599 | 4799.543841 | 4881.370039 |
| 0.586798 | 4794.238744 | 4880.156252 |
| 0.445978 | 4795.589715 | 4885.598533 |
| 0.259031 | 4796.966981 | 4891.067109 |
| 0.032179 | 4794.662409 | 4888.762537 |
| 0.019069 | 4794.652739 | 4888.752867 |
| 0.000000 | 4796.626286 | 4894.817724 |
Finally, we can plot the AIC and BIC values for the different alpha values. The vertical lines in the plot correspond to the alpha chosen for each criterion. The selected alpha corresponds to the minimum of the AIC or BIC criterion.
```
ax = results.plot()
ax.vlines(
alpha_aic,
results["AIC criterion"].min(),
results["AIC criterion"].max(),
label="alpha: AIC estimate",
linestyles="--",
color="tab:blue",
)
ax.vlines(
alpha_bic,
results["BIC criterion"].min(),
results["BIC criterion"].max(),
label="alpha: BIC estimate",
linestyle="--",
color="tab:orange",
)
ax.set_xlabel(r"$\alpha$")
ax.set_ylabel("criterion")
ax.set_xscale("log")
ax.legend()
_ = ax.set_title(
f"Information-criterion for model selection (training time {fit_time:.2f}s)"
)
```
Model selection with an information-criterion is very fast. It relies on computing the criterion on the in-sample set provided to `fit`. Both criteria estimate the model generalization error based on the training set error and penalize this overly optimistic error. However, this penalty relies on a proper estimation of the degrees of freedom and the noise variance. Both are derived for large samples (asymptotic results) and assume the model is correct, i.e. that the data are actually generated by this model.
These models also tend to break when the problem is badly conditioned (more features than samples). It is then required to provide an estimate of the noise variance.
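To illustrate this last point, here is a small sketch (not part of the original example) on a synthetic, badly conditioned problem; the `noise_variance=1.0` value is an arbitrary placeholder rather than a recommendation.

```
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoLarsIC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Badly conditioned toy problem: more features (100) than samples (30).
X_wide, y_wide = make_regression(n_samples=30, n_features=100, noise=1.0, random_state=0)

# In this regime the criterion cannot estimate the noise variance from the data,
# so an explicit value must be supplied via `noise_variance`.
lasso_bic_wide = make_pipeline(
    StandardScaler(),
    LassoLarsIC(criterion="bic", normalize=False, noise_variance=1.0),
).fit(X_wide, y_wide)
print(f"alpha selected on the wide problem: {lasso_bic_wide[-1].alpha_:.3f}")
```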
Selecting Lasso via cross-validation
------------------------------------
The Lasso estimator can be implemented with different solvers: coordinate descent and least angle regression. They differ with regards to their execution speed and sources of numerical errors.
In scikit-learn, two different estimators are available with integrated cross-validation: [`LassoCV`](../../modules/generated/sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV") and [`LassoLarsCV`](../../modules/generated/sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV") that respectively solve the problem with coordinate descent and least angle regression.
In the remainder of this section, we will present both approaches. For both algorithms, we will use a 20-fold cross-validation strategy.
### Lasso via coordinate descent
Let’s start by making the hyperparameter tuning using [`LassoCV`](../../modules/generated/sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV").
```
from sklearn.linear_model import LassoCV
start_time = time.time()
model = make_pipeline(StandardScaler(), LassoCV(cv=20)).fit(X, y)
fit_time = time.time() - start_time
```
```
import matplotlib.pyplot as plt
ymin, ymax = 2300, 3800
lasso = model[-1]
plt.semilogx(lasso.alphas_, lasso.mse_path_, linestyle=":")
plt.plot(
lasso.alphas_,
lasso.mse_path_.mean(axis=-1),
color="black",
label="Average across the folds",
linewidth=2,
)
plt.axvline(lasso.alpha_, linestyle="--", color="black", label="alpha: CV estimate")
plt.ylim(ymin, ymax)
plt.xlabel(r"$\alpha$")
plt.ylabel("Mean square error")
plt.legend()
_ = plt.title(
f"Mean square error on each fold: coordinate descent (train time: {fit_time:.2f}s)"
)
```
### Lasso via least angle regression
Let’s start by making the hyperparameter tuning using [`LassoLarsCV`](../../modules/generated/sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV").
```
from sklearn.linear_model import LassoLarsCV
start_time = time.time()
model = make_pipeline(StandardScaler(), LassoLarsCV(cv=20, normalize=False)).fit(X, y)
fit_time = time.time() - start_time
```
```
lasso = model[-1]
plt.semilogx(lasso.cv_alphas_, lasso.mse_path_, ":")
plt.semilogx(
lasso.cv_alphas_,
lasso.mse_path_.mean(axis=-1),
color="black",
label="Average across the folds",
linewidth=2,
)
plt.axvline(lasso.alpha_, linestyle="--", color="black", label="alpha CV")
plt.ylim(ymin, ymax)
plt.xlabel(r"$\alpha$")
plt.ylabel("Mean square error")
plt.legend()
_ = plt.title(f"Mean square error on each fold: Lars (train time: {fit_time:.2f}s)")
```
### Summary of cross-validation approach
Both algorithms give roughly the same results.
Lars computes a solution path only for each kink in the path. As a result, it is very efficient when there are only a few kinks, which is the case if there are few features or samples. It is also able to compute the full path without setting any hyperparameter. In contrast, coordinate descent computes the path points on a pre-specified grid (here we use the default). Thus it is more efficient if the number of grid points is smaller than the number of kinks in the path. Such a strategy can be interesting if the number of features is really large and there are enough samples to be selected in each of the cross-validation folds. In terms of numerical errors, for heavily correlated variables, Lars will accumulate more errors, while the coordinate descent algorithm will only sample the path on a grid.
Note how the optimal value of alpha varies for each fold. This illustrates why nested cross-validation is a good strategy when trying to evaluate the performance of a method for which a parameter is chosen by cross-validation: this choice of parameter may not be optimal for a final evaluation on an unseen test set.
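As a rough sketch of what such a nested loop could look like for this example (reusing the `X`, `y` and imports defined above; not part of the original example), one could wrap the tuned pipeline in an outer `cross_val_score` call:

```
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Outer 5-fold loop scores the whole tuning procedure; the inner 20-fold loop
# (inside LassoCV) selects alpha independently on each outer training split.
outer_scores = cross_val_score(
    make_pipeline(StandardScaler(), LassoCV(cv=20)),
    X,
    y,
    cv=5,
    scoring="neg_mean_squared_error",
)
print(f"Nested CV MSE: {-outer_scores.mean():.1f} +/- {outer_scores.std():.1f}")
```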
Conclusion
----------
In this tutorial, we presented two approaches for selecting the best hyperparameter `alpha`: one strategy finds the optimal value of `alpha` by only using the training set and some information criterion, and another strategy is based on cross-validation.
In this example, both approaches work similarly. The in-sample hyperparameter selection is even more efficient in terms of computational performance. However, it can only be used when the number of samples is large enough compared to the number of features.
That’s why hyperparameter optimization via cross-validation is a safe strategy: it works in different settings.
**Total running time of the script:** ( 0 minutes 0.771 seconds)
[`Download Python source code: plot_lasso_model_selection.py`](https://scikit-learn.org/1.1/_downloads/58580795dd881384f33e7e6492e154e2/plot_lasso_model_selection.py)
[`Download Jupyter notebook: plot_lasso_model_selection.ipynb`](https://scikit-learn.org/1.1/_downloads/901ad159d788e1122a5616f537985e74/plot_lasso_model_selection.ipynb)
scikit_learn Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-elastic-net-precomputed-gram-matrix-with-weighted-samples-py) to download the full example code or to run this example in your browser via Binder
Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples
==========================================================================
The following example shows how to precompute the gram matrix while using weighted samples with an ElasticNet.
If weighted samples are used, the design matrix must be centered and then rescaled by the square root of the weight vector before the gram matrix is computed.
Note
The `sample_weight` vector is also rescaled to sum to `n_samples`; see the documentation for the `sample_weight` parameter of `linear_model.ElasticNet.fit`.
Let’s start by loading the dataset and creating some sample weights.
```
import numpy as np
from sklearn.datasets import make_regression
rng = np.random.RandomState(0)
n_samples = int(1e5)
X, y = make_regression(n_samples=n_samples, noise=0.5, random_state=rng)
sample_weight = rng.lognormal(size=n_samples)
# normalize the sample weights
normalized_weights = sample_weight * (n_samples / (sample_weight.sum()))
```
To fit the elastic net using the `precompute` option together with the sample weights, we must first center the design matrix, and rescale it by the normalized weights prior to computing the gram matrix.
```
X_offset = np.average(X, axis=0, weights=normalized_weights)
X_centered = X - X_offset
X_scaled = X_centered * np.sqrt(normalized_weights)[:, np.newaxis]
gram = np.dot(X_scaled.T, X_scaled)
```
We can now proceed with fitting. We must pass the centered design matrix to `fit`, otherwise the elastic net estimator will detect that it is uncentered and discard the Gram matrix we passed. However, if we passed the scaled design matrix, the preprocessing code would incorrectly rescale it a second time.
```
from sklearn.linear_model import ElasticNet
lm = ElasticNet(alpha=0.01, precompute=gram)
lm.fit(X_centered, y, sample_weight=normalized_weights)
```
```
ElasticNet(alpha=0.01,
precompute=array([[ 9.98809919e+04, -4.48938813e+02, -1.03237920e+03, ...,
-2.25349312e+02, -3.53959628e+02, -1.67451144e+02],
[-4.48938813e+02, 1.00768662e+05, 1.19112072e+02, ...,
-1.07963978e+03, 7.47987268e+01, -5.76195467e+02],
[-1.03237920e+03, 1.19112072e+02, 1.00393284e+05, ...,
-3.07582983e+02, 6.66670169e+02, 2.65799352e+02],
...,
[-2.25349312e+02, -1.07963978e+03, -3.07582983e+02, ...,
9.99891212e+04, -4.58195950e+02, -1.58667835e+02],
[-3.53959628e+02, 7.47987268e+01, 6.66670169e+02, ...,
-4.58195950e+02, 9.98350372e+04, 5.60836363e+02],
[-1.67451144e+02, -5.76195467e+02, 2.65799352e+02, ...,
-1.58667835e+02, 5.60836363e+02, 1.00911944e+05]]))
```
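As a quick sanity check (not part of the original example, and assuming `lm`, `X_centered`, `y` and `normalized_weights` from the cells above are still in scope), we can refit without the precomputed Gram matrix and compare the coefficients:

```
import numpy as np
from sklearn.linear_model import ElasticNet

# Refit on the same centered data without the precomputed Gram matrix; the
# coefficients should agree up to the solver tolerance.
lm_plain = ElasticNet(alpha=0.01, precompute=False)
lm_plain.fit(X_centered, y, sample_weight=normalized_weights)
print("max |coef| difference:", np.abs(lm.coef_ - lm_plain.coef_).max())
```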
**Total running time of the script:** ( 0 minutes 0.938 seconds)
[`Download Python source code: plot_elastic_net_precomputed_gram_matrix_with_weighted_samples.py`](https://scikit-learn.org/1.1/_downloads/c50f4529d4a653ccd8d5117ec9300975/plot_elastic_net_precomputed_gram_matrix_with_weighted_samples.py)
[`Download Jupyter notebook: plot_elastic_net_precomputed_gram_matrix_with_weighted_samples.ipynb`](https://scikit-learn.org/1.1/_downloads/1054d40caffbd65c52b20dac784c7c5c/plot_elastic_net_precomputed_gram_matrix_with_weighted_samples.ipynb)
scikit_learn Multiclass sparse logistic regression on 20newsgroups Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sparse-logistic-regression-20newsgroups-py) to download the full example code or to run this example in your browser via Binder
Multiclass sparse logistic regression on 20newsgroups
=====================================================
Comparison of multinomial logistic L1 vs one-versus-rest L1 logistic regression to classify documents from the 20 newsgroups dataset. Multinomial logistic regression yields more accurate results and is faster to train on the larger scale dataset.
Here we use the l1 sparsity that trims the weights of uninformative features to zero. This is good if the goal is to extract the strongly discriminative vocabulary of each class. If the goal is to get the best predictive accuracy, it is better to use the non sparsity-inducing l2 penalty instead.
A more traditional (and possibly better) way to predict on a sparse subset of input features would be to use univariate feature selection followed by a traditional (l2-penalised) logistic regression model.
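A hedged sketch of that alternative is shown below; the choice of `k=10_000` selected features and the 5,000-sample subset are arbitrary and only meant to illustrate the pipeline, not to reproduce the benchmark.

```
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = fetch_20newsgroups_vectorized(subset="all", return_X_y=True)
X, y = X[:5000], y[:5000]  # same subsampling as in the benchmark below

select_then_l2 = make_pipeline(
    # univariate selection on the non-negative tf-idf features
    SelectKBest(chi2, k=10_000),
    LogisticRegression(solver="saga", penalty="l2", max_iter=100),
)
select_then_l2.fit(X, y)
print(f"mean accuracy on the training subset: {select_then_l2.score(X, y):.3f}")
```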
```
Dataset 20newsgroup, train_samples=4500, n_features=130107, n_classes=20
[model=One versus Rest, solver=saga] Number of epochs: 1
[model=One versus Rest, solver=saga] Number of epochs: 2
[model=One versus Rest, solver=saga] Number of epochs: 3
Test accuracy for model ovr: 0.5960
% non-zero coefficients for model ovr, per class:
[0.26593496 0.43348936 0.26362917 0.31973683 0.37815029 0.2928359
0.27054655 0.62717609 0.19522393 0.30897646 0.34586917 0.28207552
0.34125758 0.29898468 0.34279478 0.59489497 0.38353048 0.35278655
0.19829832 0.14603365]
Run time (3 epochs) for model ovr:1.20
[model=Multinomial, solver=saga] Number of epochs: 1
[model=Multinomial, solver=saga] Number of epochs: 2
[model=Multinomial, solver=saga] Number of epochs: 5
Test accuracy for model multinomial: 0.6440
% non-zero coefficients for model multinomial, per class:
[0.36047253 0.1268187 0.10606655 0.17985197 0.5395559 0.07993421
0.06686804 0.21443888 0.11528972 0.2075215 0.10914094 0.11144673
0.13988486 0.09684337 0.26286057 0.11682692 0.55800226 0.17370318
0.11452112 0.14603365]
Run time (5 epochs) for model multinomial:1.14
Example run in 5.332 s
```
```
# Author: Arthur Mensch
import timeit
import warnings
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning, module="sklearn")
t0 = timeit.default_timer()
# We use SAGA solver
solver = "saga"
# Turn down for faster run time
n_samples = 5000
X, y = fetch_20newsgroups_vectorized(subset="all", return_X_y=True)
X = X[:n_samples]
y = y[:n_samples]
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=42, stratify=y, test_size=0.1
)
train_samples, n_features = X_train.shape
n_classes = np.unique(y).shape[0]
print(
"Dataset 20newsgroup, train_samples=%i, n_features=%i, n_classes=%i"
% (train_samples, n_features, n_classes)
)
models = {
"ovr": {"name": "One versus Rest", "iters": [1, 2, 3]},
"multinomial": {"name": "Multinomial", "iters": [1, 2, 5]},
}
for model in models:
# Add initial chance-level values for plotting purpose
accuracies = [1 / n_classes]
times = [0]
densities = [1]
model_params = models[model]
# Small number of epochs for fast runtime
for this_max_iter in model_params["iters"]:
print(
"[model=%s, solver=%s] Number of epochs: %s"
% (model_params["name"], solver, this_max_iter)
)
lr = LogisticRegression(
solver=solver,
multi_class=model,
penalty="l1",
max_iter=this_max_iter,
random_state=42,
)
t1 = timeit.default_timer()
lr.fit(X_train, y_train)
train_time = timeit.default_timer() - t1
y_pred = lr.predict(X_test)
accuracy = np.sum(y_pred == y_test) / y_test.shape[0]
density = np.mean(lr.coef_ != 0, axis=1) * 100
accuracies.append(accuracy)
densities.append(density)
times.append(train_time)
models[model]["times"] = times
models[model]["densities"] = densities
models[model]["accuracies"] = accuracies
print("Test accuracy for model %s: %.4f" % (model, accuracies[-1]))
print(
"%% non-zero coefficients for model %s, per class:\n %s"
% (model, densities[-1])
)
print(
"Run time (%i epochs) for model %s:%.2f"
% (model_params["iters"][-1], model, times[-1])
)
fig = plt.figure()
ax = fig.add_subplot(111)
for model in models:
name = models[model]["name"]
times = models[model]["times"]
accuracies = models[model]["accuracies"]
ax.plot(times, accuracies, marker="o", label="Model: %s" % name)
ax.set_xlabel("Train time (s)")
ax.set_ylabel("Test accuracy")
ax.legend()
fig.suptitle("Multinomial vs One-vs-Rest Logistic L1\nDataset %s" % "20newsgroups")
fig.tight_layout()
fig.subplots_adjust(top=0.85)
run_time = timeit.default_timer() - t0
print("Example run in %.3f s" % run_time)
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.377 seconds)
[`Download Python source code: plot_sparse_logistic_regression_20newsgroups.py`](https://scikit-learn.org/1.1/_downloads/7e30d5b899fc588cbc75553c65cb76ed/plot_sparse_logistic_regression_20newsgroups.py)
[`Download Jupyter notebook: plot_sparse_logistic_regression_20newsgroups.ipynb`](https://scikit-learn.org/1.1/_downloads/583de4ea98c6544c52ea4c57e62b1813/plot_sparse_logistic_regression_20newsgroups.ipynb)
scikit_learn Poisson regression and non-normal loss Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py) to download the full example code or to run this example in your browser via Binder
Poisson regression and non-normal loss
======================================
This example illustrates the use of log-linear Poisson regression on the [French Motor Third-Party Liability Claims dataset](https://www.openml.org/d/41214) from [[1]](#id2) and compares it with a linear model fitted with the usual least squared error and a non-linear GBRT model fitted with the Poisson loss (and a log-link).
A few definitions:
* A **policy** is a contract between an insurance company and an individual: the **policyholder**, that is, the vehicle driver in this case.
* A **claim** is the request made by a policyholder to the insurer to compensate for a loss covered by the insurance.
* The **exposure** is the duration of the insurance coverage of a given policy, in years.
* The claim **frequency** is the number of claims divided by the exposure, typically measured in number of claims per year.
In this dataset, each sample corresponds to an insurance policy. Available features include driver age, vehicle age, vehicle power, etc.
Our goal is to predict the expected frequency of claims following car accidents for a new policyholder given the historical data over a population of policyholders.
```
# Authors: Christian Lorentzen <[email protected]>
# Roman Yurchak <[email protected]>
# Olivier Grisel <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
The French Motor Third-Party Liability Claims dataset
-----------------------------------------------------
Let’s load the motor claim dataset from OpenML: <https://www.openml.org/d/41214>
```
from sklearn.datasets import fetch_openml
df = fetch_openml(data_id=41214, as_frame=True).frame
df
```
| | IDpol | ClaimNb | Exposure | Area | VehPower | VehAge | DrivAge | BonusMalus | VehBrand | VehGas | Density | Region |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1.0 | 1.0 | 0.10000 | D | 5.0 | 0.0 | 55.0 | 50.0 | B12 | Regular | 1217.0 | R82 |
| 1 | 3.0 | 1.0 | 0.77000 | D | 5.0 | 0.0 | 55.0 | 50.0 | B12 | Regular | 1217.0 | R82 |
| 2 | 5.0 | 1.0 | 0.75000 | B | 6.0 | 2.0 | 52.0 | 50.0 | B12 | Diesel | 54.0 | R22 |
| 3 | 10.0 | 1.0 | 0.09000 | B | 7.0 | 0.0 | 46.0 | 50.0 | B12 | Diesel | 76.0 | R72 |
| 4 | 11.0 | 1.0 | 0.84000 | B | 7.0 | 0.0 | 46.0 | 50.0 | B12 | Diesel | 76.0 | R72 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 678008 | 6114326.0 | 0.0 | 0.00274 | E | 4.0 | 0.0 | 54.0 | 50.0 | B12 | Regular | 3317.0 | R93 |
| 678009 | 6114327.0 | 0.0 | 0.00274 | E | 4.0 | 0.0 | 41.0 | 95.0 | B12 | Regular | 9850.0 | R11 |
| 678010 | 6114328.0 | 0.0 | 0.00274 | D | 6.0 | 2.0 | 45.0 | 50.0 | B12 | Diesel | 1323.0 | R82 |
| 678011 | 6114329.0 | 0.0 | 0.00274 | B | 4.0 | 0.0 | 60.0 | 50.0 | B12 | Regular | 95.0 | R26 |
| 678012 | 6114330.0 | 0.0 | 0.00274 | B | 7.0 | 6.0 | 29.0 | 54.0 | B12 | Diesel | 65.0 | R72 |
678013 rows × 12 columns
The number of claims (`ClaimNb`) is a positive integer that can be modeled as a Poisson distribution. It is then assumed to be the number of discrete events occurring with a constant rate in a given time interval (`Exposure`, in units of years).
Here we want to model the frequency `y = ClaimNb / Exposure` conditionally on `X` via a (scaled) Poisson distribution, and use `Exposure` as `sample_weight`.
```
df["Frequency"] = df["ClaimNb"] / df["Exposure"]
print(
"Average Frequency = {}".format(np.average(df["Frequency"], weights=df["Exposure"]))
)
print(
"Fraction of exposure with zero claims = {0:.1%}".format(
df.loc[df["ClaimNb"] == 0, "Exposure"].sum() / df["Exposure"].sum()
)
)
fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(16, 4))
ax0.set_title("Number of claims")
_ = df["ClaimNb"].hist(bins=30, log=True, ax=ax0)
ax1.set_title("Exposure in years")
_ = df["Exposure"].hist(bins=30, log=True, ax=ax1)
ax2.set_title("Frequency (number of claims per year)")
_ = df["Frequency"].hist(bins=30, log=True, ax=ax2)
```
```
Average Frequency = 0.10070308464041304
Fraction of exposure with zero claims = 93.9%
```
The remaining columns can be used to predict the frequency of claim events. Those columns are very heterogeneous with a mix of categorical and numeric variables with different scales, possibly very unevenly distributed.
In order to fit linear models with those predictors it is therefore necessary to perform standard feature transformations as follows:
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder
from sklearn.preprocessing import StandardScaler, KBinsDiscretizer
from sklearn.compose import ColumnTransformer
log_scale_transformer = make_pipeline(
FunctionTransformer(np.log, validate=False), StandardScaler()
)
linear_model_preprocessor = ColumnTransformer(
[
("passthrough_numeric", "passthrough", ["BonusMalus"]),
("binned_numeric", KBinsDiscretizer(n_bins=10), ["VehAge", "DrivAge"]),
("log_scaled_numeric", log_scale_transformer, ["Density"]),
(
"onehot_categorical",
OneHotEncoder(),
["VehBrand", "VehPower", "VehGas", "Region", "Area"],
),
],
remainder="drop",
)
```
A constant prediction baseline
------------------------------
It is worth noting that more than 93% of policyholders have zero claims. If we were to convert this problem into a binary classification task, it would be significantly imbalanced, and even a simplistic model that only predicts the mean can achieve an accuracy of 93%.
To evaluate the pertinence of the used metrics, we will consider as a baseline a “dummy” estimator that constantly predicts the mean frequency of the training sample.
```
from sklearn.dummy import DummyRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
df_train, df_test = train_test_split(df, test_size=0.33, random_state=0)
dummy = Pipeline(
[
("preprocessor", linear_model_preprocessor),
("regressor", DummyRegressor(strategy="mean")),
]
).fit(df_train, df_train["Frequency"], regressor__sample_weight=df_train["Exposure"])
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/preprocessing/_discretization.py:209: FutureWarning: In version 1.3 onwards, subsample=2e5 will be used by default. Set subsample explicitly to silence this warning in the mean time. Set subsample=None to disable subsampling explicitly.
warnings.warn(
```
Let’s compute the performance of this constant prediction baseline with 3 different regression metrics:
```
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_poisson_deviance
def score_estimator(estimator, df_test):
"""Score an estimator on the test set."""
y_pred = estimator.predict(df_test)
print(
"MSE: %.3f"
% mean_squared_error(
df_test["Frequency"], y_pred, sample_weight=df_test["Exposure"]
)
)
print(
"MAE: %.3f"
% mean_absolute_error(
df_test["Frequency"], y_pred, sample_weight=df_test["Exposure"]
)
)
# Ignore non-positive predictions, as they are invalid for
# the Poisson deviance.
mask = y_pred > 0
if (~mask).any():
n_masked, n_samples = (~mask).sum(), mask.shape[0]
print(
"WARNING: Estimator yields invalid, non-positive predictions "
f" for {n_masked} samples out of {n_samples}. These predictions "
"are ignored when computing the Poisson deviance."
)
print(
"mean Poisson deviance: %.3f"
% mean_poisson_deviance(
df_test["Frequency"][mask],
y_pred[mask],
sample_weight=df_test["Exposure"][mask],
)
)
print("Constant mean frequency evaluation:")
score_estimator(dummy, df_test)
```
```
Constant mean frequency evaluation:
MSE: 0.564
MAE: 0.189
mean Poisson deviance: 0.625
```
(Generalized) linear models
---------------------------
We start by modeling the target variable with the (l2 penalized) least squares linear regression model, more commonly known as Ridge regression. We use a low penalization `alpha`, as we expect such a linear model to under-fit on such a large dataset.
```
from sklearn.linear_model import Ridge
ridge_glm = Pipeline(
[
("preprocessor", linear_model_preprocessor),
("regressor", Ridge(alpha=1e-6)),
]
).fit(df_train, df_train["Frequency"], regressor__sample_weight=df_train["Exposure"])
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/preprocessing/_discretization.py:209: FutureWarning: In version 1.3 onwards, subsample=2e5 will be used by default. Set subsample explicitly to silence this warning in the mean time. Set subsample=None to disable subsampling explicitly.
warnings.warn(
```
The Poisson deviance cannot be computed on non-positive values predicted by the model. For models that do return a few non-positive predictions (e.g. [`Ridge`](../../modules/generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge")) we ignore the corresponding samples, meaning that the obtained Poisson deviance is approximate. An alternative approach could be to use the [`TransformedTargetRegressor`](../../modules/generated/sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor "sklearn.compose.TransformedTargetRegressor") meta-estimator to map `y_pred` to a strictly positive domain.
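A possible sketch of that alternative (not part of the original example, and reusing `Pipeline`, `linear_model_preprocessor`, `Ridge` and `df_train` from above) is shown below. Because the observed frequencies contain zeros, a plain log/exp pair is not directly applicable, so an arbitrary offset `eps` is used; predictions are then bounded below by `-eps` rather than strictly positive.

```
import numpy as np
from sklearn.compose import TransformedTargetRegressor

# eps lets the zero frequencies be log-transformed; predictions come back through
# exp(.) - eps and can therefore only be marginally negative, not arbitrarily so.
eps = 1e-2
ridge_log_target = TransformedTargetRegressor(
    regressor=Pipeline(
        [
            ("preprocessor", linear_model_preprocessor),
            ("regressor", Ridge(alpha=1e-6)),
        ]
    ),
    func=lambda y: np.log(y + eps),
    inverse_func=lambda y: np.exp(y) - eps,
)
ridge_log_target.fit(
    df_train, df_train["Frequency"], regressor__sample_weight=df_train["Exposure"]
)
```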
```
print("Ridge evaluation:")
score_estimator(ridge_glm, df_test)
```
```
Ridge evaluation:
MSE: 0.560
MAE: 0.186
WARNING: Estimator yields invalid, non-positive predictions for 513 samples out of 223745. These predictions are ignored when computing the Poisson deviance.
mean Poisson deviance: 0.597
```
Next we fit the Poisson regressor on the target variable. We set the regularization strength `alpha` to approximately 1e-6 over the number of samples (i.e. `1e-12`) in order to mimic the Ridge regressor, whose L2 penalty term scales differently with the number of samples.
Since the Poisson regressor internally models the log of the expected target value instead of the expected value directly (log vs identity link function), the relationship between X and y is not exactly linear anymore. Therefore the Poisson regressor is called a Generalized Linear Model (GLM) rather than a vanilla linear model as is the case for Ridge regression.
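The scaling argument can be made explicit with a couple of lines (a sketch, not part of the original example):

```
# Ridge penalizes a sum-of-squares data term with a fixed alpha, whereas
# PoissonRegressor averages its deviance over the samples, so a roughly
# equivalent regularization strength is alpha_ridge / n_samples.
alpha_ridge = 1e-6
n_train = df_train.shape[0]
print(f"alpha_ridge / n_train = {alpha_ridge / n_train:.1e}")  # on the order of 1e-12
```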
```
from sklearn.linear_model import PoissonRegressor
n_samples = df_train.shape[0]
poisson_glm = Pipeline(
[
("preprocessor", linear_model_preprocessor),
("regressor", PoissonRegressor(alpha=1e-12, max_iter=300)),
]
)
poisson_glm.fit(
df_train, df_train["Frequency"], regressor__sample_weight=df_train["Exposure"]
)
print("PoissonRegressor evaluation:")
score_estimator(poisson_glm, df_test)
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/preprocessing/_discretization.py:209: FutureWarning: In version 1.3 onwards, subsample=2e5 will be used by default. Set subsample explicitly to silence this warning in the mean time. Set subsample=None to disable subsampling explicitly.
warnings.warn(
PoissonRegressor evaluation:
MSE: 0.560
MAE: 0.186
mean Poisson deviance: 0.594
```
Gradient Boosting Regression Trees for Poisson regression
---------------------------------------------------------
Finally, we will consider a non-linear model, namely Gradient Boosting Regression Trees. Tree-based models do not require the categorical data to be one-hot encoded: instead, we can encode each category label with an arbitrary integer using [`OrdinalEncoder`](../../modules/generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder"). With this encoding, the trees will treat the categorical features as ordered features, which might not always be the desired behavior. However, this effect is limited for deep enough trees, which are able to recover the categorical nature of the features. The main advantage of the [`OrdinalEncoder`](../../modules/generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") over the [`OneHotEncoder`](../../modules/generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder") is that it makes training faster.
Gradient Boosting also gives the possibility to fit the trees with a Poisson loss (with an implicit log-link function) instead of the default least-squares loss. Here we only fit trees with the Poisson loss to keep this example concise.
```
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.preprocessing import OrdinalEncoder
tree_preprocessor = ColumnTransformer(
[
(
"categorical",
OrdinalEncoder(),
["VehBrand", "VehPower", "VehGas", "Region", "Area"],
),
("numeric", "passthrough", ["VehAge", "DrivAge", "BonusMalus", "Density"]),
],
remainder="drop",
)
poisson_gbrt = Pipeline(
[
("preprocessor", tree_preprocessor),
(
"regressor",
HistGradientBoostingRegressor(loss="poisson", max_leaf_nodes=128),
),
]
)
poisson_gbrt.fit(
df_train, df_train["Frequency"], regressor__sample_weight=df_train["Exposure"]
)
print("Poisson Gradient Boosted Trees evaluation:")
score_estimator(poisson_gbrt, df_test)
```
```
Poisson Gradient Boosted Trees evaluation:
MSE: 0.566
MAE: 0.184
mean Poisson deviance: 0.575
```
Like the Poisson GLM above, the gradient boosted trees model minimizes the Poisson deviance. However, because of a higher predictive power, it reaches lower values of Poisson deviance.
Evaluating models with a single train / test split is prone to random fluctuations. If computing resources allow, it should be verified that cross-validated performance metrics would lead to similar conclusions.
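A minimal sketch of such a check (not part of the original example) is shown below; for simplicity it drops the `Exposure` weights, since plain fit parameters are not re-split per fold, so the resulting scores are only indicative.

```
from sklearn.model_selection import cross_validate

cv_result = cross_validate(
    poisson_glm,
    df_train,
    df_train["Frequency"],
    scoring="neg_mean_poisson_deviance",
    cv=5,
    n_jobs=-1,
)
deviances = -cv_result["test_score"]
print(f"CV mean Poisson deviance: {deviances.mean():.3f} +/- {deviances.std():.3f}")
```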
The qualitative difference between these models can also be visualized by comparing the histogram of observed target values with that of predicted values:
```
fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(16, 6), sharey=True)
fig.subplots_adjust(bottom=0.2)
n_bins = 20
for row_idx, label, df in zip(range(2), ["train", "test"], [df_train, df_test]):
df["Frequency"].hist(bins=np.linspace(-1, 30, n_bins), ax=axes[row_idx, 0])
axes[row_idx, 0].set_title("Data")
axes[row_idx, 0].set_yscale("log")
axes[row_idx, 0].set_xlabel("y (observed Frequency)")
axes[row_idx, 0].set_ylim([1e1, 5e5])
axes[row_idx, 0].set_ylabel(label + " samples")
for idx, model in enumerate([ridge_glm, poisson_glm, poisson_gbrt]):
y_pred = model.predict(df)
pd.Series(y_pred).hist(
bins=np.linspace(-1, 4, n_bins), ax=axes[row_idx, idx + 1]
)
axes[row_idx, idx + 1].set(
title=model[-1].__class__.__name__,
yscale="log",
xlabel="y_pred (predicted expected Frequency)",
)
plt.tight_layout()
```
The experimental data presents a long-tailed distribution for `y`. In all models, we predict the expected frequency of a random variable, so we will necessarily have fewer extreme values than the observed realizations of that random variable. This explains why the mode of the histograms of model predictions doesn’t necessarily correspond to the smallest value. Additionally, the normal distribution used in `Ridge` has a constant variance, while for the Poisson distribution used in `PoissonRegressor` and `HistGradientBoostingRegressor`, the variance is proportional to the predicted expected value.
Thus, among the considered estimators, `PoissonRegressor` and `HistGradientBoostingRegressor` are a-priori better suited for modeling the long tail distribution of the non-negative data as compared to the `Ridge` model which makes a wrong assumption on the distribution of the target variable.
The `HistGradientBoostingRegressor` estimator has the most flexibility and is able to predict higher expected values.
Note that we could have used the least squares loss for the `HistGradientBoostingRegressor` model. This would wrongly assume a normally distributed response variable, as does the `Ridge` model, and possibly also lead to slightly negative predictions. However, the gradient boosted trees would still perform relatively well and in particular better than `PoissonRegressor`, thanks to the flexibility of the trees combined with the large number of training samples.
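As a hedged sketch (not part of the original example), the same tree pipeline can be refit with the squared-error loss and scored with the helper defined above:

```
# Same tree pipeline but with the squared-error loss; its predictions can be
# slightly negative, which score_estimator handles by masking them for the deviance.
squared_gbrt = Pipeline(
    [
        ("preprocessor", tree_preprocessor),
        (
            "regressor",
            HistGradientBoostingRegressor(loss="squared_error", max_leaf_nodes=128),
        ),
    ]
)
squared_gbrt.fit(
    df_train, df_train["Frequency"], regressor__sample_weight=df_train["Exposure"]
)
print("Squared-error Gradient Boosted Trees evaluation:")
score_estimator(squared_gbrt, df_test)
```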
Evaluation of the calibration of predictions
--------------------------------------------
To ensure that estimators yield reasonable predictions for different policyholder types, we can bin test samples according to `y_pred` returned by each model. Then for each bin, we compare the mean predicted `y_pred` with the mean observed target:
```
from sklearn.utils import gen_even_slices
def _mean_frequency_by_risk_group(y_true, y_pred, sample_weight=None, n_bins=100):
"""Compare predictions and observations for bins ordered by y_pred.
We order the samples by ``y_pred`` and split it in bins.
In each bin the observed mean is compared with the predicted mean.
Parameters
----------
y_true: array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred: array-like of shape (n_samples,)
Estimated target values.
sample_weight : array-like of shape (n_samples,)
Sample weights.
n_bins: int
Number of bins to use.
Returns
-------
bin_centers: ndarray of shape (n_bins,)
bin centers
y_true_bin: ndarray of shape (n_bins,)
average y_true for each bin
y_pred_bin: ndarray of shape (n_bins,)
average y_pred for each bin
"""
idx_sort = np.argsort(y_pred)
bin_centers = np.arange(0, 1, 1 / n_bins) + 0.5 / n_bins
y_pred_bin = np.zeros(n_bins)
y_true_bin = np.zeros(n_bins)
for n, sl in enumerate(gen_even_slices(len(y_true), n_bins)):
weights = sample_weight[idx_sort][sl]
y_pred_bin[n] = np.average(y_pred[idx_sort][sl], weights=weights)
y_true_bin[n] = np.average(y_true[idx_sort][sl], weights=weights)
return bin_centers, y_true_bin, y_pred_bin
print(f"Actual number of claims: {df_test['ClaimNb'].sum()}")
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 8))
plt.subplots_adjust(wspace=0.3)
for axi, model in zip(ax.ravel(), [ridge_glm, poisson_glm, poisson_gbrt, dummy]):
y_pred = model.predict(df_test)
y_true = df_test["Frequency"].values
exposure = df_test["Exposure"].values
q, y_true_seg, y_pred_seg = _mean_frequency_by_risk_group(
y_true, y_pred, sample_weight=exposure, n_bins=10
)
# Name of the model after the estimator used in the last step of the
# pipeline.
print(f"Predicted number of claims by {model[-1]}: {np.sum(y_pred * exposure):.1f}")
axi.plot(q, y_pred_seg, marker="x", linestyle="--", label="predictions")
axi.plot(q, y_true_seg, marker="o", linestyle="--", label="observations")
axi.set_xlim(0, 1.0)
axi.set_ylim(0, 0.5)
axi.set(
title=model[-1],
xlabel="Fraction of samples sorted by y_pred",
ylabel="Mean Frequency (y_pred)",
)
axi.legend()
plt.tight_layout()
```
```
Actual number of claims: 11935.0
Predicted number of claims by Ridge(alpha=1e-06): 11931.9
Predicted number of claims by PoissonRegressor(alpha=1e-12, max_iter=300): 11939.1
Predicted number of claims by HistGradientBoostingRegressor(loss='poisson', max_leaf_nodes=128): 12196.1
Predicted number of claims by DummyRegressor(): 11931.2
```
The dummy regression model predicts a constant frequency. It therefore assigns the same tied rank to all samples and cannot discriminate between them, but it is nonetheless globally well calibrated (it recovers the mean frequency of the entire population).
The `Ridge` regression model can predict very low expected frequencies that do not match the data. It can therefore severely under-estimate the risk for some policyholders.
`PoissonRegressor` and `HistGradientBoostingRegressor` show better consistency between predicted and observed targets, especially for low predicted target values.
The sum of all predictions also confirms the calibration issue of the `Ridge` model: it under-estimates by more than 3% the total number of claims in the test set while the other three models can approximately recover the total number of claims of the test portfolio.
Evaluation of the ranking power
-------------------------------
For some business applications, we are interested in the ability of the model to rank the riskiest from the safest policyholders, irrespective of the absolute value of the prediction. In this case, the model evaluation would cast the problem as a ranking problem rather than a regression problem.
To compare the 3 models from this perspective, one can plot the cumulative proportion of claims vs the cumulative proportion of exposure for the test samples, ordered by the model predictions from safest to riskiest according to each model.
This plot is called a Lorenz curve and can be summarized by the Gini index:
```
from sklearn.metrics import auc
def lorenz_curve(y_true, y_pred, exposure):
y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
exposure = np.asarray(exposure)
# order samples by increasing predicted risk:
ranking = np.argsort(y_pred)
ranked_frequencies = y_true[ranking]
ranked_exposure = exposure[ranking]
cumulated_claims = np.cumsum(ranked_frequencies * ranked_exposure)
cumulated_claims /= cumulated_claims[-1]
cumulated_exposure = np.cumsum(ranked_exposure)
cumulated_exposure /= cumulated_exposure[-1]
return cumulated_exposure, cumulated_claims
fig, ax = plt.subplots(figsize=(8, 8))
for model in [dummy, ridge_glm, poisson_glm, poisson_gbrt]:
y_pred = model.predict(df_test)
cum_exposure, cum_claims = lorenz_curve(
df_test["Frequency"], y_pred, df_test["Exposure"]
)
gini = 1 - 2 * auc(cum_exposure, cum_claims)
label = "{} (Gini: {:.2f})".format(model[-1], gini)
ax.plot(cum_exposure, cum_claims, linestyle="-", label=label)
# Oracle model: y_pred == y_test
cum_exposure, cum_claims = lorenz_curve(
df_test["Frequency"], df_test["Frequency"], df_test["Exposure"]
)
gini = 1 - 2 * auc(cum_exposure, cum_claims)
label = "Oracle (Gini: {:.2f})".format(gini)
ax.plot(cum_exposure, cum_claims, linestyle="-.", color="gray", label=label)
# Random Baseline
ax.plot([0, 1], [0, 1], linestyle="--", color="black", label="Random baseline")
ax.set(
title="Lorenz curves by model",
xlabel="Cumulative proportion of exposure (from safest to riskiest)",
ylabel="Cumulative proportion of claims",
)
ax.legend(loc="upper left")
```
```
<matplotlib.legend.Legend object at 0x7f6e7ea778b0>
```
As expected, the dummy regressor is unable to correctly rank the samples and therefore performs the worst on this plot.
The tree-based model is significantly better at ranking policyholders by risk while the two linear models perform similarly.
All three models are significantly better than chance but also very far from making perfect predictions.
This last point is expected due to the nature of the problem: the occurrence of accidents is mostly dominated by circumstantial causes that are not captured in the columns of the dataset and can indeed be considered as purely random.
The linear models assume no interactions between the input variables, which likely causes under-fitting. Inserting a polynomial feature extractor ([`PolynomialFeatures`](../../modules/generated/sklearn.preprocessing.polynomialfeatures#sklearn.preprocessing.PolynomialFeatures "sklearn.preprocessing.PolynomialFeatures")) indeed increases their discriminative power by 2 points of Gini index. In particular, it improves the ability of the models to identify the top 5% riskiest profiles.
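A possible sketch of such a pipeline for the Poisson GLM (not part of the original example) is shown below; `interaction_only=True` keeps the expansion manageable, but the fit is still noticeably slower than for the plain GLM.

```
from sklearn.preprocessing import PolynomialFeatures

# Insert pairwise interactions between the preprocessed features before the GLM.
poisson_glm_interactions = Pipeline(
    [
        ("preprocessor", linear_model_preprocessor),
        (
            "interactions",
            PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
        ),
        ("regressor", PoissonRegressor(alpha=1e-12, max_iter=300)),
    ]
)
poisson_glm_interactions.fit(
    df_train, df_train["Frequency"], regressor__sample_weight=df_train["Exposure"]
)
print("PoissonRegressor with interactions evaluation:")
score_estimator(poisson_glm_interactions, df_test)
```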
Main takeaways
--------------
* The performance of the models can be evaluated by their ability to yield well-calibrated predictions and a good ranking.
* The calibration of the model can be assessed by plotting the mean observed value vs the mean predicted value on groups of test samples binned by predicted risk.
* The least squares loss (along with the implicit use of the identity link function) of the Ridge regression model seems to cause this model to be badly calibrated. In particular, it tends to underestimate the risk and can even predict invalid negative frequencies.
* Using the Poisson loss with a log-link can correct these problems and lead to a well-calibrated linear model.
* The Gini index reflects the ability of a model to rank predictions irrespective of their absolute values, and therefore only assesses their ranking power.
* Despite the improvement in calibration, the ranking power of both linear models is comparable and well below the ranking power of the Gradient Boosting Regression Trees.
* The Poisson deviance computed as an evaluation metric reflects both the calibration and the ranking power of the model. It also makes a linear assumption on the ideal relationship between the expected value and the variance of the response variable. For the sake of conciseness we did not check whether this assumption holds.
* Traditional regression metrics such as Mean Squared Error and Mean Absolute Error are hard to meaningfully interpret on count values with many zeros.
```
plt.show()
```
**Total running time of the script:** ( 0 minutes 46.789 seconds)
[`Download Python source code: plot_poisson_regression_non_normal_loss.py`](https://scikit-learn.org/1.1/_downloads/d08611fad91456a69eecccc558014285/plot_poisson_regression_non_normal_loss.py)
[`Download Jupyter notebook: plot_poisson_regression_non_normal_loss.ipynb`](https://scikit-learn.org/1.1/_downloads/2e4791a177381a6102b21e44083615c8/plot_poisson_regression_non_normal_loss.ipynb)
scikit_learn Comparing various online solvers Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sgd-comparison-py) to download the full example code or to run this example in your browser via Binder
Comparing various online solvers
================================
An example showing how different online solvers perform on the hand-written digits dataset.
```
training SGD
training ASGD
training Perceptron
training Passive-Aggressive I
training Passive-Aggressive II
training SAG
```
```
# Author: Rob Zinkov <rob at zinkov dot com>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier, Perceptron
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import LogisticRegression
heldout = [0.95, 0.90, 0.75, 0.50, 0.01]
# Number of rounds to fit and evaluate an estimator.
rounds = 10
X, y = datasets.load_digits(return_X_y=True)
classifiers = [
("SGD", SGDClassifier(max_iter=110)),
("ASGD", SGDClassifier(max_iter=110, average=True)),
("Perceptron", Perceptron(max_iter=110)),
(
"Passive-Aggressive I",
PassiveAggressiveClassifier(max_iter=110, loss="hinge", C=1.0, tol=1e-4),
),
(
"Passive-Aggressive II",
PassiveAggressiveClassifier(
max_iter=110, loss="squared_hinge", C=1.0, tol=1e-4
),
),
(
"SAG",
LogisticRegression(max_iter=110, solver="sag", tol=1e-1, C=1.0e4 / X.shape[0]),
),
]
xx = 1.0 - np.array(heldout)
for name, clf in classifiers:
print("training %s" % name)
rng = np.random.RandomState(42)
yy = []
for i in heldout:
yy_ = []
for r in range(rounds):
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=i, random_state=rng
)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
yy_.append(1 - np.mean(y_pred == y_test))
yy.append(np.mean(yy_))
plt.plot(xx, yy, label=name)
plt.legend(loc="upper right")
plt.xlabel("Proportion train")
plt.ylabel("Test Error Rate")
plt.show()
```
**Total running time of the script:** ( 0 minutes 7.810 seconds)
[`Download Python source code: plot_sgd_comparison.py`](https://scikit-learn.org/1.1/_downloads/333498007f3352f5ce034c50c034c301/plot_sgd_comparison.py)
[`Download Jupyter notebook: plot_sgd_comparison.ipynb`](https://scikit-learn.org/1.1/_downloads/02f111fb3dd79805b161e14c564184fc/plot_sgd_comparison.ipynb)
scikit_learn Linear Regression Example Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-ols-py) to download the full example code or to run this example in your browser via Binder
Linear Regression Example
=========================
The example below uses only the first feature of the `diabetes` dataset, in order to illustrate the data points within the two-dimensional plot. The straight line can be seen in the plot, showing how linear regression attempts to draw a straight line that will best minimize the residual sum of squares between the observed responses in the dataset, and the responses predicted by the linear approximation.
The coefficients, residual sum of squares and the coefficient of determination are also calculated.
```
Coefficients:
[938.23786125]
Mean squared error: 2548.07
Coefficient of determination: 0.47
```
```
# Code source: Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Load the diabetes dataset
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
# Use only one feature
diabetes_X = diabetes_X[:, np.newaxis, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes_y[:-20]
diabetes_y_test = diabetes_y[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Make predictions using the testing set
diabetes_y_pred = regr.predict(diabetes_X_test)
# The coefficients
print("Coefficients: \n", regr.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(diabetes_y_test, diabetes_y_pred))
# The coefficient of determination: 1 is perfect prediction
print("Coefficient of determination: %.2f" % r2_score(diabetes_y_test, diabetes_y_pred))
# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color="black")
plt.plot(diabetes_X_test, diabetes_y_pred, color="blue", linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.037 seconds)
[`Download Python source code: plot_ols.py`](https://scikit-learn.org/1.1/_downloads/5ff7ffaf8076af51ffb8c5732f697c8e/plot_ols.py)
[`Download Jupyter notebook: plot_ols.ipynb`](https://scikit-learn.org/1.1/_downloads/ee7c3a1966ea76f4980405cbc44de6da/plot_ols.ipynb)
scikit_learn Robust linear estimator fitting Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-robust-fit-py) to download the full example code or to run this example in your browser via Binder
Robust linear estimator fitting
===============================
Here a sine function is fit with a polynomial of order 3, for values close to zero.
Robust fitting is demoed in different situations:
* No measurement errors, only modelling errors (fitting a sine with a polynomial)
* Measurement errors in X
* Measurement errors in y
The median absolute deviation to non-corrupt new data is used to judge the quality of the prediction.
What we can see is that:
* RANSAC is good for strong outliers in the y direction
* TheilSen is good for small outliers, both in the X and y directions, but has a breakdown point above which it performs worse than OLS.
* The scores of HuberRegressor may not be compared directly to those of TheilSen and RANSAC because it does not attempt to completely filter the outliers but to lessen their effect.
```
from matplotlib import pyplot as plt
import numpy as np
from sklearn.linear_model import (
LinearRegression,
TheilSenRegressor,
RANSACRegressor,
HuberRegressor,
)
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
np.random.seed(42)
X = np.random.normal(size=400)
y = np.sin(X)
# Make sure that X is 2D
X = X[:, np.newaxis]
X_test = np.random.normal(size=200)
y_test = np.sin(X_test)
X_test = X_test[:, np.newaxis]
y_errors = y.copy()
y_errors[::3] = 3
X_errors = X.copy()
X_errors[::3] = 3
y_errors_large = y.copy()
y_errors_large[::3] = 10
X_errors_large = X.copy()
X_errors_large[::3] = 10
estimators = [
("OLS", LinearRegression()),
("Theil-Sen", TheilSenRegressor(random_state=42)),
("RANSAC", RANSACRegressor(random_state=42)),
("HuberRegressor", HuberRegressor()),
]
colors = {
"OLS": "turquoise",
"Theil-Sen": "gold",
"RANSAC": "lightgreen",
"HuberRegressor": "black",
}
linestyle = {"OLS": "-", "Theil-Sen": "-.", "RANSAC": "--", "HuberRegressor": "--"}
lw = 3
x_plot = np.linspace(X.min(), X.max())
for title, this_X, this_y in [
("Modeling Errors Only", X, y),
("Corrupt X, Small Deviants", X_errors, y),
("Corrupt y, Small Deviants", X, y_errors),
("Corrupt X, Large Deviants", X_errors_large, y),
("Corrupt y, Large Deviants", X, y_errors_large),
]:
plt.figure(figsize=(5, 4))
plt.plot(this_X[:, 0], this_y, "b+")
for name, estimator in estimators:
model = make_pipeline(PolynomialFeatures(3), estimator)
model.fit(this_X, this_y)
mse = mean_squared_error(model.predict(X_test), y_test)
y_plot = model.predict(x_plot[:, np.newaxis])
plt.plot(
x_plot,
y_plot,
color=colors[name],
linestyle=linestyle[name],
linewidth=lw,
label="%s: error = %.3f" % (name, mse),
)
legend_title = "Error of Mean\nAbsolute Deviation\nto Non-corrupt Data"
legend = plt.legend(
loc="upper right", frameon=False, title=legend_title, prop=dict(size="x-small")
)
plt.xlim(-4, 10.2)
plt.ylim(-2, 10.2)
plt.title(title)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.794 seconds)
[`Download Python source code: plot_robust_fit.py`](https://scikit-learn.org/1.1/_downloads/a58a561a3187398ccd5be0dc2e18e01d/plot_robust_fit.py)
[`Download Jupyter notebook: plot_robust_fit.ipynb`](https://scikit-learn.org/1.1/_downloads/c93b4e0b422f79bfcb544a032cebe455/plot_robust_fit.ipynb)
scikit_learn Tweedie regression on insurance claims Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py) to download the full example code or to run this example in your browser via Binder
Tweedie regression on insurance claims
======================================
This example illustrates the use of Poisson, Gamma and Tweedie regression on the [French Motor Third-Party Liability Claims dataset](https://www.openml.org/d/41214), and is inspired by an R tutorial [[1]](#id2).
In this dataset, each sample corresponds to an insurance policy, i.e. a contract between an insurance company and an individual (the policyholder). Available features include driver age, vehicle age, vehicle power, etc.
A few definitions: a *claim* is the request made by a policyholder to the insurer to compensate for a loss covered by the insurance. The *claim amount* is the amount of money that the insurer must pay. The *exposure* is the duration of the insurance coverage of a given policy, in years.
Here our goal is to predict the expected value, i.e. the mean, of the total claim amount per exposure unit also referred to as the pure premium.
There are several possibilities to do that, two of which are:
1. Model the number of claims with a Poisson distribution, and the average claim amount per claim, also known as severity, as a Gamma distribution and multiply the predictions of both in order to get the total claim amount.
2. Model the total claim amount per exposure directly, typically with a Tweedie distribution of Tweedie power \(p \in (1, 2)\).
In this example we will illustrate both approaches. We start by defining a few helper functions for loading the data and visualizing results.
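Before diving into the full pipeline, here is a minimal, self-contained sketch of the estimator used for the second approach on purely synthetic data (the `X_demo`/`y_demo` names and all constants are illustrative and not part of the original example):

```
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.RandomState(0)
X_demo = rng.normal(size=(1000, 3))
# Zero-inflated, non-negative target loosely mimicking a pure premium.
y_demo = rng.poisson(lam=0.2, size=1000) * rng.gamma(shape=2.0, scale=100.0, size=1000)

# A Tweedie power in (1, 2) corresponds to a compound Poisson-Gamma distribution.
glm_demo = TweedieRegressor(power=1.9, alpha=0.1, max_iter=1000).fit(X_demo, y_demo)
print(glm_demo.predict(X_demo[:3]))
```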
```
# Authors: Christian Lorentzen <[email protected]>
# Roman Yurchak <[email protected]>
# Olivier Grisel <[email protected]>
# License: BSD 3 clause
```
```
from functools import partial
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.metrics import mean_tweedie_deviance
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
def load_mtpl2(n_samples=100000):
"""Fetch the French Motor Third-Party Liability Claims dataset.
Parameters
----------
n_samples: int, default=100000
number of samples to select (for faster run time). Full dataset has
678013 samples.
"""
# freMTPL2freq dataset from https://www.openml.org/d/41214
df_freq = fetch_openml(data_id=41214, as_frame=True)["data"]
df_freq["IDpol"] = df_freq["IDpol"].astype(int)
df_freq.set_index("IDpol", inplace=True)
# freMTPL2sev dataset from https://www.openml.org/d/41215
df_sev = fetch_openml(data_id=41215, as_frame=True)["data"]
# sum ClaimAmount over identical IDs
df_sev = df_sev.groupby("IDpol").sum()
df = df_freq.join(df_sev, how="left")
df["ClaimAmount"].fillna(0, inplace=True)
# unquote string fields
for column_name in df.columns[df.dtypes.values == object]:
df[column_name] = df[column_name].str.strip("'")
return df.iloc[:n_samples]
def plot_obs_pred(
df,
feature,
weight,
observed,
predicted,
y_label=None,
title=None,
ax=None,
fill_legend=False,
):
"""Plot observed and predicted - aggregated per feature level.
Parameters
----------
df : DataFrame
input data
feature: str
a column name of df for the feature to be plotted
weight : str
column name of df with the values of weights or exposure
observed : str
a column name of df with the observed target
predicted : DataFrame
a dataframe, with the same index as df, with the predicted target
fill_legend : bool, default=False
whether to show fill_between legend
"""
# aggregate observed and predicted variables by feature level
df_ = df.loc[:, [feature, weight]].copy()
df_["observed"] = df[observed] * df[weight]
df_["predicted"] = predicted * df[weight]
df_ = (
df_.groupby([feature])[[weight, "observed", "predicted"]]
.sum()
.assign(observed=lambda x: x["observed"] / x[weight])
.assign(predicted=lambda x: x["predicted"] / x[weight])
)
ax = df_.loc[:, ["observed", "predicted"]].plot(style=".", ax=ax)
y_max = df_.loc[:, ["observed", "predicted"]].values.max() * 0.8
p2 = ax.fill_between(
df_.index,
0,
y_max * df_[weight] / df_[weight].values.max(),
color="g",
alpha=0.1,
)
if fill_legend:
ax.legend([p2], ["{} distribution".format(feature)])
ax.set(
ylabel=y_label if y_label is not None else None,
title=title if title is not None else "Train: Observed vs Predicted",
)
def score_estimator(
estimator,
X_train,
X_test,
df_train,
df_test,
target,
weights,
tweedie_powers=None,
):
"""Evaluate an estimator on train and test sets with different metrics"""
metrics = [
("D² explained", None), # Use default scorer if it exists
("mean abs. error", mean_absolute_error),
("mean squared error", mean_squared_error),
]
if tweedie_powers:
metrics += [
(
"mean Tweedie dev p={:.4f}".format(power),
partial(mean_tweedie_deviance, power=power),
)
for power in tweedie_powers
]
res = []
for subset_label, X, df in [
("train", X_train, df_train),
("test", X_test, df_test),
]:
y, _weights = df[target], df[weights]
for score_label, metric in metrics:
if isinstance(estimator, tuple) and len(estimator) == 2:
# Score the model consisting of the product of frequency and
# severity models.
est_freq, est_sev = estimator
y_pred = est_freq.predict(X) * est_sev.predict(X)
else:
y_pred = estimator.predict(X)
if metric is None:
if not hasattr(estimator, "score"):
continue
score = estimator.score(X, y, sample_weight=_weights)
else:
score = metric(y, y_pred, sample_weight=_weights)
res.append({"subset": subset_label, "metric": score_label, "score": score})
res = (
pd.DataFrame(res)
.set_index(["metric", "subset"])
.score.unstack(-1)
.round(4)
.loc[:, ["train", "test"]]
)
return res
```
Loading datasets, basic feature extraction and target definitions
-----------------------------------------------------------------
We construct the freMTPL2 dataset by joining the freMTPL2freq table, containing the number of claims (`ClaimNb`), with the freMTPL2sev table, containing the claim amount (`ClaimAmount`) for the same policy ids (`IDpol`).
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder
from sklearn.preprocessing import StandardScaler, KBinsDiscretizer
from sklearn.compose import ColumnTransformer
df = load_mtpl2(n_samples=60000)
# Note: filter out claims with zero amount, as the severity model
# requires strictly positive target values.
df.loc[(df["ClaimAmount"] == 0) & (df["ClaimNb"] >= 1), "ClaimNb"] = 0
# Correct for unreasonable observations (that might be data error)
# and a few exceptionally large claim amounts
df["ClaimNb"] = df["ClaimNb"].clip(upper=4)
df["Exposure"] = df["Exposure"].clip(upper=1)
df["ClaimAmount"] = df["ClaimAmount"].clip(upper=200000)
log_scale_transformer = make_pipeline(
FunctionTransformer(func=np.log), StandardScaler()
)
column_trans = ColumnTransformer(
[
("binned_numeric", KBinsDiscretizer(n_bins=10), ["VehAge", "DrivAge"]),
(
"onehot_categorical",
OneHotEncoder(),
["VehBrand", "VehPower", "VehGas", "Region", "Area"],
),
("passthrough_numeric", "passthrough", ["BonusMalus"]),
("log_scaled_numeric", log_scale_transformer, ["Density"]),
],
remainder="drop",
)
X = column_trans.fit_transform(df)
# Insurances companies are interested in modeling the Pure Premium, that is
# the expected total claim amount per unit of exposure for each policyholder
# in their portfolio:
df["PurePremium"] = df["ClaimAmount"] / df["Exposure"]
# This can be indirectly approximated by a 2-step modeling: the product of the
# Frequency times the average claim amount per claim:
df["Frequency"] = df["ClaimNb"] / df["Exposure"]
df["AvgClaimAmount"] = df["ClaimAmount"] / np.fmax(df["ClaimNb"], 1)
with pd.option_context("display.max_columns", 15):
print(df[df.ClaimAmount > 0].head())
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/preprocessing/_discretization.py:291: UserWarning: Bins whose width are too small (i.e., <= 1e-8) in feature 0 are removed. Consider decreasing the number of bins.
warnings.warn(
ClaimNb Exposure Area VehPower VehAge DrivAge BonusMalus VehBrand \
IDpol
139.0 1.0 0.75 F 7.0 1.0 61.0 50.0 B12
190.0 1.0 0.14 B 12.0 5.0 50.0 60.0 B12
414.0 1.0 0.14 E 4.0 0.0 36.0 85.0 B12
424.0 2.0 0.62 F 10.0 0.0 51.0 100.0 B12
463.0 1.0 0.31 A 5.0 0.0 45.0 50.0 B12
VehGas Density Region ClaimAmount PurePremium Frequency \
IDpol
139.0 Regular 27000.0 R11 303.00 404.000000 1.333333
190.0 Diesel 56.0 R25 1981.84 14156.000000 7.142857
414.0 Regular 4792.0 R11 1456.55 10403.928571 7.142857
424.0 Regular 27000.0 R11 10834.00 17474.193548 3.225806
463.0 Regular 12.0 R73 3986.67 12860.225806 3.225806
AvgClaimAmount
IDpol
139.0 303.00
190.0 1981.84
414.0 1456.55
424.0 5417.00
463.0 3986.67
```
Frequency model – Poisson distribution
--------------------------------------
The number of claims (`ClaimNb`) is a positive integer (0 included). Thus, this target can be modelled by a Poisson distribution. It is then assumed to be the number of discrete events occurring with a constant rate in a given time interval (`Exposure`, in units of years). Here we model the frequency `y = ClaimNb / Exposure`, which is still a (scaled) Poisson distribution, and use `Exposure` as `sample_weight`.
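For reference, `PoissonRegressor` uses a log link and is fit by minimizing a (regularized) mean Poisson deviance, whose unit deviance is \(d(y, \mu) = 2\,(y \log(y/\mu) - y + \mu)\), with the convention \(y \log(y/\mu) = 0\) for \(y = 0\). The "D² explained" metric reported below is the deviance analogue of \(R^2\), namely \(D^2 = 1 - \sum_i d(y_i, \hat{y}_i) / \sum_i d(y_i, \bar{y})\), computed here with `Exposure` as sample weights.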
```
from sklearn.model_selection import train_test_split
from sklearn.linear_model import PoissonRegressor
df_train, df_test, X_train, X_test = train_test_split(df, X, random_state=0)
# The parameters of the model are estimated by minimizing the Poisson deviance
# on the training set via a quasi-Newton solver: l-BFGS. Some of the features
# are collinear, we use a weak penalization to avoid numerical issues.
glm_freq = PoissonRegressor(alpha=1e-3, max_iter=400)
glm_freq.fit(X_train, df_train["Frequency"], sample_weight=df_train["Exposure"])
scores = score_estimator(
glm_freq,
X_train,
X_test,
df_train,
df_test,
target="Frequency",
weights="Exposure",
)
print("Evaluation of PoissonRegressor on target Frequency")
print(scores)
```
```
Evaluation of PoissonRegressor on target Frequency
subset train test
metric
D² explained 0.0242 0.0214
mean abs. error 0.1706 0.1660
mean squared error 0.3041 0.3043
```
We can visually compare observed and predicted values, aggregated by the driver's age (`DrivAge`), vehicle age (`VehAge`) and the insurance bonus/malus (`BonusMalus`).
```
fig, ax = plt.subplots(ncols=2, nrows=2, figsize=(16, 8))
fig.subplots_adjust(hspace=0.3, wspace=0.2)
plot_obs_pred(
df=df_train,
feature="DrivAge",
weight="Exposure",
observed="Frequency",
predicted=glm_freq.predict(X_train),
y_label="Claim Frequency",
title="train data",
ax=ax[0, 0],
)
plot_obs_pred(
df=df_test,
feature="DrivAge",
weight="Exposure",
observed="Frequency",
predicted=glm_freq.predict(X_test),
y_label="Claim Frequency",
title="test data",
ax=ax[0, 1],
fill_legend=True,
)
plot_obs_pred(
df=df_test,
feature="VehAge",
weight="Exposure",
observed="Frequency",
predicted=glm_freq.predict(X_test),
y_label="Claim Frequency",
title="test data",
ax=ax[1, 0],
fill_legend=True,
)
plot_obs_pred(
df=df_test,
feature="BonusMalus",
weight="Exposure",
observed="Frequency",
predicted=glm_freq.predict(X_test),
y_label="Claim Frequency",
title="test data",
ax=ax[1, 1],
fill_legend=True,
)
```
According to the observed data, the frequency of accidents is higher for drivers younger than 30 years old, and is positively correlated with the `BonusMalus` variable. Our model captures this behaviour mostly correctly.
Severity Model - Gamma distribution
-----------------------------------
The mean claim amount or severity (`AvgClaimAmount`) can be empirically shown to follow approximately a Gamma distribution. We fit a GLM model for the severity with the same features as the frequency model.
Note:
* We filter out `ClaimAmount == 0` as the Gamma distribution has support on \((0, \infty)\), not \([0, \infty)\).
* We use `ClaimNb` as `sample_weight` to account for policies that contain more than one claim.
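As an optional sanity check of the Gamma assumption mentioned above (not part of the original example), one could fit a Gamma distribution to the strictly positive average claim amounts with SciPy and compare its mean with the empirical mean; this sketch assumes the `df` returned by `load_mtpl2` is still in scope:
```
from scipy import stats

# Optional sanity check (illustrative only): fit a Gamma distribution with
# the location fixed at 0 to the positive average claim amounts.
avg_claims = df.loc[df["AvgClaimAmount"] > 0, "AvgClaimAmount"].to_numpy()
shape, loc, scale = stats.gamma.fit(avg_claims, floc=0)
print("fitted Gamma: shape=%.2f, scale=%.1f" % (shape, scale))
print("empirical mean=%.1f vs fitted Gamma mean=%.1f" % (avg_claims.mean(), shape * scale))
```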
```
from sklearn.linear_model import GammaRegressor
mask_train = df_train["ClaimAmount"] > 0
mask_test = df_test["ClaimAmount"] > 0
glm_sev = GammaRegressor(alpha=10.0, max_iter=10000)
glm_sev.fit(
X_train[mask_train.values],
df_train.loc[mask_train, "AvgClaimAmount"],
sample_weight=df_train.loc[mask_train, "ClaimNb"],
)
scores = score_estimator(
glm_sev,
X_train[mask_train.values],
X_test[mask_test.values],
df_train[mask_train],
df_test[mask_test],
target="AvgClaimAmount",
weights="ClaimNb",
)
print("Evaluation of GammaRegressor on target AvgClaimAmount")
print(scores)
```
```
Evaluation of GammaRegressor on target AvgClaimAmount
subset train test
metric
D² explained 3.000000e-03 -9.600000e-03
mean abs. error 1.699197e+03 2.027923e+03
mean squared error 4.548147e+07 6.094863e+07
```
Here, the scores for the test data call for caution as they are significantly worse than for the training data, indicating overfitting despite the strong regularization.
Note that the resulting model is the average claim amount per claim. As such, it is conditional on having at least one claim, and cannot be used to predict the average claim amount per policy in general.
```
print(
"Mean AvgClaim Amount per policy: %.2f "
% df_train["AvgClaimAmount"].mean()
)
print(
"Mean AvgClaim Amount | NbClaim > 0: %.2f"
% df_train["AvgClaimAmount"][df_train["AvgClaimAmount"] > 0].mean()
)
print(
"Predicted Mean AvgClaim Amount | NbClaim > 0: %.2f"
% glm_sev.predict(X_train).mean()
)
```
```
Mean AvgClaim Amount per policy: 97.89
Mean AvgClaim Amount | NbClaim > 0: 1899.60
Predicted Mean AvgClaim Amount | NbClaim > 0: 1884.40
```
We can visually compare observed and predicted values, aggregated by the driver's age (`DrivAge`).
```
fig, ax = plt.subplots(ncols=1, nrows=2, figsize=(16, 6))
plot_obs_pred(
df=df_train.loc[mask_train],
feature="DrivAge",
weight="Exposure",
observed="AvgClaimAmount",
predicted=glm_sev.predict(X_train[mask_train.values]),
y_label="Average Claim Severity",
title="train data",
ax=ax[0],
)
plot_obs_pred(
df=df_test.loc[mask_test],
feature="DrivAge",
weight="Exposure",
observed="AvgClaimAmount",
predicted=glm_sev.predict(X_test[mask_test.values]),
y_label="Average Claim Severity",
title="test data",
ax=ax[1],
fill_legend=True,
)
plt.tight_layout()
```
Overall, the driver's age (`DrivAge`) has a weak impact on the claim severity, both in observed and predicted data.
Pure Premium Modeling via a Product Model vs single TweedieRegressor
--------------------------------------------------------------------
As mentioned in the introduction, the total claim amount per unit of exposure can be modeled as the product of the prediction of the frequency model by the prediction of the severity model.
Alternatively, one can directly model the total loss with a unique Compound Poisson Gamma generalized linear model (with a log link function). This model is a special case of the Tweedie GLM with a “power” parameter \(p \in (1, 2)\). Here, we fix the `power` parameter of the Tweedie model a priori to an arbitrary value (1.9) in the valid range. Ideally one would select this value via grid-search by minimizing the negative log-likelihood of the Tweedie model, but unfortunately the current implementation does not allow for this (yet).
We will compare the performance of both approaches. To quantify the performance of both models, one can compute the mean deviance of the train and test data assuming a Compound Poisson-Gamma distribution of the total claim amount. This is equivalent to a Tweedie distribution with a `power` parameter between 1 and 2.
The [`sklearn.metrics.mean_tweedie_deviance`](../../modules/generated/sklearn.metrics.mean_tweedie_deviance#sklearn.metrics.mean_tweedie_deviance "sklearn.metrics.mean_tweedie_deviance") depends on a `power` parameter. As we do not know the true value of the `power` parameter, we here compute the mean deviances for a grid of possible values, and compare the models side by side, i.e. we compare them at identical values of `power`. Ideally, we hope that one model will be consistently better than the other, regardless of `power`.
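For completeness, the unit Tweedie deviance used by this metric for \(p \in (1, 2)\) and \(y \ge 0\) is \(d_p(y, \mu) = 2 \left( \frac{y^{2-p}}{(1-p)(2-p)} - \frac{y \mu^{1-p}}{1-p} + \frac{\mu^{2-p}}{2-p} \right)\); it recovers the Poisson deviance in the limit \(p \to 1\) and the Gamma deviance in the limit \(p \to 2\). Deviance values are not comparable across different values of `power`, which is why the two models are only compared at identical values of `power`.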
```
from sklearn.linear_model import TweedieRegressor
glm_pure_premium = TweedieRegressor(power=1.9, alpha=0.1, max_iter=10000)
glm_pure_premium.fit(
X_train, df_train["PurePremium"], sample_weight=df_train["Exposure"]
)
tweedie_powers = [1.5, 1.7, 1.8, 1.9, 1.99, 1.999, 1.9999]
scores_product_model = score_estimator(
(glm_freq, glm_sev),
X_train,
X_test,
df_train,
df_test,
target="PurePremium",
weights="Exposure",
tweedie_powers=tweedie_powers,
)
scores_glm_pure_premium = score_estimator(
glm_pure_premium,
X_train,
X_test,
df_train,
df_test,
target="PurePremium",
weights="Exposure",
tweedie_powers=tweedie_powers,
)
scores = pd.concat(
[scores_product_model, scores_glm_pure_premium],
axis=1,
sort=True,
keys=("Product Model", "TweedieRegressor"),
)
print("Evaluation of the Product Model and the Tweedie Regressor on target PurePremium")
with pd.option_context("display.expand_frame_repr", False):
print(scores)
```
```
Evaluation of the Product Model and the Tweedie Regressor on target PurePremium
Product Model TweedieRegressor
subset train test train test
metric
D² explained NaN NaN 2.650000e-02 2.580000e-02
mean Tweedie dev p=1.5000 8.216730e+01 8.640090e+01 7.960780e+01 8.618690e+01
mean Tweedie dev p=1.7000 3.833270e+01 3.920430e+01 3.737380e+01 3.917450e+01
mean Tweedie dev p=1.8000 3.106620e+01 3.148760e+01 3.047890e+01 3.148130e+01
mean Tweedie dev p=1.9000 3.396070e+01 3.420620e+01 3.360060e+01 3.420820e+01
mean Tweedie dev p=1.9900 1.989231e+02 1.996421e+02 1.986911e+02 1.996461e+02
mean Tweedie dev p=1.9990 1.886428e+03 1.892749e+03 1.886206e+03 1.892753e+03
mean Tweedie dev p=1.9999 1.876452e+04 1.882692e+04 1.876430e+04 1.882692e+04
mean abs. error 3.245679e+02 3.468485e+02 3.202459e+02 3.397022e+02
mean squared error 1.469185e+08 3.325898e+07 1.469327e+08 3.325470e+07
```
In this example, both modeling approaches yield comparable performance metrics. For implementation reasons, the \(D^2\) score (the fraction of explained deviance) is not available for the product model.
We can additionally validate these models by comparing observed and predicted total claim amount over the test and train subsets. We see that, on average, both models tend to underestimate the total claim (but this behavior depends on the amount of regularization).
```
res = []
for subset_label, X, df in [
("train", X_train, df_train),
("test", X_test, df_test),
]:
exposure = df["Exposure"].values
res.append(
{
"subset": subset_label,
"observed": df["ClaimAmount"].values.sum(),
"predicted, frequency*severity model": np.sum(
exposure * glm_freq.predict(X) * glm_sev.predict(X)
),
"predicted, tweedie, power=%.2f"
% glm_pure_premium.power: np.sum(exposure * glm_pure_premium.predict(X)),
}
)
print(pd.DataFrame(res).set_index("subset").T)
```
```
subset train test
observed 4.577616e+06 1.725665e+06
predicted, frequency*severity model 4.562939e+06 1.494451e+06
predicted, tweedie, power=1.90 4.451558e+06 1.431963e+06
```
Finally, we can compare the two models using a plot of cumulated claims: for each model, the policyholders are ranked from safest to riskiest based on the model predictions and the fraction of observed total cumulated claims is plotted on the y axis. This plot is often called the ordered Lorenz curve of the model.
The Gini coefficient (based on the area between the curve and the diagonal) can be used as a model selection metric to quantify the ability of the model to rank policyholders. Note that this metric does not reflect the ability of the models to make accurate predictions in terms of absolute value of total claim amounts but only in terms of relative amounts as a ranking metric. The Gini coefficient is upper bounded by 1.0 but even an oracle model that ranks the policyholders by the observed claim amounts cannot reach a score of 1.0.
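With the ranking convention used below (safest policyholders first, so the curve lies below the diagonal), the Gini index is \(G = 1 - 2 \int_0^1 L(s)\, ds\), where \(L(s)\) is the cumulated fraction of observed claims attributed to the fraction \(s\) of policyholders with the lowest predicted risk; in the code, the integral is approximated with `sklearn.metrics.auc`.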
We observe that both models are able to rank policyholders by riskiness significantly better than chance, although they are also both far from the oracle model due to the natural difficulty of the prediction problem from a few features: most accidents are not predictable and can be caused by environmental circumstances that are not described at all by the input features of the models.
Note that the Gini index only characterizes the ranking performance of the model but not its calibration: any monotonic transformation of the predictions leaves the Gini index of the model unchanged.
Finally one should highlight that the Compound Poisson Gamma model that is directly fit on the pure premium is operationally simpler to develop and maintain as it consists of a single scikit-learn estimator instead of a pair of models, each with its own set of hyperparameters.
```
from sklearn.metrics import auc
def lorenz_curve(y_true, y_pred, exposure):
y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
exposure = np.asarray(exposure)
# order samples by increasing predicted risk:
ranking = np.argsort(y_pred)
ranked_exposure = exposure[ranking]
ranked_pure_premium = y_true[ranking]
cumulated_claim_amount = np.cumsum(ranked_pure_premium * ranked_exposure)
cumulated_claim_amount /= cumulated_claim_amount[-1]
cumulated_samples = np.linspace(0, 1, len(cumulated_claim_amount))
return cumulated_samples, cumulated_claim_amount
fig, ax = plt.subplots(figsize=(8, 8))
y_pred_product = glm_freq.predict(X_test) * glm_sev.predict(X_test)
y_pred_total = glm_pure_premium.predict(X_test)
for label, y_pred in [
("Frequency * Severity model", y_pred_product),
("Compound Poisson Gamma", y_pred_total),
]:
ordered_samples, cum_claims = lorenz_curve(
df_test["PurePremium"], y_pred, df_test["Exposure"]
)
gini = 1 - 2 * auc(ordered_samples, cum_claims)
label += " (Gini index: {:.3f})".format(gini)
ax.plot(ordered_samples, cum_claims, linestyle="-", label=label)
# Oracle model: y_pred == y_test
ordered_samples, cum_claims = lorenz_curve(
df_test["PurePremium"], df_test["PurePremium"], df_test["Exposure"]
)
gini = 1 - 2 * auc(ordered_samples, cum_claims)
label = "Oracle (Gini index: {:.3f})".format(gini)
ax.plot(ordered_samples, cum_claims, linestyle="-.", color="gray", label=label)
# Random baseline
ax.plot([0, 1], [0, 1], linestyle="--", color="black", label="Random baseline")
ax.set(
title="Lorenz Curves",
xlabel="Fraction of policyholders\n(ordered by model from safest to riskiest)",
ylabel="Fraction of total claim amount",
)
ax.legend(loc="upper left")
plt.plot()
```
```
[]
```
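As a quick illustration of the remark above about monotonic transformations (not part of the original example), one can warp the Tweedie predictions through a few strictly increasing maps and confirm that the Gini index is unchanged, reusing the `lorenz_curve` helper defined above:
```
# Illustrative only: strictly increasing transformations of the predictions
# leave the ranking, and hence the Lorenz curve and Gini index, unchanged.
for transform in (lambda p: p, np.log1p, lambda p: 10.0 * p):
    s, c = lorenz_curve(
        df_test["PurePremium"], transform(y_pred_total), df_test["Exposure"]
    )
    print("Gini index: {:.3f}".format(1 - 2 * auc(s, c)))
```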
**Total running time of the script:** ( 0 minutes 20.779 seconds)
[`Download Python source code: plot_tweedie_regression_insurance_claims.py`](https://scikit-learn.org/1.1/_downloads/86c888008757148890daaf43d664fa71/plot_tweedie_regression_insurance_claims.py)
[`Download Jupyter notebook: plot_tweedie_regression_insurance_claims.ipynb`](https://scikit-learn.org/1.1/_downloads/a97bf662e52d471b04e1ab480c0ad7f2/plot_tweedie_regression_insurance_claims.ipynb)
scikit_learn Lasso path using LARS Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-lasso-lars-py) to download the full example code or to run this example in your browser via Binder
Lasso path using LARS
=====================
Computes Lasso Path along the regularization parameter using the LARS algorithm on the diabetes dataset. Each color represents a different feature of the coefficient vector, and this is displayed as a function of the regularization parameter.
```
Computing regularization path using the LARS ...
.
```
```
# Author: Fabian Pedregosa <[email protected]>
# Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn import datasets
X, y = datasets.load_diabetes(return_X_y=True)
print("Computing regularization path using the LARS ...")
_, _, coefs = linear_model.lars_path(X, y, method="lasso", verbose=True)
xx = np.sum(np.abs(coefs.T), axis=1)
xx /= xx[-1]
plt.plot(xx, coefs.T)
ymin, ymax = plt.ylim()
plt.vlines(xx, ymin, ymax, linestyle="dashed")
plt.xlabel("|coef| / max|coef|")
plt.ylabel("Coefficients")
plt.title("LASSO Path")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.067 seconds)
[`Download Python source code: plot_lasso_lars.py`](https://scikit-learn.org/1.1/_downloads/9b30279eefb3398ed66923f02e087c20/plot_lasso_lars.py)
[`Download Jupyter notebook: plot_lasso_lars.ipynb`](https://scikit-learn.org/1.1/_downloads/aeeae45cd3cd50e5c66daa21027f9c2a/plot_lasso_lars.ipynb)
scikit_learn Curve Fitting with Bayesian Ridge Regression Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-bayesian-ridge-curvefit-py) to download the full example code or to run this example in your browser via Binder
Curve Fitting with Bayesian Ridge Regression
============================================
Computes a Bayesian Ridge Regression of Sinusoids.
See [Bayesian Ridge Regression](../../modules/linear_model#bayesian-ridge-regression) for more information on the regressor.
In general, when fitting a curve with a polynomial by Bayesian ridge regression, the selection of initial values of the regularization parameters (alpha, lambda) may be important. This is because the regularization parameters are determined by an iterative procedure that depends on initial values.
In this example, the sinusoid is approximated by a polynomial using different pairs of initial values.
When starting from the default values (alpha\_init = 1.90, lambda\_init = 1.), the bias of the resulting curve is large, and the variance is small. So, lambda\_init should be relatively small (1.e-3) so as to reduce the bias.
Also, by evaluating the log marginal likelihood (L) of these models, we can determine which one is better. A model with a larger L is more plausible. In the code below, L is obtained by fitting `BayesianRidge` with `compute_score=True`, which records the log marginal likelihood of each iteration in the `scores_` attribute.
```
# Author: Yoshihiro Uchida <[email protected]>
```
Generate sinusoidal data with noise
-----------------------------------
```
import numpy as np
def func(x):
return np.sin(2 * np.pi * x)
size = 25
rng = np.random.RandomState(1234)
x_train = rng.uniform(0.0, 1.0, size)
y_train = func(x_train) + rng.normal(scale=0.1, size=size)
x_test = np.linspace(0.0, 1.0, 100)
```
Fit by cubic polynomial
-----------------------
```
from sklearn.linear_model import BayesianRidge
n_order = 3
X_train = np.vander(x_train, n_order + 1, increasing=True)
X_test = np.vander(x_test, n_order + 1, increasing=True)
reg = BayesianRidge(tol=1e-6, fit_intercept=False, compute_score=True)
```
Plot the true and predicted curves with log marginal likelihood (L)
-------------------------------------------------------------------
```
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for i, ax in enumerate(axes):
# Bayesian ridge regression with different initial value pairs
if i == 0:
init = [1 / np.var(y_train), 1.0] # Default values
elif i == 1:
init = [1.0, 1e-3]
reg.set_params(alpha_init=init[0], lambda_init=init[1])
reg.fit(X_train, y_train)
ymean, ystd = reg.predict(X_test, return_std=True)
ax.plot(x_test, func(x_test), color="blue", label="sin($2\\pi x$)")
ax.scatter(x_train, y_train, s=50, alpha=0.5, label="observation")
ax.plot(x_test, ymean, color="red", label="predict mean")
ax.fill_between(
x_test, ymean - ystd, ymean + ystd, color="pink", alpha=0.5, label="predict std"
)
ax.set_ylim(-1.3, 1.3)
ax.legend()
title = "$\\alpha$_init$={:.2f},\\ \\lambda$_init$={}$".format(init[0], init[1])
if i == 0:
title += " (Default)"
ax.set_title(title, fontsize=12)
text = "$\\alpha={:.1f}$\n$\\lambda={:.3f}$\n$L={:.1f}$".format(
reg.alpha_, reg.lambda_, reg.scores_[-1]
)
ax.text(0.05, -1.0, text, fontsize=12)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.187 seconds)
[`Download Python source code: plot_bayesian_ridge_curvefit.py`](https://scikit-learn.org/1.1/_downloads/3e3e33acc340cbabe46e148ccc4c89cb/plot_bayesian_ridge_curvefit.py)
[`Download Jupyter notebook: plot_bayesian_ridge_curvefit.ipynb`](https://scikit-learn.org/1.1/_downloads/255cd67462a63d35efce903bdc60d75d/plot_bayesian_ridge_curvefit.ipynb)
scikit_learn Plot Ridge coefficients as a function of the L2 regularization Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-ridge-coeffs-py) to download the full example code or to run this example in your browser via Binder
Plot Ridge coefficients as a function of the L2 regularization
==============================================================
[`Ridge`](../../modules/generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge") Regression is the estimator used in this example. Each color in the left plot represents a different dimension of the coefficient vector, displayed as a function of the regularization parameter. The right plot shows how exact the solution is. This example illustrates how a well-defined solution is found by Ridge regression and how regularization affects the coefficients and their values.
In this example the dependent variable Y is set as a function of the input features: y = X\*w + c. The coefficient vector w is randomly sampled from a normal distribution, whereas the bias term c is set to a constant.
As alpha tends toward zero the coefficients found by Ridge regression stabilize towards the randomly sampled vector w. For big alpha (strong regularisation) the coefficients are smaller (eventually converging at 0) leading to a simpler and biased solution. These dependencies can be observed on the left plot.
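For reference, `Ridge` minimizes the penalized least-squares objective \(\min_w \lVert y - Xw \rVert_2^2 + \alpha \lVert w \rVert_2^2\), so `alpha` directly controls the amount of shrinkage described above.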
The right plot shows the mean squared error between the coefficients found by the model and the chosen vector w. Less regularised models retrieve the exact coefficients (the error is equal to 0), while more strongly regularised models increase the error.
Please note that in this example the data is non-noisy, hence it is possible to extract the exact coefficients.
```
# Author: Kornel Kielczewski -- <[email protected]>
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
clf = Ridge()
X, y, w = make_regression(
n_samples=10, n_features=10, coef=True, random_state=1, bias=3.5
)
coefs = []
errors = []
alphas = np.logspace(-6, 6, 200)
# Train the model with different regularisation strengths
for a in alphas:
clf.set_params(alpha=a)
clf.fit(X, y)
coefs.append(clf.coef_)
errors.append(mean_squared_error(clf.coef_, w))
# Display results
plt.figure(figsize=(20, 6))
plt.subplot(121)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale("log")
plt.xlabel("alpha")
plt.ylabel("weights")
plt.title("Ridge coefficients as a function of the regularization")
plt.axis("tight")
plt.subplot(122)
ax = plt.gca()
ax.plot(alphas, errors)
ax.set_xscale("log")
plt.xlabel("alpha")
plt.ylabel("error")
plt.title("Coefficient error as a function of the regularization")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.284 seconds)
[`Download Python source code: plot_ridge_coeffs.py`](https://scikit-learn.org/1.1/_downloads/b075b4610b24c70d5757248abef3fb61/plot_ridge_coeffs.py)
[`Download Jupyter notebook: plot_ridge_coeffs.ipynb`](https://scikit-learn.org/1.1/_downloads/44775ca43ce935c840a877c3d2ebed6c/plot_ridge_coeffs.ipynb)
scikit_learn L1 Penalty and Sparsity in Logistic Regression Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-logistic-l1-l2-sparsity-py) to download the full example code or to run this example in your browser via Binder
L1 Penalty and Sparsity in Logistic Regression
==============================================
Comparison of the sparsity (percentage of zero coefficients) of solutions when L1, L2 and Elastic-Net penalty are used for different values of C. We can see that large values of C give more freedom to the model. Conversely, smaller values of C constrain the model more. In the L1 penalty case, this leads to sparser solutions. As expected, the Elastic-Net penalty sparsity is between that of L1 and L2.
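For reference, in scikit-learn the parameter `C` multiplies the data-fit term rather than the penalty: the L1-penalized problem solved here is \(\min_{w, c} \lVert w \rVert_1 + C \sum_i \log(1 + \exp(-y_i (x_i^\top w + c)))\) with \(y_i \in \{-1, 1\}\), so smaller `C` means stronger regularization. The L2 and Elastic-Net variants replace \(\lVert w \rVert_1\) with \(\frac{1}{2} \lVert w \rVert_2^2\) and a convex combination of the two penalties, respectively.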
We classify 8x8 images of digits into two classes: 0-4 against 5-9. The visualization shows coefficients of the models for varying C.
```
C=1.00
Sparsity with L1 penalty: 4.69%
Sparsity with Elastic-Net penalty: 4.69%
Sparsity with L2 penalty: 4.69%
Score with L1 penalty: 0.90
Score with Elastic-Net penalty: 0.90
Score with L2 penalty: 0.90
C=0.10
Sparsity with L1 penalty: 29.69%
Sparsity with Elastic-Net penalty: 14.06%
Sparsity with L2 penalty: 4.69%
Score with L1 penalty: 0.90
Score with Elastic-Net penalty: 0.90
Score with L2 penalty: 0.90
C=0.01
Sparsity with L1 penalty: 84.38%
Sparsity with Elastic-Net penalty: 68.75%
Sparsity with L2 penalty: 4.69%
Score with L1 penalty: 0.86
Score with Elastic-Net penalty: 0.88
Score with L2 penalty: 0.89
```
```
# Authors: Alexandre Gramfort <[email protected]>
# Mathieu Blondel <[email protected]>
# Andreas Mueller <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
X, y = datasets.load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)
# classify small against large digits
y = (y > 4).astype(int)
l1_ratio = 0.5 # L1 weight in the Elastic-Net regularization
fig, axes = plt.subplots(3, 3)
# Set regularization parameter
for i, (C, axes_row) in enumerate(zip((1, 0.1, 0.01), axes)):
# turn down tolerance for short training time
clf_l1_LR = LogisticRegression(C=C, penalty="l1", tol=0.01, solver="saga")
clf_l2_LR = LogisticRegression(C=C, penalty="l2", tol=0.01, solver="saga")
clf_en_LR = LogisticRegression(
C=C, penalty="elasticnet", solver="saga", l1_ratio=l1_ratio, tol=0.01
)
clf_l1_LR.fit(X, y)
clf_l2_LR.fit(X, y)
clf_en_LR.fit(X, y)
coef_l1_LR = clf_l1_LR.coef_.ravel()
coef_l2_LR = clf_l2_LR.coef_.ravel()
coef_en_LR = clf_en_LR.coef_.ravel()
# coef_l1_LR contains zeros due to the
# L1 sparsity inducing norm
sparsity_l1_LR = np.mean(coef_l1_LR == 0) * 100
sparsity_l2_LR = np.mean(coef_l2_LR == 0) * 100
sparsity_en_LR = np.mean(coef_en_LR == 0) * 100
print("C=%.2f" % C)
print("{:<40} {:.2f}%".format("Sparsity with L1 penalty:", sparsity_l1_LR))
print("{:<40} {:.2f}%".format("Sparsity with Elastic-Net penalty:", sparsity_en_LR))
print("{:<40} {:.2f}%".format("Sparsity with L2 penalty:", sparsity_l2_LR))
print("{:<40} {:.2f}".format("Score with L1 penalty:", clf_l1_LR.score(X, y)))
print(
"{:<40} {:.2f}".format("Score with Elastic-Net penalty:", clf_en_LR.score(X, y))
)
print("{:<40} {:.2f}".format("Score with L2 penalty:", clf_l2_LR.score(X, y)))
if i == 0:
axes_row[0].set_title("L1 penalty")
axes_row[1].set_title("Elastic-Net\nl1_ratio = %s" % l1_ratio)
axes_row[2].set_title("L2 penalty")
for ax, coefs in zip(axes_row, [coef_l1_LR, coef_en_LR, coef_l2_LR]):
ax.imshow(
np.abs(coefs.reshape(8, 8)),
interpolation="nearest",
cmap="binary",
vmax=1,
vmin=0,
)
ax.set_xticks(())
ax.set_yticks(())
axes_row[0].set_ylabel("C = %s" % C)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.459 seconds)
[`Download Python source code: plot_logistic_l1_l2_sparsity.py`](https://scikit-learn.org/1.1/_downloads/fb191883ea7e76c5eb13dad28d2b0a72/plot_logistic_l1_l2_sparsity.py)
[`Download Jupyter notebook: plot_logistic_l1_l2_sparsity.ipynb`](https://scikit-learn.org/1.1/_downloads/aec731a57fcba7fde8f5e3d94ffc7c69/plot_logistic_l1_l2_sparsity.ipynb)
scikit_learn Sparsity Example: Fitting only features 1 and 2 Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-ols-3d-py) to download the full example code or to run this example in your browser via Binder
Sparsity Example: Fitting only features 1 and 2
===============================================
Features 1 and 2 of the diabetes-dataset are fitted and plotted below. It illustrates that although feature 2 has a strong coefficient on the full model, it does not give us much regarding `y` when compared to just feature 1.
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
```
First we load the diabetes dataset.
```
from sklearn import datasets
import numpy as np
X, y = datasets.load_diabetes(return_X_y=True)
indices = (0, 1)
X_train = X[:-20, indices]
X_test = X[-20:, indices]
y_train = y[:-20]
y_test = y[-20:]
```
Next we fit a linear regression model.
```
from sklearn import linear_model
ols = linear_model.LinearRegression()
_ = ols.fit(X_train, y_train)
```
Finally we plot the figure from three different views.
```
import matplotlib.pyplot as plt
def plot_figs(fig_num, elev, azim, X_train, clf):
fig = plt.figure(fig_num, figsize=(4, 3))
plt.clf()
ax = fig.add_subplot(111, projection="3d", elev=elev, azim=azim)
ax.scatter(X_train[:, 0], X_train[:, 1], y_train, c="k", marker="+")
ax.plot_surface(
np.array([[-0.1, -0.1], [0.15, 0.15]]),
np.array([[-0.1, 0.15], [-0.1, 0.15]]),
clf.predict(
np.array([[-0.1, -0.1, 0.15, 0.15], [-0.1, 0.15, -0.1, 0.15]]).T
).reshape((2, 2)),
alpha=0.5,
)
ax.set_xlabel("X_1")
ax.set_ylabel("X_2")
ax.set_zlabel("Y")
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
# Generate the three different figures from different views
elev = 43.5
azim = -110
plot_figs(1, elev, azim, X_train, ols)
elev = -0.5
azim = 0
plot_figs(2, elev, azim, X_train, ols)
elev = -0.5
azim = 90
plot_figs(3, elev, azim, X_train, ols)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.155 seconds)
[`Download Python source code: plot_ols_3d.py`](https://scikit-learn.org/1.1/_downloads/2a401e4f9bbae968abdf35a831408f10/plot_ols_3d.py)
[`Download Jupyter notebook: plot_ols_3d.ipynb`](https://scikit-learn.org/1.1/_downloads/113d936063a64e05f36c8cd5e870300a/plot_ols_3d.ipynb)
scikit_learn Logistic function Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-logistic-py) to download the full example code or to run this example in your browser via Binder
Logistic function
=================
Shown in the plot is how the logistic regression would, in this synthetic dataset, classify values as either 0 or 1, i.e. class one or two, using the logistic curve.
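The red curve in the plot is the estimated probability \(P(y=1 \mid x) = \sigma(w x + b)\), where \(\sigma(z) = 1 / (1 + e^{-z})\) is the logistic (sigmoid) function; the code below evaluates it with `scipy.special.expit` applied to the decision function.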
```
# Code source: Gael Varoquaux
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression, LinearRegression
from scipy.special import expit
# Generate a toy dataset, it's just a straight line with some Gaussian noise:
xmin, xmax = -5, 5
n_samples = 100
np.random.seed(0)
X = np.random.normal(size=n_samples)
y = (X > 0).astype(float)
X[X > 0] *= 4
X += 0.3 * np.random.normal(size=n_samples)
X = X[:, np.newaxis]
# Fit the classifier
clf = LogisticRegression(C=1e5)
clf.fit(X, y)
# and plot the result
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.scatter(X.ravel(), y, color="black", zorder=20)
X_test = np.linspace(-5, 10, 300)
loss = expit(X_test * clf.coef_ + clf.intercept_).ravel()
plt.plot(X_test, loss, color="red", linewidth=3)
ols = LinearRegression()
ols.fit(X, y)
plt.plot(X_test, ols.coef_ * X_test + ols.intercept_, linewidth=1)
plt.axhline(0.5, color=".5")
plt.ylabel("y")
plt.xlabel("X")
plt.xticks(range(-5, 10))
plt.yticks([0, 0.5, 1])
plt.ylim(-0.25, 1.25)
plt.xlim(-4, 10)
plt.legend(
("Logistic Regression Model", "Linear Regression Model"),
loc="lower right",
fontsize="small",
)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.070 seconds)
[`Download Python source code: plot_logistic.py`](https://scikit-learn.org/1.1/_downloads/e9bb83e05aee6636f37a28a7de0813f3/plot_logistic.py)
[`Download Jupyter notebook: plot_logistic.ipynb`](https://scikit-learn.org/1.1/_downloads/eff5561dfcda3a1dd15da0cd7d817e0d/plot_logistic.ipynb)
scikit_learn SGD: convex loss functions Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sgd-loss-functions-py) to download the full example code or to run this example in your browser via Binder
SGD: convex loss functions
==========================
A plot that compares the various convex loss functions supported by [`SGDClassifier`](../../modules/generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier").
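In the figure, \(z = y f(x)\) denotes the margin. The plotted curves correspond to the zero-one loss \(\mathbf{1}[z \le 0]\), the hinge loss \(\max(0, 1 - z)\), the perceptron loss \(\max(0, -z)\), the (base-2) log loss \(\log_2(1 + e^{-z})\), the squared hinge loss \(\max(0, 1 - z)^2\), and the modified Huber loss, which equals \(\max(0, 1 - z)^2\) for \(z \ge -1\) and \(-4z\) otherwise (and 0 for \(z \ge 1\)).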
```
import numpy as np
import matplotlib.pyplot as plt
def modified_huber_loss(y_true, y_pred):
z = y_pred * y_true
loss = -4 * z
loss[z >= -1] = (1 - z[z >= -1]) ** 2
loss[z >= 1.0] = 0
return loss
xmin, xmax = -4, 4
xx = np.linspace(xmin, xmax, 100)
lw = 2
plt.plot([xmin, 0, 0, xmax], [1, 1, 0, 0], color="gold", lw=lw, label="Zero-one loss")
plt.plot(xx, np.where(xx < 1, 1 - xx, 0), color="teal", lw=lw, label="Hinge loss")
plt.plot(xx, -np.minimum(xx, 0), color="yellowgreen", lw=lw, label="Perceptron loss")
plt.plot(xx, np.log2(1 + np.exp(-xx)), color="cornflowerblue", lw=lw, label="Log loss")
plt.plot(
xx,
np.where(xx < 1, 1 - xx, 0) ** 2,
color="orange",
lw=lw,
label="Squared hinge loss",
)
plt.plot(
xx,
modified_huber_loss(xx, 1),
color="darkorchid",
lw=lw,
linestyle="--",
label="Modified Huber loss",
)
plt.ylim((0, 8))
plt.legend(loc="upper right")
plt.xlabel(r"Decision function $f(x)$")
plt.ylabel("$L(y=1, f(x))$")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.088 seconds)
[`Download Python source code: plot_sgd_loss_functions.py`](https://scikit-learn.org/1.1/_downloads/53882b11cf2430d2dadc527ba6c587d7/plot_sgd_loss_functions.py)
[`Download Jupyter notebook: plot_sgd_loss_functions.ipynb`](https://scikit-learn.org/1.1/_downloads/bad2c14c5cfbd2a6e2c7d563be123e11/plot_sgd_loss_functions.ipynb)
scikit_learn Robust linear model estimation using RANSAC Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-ransac-py) to download the full example code or to run this example in your browser via Binder
Robust linear model estimation using RANSAC
===========================================
In this example we see how to robustly fit a linear model to faulty data using the RANSAC algorithm.
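As a rough sketch of the idea (illustrative only; the function name, defaults and stopping rule below are made up, and `RANSACRegressor` adds model validity checks, configurable losses and a probabilistic trial count), RANSAC repeatedly fits a base estimator on small random subsets and keeps the candidate supported by the most inliers:
```
import numpy as np
from sklearn.linear_model import LinearRegression

def ransac_sketch(X, y, n_trials=100, min_samples=2, residual_threshold=10.0, seed=0):
    """Illustrative RANSAC loop; names and defaults are hypothetical."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(n_trials):
        # 1. fit a candidate model on a small random subset
        subset = rng.choice(len(X), size=min_samples, replace=False)
        candidate = LinearRegression().fit(X[subset], y[subset])
        # 2. count the samples that the candidate explains well (inliers)
        inliers = np.abs(y - candidate.predict(X)) < residual_threshold
        # 3. keep the candidate with the largest consensus set
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = candidate, inliers
    # 4. refit on all inliers of the best candidate
    return LinearRegression().fit(X[best_inliers], y[best_inliers]), best_inliers
```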
```
Estimated coefficients (true, linear regression, RANSAC):
82.1903908407869 [54.17236387] [82.08533159]
```
```
import numpy as np
from matplotlib import pyplot as plt
from sklearn import linear_model, datasets
n_samples = 1000
n_outliers = 50
X, y, coef = datasets.make_regression(
n_samples=n_samples,
n_features=1,
n_informative=1,
noise=10,
coef=True,
random_state=0,
)
# Add outlier data
np.random.seed(0)
X[:n_outliers] = 3 + 0.5 * np.random.normal(size=(n_outliers, 1))
y[:n_outliers] = -3 + 10 * np.random.normal(size=n_outliers)
# Fit line using all data
lr = linear_model.LinearRegression()
lr.fit(X, y)
# Robustly fit linear model with RANSAC algorithm
ransac = linear_model.RANSACRegressor()
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
# Predict data of estimated models
line_X = np.arange(X.min(), X.max())[:, np.newaxis]
line_y = lr.predict(line_X)
line_y_ransac = ransac.predict(line_X)
# Compare estimated coefficients
print("Estimated coefficients (true, linear regression, RANSAC):")
print(coef, lr.coef_, ransac.estimator_.coef_)
lw = 2
plt.scatter(
X[inlier_mask], y[inlier_mask], color="yellowgreen", marker=".", label="Inliers"
)
plt.scatter(
X[outlier_mask], y[outlier_mask], color="gold", marker=".", label="Outliers"
)
plt.plot(line_X, line_y, color="navy", linewidth=lw, label="Linear regressor")
plt.plot(
line_X,
line_y_ransac,
color="cornflowerblue",
linewidth=lw,
label="RANSAC regressor",
)
plt.legend(loc="lower right")
plt.xlabel("Input")
plt.ylabel("Response")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.084 seconds)
[`Download Python source code: plot_ransac.py`](https://scikit-learn.org/1.1/_downloads/b4762c93d453d21e5679b926dd2c532c/plot_ransac.py)
[`Download Jupyter notebook: plot_ransac.ipynb`](https://scikit-learn.org/1.1/_downloads/99ccb624369d00fa81b6030937f80b04/plot_ransac.ipynb)
scikit_learn One-Class SVM versus One-Class SVM using Stochastic Gradient Descent Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sgdocsvm-vs-ocsvm-py) to download the full example code or to run this example in your browser via Binder
One-Class SVM versus One-Class SVM using Stochastic Gradient Descent
====================================================================
This example shows how to approximate the solution of [`sklearn.svm.OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") in the case of an RBF kernel with [`sklearn.linear_model.SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM"), a Stochastic Gradient Descent (SGD) version of the One-Class SVM. A kernel approximation is first used in order to apply [`sklearn.linear_model.SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") which implements a linear One-Class SVM using SGD.
Note that [`sklearn.linear_model.SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") scales linearly with the number of samples whereas the complexity of a kernelized [`sklearn.svm.OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") is at best quadratic with respect to the number of samples. It is not the purpose of this example to illustrate the benefits of such an approximation in terms of computation time but rather to show that we obtain similar results on a toy dataset.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from sklearn.svm import OneClassSVM
from sklearn.linear_model import SGDOneClassSVM
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
font = {"weight": "normal", "size": 15}
matplotlib.rc("font", **font)
random_state = 42
rng = np.random.RandomState(random_state)
# Generate train data
X = 0.3 * rng.randn(500, 2)
X_train = np.r_[X + 2, X - 2]
# Generate some regular novel observations
X = 0.3 * rng.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))
xx, yy = np.meshgrid(np.linspace(-4.5, 4.5, 50), np.linspace(-4.5, 4.5, 50))
# OCSVM hyperparameters
nu = 0.05
gamma = 2.0
# Fit the One-Class SVM
clf = OneClassSVM(gamma=gamma, kernel="rbf", nu=nu)
clf.fit(X_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
y_pred_outliers = clf.predict(X_outliers)
n_error_train = y_pred_train[y_pred_train == -1].size
n_error_test = y_pred_test[y_pred_test == -1].size
n_error_outliers = y_pred_outliers[y_pred_outliers == 1].size
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Fit the One-Class SVM using a kernel approximation and SGD
transform = Nystroem(gamma=gamma, random_state=random_state)
clf_sgd = SGDOneClassSVM(
nu=nu, shuffle=True, fit_intercept=True, random_state=random_state, tol=1e-4
)
pipe_sgd = make_pipeline(transform, clf_sgd)
pipe_sgd.fit(X_train)
y_pred_train_sgd = pipe_sgd.predict(X_train)
y_pred_test_sgd = pipe_sgd.predict(X_test)
y_pred_outliers_sgd = pipe_sgd.predict(X_outliers)
n_error_train_sgd = y_pred_train_sgd[y_pred_train_sgd == -1].size
n_error_test_sgd = y_pred_test_sgd[y_pred_test_sgd == -1].size
n_error_outliers_sgd = y_pred_outliers_sgd[y_pred_outliers_sgd == 1].size
Z_sgd = pipe_sgd.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z_sgd = Z_sgd.reshape(xx.shape)
# plot the level sets of the decision function
plt.figure(figsize=(9, 6))
plt.title("One Class SVM")
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.PuBu)
a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors="darkred")
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors="palevioletred")
s = 20
b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c="white", s=s, edgecolors="k")
b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c="blueviolet", s=s, edgecolors="k")
c = plt.scatter(X_outliers[:, 0], X_outliers[:, 1], c="gold", s=s, edgecolors="k")
plt.axis("tight")
plt.xlim((-4.5, 4.5))
plt.ylim((-4.5, 4.5))
plt.legend(
[a.collections[0], b1, b2, c],
[
"learned frontier",
"training observations",
"new regular observations",
"new abnormal observations",
],
loc="upper left",
)
plt.xlabel(
"error train: %d/%d; errors novel regular: %d/%d; errors novel abnormal: %d/%d"
% (
n_error_train,
X_train.shape[0],
n_error_test,
X_test.shape[0],
n_error_outliers,
X_outliers.shape[0],
)
)
plt.show()
plt.figure(figsize=(9, 6))
plt.title("Online One-Class SVM")
plt.contourf(xx, yy, Z_sgd, levels=np.linspace(Z_sgd.min(), 0, 7), cmap=plt.cm.PuBu)
a = plt.contour(xx, yy, Z_sgd, levels=[0], linewidths=2, colors="darkred")
plt.contourf(xx, yy, Z_sgd, levels=[0, Z_sgd.max()], colors="palevioletred")
s = 20
b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c="white", s=s, edgecolors="k")
b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c="blueviolet", s=s, edgecolors="k")
c = plt.scatter(X_outliers[:, 0], X_outliers[:, 1], c="gold", s=s, edgecolors="k")
plt.axis("tight")
plt.xlim((-4.5, 4.5))
plt.ylim((-4.5, 4.5))
plt.legend(
[a.collections[0], b1, b2, c],
[
"learned frontier",
"training observations",
"new regular observations",
"new abnormal observations",
],
loc="upper left",
)
plt.xlabel(
"error train: %d/%d; errors novel regular: %d/%d; errors novel abnormal: %d/%d"
% (
n_error_train_sgd,
X_train.shape[0],
n_error_test_sgd,
X_test.shape[0],
n_error_outliers_sgd,
X_outliers.shape[0],
)
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.269 seconds)
[`Download Python source code: plot_sgdocsvm_vs_ocsvm.py`](https://scikit-learn.org/1.1/_downloads/3692f2475b52d40c3632818c00ff5b52/plot_sgdocsvm_vs_ocsvm.py)
[`Download Jupyter notebook: plot_sgdocsvm_vs_ocsvm.ipynb`](https://scikit-learn.org/1.1/_downloads/69531acbb2cc9d87ef86f75c8eaba68d/plot_sgdocsvm_vs_ocsvm.ipynb)
scikit_learn Plot multinomial and One-vs-Rest Logistic Regression Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-logistic-multinomial-py) to download the full example code or to run this example in your browser via Binder
Plot multinomial and One-vs-Rest Logistic Regression
====================================================
Plot decision surface of multinomial and One-vs-Rest Logistic Regression. The hyperplanes corresponding to the three One-vs-Rest (OVR) classifiers are represented by the dashed lines.
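For reference, the multinomial model estimates \(P(y = k \mid x) = \exp(w_k^\top x + b_k) / \sum_j \exp(w_j^\top x + b_j)\) jointly over all classes, while the One-vs-Rest variant fits one binary logistic regression per class; the dashed lines drawn by the code below are the per-class hyperplanes \(w_k^\top x + b_k = 0\).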
```
training score : 0.995 (multinomial)
training score : 0.976 (ovr)
```
```
# Authors: Tom Dupre la Tour <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import DecisionBoundaryDisplay
# make 3-class dataset for classification
centers = [[-5, 0], [0, 1.5], [5, -1]]
X, y = make_blobs(n_samples=1000, centers=centers, random_state=40)
transformation = [[0.4, 0.2], [-0.4, 1.2]]
X = np.dot(X, transformation)
for multi_class in ("multinomial", "ovr"):
clf = LogisticRegression(
solver="sag", max_iter=100, random_state=42, multi_class=multi_class
).fit(X, y)
# print the training scores
print("training score : %.3f (%s)" % (clf.score(X, y), multi_class))
_, ax = plt.subplots()
DecisionBoundaryDisplay.from_estimator(
clf, X, response_method="predict", cmap=plt.cm.Paired, ax=ax
)
plt.title("Decision surface of LogisticRegression (%s)" % multi_class)
plt.axis("tight")
# Plot also the training points
colors = "bry"
for i, color in zip(clf.classes_, colors):
idx = np.where(y == i)
plt.scatter(
X[idx, 0], X[idx, 1], c=color, cmap=plt.cm.Paired, edgecolor="black", s=20
)
# Plot the three one-against-all classifiers
xmin, xmax = plt.xlim()
ymin, ymax = plt.ylim()
coef = clf.coef_
intercept = clf.intercept_
def plot_hyperplane(c, color):
def line(x0):
return (-(x0 * coef[c, 0]) - intercept[c]) / coef[c, 1]
plt.plot([xmin, xmax], [line(xmin), line(xmax)], ls="--", color=color)
for i, color in zip(clf.classes_, colors):
plot_hyperplane(i, color)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.173 seconds)
[`Download Python source code: plot_logistic_multinomial.py`](https://scikit-learn.org/1.1/_downloads/85c522d0f7149f3e8274112a6d62256f/plot_logistic_multinomial.py)
[`Download Jupyter notebook: plot_logistic_multinomial.ipynb`](https://scikit-learn.org/1.1/_downloads/4c1663175b07cf9608b07331aa180eb7/plot_logistic_multinomial.ipynb)
scikit_learn Non-negative least squares Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-nnls-py) to download the full example code or to run this example in your browser via Binder
Non-negative least squares
==========================
In this example, we fit a linear model with positive constraints on the regression coefficients and compare the estimated coefficients to a classic linear regression.
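Formally, non-negative least squares solves \(\min_{w \ge 0} \lVert y - Xw \rVert_2^2\); this is what `LinearRegression(positive=True)` fits below.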
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
```
Generate some random data
```
np.random.seed(42)
n_samples, n_features = 200, 50
X = np.random.randn(n_samples, n_features)
true_coef = 3 * np.random.randn(n_features)
# Threshold coefficients to render them non-negative
true_coef[true_coef < 0] = 0
y = np.dot(X, true_coef)
# Add some noise
y += 5 * np.random.normal(size=(n_samples,))
```
Split the data in train set and test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
```
Fit the Non-Negative least squares.
```
from sklearn.linear_model import LinearRegression
reg_nnls = LinearRegression(positive=True)
y_pred_nnls = reg_nnls.fit(X_train, y_train).predict(X_test)
r2_score_nnls = r2_score(y_test, y_pred_nnls)
print("NNLS R2 score", r2_score_nnls)
```
```
NNLS R2 score 0.8225220806196526
```
Fit an OLS.
```
reg_ols = LinearRegression()
y_pred_ols = reg_ols.fit(X_train, y_train).predict(X_test)
r2_score_ols = r2_score(y_test, y_pred_ols)
print("OLS R2 score", r2_score_ols)
```
```
OLS R2 score 0.7436926291700354
```
Comparing the regression coefficients between OLS and NNLS, we can observe that they are highly correlated (the dashed line is the identity relation), but the non-negative constraint shrinks some of them to 0. Non-negative least squares inherently yields sparse results.
```
fig, ax = plt.subplots()
ax.plot(reg_ols.coef_, reg_nnls.coef_, linewidth=0, marker=".")
low_x, high_x = ax.get_xlim()
low_y, high_y = ax.get_ylim()
low = max(low_x, low_y)
high = min(high_x, high_y)
ax.plot([low, high], [low, high], ls="--", c=".3", alpha=0.5)
ax.set_xlabel("OLS regression coefficients", fontweight="bold")
ax.set_ylabel("NNLS regression coefficients", fontweight="bold")
```
```
Text(55.847222222222214, 0.5, 'NNLS regression coefficients')
```
**Total running time of the script:** ( 0 minutes 0.060 seconds)
[`Download Python source code: plot_nnls.py`](https://scikit-learn.org/1.1/_downloads/055e50ba9fa8c7dcf45dc8f1f32a0243/plot_nnls.py)
[`Download Jupyter notebook: plot_nnls.ipynb`](https://scikit-learn.org/1.1/_downloads/581dc0b49de1be345ed0b6010e5f873d/plot_nnls.ipynb)
scikit_learn Early stopping of Stochastic Gradient Descent Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sgd-early-stopping-py) to download the full example code or to run this example in your browser via Binder
Early stopping of Stochastic Gradient Descent
=============================================
Stochastic Gradient Descent is an optimization technique which minimizes a loss function in a stochastic fashion, performing a gradient descent step sample by sample. In particular, it is a very efficient method to fit linear models.
As a stochastic method, the loss function is not necessarily decreasing at each iteration, and convergence is only guaranteed in expectation. For this reason, monitoring the convergence on the loss function can be difficult.
Another approach is to monitor convergence on a validation score. In this case, the input data is split into a training set and a validation set. The model is then fitted on the training set and the stopping criterion is based on the prediction score computed on the validation set. This enables us to find the least number of iterations which is sufficient to build a model that generalizes well to unseen data and reduces the chance of over-fitting the training data.
This early stopping strategy is activated if `early_stopping=True`; otherwise the stopping criterion only uses the training loss on the entire input data. To better control the early stopping strategy, we can specify a parameter `validation_fraction` which sets the fraction of the input dataset that we keep aside to compute the validation score. The optimization continues until the validation score has failed to improve by at least `tol` during the last `n_iter_no_change` iterations. The actual number of iterations is available at the attribute `n_iter_`.
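A minimal usage sketch of these parameters (values are illustrative, and it assumes training data `X_train`, `y_train` are already available):
```
from sklearn.linear_model import SGDClassifier

# Illustrative only: parameter values are arbitrary and X_train, y_train
# are assumed to be an existing training set.
clf = SGDClassifier(
    early_stopping=True,      # hold out part of the training data
    validation_fraction=0.2,  # fraction of the training set used for validation
    n_iter_no_change=3,       # stop after 3 epochs without sufficient improvement
    tol=1e-3,                 # minimum required improvement of the validation score
    max_iter=1000,
    random_state=0,
)
clf.fit(X_train, y_train)
print("stopped after", clf.n_iter_, "epochs")
```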
This example illustrates how early stopping can be used in the [`SGDClassifier`](../../modules/generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") model to achieve almost the same accuracy as a model built without early stopping, which can significantly reduce training time. Note that scores differ between the stopping criteria even from early iterations because some of the training data is held out with the validation stopping criterion.
```
No stopping criterion: .................................................
Training loss: .................................................
Validation score: .................................................
```
```
# Authors: Tom Dupre la Tour
#
# License: BSD 3 clause
import time
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.utils._testing import ignore_warnings
from sklearn.exceptions import ConvergenceWarning
from sklearn.utils import shuffle
def load_mnist(n_samples=None, class_0="0", class_1="8"):
"""Load MNIST, select two classes, shuffle and return only n_samples."""
# Load data from http://openml.org/d/554
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
# take only two classes for binary classification
mask = np.logical_or(mnist.target == class_0, mnist.target == class_1)
X, y = shuffle(mnist.data[mask], mnist.target[mask], random_state=42)
if n_samples is not None:
X, y = X[:n_samples], y[:n_samples]
return X, y
@ignore_warnings(category=ConvergenceWarning)
def fit_and_score(estimator, max_iter, X_train, X_test, y_train, y_test):
"""Fit the estimator on the train set and score it on both sets"""
estimator.set_params(max_iter=max_iter)
estimator.set_params(random_state=0)
start = time.time()
estimator.fit(X_train, y_train)
fit_time = time.time() - start
n_iter = estimator.n_iter_
train_score = estimator.score(X_train, y_train)
test_score = estimator.score(X_test, y_test)
return fit_time, n_iter, train_score, test_score
# Define the estimators to compare
estimator_dict = {
"No stopping criterion": linear_model.SGDClassifier(n_iter_no_change=3),
"Training loss": linear_model.SGDClassifier(
early_stopping=False, n_iter_no_change=3, tol=0.1
),
"Validation score": linear_model.SGDClassifier(
early_stopping=True, n_iter_no_change=3, tol=0.0001, validation_fraction=0.2
),
}
# Load the dataset
X, y = load_mnist(n_samples=10000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
results = []
for estimator_name, estimator in estimator_dict.items():
print(estimator_name + ": ", end="")
for max_iter in range(1, 50):
print(".", end="")
sys.stdout.flush()
fit_time, n_iter, train_score, test_score = fit_and_score(
estimator, max_iter, X_train, X_test, y_train, y_test
)
results.append(
(estimator_name, max_iter, fit_time, n_iter, train_score, test_score)
)
print("")
# Transform the results in a pandas dataframe for easy plotting
columns = [
"Stopping criterion",
"max_iter",
"Fit time (sec)",
"n_iter_",
"Train score",
"Test score",
]
results_df = pd.DataFrame(results, columns=columns)
# Define what to plot (x_axis, y_axis)
lines = "Stopping criterion"
plot_list = [
("max_iter", "Train score"),
("max_iter", "Test score"),
("max_iter", "n_iter_"),
("max_iter", "Fit time (sec)"),
]
nrows = 2
ncols = int(np.ceil(len(plot_list) / 2.0))
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(6 * ncols, 4 * nrows))
axes[0, 0].get_shared_y_axes().join(axes[0, 0], axes[0, 1])
for ax, (x_axis, y_axis) in zip(axes.ravel(), plot_list):
for criterion, group_df in results_df.groupby(lines):
group_df.plot(x=x_axis, y=y_axis, label=criterion, ax=ax)
ax.set_title(y_axis)
ax.legend(title=lines)
fig.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 44.453 seconds)
[`Download Python source code: plot_sgd_early_stopping.py`](https://scikit-learn.org/1.1/_downloads/b4d6bfda6769cc5cc1cf25427dec34d6/plot_sgd_early_stopping.py)
[`Download Jupyter notebook: plot_sgd_early_stopping.ipynb`](https://scikit-learn.org/1.1/_downloads/389fb4950ddfe12a741e6ac5b7d79193/plot_sgd_early_stopping.ipynb)
scikit_learn Ordinary Least Squares and Ridge Regression Variance Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-ols-ridge-variance-py) to download the full example code or to run this example in your browser via Binder
Ordinary Least Squares and Ridge Regression Variance
====================================================
Due to the few points in each dimension and the straight line that linear regression uses to follow these points as well as it can, noise on the observations will cause great variance, as shown in the first plot. Every line’s slope can vary quite a bit from one prediction to the next because of the noise in the observations.
Ridge regression is basically minimizing a penalised version of the least-squares loss function. The penalty shrinks the values of the regression coefficients. Despite the few data points in each dimension, the slope of the prediction is much more stable and the variance in the line itself is greatly reduced, in comparison to that of the standard linear regression.
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
X_train = np.c_[0.5, 1].T
y_train = [0.5, 1]
X_test = np.c_[0, 2].T
np.random.seed(0)
classifiers = dict(
ols=linear_model.LinearRegression(), ridge=linear_model.Ridge(alpha=0.1)
)
for name, clf in classifiers.items():
fig, ax = plt.subplots(figsize=(4, 3))
for _ in range(6):
this_X = 0.1 * np.random.normal(size=(2, 1)) + X_train
clf.fit(this_X, y_train)
ax.plot(X_test, clf.predict(X_test), color="gray")
ax.scatter(this_X, y_train, s=3, c="gray", marker="o", zorder=10)
clf.fit(X_train, y_train)
ax.plot(X_test, clf.predict(X_test), linewidth=2, color="blue")
ax.scatter(X_train, y_train, s=30, c="red", marker="+", zorder=10)
ax.set_title(name)
ax.set_xlim(0, 2)
ax.set_ylim((0, 1.6))
ax.set_xlabel("X")
ax.set_ylabel("y")
fig.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.168 seconds)
[`Download Python source code: plot_ols_ridge_variance.py`](https://scikit-learn.org/1.1/_downloads/9d9fb0a68272db3e5ae8ad0da4cbcd69/plot_ols_ridge_variance.py)
[`Download Jupyter notebook: plot_ols_ridge_variance.ipynb`](https://scikit-learn.org/1.1/_downloads/09c15b8ca914c1951a06a9ce3431460f/plot_ols_ridge_variance.ipynb)
scikit_learn Lasso and Elastic Net Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-lasso-coordinate-descent-path-py) to download the full example code or to run this example in your browser via Binder
Lasso and Elastic Net
=====================
Lasso and elastic net (L1 and L2 penalization) implemented using coordinate descent.
The coefficients can be forced to be positive.
```
Computing regularization path using the lasso...
Computing regularization path using the positive lasso...
Computing regularization path using the elastic net...
Computing regularization path using the positive elastic net...
```
```
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
from itertools import cycle
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import lasso_path, enet_path
from sklearn import datasets
X, y = datasets.load_diabetes(return_X_y=True)
X /= X.std(axis=0) # Standardize data (easier to set the l1_ratio parameter)
# Compute paths
eps = 5e-3 # the smaller it is the longer is the path
print("Computing regularization path using the lasso...")
alphas_lasso, coefs_lasso, _ = lasso_path(X, y, eps=eps)
print("Computing regularization path using the positive lasso...")
alphas_positive_lasso, coefs_positive_lasso, _ = lasso_path(
X, y, eps=eps, positive=True
)
print("Computing regularization path using the elastic net...")
alphas_enet, coefs_enet, _ = enet_path(X, y, eps=eps, l1_ratio=0.8)
print("Computing regularization path using the positive elastic net...")
alphas_positive_enet, coefs_positive_enet, _ = enet_path(
X, y, eps=eps, l1_ratio=0.8, positive=True
)
# Display results
plt.figure(1)
colors = cycle(["b", "r", "g", "c", "k"])
neg_log_alphas_lasso = -np.log10(alphas_lasso)
neg_log_alphas_enet = -np.log10(alphas_enet)
for coef_l, coef_e, c in zip(coefs_lasso, coefs_enet, colors):
l1 = plt.plot(neg_log_alphas_lasso, coef_l, c=c)
l2 = plt.plot(neg_log_alphas_enet, coef_e, linestyle="--", c=c)
plt.xlabel("-Log(alpha)")
plt.ylabel("coefficients")
plt.title("Lasso and Elastic-Net Paths")
plt.legend((l1[-1], l2[-1]), ("Lasso", "Elastic-Net"), loc="lower left")
plt.axis("tight")
plt.figure(2)
neg_log_alphas_positive_lasso = -np.log10(alphas_positive_lasso)
for coef_l, coef_pl, c in zip(coefs_lasso, coefs_positive_lasso, colors):
l1 = plt.plot(neg_log_alphas_lasso, coef_l, c=c)
l2 = plt.plot(neg_log_alphas_positive_lasso, coef_pl, linestyle="--", c=c)
plt.xlabel("-Log(alpha)")
plt.ylabel("coefficients")
plt.title("Lasso and positive Lasso")
plt.legend((l1[-1], l2[-1]), ("Lasso", "positive Lasso"), loc="lower left")
plt.axis("tight")
plt.figure(3)
neg_log_alphas_positive_enet = -np.log10(alphas_positive_enet)
for coef_e, coef_pe, c in zip(coefs_enet, coefs_positive_enet, colors):
l1 = plt.plot(neg_log_alphas_enet, coef_e, c=c)
l2 = plt.plot(neg_log_alphas_positive_enet, coef_pe, linestyle="--", c=c)
plt.xlabel("-Log(alpha)")
plt.ylabel("coefficients")
plt.title("Elastic-Net and positive Elastic-Net")
plt.legend((l1[-1], l2[-1]), ("Elastic-Net", "positive Elastic-Net"), loc="lower left")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.249 seconds)
[`Download Python source code: plot_lasso_coordinate_descent_path.py`](https://scikit-learn.org/1.1/_downloads/1f682002d8b68c290d9d03599368e83d/plot_lasso_coordinate_descent_path.py)
[`Download Jupyter notebook: plot_lasso_coordinate_descent_path.ipynb`](https://scikit-learn.org/1.1/_downloads/4c4c075dc14e39d30d982d0b2818ea95/plot_lasso_coordinate_descent_path.ipynb)
scikit_learn Logistic Regression 3-class Classifier Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-iris-logistic-py) to download the full example code or to run this example in your browser via Binder
Logistic Regression 3-class Classifier
======================================
Shown below are the decision boundaries of a logistic-regression classifier on the first two dimensions (sepal length and width) of the [iris](https://en.wikipedia.org/wiki/Iris_flower_data_set) dataset. The data points are colored according to their labels.
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.inspection import DecisionBoundaryDisplay
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
Y = iris.target
# Create an instance of Logistic Regression Classifier and fit the data.
logreg = LogisticRegression(C=1e5)
logreg.fit(X, Y)
_, ax = plt.subplots(figsize=(4, 3))
DecisionBoundaryDisplay.from_estimator(
logreg,
X,
cmap=plt.cm.Paired,
ax=ax,
response_method="predict",
plot_method="pcolormesh",
shading="auto",
xlabel="Sepal length",
ylabel="Sepal width",
eps=0.5,
)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors="k", cmap=plt.cm.Paired)
plt.xticks(())
plt.yticks(())
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.047 seconds)
[`Download Python source code: plot_iris_logistic.py`](https://scikit-learn.org/1.1/_downloads/8597aee4ffb052082e2e71a7496b7ee0/plot_iris_logistic.py)
[`Download Jupyter notebook: plot_iris_logistic.ipynb`](https://scikit-learn.org/1.1/_downloads/557dc086a33038b608833f8490e00a43/plot_iris_logistic.ipynb)
scikit_learn Lasso on dense and sparse data Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-lasso-dense-vs-sparse-data-py) to download the full example code or to run this example in your browser via Binder
Lasso on dense and sparse data
==============================
We show that linear\_model.Lasso provides the same results for dense and sparse data and that in the case of sparse data the speed is improved.
```
from time import time
from scipy import sparse
from scipy import linalg
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
```
Comparing the two Lasso implementations on Dense data
-----------------------------------------------------
We create a linear regression problem that is suitable for the Lasso, that is to say, with more features than samples. We then store the data matrix in both dense (the usual) and sparse format, and train a Lasso on each. We compute the runtime of both and check that they learned the same model by computing the Euclidean norm of the difference between the coefficients they learned. Because the data is dense, we expect better runtime with a dense data format.
```
X, y = make_regression(n_samples=200, n_features=5000, random_state=0)
# create a copy of X in sparse format
X_sp = sparse.coo_matrix(X)
alpha = 1
sparse_lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=1000)
dense_lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=1000)
t0 = time()
sparse_lasso.fit(X_sp, y)
print(f"Sparse Lasso done in {(time() - t0):.3f}s")
t0 = time()
dense_lasso.fit(X, y)
print(f"Dense Lasso done in {(time() - t0):.3f}s")
# compare the regression coefficients
coeff_diff = linalg.norm(sparse_lasso.coef_ - dense_lasso.coef_)
print(f"Distance between coefficients : {coeff_diff:.2e}")
#
```
```
Sparse Lasso done in 0.092s
Dense Lasso done in 0.031s
Distance between coefficients : 1.01e-13
```
Comparing the two Lasso implementations on Sparse data
------------------------------------------------------
We make the previous problem sparse by replacing all small values with 0 and run the same comparisons as above. Because the data is now sparse, we expect the implementation that uses the sparse data format to be faster.
```
# make a copy of the previous data
Xs = X.copy()
# make Xs sparse by replacing the values lower than 2.5 with 0s
Xs[Xs < 2.5] = 0.0
# create a copy of Xs in sparse format
Xs_sp = sparse.coo_matrix(Xs)
Xs_sp = Xs_sp.tocsc()
# compute the proportion of non-zero coefficient in the data matrix
print(f"Matrix density : {(Xs_sp.nnz / float(X.size) * 100):.3f}%")
alpha = 0.1
sparse_lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
dense_lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
t0 = time()
sparse_lasso.fit(Xs_sp, y)
print(f"Sparse Lasso done in {(time() - t0):.3f}s")
t0 = time()
dense_lasso.fit(Xs, y)
print(f"Dense Lasso done in {(time() - t0):.3f}s")
# compare the regression coefficients
coeff_diff = linalg.norm(sparse_lasso.coef_ - dense_lasso.coef_)
print(f"Distance between coefficients : {coeff_diff:.2e}")
```
```
Matrix density : 0.626%
Sparse Lasso done in 0.146s
Dense Lasso done in 0.748s
Distance between coefficients : 8.65e-12
```
**Total running time of the script:** ( 0 minutes 1.081 seconds)
[`Download Python source code: plot_lasso_dense_vs_sparse_data.py`](https://scikit-learn.org/1.1/_downloads/510f5becea7ec7018a8eee43d6f12b1b/plot_lasso_dense_vs_sparse_data.py)
[`Download Jupyter notebook: plot_lasso_dense_vs_sparse_data.ipynb`](https://scikit-learn.org/1.1/_downloads/0aadb4e0dc9f402704c8a56152f01083/plot_lasso_dense_vs_sparse_data.ipynb)
scikit_learn Joint feature selection with multi-task Lasso Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-multi-task-lasso-support-py) to download the full example code or to run this example in your browser via Binder
Joint feature selection with multi-task Lasso
=============================================
The multi-task lasso allows fitting multiple regression problems jointly while enforcing the selected features to be the same across tasks. This example simulates sequential measurements: each task is a time instant, and the relevant features, while remaining the same, vary in amplitude over time. The multi-task lasso imposes that features selected at one time point are selected for all time points. This makes feature selection by the Lasso more stable.
```
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
```
Generate data
-------------
```
import numpy as np
rng = np.random.RandomState(42)
# Generate some 2D coefficients with sine waves with random frequency and phase
n_samples, n_features, n_tasks = 100, 30, 40
n_relevant_features = 5
coef = np.zeros((n_tasks, n_features))
times = np.linspace(0, 2 * np.pi, n_tasks)
for k in range(n_relevant_features):
coef[:, k] = np.sin((1.0 + rng.randn(1)) * times + 3 * rng.randn(1))
X = rng.randn(n_samples, n_features)
Y = np.dot(X, coef.T) + rng.randn(n_samples, n_tasks)
```
Fit models
----------
```
from sklearn.linear_model import MultiTaskLasso, Lasso
coef_lasso_ = np.array([Lasso(alpha=0.5).fit(X, y).coef_ for y in Y.T])
coef_multi_task_lasso_ = MultiTaskLasso(alpha=1.0).fit(X, Y).coef_
```
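As a small optional check (an addition to the original example, not part of it), one can compare the sparsity patterns of the two fitted coefficient matrices; we would expect the multi-task model to select the same feature set for every task, whereas the independent Lasso fits generally do not.
```
# Compare the supports of the coefficient matrices fitted above.
support_lasso = coef_lasso_ != 0              # shape (n_tasks, n_features)
support_multi = coef_multi_task_lasso_ != 0
print("Same support for all tasks (independent Lasso):",
      bool(np.all(support_lasso == support_lasso[0])))
print("Same support for all tasks (MultiTaskLasso):",
      bool(np.all(support_multi == support_multi[0])))
```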
Plot support and time series
----------------------------
```
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 5))
plt.subplot(1, 2, 1)
plt.spy(coef_lasso_)
plt.xlabel("Feature")
plt.ylabel("Time (or Task)")
plt.text(10, 5, "Lasso")
plt.subplot(1, 2, 2)
plt.spy(coef_multi_task_lasso_)
plt.xlabel("Feature")
plt.ylabel("Time (or Task)")
plt.text(10, 5, "MultiTaskLasso")
fig.suptitle("Coefficient non-zero location")
feature_to_plot = 0
plt.figure()
lw = 2
plt.plot(coef[:, feature_to_plot], color="seagreen", linewidth=lw, label="Ground truth")
plt.plot(
coef_lasso_[:, feature_to_plot], color="cornflowerblue", linewidth=lw, label="Lasso"
)
plt.plot(
coef_multi_task_lasso_[:, feature_to_plot],
color="gold",
linewidth=lw,
label="MultiTaskLasso",
)
plt.legend(loc="upper center")
plt.axis("tight")
plt.ylim([-1.1, 1.1])
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.205 seconds)
[`Download Python source code: plot_multi_task_lasso_support.py`](https://scikit-learn.org/1.1/_downloads/2f0720ee8bdf39051b71d0070814a47b/plot_multi_task_lasso_support.py)
[`Download Jupyter notebook: plot_multi_task_lasso_support.ipynb`](https://scikit-learn.org/1.1/_downloads/f1a137ac46ba39a9ac507c515d184d90/plot_multi_task_lasso_support.ipynb)
scikit_learn Polynomial and Spline interpolation Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-polynomial-interpolation-py) to download the full example code or to run this example in your browser via Binder
Polynomial and Spline interpolation
===================================
This example demonstrates how to approximate a function with polynomials up to degree `degree` by using ridge regression. We show two different ways to do this, given `n_samples` 1d points `x_i`:
* [`PolynomialFeatures`](../../modules/generated/sklearn.preprocessing.polynomialfeatures#sklearn.preprocessing.PolynomialFeatures "sklearn.preprocessing.PolynomialFeatures") generates all monomials up to `degree`. This gives us the so called Vandermonde matrix with `n_samples` rows and `degree + 1` columns:
```
[[1, x_0, x_0 ** 2, x_0 ** 3, ..., x_0 ** degree],
[1, x_1, x_1 ** 2, x_1 ** 3, ..., x_1 ** degree],
...]
```
Intuitively, this matrix can be interpreted as a matrix of pseudo features (the points raised to some power). The matrix is akin to (but different from) the matrix induced by a polynomial kernel.
* [`SplineTransformer`](../../modules/generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer") generates B-spline basis functions. A basis function of a B-spline is a piece-wise polynomial function of degree `degree` that is non-zero only between `degree+1` consecutive knots. Given `n_knots` knots, this results in a matrix of `n_samples` rows and `n_knots + degree - 1` columns:
```
[[basis_1(x_0), basis_2(x_0), ...],
[basis_1(x_1), basis_2(x_1), ...],
...]
```
This example shows that these two transformers are well suited to model non-linear effects with a linear model, using a pipeline to add non-linear features. Kernel methods extend this idea and can induce very high (even infinite) dimensional feature spaces.
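As a brief sanity check of the shapes described above (a small addition to the original text), one can transform a handful of points and inspect the number of columns produced by each transformer:
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, SplineTransformer

X_demo = np.linspace(0.0, 1.0, 5)[:, np.newaxis]  # 5 samples, one 1d feature

vander = PolynomialFeatures(degree=3).fit_transform(X_demo)
print(vander.shape)    # (5, 4): degree + 1 columns of the Vandermonde matrix

bsplines = SplineTransformer(n_knots=4, degree=3).fit_transform(X_demo)
print(bsplines.shape)  # (5, 6): n_knots + degree - 1 columns
```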
```
# Author: Mathieu Blondel
# Jake Vanderplas
# Christian Lorentzen
# Malte Londschien
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures, SplineTransformer
from sklearn.pipeline import make_pipeline
```
We start by defining a function that we intend to approximate, and prepare to plot it.
```
def f(x):
"""Function to be approximated by polynomial interpolation."""
return x * np.sin(x)
# whole range we want to plot
x_plot = np.linspace(-1, 11, 100)
```
To make it interesting, we only give a small subset of points to train on.
```
x_train = np.linspace(0, 10, 100)
rng = np.random.RandomState(0)
x_train = np.sort(rng.choice(x_train, size=20, replace=False))
y_train = f(x_train)
# create 2D-array versions of these arrays to feed to transformers
X_train = x_train[:, np.newaxis]
X_plot = x_plot[:, np.newaxis]
```
Now we are ready to create polynomial features and splines, fit on the training points and show how well they interpolate.
```
# plot function
lw = 2
fig, ax = plt.subplots()
ax.set_prop_cycle(
color=["black", "teal", "yellowgreen", "gold", "darkorange", "tomato"]
)
ax.plot(x_plot, f(x_plot), linewidth=lw, label="ground truth")
# plot training points
ax.scatter(x_train, y_train, label="training points")
# polynomial features
for degree in [3, 4, 5]:
model = make_pipeline(PolynomialFeatures(degree), Ridge(alpha=1e-3))
model.fit(X_train, y_train)
y_plot = model.predict(X_plot)
ax.plot(x_plot, y_plot, label=f"degree {degree}")
# B-spline with 4 + 3 - 1 = 6 basis functions
model = make_pipeline(SplineTransformer(n_knots=4, degree=3), Ridge(alpha=1e-3))
model.fit(X_train, y_train)
y_plot = model.predict(X_plot)
ax.plot(x_plot, y_plot, label="B-spline")
ax.legend(loc="lower center")
ax.set_ylim(-20, 10)
plt.show()
```
This shows nicely that higher-degree polynomials can fit the data better. But at the same time, too-high powers can show unwanted oscillatory behaviour and are particularly dangerous for extrapolation beyond the range of the fitted data. This is an advantage of B-splines. They usually fit the data as well as polynomials and show very nice and smooth behaviour. They also have good options to control the extrapolation, which defaults to continuing with a constant. Note that most often, you would rather increase the number of knots but keep `degree=3`.
In order to give more insights into the generated feature bases, we plot all columns of both transformers separately.
```
fig, axes = plt.subplots(ncols=2, figsize=(16, 5))
pft = PolynomialFeatures(degree=3).fit(X_train)
axes[0].plot(x_plot, pft.transform(X_plot))
axes[0].legend(axes[0].lines, [f"degree {n}" for n in range(4)])
axes[0].set_title("PolynomialFeatures")
splt = SplineTransformer(n_knots=4, degree=3).fit(X_train)
axes[1].plot(x_plot, splt.transform(X_plot))
axes[1].legend(axes[1].lines, [f"spline {n}" for n in range(6)])
axes[1].set_title("SplineTransformer")
# plot knots of spline
knots = splt.bsplines_[0].t
axes[1].vlines(knots[3:-3], ymin=0, ymax=0.8, linestyles="dashed")
plt.show()
```
In the left plot, we recognize the lines corresponding to simple monomials from `x**0` to `x**3`. In the right figure, we see the six B-spline basis functions of `degree=3` and also the four knot positions that were chosen during `fit`. Note that there are `degree` number of additional knots each to the left and to the right of the fitted interval. These are there for technical reasons, so we refrain from showing them. Every basis function has local support and is continued as a constant beyond the fitted range. This extrapolating behaviour could be changed by the argument `extrapolation`.
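As a small aside (an addition to the original text), the stored knots can be inspected directly on the fitted transformer: the 4 chosen knots plus `degree` extra knots on each side give 4 + 2 * 3 = 10 in total.
```
print(len(splt.bsplines_[0].t))    # 10 knots in total
print(splt.bsplines_[0].t[3:-3])   # the 4 knots inside the fitted range
```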
Periodic Splines
----------------
In the previous example we saw the limitations of polynomials and splines for extrapolation beyond the range of the training observations. In some settings, e.g. with seasonal effects, we expect a periodic continuation of the underlying signal. Such effects can be modelled using periodic splines, which have equal function values and equal derivatives at the first and last knot. In the following case we show how periodic splines provide a better fit both within and outside of the range of training data, given the additional information of periodicity. The spline's period is the distance between the first and last knot, which we specify manually.
Periodic splines can also be useful for naturally periodic features (such as day of the year), as the smoothness at the boundary knots prevents a jump in the transformed values (e.g. from Dec 31st to Jan 1st). For such naturally periodic features or more generally features where the period is known, it is advised to explicitly pass this information to the `SplineTransformer` by setting the knots manually.
```
def g(x):
"""Function to be approximated by periodic spline interpolation."""
return np.sin(x) - 0.7 * np.cos(x * 3)
y_train = g(x_train)
# Extend the test data into the future:
x_plot_ext = np.linspace(-1, 21, 200)
X_plot_ext = x_plot_ext[:, np.newaxis]
lw = 2
fig, ax = plt.subplots()
ax.set_prop_cycle(color=["black", "tomato", "teal"])
ax.plot(x_plot_ext, g(x_plot_ext), linewidth=lw, label="ground truth")
ax.scatter(x_train, y_train, label="training points")
for transformer, label in [
(SplineTransformer(degree=3, n_knots=10), "spline"),
(
SplineTransformer(
degree=3,
knots=np.linspace(0, 2 * np.pi, 10)[:, None],
extrapolation="periodic",
),
"periodic spline",
),
]:
model = make_pipeline(transformer, Ridge(alpha=1e-3))
model.fit(X_train, y_train)
y_plot_ext = model.predict(X_plot_ext)
ax.plot(x_plot_ext, y_plot_ext, label=label)
ax.legend()
fig.show()
```
```
fig, ax = plt.subplots()
knots = np.linspace(0, 2 * np.pi, 4)
splt = SplineTransformer(knots=knots[:, None], degree=3, extrapolation="periodic").fit(
X_train
)
ax.plot(x_plot_ext, splt.transform(X_plot_ext))
ax.legend(ax.lines, [f"spline {n}" for n in range(3)])
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.385 seconds)
[`Download Python source code: plot_polynomial_interpolation.py`](https://scikit-learn.org/1.1/_downloads/421ef4f01c94bb40ae3ecc3a1dd0c886/plot_polynomial_interpolation.py)
[`Download Jupyter notebook: plot_polynomial_interpolation.ipynb`](https://scikit-learn.org/1.1/_downloads/97af0ebf12a51edb939605ce096cae13/plot_polynomial_interpolation.ipynb)
scikit_learn Quantile regression Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-quantile-regression-py) to download the full example code or to run this example in your browser via Binder
Quantile regression
===================
This example illustrates how quantile regression can predict non-trivial conditional quantiles.
The left figure shows the case when the error distribution is normal, but has non-constant variance, i.e. with heteroscedasticity.
The right figure shows an example of an asymmetric error distribution, namely the Pareto distribution.
```
# Authors: David Dale <[email protected]>
# Christian Lorentzen <[email protected]>
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
```
Dataset generation
------------------
To illustrate the behaviour of quantile regression, we will generate two synthetic datasets. The true generative random processes for both datasets will share the same expected value, which has a linear relationship with a single feature `x`.
```
import numpy as np
rng = np.random.RandomState(42)
x = np.linspace(start=0, stop=10, num=100)
X = x[:, np.newaxis]
y_true_mean = 10 + 0.5 * x
```
We will create two subsequent problems by changing the distribution of the target `y` while keeping the same expected value:
* in the first case, a heteroscedastic Normal noise is added;
* in the second case, an asymmetric Pareto noise is added.
```
y_normal = y_true_mean + rng.normal(loc=0, scale=0.5 + 0.5 * x, size=x.shape[0])
a = 5
y_pareto = y_true_mean + 10 * (rng.pareto(a, size=x.shape[0]) - 1 / (a - 1))
```
Let’s first visualize the datasets as well as the distribution of the residuals `y - mean(y)`.
```
import matplotlib.pyplot as plt
_, axs = plt.subplots(nrows=2, ncols=2, figsize=(15, 11), sharex="row", sharey="row")
axs[0, 0].plot(x, y_true_mean, label="True mean")
axs[0, 0].scatter(x, y_normal, color="black", alpha=0.5, label="Observations")
axs[1, 0].hist(y_true_mean - y_normal, edgecolor="black")
axs[0, 1].plot(x, y_true_mean, label="True mean")
axs[0, 1].scatter(x, y_pareto, color="black", alpha=0.5, label="Observations")
axs[1, 1].hist(y_true_mean - y_pareto, edgecolor="black")
axs[0, 0].set_title("Dataset with heteroscedastic Normal distributed targets")
axs[0, 1].set_title("Dataset with asymmetric Pareto distributed target")
axs[1, 0].set_title(
"Residuals distribution for heteroscedastic Normal distributed targets"
)
axs[1, 1].set_title("Residuals distribution for asymmetric Pareto distributed target")
axs[0, 0].legend()
axs[0, 1].legend()
axs[0, 0].set_ylabel("y")
axs[1, 0].set_ylabel("Counts")
axs[0, 1].set_xlabel("x")
axs[0, 0].set_xlabel("x")
axs[1, 0].set_xlabel("Residuals")
_ = axs[1, 1].set_xlabel("Residuals")
```
With the heteroscedastic Normal distributed target, we observe that the variance of the noise is increasing when the value of the feature `x` is increasing.
With the asymmetric Pareto distributed target, we observe that the positive residuals are bounded.
These types of noisy targets make the estimation via [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") less efficient, i.e. we need more data to get stable results and, in addition, large outliers can have a huge impact on the fitted coefficients. (Stated otherwise: in a setting with constant variance, ordinary least squares estimators converge much faster to the *true* coefficients with increasing sample size.)
In this asymmetric setting, the median or other quantiles give additional insights. On top of that, median estimation is much more robust to outliers and heavy-tailed distributions. But note that extreme quantiles are estimated from very few data points: the 95% quantile is more or less estimated from the 5% largest values and is therefore also somewhat sensitive to outliers.
In the remainder of this tutorial, we will show how [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor") can be used in practice and give some intuition into the properties of the fitted models. Finally, we will compare both [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor") and [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression").
Fitting a `QuantileRegressor`
-----------------------------
In this section, we want to estimate the conditional median as well as a low and high quantile fixed at 5% and 95%, respectively. Thus, we will get three linear models, one for each quantile.
We will use the quantiles at 5% and 95% to find the outliers in the training sample beyond the central 90% interval.
```
from sklearn.linear_model import QuantileRegressor
quantiles = [0.05, 0.5, 0.95]
predictions = {}
out_bounds_predictions = np.zeros_like(y_true_mean, dtype=np.bool_)
for quantile in quantiles:
qr = QuantileRegressor(quantile=quantile, alpha=0)
y_pred = qr.fit(X, y_normal).predict(X)
predictions[quantile] = y_pred
if quantile == min(quantiles):
out_bounds_predictions = np.logical_or(
out_bounds_predictions, y_pred >= y_normal
)
elif quantile == max(quantiles):
out_bounds_predictions = np.logical_or(
out_bounds_predictions, y_pred <= y_normal
)
```
Now, we can plot the three linear models and distinguish the samples that are within the central 90% interval from the samples that are outside this interval.
```
plt.plot(X, y_true_mean, color="black", linestyle="dashed", label="True mean")
for quantile, y_pred in predictions.items():
plt.plot(X, y_pred, label=f"Quantile: {quantile}")
plt.scatter(
x[out_bounds_predictions],
y_normal[out_bounds_predictions],
color="black",
marker="+",
alpha=0.5,
label="Outside interval",
)
plt.scatter(
x[~out_bounds_predictions],
y_normal[~out_bounds_predictions],
color="black",
alpha=0.5,
label="Inside interval",
)
plt.legend()
plt.xlabel("x")
plt.ylabel("y")
_ = plt.title("Quantiles of heteroscedastic Normal distributed target")
```
Since the noise is still Normally distributed, in particular is symmetric, the true conditional mean and the true conditional median coincide. Indeed, we see that the estimated median almost hits the true mean. We observe the effect of having an increasing noise variance on the 5% and 95% quantiles: the slopes of those quantiles are very different and the interval between them becomes wider with increasing `x`.
To get an additional intuition regarding the meaning of the 5% and 95% quantiles estimators, one can count the number of samples above and below the predicted quantiles (represented by a cross on the above plot), considering that we have a total of 100 samples.
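The snippet below (a small addition, not part of the original example) performs this count using the arrays defined above; for well calibrated 5%, 50% and 95% models we would expect roughly 5%, 50% and 95% of the 100 samples to lie below the corresponding prediction.
```
# Fraction of observations below each predicted quantile line.
for quantile, y_pred in predictions.items():
    frac_below = np.mean(y_normal <= y_pred)
    print(f"quantile={quantile}: {frac_below:.0%} of samples below the prediction")
```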
We can repeat the same experiment using the asymmetric Pareto distributed target.
```
quantiles = [0.05, 0.5, 0.95]
predictions = {}
out_bounds_predictions = np.zeros_like(y_true_mean, dtype=np.bool_)
for quantile in quantiles:
qr = QuantileRegressor(quantile=quantile, alpha=0)
y_pred = qr.fit(X, y_pareto).predict(X)
predictions[quantile] = y_pred
if quantile == min(quantiles):
out_bounds_predictions = np.logical_or(
out_bounds_predictions, y_pred >= y_pareto
)
elif quantile == max(quantiles):
out_bounds_predictions = np.logical_or(
out_bounds_predictions, y_pred <= y_pareto
)
```
```
plt.plot(X, y_true_mean, color="black", linestyle="dashed", label="True mean")
for quantile, y_pred in predictions.items():
plt.plot(X, y_pred, label=f"Quantile: {quantile}")
plt.scatter(
x[out_bounds_predictions],
y_pareto[out_bounds_predictions],
color="black",
marker="+",
alpha=0.5,
label="Outside interval",
)
plt.scatter(
x[~out_bounds_predictions],
y_pareto[~out_bounds_predictions],
color="black",
alpha=0.5,
label="Inside interval",
)
plt.legend()
plt.xlabel("x")
plt.ylabel("y")
_ = plt.title("Quantiles of asymmetric Pareto distributed target")
```
Due to the asymmetry of the distribution of the noise, we observe that the true mean and estimated conditional median are different. We also observe that each quantile model has different parameters to better fit the desired quantile. Note that ideally, all quantiles would be parallel in this case, which would become more visible with more data points or less extreme quantiles, e.g. 10% and 90%.
Comparing `QuantileRegressor` and `LinearRegression`
----------------------------------------------------
In this section, we will focus on the difference between the errors that [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor") and [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") are minimizing.
Indeed, [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") is a least squares approach minimizing the mean squared error (MSE) between the training and predicted targets. In contrast, [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor") with `quantile=0.5` minimizes the mean absolute error (MAE) instead.
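To make this concrete, here is a tiny, hedged numerical check (an illustration added here, not part of the original example) that the mean minimizes the squared error while the median minimizes the absolute error for a skewed sample:
```
import numpy as np

sample = np.array([1.0, 1.0, 2.0, 3.0, 10.0])  # asymmetric, Pareto-like sample
grid = np.linspace(0.0, 10.0, 1001)
mse = [np.mean((sample - c) ** 2) for c in grid]
mae = [np.mean(np.abs(sample - c)) for c in grid]
print("argmin MSE:", grid[np.argmin(mse)], "-> mean =", sample.mean())
print("argmin MAE:", grid[np.argmin(mae)], "-> median =", np.median(sample))
```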
Let’s first compute the training errors of such models in terms of mean squared error and mean absolute error. We will use the asymmetric Pareto distributed target to make it more interesting as mean and median are not equal.
```
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
linear_regression = LinearRegression()
quantile_regression = QuantileRegressor(quantile=0.5, alpha=0)
y_pred_lr = linear_regression.fit(X, y_pareto).predict(X)
y_pred_qr = quantile_regression.fit(X, y_pareto).predict(X)
print(
f"""Training error (in-sample performance)
{linear_regression.__class__.__name__}:
MAE = {mean_absolute_error(y_pareto, y_pred_lr):.3f}
MSE = {mean_squared_error(y_pareto, y_pred_lr):.3f}
{quantile_regression.__class__.__name__}:
MAE = {mean_absolute_error(y_pareto, y_pred_qr):.3f}
MSE = {mean_squared_error(y_pareto, y_pred_qr):.3f}
"""
)
```
```
Training error (in-sample performance)
LinearRegression:
MAE = 1.805
MSE = 6.486
QuantileRegressor:
MAE = 1.670
MSE = 7.025
```
On the training set, we see that MAE is lower for [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor") than for [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression"). In contrast, MSE is lower for [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") than for [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor"). These results confirm that MAE is the loss minimized by [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor") while MSE is the loss minimized by [`LinearRegression`](../../modules/generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression").
We can make a similar evaluation by looking at the test error obtained by cross-validation.
```
from sklearn.model_selection import cross_validate
cv_results_lr = cross_validate(
linear_regression,
X,
y_pareto,
cv=3,
scoring=["neg_mean_absolute_error", "neg_mean_squared_error"],
)
cv_results_qr = cross_validate(
quantile_regression,
X,
y_pareto,
cv=3,
scoring=["neg_mean_absolute_error", "neg_mean_squared_error"],
)
print(
f"""Test error (cross-validated performance)
{linear_regression.__class__.__name__}:
MAE = {-cv_results_lr["test_neg_mean_absolute_error"].mean():.3f}
MSE = {-cv_results_lr["test_neg_mean_squared_error"].mean():.3f}
{quantile_regression.__class__.__name__}:
MAE = {-cv_results_qr["test_neg_mean_absolute_error"].mean():.3f}
MSE = {-cv_results_qr["test_neg_mean_squared_error"].mean():.3f}
"""
)
```
```
Test error (cross-validated performance)
LinearRegression:
MAE = 1.732
MSE = 6.690
QuantileRegressor:
MAE = 1.679
MSE = 7.129
```
We reach similar conclusions on the out-of-sample evaluation.
**Total running time of the script:** ( 0 minutes 0.908 seconds)
[`Download Python source code: plot_quantile_regression.py`](https://scikit-learn.org/1.1/_downloads/8a712694e4d011e8f35bfcb1b1b5fc82/plot_quantile_regression.py)
[`Download Jupyter notebook: plot_quantile_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/10754505339af88c10b8a48127535a4a/plot_quantile_regression.ipynb)
scikit_learn Theil-Sen Regression Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-theilsen-py) to download the full example code or to run this example in your browser via Binder
Theil-Sen Regression
====================
Computes a Theil-Sen Regression on a synthetic dataset.
See [Theil-Sen estimator: generalized-median-based estimator](../../modules/linear_model#theil-sen-regression) for more information on the regressor.
Compared to the OLS (ordinary least squares) estimator, the Theil-Sen estimator is robust against outliers. It has a breakdown point of about 29.3% in case of a simple linear regression which means that it can tolerate arbitrary corrupted data (outliers) of up to 29.3% in the two-dimensional case.
The estimation of the model is done by calculating the slopes and intercepts of a subpopulation of all possible combinations of p subsample points. If an intercept is fitted, p must be greater than or equal to n\_features + 1. The final slope and intercept are then defined as the spatial median of these slopes and intercepts.
In certain cases Theil-Sen performs better than [RANSAC](../../modules/linear_model#ransac-regression) which is also a robust method. This is illustrated in the second example below where outliers with respect to the x-axis perturb RANSAC. Tuning the `residual_threshold` parameter of RANSAC remedies this but in general a priori knowledge about the data and the nature of the outliers is needed. Due to the computational complexity of Theil-Sen it is recommended to use it only for small problems in terms of number of samples and features. For larger problems the `max_subpopulation` parameter restricts the magnitude of all possible combinations of p subsample points to a randomly chosen subset and therefore also limits the runtime. Therefore, Theil-Sen is applicable to larger problems with the drawback of losing some of its mathematical properties since it then works on a random subset.
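For intuition, here is a simplified sketch of the Theil-Sen idea in one dimension (the classic pairwise-slope variant, added here as an illustration; it is not the scikit-learn implementation, which uses spatial medians over random subpopulations):
```
import numpy as np
from itertools import combinations

rng = np.random.RandomState(0)
x = rng.randn(30)
y = 3 * x + 2 + 0.1 * rng.randn(30)
y[-3:] += 10  # corrupt a few observations

# median of all pairwise slopes, then a robust intercept estimate
slopes = [(y[j] - y[i]) / (x[j] - x[i]) for i, j in combinations(range(len(x)), 2)]
slope = np.median(slopes)
intercept = np.median(y - slope * x)
print(f"robust fit: y ~ {slope:.2f} * x + {intercept:.2f}")
```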
```
# Author: Florian Wilhelm -- <[email protected]>
# License: BSD 3 clause
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, TheilSenRegressor
from sklearn.linear_model import RANSACRegressor
estimators = [
("OLS", LinearRegression()),
("Theil-Sen", TheilSenRegressor(random_state=42)),
("RANSAC", RANSACRegressor(random_state=42)),
]
colors = {"OLS": "turquoise", "Theil-Sen": "gold", "RANSAC": "lightgreen"}
lw = 2
```
Outliers only in the y direction
--------------------------------
```
np.random.seed(0)
n_samples = 200
# Linear model y = 3*x + N(2, 0.1**2)
x = np.random.randn(n_samples)
w = 3.0
c = 2.0
noise = 0.1 * np.random.randn(n_samples)
y = w * x + c + noise
# 10% outliers
y[-20:] += -20 * x[-20:]
X = x[:, np.newaxis]
plt.scatter(x, y, color="indigo", marker="x", s=40)
line_x = np.array([-3, 3])
for name, estimator in estimators:
t0 = time.time()
estimator.fit(X, y)
elapsed_time = time.time() - t0
y_pred = estimator.predict(line_x.reshape(2, 1))
plt.plot(
line_x,
y_pred,
color=colors[name],
linewidth=lw,
label="%s (fit time: %.2fs)" % (name, elapsed_time),
)
plt.axis("tight")
plt.legend(loc="upper left")
_ = plt.title("Corrupt y")
```
Outliers in the X direction
---------------------------
```
np.random.seed(0)
# Linear model y = 3*x + N(2, 0.1**2)
x = np.random.randn(n_samples)
noise = 0.1 * np.random.randn(n_samples)
y = 3 * x + 2 + noise
# 10% outliers
x[-20:] = 9.9
y[-20:] += 22
X = x[:, np.newaxis]
plt.figure()
plt.scatter(x, y, color="indigo", marker="x", s=40)
line_x = np.array([-3, 10])
for name, estimator in estimators:
t0 = time.time()
estimator.fit(X, y)
elapsed_time = time.time() - t0
y_pred = estimator.predict(line_x.reshape(2, 1))
plt.plot(
line_x,
y_pred,
color=colors[name],
linewidth=lw,
label="%s (fit time: %.2fs)" % (name, elapsed_time),
)
plt.axis("tight")
plt.legend(loc="upper left")
plt.title("Corrupt x")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.504 seconds)
[`Download Python source code: plot_theilsen.py`](https://scikit-learn.org/1.1/_downloads/b0bb69a4a4a86dd9c76717ff515593af/plot_theilsen.py)
[`Download Jupyter notebook: plot_theilsen.ipynb`](https://scikit-learn.org/1.1/_downloads/16260993c16a6d249d6df4cb11cf8174/plot_theilsen.ipynb)
scikit_learn SGD: Penalties Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sgd-penalties-py) to download the full example code or to run this example in your browser via Binder
SGD: Penalties
==============
Contours of where the penalty is equal to 1 for the three penalties L1, L2 and elastic-net.
All of the above are supported by [`SGDClassifier`](../../modules/generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") and [`SGDRegressor`](../../modules/generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor").
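As a quick numeric illustration (added here, not part of the original example), the three penalty values at the point (0.5, 0.5), with the same `rho = 0.5` mixing used in the code below:
```
x0, y0 = 0.5, 0.5
l2 = x0**2 + y0**2        # 0.5 -> strictly inside the unit L2 contour
l1 = abs(x0) + abs(y0)    # 1.0 -> exactly on the unit L1 contour
rho = 0.5
elastic_net = rho * l1 + (1 - rho) * l2  # 0.75 -> inside the unit elastic-net contour
print(l2, l1, elastic_net)
```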
```
import numpy as np
import matplotlib.pyplot as plt
l1_color = "navy"
l2_color = "c"
elastic_net_color = "darkorange"
line = np.linspace(-1.5, 1.5, 1001)
xx, yy = np.meshgrid(line, line)
l2 = xx**2 + yy**2
l1 = np.abs(xx) + np.abs(yy)
rho = 0.5
elastic_net = rho * l1 + (1 - rho) * l2
plt.figure(figsize=(10, 10), dpi=100)
ax = plt.gca()
elastic_net_contour = plt.contour(
xx, yy, elastic_net, levels=[1], colors=elastic_net_color
)
l2_contour = plt.contour(xx, yy, l2, levels=[1], colors=l2_color)
l1_contour = plt.contour(xx, yy, l1, levels=[1], colors=l1_color)
ax.set_aspect("equal")
ax.spines["left"].set_position("center")
ax.spines["right"].set_color("none")
ax.spines["bottom"].set_position("center")
ax.spines["top"].set_color("none")
plt.clabel(
elastic_net_contour,
inline=1,
fontsize=18,
fmt={1.0: "elastic-net"},
manual=[(-1, -1)],
)
plt.clabel(l2_contour, inline=1, fontsize=18, fmt={1.0: "L2"}, manual=[(-1, -1)])
plt.clabel(l1_contour, inline=1, fontsize=18, fmt={1.0: "L1"}, manual=[(-1, -1)])
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.225 seconds)
[`Download Python source code: plot_sgd_penalties.py`](https://scikit-learn.org/1.1/_downloads/8f25e8f3f3c619cc58b57d51b2029f29/plot_sgd_penalties.py)
[`Download Jupyter notebook: plot_sgd_penalties.ipynb`](https://scikit-learn.org/1.1/_downloads/061854726c268bcdae5cd1c330cf8c75/plot_sgd_penalties.ipynb)
scikit_learn SGD: Weighted samples Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sgd-weighted-samples-py) to download the full example code or to run this example in your browser via Binder
SGD: Weighted samples
=====================
Plot the decision function of a weighted dataset, where the size of each point is proportional to its weight.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
# we create 20 points
np.random.seed(0)
X = np.r_[np.random.randn(10, 2) + [1, 1], np.random.randn(10, 2)]
y = [1] * 10 + [-1] * 10
sample_weight = 100 * np.abs(np.random.randn(20))
# and assign a bigger weight to the first 10 samples
sample_weight[:10] *= 10
# plot the weighted data points
xx, yy = np.meshgrid(np.linspace(-4, 5, 500), np.linspace(-4, 5, 500))
fig, ax = plt.subplots()
ax.scatter(
X[:, 0],
X[:, 1],
c=y,
s=sample_weight,
alpha=0.9,
cmap=plt.cm.bone,
edgecolor="black",
)
# fit the unweighted model
clf = linear_model.SGDClassifier(alpha=0.01, max_iter=100)
clf.fit(X, y)
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
no_weights = ax.contour(xx, yy, Z, levels=[0], linestyles=["solid"])
# fit the weighted model
clf = linear_model.SGDClassifier(alpha=0.01, max_iter=100)
clf.fit(X, y, sample_weight=sample_weight)
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
samples_weights = ax.contour(xx, yy, Z, levels=[0], linestyles=["dashed"])
no_weights_handles, _ = no_weights.legend_elements()
weights_handles, _ = samples_weights.legend_elements()
ax.legend(
[no_weights_handles[0], weights_handles[0]],
["no weights", "with weights"],
loc="lower left",
)
ax.set(xticks=(), yticks=())
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.069 seconds)
[`Download Python source code: plot_sgd_weighted_samples.py`](https://scikit-learn.org/1.1/_downloads/1fbe96ffa301da6ec5d1c4be786df135/plot_sgd_weighted_samples.py)
[`Download Jupyter notebook: plot_sgd_weighted_samples.ipynb`](https://scikit-learn.org/1.1/_downloads/6142de78d839661549389d7029cf98b0/plot_sgd_weighted_samples.ipynb)
scikit_learn Comparing Linear Bayesian Regressors Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-ard-py) to download the full example code or to run this example in your browser via Binder
Comparing Linear Bayesian Regressors
====================================
This example compares two different Bayesian regressors:
* an [Automatic Relevance Determination - ARD](../../modules/linear_model#automatic-relevance-determination)
* a [Bayesian Ridge Regression](../../modules/linear_model#bayesian-ridge-regression)
In the first part, we use an [Ordinary Least Squares](../../modules/linear_model#ordinary-least-squares) (OLS) model as a baseline for comparing the models’ coefficients with respect to the true coefficients. Thereafter, we show that the estimation of such models is done by iteratively maximizing the marginal log-likelihood of the observations.
In the last section we plot predictions and uncertainties for the ARD and the Bayesian Ridge regressions using a polynomial feature expansion to fit a non-linear relationship between `X` and `y`.
```
# Author: Arturo Amor <[email protected]>
```
Models' robustness to recover the ground truth weights
-----------------------------------------------------
### Generate synthetic dataset
We generate a dataset where `X` and `y` are linearly linked: 10 of the features of `X` will be used to generate `y`. The other features are not useful at predicting `y`. In addition, we generate a dataset where `n_samples == n_features`. Such a setting is challenging for an OLS model and potentially leads to arbitrarily large weights. Having a prior on the weights and a penalty alleviates the problem. Finally, Gaussian noise is added.
```
from sklearn.datasets import make_regression
X, y, true_weights = make_regression(
n_samples=100,
n_features=100,
n_informative=10,
noise=8,
coef=True,
random_state=42,
)
```
### Fit the regressors
We now fit both Bayesian models and the OLS to later compare the models’ coefficients.
```
import pandas as pd
from sklearn.linear_model import ARDRegression, LinearRegression, BayesianRidge
olr = LinearRegression().fit(X, y)
brr = BayesianRidge(compute_score=True, n_iter=30).fit(X, y)
ard = ARDRegression(compute_score=True, n_iter=30).fit(X, y)
df = pd.DataFrame(
{
"Weights of true generative process": true_weights,
"ARDRegression": ard.coef_,
"BayesianRidge": brr.coef_,
"LinearRegression": olr.coef_,
}
)
```
### Plot the true and estimated coefficients
Now we compare the coefficients of each model with the weights of the true generative model.
```
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.colors import SymLogNorm
plt.figure(figsize=(10, 6))
ax = sns.heatmap(
df.T,
norm=SymLogNorm(linthresh=10e-4, vmin=-80, vmax=80),
cbar_kws={"label": "coefficients' values"},
cmap="seismic_r",
)
plt.ylabel("linear model")
plt.xlabel("coefficients")
plt.tight_layout(rect=(0, 0, 1, 0.95))
_ = plt.title("Models' coefficients")
```
Due to the added noise, none of the models recover the true weights. Indeed, all models always have more than 10 non-zero coefficients. Compared to the OLS estimator, the coefficients using a Bayesian Ridge regression are slightly shifted toward zero, which stabilises them. The ARD regression provides a sparser solution: some of the non-informative coefficients are set exactly to zero, while shifting others closer to zero. Some non-informative coefficients are still present and retain large values.
### Plot the marginal log-likelihood
```
import numpy as np
ard_scores = -np.array(ard.scores_)
brr_scores = -np.array(brr.scores_)
plt.plot(ard_scores, color="navy", label="ARD")
plt.plot(brr_scores, color="red", label="BayesianRidge")
plt.ylabel("Log-likelihood")
plt.xlabel("Iterations")
plt.xlim(1, 30)
plt.legend()
_ = plt.title("Models log-likelihood")
```
Indeed, both models minimize the negative log-likelihood up to an arbitrary cutoff defined by the `n_iter` parameter.
Bayesian regressions with polynomial feature expansion
------------------------------------------------------
### Generate synthetic dataset
We create a target that is a non-linear function of the input feature. Noise following a standard normal distribution, scaled by 1.35, is added.
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
rng = np.random.RandomState(0)
n_samples = 110
# sort the data to make plotting easier later
X = np.sort(-10 * rng.rand(n_samples) + 10)
noise = rng.normal(0, 1, n_samples) * 1.35
y = np.sqrt(X) * np.sin(X) + noise
full_data = pd.DataFrame({"input_feature": X, "target": y})
X = X.reshape((-1, 1))
# extrapolation
X_plot = np.linspace(10, 10.4, 10)
y_plot = np.sqrt(X_plot) * np.sin(X_plot)
X_plot = np.concatenate((X, X_plot.reshape((-1, 1))))
y_plot = np.concatenate((y - noise, y_plot))
```
### Fit the regressors
Here we try a degree 10 polynomial to potentially overfit, though the Bayesian linear models regularize the size of the polynomial coefficients. As `fit_intercept=True` by default for [`ARDRegression`](../../modules/generated/sklearn.linear_model.ardregression#sklearn.linear_model.ARDRegression "sklearn.linear_model.ARDRegression") and [`BayesianRidge`](../../modules/generated/sklearn.linear_model.bayesianridge#sklearn.linear_model.BayesianRidge "sklearn.linear_model.BayesianRidge"), [`PolynomialFeatures`](../../modules/generated/sklearn.preprocessing.polynomialfeatures#sklearn.preprocessing.PolynomialFeatures "sklearn.preprocessing.PolynomialFeatures") should not introduce an additional bias feature. By setting `return_std=True`, the Bayesian regressors return the standard deviation of the posterior distribution for the model parameters.
```
ard_poly = make_pipeline(
PolynomialFeatures(degree=10, include_bias=False),
StandardScaler(),
ARDRegression(),
).fit(X, y)
brr_poly = make_pipeline(
PolynomialFeatures(degree=10, include_bias=False),
StandardScaler(),
BayesianRidge(),
).fit(X, y)
y_ard, y_ard_std = ard_poly.predict(X_plot, return_std=True)
y_brr, y_brr_std = brr_poly.predict(X_plot, return_std=True)
```
### Plotting polynomial regressions with std errors of the scores
```
ax = sns.scatterplot(
data=full_data, x="input_feature", y="target", color="black", alpha=0.75
)
ax.plot(X_plot, y_plot, color="black", label="Ground Truth")
ax.plot(X_plot, y_brr, color="red", label="BayesianRidge with polynomial features")
ax.plot(X_plot, y_ard, color="navy", label="ARD with polynomial features")
ax.fill_between(
X_plot.ravel(),
y_ard - y_ard_std,
y_ard + y_ard_std,
color="navy",
alpha=0.3,
)
ax.fill_between(
X_plot.ravel(),
y_brr - y_brr_std,
y_brr + y_brr_std,
color="red",
alpha=0.3,
)
ax.legend()
_ = ax.set_title("Polynomial fit of a non-linear feature")
```
The error bars represent one standard deviation of the predicted gaussian distribution of the query points. Notice that the ARD regression captures the ground truth the best when using the default parameters in both models, but further reducing the `lambda_init` hyperparameter of the Bayesian Ridge can reduce its bias (see example [Curve Fitting with Bayesian Ridge Regression](plot_bayesian_ridge_curvefit#sphx-glr-auto-examples-linear-model-plot-bayesian-ridge-curvefit-py)). Finally, due to the intrinsic limitations of a polynomial regression, both models fail when extrapolating.
**Total running time of the script:** ( 0 minutes 0.548 seconds)
[`Download Python source code: plot_ard.py`](https://scikit-learn.org/1.1/_downloads/892d774326b523935a603b8700193195/plot_ard.py)
[`Download Jupyter notebook: plot_ard.ipynb`](https://scikit-learn.org/1.1/_downloads/c5d41d4d7d1dab3e49804c2e2c4222e8/plot_ard.ipynb)
scikit_learn Orthogonal Matching Pursuit Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-omp-py) to download the full example code or to run this example in your browser via Binder
Orthogonal Matching Pursuit
===========================
Using orthogonal matching pursuit for recovering a sparse signal from a noisy measurement encoded with a dictionary.
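For intuition, a hedged sketch of the greedy idea behind orthogonal matching pursuit (a simplified illustration added here, not the scikit-learn implementation): repeatedly pick the dictionary atom most correlated with the current residual, refit a least-squares model on the selected atoms, and update the residual.
```
import numpy as np

def omp_sketch(X, y, n_nonzero):
    """Toy OMP: greedy atom selection followed by a least-squares refit."""
    residual, selected = y.copy(), []
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        selected.append(int(np.argmax(np.abs(X.T @ residual))))
        # refit on all selected atoms and update the residual
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ coef
    return selected, coef
```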
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.linear_model import OrthogonalMatchingPursuitCV
from sklearn.datasets import make_sparse_coded_signal
n_components, n_features = 512, 100
n_nonzero_coefs = 17
# generate the data
# y = Xw
# |x|_0 = n_nonzero_coefs
y, X, w = make_sparse_coded_signal(
n_samples=1,
n_components=n_components,
n_features=n_features,
n_nonzero_coefs=n_nonzero_coefs,
random_state=0,
data_transposed=True,
)
(idx,) = w.nonzero()
# distort the clean signal
y_noisy = y + 0.05 * np.random.randn(len(y))
# plot the sparse signal
plt.figure(figsize=(7, 7))
plt.subplot(4, 1, 1)
plt.xlim(0, 512)
plt.title("Sparse signal")
plt.stem(idx, w[idx], use_line_collection=True)
# plot the noise-free reconstruction
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs, normalize=False)
omp.fit(X, y)
coef = omp.coef_
(idx_r,) = coef.nonzero()
plt.subplot(4, 1, 2)
plt.xlim(0, 512)
plt.title("Recovered signal from noise-free measurements")
plt.stem(idx_r, coef[idx_r], use_line_collection=True)
# plot the noisy reconstruction
omp.fit(X, y_noisy)
coef = omp.coef_
(idx_r,) = coef.nonzero()
plt.subplot(4, 1, 3)
plt.xlim(0, 512)
plt.title("Recovered signal from noisy measurements")
plt.stem(idx_r, coef[idx_r], use_line_collection=True)
# plot the noisy reconstruction with number of non-zeros set by CV
omp_cv = OrthogonalMatchingPursuitCV(normalize=False)
omp_cv.fit(X, y_noisy)
coef = omp_cv.coef_
(idx_r,) = coef.nonzero()
plt.subplot(4, 1, 4)
plt.xlim(0, 512)
plt.title("Recovered signal from noisy measurements with CV")
plt.stem(idx_r, coef[idx_r], use_line_collection=True)
plt.subplots_adjust(0.06, 0.04, 0.94, 0.90, 0.20, 0.38)
plt.suptitle("Sparse signal recovery with Orthogonal Matching Pursuit", fontsize=16)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.171 seconds)
[`Download Python source code: plot_omp.py`](https://scikit-learn.org/1.1/_downloads/9b5ca5a413df494778642d75caeb33d7/plot_omp.py)
[`Download Jupyter notebook: plot_omp.ipynb`](https://scikit-learn.org/1.1/_downloads/a79c521bd835e8739e06d7dafbfc4eb4/plot_omp.ipynb)
scikit_learn Lasso and Elastic Net for Sparse Signals Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-lasso-and-elasticnet-py) to download the full example code or to run this example in your browser via Binder
Lasso and Elastic Net for Sparse Signals
========================================
Estimates Lasso and Elastic-Net regression models on a manually generated sparse signal corrupted with an additive noise. Estimated coefficients are compared with the ground-truth.
Data Generation
---------------
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
np.random.seed(42)
n_samples, n_features = 50, 100
X = np.random.randn(n_samples, n_features)
# Decreasing coef w. alternated signs for visualization
idx = np.arange(n_features)
coef = (-1) ** idx * np.exp(-idx / 10)
coef[10:] = 0 # sparsify coef
y = np.dot(X, coef)
# Add noise
y += 0.01 * np.random.normal(size=n_samples)
# Split data in train set and test set
n_samples = X.shape[0]
X_train, y_train = X[: n_samples // 2], y[: n_samples // 2]
X_test, y_test = X[n_samples // 2 :], y[n_samples // 2 :]
```
Lasso
-----
```
from sklearn.linear_model import Lasso
alpha = 0.1
lasso = Lasso(alpha=alpha)
y_pred_lasso = lasso.fit(X_train, y_train).predict(X_test)
r2_score_lasso = r2_score(y_test, y_pred_lasso)
print(lasso)
print("r^2 on test data : %f" % r2_score_lasso)
```
```
Lasso(alpha=0.1)
r^2 on test data : 0.658064
```
ElasticNet
----------
```
from sklearn.linear_model import ElasticNet
enet = ElasticNet(alpha=alpha, l1_ratio=0.7)
y_pred_enet = enet.fit(X_train, y_train).predict(X_test)
r2_score_enet = r2_score(y_test, y_pred_enet)
print(enet)
print("r^2 on test data : %f" % r2_score_enet)
```
```
ElasticNet(alpha=0.1, l1_ratio=0.7)
r^2 on test data : 0.642515
```
Plot
----
```
m, s, _ = plt.stem(
np.where(enet.coef_)[0],
enet.coef_[enet.coef_ != 0],
markerfmt="x",
label="Elastic net coefficients",
use_line_collection=True,
)
plt.setp([m, s], color="#2ca02c")
m, s, _ = plt.stem(
np.where(lasso.coef_)[0],
lasso.coef_[lasso.coef_ != 0],
markerfmt="x",
label="Lasso coefficients",
use_line_collection=True,
)
plt.setp([m, s], color="#ff7f0e")
plt.stem(
np.where(coef)[0],
coef[coef != 0],
label="true coefficients",
markerfmt="bx",
use_line_collection=True,
)
plt.legend(loc="best")
plt.title(
"Lasso $R^2$: %.3f, Elastic Net $R^2$: %.3f" % (r2_score_lasso, r2_score_enet)
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.085 seconds)
[`Download Python source code: plot_lasso_and_elasticnet.py`](https://scikit-learn.org/1.1/_downloads/3d58721191491072eecc520f0a45cdb3/plot_lasso_and_elasticnet.py)
[`Download Jupyter notebook: plot_lasso_and_elasticnet.ipynb`](https://scikit-learn.org/1.1/_downloads/6cd2f23417e24a8a2a445c34f1b57930/plot_lasso_and_elasticnet.ipynb)
scikit_learn HuberRegressor vs Ridge on dataset with strong outliers Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-huber-vs-ridge-py) to download the full example code or to run this example in your browser via Binder
HuberRegressor vs Ridge on dataset with strong outliers
=======================================================
Fit Ridge and HuberRegressor on a dataset with outliers.
The example shows that the predictions in ridge are strongly influenced by the outliers present in the dataset. The Huber regressor is less influenced by the outliers since the model uses the linear loss for these. As the parameter epsilon is increased for the Huber regressor, the decision function approaches that of the ridge.
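A hedged sketch of the loss behind this behaviour (a simplified illustration added here, ignoring the scale parameter that HuberRegressor estimates jointly): the loss is quadratic for small residuals and linear for large ones, so outliers contribute far less than under a squared loss.
```
import numpy as np

def huber_loss(residual, epsilon=1.35):
    """Quadratic for |residual| <= epsilon, linear beyond."""
    r = np.abs(residual)
    return np.where(r <= epsilon, 0.5 * r**2, epsilon * r - 0.5 * epsilon**2)

residuals = np.array([0.5, 2.0, 10.0])
print("Huber  :", huber_loss(residuals))
print("Squared:", 0.5 * residuals**2)
```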
```
# Authors: Manoj Kumar [email protected]
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import HuberRegressor, Ridge
# Generate toy data.
rng = np.random.RandomState(0)
X, y = make_regression(
n_samples=20, n_features=1, random_state=0, noise=4.0, bias=100.0
)
# Add four strong outliers to the dataset.
X_outliers = rng.normal(0, 0.5, size=(4, 1))
y_outliers = rng.normal(0, 2.0, size=4)
X_outliers[:2, :] += X.max() + X.mean() / 4.0
X_outliers[2:, :] += X.min() - X.mean() / 4.0
y_outliers[:2] += y.min() - y.mean() / 4.0
y_outliers[2:] += y.max() + y.mean() / 4.0
X = np.vstack((X, X_outliers))
y = np.concatenate((y, y_outliers))
plt.plot(X, y, "b.")
# Fit the huber regressor over a series of epsilon values.
colors = ["r-", "b-", "y-", "m-"]
x = np.linspace(X.min(), X.max(), 7)
epsilon_values = [1, 1.5, 1.75, 1.9]
for k, epsilon in enumerate(epsilon_values):
huber = HuberRegressor(alpha=0.0, epsilon=epsilon)
huber.fit(X, y)
coef_ = huber.coef_ * x + huber.intercept_
plt.plot(x, coef_, colors[k], label="huber loss, %s" % epsilon)
# Fit a ridge regressor to compare it to huber regressor.
ridge = Ridge(alpha=0.0, random_state=0)
ridge.fit(X, y)
coef_ridge = ridge.coef_
coef_ = ridge.coef_ * x + ridge.intercept_
plt.plot(x, coef_, "g-", label="ridge regression")
plt.title("Comparison of HuberRegressor vs Ridge")
plt.xlabel("X")
plt.ylabel("y")
plt.legend(loc=0)
plt.show()
```
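The convergence towards the ridge solution can also be checked numerically. The following is a minimal sketch, not part of the original example; it assumes the code above has already been run, so that `X`, `y`, the fitted `ridge` and the imported estimators are available, and the exact slope values depend on the generated data.
```
# Hedged sketch: as epsilon grows, fewer samples are treated as outliers by
# the Huber loss, so the Huber slope approaches the (non-robust) ridge slope.
for epsilon in (1.35, 3.0, 10.0, 100.0):
    huber = HuberRegressor(alpha=0.0, epsilon=epsilon).fit(X, y)
    print("epsilon=%-6s slope=%.3f" % (epsilon, huber.coef_[0]))
print("ridge         slope=%.3f" % ridge.coef_[0])
```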
**Total running time of the script:** ( 0 minutes 0.088 seconds)
[`Download Python source code: plot_huber_vs_ridge.py`](https://scikit-learn.org/1.1/_downloads/9676f328f9e6c3e55f218a33cea5586f/plot_huber_vs_ridge.py)
[`Download Jupyter notebook: plot_huber_vs_ridge.ipynb`](https://scikit-learn.org/1.1/_downloads/c6a456b2390718e4dc79945608262e0b/plot_huber_vs_ridge.ipynb)
scikit_learn Plot multi-class SGD on the iris dataset Note
Click [here](#sphx-glr-download-auto-examples-linear-model-plot-sgd-iris-py) to download the full example code or to run this example in your browser via Binder
Plot multi-class SGD on the iris dataset
========================================
Plot decision surface of multi-class SGD on iris dataset. The hyperplanes corresponding to the three one-versus-all (OVA) classifiers are represented by the dashed lines.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.linear_model import SGDClassifier
from sklearn.inspection import DecisionBoundaryDisplay
# import some data to play with
iris = datasets.load_iris()
# we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
colors = "bry"
# shuffle
idx = np.arange(X.shape[0])
np.random.seed(13)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
# standardize
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
clf = SGDClassifier(alpha=0.001, max_iter=100).fit(X, y)
ax = plt.gca()
DecisionBoundaryDisplay.from_estimator(
clf,
X,
cmap=plt.cm.Paired,
ax=ax,
response_method="predict",
xlabel=iris.feature_names[0],
ylabel=iris.feature_names[1],
)
plt.axis("tight")
# Plot also the training points
for i, color in zip(clf.classes_, colors):
idx = np.where(y == i)
plt.scatter(
X[idx, 0],
X[idx, 1],
c=color,
label=iris.target_names[i],
cmap=plt.cm.Paired,
edgecolor="black",
s=20,
)
plt.title("Decision surface of multi-class SGD")
plt.axis("tight")
# Plot the three one-against-all classifiers
xmin, xmax = plt.xlim()
ymin, ymax = plt.ylim()
coef = clf.coef_
intercept = clf.intercept_
def plot_hyperplane(c, color):
def line(x0):
return (-(x0 * coef[c, 0]) - intercept[c]) / coef[c, 1]
plt.plot([xmin, xmax], [line(xmin), line(xmax)], ls="--", color=color)
for i, color in zip(clf.classes_, colors):
plot_hyperplane(i, color)
plt.legend()
plt.show()
```
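For a linear model such as `SGDClassifier`, the dashed OVA hyperplanes plotted above are fully described by `coef_` and `intercept_`. The short sketch below is an illustrative addition, assuming the example above has already been run; it verifies that the decision function is exactly the affine map defined by these attributes and that the predicted class is the one with the largest score.
```
# Hedged sketch: recompute the per-class scores by hand and compare them with
# the estimator's own decision_function and predict outputs.
scores = X @ clf.coef_.T + clf.intercept_
print(np.allclose(scores, clf.decision_function(X)))          # True
print(np.array_equal(scores.argmax(axis=1), clf.predict(X)))  # True
```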
**Total running time of the script:** ( 0 minutes 0.103 seconds)
[`Download Python source code: plot_sgd_iris.py`](https://scikit-learn.org/1.1/_downloads/de80e18b1c5bc8b447bbce5b9750c567/plot_sgd_iris.py)
[`Download Jupyter notebook: plot_sgd_iris.ipynb`](https://scikit-learn.org/1.1/_downloads/97ec5b2cda9af9d409cf94ba3ae5e270/plot_sgd_iris.ipynb)
| programming_docs |
scikit_learn Imputing missing values with variants of IterativeImputer Note
Click [here](#sphx-glr-download-auto-examples-impute-plot-iterative-imputer-variants-comparison-py) to download the full example code or to run this example in your browser via Binder
Imputing missing values with variants of IterativeImputer
=========================================================
The [`IterativeImputer`](../../modules/generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") class is very flexible - it can be used with a variety of estimators to do round-robin regression, treating every variable as an output in turn.
In this example we compare some estimators for the purpose of missing feature imputation with [`IterativeImputer`](../../modules/generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer"):
* [`BayesianRidge`](../../modules/generated/sklearn.linear_model.bayesianridge#sklearn.linear_model.BayesianRidge "sklearn.linear_model.BayesianRidge"): regularized linear regression
* `RandomForestRegressor`: Forests of randomized trees regression
* `Nystroem` + [`Ridge`](../../modules/generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge"): a pipeline with the expansion of a degree 2 polynomial kernel and regularized linear regression
* [`KNeighborsRegressor`](../../modules/generated/sklearn.neighbors.kneighborsregressor#sklearn.neighbors.KNeighborsRegressor "sklearn.neighbors.KNeighborsRegressor"): comparable to other KNN imputation approaches
Of particular interest is the ability of [`IterativeImputer`](../../modules/generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") to mimic the behavior of missForest, a popular imputation package for R.
Note that [`KNeighborsRegressor`](../../modules/generated/sklearn.neighbors.kneighborsregressor#sklearn.neighbors.KNeighborsRegressor "sklearn.neighbors.KNeighborsRegressor") is different from KNN imputation, which learns from samples with missing values by using a distance metric that accounts for missing values, rather than imputing them.
The goal is to compare different estimators to see which one is best for the [`IterativeImputer`](../../modules/generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") when using a [`BayesianRidge`](../../modules/generated/sklearn.linear_model.bayesianridge#sklearn.linear_model.BayesianRidge "sklearn.linear_model.BayesianRidge") estimator on the California housing dataset with a single value randomly removed from each row.
For this particular pattern of missing values we see that [`BayesianRidge`](../../modules/generated/sklearn.linear_model.bayesianridge#sklearn.linear_model.BayesianRidge "sklearn.linear_model.BayesianRidge") and [`RandomForestRegressor`](../../modules/generated/sklearn.ensemble.randomforestregressor#sklearn.ensemble.RandomForestRegressor "sklearn.ensemble.RandomForestRegressor") give the best results.
It should be noted that some estimators such as [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") can natively deal with missing features and are often recommended over building pipelines with complex and costly missing values imputation strategies.
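As a quick illustration of that last point, the minimal sketch below (not part of the original example; the toy arrays `X_nan` and `y_toy` are made up for illustration) fits a `HistGradientBoostingRegressor` directly on data containing NaN values, without any imputation step.
```
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

# Toy data with missing entries; the estimator handles NaN natively.
X_nan = np.array([[1.0, np.nan], [2.0, 0.5], [np.nan, 1.5],
                  [3.0, 2.0], [4.0, 2.5], [5.0, np.nan]])
y_toy = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
hgb = HistGradientBoostingRegressor(min_samples_leaf=1, max_iter=20)
print(hgb.fit(X_nan, y_toy).predict(X_nan))
```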
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# To use this experimental feature, we need to explicitly ask for it:
from sklearn.experimental import enable_iterative_imputer # noqa
from sklearn.datasets import fetch_california_housing
from sklearn.impute import SimpleImputer
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge, Ridge
from sklearn.kernel_approximation import Nystroem
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
N_SPLITS = 5
rng = np.random.RandomState(0)
X_full, y_full = fetch_california_housing(return_X_y=True)
# ~2k samples is enough for the purpose of the example.
# Remove the following two lines for a slower run with different error bars.
X_full = X_full[::10]
y_full = y_full[::10]
n_samples, n_features = X_full.shape
# Estimate the score on the entire dataset, with no missing values
br_estimator = BayesianRidge()
score_full_data = pd.DataFrame(
cross_val_score(
br_estimator, X_full, y_full, scoring="neg_mean_squared_error", cv=N_SPLITS
),
columns=["Full Data"],
)
# Add a single missing value to each row
X_missing = X_full.copy()
y_missing = y_full
missing_samples = np.arange(n_samples)
missing_features = rng.choice(n_features, n_samples, replace=True)
X_missing[missing_samples, missing_features] = np.nan
# Estimate the score after imputation (mean and median strategies)
score_simple_imputer = pd.DataFrame()
for strategy in ("mean", "median"):
estimator = make_pipeline(
SimpleImputer(missing_values=np.nan, strategy=strategy), br_estimator
)
score_simple_imputer[strategy] = cross_val_score(
estimator, X_missing, y_missing, scoring="neg_mean_squared_error", cv=N_SPLITS
)
# Estimate the score after iterative imputation of the missing values
# with different estimators
estimators = [
BayesianRidge(),
RandomForestRegressor(
# We tuned the hyperparameters of the RandomForestRegressor to get a good
# enough predictive performance for a restricted execution time.
n_estimators=4,
max_depth=10,
bootstrap=True,
max_samples=0.5,
n_jobs=2,
random_state=0,
),
make_pipeline(
Nystroem(kernel="polynomial", degree=2, random_state=0), Ridge(alpha=1e3)
),
KNeighborsRegressor(n_neighbors=15),
]
score_iterative_imputer = pd.DataFrame()
# The iterative imputer is sensitive to the tolerance and
# dependent on the estimator used internally.
# We tuned the tolerance to keep this example running with limited computational
# resources while not changing the results too much compared to keeping the
# stricter default value for the tolerance parameter.
tolerances = (1e-3, 1e-1, 1e-1, 1e-2)
for impute_estimator, tol in zip(estimators, tolerances):
estimator = make_pipeline(
IterativeImputer(
random_state=0, estimator=impute_estimator, max_iter=25, tol=tol
),
br_estimator,
)
score_iterative_imputer[impute_estimator.__class__.__name__] = cross_val_score(
estimator, X_missing, y_missing, scoring="neg_mean_squared_error", cv=N_SPLITS
)
scores = pd.concat(
[score_full_data, score_simple_imputer, score_iterative_imputer],
keys=["Original", "SimpleImputer", "IterativeImputer"],
axis=1,
)
# plot california housing results
fig, ax = plt.subplots(figsize=(13, 6))
means = -scores.mean()
errors = scores.std()
means.plot.barh(xerr=errors, ax=ax)
ax.set_title("California Housing Regression with Different Imputation Methods")
ax.set_xlabel("MSE (smaller is better)")
ax.set_yticks(np.arange(means.shape[0]))
ax.set_yticklabels([" w/ ".join(label) for label in means.index.tolist()])
plt.tight_layout(pad=1)
plt.show()
```
**Total running time of the script:** ( 0 minutes 2.976 seconds)
[`Download Python source code: plot_iterative_imputer_variants_comparison.py`](https://scikit-learn.org/1.1/_downloads/54823a4305997fc1281f34ce676fb43e/plot_iterative_imputer_variants_comparison.py)
[`Download Jupyter notebook: plot_iterative_imputer_variants_comparison.ipynb`](https://scikit-learn.org/1.1/_downloads/067cd5d39b097d2c49dd98f563dac13a/plot_iterative_imputer_variants_comparison.ipynb)
scikit_learn Imputing missing values before building an estimator Note
Click [here](#sphx-glr-download-auto-examples-impute-plot-missing-values-py) to download the full example code or to run this example in your browser via Binder
Imputing missing values before building an estimator
====================================================
Missing values can be replaced by the mean, the median or the most frequent value using the basic [`SimpleImputer`](../../modules/generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer").
In this example we will investigate different imputation techniques:
* imputation by the constant value 0
* imputation by the mean value of each feature combined with a missing-ness indicator auxiliary variable
* k nearest neighbor imputation
* iterative imputation
We will use two datasets: the Diabetes dataset, which consists of 10 feature variables collected from diabetes patients with the aim of predicting disease progression, and the California Housing dataset, for which the target is the median house value for California districts.
As neither of these datasets has missing values, we will remove some values to create new versions with artificially missing data. The performance of [`RandomForestRegressor`](../../modules/generated/sklearn.ensemble.randomforestregressor#sklearn.ensemble.RandomForestRegressor "sklearn.ensemble.RandomForestRegressor") on the full original dataset is then compared to its performance on the altered datasets, where the artificially missing values are imputed using different techniques.
```
# Authors: Maria Telenczuk <https://github.com/maikia>
# License: BSD 3 clause
```
Download the data and make missing values sets
----------------------------------------------
First we download the two datasets. The Diabetes dataset is shipped with scikit-learn. It has 442 entries, each with 10 features. The California Housing dataset is much larger, with 20640 entries and 8 features, and needs to be downloaded. We will only use the first 300 entries of each dataset for the sake of speeding up the calculations, but feel free to use the whole data.
```
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.datasets import load_diabetes
rng = np.random.RandomState(42)
X_diabetes, y_diabetes = load_diabetes(return_X_y=True)
X_california, y_california = fetch_california_housing(return_X_y=True)
X_california = X_california[:300]
y_california = y_california[:300]
X_diabetes = X_diabetes[:300]
y_diabetes = y_diabetes[:300]
def add_missing_values(X_full, y_full):
n_samples, n_features = X_full.shape
# Add missing values in 75% of the lines
missing_rate = 0.75
n_missing_samples = int(n_samples * missing_rate)
missing_samples = np.zeros(n_samples, dtype=bool)
missing_samples[:n_missing_samples] = True
rng.shuffle(missing_samples)
missing_features = rng.randint(0, n_features, n_missing_samples)
X_missing = X_full.copy()
X_missing[missing_samples, missing_features] = np.nan
y_missing = y_full.copy()
return X_missing, y_missing
X_miss_california, y_miss_california = add_missing_values(X_california, y_california)
X_miss_diabetes, y_miss_diabetes = add_missing_values(X_diabetes, y_diabetes)
```
Impute the missing data and score
---------------------------------
Now we will write a function which will score the results on the differently imputed data. Let’s look at each imputer separately:
```
rng = np.random.RandomState(0)
from sklearn.ensemble import RandomForestRegressor
# To use the experimental IterativeImputer, we need to explicitly ask for it:
from sklearn.experimental import enable_iterative_imputer # noqa
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
N_SPLITS = 4
regressor = RandomForestRegressor(random_state=0)
```
### Missing information
In addition to imputing the missing values, the imputers have an `add_indicator` parameter that marks the values that were missing, which might carry some information.
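The minimal sketch below is an illustrative addition (`X_demo` is a made-up array, not part of this example); it shows what `add_indicator` does: the imputer appends one binary column per feature that contained missing values during fit.
```
import numpy as np
from sklearn.impute import SimpleImputer

X_demo = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0]])
imputer = SimpleImputer(strategy="mean", add_indicator=True)
print(imputer.fit_transform(X_demo))
# two imputed feature columns followed by two missing-value indicator columns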
```
def get_scores_for_imputer(imputer, X_missing, y_missing):
estimator = make_pipeline(imputer, regressor)
impute_scores = cross_val_score(
estimator, X_missing, y_missing, scoring="neg_mean_squared_error", cv=N_SPLITS
)
return impute_scores
x_labels = []
mses_california = np.zeros(5)
stds_california = np.zeros(5)
mses_diabetes = np.zeros(5)
stds_diabetes = np.zeros(5)
```
### Estimate the score
First, we want to estimate the score on the original data:
```
def get_full_score(X_full, y_full):
full_scores = cross_val_score(
regressor, X_full, y_full, scoring="neg_mean_squared_error", cv=N_SPLITS
)
return full_scores.mean(), full_scores.std()
mses_california[0], stds_california[0] = get_full_score(X_california, y_california)
mses_diabetes[0], stds_diabetes[0] = get_full_score(X_diabetes, y_diabetes)
x_labels.append("Full data")
```
### Replace missing values by 0
Now we will estimate the score on the data where the missing values are replaced by 0:
```
def get_impute_zero_score(X_missing, y_missing):
imputer = SimpleImputer(
missing_values=np.nan, add_indicator=True, strategy="constant", fill_value=0
)
zero_impute_scores = get_scores_for_imputer(imputer, X_missing, y_missing)
return zero_impute_scores.mean(), zero_impute_scores.std()
mses_california[1], stds_california[1] = get_impute_zero_score(
X_miss_california, y_miss_california
)
mses_diabetes[1], stds_diabetes[1] = get_impute_zero_score(
X_miss_diabetes, y_miss_diabetes
)
x_labels.append("Zero imputation")
```
### kNN-imputation of the missing values
[`KNNImputer`](../../modules/generated/sklearn.impute.knnimputer#sklearn.impute.KNNImputer "sklearn.impute.KNNImputer") imputes missing values using the weighted or unweighted mean of the desired number of nearest neighbors.
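A minimal sketch of this behaviour on a made-up array (`X_demo` is illustrative only and not part of the example):
```
import numpy as np
from sklearn.impute import KNNImputer

X_demo = np.array([[1.0, 2.0], [3.0, 4.0], [np.nan, 6.0], [8.0, 8.0]])
print(KNNImputer(n_neighbors=2).fit_transform(X_demo))
# the missing entry is filled with the mean of its two nearest rows'
# values for that feature, here (3.0 + 8.0) / 2 = 5.5
```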
```
def get_impute_knn_score(X_missing, y_missing):
imputer = KNNImputer(missing_values=np.nan, add_indicator=True)
knn_impute_scores = get_scores_for_imputer(imputer, X_missing, y_missing)
return knn_impute_scores.mean(), knn_impute_scores.std()
mses_california[2], stds_california[2] = get_impute_knn_score(
X_miss_california, y_miss_california
)
mses_diabetes[2], stds_diabetes[2] = get_impute_knn_score(
X_miss_diabetes, y_miss_diabetes
)
x_labels.append("KNN Imputation")
```
### Impute missing values with mean
```
def get_impute_mean(X_missing, y_missing):
imputer = SimpleImputer(missing_values=np.nan, strategy="mean", add_indicator=True)
mean_impute_scores = get_scores_for_imputer(imputer, X_missing, y_missing)
return mean_impute_scores.mean(), mean_impute_scores.std()
mses_california[3], stds_california[3] = get_impute_mean(
X_miss_california, y_miss_california
)
mses_diabetes[3], stds_diabetes[3] = get_impute_mean(X_miss_diabetes, y_miss_diabetes)
x_labels.append("Mean Imputation")
```
### Iterative imputation of the missing values
Another option is the [`IterativeImputer`](../../modules/generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer"). This uses round-robin linear regression, modeling each feature with missing values as a function of other features, in turn. The version implemented assumes Gaussian (output) variables. If your features are obviously non-normal, consider transforming them to look more normal to potentially improve performance.
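A minimal sketch of this round-robin behaviour on a made-up, nearly linear toy array (`X_demo` is illustrative only and not part of the example; the default `BayesianRidge` estimator is used):
```
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa
from sklearn.impute import IterativeImputer

X_demo = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0], [5.0, 10.0]])
print(IterativeImputer(random_state=0).fit_transform(X_demo))
# the missing value is predicted from the (here exactly linear) relation
# between the two columns, so it lands close to 6.0
```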
```
def get_impute_iterative(X_missing, y_missing):
imputer = IterativeImputer(
missing_values=np.nan,
add_indicator=True,
random_state=0,
n_nearest_features=3,
max_iter=1,
sample_posterior=True,
)
iterative_impute_scores = get_scores_for_imputer(imputer, X_missing, y_missing)
return iterative_impute_scores.mean(), iterative_impute_scores.std()
mses_california[4], stds_california[4] = get_impute_iterative(
X_miss_california, y_miss_california
)
mses_diabetes[4], stds_diabetes[4] = get_impute_iterative(
X_miss_diabetes, y_miss_diabetes
)
x_labels.append("Iterative Imputation")
mses_diabetes = mses_diabetes * -1
mses_california = mses_california * -1
```
Plot the results
----------------
Finally we are going to visualize the score:
```
import matplotlib.pyplot as plt
n_bars = len(mses_diabetes)
xval = np.arange(n_bars)
colors = ["r", "g", "b", "orange", "black"]
# plot diabetes results
plt.figure(figsize=(12, 6))
ax1 = plt.subplot(121)
for j in xval:
ax1.barh(
j,
mses_diabetes[j],
xerr=stds_diabetes[j],
color=colors[j],
alpha=0.6,
align="center",
)
ax1.set_title("Imputation Techniques with Diabetes Data")
ax1.set_xlim(left=np.min(mses_diabetes) * 0.9, right=np.max(mses_diabetes) * 1.1)
ax1.set_yticks(xval)
ax1.set_xlabel("MSE")
ax1.invert_yaxis()
ax1.set_yticklabels(x_labels)
# plot california dataset results
ax2 = plt.subplot(122)
for j in xval:
ax2.barh(
j,
mses_california[j],
xerr=stds_california[j],
color=colors[j],
alpha=0.6,
align="center",
)
ax2.set_title("Imputation Techniques with California Data")
ax2.set_yticks(xval)
ax2.set_xlabel("MSE")
ax2.invert_yaxis()
ax2.set_yticklabels([""] * n_bars)
plt.show()
```
You can also try different techniques. For instance, the median is a more robust estimator for data with high magnitude variables which could dominate results (otherwise known as a ‘long tail’).
**Total running time of the script:** ( 0 minutes 6.340 seconds)
[`Download Python source code: plot_missing_values.py`](https://scikit-learn.org/1.1/_downloads/98345ee267d0372eda8faf906905730e/plot_missing_values.py)
[`Download Jupyter notebook: plot_missing_values.ipynb`](https://scikit-learn.org/1.1/_downloads/a440a8b10138c855100ed5820fdb36b6/plot_missing_values.ipynb)
scikit_learn Robust covariance estimation and Mahalanobis distances relevance Note
Click [here](#sphx-glr-download-auto-examples-covariance-plot-mahalanobis-distances-py) to download the full example code or to run this example in your browser via Binder
Robust covariance estimation and Mahalanobis distances relevance
================================================================
This example shows covariance estimation with Mahalanobis distances on Gaussian distributed data.
For Gaussian distributed data, the distance of an observation \(x\_i\) to the mode of the distribution can be computed using its Mahalanobis distance:
\[d\_{(\mu,\Sigma)}(x\_i)^2 = (x\_i - \mu)^T\Sigma^{-1}(x\_i - \mu)\] where \(\mu\) and \(\Sigma\) are the location and the covariance of the underlying Gaussian distributions.
In practice, \(\mu\) and \(\Sigma\) are replaced by some estimates. The standard covariance maximum likelihood estimate (MLE) is very sensitive to the presence of outliers in the data set and therefore, the downstream Mahalanobis distances also are. It would be better to use a robust estimator of covariance to guarantee that the estimation is resistant to “erroneous” observations in the dataset and that the calculated Mahalanobis distances accurately reflect the true organization of the observations.
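The short sketch below is an illustrative addition (`X_demo` is a made-up Gaussian sample); it checks that the formula above, with empirical estimates of \(\mu\) and \(\Sigma\), matches the `mahalanobis` method of `EmpiricalCovariance`.
```
import numpy as np
from sklearn.covariance import EmpiricalCovariance

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 2)
emp = EmpiricalCovariance().fit(X_demo)
diff = X_demo - emp.location_
d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(emp.covariance_), diff)
# mahalanobis() returns squared distances, so the two agree
print(np.allclose(d2, emp.mahalanobis(X_demo)))  # True
```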
The Minimum Covariance Determinant estimator (MCD) is a robust, high-breakdown point (i.e. it can be used to estimate the covariance matrix of highly contaminated datasets, up to \(\frac{n\_\text{samples}-n\_\text{features}-1}{2}\) outliers) estimator of covariance. The idea behind the MCD is to find \(\frac{n\_\text{samples}+n\_\text{features}+1}{2}\) observations whose empirical covariance has the smallest determinant, yielding a “pure” subset of observations from which to compute standard estimates of location and covariance. The MCD was introduced by P. J. Rousseeuw in [[1]](#id2).
This example illustrates how the Mahalanobis distances are affected by outlying data. Observations drawn from a contaminating distribution are not distinguishable from the observations coming from the real, Gaussian distribution when using standard covariance MLE based Mahalanobis distances. Using MCD-based Mahalanobis distances, the two populations become distinguishable. Associated applications include outlier detection, observation ranking and clustering.
Note
See also [Robust vs Empirical covariance estimate](plot_robust_vs_empirical_covariance#sphx-glr-auto-examples-covariance-plot-robust-vs-empirical-covariance-py)
Generate data
-------------
First, we generate a dataset of 125 samples and 2 features. Both features are Gaussian distributed with mean of 0 but feature 1 has a standard deviation equal to 2 and feature 2 has a standard deviation equal to 1. Next, 25 samples are replaced with Gaussian outlier samples where feature 1 has a standard deviation equal to 1 and feature 2 has a standard deviation equal to 7.
```
import numpy as np
# for consistent results
np.random.seed(7)
n_samples = 125
n_outliers = 25
n_features = 2
# generate Gaussian data of shape (125, 2)
gen_cov = np.eye(n_features)
gen_cov[0, 0] = 2.0
X = np.dot(np.random.randn(n_samples, n_features), gen_cov)
# add some outliers
outliers_cov = np.eye(n_features)
outliers_cov[np.arange(1, n_features), np.arange(1, n_features)] = 7.0
X[-n_outliers:] = np.dot(np.random.randn(n_outliers, n_features), outliers_cov)
```
Comparison of results
---------------------
Below, we fit MCD and MLE based covariance estimators to our data and print the estimated covariance matrices. Note that the estimated variance of feature 2 is much higher with the MLE based estimator (7.5) than that of the MCD robust estimator (1.2). This shows that the MCD based robust estimator is much more resistant to the outlier samples, which were designed to have a much larger variance in feature 2.
```
import matplotlib.pyplot as plt
from sklearn.covariance import EmpiricalCovariance, MinCovDet
# fit a MCD robust estimator to data
robust_cov = MinCovDet().fit(X)
# fit a MLE estimator to data
emp_cov = EmpiricalCovariance().fit(X)
print(
"Estimated covariance matrix:\nMCD (Robust):\n{}\nMLE:\n{}".format(
robust_cov.covariance_, emp_cov.covariance_
)
)
```
```
Estimated covariance matrix:
MCD (Robust):
[[ 3.26253567e+00 -3.06695631e-03]
[-3.06695631e-03 1.22747343e+00]]
MLE:
[[ 3.23773583 -0.24640578]
[-0.24640578 7.51963999]]
```
To better visualize the difference, we plot contours of the Mahalanobis distances calculated by both methods. Notice that the robust MCD based Mahalanobis distances fit the inlier black points much better, whereas the MLE based distances are more influenced by the outlier red points.
```
fig, ax = plt.subplots(figsize=(10, 5))
# Plot data set
inlier_plot = ax.scatter(X[:, 0], X[:, 1], color="black", label="inliers")
outlier_plot = ax.scatter(
X[:, 0][-n_outliers:], X[:, 1][-n_outliers:], color="red", label="outliers"
)
ax.set_xlim(ax.get_xlim()[0], 10.0)
ax.set_title("Mahalanobis distances of a contaminated data set")
# Create meshgrid of feature 1 and feature 2 values
xx, yy = np.meshgrid(
np.linspace(plt.xlim()[0], plt.xlim()[1], 100),
np.linspace(plt.ylim()[0], plt.ylim()[1], 100),
)
zz = np.c_[xx.ravel(), yy.ravel()]
# Calculate the MLE based Mahalanobis distances of the meshgrid
mahal_emp_cov = emp_cov.mahalanobis(zz)
mahal_emp_cov = mahal_emp_cov.reshape(xx.shape)
emp_cov_contour = plt.contour(
xx, yy, np.sqrt(mahal_emp_cov), cmap=plt.cm.PuBu_r, linestyles="dashed"
)
# Calculate the MCD based Mahalanobis distances
mahal_robust_cov = robust_cov.mahalanobis(zz)
mahal_robust_cov = mahal_robust_cov.reshape(xx.shape)
robust_contour = ax.contour(
xx, yy, np.sqrt(mahal_robust_cov), cmap=plt.cm.YlOrBr_r, linestyles="dotted"
)
# Add legend
ax.legend(
[
emp_cov_contour.collections[1],
robust_contour.collections[1],
inlier_plot,
outlier_plot,
],
["MLE dist", "MCD dist", "inliers", "outliers"],
loc="upper right",
borderaxespad=0,
)
plt.show()
```
Finally, we highlight the ability of MCD based Mahalanobis distances to distinguish outliers. We take the cubic root of the Mahalanobis distances, yielding approximately normal distributions (as suggested by Wilson and Hilferty [[2]](#id3)), then plot the values of inlier and outlier samples with boxplots. The distribution of outlier samples is more separated from the distribution of inlier samples for robust MCD based Mahalanobis distances.
```
fig, (ax1, ax2) = plt.subplots(1, 2)
plt.subplots_adjust(wspace=0.6)
# Calculate cubic root of MLE Mahalanobis distances for samples
emp_mahal = emp_cov.mahalanobis(X - np.mean(X, 0)) ** (0.33)
# Plot boxplots
ax1.boxplot([emp_mahal[:-n_outliers], emp_mahal[-n_outliers:]], widths=0.25)
# Plot individual samples
ax1.plot(
np.full(n_samples - n_outliers, 1.26),
emp_mahal[:-n_outliers],
"+k",
markeredgewidth=1,
)
ax1.plot(np.full(n_outliers, 2.26), emp_mahal[-n_outliers:], "+k", markeredgewidth=1)
ax1.axes.set_xticklabels(("inliers", "outliers"), size=15)
ax1.set_ylabel(r"$\sqrt[3]{\rm{(Mahal. dist.)}}$", size=16)
ax1.set_title("Using non-robust estimates\n(Maximum Likelihood)")
# Calculate cubic root of MCD Mahalanobis distances for samples
robust_mahal = robust_cov.mahalanobis(X - robust_cov.location_) ** (0.33)
# Plot boxplots
ax2.boxplot([robust_mahal[:-n_outliers], robust_mahal[-n_outliers:]], widths=0.25)
# Plot individual samples
ax2.plot(
np.full(n_samples - n_outliers, 1.26),
robust_mahal[:-n_outliers],
"+k",
markeredgewidth=1,
)
ax2.plot(np.full(n_outliers, 2.26), robust_mahal[-n_outliers:], "+k", markeredgewidth=1)
ax2.axes.set_xticklabels(("inliers", "outliers"), size=15)
ax2.set_ylabel(r"$\sqrt[3]{\rm{(Mahal. dist.)}}$", size=16)
ax2.set_title("Using robust estimates\n(Minimum Covariance Determinant)")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.245 seconds)
[`Download Python source code: plot_mahalanobis_distances.py`](https://scikit-learn.org/1.1/_downloads/7aa7f72ae5f3350402429f7a9851b596/plot_mahalanobis_distances.py)
[`Download Jupyter notebook: plot_mahalanobis_distances.ipynb`](https://scikit-learn.org/1.1/_downloads/83d33d2afcbf708f386433bb1abb0785/plot_mahalanobis_distances.ipynb)
| programming_docs |
scikit_learn Ledoit-Wolf vs OAS estimation Note
Click [here](#sphx-glr-download-auto-examples-covariance-plot-lw-vs-oas-py) to download the full example code or to run this example in your browser via Binder
Ledoit-Wolf vs OAS estimation
=============================
The usual covariance maximum likelihood estimate can be regularized using shrinkage. Ledoit and Wolf proposed a closed-form formula to compute the asymptotically optimal shrinkage parameter (minimizing an MSE criterion), yielding the Ledoit-Wolf covariance estimate.
Chen et al. proposed an improvement of the Ledoit-Wolf shrinkage parameter, the OAS coefficient, whose convergence is significantly better under the assumption that the data are Gaussian.
This example, inspired by Chen’s publication [1], shows a comparison of the estimated MSE of the LW and OAS methods, using Gaussian distributed data.
[1] “Shrinkage Algorithms for MMSE Covariance Estimation”, Chen et al., IEEE Transactions on Signal Processing, 58(10), October 2010.
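Before running the full simulation, the minimal sketch below (an illustrative addition; `X_demo` is a made-up Gaussian sample) shows that both estimators choose their shrinkage coefficient automatically from the data and expose it as `shrinkage_`.
```
import numpy as np
from sklearn.covariance import LedoitWolf, OAS

rng = np.random.RandomState(0)
X_demo = rng.normal(size=(15, 10))  # few samples relative to the dimension
print("Ledoit-Wolf shrinkage:", LedoitWolf().fit(X_demo).shrinkage_)
print("OAS shrinkage        :", OAS().fit(X_demo).shrinkage_)
```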
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz, cholesky
from sklearn.covariance import LedoitWolf, OAS
np.random.seed(0)
```
```
n_features = 100
# simulation covariance matrix (AR(1) process)
r = 0.1
real_cov = toeplitz(r ** np.arange(n_features))
coloring_matrix = cholesky(real_cov)
n_samples_range = np.arange(6, 31, 1)
repeat = 100
lw_mse = np.zeros((n_samples_range.size, repeat))
oa_mse = np.zeros((n_samples_range.size, repeat))
lw_shrinkage = np.zeros((n_samples_range.size, repeat))
oa_shrinkage = np.zeros((n_samples_range.size, repeat))
for i, n_samples in enumerate(n_samples_range):
for j in range(repeat):
X = np.dot(np.random.normal(size=(n_samples, n_features)), coloring_matrix.T)
lw = LedoitWolf(store_precision=False, assume_centered=True)
lw.fit(X)
lw_mse[i, j] = lw.error_norm(real_cov, scaling=False)
lw_shrinkage[i, j] = lw.shrinkage_
oa = OAS(store_precision=False, assume_centered=True)
oa.fit(X)
oa_mse[i, j] = oa.error_norm(real_cov, scaling=False)
oa_shrinkage[i, j] = oa.shrinkage_
# plot MSE
plt.subplot(2, 1, 1)
plt.errorbar(
n_samples_range,
lw_mse.mean(1),
yerr=lw_mse.std(1),
label="Ledoit-Wolf",
color="navy",
lw=2,
)
plt.errorbar(
n_samples_range,
oa_mse.mean(1),
yerr=oa_mse.std(1),
label="OAS",
color="darkorange",
lw=2,
)
plt.ylabel("Squared error")
plt.legend(loc="upper right")
plt.title("Comparison of covariance estimators")
plt.xlim(5, 31)
# plot shrinkage coefficient
plt.subplot(2, 1, 2)
plt.errorbar(
n_samples_range,
lw_shrinkage.mean(1),
yerr=lw_shrinkage.std(1),
label="Ledoit-Wolf",
color="navy",
lw=2,
)
plt.errorbar(
n_samples_range,
oa_shrinkage.mean(1),
yerr=oa_shrinkage.std(1),
label="OAS",
color="darkorange",
lw=2,
)
plt.xlabel("n_samples")
plt.ylabel("Shrinkage")
plt.legend(loc="lower right")
plt.ylim(plt.ylim()[0], 1.0 + (plt.ylim()[1] - plt.ylim()[0]) / 10.0)
plt.xlim(5, 31)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.491 seconds)
[`Download Python source code: plot_lw_vs_oas.py`](https://scikit-learn.org/1.1/_downloads/7b17afb06830fd0b89d4d902646dc86e/plot_lw_vs_oas.py)
[`Download Jupyter notebook: plot_lw_vs_oas.ipynb`](https://scikit-learn.org/1.1/_downloads/6d7313d090e7392c48a23c84b6fdec0c/plot_lw_vs_oas.ipynb)
scikit_learn Sparse inverse covariance estimation Note
Click [here](#sphx-glr-download-auto-examples-covariance-plot-sparse-cov-py) to download the full example code or to run this example in your browser via Binder
Sparse inverse covariance estimation
====================================
Using the GraphicalLasso estimator to learn a covariance and sparse precision from a small number of samples.
To estimate a probabilistic model (e.g. a Gaussian model), estimating the precision matrix, that is the inverse covariance matrix, is as important as estimating the covariance matrix. Indeed a Gaussian model is parametrized by the precision matrix.
To be in favorable recovery conditions, we sample the data from a model with a sparse inverse covariance matrix. In addition, we ensure that the data is not too correlated (limiting the largest coefficient of the precision matrix) and that there are no small coefficients in the precision matrix that cannot be recovered. Moreover, with a small number of observations, it is easier to recover a correlation matrix than a covariance matrix, so we scale the time series.
Here, the number of samples is slightly larger than the number of dimensions, thus the empirical covariance is still invertible. However, as the observations are strongly correlated, the empirical covariance matrix is ill-conditioned and as a result its inverse, the empirical precision matrix, is very far from the ground truth.
If we use l2 shrinkage, as with the Ledoit-Wolf estimator, as the number of samples is small, we need to shrink a lot. As a result, the Ledoit-Wolf precision is fairly close to the ground truth precision, that is not far from being diagonal, but the off-diagonal structure is lost.
The l1-penalized estimator can recover part of this off-diagonal structure. It learns a sparse precision. It is not able to recover the exact sparsity pattern: it detects too many non-zero coefficients. However, the highest non-zero coefficients of the l1 estimate correspond to the non-zero coefficients in the ground truth. Finally, the coefficients of the l1 precision estimate are biased toward zero: because of the penalty, they are all smaller than the corresponding ground truth value, as can be seen on the figure.
Note that the color range of the precision matrices is tweaked to improve readability of the figure. The full range of values of the empirical precision is not displayed.
The alpha parameter of the GraphicalLasso, which sets the sparsity of the model, is chosen by internal cross-validation in the GraphicalLassoCV. As can be seen in figure 2, the grid used to compute the cross-validation score is iteratively refined in the neighborhood of the maximum.
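The role of alpha can also be seen in isolation with a plain `GraphicalLasso` on made-up data (an illustrative sketch, not part of the example; `X_demo` is an uncorrelated Gaussian sample): larger alpha values produce a sparser estimated precision matrix.
```
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.RandomState(0)
X_demo = rng.randn(60, 5)
for alpha in (0.01, 0.2, 0.6):
    prec = GraphicalLasso(alpha=alpha).fit(X_demo).precision_
    n_zeros = int(np.isclose(prec[np.triu_indices(5, k=1)], 0.0).sum())
    print("alpha=%.2f -> %d of 10 off-diagonal entries are zero" % (alpha, n_zeros))
```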
```
# author: Gael Varoquaux <[email protected]>
# License: BSD 3 clause
# Copyright: INRIA
```
Generate the data
-----------------
```
import numpy as np
from scipy import linalg
from sklearn.datasets import make_sparse_spd_matrix
n_samples = 60
n_features = 20
prng = np.random.RandomState(1)
prec = make_sparse_spd_matrix(
n_features, alpha=0.98, smallest_coef=0.4, largest_coef=0.7, random_state=prng
)
cov = linalg.inv(prec)
d = np.sqrt(np.diag(cov))
cov /= d
cov /= d[:, np.newaxis]
prec *= d
prec *= d[:, np.newaxis]
X = prng.multivariate_normal(np.zeros(n_features), cov, size=n_samples)
X -= X.mean(axis=0)
X /= X.std(axis=0)
```
Estimate the covariance
-----------------------
```
from sklearn.covariance import GraphicalLassoCV, ledoit_wolf
emp_cov = np.dot(X.T, X) / n_samples
model = GraphicalLassoCV()
model.fit(X)
cov_ = model.covariance_
prec_ = model.precision_
lw_cov_, _ = ledoit_wolf(X)
lw_prec_ = linalg.inv(lw_cov_)
```
Plot the results
----------------
```
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 6))
plt.subplots_adjust(left=0.02, right=0.98)
# plot the covariances
covs = [
("Empirical", emp_cov),
("Ledoit-Wolf", lw_cov_),
("GraphicalLassoCV", cov_),
("True", cov),
]
vmax = cov_.max()
for i, (name, this_cov) in enumerate(covs):
plt.subplot(2, 4, i + 1)
plt.imshow(
this_cov, interpolation="nearest", vmin=-vmax, vmax=vmax, cmap=plt.cm.RdBu_r
)
plt.xticks(())
plt.yticks(())
plt.title("%s covariance" % name)
# plot the precisions
precs = [
("Empirical", linalg.inv(emp_cov)),
("Ledoit-Wolf", lw_prec_),
("GraphicalLasso", prec_),
("True", prec),
]
vmax = 0.9 * prec_.max()
for i, (name, this_prec) in enumerate(precs):
ax = plt.subplot(2, 4, i + 5)
plt.imshow(
np.ma.masked_equal(this_prec, 0),
interpolation="nearest",
vmin=-vmax,
vmax=vmax,
cmap=plt.cm.RdBu_r,
)
plt.xticks(())
plt.yticks(())
plt.title("%s precision" % name)
if hasattr(ax, "set_facecolor"):
ax.set_facecolor(".7")
else:
ax.set_axis_bgcolor(".7")
```
```
# plot the model selection metric
plt.figure(figsize=(4, 3))
plt.axes([0.2, 0.15, 0.75, 0.7])
plt.plot(model.cv_results_["alphas"], model.cv_results_["mean_test_score"], "o-")
plt.axvline(model.alpha_, color=".5")
plt.title("Model selection")
plt.ylabel("Cross-validation score")
plt.xlabel("alpha")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.478 seconds)
[`Download Python source code: plot_sparse_cov.py`](https://scikit-learn.org/1.1/_downloads/ef716d06d01e43235aa0be61f66bd68d/plot_sparse_cov.py)
[`Download Jupyter notebook: plot_sparse_cov.ipynb`](https://scikit-learn.org/1.1/_downloads/493307eb257cfb3d4e056ee73a41842e/plot_sparse_cov.ipynb)
scikit_learn Robust vs Empirical covariance estimate Note
Click [here](#sphx-glr-download-auto-examples-covariance-plot-robust-vs-empirical-covariance-py) to download the full example code or to run this example in your browser via Binder
Robust vs Empirical covariance estimate
=======================================
The usual covariance maximum likelihood estimate is very sensitive to the presence of outliers in the data set. In such a case, it would be better to use a robust estimator of covariance to guarantee that the estimation is resistant to “erroneous” observations in the data set. [[1]](#id4), [[2]](#id5)
Minimum Covariance Determinant Estimator
----------------------------------------
The Minimum Covariance Determinant estimator is a robust, high-breakdown point (i.e. it can be used to estimate the covariance matrix of highly contaminated datasets, up to \(\frac{n\_\text{samples} - n\_\text{features}-1}{2}\) outliers) estimator of covariance. The idea is to find \(\frac{n\_\text{samples} + n\_\text{features}+1}{2}\) observations whose empirical covariance has the smallest determinant, yielding a “pure” subset of observations from which to compute standard estimates of location and covariance. After a correction step aiming at compensating the fact that the estimates were learned from only a portion of the initial data, we end up with robust estimates of the data set location and covariance.
The Minimum Covariance Determinant estimator (MCD) was introduced by P. J. Rousseeuw in [[3]](#id6).
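A minimal sketch of this robustness (an illustrative addition; `X_demo` is a made-up contaminated sample, not part of the example) compares the raw sample mean with the MCD location and shows which observations the MCD keeps in its support:
```
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.RandomState(42)
X_demo = rng.randn(100, 3)
X_demo[:15] += 8.0  # contaminate 15% of the rows
mcd = MinCovDet(random_state=0).fit(X_demo)
print("raw mean     :", X_demo.mean(axis=0).round(2))
print("MCD location :", mcd.location_.round(2))
print("rows in MCD support:", int(mcd.support_.sum()), "of", len(X_demo))
```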
Evaluation
----------
In this example, we compare the estimation errors that are made when using various types of location and covariance estimates on contaminated Gaussian distributed data sets:
* The mean and the empirical covariance of the full dataset, which break down as soon as there are outliers in the data set
* The robust MCD, which has a low error provided \(n\_\text{samples} > 5n\_\text{features}\)
* The mean and the empirical covariance of the observations that are known to be good ones. This can be considered as a “perfect” MCD estimation, so one can trust our implementation by comparing to this case.
References
----------
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn.covariance import EmpiricalCovariance, MinCovDet
# example settings
n_samples = 80
n_features = 5
repeat = 10
range_n_outliers = np.concatenate(
(
np.linspace(0, n_samples / 8, 5),
np.linspace(n_samples / 8, n_samples / 2, 5)[1:-1],
)
).astype(int)
# definition of arrays to store results
err_loc_mcd = np.zeros((range_n_outliers.size, repeat))
err_cov_mcd = np.zeros((range_n_outliers.size, repeat))
err_loc_emp_full = np.zeros((range_n_outliers.size, repeat))
err_cov_emp_full = np.zeros((range_n_outliers.size, repeat))
err_loc_emp_pure = np.zeros((range_n_outliers.size, repeat))
err_cov_emp_pure = np.zeros((range_n_outliers.size, repeat))
# computation
for i, n_outliers in enumerate(range_n_outliers):
for j in range(repeat):
rng = np.random.RandomState(i * j)
# generate data
X = rng.randn(n_samples, n_features)
# add some outliers
outliers_index = rng.permutation(n_samples)[:n_outliers]
outliers_offset = 10.0 * (
np.random.randint(2, size=(n_outliers, n_features)) - 0.5
)
X[outliers_index] += outliers_offset
inliers_mask = np.ones(n_samples).astype(bool)
inliers_mask[outliers_index] = False
# fit a Minimum Covariance Determinant (MCD) robust estimator to data
mcd = MinCovDet().fit(X)
# compare raw robust estimates with the true location and covariance
err_loc_mcd[i, j] = np.sum(mcd.location_**2)
err_cov_mcd[i, j] = mcd.error_norm(np.eye(n_features))
# compare estimators learned from the full data set with true
# parameters
err_loc_emp_full[i, j] = np.sum(X.mean(0) ** 2)
err_cov_emp_full[i, j] = (
EmpiricalCovariance().fit(X).error_norm(np.eye(n_features))
)
# compare with an empirical covariance learned from a pure data set
# (i.e. "perfect" mcd)
pure_X = X[inliers_mask]
pure_location = pure_X.mean(0)
pure_emp_cov = EmpiricalCovariance().fit(pure_X)
err_loc_emp_pure[i, j] = np.sum(pure_location**2)
err_cov_emp_pure[i, j] = pure_emp_cov.error_norm(np.eye(n_features))
# Display results
font_prop = matplotlib.font_manager.FontProperties(size=11)
plt.subplot(2, 1, 1)
lw = 2
plt.errorbar(
range_n_outliers,
err_loc_mcd.mean(1),
yerr=err_loc_mcd.std(1) / np.sqrt(repeat),
label="Robust location",
lw=lw,
color="m",
)
plt.errorbar(
range_n_outliers,
err_loc_emp_full.mean(1),
yerr=err_loc_emp_full.std(1) / np.sqrt(repeat),
label="Full data set mean",
lw=lw,
color="green",
)
plt.errorbar(
range_n_outliers,
err_loc_emp_pure.mean(1),
yerr=err_loc_emp_pure.std(1) / np.sqrt(repeat),
label="Pure data set mean",
lw=lw,
color="black",
)
plt.title("Influence of outliers on the location estimation")
plt.ylabel(r"Error ($||\mu - \hat{\mu}||_2^2$)")
plt.legend(loc="upper left", prop=font_prop)
plt.subplot(2, 1, 2)
x_size = range_n_outliers.size
plt.errorbar(
range_n_outliers,
err_cov_mcd.mean(1),
yerr=err_cov_mcd.std(1),
label="Robust covariance (mcd)",
color="m",
)
plt.errorbar(
range_n_outliers[: (x_size // 5 + 1)],
err_cov_emp_full.mean(1)[: (x_size // 5 + 1)],
yerr=err_cov_emp_full.std(1)[: (x_size // 5 + 1)],
label="Full data set empirical covariance",
color="green",
)
plt.plot(
range_n_outliers[(x_size // 5) : (x_size // 2 - 1)],
err_cov_emp_full.mean(1)[(x_size // 5) : (x_size // 2 - 1)],
color="green",
ls="--",
)
plt.errorbar(
range_n_outliers,
err_cov_emp_pure.mean(1),
yerr=err_cov_emp_pure.std(1),
label="Pure data set empirical covariance",
color="black",
)
plt.title("Influence of outliers on the covariance estimation")
plt.xlabel("Amount of contamination (%)")
plt.ylabel("RMSE")
plt.legend(loc="upper center", prop=font_prop)
plt.show()
```
**Total running time of the script:** ( 0 minutes 2.194 seconds)
[`Download Python source code: plot_robust_vs_empirical_covariance.py`](https://scikit-learn.org/1.1/_downloads/55189006cedb95a2fc6bf8c216dab8f0/plot_robust_vs_empirical_covariance.py)
[`Download Jupyter notebook: plot_robust_vs_empirical_covariance.ipynb`](https://scikit-learn.org/1.1/_downloads/c0510ef77613771032b63870b7baaa31/plot_robust_vs_empirical_covariance.ipynb)
scikit_learn Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood Note
Click [here](#sphx-glr-download-auto-examples-covariance-plot-covariance-estimation-py) to download the full example code or to run this example in your browser via Binder
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
=====================================================================
When working with covariance estimation, the usual approach is to use a maximum likelihood estimator, such as the [`EmpiricalCovariance`](../../modules/generated/sklearn.covariance.empiricalcovariance#sklearn.covariance.EmpiricalCovariance "sklearn.covariance.EmpiricalCovariance"). It is unbiased, i.e. it converges to the true (population) covariance when given many observations. However, it can also be beneficial to regularize it, in order to reduce its variance; this, in turn, introduces some bias. This example illustrates the simple regularization used in [Shrunk Covariance](../../modules/covariance#shrunk-covariance) estimators. In particular, it focuses on how to set the amount of regularization, i.e. how to choose the bias-variance trade-off.
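For reference, a shrunk covariance is a convex combination of the empirical covariance and a scaled identity matrix. The minimal sketch below is an illustrative addition (`X_demo` is a made-up sample); it reproduces the shrinkage formula documented for `ShrunkCovariance` with a fixed shrinkage coefficient.
```
import numpy as np
from sklearn.covariance import ShrunkCovariance, empirical_covariance

rng = np.random.RandomState(0)
X_demo = rng.randn(50, 4)
emp = empirical_covariance(X_demo)
shrinkage = 0.2
mu = np.trace(emp) / emp.shape[0]
manual = (1.0 - shrinkage) * emp + shrinkage * mu * np.eye(emp.shape[0])
fitted = ShrunkCovariance(shrinkage=shrinkage).fit(X_demo).covariance_
print(np.allclose(manual, fitted))  # True
```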
Generate sample data
--------------------
```
import numpy as np
n_features, n_samples = 40, 20
np.random.seed(42)
base_X_train = np.random.normal(size=(n_samples, n_features))
base_X_test = np.random.normal(size=(n_samples, n_features))
# Color samples
coloring_matrix = np.random.normal(size=(n_features, n_features))
X_train = np.dot(base_X_train, coloring_matrix)
X_test = np.dot(base_X_test, coloring_matrix)
```
Compute the likelihood on test data
-----------------------------------
```
from sklearn.covariance import ShrunkCovariance, empirical_covariance, log_likelihood
from scipy import linalg
# spanning a range of possible shrinkage coefficient values
shrinkages = np.logspace(-2, 0, 30)
negative_logliks = [
-ShrunkCovariance(shrinkage=s).fit(X_train).score(X_test) for s in shrinkages
]
# under the ground-truth model, which we would not have access to in real
# settings
real_cov = np.dot(coloring_matrix.T, coloring_matrix)
emp_cov = empirical_covariance(X_train)
loglik_real = -log_likelihood(emp_cov, linalg.inv(real_cov))
```
Compare different approaches to setting the regularization parameter
--------------------------------------------------------------------
Here we compare 3 approaches:
* Setting the parameter by cross-validating the likelihood on three folds according to a grid of potential shrinkage parameters.
* A closed-form formula proposed by Ledoit and Wolf to compute the asymptotically optimal regularization parameter (minimizing an MSE criterion), yielding the [`LedoitWolf`](../../modules/generated/sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf "sklearn.covariance.LedoitWolf") covariance estimate.
* An improvement of the Ledoit-Wolf shrinkage, the [`OAS`](../../modules/generated/sklearn.covariance.oas#sklearn.covariance.OAS "sklearn.covariance.OAS"), proposed by Chen et al. Its convergence is significantly better under the assumption that the data are Gaussian, in particular for small samples.
```
from sklearn.model_selection import GridSearchCV
from sklearn.covariance import LedoitWolf, OAS
# GridSearch for an optimal shrinkage coefficient
tuned_parameters = [{"shrinkage": shrinkages}]
cv = GridSearchCV(ShrunkCovariance(), tuned_parameters)
cv.fit(X_train)
# Ledoit-Wolf optimal shrinkage coefficient estimate
lw = LedoitWolf()
loglik_lw = lw.fit(X_train).score(X_test)
# OAS coefficient estimate
oa = OAS()
loglik_oa = oa.fit(X_train).score(X_test)
```
Plot results
------------
To quantify estimation error, we plot the likelihood of unseen data for different values of the shrinkage parameter. We also show the choices by cross-validation, or with the LedoitWolf and OAS estimates.
```
import matplotlib.pyplot as plt
fig = plt.figure()
plt.title("Regularized covariance: likelihood and shrinkage coefficient")
plt.xlabel("Regularization parameter: shrinkage coefficient")
plt.ylabel("Error: negative log-likelihood on test data")
# range shrinkage curve
plt.loglog(shrinkages, negative_logliks, label="Negative log-likelihood")
plt.plot(plt.xlim(), 2 * [loglik_real], "--r", label="Real covariance likelihood")
# adjust view
lik_max = np.amax(negative_logliks)
lik_min = np.amin(negative_logliks)
ymin = lik_min - 6.0 * np.log((plt.ylim()[1] - plt.ylim()[0]))
ymax = lik_max + 10.0 * np.log(lik_max - lik_min)
xmin = shrinkages[0]
xmax = shrinkages[-1]
# LW likelihood
plt.vlines(
lw.shrinkage_,
ymin,
-loglik_lw,
color="magenta",
linewidth=3,
label="Ledoit-Wolf estimate",
)
# OAS likelihood
plt.vlines(
oa.shrinkage_, ymin, -loglik_oa, color="purple", linewidth=3, label="OAS estimate"
)
# best CV estimator likelihood
plt.vlines(
cv.best_estimator_.shrinkage,
ymin,
-cv.best_estimator_.score(X_test),
color="cyan",
linewidth=3,
label="Cross-validation best estimate",
)
plt.ylim(ymin, ymax)
plt.xlim(xmin, xmax)
plt.legend()
plt.show()
```
Note
The maximum likelihood estimate corresponds to no shrinkage, and thus performs poorly. The Ledoit-Wolf estimate performs really well, as it is close to the optimal and is not computationally costly. In this example, the OAS estimate is a bit further away. Interestingly, both approaches outperform cross-validation, which is significantly more computationally costly.
**Total running time of the script:** ( 0 minutes 0.348 seconds)
[`Download Python source code: plot_covariance_estimation.py`](https://scikit-learn.org/1.1/_downloads/29c38fef6831de20867ac61e068f2461/plot_covariance_estimation.py)
[`Download Jupyter notebook: plot_covariance_estimation.ipynb`](https://scikit-learn.org/1.1/_downloads/503dcbe9fdb65a8f83bd6e34b3adc769/plot_covariance_estimation.ipynb)
| programming_docs |
scikit_learn Classifier Chain Note
Click [here](#sphx-glr-download-auto-examples-multioutput-plot-classifier-chain-yeast-py) to download the full example code or to run this example in your browser via Binder
Classifier Chain
================
Example of using classifier chain on a multilabel dataset.
For this example we will use the [yeast](https://www.openml.org/d/40597) dataset which contains 2417 datapoints each with 103 features and 14 possible labels. Each data point has at least one label. As a baseline we first train a logistic regression classifier for each of the 14 labels. To evaluate the performance of these classifiers we predict on a held-out test set and calculate the [jaccard score](../../modules/model_evaluation#jaccard-similarity-score) for each sample.
Next we create 10 classifier chains. Each classifier chain contains a logistic regression model for each of the 14 labels. The models in each chain are ordered randomly. In addition to the 103 features in the dataset, each model gets the predictions of the preceding models in the chain as features (note that by default at training time each model gets the true labels as features). These additional features allow each chain to exploit correlations among the classes. The Jaccard similarity score for each chain tends to be greater than that of the set of independent logistic models.
Because the models in each chain are ordered randomly, there is significant variation in performance among the chains. Presumably there is an optimal ordering of the classes in a chain that will yield the best performance. However, we do not know that ordering a priori. Instead, we can construct a voting ensemble of classifier chains by averaging the binary predictions of the chains and applying a threshold of 0.5. The Jaccard similarity score of the ensemble is greater than that of the independent models and tends to exceed the score of each chain in the ensemble (although this is not guaranteed with randomly ordered chains).
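The way a chain passes predictions along can be seen directly from the fitted estimators. The minimal sketch below is an illustrative addition, using a small synthetic multilabel problem rather than the yeast data; it shows that each successive model in the chain sees one extra input column per preceding label.
```
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X_demo, Y_demo = make_multilabel_classification(
    n_samples=200, n_features=5, n_classes=3, random_state=0
)
chain = ClassifierChain(LogisticRegression(), order="random", random_state=0)
chain.fit(X_demo, Y_demo)
print("chain order:", chain.order_)
# number of input columns seen by each estimator in the chain: 5, 6, 7
print([est.coef_.shape[1] for est in chain.estimators_])
```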
```
# Author: Adam Kleczewski
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.multioutput import ClassifierChain
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import jaccard_score
from sklearn.linear_model import LogisticRegression
# Load a multi-label dataset from https://www.openml.org/d/40597
X, Y = fetch_openml("yeast", version=4, return_X_y=True)
Y = Y == "TRUE"
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
# Fit an independent logistic regression model for each class using the
# OneVsRestClassifier wrapper.
base_lr = LogisticRegression()
ovr = OneVsRestClassifier(base_lr)
ovr.fit(X_train, Y_train)
Y_pred_ovr = ovr.predict(X_test)
ovr_jaccard_score = jaccard_score(Y_test, Y_pred_ovr, average="samples")
# Fit an ensemble of logistic regression classifier chains and take the
# average prediction of all the chains.
chains = [ClassifierChain(base_lr, order="random", random_state=i) for i in range(10)]
for chain in chains:
chain.fit(X_train, Y_train)
Y_pred_chains = np.array([chain.predict(X_test) for chain in chains])
chain_jaccard_scores = [
jaccard_score(Y_test, Y_pred_chain >= 0.5, average="samples")
for Y_pred_chain in Y_pred_chains
]
Y_pred_ensemble = Y_pred_chains.mean(axis=0)
ensemble_jaccard_score = jaccard_score(
Y_test, Y_pred_ensemble >= 0.5, average="samples"
)
model_scores = [ovr_jaccard_score] + chain_jaccard_scores
model_scores.append(ensemble_jaccard_score)
model_names = (
"Independent",
"Chain 1",
"Chain 2",
"Chain 3",
"Chain 4",
"Chain 5",
"Chain 6",
"Chain 7",
"Chain 8",
"Chain 9",
"Chain 10",
"Ensemble",
)
x_pos = np.arange(len(model_names))
# Plot the Jaccard similarity scores for the independent model, each of the
# chains, and the ensemble (note that the vertical axis on this plot does
# not begin at 0).
fig, ax = plt.subplots(figsize=(7, 4))
ax.grid(True)
ax.set_title("Classifier Chain Ensemble Performance Comparison")
ax.set_xticks(x_pos)
ax.set_xticklabels(model_names, rotation="vertical")
ax.set_ylabel("Jaccard Similarity Score")
ax.set_ylim([min(model_scores) * 0.9, max(model_scores) * 1.1])
colors = ["r"] + ["b"] * len(chain_jaccard_scores) + ["g"]
ax.bar(x_pos, model_scores, alpha=0.5, color=colors)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 10.037 seconds)
[`Download Python source code: plot_classifier_chain_yeast.py`](https://scikit-learn.org/1.1/_downloads/c6856243c97f58098e60fb14d2bf3750/plot_classifier_chain_yeast.py)
[`Download Jupyter notebook: plot_classifier_chain_yeast.ipynb`](https://scikit-learn.org/1.1/_downloads/05ca8a4e90b4cc2acd69f9e24b4a1f3a/plot_classifier_chain_yeast.ipynb)
scikit_learn Comparing anomaly detection algorithms for outlier detection on toy datasets Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-anomaly-comparison-py) to download the full example code or to run this example in your browser via Binder
Comparing anomaly detection algorithms for outlier detection on toy datasets
============================================================================
This example shows characteristics of different anomaly detection algorithms on 2D datasets. Datasets contain one or two modes (regions of high density) to illustrate the ability of algorithms to cope with multimodal data.
For each dataset, 15% of samples are generated as random uniform noise. This proportion is the value given to the nu parameter of the OneClassSVM and the contamination parameter of the other outlier detection algorithms. Decision boundaries between inliers and outliers are displayed in black except for Local Outlier Factor (LOF) as it has no predict method to be applied on new data when it is used for outlier detection.
The [`OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") is known to be sensitive to outliers and thus does not perform very well for outlier detection. This estimator is best suited for novelty detection when the training set is not contaminated by outliers. That said, outlier detection in high-dimension, or without any assumptions on the distribution of the inlying data is very challenging, and a One-class SVM might give useful results in these situations depending on the value of its hyperparameters.
The [`sklearn.linear_model.SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") is an implementation of the One-Class SVM based on stochastic gradient descent (SGD). Combined with kernel approximation, this estimator can be used to approximate the solution of a kernelized [`sklearn.svm.OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM"). We note that, although not identical, the decision boundaries of the [`sklearn.linear_model.SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") and the ones of [`sklearn.svm.OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") are very similar. The main advantage of using [`sklearn.linear_model.SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") is that it scales linearly with the number of samples.
[`sklearn.covariance.EllipticEnvelope`](../../modules/generated/sklearn.covariance.ellipticenvelope#sklearn.covariance.EllipticEnvelope "sklearn.covariance.EllipticEnvelope") assumes the data is Gaussian and learns an ellipse. It thus degrades when the data is not unimodal. Notice however that this estimator is robust to outliers.
[`IsolationForest`](../../modules/generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest") and [`LocalOutlierFactor`](../../modules/generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") seem to perform reasonably well for multi-modal data sets. The advantage of [`LocalOutlierFactor`](../../modules/generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") over the other estimators is shown for the third data set, where the two modes have different densities. This advantage is explained by the local aspect of LOF, meaning that it only compares the score of abnormality of one sample with the scores of its neighbors.
Finally, for the last data set, it is hard to say that one sample is more abnormal than another sample as they are uniformly distributed in a hypercube. Except for the [`OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") which overfits a little, all estimators present decent solutions for this situation. In such a case, it would be wise to look more closely at the scores of abnormality of the samples as a good estimator should assign similar scores to all the samples.
While these examples give some intuition about the algorithms, this intuition might not apply to very high dimensional data.
Finally, note that the parameters of the models have been handpicked here, but in practice they need to be adjusted. In the absence of labelled data, the problem is completely unsupervised, so model selection can be a challenge.
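As a brief illustration of the point about LOF made above, the following sketch (an addition to this example, with made-up toy data) shows the two ways `LocalOutlierFactor` can be used: with the default `novelty=False` it only exposes `fit_predict` on the training data, while `novelty=True` enables `predict` and `decision_function` on new, unseen samples.
```
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
X_train = rng.normal(size=(100, 2))            # mostly inliers
X_new = np.array([[0.1, 0.2], [4.0, 4.0]])     # one ordinary point, one obvious outlier

# Outlier detection: labels are only available for the training data.
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.1)
labels_train = lof.fit_predict(X_train)        # +1 for inliers, -1 for outliers

# Novelty detection: novelty=True enables predict/decision_function on new samples.
lof_novelty = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train)
print(lof_novelty.predict(X_new))
```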
```
# Author: Alexandre Gramfort <[email protected]>
# Albert Thomas <[email protected]>
# License: BSD 3 clause
import time
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_moons, make_blobs
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.linear_model import SGDOneClassSVM
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
matplotlib.rcParams["contour.negative_linestyle"] = "solid"
# Example settings
n_samples = 300
outliers_fraction = 0.15
n_outliers = int(outliers_fraction * n_samples)
n_inliers = n_samples - n_outliers
# define outlier/anomaly detection methods to be compared.
# the SGDOneClassSVM must be used in a pipeline with a kernel approximation
# to give similar results to the OneClassSVM
anomaly_algorithms = [
("Robust covariance", EllipticEnvelope(contamination=outliers_fraction)),
("One-Class SVM", svm.OneClassSVM(nu=outliers_fraction, kernel="rbf", gamma=0.1)),
(
"One-Class SVM (SGD)",
make_pipeline(
Nystroem(gamma=0.1, random_state=42, n_components=150),
SGDOneClassSVM(
nu=outliers_fraction,
shuffle=True,
fit_intercept=True,
random_state=42,
tol=1e-6,
),
),
),
(
"Isolation Forest",
IsolationForest(contamination=outliers_fraction, random_state=42),
),
(
"Local Outlier Factor",
LocalOutlierFactor(n_neighbors=35, contamination=outliers_fraction),
),
]
# Define datasets
blobs_params = dict(random_state=0, n_samples=n_inliers, n_features=2)
datasets = [
make_blobs(centers=[[0, 0], [0, 0]], cluster_std=0.5, **blobs_params)[0],
make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[0.5, 0.5], **blobs_params)[0],
make_blobs(centers=[[2, 2], [-2, -2]], cluster_std=[1.5, 0.3], **blobs_params)[0],
4.0
* (
make_moons(n_samples=n_samples, noise=0.05, random_state=0)[0]
- np.array([0.5, 0.25])
),
14.0 * (np.random.RandomState(42).rand(n_samples, 2) - 0.5),
]
# Compare given classifiers under given settings
xx, yy = np.meshgrid(np.linspace(-7, 7, 150), np.linspace(-7, 7, 150))
plt.figure(figsize=(len(anomaly_algorithms) * 2 + 4, 12.5))
plt.subplots_adjust(
left=0.02, right=0.98, bottom=0.001, top=0.96, wspace=0.05, hspace=0.01
)
plot_num = 1
rng = np.random.RandomState(42)
for i_dataset, X in enumerate(datasets):
# Add outliers
X = np.concatenate([X, rng.uniform(low=-6, high=6, size=(n_outliers, 2))], axis=0)
for name, algorithm in anomaly_algorithms:
t0 = time.time()
algorithm.fit(X)
t1 = time.time()
plt.subplot(len(datasets), len(anomaly_algorithms), plot_num)
if i_dataset == 0:
plt.title(name, size=18)
# fit the data and tag outliers
if name == "Local Outlier Factor":
y_pred = algorithm.fit_predict(X)
else:
y_pred = algorithm.fit(X).predict(X)
# plot the levels lines and the points
if name != "Local Outlier Factor": # LOF does not implement predict
Z = algorithm.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors="black")
colors = np.array(["#377eb8", "#ff7f00"])
plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[(y_pred + 1) // 2])
plt.xlim(-7, 7)
plt.ylim(-7, 7)
plt.xticks(())
plt.yticks(())
plt.text(
0.99,
0.01,
("%.2fs" % (t1 - t0)).lstrip("0"),
transform=plt.gca().transAxes,
size=15,
horizontalalignment="right",
)
plot_num += 1
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.374 seconds)
[`Download Python source code: plot_anomaly_comparison.py`](https://scikit-learn.org/1.1/_downloads/9057446ce5a251004d1743be3a7c5f0e/plot_anomaly_comparison.py)
[`Download Jupyter notebook: plot_anomaly_comparison.ipynb`](https://scikit-learn.org/1.1/_downloads/1916a2a88b97d84ae7b7fb44d3f5503b/plot_anomaly_comparison.ipynb)
scikit_learn The Johnson-Lindenstrauss bound for embedding with random projections Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-johnson-lindenstrauss-bound-py) to download the full example code or to run this example in your browser via Binder
The Johnson-Lindenstrauss bound for embedding with random projections
=====================================================================
The [Johnson-Lindenstrauss lemma](https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma) states that any high dimensional dataset can be randomly projected into a lower dimensional Euclidean space while controlling the distortion in the pairwise distances.
```
import sys
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.random_projection import johnson_lindenstrauss_min_dim
from sklearn.random_projection import SparseRandomProjection
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.datasets import load_digits
from sklearn.metrics.pairwise import euclidean_distances
```
Theoretical bounds
------------------
The distortion introduced by a random projection `p` is controlled by the fact that `p` defines an eps-embedding with good probability, as defined by:
\[(1 - eps) \|u - v\|^2 < \|p(u) - p(v)\|^2 < (1 + eps) \|u - v\|^2\]
where `u` and `v` are any rows taken from a dataset of shape `(n_samples, n_features)` and `p` is a projection by a random Gaussian `N(0, 1)` matrix of shape `(n_components, n_features)` (or a sparse Achlioptas matrix).
The minimum number of components to guarantee the eps-embedding is given by:
\[n\_components \geq 4 \log(n\_samples) / (eps^2 / 2 - eps^3 / 3)\]
The first plot shows that with an increasing number of samples `n_samples`, the minimal number of dimensions `n_components` increases logarithmically in order to guarantee an `eps`-embedding.
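As a quick sanity check (an addition to the original example), the bound above can be evaluated by hand and compared against `johnson_lindenstrauss_min_dim`; the sample counts and distortions below are arbitrary.
```
import numpy as np
from sklearn.random_projection import johnson_lindenstrauss_min_dim

def min_dim_by_hand(n_samples, eps):
    # n_components >= 4 * log(n_samples) / (eps**2 / 2 - eps**3 / 3)
    return 4 * np.log(n_samples) / (eps**2 / 2 - eps**3 / 3)

for n_samples in (1_000, 1_000_000):
    for eps in (0.1, 0.5):
        by_hand = min_dim_by_hand(n_samples, eps)
        by_sklearn = johnson_lindenstrauss_min_dim(n_samples, eps=eps)
        # the two values should agree up to integer rounding
        print(f"n_samples={n_samples}, eps={eps}: {by_hand:.1f} vs {by_sklearn}")
```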
```
# range of admissible distortions
eps_range = np.linspace(0.1, 0.99, 5)
colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(eps_range)))
# range of number of samples (observation) to embed
n_samples_range = np.logspace(1, 9, 9)
plt.figure()
for eps, color in zip(eps_range, colors):
min_n_components = johnson_lindenstrauss_min_dim(n_samples_range, eps=eps)
plt.loglog(n_samples_range, min_n_components, color=color)
plt.legend([f"eps = {eps:0.1f}" for eps in eps_range], loc="lower right")
plt.xlabel("Number of observations to eps-embed")
plt.ylabel("Minimum number of dimensions")
plt.title("Johnson-Lindenstrauss bounds:\nn_samples vs n_components")
plt.show()
```
The second plot shows that increasing the admissible distortion `eps` drastically reduces the minimal number of dimensions `n_components` for a given number of samples `n_samples`.
```
# range of admissible distortions
eps_range = np.linspace(0.01, 0.99, 100)
# range of number of samples (observation) to embed
n_samples_range = np.logspace(2, 6, 5)
colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(n_samples_range)))
plt.figure()
for n_samples, color in zip(n_samples_range, colors):
min_n_components = johnson_lindenstrauss_min_dim(n_samples, eps=eps_range)
plt.semilogy(eps_range, min_n_components, color=color)
plt.legend([f"n_samples = {n}" for n in n_samples_range], loc="upper right")
plt.xlabel("Distortion eps")
plt.ylabel("Minimum number of dimensions")
plt.title("Johnson-Lindenstrauss bounds:\nn_components vs eps")
plt.show()
```
Empirical validation
--------------------
We validate the above bounds on the 20 newsgroups text document (TF-IDF word frequencies) dataset or on the digits dataset:
* for the 20 newsgroups dataset, some 300 documents with about 100k features in total are projected using a sparse random matrix onto smaller Euclidean spaces with various values for the target number of dimensions `n_components`.
* for the digits dataset, the 8x8 gray-level pixel data of 300 handwritten digit pictures are randomly projected onto spaces with various larger numbers of dimensions `n_components`.
The default dataset is the 20 newsgroups dataset. To run the example on the digits dataset, pass the `--use-digits-dataset` command line argument to this script.
```
if "--use-digits-dataset" in sys.argv:
data = load_digits().data[:300]
else:
data = fetch_20newsgroups_vectorized().data[:300]
```
For each value of `n_components`, we plot:
* 2D distribution of sample pairs with pairwise distances in original and projected spaces as x- and y-axis respectively.
* 1D histogram of the ratio of those distances (projected / original).
```
n_samples, n_features = data.shape
print(
f"Embedding {n_samples} samples with dim {n_features} using various "
"random projections"
)
n_components_range = np.array([300, 1_000, 10_000])
dists = euclidean_distances(data, squared=True).ravel()
# select only non-identical samples pairs
nonzero = dists != 0
dists = dists[nonzero]
for n_components in n_components_range:
t0 = time()
rp = SparseRandomProjection(n_components=n_components)
projected_data = rp.fit_transform(data)
print(
f"Projected {n_samples} samples from {n_features} to {n_components} in "
f"{time() - t0:0.3f}s"
)
if hasattr(rp, "components_"):
n_bytes = rp.components_.data.nbytes
n_bytes += rp.components_.indices.nbytes
print(f"Random matrix with size: {n_bytes / 1e6:0.3f} MB")
projected_dists = euclidean_distances(projected_data, squared=True).ravel()[nonzero]
plt.figure()
min_dist = min(projected_dists.min(), dists.min())
max_dist = max(projected_dists.max(), dists.max())
plt.hexbin(
dists,
projected_dists,
gridsize=100,
cmap=plt.cm.PuBu,
extent=[min_dist, max_dist, min_dist, max_dist],
)
plt.xlabel("Pairwise squared distances in original space")
plt.ylabel("Pairwise squared distances in projected space")
plt.title("Pairwise distances distribution for n_components=%d" % n_components)
cb = plt.colorbar()
cb.set_label("Sample pairs counts")
rates = projected_dists / dists
print(f"Mean distances rate: {np.mean(rates):.2f} ({np.std(rates):.2f})")
plt.figure()
plt.hist(rates, bins=50, range=(0.0, 2.0), edgecolor="k", density=True)
plt.xlabel("Squared distances rate: projected / original")
plt.ylabel("Distribution of samples pairs")
plt.title("Histogram of pairwise distance rates for n_components=%d" % n_components)
# TODO: compute the expected value of eps and add them to the previous plot
# as vertical lines / region
plt.show()
```
```
Embedding 300 samples with dim 130107 using various random projections
Projected 300 samples from 130107 to 300 in 0.236s
Random matrix with size: 1.301 MB
Mean distances rate: 1.06 (0.21)
Projected 300 samples from 130107 to 1000 in 0.776s
Random matrix with size: 4.328 MB
Mean distances rate: 1.05 (0.10)
Projected 300 samples from 130107 to 10000 in 7.806s
Random matrix with size: 43.251 MB
Mean distances rate: 1.01 (0.03)
```
We can see that for low values of `n_components` the distribution is wide with many distorted pairs and a skewed distribution (due to the hard limit of zero ratio on the left as distances are always positives) while for larger values of `n_components` the distortion is controlled and the distances are well preserved by the random projection.
Remarks
-------
According to the JL lemma, projecting 300 samples without too much distortion will require at least several thousands dimensions, irrespective of the number of features of the original dataset.
Hence using random projections on the digits dataset which only has 64 features in the input space does not make sense: it does not allow for dimensionality reduction in this case.
For the 20 newsgroups dataset, on the other hand, the dimensionality can be decreased from 130,107 (the number of features reported in the output above) down to 10,000 while reasonably preserving pairwise distances.
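To make the first remark concrete, here is a small check (an addition to the original text) of the number of dimensions required by the lemma for 300 samples at a few distortion levels:
```
from sklearn.random_projection import johnson_lindenstrauss_min_dim

for eps in (0.1, 0.2, 0.5):
    n_components = johnson_lindenstrauss_min_dim(n_samples=300, eps=eps)
    print(f"eps={eps}: n_components >= {n_components}")
```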
**Total running time of the script:** ( 0 minutes 10.818 seconds)
[`Download Python source code: plot_johnson_lindenstrauss_bound.py`](https://scikit-learn.org/1.1/_downloads/cba66f803bb263f8032bc4d46368e20b/plot_johnson_lindenstrauss_bound.py)
[`Download Jupyter notebook: plot_johnson_lindenstrauss_bound.ipynb`](https://scikit-learn.org/1.1/_downloads/285b194a4740110cb23e241031123972/plot_johnson_lindenstrauss_bound.ipynb)
| programming_docs |
scikit_learn Compact estimator representations Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-changed-only-pprint-parameter-py) to download the full example code or to run this example in your browser via Binder
Compact estimator representations
=================================
This example illustrates the use of the print\_changed\_only global parameter.
Setting print\_changed\_only to True will alter the representation of estimators to only show the parameters that have been set to non-default values. This can be used to have more compact representations.
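A related option, not shown in this example, is to change the setting only temporarily with `sklearn.config_context`, which restores the previous configuration when the block exits; the estimator below is just an illustration.
```
from sklearn import config_context
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(penalty="l1")

with config_context(print_changed_only=False):
    print(lr)  # full representation with all parameters inside the block
print(lr)      # compact representation restored outside the block
```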
```
Default representation:
LogisticRegression(penalty='l1')
With changed_only option:
LogisticRegression(penalty='l1')
```
```
from sklearn.linear_model import LogisticRegression
from sklearn import set_config
lr = LogisticRegression(penalty="l1")
print("Default representation:")
print(lr)
# LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
# intercept_scaling=1, l1_ratio=None, max_iter=100,
# multi_class='auto', n_jobs=None, penalty='l1',
# random_state=None, solver='warn', tol=0.0001, verbose=0,
# warm_start=False)
set_config(print_changed_only=True)
print("\nWith changed_only option:")
print(lr)
# LogisticRegression(penalty='l1')
```
**Total running time of the script:** ( 0 minutes 0.001 seconds)
[`Download Python source code: plot_changed_only_pprint_parameter.py`](https://scikit-learn.org/1.1/_downloads/9a745322d64077b9b29de6e2eee95cb8/plot_changed_only_pprint_parameter.py)
[`Download Jupyter notebook: plot_changed_only_pprint_parameter.ipynb`](https://scikit-learn.org/1.1/_downloads/73b204eec0616bad4b738d300bf5c030/plot_changed_only_pprint_parameter.ipynb)
scikit_learn Comparison of kernel ridge regression and SVR Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-kernel-ridge-regression-py) to download the full example code or to run this example in your browser via Binder
Comparison of kernel ridge regression and SVR
=============================================
Both kernel ridge regression (KRR) and SVR learn a non-linear function by employing the kernel trick, i.e., they learn a linear function in the space induced by the respective kernel which corresponds to a non-linear function in the original space. They differ in the loss functions (ridge versus epsilon-insensitive loss). In contrast to SVR, fitting a KRR can be done in closed-form and is typically faster for medium-sized datasets. On the other hand, the learned model is non-sparse and thus slower than SVR at prediction-time.
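For reference (this formulation is a standard textbook one and is not taken from the example itself), with a kernel-induced feature map `phi(x)` the two objectives can be written as:
\[\min_w \; \sum_i (y_i - w^T \phi(x_i))^2 + \alpha \|w\|^2 \quad \text{(kernel ridge)}\]
\[\min_w \; \tfrac{1}{2} \|w\|^2 + C \sum_i \max(0, |y_i - w^T \phi(x_i)| - \varepsilon) \quad \text{(SVR)}\]
The squared loss is what allows the KRR solution to be computed in closed form as a linear system, while the epsilon-insensitive loss is what makes the SVR solution sparse in the training samples.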
This example illustrates both methods on an artificial dataset, which consists of a sinusoidal target function and strong noise added to every fifth datapoint.
Authors: Jan Hendrik Metzen <[[email protected]](mailto:jhm%40informatik.uni-bremen.de)> License: BSD 3 clause
Generate sample data
--------------------
```
import numpy as np
rng = np.random.RandomState(42)
X = 5 * rng.rand(10000, 1)
y = np.sin(X).ravel()
# Add noise to targets
y[::5] += 3 * (0.5 - rng.rand(X.shape[0] // 5))
X_plot = np.linspace(0, 5, 100000)[:, None]
```
Construct the kernel-based regression models
--------------------------------------------
```
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
train_size = 100
svr = GridSearchCV(
SVR(kernel="rbf", gamma=0.1),
param_grid={"C": [1e0, 1e1, 1e2, 1e3], "gamma": np.logspace(-2, 2, 5)},
)
kr = GridSearchCV(
KernelRidge(kernel="rbf", gamma=0.1),
param_grid={"alpha": [1e0, 0.1, 1e-2, 1e-3], "gamma": np.logspace(-2, 2, 5)},
)
```
Compare times of SVR and Kernel Ridge Regression
------------------------------------------------
```
import time
t0 = time.time()
svr.fit(X[:train_size], y[:train_size])
svr_fit = time.time() - t0
print(f"Best SVR with params: {svr.best_params_} and R2 score: {svr.best_score_:.3f}")
print("SVR complexity and bandwidth selected and model fitted in %.3f s" % svr_fit)
t0 = time.time()
kr.fit(X[:train_size], y[:train_size])
kr_fit = time.time() - t0
print(f"Best KRR with params: {kr.best_params_} and R2 score: {kr.best_score_:.3f}")
print("KRR complexity and bandwidth selected and model fitted in %.3f s" % kr_fit)
sv_ratio = svr.best_estimator_.support_.shape[0] / train_size
print("Support vector ratio: %.3f" % sv_ratio)
t0 = time.time()
y_svr = svr.predict(X_plot)
svr_predict = time.time() - t0
print("SVR prediction for %d inputs in %.3f s" % (X_plot.shape[0], svr_predict))
t0 = time.time()
y_kr = kr.predict(X_plot)
kr_predict = time.time() - t0
print("KRR prediction for %d inputs in %.3f s" % (X_plot.shape[0], kr_predict))
```
```
Best SVR with params: {'C': 1.0, 'gamma': 0.1} and R2 score: 0.737
SVR complexity and bandwidth selected and model fitted in 0.471 s
Best KRR with params: {'alpha': 0.1, 'gamma': 0.1} and R2 score: 0.723
KRR complexity and bandwidth selected and model fitted in 0.125 s
Support vector ratio: 0.340
SVR prediction for 100000 inputs in 0.120 s
KRR prediction for 100000 inputs in 0.095 s
```
Look at the results
-------------------
```
import matplotlib.pyplot as plt
sv_ind = svr.best_estimator_.support_
plt.scatter(
X[sv_ind],
y[sv_ind],
c="r",
s=50,
label="SVR support vectors",
zorder=2,
edgecolors=(0, 0, 0),
)
plt.scatter(X[:100], y[:100], c="k", label="data", zorder=1, edgecolors=(0, 0, 0))
plt.plot(
X_plot,
y_svr,
c="r",
label="SVR (fit: %.3fs, predict: %.3fs)" % (svr_fit, svr_predict),
)
plt.plot(
X_plot, y_kr, c="g", label="KRR (fit: %.3fs, predict: %.3fs)" % (kr_fit, kr_predict)
)
plt.xlabel("data")
plt.ylabel("target")
plt.title("SVR versus Kernel Ridge")
_ = plt.legend()
```
The previous figure compares the learned model of KRR and SVR when both complexity/regularization and bandwidth of the RBF kernel are optimized using grid-search. The learned functions are very similar; however, fitting KRR is approximately 3-4 times faster than fitting SVR (both with grid-search).
Prediction of 100000 target values could in theory be approximately three times faster with SVR since it has learned a sparse model using only approximately 1/3 of the training datapoints as support vectors. However, in practice this is not necessarily the case because of implementation details in the way the kernel function is computed for each model, which can make the KRR model as fast or even faster despite computing more arithmetic operations.
Visualize training and prediction times
---------------------------------------
```
plt.figure()
sizes = np.logspace(1, 3.8, 7).astype(int)
for name, estimator in {
"KRR": KernelRidge(kernel="rbf", alpha=0.01, gamma=10),
"SVR": SVR(kernel="rbf", C=1e2, gamma=10),
}.items():
train_time = []
test_time = []
for train_test_size in sizes:
t0 = time.time()
estimator.fit(X[:train_test_size], y[:train_test_size])
train_time.append(time.time() - t0)
t0 = time.time()
estimator.predict(X_plot[:1000])
test_time.append(time.time() - t0)
plt.plot(
sizes,
train_time,
"o-",
color="r" if name == "SVR" else "g",
label="%s (train)" % name,
)
plt.plot(
sizes,
test_time,
"o--",
color="r" if name == "SVR" else "g",
label="%s (test)" % name,
)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Train size")
plt.ylabel("Time (seconds)")
plt.title("Execution Time")
_ = plt.legend(loc="best")
```
This figure compares the time for fitting and prediction of KRR and SVR for different sizes of the training set. Fitting KRR is faster than SVR for medium-sized training sets (less than a few thousand samples); however, for larger training sets SVR scales better. With regard to prediction time, SVR should be faster than KRR for all sizes of the training set because of the learned sparse solution; however, this is not necessarily the case in practice because of implementation details. Note that the degree of sparsity, and thus the prediction time, depends on the parameters epsilon and C of the SVR.
Visualize the learning curves
-----------------------------
```
from sklearn.model_selection import learning_curve
plt.figure()
svr = SVR(kernel="rbf", C=1e1, gamma=0.1)
kr = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.1)
train_sizes, train_scores_svr, test_scores_svr = learning_curve(
svr,
X[:100],
y[:100],
train_sizes=np.linspace(0.1, 1, 10),
scoring="neg_mean_squared_error",
cv=10,
)
train_sizes_abs, train_scores_kr, test_scores_kr = learning_curve(
kr,
X[:100],
y[:100],
train_sizes=np.linspace(0.1, 1, 10),
scoring="neg_mean_squared_error",
cv=10,
)
plt.plot(train_sizes, -test_scores_kr.mean(1), "o--", color="g", label="KRR")
plt.plot(train_sizes, -test_scores_svr.mean(1), "o--", color="r", label="SVR")
plt.xlabel("Train size")
plt.ylabel("Mean Squared Error")
plt.title("Learning curves")
plt.legend(loc="best")
plt.show()
```
**Total running time of the script:** ( 0 minutes 8.417 seconds)
[`Download Python source code: plot_kernel_ridge_regression.py`](https://scikit-learn.org/1.1/_downloads/1dcd684ce26b8c407ec2c2d2101c5c73/plot_kernel_ridge_regression.py)
[`Download Jupyter notebook: plot_kernel_ridge_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/9d2f119ab4a1b6f1454c43b796f2c6a6/plot_kernel_ridge_regression.ipynb)
scikit_learn Displaying Pipelines Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-pipeline-display-py) to download the full example code or to run this example in your browser via Binder
Displaying Pipelines
====================
The default configuration for displaying a pipeline in a Jupyter Notebook is `'diagram'`, i.e. `set_config(display='diagram')`. To deactivate the HTML representation, use `set_config(display='text')`.
To see more detailed steps in the visualization of the pipeline, click on the steps in the pipeline.
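Outside of a notebook, the same HTML diagram can be obtained as a string with `sklearn.utils.estimator_html_repr` and written to a file; this small sketch is an addition to the example, and the file name is arbitrary.
```
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import estimator_html_repr

pipe = Pipeline([("preprocessing", StandardScaler()), ("classifier", LogisticRegression())])

# Write the interactive HTML diagram to a standalone file.
with open("pipeline_diagram.html", "w", encoding="utf-8") as f:
    f.write(estimator_html_repr(pipe))
```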
Displaying a Pipeline with a Preprocessing Step and Classifier
--------------------------------------------------------------
This section constructs a [`Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") with a preprocessing step, [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler"), and classifier, [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression"), and displays its visual representation.
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn import set_config
steps = [
("preprocessing", StandardScaler()),
("classifier", LogisticRegression()),
]
pipe = Pipeline(steps)
```
To visualize the diagram, the default is `display='diagram'`.
```
set_config(display="diagram")
pipe # click on the diagram below to see the details of each step
```
```
Pipeline(steps=[('preprocessing', StandardScaler()),
('classifier', LogisticRegression())])
```
To view the text pipeline, change to `display='text'`.
```
set_config(display="text")
pipe
```
```
Pipeline(steps=[('preprocessing', StandardScaler()),
('classifier', LogisticRegression())])
```
Put back the default display
```
set_config(display="diagram")
```
Displaying a Pipeline Chaining Multiple Preprocessing Steps & Classifier
------------------------------------------------------------------------
This section constructs a [`Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") with multiple preprocessing steps, [`PolynomialFeatures`](../../modules/generated/sklearn.preprocessing.polynomialfeatures#sklearn.preprocessing.PolynomialFeatures "sklearn.preprocessing.PolynomialFeatures") and [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler"), and a classifier step, [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression"), and displays its visual representation.
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LogisticRegression
steps = [
("standard_scaler", StandardScaler()),
("polynomial", PolynomialFeatures(degree=3)),
("classifier", LogisticRegression(C=2.0)),
]
pipe = Pipeline(steps)
pipe # click on the diagram below to see the details of each step
```
```
Pipeline(steps=[('standard_scaler', StandardScaler()),
('polynomial', PolynomialFeatures(degree=3)),
('classifier', LogisticRegression(C=2.0))])
```
Displaying a Pipeline and Dimensionality Reduction and Classifier
-----------------------------------------------------------------
This section constructs a [`Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") with a dimensionality reduction step, [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), a classifier, [`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), and displays its visual representation.
```
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
steps = [("reduce_dim", PCA(n_components=4)), ("classifier", SVC(kernel="linear"))]
pipe = Pipeline(steps)
pipe # click on the diagram below to see the details of each step
```
```
Pipeline(steps=[('reduce_dim', PCA(n_components=4)),
('classifier', SVC(kernel='linear'))])
```
Displaying a Complex Pipeline Chaining a Column Transformer
-----------------------------------------------------------
This section constructs a complex [`Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") with a [`ColumnTransformer`](../../modules/generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") and a classifier, [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression"), and displays its visual representation.
```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
numeric_preprocessor = Pipeline(
steps=[
("imputation_mean", SimpleImputer(missing_values=np.nan, strategy="mean")),
("scaler", StandardScaler()),
]
)
categorical_preprocessor = Pipeline(
steps=[
(
"imputation_constant",
SimpleImputer(fill_value="missing", strategy="constant"),
),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
[
("categorical", categorical_preprocessor, ["state", "gender"]),
("numerical", numeric_preprocessor, ["age", "weight"]),
]
)
pipe = make_pipeline(preprocessor, LogisticRegression(max_iter=500))
pipe # click on the diagram below to see the details of each step
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('categorical',
Pipeline(steps=[('imputation_constant',
SimpleImputer(fill_value='missing',
strategy='constant')),
('onehot',
OneHotEncoder(handle_unknown='ignore'))]),
['state', 'gender']),
('numerical',
Pipeline(steps=[('imputation_mean',
SimpleImputer()),
('scaler',
StandardScaler())]),
['age', 'weight'])])),
('logisticregression', LogisticRegression(max_iter=500))])
```
Displaying a Grid Search over a Pipeline with a Classifier
----------------------------------------------------------
This section constructs a [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") over a [`Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") with [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") and displays its visual representation.
```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
numeric_preprocessor = Pipeline(
steps=[
("imputation_mean", SimpleImputer(missing_values=np.nan, strategy="mean")),
("scaler", StandardScaler()),
]
)
categorical_preprocessor = Pipeline(
steps=[
(
"imputation_constant",
SimpleImputer(fill_value="missing", strategy="constant"),
),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
[
("categorical", categorical_preprocessor, ["state", "gender"]),
("numerical", numeric_preprocessor, ["age", "weight"]),
]
)
pipe = Pipeline(
steps=[("preprocessor", preprocessor), ("classifier", RandomForestClassifier())]
)
param_grid = {
"classifier__n_estimators": [200, 500],
"classifier__max_features": ["auto", "sqrt", "log2"],
"classifier__max_depth": [4, 5, 6, 7, 8],
"classifier__criterion": ["gini", "entropy"],
}
grid_search = GridSearchCV(pipe, param_grid=param_grid, n_jobs=1)
grid_search # click on the diagram below to see the details of each step
```
```
GridSearchCV(estimator=Pipeline(steps=[('preprocessor',
ColumnTransformer(transformers=[('categorical',
Pipeline(steps=[('imputation_constant',
SimpleImputer(fill_value='missing',
strategy='constant')),
('onehot',
OneHotEncoder(handle_unknown='ignore'))]),
['state',
'gender']),
('numerical',
Pipeline(steps=[('imputation_mean',
SimpleImputer()),
('scaler',
StandardScaler())]),
['age',
'weight'])])),
('classifier',
RandomForestClassifier())]),
n_jobs=1,
param_grid={'classifier__criterion': ['gini', 'entropy'],
'classifier__max_depth': [4, 5, 6, 7, 8],
'classifier__max_features': ['auto', 'sqrt', 'log2'],
'classifier__n_estimators': [200, 500]})
```
**Total running time of the script:** ( 0 minutes 0.086 seconds)
[`Download Python source code: plot_pipeline_display.py`](https://scikit-learn.org/1.1/_downloads/85d246b1849cf51d95634621ea0b7dd2/plot_pipeline_display.py)
[`Download Jupyter notebook: plot_pipeline_display.ipynb`](https://scikit-learn.org/1.1/_downloads/6ebaebc92484478ceb119d24fe9df21c/plot_pipeline_display.ipynb)
| programming_docs |
scikit_learn Multilabel classification Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-multilabel-py) to download the full example code or to run this example in your browser via Binder
Multilabel classification
=========================
This example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process:
* pick the number of labels: n ~ Poisson(n\_labels)
* n times, choose a class c: c ~ Multinomial(theta)
* pick the document length: k ~ Poisson(length)
* k times, choose a word: w ~ Multinomial(theta\_c)
In the above process, rejection sampling is used to make sure that n is more than 2, and that the document length is never zero. Likewise, we reject classes which have already been chosen. The documents that are assigned to both classes are plotted surrounded by two colored circles.
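The sampling process described above can be sketched directly in NumPy (this sketch is an illustration added here, not the code used by `make_multilabel_classification`); `n_labels`, `length`, the vocabulary size, and the `theta` parameters are made-up values, and each word is drawn from the mixture of the chosen classes' word distributions as a simplification.
```
import numpy as np

rng = np.random.default_rng(0)
n_classes, vocab_size = 5, 20                                  # made-up sizes
theta = rng.dirichlet(np.ones(n_classes))                      # class mixture
theta_c = rng.dirichlet(np.ones(vocab_size), size=n_classes)   # per-class word distributions

def sample_document(n_labels=2, length=30):
    # pick the number of labels, rejecting values that cannot be used
    n = 0
    while n == 0 or n > n_classes:
        n = rng.poisson(n_labels)
    # choose n distinct classes according to theta (replace=False rejects repeats)
    labels = rng.choice(n_classes, size=n, replace=False, p=theta)
    # pick the document length, rejecting empty documents
    k = 0
    while k == 0:
        k = rng.poisson(length)
    # draw k words from the mixture of the chosen classes' word distributions
    word_probs = theta_c[labels].mean(axis=0)
    words = rng.choice(vocab_size, size=k, p=word_probs)
    return labels, words

labels, words = sample_document()
print(labels, np.bincount(words, minlength=vocab_size))
```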
The classification is performed by projecting onto the first two principal components found by PCA and CCA for visualisation purposes, followed by the [`OneVsRestClassifier`](../../modules/generated/sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier "sklearn.multiclass.OneVsRestClassifier") metaclassifier, which uses two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA is used to perform a supervised one.
Note: in the plot, “unlabeled samples” does not mean that we don’t know the labels (as in semi-supervised learning) but that the samples simply do *not* have a label.
```
# Authors: Vlad Niculae, Mathieu Blondel
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
def plot_hyperplane(clf, min_x, max_x, linestyle, label):
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(min_x - 5, max_x + 5) # make sure the line is long enough
yy = a * xx - (clf.intercept_[0]) / w[1]
plt.plot(xx, yy, linestyle, label=label)
def plot_subfigure(X, Y, subplot, title, transform):
if transform == "pca":
X = PCA(n_components=2).fit_transform(X)
elif transform == "cca":
X = CCA(n_components=2).fit(X, Y).transform(X)
else:
raise ValueError
min_x = np.min(X[:, 0])
max_x = np.max(X[:, 0])
min_y = np.min(X[:, 1])
max_y = np.max(X[:, 1])
classif = OneVsRestClassifier(SVC(kernel="linear"))
classif.fit(X, Y)
plt.subplot(2, 2, subplot)
plt.title(title)
zero_class = np.where(Y[:, 0])
one_class = np.where(Y[:, 1])
plt.scatter(X[:, 0], X[:, 1], s=40, c="gray", edgecolors=(0, 0, 0))
plt.scatter(
X[zero_class, 0],
X[zero_class, 1],
s=160,
edgecolors="b",
facecolors="none",
linewidths=2,
label="Class 1",
)
plt.scatter(
X[one_class, 0],
X[one_class, 1],
s=80,
edgecolors="orange",
facecolors="none",
linewidths=2,
label="Class 2",
)
plot_hyperplane(
classif.estimators_[0], min_x, max_x, "k--", "Boundary\nfor class 1"
)
plot_hyperplane(
classif.estimators_[1], min_x, max_x, "k-.", "Boundary\nfor class 2"
)
plt.xticks(())
plt.yticks(())
plt.xlim(min_x - 0.5 * max_x, max_x + 0.5 * max_x)
plt.ylim(min_y - 0.5 * max_y, max_y + 0.5 * max_y)
if subplot == 2:
plt.xlabel("First principal component")
plt.ylabel("Second principal component")
plt.legend(loc="upper left")
plt.figure(figsize=(8, 6))
X, Y = make_multilabel_classification(
n_classes=2, n_labels=1, allow_unlabeled=True, random_state=1
)
plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")
X, Y = make_multilabel_classification(
n_classes=2, n_labels=1, allow_unlabeled=False, random_state=1
)
plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")
plt.subplots_adjust(0.04, 0.02, 0.97, 0.94, 0.09, 0.2)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.173 seconds)
[`Download Python source code: plot_multilabel.py`](https://scikit-learn.org/1.1/_downloads/d9b8062b664fa0515a7849d4a28e98ed/plot_multilabel.py)
[`Download Jupyter notebook: plot_multilabel.ipynb`](https://scikit-learn.org/1.1/_downloads/65b807da1fd0f3cbb60c1425fddba026/plot_multilabel.ipynb)
scikit_learn Advanced Plotting With Partial Dependence Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-partial-dependence-visualization-api-py) to download the full example code or to run this example in your browser via Binder
Advanced Plotting With Partial Dependence
=========================================
The [`plot_partial_dependence`](../../modules/generated/sklearn.inspection.plot_partial_dependence#sklearn.inspection.plot_partial_dependence "sklearn.inspection.plot_partial_dependence") function returns a [`PartialDependenceDisplay`](../../modules/generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay "sklearn.inspection.PartialDependenceDisplay") object that can be used for plotting without needing to recalculate the partial dependence. In this example, we show how to plot partial dependence plots and how to quickly customize the plot with the visualization API.
Note
See also [ROC Curve with Visualization API](plot_roc_curve_visualization_api#sphx-glr-auto-examples-miscellaneous-plot-roc-curve-visualization-api-py)
```
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor
from sklearn.inspection import PartialDependenceDisplay
```
Train models on the diabetes dataset
------------------------------------
First, we train a decision tree and a multi-layer perceptron on the diabetes dataset.
```
diabetes = load_diabetes()
X = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
y = diabetes.target
tree = DecisionTreeRegressor()
mlp = make_pipeline(
StandardScaler(),
MLPRegressor(hidden_layer_sizes=(100, 100), tol=1e-2, max_iter=500, random_state=0),
)
tree.fit(X, y)
mlp.fit(X, y)
```
```
Pipeline(steps=[('standardscaler', StandardScaler()),
('mlpregressor',
MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=500,
random_state=0, tol=0.01))])
```
Plotting partial dependence for two features
--------------------------------------------
We plot partial dependence curves for features “age” and “bmi” (body mass index) for the decision tree. With two features, [`from_estimator`](../../modules/generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay.from_estimator "sklearn.inspection.PartialDependenceDisplay.from_estimator") expects to plot two curves. Here the plot function places a grid of two plots using the space defined by `ax`.
```
fig, ax = plt.subplots(figsize=(12, 6))
ax.set_title("Decision Tree")
tree_disp = PartialDependenceDisplay.from_estimator(tree, X, ["age", "bmi"], ax=ax)
```
The partial dependence curves can be plotted for the multi-layer perceptron. In this case, `line_kw` is passed to [`from_estimator`](../../modules/generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay.from_estimator "sklearn.inspection.PartialDependenceDisplay.from_estimator") to change the color of the curve.
```
fig, ax = plt.subplots(figsize=(12, 6))
ax.set_title("Multi-layer Perceptron")
mlp_disp = PartialDependenceDisplay.from_estimator(
mlp, X, ["age", "bmi"], ax=ax, line_kw={"color": "red"}
)
```
Plotting partial dependence of the two models together
------------------------------------------------------
The `tree_disp` and `mlp_disp` [`PartialDependenceDisplay`](../../modules/generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay "sklearn.inspection.PartialDependenceDisplay") objects contain all the computed information needed to recreate the partial dependence curves. This means we can easily create additional plots without needing to recompute the curves.
One way to plot the curves is to place them in the same figure, with the curves of each model on each row. First, we create a figure with two axes within two rows and one column. The two axes are passed to the [`plot`](../../modules/generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay.plot "sklearn.inspection.PartialDependenceDisplay.plot") functions of `tree_disp` and `mlp_disp`. The given axes will be used by the plotting function to draw the partial dependence. The resulting plot places the decision tree partial dependence curves in the first row and those of the multi-layer perceptron in the second row.
```
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 10))
tree_disp.plot(ax=ax1)
ax1.set_title("Decision Tree")
mlp_disp.plot(ax=ax2, line_kw={"color": "red"})
ax2.set_title("Multi-layer Perceptron")
```
```
Text(0.5, 1.0, 'Multi-layer Perceptron')
```
Another way to compare the curves is to plot them on top of each other. Here, we create a figure with one row and two columns. The axes are passed into the [`plot`](../../modules/generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay.plot "sklearn.inspection.PartialDependenceDisplay.plot") function as a list, which will plot the partial dependence curves of each model on the same axes. The length of the axes list must be equal to the number of plots drawn.
```
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 6))
tree_disp.plot(ax=[ax1, ax2], line_kw={"label": "Decision Tree"})
mlp_disp.plot(
ax=[ax1, ax2], line_kw={"label": "Multi-layer Perceptron", "color": "red"}
)
ax1.legend()
ax2.legend()
```
```
<matplotlib.legend.Legend object at 0x7f6e7edcad00>
```
`tree_disp.axes_` is a numpy array containing the axes used to draw the partial dependence plots. This can be passed to `mlp_disp` to achieve the same effect of drawing the plots on top of each other. Furthermore, `mlp_disp.figure_` stores the figure, which allows for resizing the figure after calling `plot`. In this case `tree_disp.axes_` has two dimensions, thus `plot` will only show the y label and y ticks on the leftmost plot.
```
tree_disp.plot(line_kw={"label": "Decision Tree"})
mlp_disp.plot(
line_kw={"label": "Multi-layer Perceptron", "color": "red"}, ax=tree_disp.axes_
)
tree_disp.figure_.set_size_inches(10, 6)
tree_disp.axes_[0, 0].legend()
tree_disp.axes_[0, 1].legend()
plt.show()
```
Plotting partial dependence for one feature
-------------------------------------------
Here, we plot the partial dependence curves for a single feature, “age”, on the same axes. In this case, `tree_disp.axes_` is passed into the second plot function.
```
tree_disp = PartialDependenceDisplay.from_estimator(tree, X, ["age"])
mlp_disp = PartialDependenceDisplay.from_estimator(
mlp, X, ["age"], ax=tree_disp.axes_, line_kw={"color": "red"}
)
```
**Total running time of the script:** ( 0 minutes 2.285 seconds)
[`Download Python source code: plot_partial_dependence_visualization_api.py`](https://scikit-learn.org/1.1/_downloads/1bba2567637a1618250bc13e249eb0d7/plot_partial_dependence_visualization_api.py)
[`Download Jupyter notebook: plot_partial_dependence_visualization_api.ipynb`](https://scikit-learn.org/1.1/_downloads/fbad5f36a76ec3e17c024c7b920e5552/plot_partial_dependence_visualization_api.ipynb)
scikit_learn Visualizations with Display Objects Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-display-object-visualization-py) to download the full example code or to run this example in your browser via Binder
Visualizations with Display Objects
===================================
In this example, we will construct display objects, [`ConfusionMatrixDisplay`](../../modules/generated/sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay"), [`RocCurveDisplay`](../../modules/generated/sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay "sklearn.metrics.RocCurveDisplay"), and [`PrecisionRecallDisplay`](../../modules/generated/sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay "sklearn.metrics.PrecisionRecallDisplay") directly from their respective metrics. This is an alternative to using their corresponding plot functions when a model’s predictions are already computed or expensive to compute. Note that this is advanced usage, and in general we recommend using their respective plot functions.
Load Data and train model
-------------------------
For this example, we load a blood transfusion service center data set from [OpenML](https://www.openml.org/d/1464). This is a binary classification problem where the target is whether an individual donated blood. Then the data is split into a train and test dataset and a logistic regression is fitted on the train dataset.
```
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
X, y = fetch_openml(data_id=1464, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
clf = make_pipeline(StandardScaler(), LogisticRegression(random_state=0))
clf.fit(X_train, y_train)
```
```
Pipeline(steps=[('standardscaler', StandardScaler()),
('logisticregression', LogisticRegression(random_state=0))])
```
### Create [`ConfusionMatrixDisplay`](../../modules/generated/sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay")
With the fitted model, we compute the predictions of the model on the test dataset. These predictions are used to compute the confusion matrix, which is plotted with the [`ConfusionMatrixDisplay`](../../modules/generated/sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay").
```
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
cm_display = ConfusionMatrixDisplay(cm).plot()
```
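For completeness, recent scikit-learn versions also offer `ConfusionMatrixDisplay.from_predictions`, which builds the same plot in one step from the `y_test` and `y_pred` arrays computed above; this one-liner is an aside, not part of the original example.
```
from sklearn.metrics import ConfusionMatrixDisplay

# Equivalent one-step construction from the true and predicted labels above.
ConfusionMatrixDisplay.from_predictions(y_test, y_pred)
```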
### Create [`RocCurveDisplay`](../../modules/generated/sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay "sklearn.metrics.RocCurveDisplay")
The roc curve requires either the probabilities or the non-thresholded decision values from the estimator. Since the logistic regression provides a decision function, we will use it to plot the roc curve:
```
from sklearn.metrics import roc_curve
from sklearn.metrics import RocCurveDisplay
y_score = clf.decision_function(X_test)
fpr, tpr, _ = roc_curve(y_test, y_score, pos_label=clf.classes_[1])
roc_display = RocCurveDisplay(fpr=fpr, tpr=tpr).plot()
```
### Create [`PrecisionRecallDisplay`](../../modules/generated/sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay "sklearn.metrics.PrecisionRecallDisplay")
Similarly, the precision-recall curve can be plotted using `y_score` from the previous section.
```
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import PrecisionRecallDisplay
prec, recall, _ = precision_recall_curve(y_test, y_score, pos_label=clf.classes_[1])
pr_display = PrecisionRecallDisplay(precision=prec, recall=recall).plot()
```
### Combining the display objects into a single plot
The display objects store the computed values that were passed as arguments. This allows the visualizations to be easily combined using matplotlib’s API. In the following example, we place the displays next to each other in a row.
```
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 8))
roc_display.plot(ax=ax1)
pr_display.plot(ax=ax2)
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.293 seconds)
[`Download Python source code: plot_display_object_visualization.py`](https://scikit-learn.org/1.1/_downloads/ff5c36c46d02d3f6e6acbc298cf36708/plot_display_object_visualization.py)
[`Download Jupyter notebook: plot_display_object_visualization.ipynb`](https://scikit-learn.org/1.1/_downloads/67dd3ba8fc7589bc10a0d9191f17ab66/plot_display_object_visualization.ipynb)
scikit_learn Evaluation of outlier detection estimators Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-outlier-detection-bench-py) to download the full example code or to run this example in your browser via Binder
Evaluation of outlier detection estimators
==========================================
This example benchmarks outlier detection algorithms, [Local Outlier Factor](../../modules/outlier_detection#local-outlier-factor) (LOF) and [Isolation Forest](../../modules/outlier_detection#isolation-forest) (IForest), using ROC curves on classical anomaly detection datasets. The algorithm performance is assessed in an outlier detection context:
1. The algorithms are trained on the whole dataset which is assumed to contain outliers.
2. The ROC curve from [`RocCurveDisplay`](../../modules/generated/sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay "sklearn.metrics.RocCurveDisplay") is computed on the same dataset using the knowledge of the labels.
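As a condensed sketch of this evaluation protocol (an illustration on a synthetic dataset, not the benchmark code of this example), one can fit the estimator on data that already contains the outliers and then score the continuous outlier scores against the ground-truth labels:
```
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(42)

# Inliers from a single blob plus uniformly distributed outliers (label 1 = outlier).
X_inliers, _ = make_blobs(n_samples=270, centers=[[0, 0]], cluster_std=0.5, random_state=42)
X_outliers = rng.uniform(low=-6, high=6, size=(30, 2))
X = np.vstack([X_inliers, X_outliers])
y = np.r_[np.zeros(len(X_inliers)), np.ones(len(X_outliers))]

# 1. Train on the whole dataset, which is assumed to contain outliers.
clf = IsolationForest(random_state=42).fit(X)

# 2. decision_function is higher for inliers, so negate it to obtain an outlier
#    score and evaluate it against the ground-truth labels.
scores = -clf.decision_function(X)
print(f"ROC AUC: {roc_auc_score(y, scores):.3f}")
```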
```
# Author: Pharuj Rajborirug <[email protected]>
# License: BSD 3 clause
print(__doc__)
```
Define a data preprocessing function
------------------------------------
The example uses real-world datasets available in [`sklearn.datasets`](../../modules/classes#module-sklearn.datasets "sklearn.datasets") and the sample size of some datasets is reduced to speed up computation. After the data preprocessing, the datasets’ targets will have two classes, 0 representing inliers and 1 representing outliers. The `preprocess_dataset` function returns data and target.
```
import numpy as np
from sklearn.datasets import fetch_kddcup99, fetch_covtype, fetch_openml
from sklearn.preprocessing import LabelBinarizer
import pandas as pd
rng = np.random.RandomState(42)
def preprocess_dataset(dataset_name):
# loading and vectorization
print(f"Loading {dataset_name} data")
if dataset_name in ["http", "smtp", "SA", "SF"]:
dataset = fetch_kddcup99(subset=dataset_name, percent10=True, random_state=rng)
X = dataset.data
y = dataset.target
lb = LabelBinarizer()
if dataset_name == "SF":
idx = rng.choice(X.shape[0], int(X.shape[0] * 0.1), replace=False)
X = X[idx] # reduce the sample size
y = y[idx]
x1 = lb.fit_transform(X[:, 1].astype(str))
X = np.c_[X[:, :1], x1, X[:, 2:]]
elif dataset_name == "SA":
idx = rng.choice(X.shape[0], int(X.shape[0] * 0.1), replace=False)
X = X[idx] # reduce the sample size
y = y[idx]
x1 = lb.fit_transform(X[:, 1].astype(str))
x2 = lb.fit_transform(X[:, 2].astype(str))
x3 = lb.fit_transform(X[:, 3].astype(str))
X = np.c_[X[:, :1], x1, x2, x3, X[:, 4:]]
y = (y != b"normal.").astype(int)
if dataset_name == "forestcover":
dataset = fetch_covtype()
X = dataset.data
y = dataset.target
idx = rng.choice(X.shape[0], int(X.shape[0] * 0.1), replace=False)
X = X[idx] # reduce the sample size
y = y[idx]
# inliers are those with attribute 2
# outliers are those with attribute 4
s = (y == 2) + (y == 4)
X = X[s, :]
y = y[s]
y = (y != 2).astype(int)
if dataset_name in ["glass", "wdbc", "cardiotocography"]:
dataset = fetch_openml(name=dataset_name, version=1, as_frame=False)
X = dataset.data
y = dataset.target
if dataset_name == "glass":
s = y == "tableware"
y = s.astype(int)
if dataset_name == "wdbc":
s = y == "2"
y = s.astype(int)
X_mal, y_mal = X[s], y[s]
X_ben, y_ben = X[~s], y[~s]
# downsampled to 39 points (9.8% outliers)
idx = rng.choice(y_mal.shape[0], 39, replace=False)
X_mal2 = X_mal[idx]
y_mal2 = y_mal[idx]
X = np.concatenate((X_ben, X_mal2), axis=0)
y = np.concatenate((y_ben, y_mal2), axis=0)
if dataset_name == "cardiotocography":
s = y == "3"
y = s.astype(int)
# 0 represents inliers, and 1 represents outliers
y = pd.Series(y, dtype="category")
return (X, y)
```
Define an outlier prediction function
-------------------------------------
There is no particular reason to choose the algorithms [`LocalOutlierFactor`](../../modules/generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") and [`IsolationForest`](../../modules/generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest"). The goal is to show that different algorithms perform well on different datasets. The following `compute_prediction` function returns the average outlier score of X.
```
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest
def compute_prediction(X, model_name):
print(f"Computing {model_name} prediction...")
if model_name == "LOF":
clf = LocalOutlierFactor(n_neighbors=20, contamination="auto")
clf.fit(X)
y_pred = clf.negative_outlier_factor_
if model_name == "IForest":
clf = IsolationForest(random_state=rng, contamination="auto")
y_pred = clf.fit(X).decision_function(X)
return y_pred
```
Plot and interpret results
--------------------------
The algorithm performance relates to how good the true positive rate (TPR) is at low values of the false positive rate (FPR). The best algorithms have the curve towards the top-left of the plot and an area under the curve (AUC) close to 1. The diagonal dashed line represents a random classification of outliers and inliers.
```
import math
import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay
datasets_name = [
"http",
"smtp",
"SA",
"SF",
"forestcover",
"glass",
"wdbc",
"cardiotocography",
]
models_name = [
"LOF",
"IForest",
]
# plotting parameters
cols = 2
linewidth = 1
pos_label = 0  # means 0 belongs to the positive class
rows = math.ceil(len(datasets_name) / cols)
fig, axs = plt.subplots(rows, cols, figsize=(10, rows * 3))
for i, dataset_name in enumerate(datasets_name):
(X, y) = preprocess_dataset(dataset_name=dataset_name)
for model_name in models_name:
y_pred = compute_prediction(X, model_name=model_name)
display = RocCurveDisplay.from_predictions(
y,
y_pred,
pos_label=pos_label,
name=model_name,
linewidth=linewidth,
ax=axs[i // cols, i % cols],
)
axs[i // cols, i % cols].plot([0, 1], [0, 1], linewidth=linewidth, linestyle=":")
axs[i // cols, i % cols].set_title(dataset_name)
axs[i // cols, i % cols].set_xlabel("False Positive Rate")
axs[i // cols, i % cols].set_ylabel("True Positive Rate")
plt.tight_layout(pad=2.0) # spacing between subplots
plt.show()
```
```
Loading http data
Computing LOF prediction...
Computing IForest prediction...
Loading smtp data
Computing LOF prediction...
Computing IForest prediction...
Loading SA data
Computing LOF prediction...
Computing IForest prediction...
Loading SF data
Computing LOF prediction...
Computing IForest prediction...
Loading forestcover data
Computing LOF prediction...
Computing IForest prediction...
Loading glass data
Computing LOF prediction...
Computing IForest prediction...
Loading wdbc data
Computing LOF prediction...
Computing IForest prediction...
Loading cardiotocography data
Computing LOF prediction...
Computing IForest prediction...
```
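The area under each curve can also be summarized numerically. The following short sketch is an addition to this example (it assumes the `preprocess_dataset` and `compute_prediction` functions and the `pos_label` convention defined above, and note that it re-runs the data preprocessing for each dataset); it prints one ROC AUC value per dataset and model.
```
from sklearn.metrics import roc_auc_score

# Summarize each (dataset, model) pair by its ROC AUC. As above, label 0
# (inliers) is treated as the positive class, and the scores returned by
# compute_prediction are larger for inlier-like samples.
for dataset_name in datasets_name:
    X, y = preprocess_dataset(dataset_name=dataset_name)
    for model_name in models_name:
        y_pred = compute_prediction(X, model_name=model_name)
        auc = roc_auc_score(y == pos_label, y_pred)
        print(f"{dataset_name:>16} | {model_name:>7} | ROC AUC = {auc:.3f}")
```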
**Total running time of the script:** ( 0 minutes 51.395 seconds)
[`Download Python source code: plot_outlier_detection_bench.py`](https://scikit-learn.org/1.1/_downloads/b7e32fe54d613dce0d3c376377af061d/plot_outlier_detection_bench.py)
[`Download Jupyter notebook: plot_outlier_detection_bench.ipynb`](https://scikit-learn.org/1.1/_downloads/eacb6a63c887dafcff02b3cee64854ef/plot_outlier_detection_bench.ipynb)
scikit_learn Face completion with a multi-output estimators Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-multioutput-face-completion-py) to download the full example code or to run this example in your browser via Binder
Face completion with multi-output estimators
==============================================
This example shows the use of multi-output estimators to complete images. The goal is to predict the lower half of a face given its upper half.
The first column of images shows true faces. The next columns illustrate how extremely randomized trees, k nearest neighbors, linear regression and ridge regression complete the lower half of those faces.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.utils.validation import check_random_state
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import RidgeCV
# Load the faces datasets
data, targets = fetch_olivetti_faces(return_X_y=True)
train = data[targets < 30]
test = data[targets >= 30] # Test on independent people
# Test on a subset of people
n_faces = 5
rng = check_random_state(4)
face_ids = rng.randint(test.shape[0], size=(n_faces,))
test = test[face_ids, :]
n_pixels = data.shape[1]
# Upper half of the faces
X_train = train[:, : (n_pixels + 1) // 2]
# Lower half of the faces
y_train = train[:, n_pixels // 2 :]
X_test = test[:, : (n_pixels + 1) // 2]
y_test = test[:, n_pixels // 2 :]
# Fit estimators
ESTIMATORS = {
"Extra trees": ExtraTreesRegressor(
n_estimators=10, max_features=32, random_state=0
),
"K-nn": KNeighborsRegressor(),
"Linear regression": LinearRegression(),
"Ridge": RidgeCV(),
}
y_test_predict = dict()
for name, estimator in ESTIMATORS.items():
estimator.fit(X_train, y_train)
y_test_predict[name] = estimator.predict(X_test)
# Plot the completed faces
image_shape = (64, 64)
n_cols = 1 + len(ESTIMATORS)
plt.figure(figsize=(2.0 * n_cols, 2.26 * n_faces))
plt.suptitle("Face completion with multi-output estimators", size=16)
for i in range(n_faces):
true_face = np.hstack((X_test[i], y_test[i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1, title="true faces")
sub.axis("off")
sub.imshow(
true_face.reshape(image_shape), cmap=plt.cm.gray, interpolation="nearest"
)
for j, est in enumerate(sorted(ESTIMATORS)):
completed_face = np.hstack((X_test[i], y_test_predict[est][i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j, title=est)
sub.axis("off")
sub.imshow(
completed_face.reshape(image_shape),
cmap=plt.cm.gray,
interpolation="nearest",
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.758 seconds)
[`Download Python source code: plot_multioutput_face_completion.py`](https://scikit-learn.org/1.1/_downloads/ea40aa6b32c0c42d58220adef4c4e0b8/plot_multioutput_face_completion.py)
[`Download Jupyter notebook: plot_multioutput_face_completion.ipynb`](https://scikit-learn.org/1.1/_downloads/59c5e476396afdc4788c35a49f6289b9/plot_multioutput_face_completion.ipynb)
scikit_learn ROC Curve with Visualization API Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-roc-curve-visualization-api-py) to download the full example code or to run this example in your browser via Binder
ROC Curve with Visualization API
================================
Scikit-learn defines a simple API for creating visualizations for machine learning. The key feature of this API is to allow for quick plotting and visual adjustments without recalculation. In this example, we will demonstrate how to use the visualization API by comparing ROC curves.
Load Data and Train a SVC
-------------------------
First, we load the wine dataset and convert it to a binary classification problem. Then, we train a support vector classifier on a training dataset.
```
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import RocCurveDisplay
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
X, y = load_wine(return_X_y=True)
y = y == 2
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
svc = SVC(random_state=42)
svc.fit(X_train, y_train)
```
```
SVC(random_state=42)
```
Plotting the ROC Curve
----------------------
Next, we plot the ROC curve with a single call to [`sklearn.metrics.RocCurveDisplay.from_estimator`](../../modules/generated/sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay.from_estimator "sklearn.metrics.RocCurveDisplay.from_estimator"). The returned `svc_disp` object allows us to continue using the already computed ROC curve for the SVC in future plots.
```
svc_disp = RocCurveDisplay.from_estimator(svc, X_test, y_test)
plt.show()
```
Training a Random Forest and Plotting the ROC Curve
---------------------------------------------------
We train a random forest classifier and create a plot comparing it to the SVC ROC curve. Notice how `svc_disp` uses [`plot`](../../modules/generated/sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay.plot "sklearn.metrics.RocCurveDisplay.plot") to plot the SVC ROC curve without recomputing the values of the ROC curve itself. Furthermore, we pass `alpha=0.8` to the plot functions to adjust the alpha values of the curves.
```
rfc = RandomForestClassifier(n_estimators=10, random_state=42)
rfc.fit(X_train, y_train)
ax = plt.gca()
rfc_disp = RocCurveDisplay.from_estimator(rfc, X_test, y_test, ax=ax, alpha=0.8)
svc_disp.plot(ax=ax, alpha=0.8)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.136 seconds)
[`Download Python source code: plot_roc_curve_visualization_api.py`](https://scikit-learn.org/1.1/_downloads/837ade0d39a9ca8d8a3cc25eb9433a58/plot_roc_curve_visualization_api.py)
[`Download Jupyter notebook: plot_roc_curve_visualization_api.ipynb`](https://scikit-learn.org/1.1/_downloads/3577ee5e1121d4b5da38feb779aa44bb/plot_roc_curve_visualization_api.ipynb)
scikit_learn Explicit feature map approximation for RBF kernels Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-kernel-approximation-py) to download the full example code or to run this example in your browser via Binder
Explicit feature map approximation for RBF kernels
==================================================
An example illustrating the approximation of the feature map of an RBF kernel.
It shows how to use [`RBFSampler`](../../modules/generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler") and [`Nystroem`](../../modules/generated/sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem "sklearn.kernel_approximation.Nystroem") to approximate the feature map of an RBF kernel for classification with an SVM on the digits dataset. Results using a linear SVM in the original space, a linear SVM using the approximate mappings and using a kernelized SVM are compared. Timings and accuracy for varying amounts of Monte Carlo samplings (in the case of [`RBFSampler`](../../modules/generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler"), which uses random Fourier features) and different sized subsets of the training set (for [`Nystroem`](../../modules/generated/sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem "sklearn.kernel_approximation.Nystroem")) for the approximate mapping are shown.
Please note that the dataset here is not large enough to show the benefits of kernel approximation, as the exact SVM is still reasonably fast.
Sampling more dimensions clearly leads to better classification results, but comes at a greater cost. This means there is a tradeoff between runtime and accuracy, given by the parameter n\_components. Note that solving the Linear SVM and also the approximate kernel SVM could be greatly accelerated by using stochastic gradient descent via [`SGDClassifier`](../../modules/generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier"). This is not easily possible for the case of the kernelized SVM.
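As a rough sketch of that remark (not part of the original example; the hyperparameters below are illustrative assumptions, not tuned values), the linear step of an approximate-kernel pipeline could be replaced by an `SGDClassifier`:
```
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Approximate RBF feature map followed by a linear model trained with SGD.
sgd_rbf_svm = make_pipeline(
    RBFSampler(gamma=0.2, n_components=300, random_state=1),
    SGDClassifier(loss="hinge", max_iter=1000, tol=1e-3, random_state=1),
)
# Usage, assuming data_train/targets_train as prepared in the sections below:
# sgd_rbf_svm.fit(data_train, targets_train)
# sgd_rbf_svm.score(data_test, targets_test)
```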
Python package and dataset imports, load dataset
------------------------------------------------
```
# Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>
# Andreas Mueller <[email protected]>
# License: BSD 3 clause
# Standard scientific Python imports
import matplotlib.pyplot as plt
import numpy as np
from time import time
# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, pipeline
from sklearn.kernel_approximation import RBFSampler, Nystroem
from sklearn.decomposition import PCA
# The digits dataset
digits = datasets.load_digits(n_class=9)
```
Timing and accuracy plots
-------------------------
To apply a classifier to this data, we need to flatten the images, turning the data into a (samples, features) matrix:
```
n_samples = len(digits.data)
data = digits.data / 16.0
data -= data.mean(axis=0)
# We learn the digits on the first half of the digits
data_train, targets_train = (data[: n_samples // 2], digits.target[: n_samples // 2])
# Now predict the value of the digit on the second half:
data_test, targets_test = (data[n_samples // 2 :], digits.target[n_samples // 2 :])
# data_test = scaler.transform(data_test)
# Create a classifier: a support vector classifier
kernel_svm = svm.SVC(gamma=0.2)
linear_svm = svm.LinearSVC()
# create pipeline from kernel approximation
# and linear svm
feature_map_fourier = RBFSampler(gamma=0.2, random_state=1)
feature_map_nystroem = Nystroem(gamma=0.2, random_state=1)
fourier_approx_svm = pipeline.Pipeline(
[("feature_map", feature_map_fourier), ("svm", svm.LinearSVC())]
)
nystroem_approx_svm = pipeline.Pipeline(
[("feature_map", feature_map_nystroem), ("svm", svm.LinearSVC())]
)
# fit and predict using linear and kernel svm:
kernel_svm_time = time()
kernel_svm.fit(data_train, targets_train)
kernel_svm_score = kernel_svm.score(data_test, targets_test)
kernel_svm_time = time() - kernel_svm_time
linear_svm_time = time()
linear_svm.fit(data_train, targets_train)
linear_svm_score = linear_svm.score(data_test, targets_test)
linear_svm_time = time() - linear_svm_time
sample_sizes = 30 * np.arange(1, 10)
fourier_scores = []
nystroem_scores = []
fourier_times = []
nystroem_times = []
for D in sample_sizes:
fourier_approx_svm.set_params(feature_map__n_components=D)
nystroem_approx_svm.set_params(feature_map__n_components=D)
start = time()
nystroem_approx_svm.fit(data_train, targets_train)
nystroem_times.append(time() - start)
start = time()
fourier_approx_svm.fit(data_train, targets_train)
fourier_times.append(time() - start)
fourier_score = fourier_approx_svm.score(data_test, targets_test)
nystroem_score = nystroem_approx_svm.score(data_test, targets_test)
nystroem_scores.append(nystroem_score)
fourier_scores.append(fourier_score)
# plot the results:
plt.figure(figsize=(16, 4))
accuracy = plt.subplot(121)
# second y axis for timings
timescale = plt.subplot(122)
accuracy.plot(sample_sizes, nystroem_scores, label="Nystroem approx. kernel")
timescale.plot(sample_sizes, nystroem_times, "--", label="Nystroem approx. kernel")
accuracy.plot(sample_sizes, fourier_scores, label="Fourier approx. kernel")
timescale.plot(sample_sizes, fourier_times, "--", label="Fourier approx. kernel")
# horizontal lines for exact rbf and linear kernels:
accuracy.plot(
[sample_sizes[0], sample_sizes[-1]],
[linear_svm_score, linear_svm_score],
label="linear svm",
)
timescale.plot(
[sample_sizes[0], sample_sizes[-1]],
[linear_svm_time, linear_svm_time],
"--",
label="linear svm",
)
accuracy.plot(
[sample_sizes[0], sample_sizes[-1]],
[kernel_svm_score, kernel_svm_score],
label="rbf svm",
)
timescale.plot(
[sample_sizes[0], sample_sizes[-1]],
[kernel_svm_time, kernel_svm_time],
"--",
label="rbf svm",
)
# vertical line for dataset dimensionality = 64
accuracy.plot([64, 64], [0.7, 1], label="n_features")
# legends and labels
accuracy.set_title("Classification accuracy")
timescale.set_title("Training times")
accuracy.set_xlim(sample_sizes[0], sample_sizes[-1])
accuracy.set_xticks(())
accuracy.set_ylim(np.min(fourier_scores), 1)
timescale.set_xlabel("Sampling steps = transformed feature dimension")
accuracy.set_ylabel("Classification accuracy")
timescale.set_ylabel("Training time in seconds")
accuracy.legend(loc="best")
timescale.legend(loc="best")
plt.tight_layout()
plt.show()
```
Decision Surfaces of RBF Kernel SVM and Linear SVM
--------------------------------------------------
The second plot visualizes the decision surfaces of the RBF kernel SVM and the linear SVM with approximate kernel maps. The plot shows the decision surfaces of the classifiers projected onto the first two principal components of the data. This visualization should be taken with a grain of salt since it is just an interesting slice through the decision surface in 64 dimensions. In particular, note that a datapoint (represented as a dot) is not necessarily classified into the region it lies in, since it will not lie on the plane that the first two principal components span. The usage of [`RBFSampler`](../../modules/generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler") and [`Nystroem`](../../modules/generated/sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem "sklearn.kernel_approximation.Nystroem") is described in detail in [Kernel Approximation](../../modules/kernel_approximation#kernel-approximation).
```
# visualize the decision surface, projected down to the first
# two principal components of the dataset
pca = PCA(n_components=8).fit(data_train)
X = pca.transform(data_train)
# Generate grid along first two principal components
multiples = np.arange(-2, 2, 0.1)
# steps along first component
first = multiples[:, np.newaxis] * pca.components_[0, :]
# steps along second component
second = multiples[:, np.newaxis] * pca.components_[1, :]
# combine
grid = first[np.newaxis, :, :] + second[:, np.newaxis, :]
flat_grid = grid.reshape(-1, data.shape[1])
# title for the plots
titles = [
"SVC with rbf kernel",
"SVC (linear kernel)\n with Fourier rbf feature map\nn_components=100",
"SVC (linear kernel)\n with Nystroem rbf feature map\nn_components=100",
]
plt.figure(figsize=(18, 7.5))
plt.rcParams.update({"font.size": 14})
# predict and plot
for i, clf in enumerate((kernel_svm, nystroem_approx_svm, fourier_approx_svm)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(1, 3, i + 1)
Z = clf.predict(flat_grid)
# Put the result into a color plot
Z = Z.reshape(grid.shape[:-1])
plt.contourf(multiples, multiples, Z, cmap=plt.cm.Paired)
plt.axis("off")
# Plot also the training points
plt.scatter(
X[:, 0], X[:, 1], c=targets_train, cmap=plt.cm.Paired, edgecolors=(0, 0, 0)
)
plt.title(titles[i])
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.183 seconds)
[`Download Python source code: plot_kernel_approximation.py`](https://scikit-learn.org/1.1/_downloads/e1de994d83a2eb57b73b2f69bb7ada4a/plot_kernel_approximation.py)
[`Download Jupyter notebook: plot_kernel_approximation.ipynb`](https://scikit-learn.org/1.1/_downloads/083d8568c199bebbc1a847fc6c917e9e/plot_kernel_approximation.ipynb)
scikit_learn Isotonic Regression Note
Click [here](#sphx-glr-download-auto-examples-miscellaneous-plot-isotonic-regression-py) to download the full example code or to run this example in your browser via Binder
Isotonic Regression
===================
An illustration of the isotonic regression on generated data (non-linear monotonic trend with homoscedastic uniform noise).
The isotonic regression algorithm finds a non-decreasing approximation of a function while minimizing the mean squared error on the training data. The benefit of such a non-parametric model is that it does not assume any shape for the target function besides monotonicity. For comparison a linear regression is also presented.
The plot on the right-hand side shows the model prediction function that results from the linear interpolation of threshold points. The threshold points are a subset of the training input observations and their matching target values are computed by the isotonic non-parametric fit.
```
# Author: Nelle Varoquaux <[email protected]>
# Alexandre Gramfort <[email protected]>
# License: BSD
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
from sklearn.linear_model import LinearRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.utils import check_random_state
n = 100
x = np.arange(n)
rs = check_random_state(0)
y = rs.randint(-50, 50, size=(n,)) + 50.0 * np.log1p(np.arange(n))
```
Fit IsotonicRegression and LinearRegression models:
```
ir = IsotonicRegression(out_of_bounds="clip")
y_ = ir.fit_transform(x, y)
lr = LinearRegression()
lr.fit(x[:, np.newaxis], y) # x needs to be 2d for LinearRegression
```
```
LinearRegression()
```
Plot results:
```
segments = [[[i, y[i]], [i, y_[i]]] for i in range(n)]
lc = LineCollection(segments, zorder=0)
lc.set_array(np.ones(len(y)))
lc.set_linewidths(np.full(n, 0.5))
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(12, 6))
ax0.plot(x, y, "C0.", markersize=12)
ax0.plot(x, y_, "C1.-", markersize=12)
ax0.plot(x, lr.predict(x[:, np.newaxis]), "C2-")
ax0.add_collection(lc)
ax0.legend(("Training data", "Isotonic fit", "Linear fit"), loc="lower right")
ax0.set_title("Isotonic regression fit on noisy data (n=%d)" % n)
x_test = np.linspace(-10, 110, 1000)
ax1.plot(x_test, ir.predict(x_test), "C1-")
ax1.plot(ir.X_thresholds_, ir.y_thresholds_, "C1.", markersize=12)
ax1.set_title("Prediction function (%d thresholds)" % len(ir.X_thresholds_))
plt.show()
```
Note that we explicitly passed `out_of_bounds="clip"` to the constructor of `IsotonicRegression` to control the way the model extrapolates outside of the range of data observed in the training set. This “clipping” extrapolation can be seen on the plot of the decision function on the right-hand side.
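To make the role of `out_of_bounds` concrete, here is a small added sketch (assuming the `x` and `y` arrays defined above) that compares the clipping behaviour with `out_of_bounds="nan"` on points outside the training range:
```
import numpy as np
from sklearn.isotonic import IsotonicRegression

ir_clip = IsotonicRegression(out_of_bounds="clip").fit(x, y)
ir_nan = IsotonicRegression(out_of_bounds="nan").fit(x, y)
x_outside = np.array([-5.0, 120.0])  # outside the observed range [0, 99]
print(ir_clip.predict(x_outside))  # clipped to the boundary fitted values
print(ir_nan.predict(x_outside))   # [nan, nan]
```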
**Total running time of the script:** ( 0 minutes 0.122 seconds)
[`Download Python source code: plot_isotonic_regression.py`](https://scikit-learn.org/1.1/_downloads/8209ef76ac59bf01aad3721a522859ef/plot_isotonic_regression.py)
[`Download Jupyter notebook: plot_isotonic_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/f2e78295c97b04635d9e749896f8e08b/plot_isotonic_regression.ipynb)
scikit_learn Model-based and sequential feature selection Note
Click [here](#sphx-glr-download-auto-examples-feature-selection-plot-select-from-model-diabetes-py) to download the full example code or to run this example in your browser via Binder
Model-based and sequential feature selection
============================================
This example illustrates and compares two approaches for feature selection: [`SelectFromModel`](../../modules/generated/sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel "sklearn.feature_selection.SelectFromModel") which is based on feature importance, and `SequentialFeatureSelection` which relies on a greedy approach.
We use the Diabetes dataset, which consists of 10 features collected from 442 diabetes patients.
Authors: [Manoj Kumar](mailto:mks542%40nyu.edu), [Maria Telenczuk](https://github.com/maikia), Nicolas Hug.
License: BSD 3 clause
Loading the data
----------------
We first load the diabetes dataset which is available from within scikit-learn, and print its description:
```
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
X, y = diabetes.data, diabetes.target
print(diabetes.DESCR)
```
```
.. _diabetes_dataset:
Diabetes dataset
----------------
Ten baseline variables, age, sex, body mass index, average blood
pressure, and six blood serum measurements were obtained for each of n =
442 diabetes patients, as well as the response of interest, a
quantitative measure of disease progression one year after baseline.
**Data Set Characteristics:**
:Number of Instances: 442
:Number of Attributes: First 10 columns are numeric predictive values
:Target: Column 11 is a quantitative measure of disease progression one year after baseline
:Attribute Information:
- age age in years
- sex
- bmi body mass index
- bp average blood pressure
- s1 tc, total serum cholesterol
- s2 ldl, low-density lipoproteins
- s3 hdl, high-density lipoproteins
- s4 tch, total cholesterol / HDL
- s5 ltg, possibly log of serum triglycerides level
- s6 glu, blood sugar level
Note: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times the square root of `n_samples` (i.e. the sum of squares of each column totals 1).
Source URL:
https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
For more information see:
Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) "Least Angle Regression," Annals of Statistics (with discussion), 407-499.
(https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)
```
Feature importance from coefficients
------------------------------------
To get an idea of the importance of the features, we are going to use the [`RidgeCV`](../../modules/generated/sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV") estimator. The features with the highest absolute `coef_` value are considered the most important. We can observe the coefficients directly without needing to scale them (or scale the data) because from the description above, we know that the features were already standardized. For a more complete example on the interpretations of the coefficients of linear models, you may refer to [Common pitfalls in the interpretation of coefficients of linear models](../inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py).
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import RidgeCV
ridge = RidgeCV(alphas=np.logspace(-6, 6, num=5)).fit(X, y)
importance = np.abs(ridge.coef_)
feature_names = np.array(diabetes.feature_names)
plt.bar(height=importance, x=feature_names)
plt.title("Feature importances via coefficients")
plt.show()
```
Selecting features based on importance
--------------------------------------
Now we want to select the two features which are the most important according to the coefficients. The [`SelectFromModel`](../../modules/generated/sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel "sklearn.feature_selection.SelectFromModel") is meant just for that. [`SelectFromModel`](../../modules/generated/sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel "sklearn.feature_selection.SelectFromModel") accepts a `threshold` parameter and will select the features whose importance (defined by the coefficients) is above this threshold.
Since we want to select only 2 features, we will set this threshold slightly above the coefficient of the third most important feature.
```
from sklearn.feature_selection import SelectFromModel
from time import time
threshold = np.sort(importance)[-3] + 0.01
tic = time()
sfm = SelectFromModel(ridge, threshold=threshold).fit(X, y)
toc = time()
print(f"Features selected by SelectFromModel: {feature_names[sfm.get_support()]}")
print(f"Done in {toc - tic:.3f}s")
```
```
Features selected by SelectFromModel: ['s1' 's5']
Done in 0.001s
```
Selecting features with Sequential Feature Selection
----------------------------------------------------
Another way of selecting features is to use [`SequentialFeatureSelector`](../../modules/generated/sklearn.feature_selection.sequentialfeatureselector#sklearn.feature_selection.SequentialFeatureSelector "sklearn.feature_selection.SequentialFeatureSelector") (SFS). SFS is a greedy procedure where, at each iteration, we choose the best new feature to add to our selected features based on a cross-validation score. That is, we start with 0 features and choose the best single feature with the highest score. The procedure is repeated until we reach the desired number of selected features.
We can also go in the reverse direction (backward SFS), *i.e.* start with all the features and greedily choose features to remove one by one. We illustrate both approaches here.
```
from sklearn.feature_selection import SequentialFeatureSelector
tic_fwd = time()
sfs_forward = SequentialFeatureSelector(
ridge, n_features_to_select=2, direction="forward"
).fit(X, y)
toc_fwd = time()
tic_bwd = time()
sfs_backward = SequentialFeatureSelector(
ridge, n_features_to_select=2, direction="backward"
).fit(X, y)
toc_bwd = time()
print(
"Features selected by forward sequential selection: "
f"{feature_names[sfs_forward.get_support()]}"
)
print(f"Done in {toc_fwd - tic_fwd:.3f}s")
print(
"Features selected by backward sequential selection: "
f"{feature_names[sfs_backward.get_support()]}"
)
print(f"Done in {toc_bwd - tic_bwd:.3f}s")
```
```
Features selected by forward sequential selection: ['bmi' 's5']
Done in 0.114s
Features selected by backward sequential selection: ['bmi' 's5']
Done in 0.336s
```
Discussion
----------
Interestingly, forward and backward selection have selected the same set of features. In general, this isn’t the case and the two methods would lead to different results.
We also note that the features selected by SFS differ from those selected by feature importance: SFS selects `bmi` instead of `s1`. This does sound reasonable though, since `bmi` corresponds to the third most important feature according to the coefficients. It is quite remarkable considering that SFS makes no use of the coefficients at all.
To finish with, we should note that [`SelectFromModel`](../../modules/generated/sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel "sklearn.feature_selection.SelectFromModel") is significantly faster than SFS. Indeed, [`SelectFromModel`](../../modules/generated/sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel "sklearn.feature_selection.SelectFromModel") only needs to fit a model once, while SFS needs to cross-validate many different models for each of the iterations. SFS however works with any model, while [`SelectFromModel`](../../modules/generated/sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel "sklearn.feature_selection.SelectFromModel") requires the underlying estimator to expose a `coef_` attribute or a `feature_importances_` attribute. The forward SFS is faster than the backward SFS because it only needs to perform `n_features_to_select = 2` iterations, while the backward SFS needs to perform `n_features - n_features_to_select = 8` iterations.
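As a quick sanity check (an addition to this example, assuming `ridge`, `X`, `y`, `sfm` and `sfs_forward` from above), one can cross-validate the estimator on each of the selected two-feature subsets:
```
from sklearn.model_selection import cross_val_score

# Compare the two feature subsets chosen by each strategy with 5-fold CV.
for name, selector in [("SelectFromModel", sfm), ("Forward SFS", sfs_forward)]:
    X_selected = selector.transform(X)
    scores = cross_val_score(ridge, X_selected, y, cv=5)
    print(f"{name}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```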
**Total running time of the script:** ( 0 minutes 0.517 seconds)
[`Download Python source code: plot_select_from_model_diabetes.py`](https://scikit-learn.org/1.1/_downloads/53e76f761ef04e8d06fa5757554513b0/plot_select_from_model_diabetes.py)
[`Download Jupyter notebook: plot_select_from_model_diabetes.ipynb`](https://scikit-learn.org/1.1/_downloads/f1e887db7b101f4c858db7db12e9c7e2/plot_select_from_model_diabetes.ipynb)
scikit_learn Pipeline ANOVA SVM Note
Click [here](#sphx-glr-download-auto-examples-feature-selection-plot-feature-selection-pipeline-py) to download the full example code or to run this example in your browser via Binder
Pipeline ANOVA SVM
==================
This example shows how feature selection can be easily integrated within a machine learning pipeline.
We also show that you can easily introspect part of the pipeline.
We will start by generating a binary classification dataset. Subsequently, we will divide the dataset into two subsets.
```
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(
n_features=20,
n_informative=3,
n_redundant=0,
n_classes=2,
n_clusters_per_class=2,
random_state=42,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
A common mistake made with feature selection is to search for a subset of discriminative features on the full dataset instead of only on the training set. Using the scikit-learn [`Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") prevents such a mistake.
Here, we will demonstrate how to build a pipeline where the first step will be the feature selection.
When calling `fit` on the training data, a subset of features will be selected and the indices of these selected features will be stored. The feature selector will subsequently reduce the number of features and pass this subset to the classifier, which will then be trained.
```
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
anova_filter = SelectKBest(f_classif, k=3)
clf = LinearSVC()
anova_svm = make_pipeline(anova_filter, clf)
anova_svm.fit(X_train, y_train)
```
```
Pipeline(steps=[('selectkbest', SelectKBest(k=3)), ('linearsvc', LinearSVC())])
```
Once the training is accomplished, we can predict on new unseen samples. In this case, the feature selector will only select the most discriminative features based on the information stored during training. Then, the data will be passed to the classifier which will make the prediction.
Here, we report the final metrics via a classification report.
```
from sklearn.metrics import classification_report
y_pred = anova_svm.predict(X_test)
print(classification_report(y_test, y_pred))
```
```
precision recall f1-score support
0 0.92 0.80 0.86 15
1 0.75 0.90 0.82 10
accuracy 0.84 25
macro avg 0.84 0.85 0.84 25
weighted avg 0.85 0.84 0.84 25
```
Be aware that you can inspect a step in the pipeline. For instance, we might be interested in the parameters of the classifier. Since we selected three features, we expect to have three coefficients.
```
anova_svm[-1].coef_
```
```
array([[0.75790919, 0.27158706, 0.26109741]])
```
However, we do not know which features were selected from the original dataset. We could proceed in several ways. Here, we will invert the transformation of these coefficients to get information about the original space.
```
anova_svm[:-1].inverse_transform(anova_svm[-1].coef_)
```
```
array([[0. , 0. , 0.75790919, 0. , 0. ,
0. , 0. , 0. , 0. , 0.27158706,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0.26109741]])
```
We can see from the positions of the non-zero entries which features of the original dataset were selected by the first step.
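Alternatively (an added sketch, assuming `anova_svm` fitted as above), the fitted `SelectKBest` step can report the selected column indices directly, avoiding the round trip through `inverse_transform`:
```
# Indices of the columns retained by the ANOVA filter (the first pipeline step).
selected_idx = anova_svm[0].get_support(indices=True)
print(selected_idx)  # should match the non-zero positions shown above
```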
**Total running time of the script:** ( 0 minutes 0.008 seconds)
[`Download Python source code: plot_feature_selection_pipeline.py`](https://scikit-learn.org/1.1/_downloads/5a7e586367163444711012a4c5214817/plot_feature_selection_pipeline.py)
[`Download Jupyter notebook: plot_feature_selection_pipeline.ipynb`](https://scikit-learn.org/1.1/_downloads/51e6f272e94e3b63cfd48c4b41fbaa10/plot_feature_selection_pipeline.ipynb)
scikit_learn Recursive feature elimination with cross-validation Note
Click [here](#sphx-glr-download-auto-examples-feature-selection-plot-rfe-with-cross-validation-py) to download the full example code or to run this example in your browser via Binder
Recursive feature elimination with cross-validation
===================================================
A recursive feature elimination example with automatic tuning of the number of features selected with cross-validation.
```
Optimal number of features : 3
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:103: FutureWarning: The `grid_scores_` attribute is deprecated in version 1.0 in favor of `cv_results_` and will be removed in version 1.2.
warnings.warn(msg, category=FutureWarning)
```
```
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV
from sklearn.datasets import make_classification
# Build a classification task using 3 informative features
X, y = make_classification(
n_samples=1000,
n_features=25,
n_informative=3,
n_redundant=2,
n_repeated=0,
n_classes=8,
n_clusters_per_class=1,
random_state=0,
)
# Create the RFE object and compute a cross-validated score.
svc = SVC(kernel="linear")
# The "accuracy" scoring shows the proportion of correct classifications
min_features_to_select = 1 # Minimum number of features to consider
rfecv = RFECV(
estimator=svc,
step=1,
cv=StratifiedKFold(2),
scoring="accuracy",
min_features_to_select=min_features_to_select,
)
rfecv.fit(X, y)
print("Optimal number of features : %d" % rfecv.n_features_)
# Plot number of features VS. cross-validation scores
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (accuracy)")
plt.plot(
range(min_features_to_select, len(rfecv.grid_scores_) + min_features_to_select),
rfecv.grid_scores_,
)
plt.show()
```
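Because `grid_scores_` is deprecated (see the warning above), the same figure can be produced from `cv_results_` instead. The sketch below is an addition to this example; it assumes `rfecv` has been fitted as above and that the `"mean_test_score"` key is present in `rfecv.cv_results_`.
```
import matplotlib.pyplot as plt
import numpy as np

# Same plot as above, based on the non-deprecated cv_results_ mapping.
mean_scores = np.asarray(rfecv.cv_results_["mean_test_score"])
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Mean cross validation score (accuracy)")
plt.plot(
    range(min_features_to_select, len(mean_scores) + min_features_to_select),
    mean_scores,
)
plt.show()
```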
**Total running time of the script:** ( 0 minutes 1.943 seconds)
[`Download Python source code: plot_rfe_with_cross_validation.py`](https://scikit-learn.org/1.1/_downloads/592b2521e44501266ca5339d1fb123cb/plot_rfe_with_cross_validation.py)
[`Download Jupyter notebook: plot_rfe_with_cross_validation.ipynb`](https://scikit-learn.org/1.1/_downloads/949ed208b2147ed2b3e348e81fef52be/plot_rfe_with_cross_validation.ipynb)
scikit_learn Univariate Feature Selection Note
Click [here](#sphx-glr-download-auto-examples-feature-selection-plot-feature-selection-py) to download the full example code or to run this example in your browser via Binder
Univariate Feature Selection
============================
This notebook is an example of using univariate feature selection to improve classification accuracy on a noisy dataset.
In this example, some noisy (non-informative) features are added to the iris dataset. A support vector machine (SVM) is used to classify the dataset both before and after applying univariate feature selection. For each feature, we plot the p-values for the univariate feature selection and the corresponding weights of the SVMs. With this, we will compare model accuracy and examine the impact of univariate feature selection on model weights.
Generate sample data
--------------------
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
# The iris dataset
X, y = load_iris(return_X_y=True)
# Some noisy data not correlated
E = np.random.RandomState(42).uniform(0, 0.1, size=(X.shape[0], 20))
# Add the noisy data to the informative features
X = np.hstack((X, E))
# Split dataset to select feature and evaluate the classifier
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
```
Univariate feature selection
----------------------------
Univariate feature selection with F-test for feature scoring. We use the default selection function to select the four most significant features.
```
from sklearn.feature_selection import SelectKBest, f_classif
selector = SelectKBest(f_classif, k=4)
selector.fit(X_train, y_train)
scores = -np.log10(selector.pvalues_)
scores /= scores.max()
```
```
import matplotlib.pyplot as plt
X_indices = np.arange(X.shape[-1])
plt.figure(1)
plt.clf()
plt.bar(X_indices - 0.05, scores, width=0.2)
plt.title("Feature univariate score")
plt.xlabel("Feature number")
plt.ylabel(r"Univariate score ($-Log(p_{value})$)")
plt.show()
```
In the total set of features, only the 4 original features are significant. We can see that they have the highest scores with univariate feature selection.
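This can be confirmed programmatically; the short addition below (assuming `selector` fitted above) prints the indices of the retained features, which should be the four original iris columns:
```
# Indices of the features kept by SelectKBest (the 4 iris features come first).
print(selector.get_support(indices=True))  # expected: [0 1 2 3]
```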
Compare with SVMs
-----------------
Without univariate feature selection
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC
clf = make_pipeline(MinMaxScaler(), LinearSVC())
clf.fit(X_train, y_train)
print(
"Classification accuracy without selecting features: {:.3f}".format(
clf.score(X_test, y_test)
)
)
svm_weights = np.abs(clf[-1].coef_).sum(axis=0)
svm_weights /= svm_weights.sum()
```
```
Classification accuracy without selecting features: 0.789
```
After univariate feature selection
```
clf_selected = make_pipeline(SelectKBest(f_classif, k=4), MinMaxScaler(), LinearSVC())
clf_selected.fit(X_train, y_train)
print(
"Classification accuracy after univariate feature selection: {:.3f}".format(
clf_selected.score(X_test, y_test)
)
)
svm_weights_selected = np.abs(clf_selected[-1].coef_).sum(axis=0)
svm_weights_selected /= svm_weights_selected.sum()
```
```
Classification accuracy after univariate feature selection: 0.868
```
```
plt.bar(
X_indices - 0.45, scores, width=0.2, label=r"Univariate score ($-Log(p_{value})$)"
)
plt.bar(X_indices - 0.25, svm_weights, width=0.2, label="SVM weight")
plt.bar(
X_indices[selector.get_support()] - 0.05,
svm_weights_selected,
width=0.2,
label="SVM weights after selection",
)
plt.title("Comparing feature selection")
plt.xlabel("Feature number")
plt.yticks(())
plt.axis("tight")
plt.legend(loc="upper right")
plt.show()
```
Without univariate feature selection, the SVM assigns a large weight to the first 4 original significant features, but also selects many of the non-informative features. Applying univariate feature selection before the SVM increases the SVM weight attributed to the significant features, and will thus improve classification.
**Total running time of the script:** ( 0 minutes 0.168 seconds)
[`Download Python source code: plot_feature_selection.py`](https://scikit-learn.org/1.1/_downloads/62397dcd82eb2478e27036ac96fe2ab9/plot_feature_selection.py)
[`Download Jupyter notebook: plot_feature_selection.ipynb`](https://scikit-learn.org/1.1/_downloads/fe71806a900680d092025bf56d0dfcb3/plot_feature_selection.ipynb)
scikit_learn Comparison of F-test and mutual information Note
Click [here](#sphx-glr-download-auto-examples-feature-selection-plot-f-test-vs-mi-py) to download the full example code or to run this example in your browser via Binder
Comparison of F-test and mutual information
===========================================
This example illustrates the differences between univariate F-test statistics and mutual information.
We consider 3 features x\_1, x\_2, x\_3, distributed uniformly over [0, 1]; the target depends on them as follows:
y = x\_1 + sin(6 \* pi \* x\_2) + 0.1 \* N(0, 1), that is, the third feature is completely irrelevant.
The code below plots the dependency of y against individual x\_i and normalized values of univariate F-tests statistics and mutual information.
As F-test captures only linear dependency, it rates x\_1 as the most discriminative feature. On the other hand, mutual information can capture any kind of dependency between variables and it rates x\_2 as the most discriminative feature, which probably agrees better with our intuitive perception for this example. Both methods correctly mark x\_3 as irrelevant.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.feature_selection import f_regression, mutual_info_regression
np.random.seed(0)
X = np.random.rand(1000, 3)
y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * np.random.randn(1000)
f_test, _ = f_regression(X, y)
f_test /= np.max(f_test)
mi = mutual_info_regression(X, y)
mi /= np.max(mi)
plt.figure(figsize=(15, 5))
for i in range(3):
plt.subplot(1, 3, i + 1)
plt.scatter(X[:, i], y, edgecolor="black", s=20)
plt.xlabel("$x_{}$".format(i + 1), fontsize=14)
if i == 0:
plt.ylabel("$y$", fontsize=14)
plt.title("F-test={:.2f}, MI={:.2f}".format(f_test[i], mi[i]), fontsize=16)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.194 seconds)
[`Download Python source code: plot_f_test_vs_mi.py`](https://scikit-learn.org/1.1/_downloads/b15a0f93878767ffa709315b9ebf8a94/plot_f_test_vs_mi.py)
[`Download Jupyter notebook: plot_f_test_vs_mi.ipynb`](https://scikit-learn.org/1.1/_downloads/95e2652922af032381167a5aa13f2b36/plot_f_test_vs_mi.ipynb)
scikit_learn Recursive feature elimination Note
Click [here](#sphx-glr-download-auto-examples-feature-selection-plot-rfe-digits-py) to download the full example code or to run this example in your browser via Binder
Recursive feature elimination
=============================
A recursive feature elimination example showing the relevance of pixels in a digit classification task.
Note
See also [Recursive feature elimination with cross-validation](plot_rfe_with_cross_validation#sphx-glr-auto-examples-feature-selection-plot-rfe-with-cross-validation-py)
```
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
# Create the RFE object and rank each pixel
svc = SVC(kernel="linear", C=1)
rfe = RFE(estimator=svc, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.525 seconds)
[`Download Python source code: plot_rfe_digits.py`](https://scikit-learn.org/1.1/_downloads/f03a2a04a75f73f183e0407e663d121d/plot_rfe_digits.py)
[`Download Jupyter notebook: plot_rfe_digits.ipynb`](https://scikit-learn.org/1.1/_downloads/89b02c3e866d0979d0024f34b90877de/plot_rfe_digits.ipynb)
scikit_learn Permutation Importance with Multicollinear or Correlated Features Note
Click [here](#sphx-glr-download-auto-examples-inspection-plot-permutation-importance-multicollinear-py) to download the full example code or to run this example in your browser via Binder
Permutation Importance with Multicollinear or Correlated Features
=================================================================
In this example, we compute the permutation importance on the Wisconsin breast cancer dataset using [`permutation_importance`](../../modules/generated/sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance"). The [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") can easily get about 97% accuracy on a test dataset. Because this dataset contains multicollinear features, the permutation importance will show that none of the features are important. One approach to handling multicollinearity is by performing hierarchical clustering on the features’ Spearman rank-order correlations, picking a threshold, and keeping a single feature from each cluster.
Note
See also [Permutation Importance vs Random Forest Feature Importance (MDI)](plot_permutation_importance#sphx-glr-auto-examples-inspection-plot-permutation-importance-py)
```
from collections import defaultdict
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster import hierarchy
from scipy.spatial.distance import squareform
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
```
Random Forest Feature Importance on Breast Cancer Data
------------------------------------------------------
First, we train a random forest on the breast cancer dataset and evaluate its accuracy on a test set:
```
data = load_breast_cancer()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Accuracy on test data: {:.2f}".format(clf.score(X_test, y_test)))
```
```
Accuracy on test data: 0.97
```
Next, we plot the tree based feature importance and the permutation importance. The permutation importance plot shows that permuting a feature drops the accuracy by at most `0.012`, which would suggest that none of the features are important. This is in contradiction with the high test accuracy computed above: some feature must be important. The permutation importance is calculated on the training set to show how much the model relies on each feature during training.
```
result = permutation_importance(clf, X_train, y_train, n_repeats=10, random_state=42)
perm_sorted_idx = result.importances_mean.argsort()
tree_importance_sorted_idx = np.argsort(clf.feature_importances_)
tree_indices = np.arange(0, len(clf.feature_importances_)) + 0.5
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 8))
ax1.barh(tree_indices, clf.feature_importances_[tree_importance_sorted_idx], height=0.7)
ax1.set_yticks(tree_indices)
ax1.set_yticklabels(data.feature_names[tree_importance_sorted_idx])
ax1.set_ylim((0, len(clf.feature_importances_)))
ax2.boxplot(
result.importances[perm_sorted_idx].T,
vert=False,
labels=data.feature_names[perm_sorted_idx],
)
fig.tight_layout()
plt.show()
```
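For contrast, the same computation can be run on the held-out set; this short addition (assuming `clf`, `X_test` and `y_test` from above) typically also yields small importances when features are collinear.
```
from sklearn.inspection import permutation_importance

# Permutation importance on held-out data instead of the training set.
result_test = permutation_importance(
    clf, X_test, y_test, n_repeats=10, random_state=42
)
print(
    "Largest mean importance on the test set: "
    f"{result_test.importances_mean.max():.3f}"
)
```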
Handling Multicollinear Features
--------------------------------
When features are collinear, permuting one feature will have little effect on the model's performance because the model can get the same information from a correlated feature. One way to handle multicollinear features is by performing hierarchical clustering on the Spearman rank-order correlations, picking a threshold, and keeping a single feature from each cluster. First, we plot a heatmap of the correlated features:
```
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 8))
corr = spearmanr(X).correlation
# Ensure the correlation matrix is symmetric
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 1)
# We convert the correlation matrix to a distance matrix before performing
# hierarchical clustering using Ward's linkage.
distance_matrix = 1 - np.abs(corr)
dist_linkage = hierarchy.ward(squareform(distance_matrix))
dendro = hierarchy.dendrogram(
dist_linkage, labels=data.feature_names.tolist(), ax=ax1, leaf_rotation=90
)
dendro_idx = np.arange(0, len(dendro["ivl"]))
ax2.imshow(corr[dendro["leaves"], :][:, dendro["leaves"]])
ax2.set_xticks(dendro_idx)
ax2.set_yticks(dendro_idx)
ax2.set_xticklabels(dendro["ivl"], rotation="vertical")
ax2.set_yticklabels(dendro["ivl"])
fig.tight_layout()
plt.show()
```
Next, we manually pick a threshold by visual inspection of the dendrogram to group our features into clusters and choose a feature from each cluster to keep, select those features from our dataset, and train a new random forest. The test accuracy of the new random forest did not change much compared to the random forest trained on the complete dataset.
```
cluster_ids = hierarchy.fcluster(dist_linkage, 1, criterion="distance")
cluster_id_to_feature_ids = defaultdict(list)
for idx, cluster_id in enumerate(cluster_ids):
cluster_id_to_feature_ids[cluster_id].append(idx)
selected_features = [v[0] for v in cluster_id_to_feature_ids.values()]
X_train_sel = X_train[:, selected_features]
X_test_sel = X_test[:, selected_features]
clf_sel = RandomForestClassifier(n_estimators=100, random_state=42)
clf_sel.fit(X_train_sel, y_train)
print(
"Accuracy on test data with features removed: {:.2f}".format(
clf_sel.score(X_test_sel, y_test)
)
)
```
```
Accuracy on test data with features removed: 0.97
```
**Total running time of the script:** ( 0 minutes 3.908 seconds)
[`Download Python source code: plot_permutation_importance_multicollinear.py`](https://scikit-learn.org/1.1/_downloads/756be88c4ccd4c7bba02ab13f0d3258a/plot_permutation_importance_multicollinear.py)
[`Download Jupyter notebook: plot_permutation_importance_multicollinear.ipynb`](https://scikit-learn.org/1.1/_downloads/4941b506cc56c9cec00d40992e2a4207/plot_permutation_importance_multicollinear.ipynb)
scikit_learn Common pitfalls in the interpretation of coefficients of linear models Note
Click [here](#sphx-glr-download-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py) to download the full example code or to run this example in your browser via Binder
Common pitfalls in the interpretation of coefficients of linear models
======================================================================
In linear models, the target value is modeled as a linear combination of the features (see the [Linear Models](../../modules/linear_model#linear-model) User Guide section for a description of a set of linear models available in scikit-learn). Coefficients in multiple linear models represent the relationship between the given feature, \(X\_i\) and the target, \(y\), assuming that all the other features remain constant ([conditional dependence](https://en.wikipedia.org/wiki/Conditional_dependence)). This is different from plotting \(X\_i\) versus \(y\) and fitting a linear relationship: in that case all possible values of the other features are taken into account in the estimation (marginal dependence).
This example will provide some hints on interpreting coefficients in linear models, pointing at problems that arise when either the linear model is not appropriate to describe the dataset, or when features are correlated.
We will use data from the [“Current Population Survey”](https://www.openml.org/d/534) from 1985 to predict wage as a function of various features such as experience, age, or education.
* [The dataset: wages](#the-dataset-wages)
* [The machine-learning pipeline](#the-machine-learning-pipeline)
* [Processing the dataset](#processing-the-dataset)
* [Interpreting coefficients: scale matters](#interpreting-coefficients-scale-matters)
* [Checking the variability of the coefficients](#checking-the-variability-of-the-coefficients)
* [The problem of correlated variables](#the-problem-of-correlated-variables)
* [Preprocessing numerical variables](#preprocessing-numerical-variables)
* [Linear models with regularization](#linear-models-with-regularization)
* [Linear models with sparse coefficients](#linear-models-with-sparse-coefficients)
* [Lessons learned](#lessons-learned)
```
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
The dataset: wages
------------------
We fetch the data from [OpenML](http://openml.org/). Note that setting the parameter `as_frame` to True will retrieve the data as a pandas dataframe.
```
from sklearn.datasets import fetch_openml
survey = fetch_openml(data_id=534, as_frame=True)
```
Then, we identify features `X` and targets `y`: the column WAGE is our target variable (i.e., the variable which we want to predict).
```
X = survey.data[survey.feature_names]
X.describe(include="all")
```
| | EDUCATION | SOUTH | SEX | EXPERIENCE | UNION | AGE | RACE | OCCUPATION | SECTOR | MARR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| count | 534.000000 | 534 | 534 | 534.000000 | 534 | 534.000000 | 534 | 534 | 534 | 534 |
| unique | NaN | 2 | 2 | NaN | 2 | NaN | 3 | 6 | 3 | 2 |
| top | NaN | no | male | NaN | not\_member | NaN | White | Other | Other | Married |
| freq | NaN | 378 | 289 | NaN | 438 | NaN | 440 | 156 | 411 | 350 |
| mean | 13.018727 | NaN | NaN | 17.822097 | NaN | 36.833333 | NaN | NaN | NaN | NaN |
| std | 2.615373 | NaN | NaN | 12.379710 | NaN | 11.726573 | NaN | NaN | NaN | NaN |
| min | 2.000000 | NaN | NaN | 0.000000 | NaN | 18.000000 | NaN | NaN | NaN | NaN |
| 25% | 12.000000 | NaN | NaN | 8.000000 | NaN | 28.000000 | NaN | NaN | NaN | NaN |
| 50% | 12.000000 | NaN | NaN | 15.000000 | NaN | 35.000000 | NaN | NaN | NaN | NaN |
| 75% | 15.000000 | NaN | NaN | 26.000000 | NaN | 44.000000 | NaN | NaN | NaN | NaN |
| max | 18.000000 | NaN | NaN | 55.000000 | NaN | 64.000000 | NaN | NaN | NaN | NaN |
Note that the dataset contains categorical and numerical variables. We will need to take this into account when preprocessing the dataset thereafter.
```
X.head()
```
| | EDUCATION | SOUTH | SEX | EXPERIENCE | UNION | AGE | RACE | OCCUPATION | SECTOR | MARR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 8.0 | no | female | 21.0 | not\_member | 35.0 | Hispanic | Other | Manufacturing | Married |
| 1 | 9.0 | no | female | 42.0 | not\_member | 57.0 | White | Other | Manufacturing | Married |
| 2 | 12.0 | no | male | 1.0 | not\_member | 19.0 | White | Other | Manufacturing | Unmarried |
| 3 | 12.0 | no | male | 4.0 | not\_member | 22.0 | White | Other | Other | Unmarried |
| 4 | 12.0 | no | male | 17.0 | not\_member | 35.0 | White | Other | Other | Married |
Our target for prediction: the wage. Wages are described as floating-point numbers in dollars per hour.
```
y = survey.target.values.ravel()
survey.target.head()
```
```
0 5.10
1 4.95
2 6.67
3 4.00
4 7.50
Name: WAGE, dtype: float64
```
We split the sample into a train and a test dataset. Only the train dataset will be used in the following exploratory analysis. This is a way to emulate a real situation where predictions are performed on an unknown target, and we don’t want our analysis and decisions to be biased by our knowledge of the test data.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
First, let’s get some insights by looking at the variable distributions and at the pairwise relationships between them. Only numerical variables will be used. In the following plot, each dot represents a sample.
```
train_dataset = X_train.copy()
train_dataset.insert(0, "WAGE", y_train)
_ = sns.pairplot(train_dataset, kind="reg", diag_kind="kde")
```
Looking closely at the WAGE distribution reveals that it has a long tail. For this reason, we should take its logarithm to turn it approximately into a normal distribution (linear models such as ridge or lasso work best for a normal distribution of error).
WAGE increases when EDUCATION increases. Note that the dependence between WAGE and EDUCATION represented here is a marginal dependence, i.e., it describes the behavior of a specific variable without keeping the others fixed.
Also, EXPERIENCE and AGE are strongly linearly correlated.
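As a quick check of the long tail mentioned above, one could compare the skewness of WAGE before and after a log transform. This is a minimal sketch, not part of the original example; it reuses the `train_dataset` frame built above and scipy's `skew`:

```
from scipy.stats import skew

# A strongly positive skew indicates the long right tail of the raw WAGE values
print(f"Skewness of WAGE: {skew(train_dataset['WAGE']):.2f}")
print(f"Skewness of log10(WAGE): {skew(np.log10(train_dataset['WAGE'])):.2f}")
```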
The machine-learning pipeline
-----------------------------
To design our machine-learning pipeline, we first manually check the type of data that we are dealing with:
```
survey.data.info()
```
```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 534 entries, 0 to 533
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 EDUCATION 534 non-null float64
1 SOUTH 534 non-null category
2 SEX 534 non-null category
3 EXPERIENCE 534 non-null float64
4 UNION 534 non-null category
5 AGE 534 non-null float64
6 RACE 534 non-null category
7 OCCUPATION 534 non-null category
8 SECTOR 534 non-null category
9 MARR 534 non-null category
dtypes: category(7), float64(3)
memory usage: 17.2 KB
```
As seen previously, the dataset contains columns with different data types, and we need to apply a specific preprocessing to each data type. In particular, categorical variables cannot be included in a linear model unless they are coded as integers first. In addition, to avoid categorical features being treated as ordered values, we need to one-hot-encode them. Our pre-processor will
* one-hot encode (i.e., generate a column by category) the categorical columns, only for non-binary categorical variables;
* as a first approach (we will see later how the normalisation of numerical values affects our discussion), keep numerical values as they are.
```
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
categorical_columns = ["RACE", "OCCUPATION", "SECTOR", "MARR", "UNION", "SEX", "SOUTH"]
numerical_columns = ["EDUCATION", "EXPERIENCE", "AGE"]
preprocessor = make_column_transformer(
(OneHotEncoder(drop="if_binary"), categorical_columns),
remainder="passthrough",
verbose_feature_names_out=False, # avoid to prepend the preprocessor names
)
```
To describe the dataset as a linear model, we use a ridge regressor with very small regularization, and we model the logarithm of the WAGE.
```
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
from sklearn.compose import TransformedTargetRegressor
model = make_pipeline(
preprocessor,
TransformedTargetRegressor(
regressor=Ridge(alpha=1e-10), func=np.log10, inverse_func=sp.special.exp10
),
)
```
Processing the dataset
----------------------
First, we fit the model.
```
model.fit(X_train, y_train)
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(remainder='passthrough',
transformers=[('onehotencoder',
OneHotEncoder(drop='if_binary'),
['RACE', 'OCCUPATION',
'SECTOR', 'MARR', 'UNION',
'SEX', 'SOUTH'])],
verbose_feature_names_out=False)),
('transformedtargetregressor',
TransformedTargetRegressor(func=<ufunc 'log10'>,
inverse_func=<ufunc 'exp10'>,
regressor=Ridge(alpha=1e-10)))])
```
Then we check the performance of the computed model by plotting its predictions on the test set and computing, for example, the median absolute error of the model.
```
from sklearn.metrics import median_absolute_error
y_pred = model.predict(X_train)
mae = median_absolute_error(y_train, y_pred)
string_score = f"MAE on training set: {mae:.2f} $/hour"
y_pred = model.predict(X_test)
mae = median_absolute_error(y_test, y_pred)
string_score += f"\nMAE on testing set: {mae:.2f} $/hour"
```
```
fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(y_test, y_pred)
ax.plot([0, 1], [0, 1], transform=ax.transAxes, ls="--", c="red")
plt.text(3, 20, string_score)
plt.title("Ridge model, small regularization")
plt.ylabel("Model predictions")
plt.xlabel("Truths")
plt.xlim([0, 27])
_ = plt.ylim([0, 27])
```
The learnt model is far from being a good model that makes accurate predictions: this is obvious when looking at the plot above, where good predictions should lie on the red line.
In the following section, we will interpret the coefficients of the model. While we do so, we should keep in mind that any conclusion we draw is about the model that we build, rather than about the true (real-world) generative process of the data.
Interpreting coefficients: scale matters
----------------------------------------
First of all, we can take a look at the values of the coefficients of the regressor we have fitted.
```
feature_names = model[:-1].get_feature_names_out()
coefs = pd.DataFrame(
model[-1].regressor_.coef_,
columns=["Coefficients"],
index=feature_names,
)
coefs
```
| | Coefficients |
| --- | --- |
| RACE\_Hispanic | -0.013531 |
| RACE\_Other | -0.009087 |
| RACE\_White | 0.022582 |
| OCCUPATION\_Clerical | 0.000061 |
| OCCUPATION\_Management | 0.090544 |
| OCCUPATION\_Other | -0.025085 |
| OCCUPATION\_Professional | 0.071980 |
| OCCUPATION\_Sales | -0.046620 |
| OCCUPATION\_Service | -0.091037 |
| SECTOR\_Construction | -0.000165 |
| SECTOR\_Manufacturing | 0.031287 |
| SECTOR\_Other | -0.030993 |
| MARR\_Unmarried | -0.032405 |
| UNION\_not\_member | -0.117154 |
| SEX\_male | 0.090808 |
| SOUTH\_yes | -0.033823 |
| EDUCATION | 0.054699 |
| EXPERIENCE | 0.035005 |
| AGE | -0.030867 |
The AGE coefficient is expressed in “dollars/hour per year of age” while the EDUCATION one is expressed in “dollars/hour per year of education”. This representation of the coefficients has the benefit of making the practical predictions of the model clear: an increase of \(1\) year in AGE means a decrease of \(0.030867\) dollars/hour, while an increase of \(1\) year in EDUCATION means an increase of \(0.054699\) dollars/hour. On the other hand, categorical variables (such as UNION or SEX) are dimensionless numbers taking either the value 0 or 1. Their coefficients are expressed in dollars/hour. Therefore, we cannot compare the magnitude of different coefficients since the features have different natural scales, and hence value ranges, because of their different units of measure. This is more visible if we plot the coefficients.
```
coefs.plot.barh(figsize=(9, 7))
plt.title("Ridge model, small regularization")
plt.axvline(x=0, color=".5")
plt.xlabel("Raw coefficient values")
plt.subplots_adjust(left=0.3)
```
Indeed, from the plot above the most important factor in determining WAGE appears to be the variable UNION, even if our intuition might tell us that variables like EXPERIENCE should have more impact.
Looking at the coefficient plot to gauge feature importance can be misleading, as some features vary on a small scale, while others, like AGE, vary a lot more, over several decades.
This is visible if we compare the standard deviations of different features.
```
X_train_preprocessed = pd.DataFrame(
model[:-1].transform(X_train), columns=feature_names
)
X_train_preprocessed.std(axis=0).plot.barh(figsize=(9, 7))
plt.title("Feature ranges")
plt.xlabel("Std. dev. of feature values")
plt.subplots_adjust(left=0.3)
```
Multiplying the coefficients by the standard deviation of the related feature would reduce all the coefficients to the same unit of measure. As we will see [later](#scaling-num), this is equivalent to normalizing numerical variables to their standard deviation, since \(y = \sum{coef\_i \times X\_i} = \sum{(coef\_i \times std\_i) \times (X\_i / std\_i)}\).
In that way, we emphasize that the greater the variance of a feature, the larger the weight of the corresponding coefficient on the output, all else being equal.
```
coefs = pd.DataFrame(
model[-1].regressor_.coef_ * X_train_preprocessed.std(axis=0),
columns=["Coefficient importance"],
index=feature_names,
)
coefs.plot(kind="barh", figsize=(9, 7))
plt.xlabel("Coefficient values corrected by the feature's std. dev.")
plt.title("Ridge model, small regularization")
plt.axvline(x=0, color=".5")
plt.subplots_adjust(left=0.3)
```
Now that the coefficients have been scaled, we can safely compare them.
Warning
Why does the plot above suggest that an increase in age leads to a decrease in wage? Why does the [initial pairplot](#marginal-dependencies) tell the opposite?
The plot above tells us about dependencies between a specific feature and the target when all other features remain constant, i.e., **conditional dependencies**. An increase in AGE will induce a decrease in WAGE when all other features remain constant. On the contrary, an increase in EXPERIENCE will induce an increase in WAGE when all other features remain constant. Also, AGE, EXPERIENCE and EDUCATION are the three variables that most influence the model.
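To make the contrast concrete, here is a minimal sketch (not part of the original example) that compares the marginal slope of log10(WAGE) on AGE alone with the conditional AGE coefficient of the full ridge model fitted above:

```
from sklearn.linear_model import LinearRegression

# Marginal effect: regress log10(WAGE) on AGE alone, ignoring every other feature
marginal_slope = LinearRegression().fit(X_train[["AGE"]], np.log10(y_train)).coef_[0]

# Conditional effect: the AGE coefficient of the full model, all other features held fixed
conditional_coef = pd.Series(model[-1].regressor_.coef_, index=feature_names)["AGE"]

print(f"Marginal AGE slope (log10 scale): {marginal_slope:.4f}")            # typically positive
print(f"Conditional AGE coefficient (log10 scale): {conditional_coef:.4f}")  # negative here
```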
Checking the variability of the coefficients
--------------------------------------------
We can check the coefficient variability through cross-validation: it is a form of data perturbation (related to [resampling](https://en.wikipedia.org/wiki/Resampling_(statistics))).
If coefficients vary significantly when changing the input dataset, their robustness is not guaranteed and they should probably be interpreted with caution.
```
from sklearn.model_selection import cross_validate
from sklearn.model_selection import RepeatedKFold
cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=0)
cv_model = cross_validate(
model,
X,
y,
cv=cv,
return_estimator=True,
n_jobs=2,
)
coefs = pd.DataFrame(
[
est[-1].regressor_.coef_ * est[:-1].transform(X.iloc[train_idx]).std(axis=0)
for est, (train_idx, _) in zip(cv_model["estimator"], cv.split(X, y))
],
columns=feature_names,
)
```
```
plt.figure(figsize=(9, 7))
sns.stripplot(data=coefs, orient="h", color="k", alpha=0.5)
sns.boxplot(data=coefs, orient="h", color="cyan", saturation=0.5, whis=10)
plt.axvline(x=0, color=".5")
plt.xlabel("Coefficient importance")
plt.title("Coefficient importance and its variability")
plt.suptitle("Ridge model, small regularization")
plt.subplots_adjust(left=0.3)
```
The problem of correlated variables
-----------------------------------
The AGE and EXPERIENCE coefficients are affected by strong variability, which might be due to the collinearity between the two features: as AGE and EXPERIENCE vary together in the data, their effects are difficult to tease apart.
To verify this interpretation, we plot the variability of the AGE and EXPERIENCE coefficients.
```
plt.ylabel("Age coefficient")
plt.xlabel("Experience coefficient")
plt.grid(True)
plt.xlim(-0.4, 0.5)
plt.ylim(-0.4, 0.5)
plt.scatter(coefs["AGE"], coefs["EXPERIENCE"])
_ = plt.title("Co-variations of coefficients for AGE and EXPERIENCE across folds")
```
Two regions are populated: when the EXPERIENCE coefficient is positive, the AGE one is negative, and vice-versa.
To go further, we remove one of the two features and check its impact on model stability.
```
column_to_drop = ["AGE"]
cv_model = cross_validate(
model,
X.drop(columns=column_to_drop),
y,
cv=cv,
return_estimator=True,
n_jobs=2,
)
coefs = pd.DataFrame(
[
est[-1].regressor_.coef_
* est[:-1].transform(X.drop(columns=column_to_drop).iloc[train_idx]).std(axis=0)
for est, (train_idx, _) in zip(cv_model["estimator"], cv.split(X, y))
],
columns=feature_names[:-1],
)
```
```
plt.figure(figsize=(9, 7))
sns.stripplot(data=coefs, orient="h", color="k", alpha=0.5)
sns.boxplot(data=coefs, orient="h", color="cyan", saturation=0.5)
plt.axvline(x=0, color=".5")
plt.title("Coefficient importance and its variability")
plt.xlabel("Coefficient importance")
plt.suptitle("Ridge model, small regularization, AGE dropped")
plt.subplots_adjust(left=0.3)
```
The estimation of the EXPERIENCE coefficient now shows a much reduced variability. EXPERIENCE remains important for all models trained during cross-validation.
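A minimal sketch of how this reduction could be quantified, assuming the coefficient frame from the previous cross-validation run (with AGE included) had been kept under the hypothetical name `coefs_with_age` before being overwritten:

```
# Spread of the EXPERIENCE coefficient across CV folds, with and without AGE in the model
print(f"EXPERIENCE coef. std across folds, AGE kept:    {coefs_with_age['EXPERIENCE'].std():.3f}")
print(f"EXPERIENCE coef. std across folds, AGE dropped: {coefs['EXPERIENCE'].std():.3f}")
```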
Preprocessing numerical variables
---------------------------------
As said above (see “[The machine-learning pipeline](#the-pipeline)”), we could also choose to scale numerical values before training the model. This can be useful when we apply a similar amount of regularization to all of them in the ridge. The preprocessor is redefined in order to subtract the mean and scale variables to unit variance.
```
from sklearn.preprocessing import StandardScaler
preprocessor = make_column_transformer(
(OneHotEncoder(drop="if_binary"), categorical_columns),
(StandardScaler(), numerical_columns),
)
```
The model will stay unchanged.
```
model = make_pipeline(
preprocessor,
TransformedTargetRegressor(
regressor=Ridge(alpha=1e-10), func=np.log10, inverse_func=sp.special.exp10
),
)
model.fit(X_train, y_train)
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('onehotencoder',
OneHotEncoder(drop='if_binary'),
['RACE', 'OCCUPATION',
'SECTOR', 'MARR', 'UNION',
'SEX', 'SOUTH']),
('standardscaler',
StandardScaler(),
['EDUCATION', 'EXPERIENCE',
'AGE'])])),
('transformedtargetregressor',
TransformedTargetRegressor(func=<ufunc 'log10'>,
inverse_func=<ufunc 'exp10'>,
regressor=Ridge(alpha=1e-10)))])
```
Again, we check the performance of the computed model using, for example, the median absolute error of the model.
```
y_pred = model.predict(X_train)
mae = median_absolute_error(y_train, y_pred)
string_score = f"MAE on training set: {mae:.2f} $/hour"
y_pred = model.predict(X_test)
mae = median_absolute_error(y_test, y_pred)
string_score += f"\nMAE on testing set: {mae:.2f} $/hour"
```
```
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(y_test, y_pred)
ax.plot([0, 1], [0, 1], transform=ax.transAxes, ls="--", c="red")
plt.text(3, 20, string_score)
plt.title("Ridge model, small regularization, normalized variables")
plt.ylabel("Model predictions")
plt.xlabel("Truths")
plt.xlim([0, 27])
_ = plt.ylim([0, 27])
```
For the coefficient analysis, scaling is not needed this time because it was performed during the preprocessing step.
```
coefs = pd.DataFrame(
model[-1].regressor_.coef_,
columns=["Coefficients importance"],
index=feature_names,
)
coefs.plot.barh(figsize=(9, 7))
plt.title("Ridge model, small regularization, normalized variables")
plt.xlabel("Raw coefficient values")
plt.axvline(x=0, color=".5")
plt.subplots_adjust(left=0.3)
```
We now inspect the coefficients across several cross-validation folds. As in the above example, we do not need to scale the coefficients by the std. dev. of the feature values since this scaling was already done in the preprocessing step of the pipeline.
```
cv_model = cross_validate(
model,
X,
y,
cv=cv,
return_estimator=True,
n_jobs=2,
)
coefs = pd.DataFrame(
[est[-1].regressor_.coef_ for est in cv_model["estimator"]], columns=feature_names
)
```
```
plt.figure(figsize=(9, 7))
sns.stripplot(data=coefs, orient="h", color="k", alpha=0.5)
sns.boxplot(data=coefs, orient="h", color="cyan", saturation=0.5, whis=10)
plt.axvline(x=0, color=".5")
plt.title("Coefficient variability")
plt.subplots_adjust(left=0.3)
```
The result is quite similar to the non-normalized case.
Linear models with regularization
---------------------------------
In machine-learning practice, ridge regression is more often used with non-negligible regularization.
Above, we limited this regularization to a very small amount. Regularization improves the conditioning of the problem and reduces the variance of the estimates. [`RidgeCV`](../../modules/generated/sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV") applies cross-validation in order to determine which value of the regularization parameter (`alpha`) is best suited for prediction.
```
from sklearn.linear_model import RidgeCV
alphas = np.logspace(-10, 10, 21) # alpha values to be chosen from by cross-validation
model = make_pipeline(
preprocessor,
TransformedTargetRegressor(
regressor=RidgeCV(alphas=alphas),
func=np.log10,
inverse_func=sp.special.exp10,
),
)
model.fit(X_train, y_train)
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('onehotencoder',
OneHotEncoder(drop='if_binary'),
['RACE', 'OCCUPATION',
'SECTOR', 'MARR', 'UNION',
'SEX', 'SOUTH']),
('standardscaler',
StandardScaler(),
['EDUCATION', 'EXPERIENCE',
'AGE'])])),
('transformedtargetregressor',
TransformedTargetRegressor(func=<ufunc 'log10'>,
inverse_func=<ufunc 'exp10'>,
regressor=RidgeCV(alphas=array([1.e-10, 1.e-09, 1.e-08, 1.e-07, 1.e-06, 1.e-05, 1.e-04, 1.e-03,
1.e-02, 1.e-01, 1.e+00, 1.e+01, 1.e+02, 1.e+03, 1.e+04, 1.e+05,
1.e+06, 1.e+07, 1.e+08, 1.e+09, 1.e+10]))))])
```
First we check which value of \(\alpha\) has been selected.
```
model[-1].regressor_.alpha_
```
```
10.0
```
Then we check the quality of the predictions.
```
y_pred = model.predict(X_train)
mae = median_absolute_error(y_train, y_pred)
string_score = f"MAE on training set: {mae:.2f} $/hour"
y_pred = model.predict(X_test)
mae = median_absolute_error(y_test, y_pred)
string_score += f"\nMAE on testing set: {mae:.2f} $/hour"
```
```
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(y_test, y_pred)
ax.plot([0, 1], [0, 1], transform=ax.transAxes, ls="--", c="red")
plt.text(3, 20, string_score)
plt.title("Ridge model, optimum regularization, normalized variables")
plt.ylabel("Model predictions")
plt.xlabel("Truths")
plt.xlim([0, 27])
_ = plt.ylim([0, 27])
```
The ability of the regularized model to reproduce the data is similar to that of the non-regularized model.
```
coefs = pd.DataFrame(
model[-1].regressor_.coef_,
columns=["Coefficients importance"],
index=feature_names,
)
coefs.plot.barh(figsize=(9, 7))
plt.title("Ridge model, with regularization, normalized variables")
plt.xlabel("Raw coefficient values")
plt.axvline(x=0, color=".5")
plt.subplots_adjust(left=0.3)
```
The coefficients are significantly different. The AGE and EXPERIENCE coefficients are both positive, but they now have less influence on the prediction.
The regularization reduces the influence of correlated variables on the model because the weight is shared between the two predictive variables, so neither alone would have a strong weight.
On the other hand, the weights obtained with regularization are more stable (see the [Ridge regression and classification](../../modules/linear_model#ridge-regression) User Guide section). This increased stability is visible in the plot below, obtained from data perturbations in a cross-validation. This plot can be compared with the [previous one](#covariation).
```
cv_model = cross_validate(
model,
X,
y,
cv=cv,
return_estimator=True,
n_jobs=2,
)
coefs = pd.DataFrame(
[est[-1].regressor_.coef_ for est in cv_model["estimator"]], columns=feature_names
)
```
```
plt.ylabel("Age coefficient")
plt.xlabel("Experience coefficient")
plt.grid(True)
plt.xlim(-0.4, 0.5)
plt.ylim(-0.4, 0.5)
plt.scatter(coefs["AGE"], coefs["EXPERIENCE"])
_ = plt.title("Co-variations of coefficients for AGE and EXPERIENCE across folds")
```
Linear models with sparse coefficients
--------------------------------------
Another possibility to take correlated variables into account is to estimate sparse coefficients. In some way, we already did this manually when we dropped the AGE column in a previous ridge estimation.
Lasso models (see the [Lasso](../../modules/linear_model#lasso) User Guide section) estimate sparse coefficients. [`LassoCV`](../../modules/generated/sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV") applies cross-validation in order to determine which value of the regularization parameter (`alpha`) is best suited for the model estimation.
```
from sklearn.linear_model import LassoCV
alphas = np.logspace(-10, 10, 21) # alpha values to be chosen from by cross-validation
model = make_pipeline(
preprocessor,
TransformedTargetRegressor(
regressor=LassoCV(alphas=alphas, max_iter=100_000),
func=np.log10,
inverse_func=sp.special.exp10,
),
)
_ = model.fit(X_train, y_train)
```
First we verify which value of \(\alpha\) has been selected.
```
model[-1].regressor_.alpha_
```
```
0.001
```
Then we check the quality of the predictions.
```
y_pred = model.predict(X_train)
mae = median_absolute_error(y_train, y_pred)
string_score = f"MAE on training set: {mae:.2f} $/hour"
y_pred = model.predict(X_test)
mae = median_absolute_error(y_test, y_pred)
string_score += f"\nMAE on testing set: {mae:.2f} $/hour"
```
```
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(y_test, y_pred)
ax.plot([0, 1], [0, 1], transform=ax.transAxes, ls="--", c="red")
plt.text(3, 20, string_score)
plt.title("Lasso model, regularization, normalized variables")
plt.ylabel("Model predictions")
plt.xlabel("Truths")
plt.xlim([0, 27])
_ = plt.ylim([0, 27])
```
For our dataset, again the model is not very predictive.
```
coefs = pd.DataFrame(
model[-1].regressor_.coef_,
columns=["Coefficients importance"],
index=feature_names,
)
coefs.plot(kind="barh", figsize=(9, 7))
plt.title("Lasso model, optimum regularization, normalized variables")
plt.axvline(x=0, color=".5")
plt.subplots_adjust(left=0.3)
```
A Lasso model identifies the correlation between AGE and EXPERIENCE and suppresses one of them for the sake of the prediction.
It is important to keep in mind that the coefficients that have been dropped may still be related to the outcome by themselves: the model chose to suppress them because they bring little or no additional information on top of the other features. Additionally, this selection is unstable for correlated features, and should be interpreted with caution.
Indeed, we can check the variability of the coefficients across folds.
```
cv_model = cross_validate(
model,
X,
y,
cv=cv,
return_estimator=True,
n_jobs=2,
)
coefs = pd.DataFrame(
[est[-1].regressor_.coef_ for est in cv_model["estimator"]], columns=feature_names
)
```
```
plt.figure(figsize=(9, 7))
sns.stripplot(data=coefs, orient="h", color="k", alpha=0.5)
sns.boxplot(data=coefs, orient="h", color="cyan", saturation=0.5, whis=100)
plt.axvline(x=0, color=".5")
plt.title("Coefficient variability")
plt.subplots_adjust(left=0.3)
```
We observe that the AGE and EXPERIENCE coefficients vary a lot depending on the fold.
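Because the Lasso can set coefficients exactly to zero, one way to look at this instability (a small sketch, not part of the original example) is to count how often each of the two features is zeroed out across the cross-validation folds:

```
# Fraction of CV folds in which the Lasso coefficient is exactly zero
zeroed = (coefs[["AGE", "EXPERIENCE"]] == 0).mean()
print(zeroed)
```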
Lessons learned
---------------
* Coefficients must be scaled to the same unit of measure to retrieve feature importance. Scaling them with the standard deviation of the feature is a useful proxy.
* Coefficients in multivariate linear models represent the dependency between a given feature and the target, **conditional** on the other features.
* Correlated features induce instabilities in the coefficients of linear models and their effects cannot be well teased apart.
* Different linear models respond differently to feature correlation, and their coefficients can differ significantly from one another.
* Inspecting coefficients across the folds of a cross-validation loop gives an idea of their stability.
**Total running time of the script:** ( 0 minutes 16.491 seconds)
[`Download Python source code: plot_linear_model_coefficient_interpretation.py`](https://scikit-learn.org/1.1/_downloads/521b554adefca348463adbbe047d7e99/plot_linear_model_coefficient_interpretation.py)
[`Download Jupyter notebook: plot_linear_model_coefficient_interpretation.ipynb`](https://scikit-learn.org/1.1/_downloads/cf0f90f46eb559facf7f63f124f61e04/plot_linear_model_coefficient_interpretation.ipynb)
scikit_learn Partial Dependence and Individual Conditional Expectation Plots Note
Click [here](#sphx-glr-download-auto-examples-inspection-plot-partial-dependence-py) to download the full example code or to run this example in your browser via Binder
Partial Dependence and Individual Conditional Expectation Plots
===============================================================
Partial dependence plots show the dependence between the target function [[2]](#id5) and a set of features of interest, marginalizing over the values of all other features (the complement features). Due to the limits of human perception, the size of the set of features of interest must be small (usually, one or two) thus they are usually chosen among the most important features.
Similarly, an individual conditional expectation (ICE) plot [[3]](#id6) shows the dependence between the target function and a feature of interest. However, unlike partial dependence plots, which show the average effect of the features of interest, ICE plots visualize the dependence of the prediction on a feature for each [sample](https://scikit-learn.org/1.1/glossary.html#term-sample) separately, with one line per sample. Only one feature of interest is supported for ICE plots.
This example shows how to obtain partial dependence and ICE plots from a [`MLPRegressor`](../../modules/generated/sklearn.neural_network.mlpregressor#sklearn.neural_network.MLPRegressor "sklearn.neural_network.MLPRegressor") and a [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") trained on the California housing dataset. The example is taken from [[1]](#id4).
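To recall what a partial dependence value actually is, here is a hedged, brute-force sketch (not part of the original example): for each grid value, the feature of interest is overwritten in every sample of a dataset and the predictions are averaged. The names `estimator`, `X`, `feature` and `grid` are placeholders.

```
import numpy as np

def brute_force_partial_dependence(estimator, X, feature, grid):
    """Average prediction when `feature` is forced to each value in `grid`.

    X is assumed to be a pandas DataFrame and estimator an already fitted model.
    """
    averaged_predictions = []
    for value in grid:
        X_modified = X.copy()
        X_modified[feature] = value  # force the feature to the grid value for every sample
        averaged_predictions.append(estimator.predict(X_modified).mean())
    return np.asarray(averaged_predictions)
```

An ICE plot is obtained by keeping the per-sample predictions instead of averaging them.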
California Housing data preprocessing
-------------------------------------
Center target to avoid gradient boosting init bias: gradient boosting with the ‘recursion’ method does not account for the initial estimator (here the average target, by default).
```
import pandas as pd
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
cal_housing = fetch_california_housing()
X = pd.DataFrame(cal_housing.data, columns=cal_housing.feature_names)
y = cal_housing.target
y -= y.mean()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
```
1-way partial dependence with different models
----------------------------------------------
In this section, we will compute 1-way partial dependence with two different machine-learning models: (i) a multi-layer perceptron and (ii) a gradient-boosting model. With these two models, we illustrate how to compute and interpret both the partial dependence plot (PDP) and the individual conditional expectation (ICE).
### Multi-layer perceptron
Let’s fit a [`MLPRegressor`](../../modules/generated/sklearn.neural_network.mlpregressor#sklearn.neural_network.MLPRegressor "sklearn.neural_network.MLPRegressor") and compute single-variable partial dependence plots.
```
from time import time
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import QuantileTransformer
from sklearn.neural_network import MLPRegressor
print("Training MLPRegressor...")
tic = time()
est = make_pipeline(
QuantileTransformer(),
MLPRegressor(
hidden_layer_sizes=(30, 15),
learning_rate_init=0.01,
early_stopping=True,
random_state=0,
),
)
est.fit(X_train, y_train)
print(f"done in {time() - tic:.3f}s")
print(f"Test R2 score: {est.score(X_test, y_test):.2f}")
```
```
Training MLPRegressor...
done in 2.228s
Test R2 score: 0.82
```
We configured a pipeline to scale the numerical input features and tuned the neural network size and learning rate to get a reasonable compromise between training time and predictive performance on a test set.
Importantly, this tabular dataset has very different dynamic ranges for its features. Neural networks tend to be very sensitive to features with varying scales, and forgetting to preprocess the numeric features would lead to a very poor model.
It would be possible to get even higher predictive performance with a larger neural network but the training would also be significantly more expensive.
Note that it is important to check that the model is accurate enough on a test set before plotting the partial dependence since there would be little use in explaining the impact of a given feature on the prediction function of a poor model.
We will plot the partial dependence, both individual (ICE) and averaged (PDP). We limit the number of ICE curves to 50 so as not to overcrowd the plot.
```
from sklearn.inspection import PartialDependenceDisplay
common_params = {
"subsample": 50,
"n_jobs": 2,
"grid_resolution": 20,
"centered": True,
"random_state": 0,
}
print("Computing partial dependence plots...")
tic = time()
display = PartialDependenceDisplay.from_estimator(
est,
X_train,
features=["MedInc", "AveOccup", "HouseAge", "AveRooms"],
kind="both",
**common_params,
)
print(f"done in {time() - tic:.3f}s")
display.figure_.suptitle(
"Partial dependence of house value on non-location features\n"
"for the California housing dataset, with MLPRegressor"
)
display.figure_.subplots_adjust(hspace=0.3)
```
```
Computing partial dependence plots...
done in 1.292s
```
### Gradient boosting
Let’s now fit a [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") and compute the partial dependence on the same features.
```
from sklearn.ensemble import HistGradientBoostingRegressor
print("Training HistGradientBoostingRegressor...")
tic = time()
est = HistGradientBoostingRegressor(random_state=0)
est.fit(X_train, y_train)
print(f"done in {time() - tic:.3f}s")
print(f"Test R2 score: {est.score(X_test, y_test):.2f}")
```
```
Training HistGradientBoostingRegressor...
done in 0.346s
Test R2 score: 0.85
```
Here, we used the default hyperparameters for the gradient boosting model without any preprocessing as tree-based models are naturally robust to monotonic transformations of numerical features.
Note that on this tabular dataset, Gradient Boosting Machines are both significantly faster to train and more accurate than neural networks. It is also significantly cheaper to tune their hyperparameters (the defaults tend to work well while this is not often the case for neural networks).
We will plot the partial dependence, both individual (ICE) and averaged (PDP). We limit the number of ICE curves to 50 so as not to overcrowd the plot.
```
print("Computing partial dependence plots...")
tic = time()
display = PartialDependenceDisplay.from_estimator(
est,
X_train,
features=["MedInc", "AveOccup", "HouseAge", "AveRooms"],
kind="both",
**common_params,
)
print(f"done in {time() - tic:.3f}s")
display.figure_.suptitle(
"Partial dependence of house value on non-location features\n"
"for the California housing dataset, with Gradient Boosting"
)
display.figure_.subplots_adjust(wspace=0.4, hspace=0.3)
```
```
Computing partial dependence plots...
done in 6.117s
```
### Analysis of the plots
We can clearly see on the PDPs (dashed orange line) that the median house price shows a linear relationship with the median income (top left) and that the house price drops when the average occupants per household increases (top middle). The top right plot shows that the house age in a district does not have a strong influence on the (median) house price; nor does the average number of rooms per household.
The ICE curves (light blue lines) complement the analysis: we can see that there are some exceptions (which are better highlighted with the option `centered=True`), where the house price remains constant with respect to variations of the median income and of the average occupancy. On the other hand, while the house age (top right) does not have a strong influence on the median house price on average, there seem to be a number of exceptions where the house price increases when the age is between 15 and 25. Similar exceptions can be observed for the average number of rooms (bottom left). Therefore, ICE plots show some individual effects which are attenuated by taking the averages.
In all plots, the tick marks on the x-axis represent the deciles of the feature values in the training data.
We also observe that [`MLPRegressor`](../../modules/generated/sklearn.neural_network.mlpregressor#sklearn.neural_network.MLPRegressor "sklearn.neural_network.MLPRegressor") has much smoother predictions than [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor").
However, it is worth noting that we are creating potentially meaningless synthetic samples if features are correlated.
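To see the effect of the `centered` option mentioned above, one could redraw the ICE curves of a single feature without centering. This is a minimal sketch reusing the fitted gradient boosting model `est` and the `common_params` dictionary from above, with only `centered` changed:

```
uncentered_params = {**common_params, "centered": False}
display = PartialDependenceDisplay.from_estimator(
    est,
    X_train,
    features=["HouseAge"],
    kind="both",  # individual ICE curves plus their average
    **uncentered_params,
)
display.figure_.suptitle("Uncentered ICE and PD curves for HouseAge")
```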
2D interaction plots
--------------------
PDPs with two features of interest enable us to visualize interactions among them. However, ICE curves with two features cannot be plotted, and thus interpreted, in an easy manner. Another consideration is linked to the cost of computing the PDPs. With tree-based algorithms, when only PDPs are requested, they can be computed in an efficient way using the `'recursion'` method (a short sketch follows the example below).
```
import matplotlib.pyplot as plt
print("Computing partial dependence plots...")
tic = time()
_, ax = plt.subplots(ncols=3, figsize=(9, 4))
# Note that we could have called the method `from_estimator` three times and
# provide one feature, one kind of plot, and one axis for each call.
display = PartialDependenceDisplay.from_estimator(
est,
X_train,
features=["AveOccup", "HouseAge", ("AveOccup", "HouseAge")],
kind=["both", "both", "average"],
ax=ax,
**common_params,
)
print(f"done in {time() - tic:.3f}s")
display.figure_.suptitle(
"Partial dependence of house value on non-location features\n"
"for the California housing dataset, with Gradient Boosting"
)
display.figure_.subplots_adjust(wspace=0.4, hspace=0.3)
```
```
Computing partial dependence plots...
done in 3.185s
```
The two-way partial dependence plot shows the dependence of median house price on joint values of house age and average occupants per household. We can clearly see an interaction between the two features: for an average occupancy greater than two, the house price is nearly independent of the house age, whereas for values less than two there is a strong dependence on age.
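As a hedged sketch of the `'recursion'` method mentioned before the example, the same two-way average partial dependence could be requested directly from the fitted `HistGradientBoostingRegressor`; this method only supports `kind="average"` (no ICE curves):

```
display = PartialDependenceDisplay.from_estimator(
    est,
    X_train,
    features=[("AveOccup", "HouseAge")],
    kind="average",      # 'recursion' cannot produce individual (ICE) curves
    method="recursion",  # walk the trees instead of calling predict on modified samples
    grid_resolution=20,
)
```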
3D interaction plots
--------------------
Let’s make the same partial dependence plot for the interaction of the two features, this time in 3 dimensions.
```
import numpy as np
# unused but required import for doing 3d projections with matplotlib < 3.2
import mpl_toolkits.mplot3d # noqa: F401
from sklearn.inspection import partial_dependence
fig = plt.figure()
features = ("AveOccup", "HouseAge")
pdp = partial_dependence(
est, X_train, features=features, kind="average", grid_resolution=10
)
XX, YY = np.meshgrid(pdp["values"][0], pdp["values"][1])
Z = pdp.average[0].T
ax = fig.add_subplot(projection="3d")
fig.add_axes(ax)
surf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1, cmap=plt.cm.BuPu, edgecolor="k")
ax.set_xlabel(features[0])
ax.set_ylabel(features[1])
ax.set_zlabel("Partial dependence")
# pretty init view
ax.view_init(elev=22, azim=122)
plt.colorbar(surf)
plt.suptitle(
"Partial dependence of house value on median\n"
"age and average occupancy, with Gradient Boosting"
)
plt.subplots_adjust(top=0.9)
plt.show()
```
**Total running time of the script:** ( 0 minutes 13.711 seconds)
[`Download Python source code: plot_partial_dependence.py`](https://scikit-learn.org/1.1/_downloads/bcd609cfe29c9da1f51c848e18b89c76/plot_partial_dependence.py)
[`Download Jupyter notebook: plot_partial_dependence.ipynb`](https://scikit-learn.org/1.1/_downloads/21b82d82985712b5de6347f382c77c86/plot_partial_dependence.ipynb)
scikit_learn Permutation Importance vs Random Forest Feature Importance (MDI) Note
Click [here](#sphx-glr-download-auto-examples-inspection-plot-permutation-importance-py) to download the full example code or to run this example in your browser via Binder
Permutation Importance vs Random Forest Feature Importance (MDI)
================================================================
In this example, we will compare the impurity-based feature importance of [`RandomForestClassifier`](../../modules/generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") with the permutation importance on the titanic dataset using [`permutation_importance`](../../modules/generated/sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance"). We will show that the impurity-based feature importance can inflate the importance of numerical features.
Furthermore, the impurity-based feature importance of random forests suffers from being computed on statistics derived from the training dataset: the importances can be high even for features that are not predictive of the target variable, as long as the model has the capacity to use them to overfit.
This example shows how to use Permutation Importances as an alternative that can mitigate those limitations.
```
import numpy as np
```
Data Loading and Feature Engineering
------------------------------------
Let’s use pandas to load a copy of the titanic dataset. The following shows how to apply separate preprocessing on numerical and categorical features.
We further include two random variables that are not correlated in any way with the target variable (`survived`):
* `random_num` is a high cardinality numerical variable (as many unique values as records).
* `random_cat` is a low cardinality categorical variable (3 possible values).
```
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
rng = np.random.RandomState(seed=42)
X["random_cat"] = rng.randint(3, size=X.shape[0])
X["random_num"] = rng.randn(X.shape[0])
categorical_columns = ["pclass", "sex", "embarked", "random_cat"]
numerical_columns = ["age", "sibsp", "parch", "fare", "random_num"]
X = X[categorical_columns + numerical_columns]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
```
We define a predictive model based on a random forest. Therefore, we will make the following preprocessing steps:
* use [`OrdinalEncoder`](../../modules/generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") to encode the categorical features;
* use [`SimpleImputer`](../../modules/generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer") to fill missing values for numerical features using a mean strategy.
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder
categorical_encoder = OrdinalEncoder(
handle_unknown="use_encoded_value", unknown_value=-1, encoded_missing_value=-1
)
numerical_pipe = SimpleImputer(strategy="mean")
preprocessing = ColumnTransformer(
[
("cat", categorical_encoder, categorical_columns),
("num", numerical_pipe, numerical_columns),
],
verbose_feature_names_out=False,
)
rf = Pipeline(
[
("preprocess", preprocessing),
("classifier", RandomForestClassifier(random_state=42)),
]
)
rf.fit(X_train, y_train)
```
```
Pipeline(steps=[('preprocess',
ColumnTransformer(transformers=[('cat',
OrdinalEncoder(encoded_missing_value=-1,
handle_unknown='use_encoded_value',
unknown_value=-1),
['pclass', 'sex', 'embarked',
'random_cat']),
('num', SimpleImputer(),
['age', 'sibsp', 'parch',
'fare', 'random_num'])],
verbose_feature_names_out=False)),
('classifier', RandomForestClassifier(random_state=42))])
```
Accuracy of the Model
---------------------
Prior to inspecting the feature importances, it is important to check that the model predictive performance is high enough. Indeed, there would be little interest in inspecting the important features of a non-predictive model.
Here one can observe that the train accuracy is very high (the forest model has enough capacity to completely memorize the training set) but it can still generalize well enough to the test set thanks to the built-in bagging of random forests.
It might be possible to trade some accuracy on the training set for a slightly better accuracy on the test set by limiting the capacity of the trees (for instance by setting `min_samples_leaf=5` or `min_samples_leaf=10`) so as to limit overfitting while not introducing too much underfitting.
However, let’s keep our high-capacity random forest model for now, so as to illustrate some pitfalls of feature importance on variables with many unique values.
```
print(f"RF train accuracy: {rf.score(X_train, y_train):.3f}")
print(f"RF test accuracy: {rf.score(X_test, y_test):.3f}")
```
```
RF train accuracy: 1.000
RF test accuracy: 0.814
```
Tree’s Feature Importance from Mean Decrease in Impurity (MDI)
--------------------------------------------------------------
The impurity-based feature importance ranks the numerical features as the most important features. As a result, the non-predictive `random_num` variable is ranked as one of the most important features!
This problem stems from two limitations of impurity-based feature importances:
* impurity-based importances are biased towards high cardinality features;
* impurity-based importances are computed on training set statistics and therefore do not reflect the ability of a feature to be useful for making predictions that generalize to the test set (when the model has enough capacity).
The bias towards high cardinality features explains why `random_num` has a really large importance in comparison with `random_cat`, while we would expect both random features to have a null importance.
The fact that we use training set statistics explains why both the `random_num` and `random_cat` features have a non-null importance.
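A quick way to see the cardinality gap mentioned above (a small check, not part of the original example) is to count the unique values per column in the training set:

```
# random_num has (almost) as many unique values as rows, random_cat only 3
print(X_train.nunique().sort_values(ascending=False))
```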
```
import pandas as pd
feature_names = rf[:-1].get_feature_names_out()
mdi_importances = pd.Series(
rf[-1].feature_importances_, index=feature_names
).sort_values(ascending=True)
```
```
ax = mdi_importances.plot.barh()
ax.set_title("Random Forest Feature Importances (MDI)")
ax.figure.tight_layout()
```
As an alternative, the permutation importances of `rf` are computed on a held-out test set. This shows that the low cardinality categorical features, `sex` and `pclass`, are the most important features. Indeed, permuting the values of these features leads to the largest decrease in the accuracy score of the model on the test set.
Also note that both random features have very low importances (close to 0) as expected.
```
from sklearn.inspection import permutation_importance
result = permutation_importance(
rf, X_test, y_test, n_repeats=10, random_state=42, n_jobs=2
)
sorted_importances_idx = result.importances_mean.argsort()
importances = pd.DataFrame(
result.importances[sorted_importances_idx].T,
columns=X.columns[sorted_importances_idx],
)
ax = importances.plot.box(vert=False, whis=10)
ax.set_title("Permutation Importances (test set)")
ax.axvline(x=0, color="k", linestyle="--")
ax.set_xlabel("Decrease in accuracy score")
ax.figure.tight_layout()
```
It is also possible to compute the permutation importances on the training set. This reveals that `random_num` and `random_cat` get a significantly higher importance ranking than when computed on the test set. The difference between those two plots is a confirmation that the RF model has enough capacity to use these random numerical and categorical features to overfit.
```
result = permutation_importance(
rf, X_train, y_train, n_repeats=10, random_state=42, n_jobs=2
)
sorted_importances_idx = result.importances_mean.argsort()
importances = pd.DataFrame(
result.importances[sorted_importances_idx].T,
columns=X.columns[sorted_importances_idx],
)
ax = importances.plot.box(vert=False, whis=10)
ax.set_title("Permutation Importances (train set)")
ax.axvline(x=0, color="k", linestyle="--")
ax.set_xlabel("Decrease in accuracy score")
ax.figure.tight_layout()
```
We can further retry the experiment by limiting the capacity of the trees to overfit, setting `min_samples_leaf` to 20 data points.
```
rf.set_params(classifier__min_samples_leaf=20).fit(X_train, y_train)
```
```
Pipeline(steps=[('preprocess',
ColumnTransformer(transformers=[('cat',
OrdinalEncoder(encoded_missing_value=-1,
handle_unknown='use_encoded_value',
unknown_value=-1),
['pclass', 'sex', 'embarked',
'random_cat']),
('num', SimpleImputer(),
['age', 'sibsp', 'parch',
'fare', 'random_num'])],
verbose_feature_names_out=False)),
('classifier',
RandomForestClassifier(min_samples_leaf=20, random_state=42))])
```
Looking at the accuracy scores on the training and testing sets, we observe that the two metrics are very similar now. Therefore, our model is not overfitting anymore. We can then check the permutation importances with this new model.
```
print(f"RF train accuracy: {rf.score(X_train, y_train):.3f}")
print(f"RF test accuracy: {rf.score(X_test, y_test):.3f}")
```
```
RF train accuracy: 0.810
RF test accuracy: 0.832
```
```
train_result = permutation_importance(
rf, X_train, y_train, n_repeats=10, random_state=42, n_jobs=2
)
test_results = permutation_importance(
rf, X_test, y_test, n_repeats=10, random_state=42, n_jobs=2
)
sorted_importances_idx = train_result.importances_mean.argsort()
```
```
train_importances = pd.DataFrame(
train_result.importances[sorted_importances_idx].T,
columns=X.columns[sorted_importances_idx],
)
test_importances = pd.DataFrame(
test_results.importances[sorted_importances_idx].T,
columns=X.columns[sorted_importances_idx],
)
```
```
for name, importances in zip(["train", "test"], [train_importances, test_importances]):
ax = importances.plot.box(vert=False, whis=10)
ax.set_title(f"Permutation Importances ({name} set)")
ax.set_xlabel("Decrease in accuracy score")
ax.axvline(x=0, color="k", linestyle="--")
ax.figure.tight_layout()
```
Now, we can observe that on both sets, the `random_num` and `random_cat` features have a lower importance compared to the overfitting random forest. However, the conclusions regarding the importance of the other features are still valid.
**Total running time of the script:** ( 0 minutes 4.791 seconds)
[`Download Python source code: plot_permutation_importance.py`](https://scikit-learn.org/1.1/_downloads/3c3c738275484acc54821615bf72894a/plot_permutation_importance.py)
[`Download Jupyter notebook: plot_permutation_importance.ipynb`](https://scikit-learn.org/1.1/_downloads/f99bb35c32eb8028063a1428c3999b84/plot_permutation_importance.ipynb)
scikit_learn Plot the support vectors in LinearSVC Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-linearsvc-support-vectors-py) to download the full example code or to run this example in your browser via Binder
Plot the support vectors in LinearSVC
=====================================
Unlike SVC (based on LIBSVM), LinearSVC (based on LIBLINEAR) does not provide the support vectors. This example demonstrates how to obtain the support vectors in LinearSVC.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC
from sklearn.inspection import DecisionBoundaryDisplay
X, y = make_blobs(n_samples=40, centers=2, random_state=0)
plt.figure(figsize=(10, 5))
for i, C in enumerate([1, 100]):
# "hinge" is the standard SVM loss
clf = LinearSVC(C=C, loss="hinge", random_state=42).fit(X, y)
# obtain the support vectors through the decision function
decision_function = clf.decision_function(X)
# we can also calculate the decision function manually
# decision_function = np.dot(X, clf.coef_[0]) + clf.intercept_[0]
# The support vectors are the samples that lie within the margin
# boundaries, whose size is conventionally constrained to 1
support_vector_indices = np.where(np.abs(decision_function) <= 1 + 1e-15)[0]
support_vectors = X[support_vector_indices]
plt.subplot(1, 2, i + 1)
plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.Paired)
ax = plt.gca()
DecisionBoundaryDisplay.from_estimator(
clf,
X,
ax=ax,
grid_resolution=50,
plot_method="contour",
colors="k",
levels=[-1, 0, 1],
alpha=0.5,
linestyles=["--", "-", "--"],
)
plt.scatter(
support_vectors[:, 0],
support_vectors[:, 1],
s=100,
linewidth=1,
facecolors="none",
edgecolors="k",
)
plt.title("C=" + str(C))
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.119 seconds)
[`Download Python source code: plot_linearsvc_support_vectors.py`](https://scikit-learn.org/1.1/_downloads/036b9372e2e7802453cbb994da7a6786/plot_linearsvc_support_vectors.py)
[`Download Jupyter notebook: plot_linearsvc_support_vectors.ipynb`](https://scikit-learn.org/1.1/_downloads/12a392e818ac5fa47dd91461855f3f77/plot_linearsvc_support_vectors.ipynb)
scikit_learn SVM Tie Breaking Example Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-svm-tie-breaking-py) to download the full example code or to run this example in your browser via Binder
SVM Tie Breaking Example
========================
Tie breaking is costly if `decision_function_shape='ovr'`, and therefore it is not enabled by default. This example illustrates the effect of the `break_ties` parameter for a multiclass classification problem and `decision_function_shape='ovr'`.
The two plots differ only in the area in the middle where the classes are tied. If `break_ties=False`, all input in that area would be classified as one class, whereas if `break_ties=True`, the tie-breaking mechanism will create a non-convex decision boundary in that area.
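Before the full example below, a small numerical sketch (an addition, not part of the original example) of the same effect: the two settings only disagree on points that fall in the tied region.
```
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(random_state=27)
common = dict(kernel="linear", C=1, decision_function_shape="ovr")
clf_ties = SVC(break_ties=True, **common).fit(X, y)
clf_no_ties = SVC(break_ties=False, **common).fit(X, y)

# Sample a grid of points and count where the two predictions differ.
rng = np.random.RandomState(0)
grid = rng.uniform(X.min(axis=0), X.max(axis=0), size=(5000, 2))
print(np.mean(clf_ties.predict(grid) != clf_no_ties.predict(grid)))
```
The printed value is the fraction of grid points whose predicted class changes when tie breaking is enabled; these points all lie in the tied area in the middle of the plots.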
```
# Code source: Andreas Mueller, Adrin Jalali
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=27)
fig, sub = plt.subplots(2, 1, figsize=(5, 8))
titles = ("break_ties = False", "break_ties = True")
for break_ties, title, ax in zip((False, True), titles, sub.flatten()):
svm = SVC(
kernel="linear", C=1, break_ties=break_ties, decision_function_shape="ovr"
).fit(X, y)
xlim = [X[:, 0].min(), X[:, 0].max()]
ylim = [X[:, 1].min(), X[:, 1].max()]
xs = np.linspace(xlim[0], xlim[1], 1000)
ys = np.linspace(ylim[0], ylim[1], 1000)
xx, yy = np.meshgrid(xs, ys)
pred = svm.predict(np.c_[xx.ravel(), yy.ravel()])
colors = [plt.cm.Accent(i) for i in [0, 4, 7]]
points = ax.scatter(X[:, 0], X[:, 1], c=y, cmap="Accent")
classes = [(0, 1), (0, 2), (1, 2)]
line = np.linspace(X[:, 1].min() - 5, X[:, 1].max() + 5)
ax.imshow(
-pred.reshape(xx.shape),
cmap="Accent",
alpha=0.2,
extent=(xlim[0], xlim[1], ylim[1], ylim[0]),
)
for coef, intercept, col in zip(svm.coef_, svm.intercept_, classes):
line2 = -(line * coef[1] + intercept) / coef[0]
ax.plot(line2, line, "-", c=colors[col[0]])
ax.plot(line2, line, "--", c=colors[col[1]])
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_title(title)
ax.set_aspect("equal")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.967 seconds)
[`Download Python source code: plot_svm_tie_breaking.py`](https://scikit-learn.org/1.1/_downloads/87c535e0896b225a19d2142c6f7c6744/plot_svm_tie_breaking.py)
[`Download Jupyter notebook: plot_svm_tie_breaking.ipynb`](https://scikit-learn.org/1.1/_downloads/358cfc6f7881b63fd4067f3b13ced9ab/plot_svm_tie_breaking.ipynb)
scikit_learn SVM with custom kernel Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-custom-kernel-py) to download the full example code or to run this example in your browser via Binder
SVM with custom kernel
======================
Simple usage of Support Vector Machines to classify a sample. It will plot the decision surface and the support vectors.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.inspection import DecisionBoundaryDisplay
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
Y = iris.target
def my_kernel(X, Y):
"""
We create a custom kernel:
(2 0)
k(X, Y) = X ( ) Y.T
(0 1)
"""
M = np.array([[2, 0], [0, 1.0]])
return np.dot(np.dot(X, M), Y.T)
h = 0.02 # step size in the mesh
# we create an instance of SVM and fit our data.
clf = svm.SVC(kernel=my_kernel)
clf.fit(X, Y)
ax = plt.gca()
DecisionBoundaryDisplay.from_estimator(
clf,
X,
cmap=plt.cm.Paired,
ax=ax,
response_method="predict",
plot_method="pcolormesh",
shading="auto",
)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, edgecolors="k")
plt.title("3-Class classification using Support Vector Machine with custom kernel")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.080 seconds)
[`Download Python source code: plot_custom_kernel.py`](https://scikit-learn.org/1.1/_downloads/31b78d45b09682c72f57e77cf94a939e/plot_custom_kernel.py)
[`Download Jupyter notebook: plot_custom_kernel.ipynb`](https://scikit-learn.org/1.1/_downloads/1160eee327e01cad702c4964e1c69f45/plot_custom_kernel.ipynb)
scikit_learn Scaling the regularization parameter for SVCs Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-svm-scale-c-py) to download the full example code or to run this example in your browser via Binder
Scaling the regularization parameter for SVCs
=============================================
The following example illustrates the effect of scaling the regularization parameter when using [Support Vector Machines](../../modules/svm#svm) for [classification](../../modules/svm#svm-classification). For SVC classification, we are interested in a risk minimization for the equation:
\[C \sum_{i=1}^{n} \mathcal{L} (f(x_i), y_i) + \Omega (w)\] where
* \(C\) is used to set the amount of regularization
* \(\mathcal{L}\) is a `loss` function of our samples and our model parameters.
* \(\Omega\) is a `penalty` function of our model parameters
If we consider the loss function to be the individual error per sample, then the data-fit term, or the sum of the error for each sample, will increase as we add more samples. The penalization term, however, will not increase.
When using, for example, [cross validation](../../modules/cross_validation#cross-validation) to set the amount of regularization with `C`, there will be a different number of samples between the main problem and the smaller problems within the folds of the cross validation.
Since our loss function depends on the number of samples, the latter will influence the selected value of `C`. The question that arises is: how do we optimally adjust `C` to account for the different number of training samples?
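One way to make this concrete (a sketch of the reasoning, added here): rewriting the data-fit term as an average over samples, \[C \sum_{i=1}^{n} \mathcal{L}(f(x_i), y_i) = (C\,n) \cdot \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}(f(x_i), y_i),\] shows that the product \(C\,n\) sets the weight of the average loss relative to the penalty \(\Omega(w)\); keeping that product constant as \(n\) changes (i.e. scaling \(C \propto 1/n\)) keeps the two terms balanced in the same way.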
The figures below are used to illustrate the effect of scaling our `C` to compensate for the change in the number of samples, in the case of using an `l1` penalty, as well as the `l2` penalty.
l1-penalty case
---------------
In the `l1` case, theory says that prediction consistency (i.e. that under given hypothesis, the estimator learned predicts as well as a model knowing the true distribution) is not possible because of the bias of the `l1`. It does say, however, that model consistency, in terms of finding the right set of non-zero parameters as well as their signs, can be achieved by scaling `C`.
l2-penalty case
---------------
The theory says that in order to achieve prediction consistency, the penalty parameter should be kept constant as the number of samples grows.
Simulations
-----------
The two figures below plot the values of `C` on the `x-axis` and the corresponding cross-validation scores on the `y-axis`, for several different fractions of a generated data-set.
In the `l1` penalty case, the cross-validation error correlates best with the test error when we scale `C` with the number of samples, `n`, as can be seen in the first figure.
For the `l2` penalty case, the best result comes from the case where `C` is not scaled.
(Figures: cross-validation score versus `C`, with and without scaling by the number of samples, for the l1- and l2-penalized models.)
```
# Author: Andreas Mueller <[email protected]>
# Jaques Grobler <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import LinearSVC
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import GridSearchCV
from sklearn.utils import check_random_state
from sklearn import datasets
rnd = check_random_state(1)
# set up dataset
n_samples = 100
n_features = 300
# l1 data (only 5 informative features)
X_1, y_1 = datasets.make_classification(
n_samples=n_samples, n_features=n_features, n_informative=5, random_state=1
)
# l2 data: non sparse, but less features
y_2 = np.sign(0.5 - rnd.rand(n_samples))
X_2 = rnd.randn(n_samples, n_features // 5) + y_2[:, np.newaxis]
X_2 += 5 * rnd.randn(n_samples, n_features // 5)
clf_sets = [
(
LinearSVC(penalty="l1", loss="squared_hinge", dual=False, tol=1e-3),
np.logspace(-2.3, -1.3, 10),
X_1,
y_1,
),
(
LinearSVC(penalty="l2", loss="squared_hinge", dual=True),
np.logspace(-4.5, -2, 10),
X_2,
y_2,
),
]
colors = ["navy", "cyan", "darkorange"]
lw = 2
for clf, cs, X, y in clf_sets:
# set up the plot for each regressor
fig, axes = plt.subplots(nrows=2, sharey=True, figsize=(9, 10))
for k, train_size in enumerate(np.linspace(0.3, 0.7, 3)[::-1]):
param_grid = dict(C=cs)
# To get nice curve, we need a large number of iterations to
# reduce the variance
grid = GridSearchCV(
clf,
refit=False,
param_grid=param_grid,
cv=ShuffleSplit(
train_size=train_size, test_size=0.3, n_splits=50, random_state=1
),
)
grid.fit(X, y)
scores = grid.cv_results_["mean_test_score"]
scales = [
(1, "No scaling"),
((n_samples * train_size), "1/n_samples"),
]
for ax, (scaler, name) in zip(axes, scales):
ax.set_xlabel("C")
ax.set_ylabel("CV Score")
grid_cs = cs * float(scaler) # scale the C's
ax.semilogx(
grid_cs,
scores,
label="fraction %.2f" % train_size,
color=colors[k],
lw=lw,
)
ax.set_title(
"scaling=%s, penalty=%s, loss=%s" % (name, clf.penalty, clf.loss)
)
plt.legend(loc="best")
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.656 seconds)
[`Download Python source code: plot_svm_scale_c.py`](https://scikit-learn.org/1.1/_downloads/71939e5f0d3f41f3e224fafac2fea9f2/plot_svm_scale_c.py)
[`Download Jupyter notebook: plot_svm_scale_c.ipynb`](https://scikit-learn.org/1.1/_downloads/1ed4d16a866c9fe4d86a05477e6d0664/plot_svm_scale_c.ipynb)
scikit_learn Non-linear SVM Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-svm-nonlinear-py) to download the full example code or to run this example in your browser via Binder
Non-linear SVM
==============
Perform binary classification using non-linear SVC with RBF kernel. The target to predict is an XOR of the inputs.
The color map illustrates the decision function learned by the SVC.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
xx, yy = np.meshgrid(np.linspace(-3, 3, 500), np.linspace(-3, 3, 500))
np.random.seed(0)
X = np.random.randn(300, 2)
Y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)
# fit the model
clf = svm.NuSVC(gamma="auto")
clf.fit(X, Y)
# plot the decision function for each datapoint on the grid
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.imshow(
Z,
interpolation="nearest",
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
aspect="auto",
origin="lower",
cmap=plt.cm.PuOr_r,
)
contours = plt.contour(xx, yy, Z, levels=[0], linewidths=2, linestyles="dashed")
plt.scatter(X[:, 0], X[:, 1], s=30, c=Y, cmap=plt.cm.Paired, edgecolors="k")
plt.xticks(())
plt.yticks(())
plt.axis([-3, 3, -3, 3])
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.542 seconds)
[`Download Python source code: plot_svm_nonlinear.py`](https://scikit-learn.org/1.1/_downloads/e8efe2a99bfe31c0f78509cb952cc8d5/plot_svm_nonlinear.py)
[`Download Jupyter notebook: plot_svm_nonlinear.ipynb`](https://scikit-learn.org/1.1/_downloads/8aa42a9df15e5712b4b0d5161f6ea6e9/plot_svm_nonlinear.ipynb)
scikit_learn Support Vector Regression (SVR) using linear and non-linear kernels Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-svm-regression-py) to download the full example code or to run this example in your browser via Binder
Support Vector Regression (SVR) using linear and non-linear kernels
===================================================================
Toy example of 1D regression using linear, polynomial and RBF kernels.
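As a brief reminder of the model being fitted (the standard SVR formulation, not specific to this example): SVR uses the \(\varepsilon\)-insensitive loss \(\max(0, |y - f(x)| - \varepsilon)\), so deviations smaller than `epsilon` are not penalized, `C` controls the penalty on larger deviations, and only the samples lying on or outside this tube become support vectors, which is why they are highlighted separately in the plots below.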
```
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt
```
Generate sample data
--------------------
```
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()
# add noise to targets
y[::5] += 3 * (0.5 - np.random.rand(8))
```
Fit regression model
--------------------
```
svr_rbf = SVR(kernel="rbf", C=100, gamma=0.1, epsilon=0.1)
svr_lin = SVR(kernel="linear", C=100, gamma="auto")
svr_poly = SVR(kernel="poly", C=100, gamma="auto", degree=3, epsilon=0.1, coef0=1)
```
Look at the results
-------------------
```
lw = 2
svrs = [svr_rbf, svr_lin, svr_poly]
kernel_label = ["RBF", "Linear", "Polynomial"]
model_color = ["m", "c", "g"]
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 10), sharey=True)
for ix, svr in enumerate(svrs):
axes[ix].plot(
X,
svr.fit(X, y).predict(X),
color=model_color[ix],
lw=lw,
label="{} model".format(kernel_label[ix]),
)
axes[ix].scatter(
X[svr.support_],
y[svr.support_],
facecolor="none",
edgecolor=model_color[ix],
s=50,
label="{} support vectors".format(kernel_label[ix]),
)
axes[ix].scatter(
X[np.setdiff1d(np.arange(len(X)), svr.support_)],
y[np.setdiff1d(np.arange(len(X)), svr.support_)],
facecolor="none",
edgecolor="k",
s=50,
label="other training data",
)
axes[ix].legend(
loc="upper center",
bbox_to_anchor=(0.5, 1.1),
ncol=1,
fancybox=True,
shadow=True,
)
fig.text(0.5, 0.04, "data", ha="center", va="center")
fig.text(0.06, 0.5, "target", ha="center", va="center", rotation="vertical")
fig.suptitle("Support Vector Regression", fontsize=14)
plt.show()
```
**Total running time of the script:** ( 0 minutes 2.396 seconds)
[`Download Python source code: plot_svm_regression.py`](https://scikit-learn.org/1.1/_downloads/92eee1e3e3d824360fee295386f12216/plot_svm_regression.py)
[`Download Jupyter notebook: plot_svm_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/85b2b4f3ecdf3d37834c318b4889b4c0/plot_svm_regression.ipynb)
scikit_learn SVM: Weighted samples Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-weighted-samples-py) to download the full example code or to run this example in your browser via Binder
SVM: Weighted samples
=====================
Plot decision function of a weighted dataset, where the size of each point is proportional to its weight.
The sample weighting rescales the C parameter, which means that the classifier puts more emphasis on getting these points right. The effect might often be subtle. To emphasize the effect here, we particularly weight outliers, making the deformation of the decision boundary very visible.
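Concretely (a clarification following the libsvm convention that `SVC` uses; not part of the original text): `sample_weight` rescales `C` per sample, so the effective penalty for misclassifying sample \(i\) is \(C_i = C \cdot w_i\). The outlier whose weight is multiplied by 15 below therefore becomes correspondingly more expensive to misclassify, which pushes the decision boundary away from it.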
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
def plot_decision_function(classifier, sample_weight, axis, title):
# plot the decision function
xx, yy = np.meshgrid(np.linspace(-4, 5, 500), np.linspace(-4, 5, 500))
Z = classifier.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# plot the line, the points, and the nearest vectors to the plane
axis.contourf(xx, yy, Z, alpha=0.75, cmap=plt.cm.bone)
axis.scatter(
X[:, 0],
X[:, 1],
c=y,
s=100 * sample_weight,
alpha=0.9,
cmap=plt.cm.bone,
edgecolors="black",
)
axis.axis("off")
axis.set_title(title)
# we create 20 points
np.random.seed(0)
X = np.r_[np.random.randn(10, 2) + [1, 1], np.random.randn(10, 2)]
y = [1] * 10 + [-1] * 10
sample_weight_last_ten = abs(np.random.randn(len(X)))
sample_weight_constant = np.ones(len(X))
# and bigger weights to some outliers
sample_weight_last_ten[15:] *= 5
sample_weight_last_ten[9] *= 15
# Fit the models.
# This model does not take into account sample weights.
clf_no_weights = svm.SVC(gamma=1)
clf_no_weights.fit(X, y)
# This other model takes into account some dedicated sample weights.
clf_weights = svm.SVC(gamma=1)
clf_weights.fit(X, y, sample_weight=sample_weight_last_ten)
fig, axes = plt.subplots(1, 2, figsize=(14, 6))
plot_decision_function(
clf_no_weights, sample_weight_constant, axes[0], "Constant weights"
)
plot_decision_function(clf_weights, sample_weight_last_ten, axes[1], "Modified weights")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.472 seconds)
[`Download Python source code: plot_weighted_samples.py`](https://scikit-learn.org/1.1/_downloads/d03d4aeab237925427bfe3c81433b953/plot_weighted_samples.py)
[`Download Jupyter notebook: plot_weighted_samples.ipynb`](https://scikit-learn.org/1.1/_downloads/cea3fc06a4f342c635cadafa63b33319/plot_weighted_samples.ipynb)
scikit_learn Plot different SVM classifiers in the iris dataset Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-iris-svc-py) to download the full example code or to run this example in your browser via Binder
Plot different SVM classifiers in the iris dataset
==================================================
Comparison of different linear SVM classifiers on a 2D projection of the iris dataset. We only consider the first 2 features of this dataset:
* Sepal length
* Sepal width
This example shows how to plot the decision surface for four SVM classifiers with different kernels.
The linear models `LinearSVC()` and `SVC(kernel='linear')` yield slightly different decision boundaries. This can be a consequence of the following differences:
* `LinearSVC` minimizes the squared hinge loss while `SVC` minimizes the regular hinge loss.
* `LinearSVC` uses the One-vs-All (also known as One-vs-Rest) multiclass reduction while `SVC` uses the One-vs-One multiclass reduction.
Both linear models have linear decision boundaries (intersecting hyperplanes) while the non-linear kernel models (polynomial or Gaussian RBF) have more flexible non-linear decision boundaries with shapes that depend on the kind of kernel and its parameters.
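A small sketch (not part of the original example) of how to reduce the first of these differences: `LinearSVC` defaults to the squared hinge loss, but it can be switched to the regular hinge loss; the One-vs-Rest versus One-vs-One difference remains.
```
from sklearn.svm import LinearSVC, SVC

# Same loss as SVC(kernel="linear"), though the multiclass strategy still differs.
lin_hinge = LinearSVC(C=1.0, loss="hinge", max_iter=10000)
svc_linear = SVC(kernel="linear", C=1.0)
```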
Note
while plotting the decision function of classifiers for toy 2D datasets can help get an intuitive understanding of their respective expressive power, be aware that those intuitions don’t always generalize to more realistic high-dimensional problems.
```
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.inspection import DecisionBoundaryDisplay
# import some data to play with
iris = datasets.load_iris()
# Take the first two features. We could avoid this by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
C = 1.0 # SVM regularization parameter
models = (
svm.SVC(kernel="linear", C=C),
svm.LinearSVC(C=C, max_iter=10000),
svm.SVC(kernel="rbf", gamma=0.7, C=C),
svm.SVC(kernel="poly", degree=3, gamma="auto", C=C),
)
models = (clf.fit(X, y) for clf in models)
# title for the plots
titles = (
"SVC with linear kernel",
"LinearSVC (linear kernel)",
"SVC with RBF kernel",
"SVC with polynomial (degree 3) kernel",
)
# Set-up 2x2 grid for plotting.
fig, sub = plt.subplots(2, 2)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
X0, X1 = X[:, 0], X[:, 1]
for clf, title, ax in zip(models, titles, sub.flatten()):
disp = DecisionBoundaryDisplay.from_estimator(
clf,
X,
response_method="predict",
cmap=plt.cm.coolwarm,
alpha=0.8,
ax=ax,
xlabel=iris.feature_names[0],
ylabel=iris.feature_names[1],
)
ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors="k")
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.218 seconds)
[`Download Python source code: plot_iris_svc.py`](https://scikit-learn.org/1.1/_downloads/4186bc506946013950b224b06f827118/plot_iris_svc.py)
[`Download Jupyter notebook: plot_iris_svc.ipynb`](https://scikit-learn.org/1.1/_downloads/f84209ea397909becdc84b5de1a5b047/plot_iris_svc.ipynb)
| programming_docs |
scikit_learn SVM-Anova: SVM with univariate feature selection Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-svm-anova-py) to download the full example code or to run this example in your browser via Binder
SVM-Anova: SVM with univariate feature selection
================================================
This example shows how to perform univariate feature selection before running an SVC (support vector classifier) to improve the classification scores. We use the iris dataset (4 features) and add 36 non-informative features. We find that the model achieves its best performance when we select around 10% of the features.
Load some data to play with
---------------------------
```
import numpy as np
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
# Add non-informative features
rng = np.random.RandomState(0)
X = np.hstack((X, 2 * rng.random((X.shape[0], 36))))
```
Create the pipeline
-------------------
```
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
# Create a feature-selection transform, a scaler and an instance of SVM that we
# combine together to have a full-blown estimator
clf = Pipeline(
[
("anova", SelectPercentile(chi2)),
("scaler", StandardScaler()),
("svc", SVC(gamma="auto")),
]
)
```
Plot the cross-validation score as a function of percentile of features
-----------------------------------------------------------------------
```
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
score_means = list()
score_stds = list()
percentiles = (1, 3, 6, 10, 15, 20, 30, 40, 60, 80, 100)
for percentile in percentiles:
clf.set_params(anova__percentile=percentile)
this_scores = cross_val_score(clf, X, y)
score_means.append(this_scores.mean())
score_stds.append(this_scores.std())
plt.errorbar(percentiles, score_means, np.array(score_stds))
plt.title("Performance of the SVM-Anova varying the percentile of features selected")
plt.xticks(np.linspace(0, 100, 11, endpoint=True))
plt.xlabel("Percentile")
plt.ylabel("Accuracy Score")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.257 seconds)
[`Download Python source code: plot_svm_anova.py`](https://scikit-learn.org/1.1/_downloads/41973816d3932cd07b75d8825fd2c13d/plot_svm_anova.py)
[`Download Jupyter notebook: plot_svm_anova.ipynb`](https://scikit-learn.org/1.1/_downloads/6f4a6a0d8063b616c4aa4db2865de57c/plot_svm_anova.ipynb)
scikit_learn SVM: Separating hyperplane for unbalanced classes Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-separating-hyperplane-unbalanced-py) to download the full example code or to run this example in your browser via Binder
SVM: Separating hyperplane for unbalanced classes
=================================================
Find the optimal separating hyperplane using an SVC for classes that are unbalanced.
We first find the separating plane with a plain SVC and then plot (dashed) the separating hyperplane with automatic correction for the unbalanced classes.
Note
This example will also work by replacing `SVC(kernel="linear")` with `SGDClassifier(loss="hinge")`. Setting the `loss` parameter of the [`SGDClassifier`](../../modules/generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") equal to `hinge` will yield behaviour such as that of a SVC with a linear kernel.
For example try instead of the `SVC`:
```
clf = SGDClassifier(max_iter=100, alpha=0.01)
```
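In addition to the manual weighting used below (`class_weight={1: 10}`), scikit-learn can derive the weights automatically; a minimal sketch (an addition, not part of the original example):
```
from sklearn import svm

# "balanced" sets class weights to n_samples / (n_classes * np.bincount(y)),
# i.e. inversely proportional to the class frequencies.
wclf_auto = svm.SVC(kernel="linear", class_weight="balanced")
```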
```
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_blobs
from sklearn.inspection import DecisionBoundaryDisplay
# we create two clusters of random points
n_samples_1 = 1000
n_samples_2 = 100
centers = [[0.0, 0.0], [2.0, 2.0]]
clusters_std = [1.5, 0.5]
X, y = make_blobs(
n_samples=[n_samples_1, n_samples_2],
centers=centers,
cluster_std=clusters_std,
random_state=0,
shuffle=False,
)
# fit the model and get the separating hyperplane
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)
# fit the model and get the separating hyperplane using weighted classes
wclf = svm.SVC(kernel="linear", class_weight={1: 10})
wclf.fit(X, y)
# plot the samples
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired, edgecolors="k")
# plot the decision functions for both classifiers
ax = plt.gca()
disp = DecisionBoundaryDisplay.from_estimator(
clf,
X,
plot_method="contour",
colors="k",
levels=[0],
alpha=0.5,
linestyles=["-"],
ax=ax,
)
# plot decision boundary and margins for weighted classes
wdisp = DecisionBoundaryDisplay.from_estimator(
wclf,
X,
plot_method="contour",
colors="r",
levels=[0],
alpha=0.5,
linestyles=["-"],
ax=ax,
)
plt.legend(
[disp.surface_.collections[0], wdisp.surface_.collections[0]],
["non weighted", "weighted"],
loc="upper right",
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.162 seconds)
[`Download Python source code: plot_separating_hyperplane_unbalanced.py`](https://scikit-learn.org/1.1/_downloads/ff58cea6f60833cdaaaa13c98576eac5/plot_separating_hyperplane_unbalanced.py)
[`Download Jupyter notebook: plot_separating_hyperplane_unbalanced.ipynb`](https://scikit-learn.org/1.1/_downloads/d22cc4079af5678a112fa5aa77700680/plot_separating_hyperplane_unbalanced.ipynb)
scikit_learn SVM: Maximum margin separating hyperplane Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-separating-hyperplane-py) to download the full example code or to run this example in your browser via Binder
SVM: Maximum margin separating hyperplane
=========================================
Plot the maximum margin separating hyperplane within a two-class separable dataset using a Support Vector Machine classifier with linear kernel.
```
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_blobs
from sklearn.inspection import DecisionBoundaryDisplay
# we create 40 separable points
X, y = make_blobs(n_samples=40, centers=2, random_state=6)
# fit the model, don't regularize for illustration purposes
clf = svm.SVC(kernel="linear", C=1000)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.Paired)
# plot the decision function
ax = plt.gca()
DecisionBoundaryDisplay.from_estimator(
clf,
X,
plot_method="contour",
colors="k",
levels=[-1, 0, 1],
alpha=0.5,
linestyles=["--", "-", "--"],
ax=ax,
)
# plot support vectors
ax.scatter(
clf.support_vectors_[:, 0],
clf.support_vectors_[:, 1],
s=100,
linewidth=1,
facecolors="none",
edgecolors="k",
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.061 seconds)
[`Download Python source code: plot_separating_hyperplane.py`](https://scikit-learn.org/1.1/_downloads/438bb54b7ea4cc4c565d0dbe5647119f/plot_separating_hyperplane.py)
[`Download Jupyter notebook: plot_separating_hyperplane.ipynb`](https://scikit-learn.org/1.1/_downloads/e6a6423db5b5d652b6a6deafc8ca7ee0/plot_separating_hyperplane.ipynb)
scikit_learn SVM Margins Example Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-svm-margin-py) to download the full example code or to run this example in your browser via Binder
SVM Margins Example
===================
The plots below illustrate the effect the parameter `C` has on the separation line. A large value of `C` basically tells our model that we do not have that much faith in our data’s distribution, and will only consider points close to the line of separation.
A small value of `C` includes more/all the observations, allowing the margins to be calculated using all the data in the area.
(Figures: separating line and margins for the unregularized fit, C=1, and the regularized fit, C=0.05.)
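As background for the margin computation in the code below (a standard result, not specific to this example): for a linear decision function \(f(x) = w^\top x + b\), the margin boundaries are the sets where \(f(x) = \pm 1\); they lie at a distance of \(1 / \lVert w \rVert\) from the separating hyperplane \(f(x) = 0\), so the full margin width is \(2 / \lVert w \rVert\). A smaller `C` regularizes more strongly, shrinking \(\lVert w \rVert\) and therefore widening the margin, which is what `margin = 1 / np.sqrt(np.sum(clf.coef_**2))` computes below.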
```
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from sklearn import svm
# we create 40 separable points
np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20
# figure number
fignum = 1
# fit the model
for name, penalty in (("unreg", 1), ("reg", 0.05)):
clf = svm.SVC(kernel="linear", C=penalty)
clf.fit(X, Y)
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf.intercept_[0]) / w[1]
# plot the parallels to the separating hyperplane that pass through the
# support vectors (margin away from hyperplane in direction
# perpendicular to hyperplane). This is sqrt(1+a^2) away vertically in
# 2-d.
margin = 1 / np.sqrt(np.sum(clf.coef_**2))
yy_down = yy - np.sqrt(1 + a**2) * margin
yy_up = yy + np.sqrt(1 + a**2) * margin
# plot the line, the points, and the nearest vectors to the plane
plt.figure(fignum, figsize=(4, 3))
plt.clf()
plt.plot(xx, yy, "k-")
plt.plot(xx, yy_down, "k--")
plt.plot(xx, yy_up, "k--")
plt.scatter(
clf.support_vectors_[:, 0],
clf.support_vectors_[:, 1],
s=80,
facecolors="none",
zorder=10,
edgecolors="k",
cmap=cm.get_cmap("RdBu"),
)
plt.scatter(
X[:, 0], X[:, 1], c=Y, zorder=10, cmap=cm.get_cmap("RdBu"), edgecolors="k"
)
plt.axis("tight")
x_min = -4.8
x_max = 4.2
y_min = -6
y_max = 6
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
# Put the result into a contour plot
plt.contourf(XX, YY, Z, cmap=cm.get_cmap("RdBu"), alpha=0.5, linestyles=["-"])
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
fignum = fignum + 1
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.070 seconds)
[`Download Python source code: plot_svm_margin.py`](https://scikit-learn.org/1.1/_downloads/a6621d4718b448229cd4e8e18fdbe4c6/plot_svm_margin.py)
[`Download Jupyter notebook: plot_svm_margin.ipynb`](https://scikit-learn.org/1.1/_downloads/e5984993ae4c01e36276dcfed2bf0838/plot_svm_margin.ipynb)
scikit_learn One-class SVM with non-linear kernel (RBF) Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-oneclass-py) to download the full example code or to run this example in your browser via Binder
One-class SVM with non-linear kernel (RBF)
==========================================
An example using a one-class SVM for novelty detection.
[One-class SVM](../../modules/svm#svm-outlier-detection) is an unsupervised algorithm that learns a decision function for novelty detection: classifying new data as similar or different to the training set.
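A small numerical sketch of the role of `nu` (an addition, not part of the original example): it is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors, so in practice the fraction of training points flagged as outliers ends up close to `nu`.
```
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_train = 0.3 * rng.randn(200, 2)
clf = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1).fit(X_train)
# Fraction of training points predicted as outliers (-1); close to nu=0.1.
print(np.mean(clf.predict(X_train) == -1))
```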
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn import svm
xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5, 5, 500))
# Generate train data
X = 0.3 * np.random.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
# Generate some regular novel observations
X = 0.3 * np.random.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = np.random.uniform(low=-4, high=4, size=(20, 2))
# fit the model
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
clf.fit(X_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
y_pred_outliers = clf.predict(X_outliers)
n_error_train = y_pred_train[y_pred_train == -1].size
n_error_test = y_pred_test[y_pred_test == -1].size
n_error_outliers = y_pred_outliers[y_pred_outliers == 1].size
# plot the line, the points, and the nearest vectors to the plane
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title("Novelty Detection")
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.PuBu)
a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors="darkred")
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors="palevioletred")
s = 40
b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c="white", s=s, edgecolors="k")
b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c="blueviolet", s=s, edgecolors="k")
c = plt.scatter(X_outliers[:, 0], X_outliers[:, 1], c="gold", s=s, edgecolors="k")
plt.axis("tight")
plt.xlim((-5, 5))
plt.ylim((-5, 5))
plt.legend(
[a.collections[0], b1, b2, c],
[
"learned frontier",
"training observations",
"new regular observations",
"new abnormal observations",
],
loc="upper left",
prop=matplotlib.font_manager.FontProperties(size=11),
)
plt.xlabel(
"error train: %d/200 ; errors novel regular: %d/40 ; errors novel abnormal: %d/40"
% (n_error_train, n_error_test, n_error_outliers)
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.359 seconds)
[`Download Python source code: plot_oneclass.py`](https://scikit-learn.org/1.1/_downloads/616e8a231ab03301473c9183f6cf03e8/plot_oneclass.py)
[`Download Jupyter notebook: plot_oneclass.ipynb`](https://scikit-learn.org/1.1/_downloads/179a84f8da8ce09af733c9a82135ca4d/plot_oneclass.ipynb)
scikit_learn SVM-Kernels Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-svm-kernels-py) to download the full example code or to run this example in your browser via Binder
SVM-Kernels
===========
Three different types of SVM-Kernels are displayed below. The polynomial and RBF are especially useful when the data-points are not linearly separable.
(Figures: decision boundaries and support vectors for the linear, polynomial and RBF kernels.)
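For reference, the kernels being compared (standard definitions matching scikit-learn's parametrization, added here for clarity) are \(k_{\mathrm{linear}}(x, x') = x^\top x'\), \(k_{\mathrm{poly}}(x, x') = (\gamma\, x^\top x' + r)^d\) and \(k_{\mathrm{rbf}}(x, x') = \exp(-\gamma \lVert x - x' \rVert^2)\), where \(\gamma\), \(r\) and \(d\) correspond to the `gamma`, `coef0` and `degree` parameters of `SVC`.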
```
# Code source: Gaël Varoquaux
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
# Our dataset and targets
X = np.c_[
(0.4, -0.7),
(-1.5, -1),
(-1.4, -0.9),
(-1.3, -1.2),
(-1.1, -0.2),
(-1.2, -0.4),
(-0.5, 1.2),
(-1.5, 2.1),
(1, 1),
# --
(1.3, 0.8),
(1.2, 0.5),
(0.2, -2),
(0.5, -2.4),
(0.2, -2.3),
(0, -2.7),
(1.3, 2.1),
].T
Y = [0] * 8 + [1] * 8
# figure number
fignum = 1
# fit the model
for kernel in ("linear", "poly", "rbf"):
clf = svm.SVC(kernel=kernel, gamma=2)
clf.fit(X, Y)
# plot the line, the points, and the nearest vectors to the plane
plt.figure(fignum, figsize=(4, 3))
plt.clf()
plt.scatter(
clf.support_vectors_[:, 0],
clf.support_vectors_[:, 1],
s=80,
facecolors="none",
zorder=10,
edgecolors="k",
)
plt.scatter(X[:, 0], X[:, 1], c=Y, zorder=10, cmap=plt.cm.Paired, edgecolors="k")
plt.axis("tight")
x_min = -3
x_max = 3
y_min = -3
y_max = 3
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.figure(fignum, figsize=(4, 3))
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(
XX,
YY,
Z,
colors=["k", "k", "k"],
linestyles=["--", "-", "--"],
levels=[-0.5, 0, 0.5],
)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
fignum = fignum + 1
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.180 seconds)
[`Download Python source code: plot_svm_kernels.py`](https://scikit-learn.org/1.1/_downloads/8975399471ae75debd0b26fbe3013719/plot_svm_kernels.py)
[`Download Jupyter notebook: plot_svm_kernels.ipynb`](https://scikit-learn.org/1.1/_downloads/264f6891fa2130246a013d5f089e7b2e/plot_svm_kernels.ipynb)
scikit_learn RBF SVM parameters Note
Click [here](#sphx-glr-download-auto-examples-svm-plot-rbf-parameters-py) to download the full example code or to run this example in your browser via Binder
RBF SVM parameters
==================
This example illustrates the effect of the parameters `gamma` and `C` of the Radial Basis Function (RBF) kernel SVM.
Intuitively, the `gamma` parameter defines how far the influence of a single training example reaches, with low values meaning ‘far’ and high values meaning ‘close’. The `gamma` parameters can be seen as the inverse of the radius of influence of samples selected by the model as support vectors.
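To make the "radius of influence" intuition concrete (a clarification added here, not part of the original text): writing the RBF kernel as \(\exp(-\gamma \lVert x - x' \rVert^2) = \exp(-\lVert x - x' \rVert^2 / (2\sigma^2))\) gives \(\gamma = 1 / (2\sigma^2)\), so the kernel's characteristic length scale is \(\sigma = 1 / \sqrt{2\gamma}\): a large `gamma` means a small radius of influence, and vice versa.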
The `C` parameter trades off correct classification of training examples against maximization of the decision function’s margin. For larger values of `C`, a smaller margin will be accepted if the decision function is better at classifying all training points correctly. A lower `C` will encourage a larger margin, therefore a simpler decision function, at the cost of training accuracy. In other words `C` behaves as a regularization parameter in the SVM.
The first plot is a visualization of the decision function for a variety of parameter values on a simplified classification problem involving only 2 input features and 2 possible target classes (binary classification). Note that this kind of plot is not possible to do for problems with more features or target classes.
The second plot is a heatmap of the classifier’s cross-validation accuracy as a function of `C` and `gamma`. For this example we explore a relatively large grid for illustration purposes. In practice, a logarithmic grid from \(10^{-3}\) to \(10^3\) is usually sufficient. If the best parameters lie on the boundaries of the grid, it can be extended in that direction in a subsequent search.
Note that the heat map plot has a special colorbar with a midpoint value close to the score values of the best performing models so as to make it easy to tell them apart in the blink of an eye.
The behavior of the model is very sensitive to the `gamma` parameter. If `gamma` is too large, the radius of the area of influence of the support vectors only includes the support vector itself and no amount of regularization with `C` will be able to prevent overfitting.
When `gamma` is very small, the model is too constrained and cannot capture the complexity or “shape” of the data. The region of influence of any selected support vector would include the whole training set. The resulting model will behave similarly to a linear model with a set of hyperplanes that separate the centers of high density of any pair of two classes.
For intermediate values, we can see on the second plot that good models can be found on a diagonal of `C` and `gamma`. Smooth models (lower `gamma` values) can be made more complex by increasing the importance of classifying each point correctly (larger `C` values) hence the diagonal of good performing models.
Finally, one can also observe that for some intermediate values of `gamma` we get equally performing models when `C` becomes very large. This suggests that the set of support vectors does not change anymore. The radius of the RBF kernel alone acts as a good structural regularizer. Increasing `C` further doesn’t help, likely because there are no more training points in violation (inside the margin or wrongly classified), or at least no better solution can be found. Scores being equal, it may make sense to use the smaller `C` values, since very high `C` values typically increase fitting time.
On the other hand, lower `C` values generally lead to more support vectors, which may increase prediction time. Therefore, lowering the value of `C` involves a trade-off between fitting time and prediction time.
We should also note that small differences in scores result from the random splits of the cross-validation procedure. Those spurious variations can be smoothed out by increasing the number of CV iterations `n_splits` at the expense of compute time. Increasing the number of steps in `C_range` and `gamma_range` will increase the resolution of the hyper-parameter heat map.
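A sketch of the refinement suggested above (the values here are illustrative assumptions, not the ones used in this example): more CV splits smooth the score estimates and more grid steps increase the heat-map resolution, both at additional compute cost.
```
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

C_range = np.logspace(-2, 10, 25)        # finer C grid
gamma_range = np.logspace(-9, 3, 25)     # finer gamma grid
cv = StratifiedShuffleSplit(n_splits=20, test_size=0.2, random_state=42)
```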
Utility class to move the midpoint of a colormap to be around the values of interest.
```
import numpy as np
from matplotlib.colors import Normalize
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
```
Load and prepare data set
-------------------------
dataset for grid search
```
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
```
Dataset for decision function visualization: we only keep the first two features in X and sub-sample the dataset to keep only 2 classes and make it a binary classification problem.
```
X_2d = X[:, :2]
X_2d = X_2d[y > 0]
y_2d = y[y > 0]
y_2d -= 1
```
It is usually a good idea to scale the data for SVM training. We are cheating a bit in this example in scaling all of the data, instead of fitting the transformation on the training set and just applying it on the test set.
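A sketch of the non-cheating alternative (added for illustration, not used in this example): putting the scaler inside a `Pipeline` ensures it is fit on the training portion of each cross-validation split only.
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

model = make_pipeline(StandardScaler(), SVC())
# e.g. GridSearchCV(model, {"svc__C": C_range, "svc__gamma": gamma_range}, cv=cv)
```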
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = scaler.fit_transform(X)
X_2d = scaler.fit_transform(X_2d)
```
Train classifiers
-----------------
For an initial search, a logarithmic grid with basis 10 is often helpful. Using a basis of 2, a finer tuning can be achieved but at a much higher cost.
```
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection import GridSearchCV
C_range = np.logspace(-2, 10, 13)
gamma_range = np.logspace(-9, 3, 13)
param_grid = dict(gamma=gamma_range, C=C_range)
cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
grid = GridSearchCV(SVC(), param_grid=param_grid, cv=cv)
grid.fit(X, y)
print(
"The best parameters are %s with a score of %0.2f"
% (grid.best_params_, grid.best_score_)
)
```
```
The best parameters are {'C': 1.0, 'gamma': 0.1} with a score of 0.97
```
Now we need to fit a classifier for all parameters in the 2d version (we use a smaller set of parameters here because it takes a while to train)
```
C_2d_range = [1e-2, 1, 1e2]
gamma_2d_range = [1e-1, 1, 1e1]
classifiers = []
for C in C_2d_range:
for gamma in gamma_2d_range:
clf = SVC(C=C, gamma=gamma)
clf.fit(X_2d, y_2d)
classifiers.append((C, gamma, clf))
```
Visualization
-------------
draw visualization of parameter effects
```
import matplotlib.pyplot as plt
plt.figure(figsize=(8, 6))
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
for k, (C, gamma, clf) in enumerate(classifiers):
# evaluate decision function in a grid
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# visualize decision function for these parameters
plt.subplot(len(C_2d_range), len(gamma_2d_range), k + 1)
plt.title("gamma=10^%d, C=10^%d" % (np.log10(gamma), np.log10(C)), size="medium")
# visualize parameter's effect on decision function
plt.pcolormesh(xx, yy, -Z, cmap=plt.cm.RdBu)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y_2d, cmap=plt.cm.RdBu_r, edgecolors="k")
plt.xticks(())
plt.yticks(())
plt.axis("tight")
scores = grid.cv_results_["mean_test_score"].reshape(len(C_range), len(gamma_range))
```
Draw heatmap of the validation accuracy as a function of gamma and C
The scores are encoded as colors with the hot colormap, which varies from dark red to bright yellow. As the most interesting scores are all located in the 0.92 to 0.97 range we use a custom normalizer to set the mid-point to 0.92 so as to make it easier to visualize the small variations of score values in the interesting range while not brutally collapsing all the low score values to the same color.
```
plt.figure(figsize=(8, 6))
plt.subplots_adjust(left=0.2, right=0.95, bottom=0.15, top=0.95)
plt.imshow(
scores,
interpolation="nearest",
cmap=plt.cm.hot,
norm=MidpointNormalize(vmin=0.2, midpoint=0.92),
)
plt.xlabel("gamma")
plt.ylabel("C")
plt.colorbar()
plt.xticks(np.arange(len(gamma_range)), gamma_range, rotation=45)
plt.yticks(np.arange(len(C_range)), C_range)
plt.title("Validation accuracy")
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.455 seconds)
[`Download Python source code: plot_rbf_parameters.py`](https://scikit-learn.org/1.1/_downloads/ea8b449d4699d078ef9cc5cded54cc67/plot_rbf_parameters.py)
[`Download Jupyter notebook: plot_rbf_parameters.ipynb`](https://scikit-learn.org/1.1/_downloads/d55388904f5399e98ed36e971c4da3cf/plot_rbf_parameters.ipynb)
| programming_docs |
scikit_learn Manifold learning on handwritten digits: Locally Linear Embedding, Isomap… Note
Click [here](#sphx-glr-download-auto-examples-manifold-plot-lle-digits-py) to download the full example code or to run this example in your browser via Binder
Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
==========================================================================
We illustrate various embedding techniques on the digits dataset.
```
# Authors: Fabian Pedregosa <[email protected]>
# Olivier Grisel <[email protected]>
# Mathieu Blondel <[email protected]>
# Gael Varoquaux
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause (C) INRIA 2011
```
Load digits dataset
-------------------
We will load the digits dataset and only use the first six of the ten available classes.
```
from sklearn.datasets import load_digits
digits = load_digits(n_class=6)
X, y = digits.data, digits.target
n_samples, n_features = X.shape
n_neighbors = 30
```
We can plot the first hundred digits from this data set.
```
import matplotlib.pyplot as plt
fig, axs = plt.subplots(nrows=10, ncols=10, figsize=(6, 6))
for idx, ax in enumerate(axs.ravel()):
ax.imshow(X[idx].reshape((8, 8)), cmap=plt.cm.binary)
ax.axis("off")
_ = fig.suptitle("A selection from the 64-dimensional digits dataset", fontsize=16)
```
Helper function to plot embedding
---------------------------------
Below, we will use different techniques to embed the digits dataset. We will plot the projection of the original data onto each embedding. It will allow us to check whether or not digits are grouped together in the embedding space, or scattered across it.
```
import numpy as np
from matplotlib import offsetbox
from sklearn.preprocessing import MinMaxScaler
def plot_embedding(X, title):
_, ax = plt.subplots()
X = MinMaxScaler().fit_transform(X)
for digit in digits.target_names:
ax.scatter(
*X[y == digit].T,
marker=f"${digit}$",
s=60,
color=plt.cm.Dark2(digit),
alpha=0.425,
zorder=2,
)
shown_images = np.array([[1.0, 1.0]]) # just something big
for i in range(X.shape[0]):
# plot every digit on the embedding
# show an annotation box for a group of digits
dist = np.sum((X[i] - shown_images) ** 2, 1)
if np.min(dist) < 4e-3:
# don't show points that are too close
continue
shown_images = np.concatenate([shown_images, [X[i]]], axis=0)
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r), X[i]
)
imagebox.set(zorder=1)
ax.add_artist(imagebox)
ax.set_title(title)
ax.axis("off")
```
Embedding techniques comparison
-------------------------------
Below, we compare different techniques. However, there are a couple of things to note:
* the [`RandomTreesEmbedding`](../../modules/generated/sklearn.ensemble.randomtreesembedding#sklearn.ensemble.RandomTreesEmbedding "sklearn.ensemble.RandomTreesEmbedding") is not technically a manifold embedding method, as it learns a high-dimensional representation on which we apply a dimensionality reduction method. However, it is often useful to cast a dataset into a representation in which the classes are linearly-separable.
* the [`LinearDiscriminantAnalysis`](../../modules/generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis") and the [`NeighborhoodComponentsAnalysis`](../../modules/generated/sklearn.neighbors.neighborhoodcomponentsanalysis#sklearn.neighbors.NeighborhoodComponentsAnalysis "sklearn.neighbors.NeighborhoodComponentsAnalysis") are supervised dimensionality reduction methods, i.e. they make use of the provided labels, contrary to other methods.
* the [`TSNE`](../../modules/generated/sklearn.manifold.tsne#sklearn.manifold.TSNE "sklearn.manifold.TSNE") is initialized with the embedding that is generated by PCA in this example. It ensures global stability of the embedding, i.e., the embedding does not depend on random initialization.
```
from sklearn.decomposition import TruncatedSVD
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.manifold import (
Isomap,
LocallyLinearEmbedding,
MDS,
SpectralEmbedding,
TSNE,
)
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import SparseRandomProjection
embeddings = {
"Random projection embedding": SparseRandomProjection(
n_components=2, random_state=42
),
"Truncated SVD embedding": TruncatedSVD(n_components=2),
"Linear Discriminant Analysis embedding": LinearDiscriminantAnalysis(
n_components=2
),
"Isomap embedding": Isomap(n_neighbors=n_neighbors, n_components=2),
"Standard LLE embedding": LocallyLinearEmbedding(
n_neighbors=n_neighbors, n_components=2, method="standard"
),
"Modified LLE embedding": LocallyLinearEmbedding(
n_neighbors=n_neighbors, n_components=2, method="modified"
),
"Hessian LLE embedding": LocallyLinearEmbedding(
n_neighbors=n_neighbors, n_components=2, method="hessian"
),
"LTSA LLE embedding": LocallyLinearEmbedding(
n_neighbors=n_neighbors, n_components=2, method="ltsa"
),
"MDS embedding": MDS(n_components=2, n_init=1, max_iter=120, n_jobs=2),
"Random Trees embedding": make_pipeline(
RandomTreesEmbedding(n_estimators=200, max_depth=5, random_state=0),
TruncatedSVD(n_components=2),
),
"Spectral embedding": SpectralEmbedding(
n_components=2, random_state=0, eigen_solver="arpack"
),
"t-SNE embeedding": TSNE(
n_components=2,
init="pca",
learning_rate="auto",
n_iter=500,
n_iter_without_progress=150,
n_jobs=2,
random_state=0,
),
"NCA embedding": NeighborhoodComponentsAnalysis(
n_components=2, init="pca", random_state=0
),
}
```
Once we have declared all the methods of interest, we can run each of them and project the original data. We will store the projected data as well as the computational time needed to perform each projection.
```
from time import time
projections, timing = {}, {}
for name, transformer in embeddings.items():
if name.startswith("Linear Discriminant Analysis"):
data = X.copy()
data.flat[:: X.shape[1] + 1] += 0.01 # Make X invertible
else:
data = X
print(f"Computing {name}...")
start_time = time()
projections[name] = transformer.fit_transform(data, y)
timing[name] = time() - start_time
```
```
Computing Random projection embedding...
Computing Truncated SVD embedding...
Computing Linear Discriminant Analysis embedding...
Computing Isomap embedding...
Computing Standard LLE embedding...
Computing Modified LLE embedding...
Computing Hessian LLE embedding...
Computing LTSA LLE embedding...
Computing MDS embedding...
Computing Random Trees embedding...
Computing Spectral embedding...
Computing t-SNE embeedding...
/home/runner/work/scikit-learn/scikit-learn/sklearn/manifold/_t_sne.py:996: FutureWarning: The PCA initialization in TSNE will change to have the standard deviation of PC1 equal to 1e-4 in 1.2. This will ensure better convergence.
warnings.warn(
Computing NCA embedding...
```
Finally, we can plot the resulting projection given by each method.
```
for name in timing:
title = f"{name} (time {timing[name]:.3f}s)"
plot_embedding(projections[name], title)
plt.show()
```
(Figures: one 2D projection of the digits dataset per embedding method, with its computation time in the title.)
**Total running time of the script:** ( 0 minutes 14.856 seconds)
[`Download Python source code: plot_lle_digits.py`](https://scikit-learn.org/1.1/_downloads/9d97cc4ed755b7f2c7f9311bccc89a00/plot_lle_digits.py)
[`Download Jupyter notebook: plot_lle_digits.ipynb`](https://scikit-learn.org/1.1/_downloads/1e0968da80ca868bbdf21c1d0547f68c/plot_lle_digits.ipynb)
scikit_learn Swiss Roll And Swiss-Hole Reduction Note
Click [here](#sphx-glr-download-auto-examples-manifold-plot-swissroll-py) to download the full example code or to run this example in your browser via Binder
Swiss Roll And Swiss-Hole Reduction
===================================
This notebook seeks to compare two popular non-linear dimensionality reduction techniques, T-distributed Stochastic Neighbor Embedding (t-SNE) and Locally Linear Embedding (LLE), on the classic Swiss Roll dataset. Then, we will explore how they both deal with the addition of a hole in the data.
Swiss Roll
----------
We start by generating the Swiss Roll dataset.
```
import matplotlib.pyplot as plt
from sklearn import manifold, datasets
sr_points, sr_color = datasets.make_swiss_roll(n_samples=1500, random_state=0)
```
Now, let’s take a look at our data:
```
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection="3d")
fig.add_axes(ax)
ax.scatter(
sr_points[:, 0], sr_points[:, 1], sr_points[:, 2], c=sr_color, s=50, alpha=0.8
)
ax.set_title("Swiss Roll in Ambient Space")
ax.view_init(azim=-66, elev=12)
_ = ax.text2D(0.8, 0.05, s="n_samples=1500", transform=ax.transAxes)
```
Computing the LLE and t-SNE embeddings, we find that LLE seems to unroll the Swiss Roll pretty effectively. t-SNE, on the other hand, is able to preserve the general structure of the data but poorly represents the continuous nature of our original data. Instead, it seems to unnecessarily clump sections of points together.
```
sr_lle, sr_err = manifold.locally_linear_embedding(
sr_points, n_neighbors=12, n_components=2
)
sr_tsne = manifold.TSNE(
n_components=2, learning_rate="auto", perplexity=40, init="pca", random_state=0
).fit_transform(sr_points)
fig, axs = plt.subplots(figsize=(8, 8), nrows=2)
axs[0].scatter(sr_lle[:, 0], sr_lle[:, 1], c=sr_color)
axs[0].set_title("LLE Embedding of Swiss Roll")
axs[1].scatter(sr_tsne[:, 0], sr_tsne[:, 1], c=sr_color)
_ = axs[1].set_title("t-SNE Embedding of Swiss Roll")
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/manifold/_t_sne.py:996: FutureWarning: The PCA initialization in TSNE will change to have the standard deviation of PC1 equal to 1e-4 in 1.2. This will ensure better convergence.
warnings.warn(
```
Note
LLE seems to be stretching the points from the center (purple) of the swiss roll. However, we observe that this is simply a byproduct of how the data was generated. There is a higher density of points near the center of the roll, which ultimately affects how LLE reconstructs the data in a lower dimension.
Swiss-Hole
----------
Now let’s take a look at how both algorithms deal with us adding a hole to the data. First, we generate the Swiss-Hole dataset and plot it:
```
sh_points, sh_color = datasets.make_swiss_roll(
n_samples=1500, hole=True, random_state=0
)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection="3d")
fig.add_axes(ax)
ax.scatter(
sh_points[:, 0], sh_points[:, 1], sh_points[:, 2], c=sh_color, s=50, alpha=0.8
)
ax.set_title("Swiss-Hole in Ambient Space")
ax.view_init(azim=-66, elev=12)
_ = ax.text2D(0.8, 0.05, s="n_samples=1500", transform=ax.transAxes)
```
Computing the LLE and t-SNE embeddings, we obtain similar results to the Swiss Roll. LLE very capably unrolls the data and even preserves the hole. t-SNE again seems to clump sections of points together, but we note that it preserves the general topology of the original data.
```
sh_lle, sh_err = manifold.locally_linear_embedding(
sh_points, n_neighbors=12, n_components=2
)
sh_tsne = manifold.TSNE(
n_components=2, learning_rate="auto", perplexity=40, init="random", random_state=0
).fit_transform(sh_points)
fig, axs = plt.subplots(figsize=(8, 8), nrows=2)
axs[0].scatter(sh_lle[:, 0], sh_lle[:, 1], c=sh_color)
axs[0].set_title("LLE Embedding of Swiss-Hole")
axs[1].scatter(sh_tsne[:, 0], sh_tsne[:, 1], c=sh_color)
_ = axs[1].set_title("t-SNE Embedding of Swiss-Hole")
```
Concluding remarks
------------------
We note that t-SNE benefits from testing more combinations of parameters; better results could probably have been obtained by tuning these parameters further, as sketched below.
We observe that, as seen in the “Manifold learning on handwritten digits” example, t-SNE generally performs better than LLE on real world data.
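A sketch of the kind of parameter sweep mentioned above (the values are illustrative assumptions, not tuned choices; running all nine fits on 1500 points takes a while):
```
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import TSNE

sr_points, sr_color = make_swiss_roll(n_samples=1500, random_state=0)

# Fit one t-SNE embedding per (perplexity, early_exaggeration) combination.
embeddings = {
    (perplexity, exaggeration): TSNE(
        n_components=2,
        perplexity=perplexity,
        early_exaggeration=exaggeration,
        init="pca",
        learning_rate="auto",
        random_state=0,
    ).fit_transform(sr_points)
    for perplexity in (10, 30, 50)
    for exaggeration in (6, 12, 24)
}
```
Each embedding could then be plotted, colored by `sr_color`, to compare how well the roll structure is preserved.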
**Total running time of the script:** ( 0 minutes 18.006 seconds)
[`Download Python source code: plot_swissroll.py`](https://scikit-learn.org/1.1/_downloads/7f0a2318ad82288d649c688011f52618/plot_swissroll.py)
[`Download Jupyter notebook: plot_swissroll.ipynb`](https://scikit-learn.org/1.1/_downloads/4c773264381a88c3d3933952c6040058/plot_swissroll.ipynb)
scikit_learn t-SNE: The effect of various perplexity values on the shape Note
Click [here](#sphx-glr-download-auto-examples-manifold-plot-t-sne-perplexity-py) to download the full example code or to run this example in your browser via Binder
t-SNE: The effect of various perplexity values on the shape
===========================================================
An illustration of t-SNE on the two concentric circles and the S-curve datasets for different perplexity values.
We observe a tendency towards clearer shapes as the perplexity value increases.
The size, the distance and the shape of clusters may vary with the initialization and the perplexity value, and do not always convey a meaning.
As shown below, t-SNE with higher perplexities finds a meaningful topology for the two concentric circles, although the size and the distance of the circles vary slightly from the original. In contrast to the circles dataset, on the S-curve dataset the shapes visually diverge from the S-curve topology even for larger perplexity values.
For further details, “How to Use t-SNE Effectively” <https://distill.pub/2016/misread-tsne/> provides a good discussion of the effects of various parameters, as well as interactive plots to explore those effects.
```
circles, perplexity=5 in 0.15 sec
circles, perplexity=30 in 0.23 sec
circles, perplexity=50 in 0.26 sec
circles, perplexity=100 in 0.26 sec
S-curve, perplexity=5 in 0.15 sec
S-curve, perplexity=30 in 0.21 sec
S-curve, perplexity=50 in 0.26 sec
S-curve, perplexity=100 in 0.26 sec
uniform grid, perplexity=5 in 0.19 sec
uniform grid, perplexity=30 in 0.27 sec
uniform grid, perplexity=50 in 0.3 sec
uniform grid, perplexity=100 in 0.3 sec
```
```
# Author: Narine Kokhlikyan <[email protected]>
# License: BSD
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
from sklearn import manifold, datasets
from time import time
n_samples = 150
n_components = 2
(fig, subplots) = plt.subplots(3, 5, figsize=(15, 8))
perplexities = [5, 30, 50, 100]
X, y = datasets.make_circles(
n_samples=n_samples, factor=0.5, noise=0.05, random_state=0
)
red = y == 0
green = y == 1
ax = subplots[0][0]
ax.scatter(X[red, 0], X[red, 1], c="r")
ax.scatter(X[green, 0], X[green, 1], c="g")
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis("tight")
for i, perplexity in enumerate(perplexities):
ax = subplots[0][i + 1]
t0 = time()
tsne = manifold.TSNE(
n_components=n_components,
init="random",
random_state=0,
perplexity=perplexity,
learning_rate="auto",
n_iter=300,
)
Y = tsne.fit_transform(X)
t1 = time()
print("circles, perplexity=%d in %.2g sec" % (perplexity, t1 - t0))
ax.set_title("Perplexity=%d" % perplexity)
ax.scatter(Y[red, 0], Y[red, 1], c="r")
ax.scatter(Y[green, 0], Y[green, 1], c="g")
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
ax.axis("tight")
# Another example using s-curve
X, color = datasets.make_s_curve(n_samples, random_state=0)
ax = subplots[1][0]
ax.scatter(X[:, 0], X[:, 2], c=color)
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
for i, perplexity in enumerate(perplexities):
ax = subplots[1][i + 1]
t0 = time()
tsne = manifold.TSNE(
n_components=n_components,
init="random",
random_state=0,
perplexity=perplexity,
learning_rate="auto",
n_iter=300,
)
Y = tsne.fit_transform(X)
t1 = time()
print("S-curve, perplexity=%d in %.2g sec" % (perplexity, t1 - t0))
ax.set_title("Perplexity=%d" % perplexity)
ax.scatter(Y[:, 0], Y[:, 1], c=color)
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
ax.axis("tight")
# Another example using a 2D uniform grid
x = np.linspace(0, 1, int(np.sqrt(n_samples)))
xx, yy = np.meshgrid(x, x)
X = np.hstack(
[
xx.ravel().reshape(-1, 1),
yy.ravel().reshape(-1, 1),
]
)
color = xx.ravel()
ax = subplots[2][0]
ax.scatter(X[:, 0], X[:, 1], c=color)
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
for i, perplexity in enumerate(perplexities):
ax = subplots[2][i + 1]
t0 = time()
tsne = manifold.TSNE(
n_components=n_components,
init="random",
random_state=0,
perplexity=perplexity,
learning_rate="auto",
n_iter=400,
)
Y = tsne.fit_transform(X)
t1 = time()
print("uniform grid, perplexity=%d in %.2g sec" % (perplexity, t1 - t0))
ax.set_title("Perplexity=%d" % perplexity)
ax.scatter(Y[:, 0], Y[:, 1], c=color)
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
ax.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.333 seconds)
[`Download Python source code: plot_t_sne_perplexity.py`](https://scikit-learn.org/1.1/_downloads/dec20a8d5f622301132b632f5e0bd532/plot_t_sne_perplexity.py)
[`Download Jupyter notebook: plot_t_sne_perplexity.ipynb`](https://scikit-learn.org/1.1/_downloads/8b98cea0e0ec1ca3cc503c13ddac0537/plot_t_sne_perplexity.ipynb)
scikit_learn Manifold Learning methods on a severed sphere
Manifold Learning methods on a severed sphere
=============================================
An application of the different [Manifold learning](../../modules/manifold#manifold) techniques on a spherical data-set. Here one can see the use of dimensionality reduction in order to gain some intuition regarding the manifold learning methods. Regarding the dataset, the poles are cut from the sphere, as well as a thin slice down its side. This enables the manifold learning techniques to ‘spread it open’ whilst projecting it onto two dimensions.
For a similar example, where the methods are applied to the S-curve dataset, see [Comparison of Manifold Learning methods](plot_compare_methods#sphx-glr-auto-examples-manifold-plot-compare-methods-py)
Note that the purpose of the [MDS](../../modules/manifold#multidimensional-scaling) is to find a low-dimensional representation of the data (here 2D) in which the distances respect well the distances in the original high-dimensional space; unlike other manifold-learning algorithms, it does not seek an isotropic representation of the data in the low-dimensional space. Here the manifold problem matches fairly well that of representing a flat map of the Earth, as with a [map projection](https://en.wikipedia.org/wiki/Map_projection)

```
standard: 0.052 sec
ltsa: 0.077 sec
hessian: 0.14 sec
modified: 0.1 sec
ISO: 0.22 sec
MDS: 0.49 sec
Spectral Embedding: 0.042 sec
t-SNE: 3.8 sec
```
```
# Author: Jaques Grobler <[email protected]>
# License: BSD 3 clause
from time import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
from sklearn import manifold
from sklearn.utils import check_random_state
# Unused but required import for doing 3d projections with matplotlib < 3.2
import mpl_toolkits.mplot3d # noqa: F401
import warnings
# Variables for manifold learning.
n_neighbors = 10
n_samples = 1000
# Create our sphere.
random_state = check_random_state(0)
p = random_state.rand(n_samples) * (2 * np.pi - 0.55)
t = random_state.rand(n_samples) * np.pi
# Sever the poles from the sphere.
indices = (t < (np.pi - (np.pi / 8))) & (t > ((np.pi / 8)))
colors = p[indices]
x, y, z = (
np.sin(t[indices]) * np.cos(p[indices]),
np.sin(t[indices]) * np.sin(p[indices]),
np.cos(t[indices]),
)
# Plot our dataset.
fig = plt.figure(figsize=(15, 8))
plt.suptitle(
"Manifold Learning with %i points, %i neighbors" % (1000, n_neighbors), fontsize=14
)
ax = fig.add_subplot(251, projection="3d")
ax.scatter(x, y, z, c=p[indices], cmap=plt.cm.rainbow)
ax.view_init(40, -10)
sphere_data = np.array([x, y, z]).T
# Perform Locally Linear Embedding Manifold learning
methods = ["standard", "ltsa", "hessian", "modified"]
labels = ["LLE", "LTSA", "Hessian LLE", "Modified LLE"]
for i, method in enumerate(methods):
t0 = time()
trans_data = (
manifold.LocallyLinearEmbedding(
n_neighbors=n_neighbors, n_components=2, method=method
)
.fit_transform(sphere_data)
.T
)
t1 = time()
print("%s: %.2g sec" % (methods[i], t1 - t0))
ax = fig.add_subplot(252 + i)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("%s (%.2g sec)" % (labels[i], t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis("tight")
# Perform Isomap Manifold learning.
t0 = time()
trans_data = (
manifold.Isomap(n_neighbors=n_neighbors, n_components=2)
.fit_transform(sphere_data)
.T
)
t1 = time()
print("%s: %.2g sec" % ("ISO", t1 - t0))
ax = fig.add_subplot(257)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("%s (%.2g sec)" % ("Isomap", t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis("tight")
# Perform Multi-dimensional scaling.
t0 = time()
mds = manifold.MDS(2, max_iter=100, n_init=1)
trans_data = mds.fit_transform(sphere_data).T
t1 = time()
print("MDS: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(258)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("MDS (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis("tight")
# Perform Spectral Embedding.
t0 = time()
se = manifold.SpectralEmbedding(n_components=2, n_neighbors=n_neighbors)
trans_data = se.fit_transform(sphere_data).T
t1 = time()
print("Spectral Embedding: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(259)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("Spectral Embedding (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis("tight")
# Perform t-distributed stochastic neighbor embedding.
# TODO(1.2) Remove warning handling.
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore", message="The PCA initialization", category=FutureWarning
)
t0 = time()
tsne = manifold.TSNE(
n_components=2, init="pca", random_state=0, learning_rate="auto"
)
trans_data = tsne.fit_transform(sphere_data).T
t1 = time()
print("t-SNE: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(2, 5, 10)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("t-SNE (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.426 seconds)
[`Download Python source code: plot_manifold_sphere.py`](https://scikit-learn.org/1.1/_downloads/9846b34238b553e03157c49723da2b04/plot_manifold_sphere.py)
[`Download Jupyter notebook: plot_manifold_sphere.ipynb`](https://scikit-learn.org/1.1/_downloads/604c0a9de0e1b80dae9e6754fdb27014/plot_manifold_sphere.ipynb)
scikit_learn Multi-dimensional scaling
Multi-dimensional scaling
=========================
An illustration of the metric and non-metric MDS on generated noisy data.
The reconstructed points using the metric MDS and non-metric MDS are slightly shifted to avoid overlapping.
```
# Author: Nelle Varoquaux <[email protected]>
# License: BSD
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
from sklearn import manifold
from sklearn.metrics import euclidean_distances
from sklearn.decomposition import PCA
EPSILON = np.finfo(np.float32).eps
n_samples = 20
seed = np.random.RandomState(seed=3)
X_true = seed.randint(0, 20, 2 * n_samples).astype(float)
X_true = X_true.reshape((n_samples, 2))
# Center the data
X_true -= X_true.mean()
similarities = euclidean_distances(X_true)
# Add noise to the similarities
noise = np.random.rand(n_samples, n_samples)
noise = noise + noise.T
noise[np.arange(noise.shape[0]), np.arange(noise.shape[0])] = 0
similarities += noise
mds = manifold.MDS(
n_components=2,
max_iter=3000,
eps=1e-9,
random_state=seed,
dissimilarity="precomputed",
n_jobs=1,
)
pos = mds.fit(similarities).embedding_
nmds = manifold.MDS(
n_components=2,
metric=False,
max_iter=3000,
eps=1e-12,
dissimilarity="precomputed",
random_state=seed,
n_jobs=1,
n_init=1,
)
npos = nmds.fit_transform(similarities, init=pos)
# Rescale the data
pos *= np.sqrt((X_true**2).sum()) / np.sqrt((pos**2).sum())
npos *= np.sqrt((X_true**2).sum()) / np.sqrt((npos**2).sum())
# Rotate the data
clf = PCA(n_components=2)
X_true = clf.fit_transform(X_true)
pos = clf.fit_transform(pos)
npos = clf.fit_transform(npos)
fig = plt.figure(1)
ax = plt.axes([0.0, 0.0, 1.0, 1.0])
s = 100
plt.scatter(X_true[:, 0], X_true[:, 1], color="navy", s=s, lw=0, label="True Position")
plt.scatter(pos[:, 0], pos[:, 1], color="turquoise", s=s, lw=0, label="MDS")
plt.scatter(npos[:, 0], npos[:, 1], color="darkorange", s=s, lw=0, label="NMDS")
plt.legend(scatterpoints=1, loc="best", shadow=False)
similarities = similarities.max() / (similarities + EPSILON) * 100
np.fill_diagonal(similarities, 0)
# Plot the edges
start_idx, end_idx = np.where(pos)
# a sequence of (*line0*, *line1*, *line2*), where::
# linen = (x0, y0), (x1, y1), ... (xm, ym)
segments = [
[X_true[i, :], X_true[j, :]] for i in range(len(pos)) for j in range(len(pos))
]
values = np.abs(similarities)
lc = LineCollection(
segments, zorder=0, cmap=plt.cm.Blues, norm=plt.Normalize(0, values.max())
)
lc.set_array(similarities.flatten())
lc.set_linewidths(np.full(len(segments), 0.5))
ax.add_collection(lc)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.159 seconds)
[`Download Python source code: plot_mds.py`](https://scikit-learn.org/1.1/_downloads/d64ed0728005aaeba058d3ccec909e73/plot_mds.py)
[`Download Jupyter notebook: plot_mds.ipynb`](https://scikit-learn.org/1.1/_downloads/a9a5b8f39dd796eae478e4c9c39cd207/plot_mds.ipynb)
scikit_learn Comparison of Manifold Learning methods
Comparison of Manifold Learning methods
=======================================
An illustration of dimensionality reduction on the S-curve dataset with various manifold learning methods.
For a discussion and comparison of these algorithms, see the [manifold module page](../../modules/manifold#manifold)
For a similar example, where the methods are applied to a sphere dataset, see [Manifold Learning methods on a severed sphere](plot_manifold_sphere#sphx-glr-auto-examples-manifold-plot-manifold-sphere-py)
Note that the purpose of the MDS is to find a low-dimensional representation of the data (here 2D) in which the distances respect well the distances in the original high-dimensional space; unlike other manifold-learning algorithms, it does not seek an isotropic representation of the data in the low-dimensional space.
```
# Author: Jake Vanderplas -- <[email protected]>
```
Dataset preparation
-------------------
We start by generating the S-curve dataset.
```
from numpy.random import RandomState
import matplotlib.pyplot as plt
from matplotlib import ticker
# unused but required import for doing 3d projections with matplotlib < 3.2
import mpl_toolkits.mplot3d # noqa: F401
from sklearn import manifold, datasets
rng = RandomState(0)
n_samples = 1500
S_points, S_color = datasets.make_s_curve(n_samples, random_state=rng)
```
Let’s look at the original data. Also define some helping functions, which we will use further on.
```
def plot_3d(points, points_color, title):
x, y, z = points.T
fig, ax = plt.subplots(
figsize=(6, 6),
facecolor="white",
tight_layout=True,
subplot_kw={"projection": "3d"},
)
fig.suptitle(title, size=16)
col = ax.scatter(x, y, z, c=points_color, s=50, alpha=0.8)
ax.view_init(azim=-60, elev=9)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.zaxis.set_major_locator(ticker.MultipleLocator(1))
fig.colorbar(col, ax=ax, orientation="horizontal", shrink=0.6, aspect=60, pad=0.01)
plt.show()
def plot_2d(points, points_color, title):
fig, ax = plt.subplots(figsize=(3, 3), facecolor="white", constrained_layout=True)
fig.suptitle(title, size=16)
add_2d_scatter(ax, points, points_color)
plt.show()
def add_2d_scatter(ax, points, points_color, title=None):
x, y = points.T
ax.scatter(x, y, c=points_color, s=50, alpha=0.8)
ax.set_title(title)
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.yaxis.set_major_formatter(ticker.NullFormatter())
plot_3d(S_points, S_color, "Original S-curve samples")
```
Define algorithms for the manifold learning
-------------------------------------------
Manifold learning is an approach to non-linear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high.
Read more in the [User Guide](../../modules/manifold#manifold).
```
n_neighbors = 12 # neighborhood which is used to recover the locally linear structure
n_components = 2 # number of coordinates for the manifold
```
### Locally Linear Embeddings
Locally linear embedding (LLE) can be thought of as a series of local Principal Component Analyses which are globally compared to find the best non-linear embedding. Read more in the [User Guide](../../modules/manifold#locally-linear-embedding).
```
params = {
"n_neighbors": n_neighbors,
"n_components": n_components,
"eigen_solver": "auto",
"random_state": rng,
}
lle_standard = manifold.LocallyLinearEmbedding(method="standard", **params)
S_standard = lle_standard.fit_transform(S_points)
lle_ltsa = manifold.LocallyLinearEmbedding(method="ltsa", **params)
S_ltsa = lle_ltsa.fit_transform(S_points)
lle_hessian = manifold.LocallyLinearEmbedding(method="hessian", **params)
S_hessian = lle_hessian.fit_transform(S_points)
lle_mod = manifold.LocallyLinearEmbedding(method="modified", modified_tol=0.8, **params)
S_mod = lle_mod.fit_transform(S_points)
```
```
fig, axs = plt.subplots(
nrows=2, ncols=2, figsize=(7, 7), facecolor="white", constrained_layout=True
)
fig.suptitle("Locally Linear Embeddings", size=16)
lle_methods = [
("Standard locally linear embedding", S_standard),
("Local tangent space alignment", S_ltsa),
("Hessian eigenmap", S_hessian),
("Modified locally linear embedding", S_mod),
]
for ax, method in zip(axs.flat, lle_methods):
name, points = method
add_2d_scatter(ax, points, S_color, name)
plt.show()
```
 ### Isomap Embedding
Non-linear dimensionality reduction through Isometric Mapping. Isomap seeks a lower-dimensional embedding which maintains geodesic distances between all points. Read more in the [User Guide](../../modules/manifold#isomap).
```
isomap = manifold.Isomap(n_neighbors=n_neighbors, n_components=n_components, p=1)
S_isomap = isomap.fit_transform(S_points)
plot_2d(S_isomap, S_color, "Isomap Embedding")
```
### Multidimensional scaling
Multidimensional scaling (MDS) seeks a low-dimensional representation of the data in which the distances respect well the distances in the original high-dimensional space. Read more in the [User Guide](../../modules/manifold#multidimensional-scaling).
```
md_scaling = manifold.MDS(
n_components=n_components, max_iter=50, n_init=4, random_state=rng
)
S_scaling = md_scaling.fit_transform(S_points)
plot_2d(S_scaling, S_color, "Multidimensional scaling")
```
### Spectral embedding for non-linear dimensionality reduction
This implementation uses Laplacian Eigenmaps, which finds a low dimensional representation of the data using a spectral decomposition of the graph Laplacian. Read more in the [User Guide](../../modules/manifold#spectral-embedding).
```
spectral = manifold.SpectralEmbedding(
n_components=n_components, n_neighbors=n_neighbors
)
S_spectral = spectral.fit_transform(S_points)
plot_2d(S_spectral, S_color, "Spectral Embedding")
```
### T-distributed Stochastic Neighbor Embedding
It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results. Read more in the [User Guide](../../modules/manifold#t-sne).
```
t_sne = manifold.TSNE(
n_components=n_components,
learning_rate="auto",
perplexity=30,
n_iter=250,
init="random",
random_state=rng,
)
S_t_sne = t_sne.fit_transform(S_points)
plot_2d(S_t_sne, S_color, "T-distributed Stochastic \n Neighbor Embedding")
```
**Total running time of the script:** ( 0 minutes 11.397 seconds)
[`Download Python source code: plot_compare_methods.py`](https://scikit-learn.org/1.1/_downloads/cda53b33015268619bc212d32b7000b9/plot_compare_methods.py)
[`Download Jupyter notebook: plot_compare_methods.ipynb`](https://scikit-learn.org/1.1/_downloads/c8db473878b6afea8e75e36dc828f109/plot_compare_methods.ipynb)
scikit_learn SVM Exercise
SVM Exercise
============
A tutorial exercise for using different SVM kernels.
This exercise is used in the [Using kernels](../../tutorial/statistical_inference/supervised_learning#using-kernels-tut) part of the [Supervised learning: predicting an output variable from high-dimensional observations](../../tutorial/statistical_inference/supervised_learning#supervised-learning-tut) section of the [A tutorial on statistical-learning for scientific data processing](../../tutorial/statistical_inference/index#stat-learn-tut-index).
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, svm
iris = datasets.load_iris()
X = iris.data
y = iris.target
X = X[y != 0, :2]
y = y[y != 0]
n_sample = len(X)
np.random.seed(0)
order = np.random.permutation(n_sample)
X = X[order]
y = y[order].astype(float)
X_train = X[: int(0.9 * n_sample)]
y_train = y[: int(0.9 * n_sample)]
X_test = X[int(0.9 * n_sample) :]
y_test = y[int(0.9 * n_sample) :]
# fit the model
for kernel in ("linear", "rbf", "poly"):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X_train, y_train)
plt.figure()
plt.clf()
plt.scatter(
X[:, 0], X[:, 1], c=y, zorder=10, cmap=plt.cm.Paired, edgecolor="k", s=20
)
# Circle out the test data
plt.scatter(
X_test[:, 0], X_test[:, 1], s=80, facecolors="none", zorder=10, edgecolor="k"
)
plt.axis("tight")
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(
XX,
YY,
Z,
colors=["k", "k", "k"],
linestyles=["--", "-", "--"],
levels=[-0.5, 0, 0.5],
)
plt.title(kernel)
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.053 seconds)
[`Download Python source code: plot_iris_exercise.py`](https://scikit-learn.org/1.1/_downloads/a3ad6892094cf4c9641b7b11f9263348/plot_iris_exercise.py)
[`Download Jupyter notebook: plot_iris_exercise.ipynb`](https://scikit-learn.org/1.1/_downloads/eb3b0c569e154d9b77fdc4a01497b2db/plot_iris_exercise.ipynb)
scikit_learn Cross-validation on Digits Dataset Exercise
Cross-validation on Digits Dataset Exercise
===========================================
A tutorial exercise using Cross-validation with an SVM on the Digits dataset.
This exercise is used in the [Cross-validation generators](../../tutorial/statistical_inference/model_selection#cv-generators-tut) part of the [Model selection: choosing estimators and their parameters](../../tutorial/statistical_inference/model_selection#model-selection-tut) section of the [A tutorial on statistical-learning for scientific data processing](../../tutorial/statistical_inference/index#stat-learn-tut-index).
```
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn import datasets, svm
X, y = datasets.load_digits(return_X_y=True)
svc = svm.SVC(kernel="linear")
C_s = np.logspace(-10, 0, 10)
scores = list()
scores_std = list()
for C in C_s:
svc.C = C
this_scores = cross_val_score(svc, X, y, n_jobs=1)
scores.append(np.mean(this_scores))
scores_std.append(np.std(this_scores))
# Do the plotting
import matplotlib.pyplot as plt
plt.figure()
plt.semilogx(C_s, scores)
plt.semilogx(C_s, np.array(scores) + np.array(scores_std), "b--")
plt.semilogx(C_s, np.array(scores) - np.array(scores_std), "b--")
locs, labels = plt.yticks()
plt.yticks(locs, list(map(lambda x: "%g" % x, locs)))
plt.ylabel("CV score")
plt.xlabel("Parameter C")
plt.ylim(0, 1.1)
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.606 seconds)
[`Download Python source code: plot_cv_digits.py`](https://scikit-learn.org/1.1/_downloads/fd36f9f313ec11c434f71fda91733975/plot_cv_digits.py)
[`Download Jupyter notebook: plot_cv_digits.ipynb`](https://scikit-learn.org/1.1/_downloads/48802d222a21d57b36b5e7a61adb770c/plot_cv_digits.ipynb)
scikit_learn Digits Classification Exercise
Digits Classification Exercise
==============================
A tutorial exercise regarding the use of classification techniques on the Digits dataset.
This exercise is used in the [Classification](../../tutorial/statistical_inference/supervised_learning#clf-tut) part of the [Supervised learning: predicting an output variable from high-dimensional observations](../../tutorial/statistical_inference/supervised_learning#supervised-learning-tut) section of the [A tutorial on statistical-learning for scientific data processing](../../tutorial/statistical_inference/index#stat-learn-tut-index).
```
KNN score: 0.961111
LogisticRegression score: 0.933333
```
```
from sklearn import datasets, neighbors, linear_model
X_digits, y_digits = datasets.load_digits(return_X_y=True)
X_digits = X_digits / X_digits.max()
n_samples = len(X_digits)
X_train = X_digits[: int(0.9 * n_samples)]
y_train = y_digits[: int(0.9 * n_samples)]
X_test = X_digits[int(0.9 * n_samples) :]
y_test = y_digits[int(0.9 * n_samples) :]
knn = neighbors.KNeighborsClassifier()
logistic = linear_model.LogisticRegression(max_iter=1000)
print("KNN score: %f" % knn.fit(X_train, y_train).score(X_test, y_test))
print(
"LogisticRegression score: %f"
% logistic.fit(X_train, y_train).score(X_test, y_test)
)
```
**Total running time of the script:** ( 0 minutes 0.140 seconds)
[`Download Python source code: plot_digits_classification_exercise.py`](https://scikit-learn.org/1.1/_downloads/e4d278c5c3a8450d66b5dd01a57ae923/plot_digits_classification_exercise.py)
[`Download Jupyter notebook: plot_digits_classification_exercise.ipynb`](https://scikit-learn.org/1.1/_downloads/93e61a11238f7256c97ee66b7e6a275b/plot_digits_classification_exercise.ipynb)
scikit_learn Cross-validation on diabetes Dataset Exercise
Cross-validation on diabetes Dataset Exercise
=============================================
A tutorial exercise which uses cross-validation with linear models.
This exercise is used in the [Cross-validated estimators](../../tutorial/statistical_inference/model_selection#cv-estimators-tut) part of the [Model selection: choosing estimators and their parameters](../../tutorial/statistical_inference/model_selection#model-selection-tut) section of the [A tutorial on statistical-learning for scientific data processing](../../tutorial/statistical_inference/index#stat-learn-tut-index).
Load dataset and apply GridSearchCV
-----------------------------------
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
X, y = datasets.load_diabetes(return_X_y=True)
X = X[:150]
y = y[:150]
lasso = Lasso(random_state=0, max_iter=10000)
alphas = np.logspace(-4, -0.5, 30)
tuned_parameters = [{"alpha": alphas}]
n_folds = 5
clf = GridSearchCV(lasso, tuned_parameters, cv=n_folds, refit=False)
clf.fit(X, y)
scores = clf.cv_results_["mean_test_score"]
scores_std = clf.cv_results_["std_test_score"]
```
Plot error lines showing +/- std. errors of the scores
------------------------------------------------------
```
plt.figure().set_size_inches(8, 6)
plt.semilogx(alphas, scores)
std_error = scores_std / np.sqrt(n_folds)
plt.semilogx(alphas, scores + std_error, "b--")
plt.semilogx(alphas, scores - std_error, "b--")
# alpha=0.2 controls the translucency of the fill color
plt.fill_between(alphas, scores + std_error, scores - std_error, alpha=0.2)
plt.ylabel("CV score +/- std error")
plt.xlabel("alpha")
plt.axhline(np.max(scores), linestyle="--", color=".5")
plt.xlim([alphas[0], alphas[-1]])
```
```
(0.0001, 0.31622776601683794)
```
Bonus: how much can you trust the selection of alpha?
-----------------------------------------------------
```
# To answer this question we use the LassoCV object that sets its alpha
# parameter automatically from the data by internal cross-validation (i.e. it
# performs cross-validation on the training data it receives).
# We use external cross-validation to see how much the automatically obtained
# alphas differ across different cross-validation folds.
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold
lasso_cv = LassoCV(alphas=alphas, random_state=0, max_iter=10000)
k_fold = KFold(3)
print("Answer to the bonus question:", "how much can you trust the selection of alpha?")
print()
print("Alpha parameters maximising the generalization score on different")
print("subsets of the data:")
for k, (train, test) in enumerate(k_fold.split(X, y)):
lasso_cv.fit(X[train], y[train])
print(
"[fold {0}] alpha: {1:.5f}, score: {2:.5f}".format(
k, lasso_cv.alpha_, lasso_cv.score(X[test], y[test])
)
)
print()
print("Answer: Not very much since we obtained different alphas for different")
print("subsets of the data and moreover, the scores for these alphas differ")
print("quite substantially.")
plt.show()
```
```
Answer to the bonus question: how much can you trust the selection of alpha?
Alpha parameters maximising the generalization score on different
subsets of the data:
[fold 0] alpha: 0.05968, score: 0.54209
[fold 1] alpha: 0.04520, score: 0.15521
[fold 2] alpha: 0.07880, score: 0.45192
Answer: Not very much since we obtained different alphas for different
subsets of the data and moreover, the scores for these alphas differ
quite substantially.
```
**Total running time of the script:** ( 0 minutes 0.433 seconds)
[`Download Python source code: plot_cv_diabetes.py`](https://scikit-learn.org/1.1/_downloads/428a26d23bc55d1c898a0e4361695ad0/plot_cv_diabetes.py)
[`Download Jupyter notebook: plot_cv_diabetes.ipynb`](https://scikit-learn.org/1.1/_downloads/27d42183163dfa32c3c487b21701b537/plot_cv_diabetes.ipynb)
scikit_learn Time-related feature engineering
Time-related feature engineering
================================
This notebook introduces different strategies to leverage time-related features for a bike sharing demand regression task that is highly dependent on business cycles (days, weeks, months) and yearly season cycles.
In the process, we introduce how to perform periodic feature engineering using the [`sklearn.preprocessing.SplineTransformer`](../../modules/generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer") class and its `extrapolation="periodic"` option.
Data exploration on the Bike Sharing Demand dataset
---------------------------------------------------
We start by loading the data from the OpenML repository.
```
from sklearn.datasets import fetch_openml
bike_sharing = fetch_openml("Bike_Sharing_Demand", version=2, as_frame=True)
df = bike_sharing.frame
```
To get a quick understanding of the periodic patterns of the data, let us have a look at the average demand per hour during a week.
Note that the week starts on a Sunday, during the weekend. We can clearly distinguish the commute patterns in the morning and evenings of the work days and the leisure use of the bikes on the weekends with a more spread peak demand around the middle of the days:
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12, 4))
average_week_demand = df.groupby(["weekday", "hour"]).mean()["count"]
average_week_demand.plot(ax=ax)
_ = ax.set(
title="Average hourly bike demand during the week",
xticks=[i * 24 for i in range(7)],
xticklabels=["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"],
xlabel="Time of the week",
ylabel="Number of bike rentals",
)
```
The target of the prediction problem is the absolute count of bike rentals on an hourly basis:
```
df["count"].max()
```
```
977.0
```
Let us rescale the target variable (number of hourly bike rentals) to predict a relative demand so that the mean absolute error is more easily interpreted as a fraction of the maximum demand.
Note
The fit methods of the models used in this notebook all minimize the mean squared error to estimate the conditional mean, instead of the mean absolute error that would fit an estimator of the conditional median.
When reporting performance measures on the test set in the discussion, we instead choose to focus on the mean absolute error, which is more intuitive than the (root) mean squared error. Note, however, that the best models for one metric are also the best for the other in this study.
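As a side note, if the conditional median were the quantity of interest, one could (as a hedged sketch, not used in the rest of this notebook) fit the gradient boosting model with an absolute-error loss instead:
```
# Hedged sketch: an absolute error loss makes the gradient boosting model
# estimate the conditional median instead of the conditional mean.
from sklearn.ensemble import HistGradientBoostingRegressor

median_gbrt = HistGradientBoostingRegressor(loss="absolute_error", random_state=0)
```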
```
y = df["count"] / df["count"].max()
```
```
fig, ax = plt.subplots(figsize=(12, 4))
y.hist(bins=30, ax=ax)
_ = ax.set(
xlabel="Fraction of rented fleet demand",
ylabel="Number of hours",
)
```
The input feature data frame is a time annotated hourly log of variables describing the weather conditions. It includes both numerical and categorical variables. Note that the time information has already been expanded into several complementary columns.
```
X = df.drop("count", axis="columns")
X
```
| | season | year | month | hour | holiday | weekday | workingday | weather | temp | feel\_temp | humidity | windspeed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | spring | 0.0 | 1.0 | 0.0 | False | 6.0 | False | clear | 9.84 | 14.395 | 0.81 | 0.0000 |
| 1 | spring | 0.0 | 1.0 | 1.0 | False | 6.0 | False | clear | 9.02 | 13.635 | 0.80 | 0.0000 |
| 2 | spring | 0.0 | 1.0 | 2.0 | False | 6.0 | False | clear | 9.02 | 13.635 | 0.80 | 0.0000 |
| 3 | spring | 0.0 | 1.0 | 3.0 | False | 6.0 | False | clear | 9.84 | 14.395 | 0.75 | 0.0000 |
| 4 | spring | 0.0 | 1.0 | 4.0 | False | 6.0 | False | clear | 9.84 | 14.395 | 0.75 | 0.0000 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 17374 | spring | 1.0 | 12.0 | 19.0 | False | 1.0 | True | misty | 10.66 | 12.880 | 0.60 | 11.0014 |
| 17375 | spring | 1.0 | 12.0 | 20.0 | False | 1.0 | True | misty | 10.66 | 12.880 | 0.60 | 11.0014 |
| 17376 | spring | 1.0 | 12.0 | 21.0 | False | 1.0 | True | clear | 10.66 | 12.880 | 0.60 | 11.0014 |
| 17377 | spring | 1.0 | 12.0 | 22.0 | False | 1.0 | True | clear | 10.66 | 13.635 | 0.56 | 8.9981 |
| 17378 | spring | 1.0 | 12.0 | 23.0 | False | 1.0 | True | clear | 10.66 | 13.635 | 0.65 | 8.9981 |
17379 rows × 12 columns
Note
If the time information was only present as a date or datetime column, we could have expanded it into hour-in-the-day, day-in-the-week, day-in-the-month, month-in-the-year using pandas: <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-date-components>
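For illustration, a minimal sketch of such an expansion (on a hypothetical `datetime` column, not present in this dataset) could rely on the pandas `.dt` accessor:
```
# Hedged sketch: expand a hypothetical "datetime" column into time components.
import pandas as pd

df_dt = pd.DataFrame({"datetime": pd.date_range("2011-01-01", periods=48, freq="H")})
df_dt["hour"] = df_dt["datetime"].dt.hour          # hour-in-the-day
df_dt["weekday"] = df_dt["datetime"].dt.dayofweek  # day-in-the-week
df_dt["day"] = df_dt["datetime"].dt.day            # day-in-the-month
df_dt["month"] = df_dt["datetime"].dt.month        # month-in-the-year
```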
We now introspect the distribution of the categorical variables, starting with `"weather"`:
```
X["weather"].value_counts()
```
```
clear 11413
misty 4544
rain 1419
heavy_rain 3
Name: weather, dtype: int64
```
Since there are only 3 `"heavy_rain"` events, we cannot use this category to train machine learning models with cross validation. Instead, we simplify the representation by collapsing those into the `"rain"` category.
```
X["weather"].replace(to_replace="heavy_rain", value="rain", inplace=True)
```
```
X["weather"].value_counts()
```
```
clear 11413
misty 4544
rain 1422
Name: weather, dtype: int64
```
As expected, the `"season"` variable is well balanced:
```
X["season"].value_counts()
```
```
fall 4496
summer 4409
spring 4242
winter 4232
Name: season, dtype: int64
```
Time-based cross-validation
---------------------------
Since the dataset is a time-ordered event log (hourly demand), we will use a time-sensitive cross-validation splitter to evaluate our demand forecasting model as realistically as possible. We use a gap of 2 days between the train and test side of the splits. We also limit the training set size to make the performance of the CV folds more stable.
1000 test datapoints should be enough to quantify the performance of the model. This represents a bit less than a month and a half of contiguous test data:
```
from sklearn.model_selection import TimeSeriesSplit
ts_cv = TimeSeriesSplit(
n_splits=5,
gap=48,
max_train_size=10000,
test_size=1000,
)
```
Let us manually inspect the various splits to check that the `TimeSeriesSplit` works as we expect, starting with the first split:
```
all_splits = list(ts_cv.split(X, y))
train_0, test_0 = all_splits[0]
```
```
X.iloc[test_0]
```
| | season | year | month | hour | holiday | weekday | workingday | weather | temp | feel\_temp | humidity | windspeed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 12379 | summer | 1.0 | 6.0 | 0.0 | False | 2.0 | True | clear | 22.14 | 25.760 | 0.68 | 27.9993 |
| 12380 | summer | 1.0 | 6.0 | 1.0 | False | 2.0 | True | misty | 21.32 | 25.000 | 0.77 | 22.0028 |
| 12381 | summer | 1.0 | 6.0 | 2.0 | False | 2.0 | True | rain | 21.32 | 25.000 | 0.72 | 19.9995 |
| 12382 | summer | 1.0 | 6.0 | 3.0 | False | 2.0 | True | rain | 20.50 | 24.240 | 0.82 | 12.9980 |
| 12383 | summer | 1.0 | 6.0 | 4.0 | False | 2.0 | True | rain | 20.50 | 24.240 | 0.82 | 12.9980 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 13374 | fall | 1.0 | 7.0 | 11.0 | False | 1.0 | True | clear | 34.44 | 40.150 | 0.53 | 15.0013 |
| 13375 | fall | 1.0 | 7.0 | 12.0 | False | 1.0 | True | clear | 34.44 | 39.395 | 0.49 | 8.9981 |
| 13376 | fall | 1.0 | 7.0 | 13.0 | False | 1.0 | True | clear | 34.44 | 39.395 | 0.49 | 19.0012 |
| 13377 | fall | 1.0 | 7.0 | 14.0 | False | 1.0 | True | clear | 36.08 | 40.910 | 0.42 | 7.0015 |
| 13378 | fall | 1.0 | 7.0 | 15.0 | False | 1.0 | True | clear | 35.26 | 40.150 | 0.47 | 16.9979 |
1000 rows × 12 columns
```
X.iloc[train_0]
```
| | season | year | month | hour | holiday | weekday | workingday | weather | temp | feel\_temp | humidity | windspeed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2331 | summer | 0.0 | 4.0 | 1.0 | False | 2.0 | True | misty | 25.42 | 31.060 | 0.50 | 6.0032 |
| 2332 | summer | 0.0 | 4.0 | 2.0 | False | 2.0 | True | misty | 24.60 | 31.060 | 0.53 | 8.9981 |
| 2333 | summer | 0.0 | 4.0 | 3.0 | False | 2.0 | True | misty | 23.78 | 27.275 | 0.56 | 8.9981 |
| 2334 | summer | 0.0 | 4.0 | 4.0 | False | 2.0 | True | misty | 22.96 | 26.515 | 0.64 | 8.9981 |
| 2335 | summer | 0.0 | 4.0 | 5.0 | False | 2.0 | True | misty | 22.14 | 25.760 | 0.68 | 8.9981 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 12326 | summer | 1.0 | 6.0 | 19.0 | False | 6.0 | False | clear | 26.24 | 31.060 | 0.36 | 11.0014 |
| 12327 | summer | 1.0 | 6.0 | 20.0 | False | 6.0 | False | clear | 25.42 | 31.060 | 0.35 | 19.0012 |
| 12328 | summer | 1.0 | 6.0 | 21.0 | False | 6.0 | False | clear | 24.60 | 31.060 | 0.40 | 7.0015 |
| 12329 | summer | 1.0 | 6.0 | 22.0 | False | 6.0 | False | clear | 23.78 | 27.275 | 0.46 | 8.9981 |
| 12330 | summer | 1.0 | 6.0 | 23.0 | False | 6.0 | False | clear | 22.96 | 26.515 | 0.52 | 7.0015 |
10000 rows × 12 columns
We now inspect the last split:
```
train_4, test_4 = all_splits[4]
```
```
X.iloc[test_4]
```
| | season | year | month | hour | holiday | weekday | workingday | weather | temp | feel\_temp | humidity | windspeed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 16379 | winter | 1.0 | 11.0 | 5.0 | False | 2.0 | True | misty | 13.94 | 16.665 | 0.66 | 8.9981 |
| 16380 | winter | 1.0 | 11.0 | 6.0 | False | 2.0 | True | misty | 13.94 | 16.665 | 0.71 | 11.0014 |
| 16381 | winter | 1.0 | 11.0 | 7.0 | False | 2.0 | True | clear | 13.12 | 16.665 | 0.76 | 6.0032 |
| 16382 | winter | 1.0 | 11.0 | 8.0 | False | 2.0 | True | clear | 13.94 | 16.665 | 0.71 | 8.9981 |
| 16383 | winter | 1.0 | 11.0 | 9.0 | False | 2.0 | True | misty | 14.76 | 18.940 | 0.71 | 0.0000 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 17374 | spring | 1.0 | 12.0 | 19.0 | False | 1.0 | True | misty | 10.66 | 12.880 | 0.60 | 11.0014 |
| 17375 | spring | 1.0 | 12.0 | 20.0 | False | 1.0 | True | misty | 10.66 | 12.880 | 0.60 | 11.0014 |
| 17376 | spring | 1.0 | 12.0 | 21.0 | False | 1.0 | True | clear | 10.66 | 12.880 | 0.60 | 11.0014 |
| 17377 | spring | 1.0 | 12.0 | 22.0 | False | 1.0 | True | clear | 10.66 | 13.635 | 0.56 | 8.9981 |
| 17378 | spring | 1.0 | 12.0 | 23.0 | False | 1.0 | True | clear | 10.66 | 13.635 | 0.65 | 8.9981 |
1000 rows × 12 columns
```
X.iloc[train_4]
```
| | season | year | month | hour | holiday | weekday | workingday | weather | temp | feel\_temp | humidity | windspeed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6331 | winter | 0.0 | 9.0 | 9.0 | False | 1.0 | True | misty | 26.24 | 28.790 | 0.89 | 12.9980 |
| 6332 | winter | 0.0 | 9.0 | 10.0 | False | 1.0 | True | misty | 26.24 | 28.790 | 0.89 | 12.9980 |
| 6333 | winter | 0.0 | 9.0 | 11.0 | False | 1.0 | True | clear | 27.88 | 31.820 | 0.79 | 15.0013 |
| 6334 | winter | 0.0 | 9.0 | 12.0 | False | 1.0 | True | misty | 27.88 | 31.820 | 0.79 | 11.0014 |
| 6335 | winter | 0.0 | 9.0 | 13.0 | False | 1.0 | True | misty | 28.70 | 33.335 | 0.74 | 11.0014 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 16326 | winter | 1.0 | 11.0 | 0.0 | False | 0.0 | False | misty | 12.30 | 15.150 | 0.70 | 11.0014 |
| 16327 | winter | 1.0 | 11.0 | 1.0 | False | 0.0 | False | clear | 12.30 | 14.395 | 0.70 | 12.9980 |
| 16328 | winter | 1.0 | 11.0 | 2.0 | False | 0.0 | False | clear | 11.48 | 14.395 | 0.81 | 7.0015 |
| 16329 | winter | 1.0 | 11.0 | 3.0 | False | 0.0 | False | misty | 12.30 | 15.150 | 0.81 | 11.0014 |
| 16330 | winter | 1.0 | 11.0 | 4.0 | False | 0.0 | False | misty | 12.30 | 14.395 | 0.81 | 12.9980 |
10000 rows × 12 columns
All is well. We are now ready to do some predictive modeling!
Gradient Boosting
-----------------
Gradient Boosting Regression with decision trees is often flexible enough to efficiently handle heterogeneous tabular data with a mix of categorical and numerical features, as long as the number of samples is large enough.
Here, we do minimal ordinal encoding for the categorical variables and then let the model know that it should treat those as categorical variables by using a dedicated tree splitting rule. Since we use an ordinal encoder, we pass the list of categorical values explicitly to use a logical order when encoding the categories as integers instead of the lexicographical order. This also has the added benefit of preventing any issue with unknown categories when using cross-validation.
The numerical variables need no preprocessing and, for the sake of simplicity, we only try the default hyper-parameters for this model:
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_validate
categorical_columns = [
"weather",
"season",
"holiday",
"workingday",
]
categories = [
["clear", "misty", "rain"],
["spring", "summer", "fall", "winter"],
["False", "True"],
["False", "True"],
]
ordinal_encoder = OrdinalEncoder(categories=categories)
gbrt_pipeline = make_pipeline(
ColumnTransformer(
transformers=[
("categorical", ordinal_encoder, categorical_columns),
],
remainder="passthrough",
),
HistGradientBoostingRegressor(
categorical_features=range(4),
),
)
```
Let's evaluate our gradient boosting model with the mean absolute error of the relative demand averaged across our 5 time-based cross-validation splits:
```
def evaluate(model, X, y, cv):
cv_results = cross_validate(
model,
X,
y,
cv=cv,
scoring=["neg_mean_absolute_error", "neg_root_mean_squared_error"],
)
mae = -cv_results["test_neg_mean_absolute_error"]
rmse = -cv_results["test_neg_root_mean_squared_error"]
print(
f"Mean Absolute Error: {mae.mean():.3f} +/- {mae.std():.3f}\n"
f"Root Mean Squared Error: {rmse.mean():.3f} +/- {rmse.std():.3f}"
)
evaluate(gbrt_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.044 +/- 0.003
Root Mean Squared Error: 0.068 +/- 0.005
```
This model has an average error of around 4 to 5% of the maximum demand. This is quite good for a first trial without any hyper-parameter tuning! We just had to make the categorical variables explicit. Note that the time-related features are passed as is, i.e. without processing them. But this is not much of a problem for tree-based models, as they can learn a non-monotonic relationship between ordinal input features and the target.
This is not the case for linear regression models as we will see in the following.
Naive linear regression
-----------------------
As usual for linear models, categorical variables need to be one-hot encoded. For consistency, we scale the numerical features to the same 0-1 range using `sklearn.preprocessing.MinMaxScaler`, although in this case it does not impact the results much because they are already on comparable scales:
```
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import RidgeCV
import numpy as np
one_hot_encoder = OneHotEncoder(handle_unknown="ignore", sparse=False)
alphas = np.logspace(-6, 6, 25)
naive_linear_pipeline = make_pipeline(
ColumnTransformer(
transformers=[
("categorical", one_hot_encoder, categorical_columns),
],
remainder=MinMaxScaler(),
),
RidgeCV(alphas=alphas),
)
evaluate(naive_linear_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.142 +/- 0.014
Root Mean Squared Error: 0.184 +/- 0.020
```
The performance is not good: the average error is around 14% of the maximum demand. This is more than three times higher than the average error of the gradient boosting model. We can suspect that the naive original encoding (merely min-max scaled) of the periodic time-related features might prevent the linear regression model from properly leveraging the time information: linear regression does not automatically model non-monotonic relationships between the input features and the target. Non-linear terms have to be engineered in the input.
For example, the raw numerical encoding of the `"hour"` feature prevents the linear model from recognizing that an increase of hour in the morning from 6 to 8 should have a strong positive impact on the number of bike rentals while an increase of similar magnitude in the evening from 18 to 20 should have a strong negative impact on the predicted number of bike rentals.
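To make this concrete, here is a hedged sketch (not part of the original pipeline, with an arbitrary regularization strength) fitting a ridge model on the raw `"hour"` column alone; with a single coefficient, any +2 hour step must change the prediction by exactly the same amount:
```
# Hedged sketch: a linear model on the raw "hour" value has a single coefficient,
# so the predicted change for any +2 hour step is identical (2 * coef),
# whether the step happens in the morning or in the evening.
from sklearn.linear_model import Ridge

hour_only_model = Ridge(alpha=1.0).fit(X[["hour"]], y)
coef = hour_only_model.coef_[0]
print(f"predicted change from 6h to 8h:   {2 * coef:+.4f}")
print(f"predicted change from 18h to 20h: {2 * coef:+.4f}")  # necessarily the same
```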
Time-steps as categories
------------------------
Since the time features are encoded in a discrete manner using integers (24 unique values in the “hours” feature), we could decide to treat those as categorical variables using a one-hot encoding and thereby ignore any assumption implied by the ordering of the hour values.
Using one-hot encoding for the time features gives the linear model a lot more flexibility as we introduce one additional feature per discrete time level.
```
one_hot_linear_pipeline = make_pipeline(
ColumnTransformer(
transformers=[
("categorical", one_hot_encoder, categorical_columns),
("one_hot_time", one_hot_encoder, ["hour", "weekday", "month"]),
],
remainder=MinMaxScaler(),
),
RidgeCV(alphas=alphas),
)
evaluate(one_hot_linear_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.099 +/- 0.011
Root Mean Squared Error: 0.131 +/- 0.011
```
The average error rate of this model is 10%, which is much better than using the original (ordinal) encoding of the time feature, confirming our intuition that the linear regression model benefits from the added flexibility to not treat time progression in a monotonic manner.
However, this introduces a very large number of new features. If the time of the day was represented in minutes since the start of the day instead of hours, one-hot encoding would have introduced 1440 features instead of 24. This could cause some significant overfitting. To avoid this we could use [`sklearn.preprocessing.KBinsDiscretizer`](../../modules/generated/sklearn.preprocessing.kbinsdiscretizer#sklearn.preprocessing.KBinsDiscretizer "sklearn.preprocessing.KBinsDiscretizer") instead to re-bin the number of levels of fine-grained ordinal or numerical variables while still benefitting from the non-monotonic expressivity advantages of one-hot encoding.
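A minimal sketch of that alternative (with hypothetical data and binning choices, not used in the rest of this notebook) might look like:
```
# Hedged sketch: re-bin a fine-grained "minute of the day" feature into a
# small number of one-hot encoded levels with KBinsDiscretizer.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

minute_of_day = np.arange(1440).reshape(-1, 1)  # hypothetical fine-grained feature
binner = KBinsDiscretizer(n_bins=24, encode="onehot-dense", strategy="uniform")
binned = binner.fit_transform(minute_of_day)
print(binned.shape)  # (1440, 24): 24 coarse levels instead of 1440 one-hot columns
```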
Finally, we also observe that one-hot encoding completely ignores the ordering of the hour levels while this could be an interesting inductive bias to preserve to some level. In the following we try to explore smooth, non-monotonic encoding that locally preserves the relative ordering of time features.
Trigonometric features
----------------------
As a first attempt, we can try to encode each of those periodic features using a sine and cosine transformation with the matching period.
Each ordinal time feature is transformed into 2 features that together encode equivalent information in a non-monotonic way, and more importantly without any jump between the first and the last value of the periodic range.
```
from sklearn.preprocessing import FunctionTransformer
def sin_transformer(period):
return FunctionTransformer(lambda x: np.sin(x / period * 2 * np.pi))
def cos_transformer(period):
return FunctionTransformer(lambda x: np.cos(x / period * 2 * np.pi))
```
Let us visualize the effect of this feature expansion on some synthetic hour data with a bit of extrapolation beyond hour=23:
```
import pandas as pd
hour_df = pd.DataFrame(
np.arange(26).reshape(-1, 1),
columns=["hour"],
)
hour_df["hour_sin"] = sin_transformer(24).fit_transform(hour_df)["hour"]
hour_df["hour_cos"] = cos_transformer(24).fit_transform(hour_df)["hour"]
hour_df.plot(x="hour")
_ = plt.title("Trigonometric encoding for the 'hour' feature")
```
Let’s use a 2D scatter plot with the hours encoded as colors to better see how this representation maps the 24 hours of the day to a 2D space, akin to some sort of a 24 hour version of an analog clock. Note that the “25th” hour is mapped back to the 1st hour because of the periodic nature of the sine/cosine representation.
```
fig, ax = plt.subplots(figsize=(7, 5))
sp = ax.scatter(hour_df["hour_sin"], hour_df["hour_cos"], c=hour_df["hour"])
ax.set(
xlabel="sin(hour)",
ylabel="cos(hour)",
)
_ = fig.colorbar(sp)
```
We can now build a feature extraction pipeline using this strategy:
```
cyclic_cossin_transformer = ColumnTransformer(
transformers=[
("categorical", one_hot_encoder, categorical_columns),
("month_sin", sin_transformer(12), ["month"]),
("month_cos", cos_transformer(12), ["month"]),
("weekday_sin", sin_transformer(7), ["weekday"]),
("weekday_cos", cos_transformer(7), ["weekday"]),
("hour_sin", sin_transformer(24), ["hour"]),
("hour_cos", cos_transformer(24), ["hour"]),
],
remainder=MinMaxScaler(),
)
cyclic_cossin_linear_pipeline = make_pipeline(
cyclic_cossin_transformer,
RidgeCV(alphas=alphas),
)
evaluate(cyclic_cossin_linear_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.125 +/- 0.014
Root Mean Squared Error: 0.166 +/- 0.020
```
The performance of our linear regression model with this simple feature engineering is a bit better than using the original ordinal time features but worse than using the one-hot encoded time features. We will further analyze possible reasons for this disappointing outcome at the end of this notebook.
Periodic spline features
------------------------
We can try an alternative encoding of the periodic time-related features using spline transformations with a large enough number of splines, and as a result a larger number of expanded features compared to the sine/cosine transformation:
```
from sklearn.preprocessing import SplineTransformer
def periodic_spline_transformer(period, n_splines=None, degree=3):
if n_splines is None:
n_splines = period
n_knots = n_splines + 1 # periodic and include_bias is True
return SplineTransformer(
degree=degree,
n_knots=n_knots,
knots=np.linspace(0, period, n_knots).reshape(n_knots, 1),
extrapolation="periodic",
include_bias=True,
)
```
Again, let us visualize the effect of this feature expansion on some synthetic hour data with a bit of extrapolation beyond hour=23:
```
hour_df = pd.DataFrame(
np.linspace(0, 26, 1000).reshape(-1, 1),
columns=["hour"],
)
splines = periodic_spline_transformer(24, n_splines=12).fit_transform(hour_df)
splines_df = pd.DataFrame(
splines,
columns=[f"spline_{i}" for i in range(splines.shape[1])],
)
pd.concat([hour_df, splines_df], axis="columns").plot(x="hour", cmap=plt.cm.tab20b)
_ = plt.title("Periodic spline-based encoding for the 'hour' feature")
```
Thanks to the use of the `extrapolation="periodic"` parameter, we observe that the feature encoding stays smooth when extrapolating beyond midnight.
We can now build a predictive pipeline using this alternative periodic feature engineering strategy.
It is possible to use fewer splines than discrete levels for those ordinal values. This makes spline-based encoding more efficient than one-hot encoding while preserving most of the expressivity:
```
cyclic_spline_transformer = ColumnTransformer(
transformers=[
("categorical", one_hot_encoder, categorical_columns),
("cyclic_month", periodic_spline_transformer(12, n_splines=6), ["month"]),
("cyclic_weekday", periodic_spline_transformer(7, n_splines=3), ["weekday"]),
("cyclic_hour", periodic_spline_transformer(24, n_splines=12), ["hour"]),
],
remainder=MinMaxScaler(),
)
cyclic_spline_linear_pipeline = make_pipeline(
cyclic_spline_transformer,
RidgeCV(alphas=alphas),
)
evaluate(cyclic_spline_linear_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.097 +/- 0.011
Root Mean Squared Error: 0.132 +/- 0.013
```
Spline features make it possible for the linear model to successfully leverage the periodic time-related features and reduce the error from ~14% to ~10% of the maximum demand, which is similar to what we observed with the one-hot encoded features.
Qualitative analysis of the impact of features on linear model predictions
--------------------------------------------------------------------------
Here, we want to visualize the impact of the feature engineering choices on the time related shape of the predictions.
To do so we consider an arbitrary time-based split to compare the predictions on a range of held out data points.
```
naive_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])
naive_linear_predictions = naive_linear_pipeline.predict(X.iloc[test_0])
one_hot_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])
one_hot_linear_predictions = one_hot_linear_pipeline.predict(X.iloc[test_0])
cyclic_cossin_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])
cyclic_cossin_linear_predictions = cyclic_cossin_linear_pipeline.predict(X.iloc[test_0])
cyclic_spline_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])
cyclic_spline_linear_predictions = cyclic_spline_linear_pipeline.predict(X.iloc[test_0])
```
We visualize those predictions by zooming on the last 96 hours (4 days) of the test set to get some qualitative insights:
```
last_hours = slice(-96, None)
fig, ax = plt.subplots(figsize=(12, 4))
fig.suptitle("Predictions by linear models")
ax.plot(
y.iloc[test_0].values[last_hours],
"x-",
alpha=0.2,
label="Actual demand",
color="black",
)
ax.plot(naive_linear_predictions[last_hours], "x-", label="Ordinal time features")
ax.plot(
cyclic_cossin_linear_predictions[last_hours],
"x-",
label="Trigonometric time features",
)
ax.plot(
cyclic_spline_linear_predictions[last_hours],
"x-",
label="Spline-based time features",
)
ax.plot(
one_hot_linear_predictions[last_hours],
"x-",
label="One-hot time features",
)
_ = ax.legend()
```
We can draw the following conclusions from the above plot:
* The **raw ordinal time-related features** are problematic because they do not capture the natural periodicity: we observe a big jump in the predictions at the end of each day when the hour feature goes from 23 back to 0. We can expect similar artifacts at the end of each week or each year.
* As expected, the **trigonometric features** (sine and cosine) do not have these discontinuities at midnight, but the linear regression model fails to leverage those features to properly model intra-day variations. Using trigonometric features for higher harmonics or additional trigonometric features for the natural period with different phases could potentially fix this problem (see the sketch after this list).
* The **periodic spline-based features** fix those two problems at once: they give more expressivity to the linear model by making it possible to focus on specific hours thanks to the use of 12 splines. Furthermore, the `extrapolation="periodic"` option enforces a smooth representation between `hour=23` and `hour=0`.
* The **one-hot encoded features** behave similarly to the periodic spline-based features but are more spiky: for instance they can better model the morning peak during the week days since this peak lasts less than an hour. However, we will see in the following that what can be an advantage for linear models is not necessarily one for more expressive models.
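As a hedged illustration of the higher-harmonics idea from the second bullet, the sketch below builds extra sine/cosine pairs for the 24-hour period; the `trig_transformer` helper and the `hour_harmonic_transformers` name are hypothetical and not part of the original pipeline:
```
import numpy as np
from sklearn.preprocessing import FunctionTransformer

def trig_transformer(period, harmonic, fn):
    # One sine or cosine feature at the given harmonic of the period.
    return FunctionTransformer(lambda x: fn(x / period * 2 * np.pi * harmonic))

# Sine/cosine pairs for the 1st, 2nd and 3rd harmonics of the daily period.
hour_harmonic_transformers = [
    (f"hour_{fn.__name__}_h{harmonic}", trig_transformer(24, harmonic, fn), ["hour"])
    for harmonic in (1, 2, 3)
    for fn in (np.sin, np.cos)
]
# These (name, transformer, columns) tuples could replace the single sin/cos
# pair for "hour" in the ColumnTransformer of the trigonometric pipeline.
```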
We can also compare the number of features extracted by each feature engineering pipeline:
```
naive_linear_pipeline[:-1].transform(X).shape
```
```
(17379, 19)
```
```
one_hot_linear_pipeline[:-1].transform(X).shape
```
```
(17379, 59)
```
```
cyclic_cossin_linear_pipeline[:-1].transform(X).shape
```
```
(17379, 22)
```
```
cyclic_spline_linear_pipeline[:-1].transform(X).shape
```
```
(17379, 37)
```
This confirms that the one-hot encoding and the spline encoding strategies create a lot more features for the time representation than the alternatives, which in turn gives the downstream linear model more flexibility (degrees of freedom) to avoid underfitting.
Finally, we observe that none of the linear models can approximate the true bike rentals demand, especially for the peaks that can be very sharp at rush hours during the working days but much flatter during the week-ends: the most accurate linear models based on splines or one-hot encoding tend to forecast peaks of commuting-related bike rentals even on the week-ends and under-estimate the commuting-related events during the working days.
These systematic prediction errors reveal a form of under-fitting and can be explained by the lack of interaction terms between features, e.g. “workingday” and features derived from “hours”. This issue will be addressed in the following section.
Modeling pairwise interactions with splines and polynomial features
-------------------------------------------------------------------
Linear models do not automatically capture interaction effects between input features. It does not help that some features are marginally non-linear as is the case with features constructed by `SplineTransformer` (or one-hot encoding or binning).
However, it is possible to use the `PolynomialFeatures` class on coarse grained spline encoded hours to model the “workingday”/”hours” interaction explicitly without introducing too many new variables:
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import FeatureUnion
hour_workday_interaction = make_pipeline(
ColumnTransformer(
[
("cyclic_hour", periodic_spline_transformer(24, n_splines=8), ["hour"]),
("workingday", FunctionTransformer(lambda x: x == "True"), ["workingday"]),
]
),
PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
)
```
Those features are then combined with the ones already computed in the previous spline-based pipeline. We can observe a nice performance improvement by modeling this pairwise interaction explicitly:
```
cyclic_spline_interactions_pipeline = make_pipeline(
FeatureUnion(
[
("marginal", cyclic_spline_transformer),
("interactions", hour_workday_interaction),
]
),
RidgeCV(alphas=alphas),
)
evaluate(cyclic_spline_interactions_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.078 +/- 0.009
Root Mean Squared Error: 0.104 +/- 0.009
```
Modeling non-linear feature interactions with kernels
-----------------------------------------------------
The previous analysis highlighted the need to model the interactions between `"workingday"` and `"hours"`. Another example of such a non-linear interaction that we would like to model could be the impact of the rain, which might not be the same during the working days as during the week-ends and holidays, for instance.
To model all such interactions, we could use a polynomial expansion on all marginal features at once, after their spline-based expansion. However, this would create a quadratic number of features, which can cause overfitting and computational tractability issues.
Alternatively, we can use the Nyström method to compute an approximate polynomial kernel expansion. Let us try the latter:
```
from sklearn.kernel_approximation import Nystroem
cyclic_spline_poly_pipeline = make_pipeline(
cyclic_spline_transformer,
Nystroem(kernel="poly", degree=2, n_components=300, random_state=0),
RidgeCV(alphas=alphas),
)
evaluate(cyclic_spline_poly_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.053 +/- 0.002
Root Mean Squared Error: 0.076 +/- 0.004
```
We observe that this model can almost rival the performance of the gradient boosted trees with an average error around 5% of the maximum demand.
Note that while the final step of this pipeline is a linear regression model, the intermediate steps such as the spline feature extraction and the Nyström kernel approximation are highly non-linear. As a result the compound pipeline is much more expressive than a simple linear regression model with raw features.
For the sake of completeness, we also evaluate the combination of one-hot encoding and kernel approximation:
```
one_hot_poly_pipeline = make_pipeline(
ColumnTransformer(
transformers=[
("categorical", one_hot_encoder, categorical_columns),
("one_hot_time", one_hot_encoder, ["hour", "weekday", "month"]),
],
remainder="passthrough",
),
Nystroem(kernel="poly", degree=2, n_components=300, random_state=0),
RidgeCV(alphas=alphas),
)
evaluate(one_hot_poly_pipeline, X, y, cv=ts_cv)
```
```
Mean Absolute Error: 0.082 +/- 0.006
Root Mean Squared Error: 0.111 +/- 0.011
```
While one-hot encoded features were competitive with spline-based features when using linear models, this is no longer the case when using a low-rank approximation of a non-linear kernel: this can be explained by the fact that spline features are smoother and allow the kernel approximation to find a more expressive decision function.
Let us now have a qualitative look at the predictions of the kernel models and of the gradient boosted trees that should be able to better model non-linear interactions between features:
```
gbrt_pipeline.fit(X.iloc[train_0], y.iloc[train_0])
gbrt_predictions = gbrt_pipeline.predict(X.iloc[test_0])
one_hot_poly_pipeline.fit(X.iloc[train_0], y.iloc[train_0])
one_hot_poly_predictions = one_hot_poly_pipeline.predict(X.iloc[test_0])
cyclic_spline_poly_pipeline.fit(X.iloc[train_0], y.iloc[train_0])
cyclic_spline_poly_predictions = cyclic_spline_poly_pipeline.predict(X.iloc[test_0])
```
Again we zoom on the last 4 days of the test set:
```
last_hours = slice(-96, None)
fig, ax = plt.subplots(figsize=(12, 4))
fig.suptitle("Predictions by non-linear regression models")
ax.plot(
y.iloc[test_0].values[last_hours],
"x-",
alpha=0.2,
label="Actual demand",
color="black",
)
ax.plot(
gbrt_predictions[last_hours],
"x-",
label="Gradient Boosted Trees",
)
ax.plot(
one_hot_poly_predictions[last_hours],
"x-",
label="One-hot + polynomial kernel",
)
ax.plot(
cyclic_spline_poly_predictions[last_hours],
"x-",
label="Splines + polynomial kernel",
)
_ = ax.legend()
```
First, note that trees can naturally model non-linear feature interactions since, by default, decision trees are allowed to grow beyond a depth of 2 levels.
Here, we can observe that the combination of spline features and non-linear kernels works quite well and can almost rival the accuracy of the gradient boosting regression trees.
On the contrary, one-hot encoded time features do not perform that well with the low rank kernel model. In particular, they significantly over-estimate the low demand hours more than the competing models.
We also observe that none of the models can successfully predict some of the peak rentals at the rush hours during the working days. It is possible that access to additional features would be required to further improve the accuracy of the predictions. For instance, it could be useful to have access to the geographical repartition of the fleet at any point in time or the fraction of bikes that are immobilized because they need servicing.
Let us finally get a more quantitative look at the prediction errors of those three models using the true vs predicted demand scatter plots:
```
fig, axes = plt.subplots(ncols=3, figsize=(12, 4), sharey=True)
fig.suptitle("Non-linear regression models")
predictions = [
one_hot_poly_predictions,
cyclic_spline_poly_predictions,
gbrt_predictions,
]
labels = [
"One hot + polynomial kernel",
"Splines + polynomial kernel",
"Gradient Boosted Trees",
]
for ax, pred, label in zip(axes, predictions, labels):
ax.scatter(y.iloc[test_0].values, pred, alpha=0.3, label=label)
ax.plot([0, 1], [0, 1], "--", label="Perfect model")
ax.set(
xlim=(0, 1),
ylim=(0, 1),
xlabel="True demand",
ylabel="Predicted demand",
)
ax.legend()
plt.show()
```
This visualization confirms the conclusions we drew from the previous plot.
All models under-estimate the high demand events (working day rush hours), but gradient boosting a bit less so. The low demand events are well predicted on average by gradient boosting while the one-hot polynomial regression pipeline seems to systematically over-estimate demand in that regime. Overall the predictions of the gradient boosted trees are closer to the diagonal than for the kernel models.
Concluding remarks
------------------
We note that we could have obtained slightly better results for kernel models by using more components (higher rank kernel approximation) at the cost of longer fit and prediction durations. For large values of `n_components`, the performance of the one-hot encoded features would even match the spline features.
The `Nystroem` + `RidgeCV` regressor could also have been replaced by [`MLPRegressor`](../../modules/generated/sklearn.neural_network.mlpregressor#sklearn.neural_network.MLPRegressor "sklearn.neural_network.MLPRegressor") with one or two hidden layers and we would have obtained quite similar results.
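A minimal sketch of that substitution, reusing `make_pipeline`, `cyclic_spline_transformer`, the `evaluate` helper and the data defined earlier in this example, might look as follows; the hidden layer sizes and `max_iter` value are illustrative assumptions, not tuned settings:
```
from sklearn.neural_network import MLPRegressor

mlp_pipeline = make_pipeline(
    cyclic_spline_transformer,
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
evaluate(mlp_pipeline, X, y, cv=ts_cv)
```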
The dataset we used in this case study is sampled on an hourly basis. However, cyclic spline-based features could model time-within-day or time-within-week very efficiently with finer-grained time resolutions (for instance with measurements taken every minute instead of every hour) without introducing more features. One-hot encoding time representations would not offer this flexibility.
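To illustrate this point, the following sketch applies the `periodic_spline_transformer` helper defined earlier in this example to a hypothetical minute-of-day column: the 1440 distinct levels are still encoded with only 12 spline features.
```
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
minute_of_day = pd.DataFrame({"minute_of_day": rng.randint(0, 24 * 60, size=1000)})

# 1440 possible levels, yet still only 12 output features.
splines = periodic_spline_transformer(24 * 60, n_splines=12).fit_transform(minute_of_day)
print(splines.shape)  # expected: (1000, 12)
```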
Finally, in this notebook we used `RidgeCV` because it is very efficient from a computational point of view. However, it models the target variable as a Gaussian random variable with constant variance. For positive regression problems, it is likely that using a Poisson or Gamma distribution would make more sense. This could be achieved by using `GridSearchCV(TweedieRegressor(power=2), param_grid={"alpha": alphas})` instead of `RidgeCV`.
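A hedged sketch of that substitution, reusing the `alphas` grid, the spline transformer and the `evaluate` helper defined earlier, could look like the following; note that the Gamma deviance (`power=2`) requires a strictly positive target, so with zero-demand hours a Poisson deviance (`power=1`) may be the safer choice:
```
from sklearn.linear_model import TweedieRegressor
from sklearn.model_selection import GridSearchCV

gamma_spline_pipeline = make_pipeline(
    cyclic_spline_transformer,
    GridSearchCV(TweedieRegressor(power=2), param_grid={"alpha": alphas}),
)
evaluate(gamma_spline_pipeline, X, y, cv=ts_cv)
```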
**Total running time of the script:** ( 0 minutes 21.211 seconds)
[`Download Python source code: plot_cyclical_feature_engineering.py`](https://scikit-learn.org/1.1/_downloads/9fcbbc59ab27a20d07e209a711ac4f50/plot_cyclical_feature_engineering.py)
[`Download Jupyter notebook: plot_cyclical_feature_engineering.ipynb`](https://scikit-learn.org/1.1/_downloads/7012baed63b9a27f121bae611b8285c2/plot_cyclical_feature_engineering.ipynb)
scikit_learn Outlier detection on a real data set Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-outlier-detection-wine-py) to download the full example code or to run this example in your browser via Binder
Outlier detection on a real data set
====================================
This example illustrates the need for robust covariance estimation on a real data set. It is useful both for outlier detection and for a better understanding of the data structure.
We selected two sets of two variables from the Wine data set as an illustration of what kind of analysis can be done with several outlier detection tools. For the purpose of visualization, we are working with two-dimensional examples, but one should be aware that things are not so trivial in high-dimension, as it will be pointed out.
In both examples below, the main result is that the empirical covariance estimate, as a non-robust one, is highly influenced by the heterogeneous structure of the observations. Although the robust covariance estimate is able to focus on the main mode of the data distribution, it sticks to the assumption that the data should be Gaussian distributed, yielding some biased estimation of the data structure, but yet accurate to some extent. The One-Class SVM does not assume any parametric form of the data distribution and can therefore model the complex shape of the data much better.
First example
-------------
The first example illustrates how the Minimum Covariance Determinant robust estimator can help concentrate on a relevant cluster when outlying points exist. Here the empirical covariance estimation is skewed by points outside of the main cluster. Of course, some screening tools would have pointed out the presence of two clusters (Support Vector Machines, Gaussian Mixture Models, univariate outlier detection, …). But had it been a high-dimensional example, none of these could be applied that easily.
```
# Author: Virgile Fritsch <[email protected]>
# License: BSD 3 clause
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.svm import OneClassSVM
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn.datasets import load_wine
# Define "classifiers" to be used
classifiers = {
"Empirical Covariance": EllipticEnvelope(support_fraction=1.0, contamination=0.25),
"Robust Covariance (Minimum Covariance Determinant)": EllipticEnvelope(
contamination=0.25
),
"OCSVM": OneClassSVM(nu=0.25, gamma=0.35),
}
colors = ["m", "g", "b"]
legend1 = {}
legend2 = {}
# Get data
X1 = load_wine()["data"][:, [1, 2]] # two clusters
# Learn a frontier for outlier detection with several classifiers
xx1, yy1 = np.meshgrid(np.linspace(0, 6, 500), np.linspace(1, 4.5, 500))
for i, (clf_name, clf) in enumerate(classifiers.items()):
plt.figure(1)
clf.fit(X1)
Z1 = clf.decision_function(np.c_[xx1.ravel(), yy1.ravel()])
Z1 = Z1.reshape(xx1.shape)
legend1[clf_name] = plt.contour(
xx1, yy1, Z1, levels=[0], linewidths=2, colors=colors[i]
)
legend1_values_list = list(legend1.values())
legend1_keys_list = list(legend1.keys())
# Plot the results (= shape of the data points cloud)
plt.figure(1) # two clusters
plt.title("Outlier detection on a real data set (wine recognition)")
plt.scatter(X1[:, 0], X1[:, 1], color="black")
bbox_args = dict(boxstyle="round", fc="0.8")
arrow_args = dict(arrowstyle="->")
plt.annotate(
"outlying points",
xy=(4, 2),
xycoords="data",
textcoords="data",
xytext=(3, 1.25),
bbox=bbox_args,
arrowprops=arrow_args,
)
plt.xlim((xx1.min(), xx1.max()))
plt.ylim((yy1.min(), yy1.max()))
plt.legend(
(
legend1_values_list[0].collections[0],
legend1_values_list[1].collections[0],
legend1_values_list[2].collections[0],
),
(legend1_keys_list[0], legend1_keys_list[1], legend1_keys_list[2]),
loc="upper center",
prop=matplotlib.font_manager.FontProperties(size=11),
)
plt.ylabel("ash")
plt.xlabel("malic_acid")
plt.show()
```
Second example
--------------
The second example shows the ability of the Minimum Covariance Determinant robust estimator of covariance to concentrate on the main mode of the data distribution: the location seems to be well estimated, although the covariance is hard to estimate due to the banana-shaped distribution. Anyway, we can get rid of some outlying observations. The One-Class SVM is able to capture the real data structure, but the difficulty is to adjust its kernel bandwidth parameter so as to obtain a good compromise between the shape of the data scatter matrix and the risk of over-fitting the data.
```
# Get data
X2 = load_wine()["data"][:, [6, 9]] # "banana"-shaped
# Learn a frontier for outlier detection with several classifiers
xx2, yy2 = np.meshgrid(np.linspace(-1, 5.5, 500), np.linspace(-2.5, 19, 500))
for i, (clf_name, clf) in enumerate(classifiers.items()):
plt.figure(2)
clf.fit(X2)
Z2 = clf.decision_function(np.c_[xx2.ravel(), yy2.ravel()])
Z2 = Z2.reshape(xx2.shape)
legend2[clf_name] = plt.contour(
xx2, yy2, Z2, levels=[0], linewidths=2, colors=colors[i]
)
legend2_values_list = list(legend2.values())
legend2_keys_list = list(legend2.keys())
# Plot the results (= shape of the data points cloud)
plt.figure(2) # "banana" shape
plt.title("Outlier detection on a real data set (wine recognition)")
plt.scatter(X2[:, 0], X2[:, 1], color="black")
plt.xlim((xx2.min(), xx2.max()))
plt.ylim((yy2.min(), yy2.max()))
plt.legend(
(
legend2_values_list[0].collections[0],
legend2_values_list[1].collections[0],
legend2_values_list[2].collections[0],
),
(legend2_keys_list[0], legend2_keys_list[1], legend2_keys_list[2]),
loc="upper center",
prop=matplotlib.font_manager.FontProperties(size=11),
)
plt.ylabel("color_intensity")
plt.xlabel("flavanoids")
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.241 seconds)
[`Download Python source code: plot_outlier_detection_wine.py`](https://scikit-learn.org/1.1/_downloads/609eccf9ab7d476daf68967ce1fce0b7/plot_outlier_detection_wine.py)
[`Download Jupyter notebook: plot_outlier_detection_wine.ipynb`](https://scikit-learn.org/1.1/_downloads/dd28338257df6d2a7e6b9ff5f2743272/plot_outlier_detection_wine.ipynb)
scikit_learn Prediction Latency Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-prediction-latency-py) to download the full example code or to run this example in your browser via Binder
Prediction Latency
==================
This is an example showing the prediction latency of various scikit-learn estimators.
The goal is to measure the latency one can expect when doing predictions either in bulk or atomic (i.e. one by one) mode.
The plots represent the distribution of the prediction latency as a boxplot.
```
# Authors: Eustache Diemert <[email protected]>
# License: BSD 3 clause
from collections import defaultdict
import time
import gc
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.linear_model import SGDRegressor
from sklearn.svm import SVR
from sklearn.utils import shuffle
def _not_in_sphinx():
# Hack to detect whether we are running by the sphinx builder
return "__file__" in globals()
```
Benchmark and plot helper functions
-----------------------------------
```
def atomic_benchmark_estimator(estimator, X_test, verbose=False):
"""Measure runtime prediction of each instance."""
n_instances = X_test.shape[0]
runtimes = np.zeros(n_instances, dtype=float)
for i in range(n_instances):
instance = X_test[[i], :]
start = time.time()
estimator.predict(instance)
runtimes[i] = time.time() - start
if verbose:
print(
"atomic_benchmark runtimes:",
min(runtimes),
np.percentile(runtimes, 50),
max(runtimes),
)
return runtimes
def bulk_benchmark_estimator(estimator, X_test, n_bulk_repeats, verbose):
"""Measure runtime prediction of the whole input."""
n_instances = X_test.shape[0]
runtimes = np.zeros(n_bulk_repeats, dtype=float)
for i in range(n_bulk_repeats):
start = time.time()
estimator.predict(X_test)
runtimes[i] = time.time() - start
runtimes = np.array(list(map(lambda x: x / float(n_instances), runtimes)))
if verbose:
print(
"bulk_benchmark runtimes:",
min(runtimes),
np.percentile(runtimes, 50),
max(runtimes),
)
return runtimes
def benchmark_estimator(estimator, X_test, n_bulk_repeats=30, verbose=False):
"""
Measure runtimes of prediction in both atomic and bulk mode.
Parameters
----------
estimator : already trained estimator supporting `predict()`
X_test : test input
n_bulk_repeats : how many times to repeat when evaluating bulk mode
Returns
-------
atomic_runtimes, bulk_runtimes : a pair of `np.array` which contain the
runtimes in seconds.
"""
atomic_runtimes = atomic_benchmark_estimator(estimator, X_test, verbose)
bulk_runtimes = bulk_benchmark_estimator(estimator, X_test, n_bulk_repeats, verbose)
return atomic_runtimes, bulk_runtimes
def generate_dataset(n_train, n_test, n_features, noise=0.1, verbose=False):
"""Generate a regression dataset with the given parameters."""
if verbose:
print("generating dataset...")
X, y, coef = make_regression(
n_samples=n_train + n_test, n_features=n_features, noise=noise, coef=True
)
random_seed = 13
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=n_train, test_size=n_test, random_state=random_seed
)
X_train, y_train = shuffle(X_train, y_train, random_state=random_seed)
X_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
X_test = X_scaler.transform(X_test)
y_scaler = StandardScaler()
y_train = y_scaler.fit_transform(y_train[:, None])[:, 0]
y_test = y_scaler.transform(y_test[:, None])[:, 0]
gc.collect()
if verbose:
print("ok")
return X_train, y_train, X_test, y_test
def boxplot_runtimes(runtimes, pred_type, configuration):
"""
Plot a new `Figure` with boxplots of prediction runtimes.
Parameters
----------
runtimes : list of `np.array` of latencies in micro-seconds
cls_names : list of estimator class names that generated the runtimes
pred_type : 'bulk' or 'atomic'
"""
fig, ax1 = plt.subplots(figsize=(10, 6))
bp = plt.boxplot(
runtimes,
)
cls_infos = [
"%s\n(%d %s)"
% (
estimator_conf["name"],
estimator_conf["complexity_computer"](estimator_conf["instance"]),
estimator_conf["complexity_label"],
)
for estimator_conf in configuration["estimators"]
]
plt.setp(ax1, xticklabels=cls_infos)
plt.setp(bp["boxes"], color="black")
plt.setp(bp["whiskers"], color="black")
plt.setp(bp["fliers"], color="red", marker="+")
ax1.yaxis.grid(True, linestyle="-", which="major", color="lightgrey", alpha=0.5)
ax1.set_axisbelow(True)
ax1.set_title(
"Prediction Time per Instance - %s, %d feats."
% (pred_type.capitalize(), configuration["n_features"])
)
ax1.set_ylabel("Prediction Time (us)")
plt.show()
def benchmark(configuration):
"""Run the whole benchmark."""
X_train, y_train, X_test, y_test = generate_dataset(
configuration["n_train"], configuration["n_test"], configuration["n_features"]
)
stats = {}
for estimator_conf in configuration["estimators"]:
print("Benchmarking", estimator_conf["instance"])
estimator_conf["instance"].fit(X_train, y_train)
gc.collect()
a, b = benchmark_estimator(estimator_conf["instance"], X_test)
stats[estimator_conf["name"]] = {"atomic": a, "bulk": b}
cls_names = [
estimator_conf["name"] for estimator_conf in configuration["estimators"]
]
runtimes = [1e6 * stats[clf_name]["atomic"] for clf_name in cls_names]
boxplot_runtimes(runtimes, "atomic", configuration)
runtimes = [1e6 * stats[clf_name]["bulk"] for clf_name in cls_names]
boxplot_runtimes(runtimes, "bulk (%d)" % configuration["n_test"], configuration)
def n_feature_influence(estimators, n_train, n_test, n_features, percentile):
"""
Estimate influence of the number of features on prediction time.
Parameters
----------
estimators : dict of (name (str), estimator) to benchmark
n_train : nber of training instances (int)
n_test : nber of testing instances (int)
n_features : list of feature-space dimensionality to test (int)
percentile : percentile at which to measure the speed (int [0-100])
Returns:
--------
percentiles : dict(estimator_name,
dict(n_features, percentile_perf_in_us))
"""
percentiles = defaultdict(defaultdict)
for n in n_features:
print("benchmarking with %d features" % n)
X_train, y_train, X_test, y_test = generate_dataset(n_train, n_test, n)
for cls_name, estimator in estimators.items():
estimator.fit(X_train, y_train)
gc.collect()
runtimes = bulk_benchmark_estimator(estimator, X_test, 30, False)
percentiles[cls_name][n] = 1e6 * np.percentile(runtimes, percentile)
return percentiles
def plot_n_features_influence(percentiles, percentile):
fig, ax1 = plt.subplots(figsize=(10, 6))
colors = ["r", "g", "b"]
for i, cls_name in enumerate(percentiles.keys()):
x = np.array(sorted([n for n in percentiles[cls_name].keys()]))
y = np.array([percentiles[cls_name][n] for n in x])
plt.plot(
x,
y,
color=colors[i],
)
ax1.yaxis.grid(True, linestyle="-", which="major", color="lightgrey", alpha=0.5)
ax1.set_axisbelow(True)
ax1.set_title("Evolution of Prediction Time with #Features")
ax1.set_xlabel("#Features")
ax1.set_ylabel("Prediction Time at %d%%-ile (us)" % percentile)
plt.show()
def benchmark_throughputs(configuration, duration_secs=0.1):
"""benchmark throughput for different estimators."""
X_train, y_train, X_test, y_test = generate_dataset(
configuration["n_train"], configuration["n_test"], configuration["n_features"]
)
throughputs = dict()
for estimator_config in configuration["estimators"]:
estimator_config["instance"].fit(X_train, y_train)
start_time = time.time()
n_predictions = 0
while (time.time() - start_time) < duration_secs:
estimator_config["instance"].predict(X_test[[0]])
n_predictions += 1
throughputs[estimator_config["name"]] = n_predictions / duration_secs
return throughputs
def plot_benchmark_throughput(throughputs, configuration):
fig, ax = plt.subplots(figsize=(10, 6))
colors = ["r", "g", "b"]
cls_infos = [
"%s\n(%d %s)"
% (
estimator_conf["name"],
estimator_conf["complexity_computer"](estimator_conf["instance"]),
estimator_conf["complexity_label"],
)
for estimator_conf in configuration["estimators"]
]
cls_values = [
throughputs[estimator_conf["name"]]
for estimator_conf in configuration["estimators"]
]
plt.bar(range(len(throughputs)), cls_values, width=0.5, color=colors)
ax.set_xticks(np.linspace(0.25, len(throughputs) - 0.75, len(throughputs)))
ax.set_xticklabels(cls_infos, fontsize=10)
ymax = max(cls_values) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel("Throughput (predictions/sec)")
ax.set_title(
"Prediction Throughput for different estimators (%d features)"
% configuration["n_features"]
)
plt.show()
```
Benchmark bulk/atomic prediction speed for various regressors
-------------------------------------------------------------
```
configuration = {
"n_train": int(1e3),
"n_test": int(1e2),
"n_features": int(1e2),
"estimators": [
{
"name": "Linear Model",
"instance": SGDRegressor(
penalty="elasticnet", alpha=0.01, l1_ratio=0.25, tol=1e-4
),
"complexity_label": "non-zero coefficients",
"complexity_computer": lambda clf: np.count_nonzero(clf.coef_),
},
{
"name": "RandomForest",
"instance": RandomForestRegressor(),
"complexity_label": "estimators",
"complexity_computer": lambda clf: clf.n_estimators,
},
{
"name": "SVR",
"instance": SVR(kernel="rbf"),
"complexity_label": "support vectors",
"complexity_computer": lambda clf: len(clf.support_vectors_),
},
],
}
benchmark(configuration)
```
```
Benchmarking SGDRegressor(alpha=0.01, l1_ratio=0.25, penalty='elasticnet', tol=0.0001)
Benchmarking RandomForestRegressor()
Benchmarking SVR()
```
Benchmark n\_features influence on prediction speed
---------------------------------------------------
```
percentile = 90
percentiles = n_feature_influence(
{"ridge": Ridge()},
configuration["n_train"],
configuration["n_test"],
[100, 250, 500],
percentile,
)
plot_n_features_influence(percentiles, percentile)
```
```
benchmarking with 100 features
benchmarking with 250 features
benchmarking with 500 features
```
Benchmark throughput
--------------------
```
throughputs = benchmark_throughputs(configuration)
plot_benchmark_throughput(throughputs, configuration)
```
**Total running time of the script:** ( 0 minutes 9.376 seconds)
[`Download Python source code: plot_prediction_latency.py`](https://scikit-learn.org/1.1/_downloads/5f054219fb38e926537d741fe5832e8c/plot_prediction_latency.py)
[`Download Jupyter notebook: plot_prediction_latency.ipynb`](https://scikit-learn.org/1.1/_downloads/2c8efe31be0d68b7945dbfbff0788dd3/plot_prediction_latency.ipynb)
scikit_learn Faces recognition example using eigenfaces and SVMs Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-face-recognition-py) to download the full example code or to run this example in your browser via Binder
Faces recognition example using eigenfaces and SVMs
===================================================
The dataset used in this example is a preprocessed excerpt of the “Labeled Faces in the Wild”, aka [LFW](http://vis-www.cs.umass.edu/lfw/):
<http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz> (233MB)
```
from time import time
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
from sklearn.datasets import fetch_lfw_people
from sklearn.metrics import classification_report
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.utils.fixes import loguniform
```
Download the data, if not already on disk, and load it as numpy arrays
```
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape
# for machine learning we use the data directly (as relative pixel
# positions info is ignored by this model)
X = lfw_people.data
n_features = X.shape[1]
# the label to predict is the id of the person
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]
print("Total dataset size:")
print("n_samples: %d" % n_samples)
print("n_features: %d" % n_features)
print("n_classes: %d" % n_classes)
```
```
Total dataset size:
n_samples: 1288
n_features: 1850
n_classes: 7
```
Split into a training set and a test set, keeping 25% of the data for testing.
```
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```
Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled dataset): unsupervised feature extraction / dimensionality reduction
```
n_components = 150
print(
"Extracting the top %d eigenfaces from %d faces" % (n_components, X_train.shape[0])
)
t0 = time()
pca = PCA(n_components=n_components, svd_solver="randomized", whiten=True).fit(X_train)
print("done in %0.3fs" % (time() - t0))
eigenfaces = pca.components_.reshape((n_components, h, w))
print("Projecting the input data on the eigenfaces orthonormal basis")
t0 = time()
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print("done in %0.3fs" % (time() - t0))
```
```
Extracting the top 150 eigenfaces from 966 faces
done in 0.083s
Projecting the input data on the eigenfaces orthonormal basis
done in 0.010s
```
Train an SVM classification model
```
print("Fitting the classifier to the training set")
t0 = time()
param_grid = {
"C": loguniform(1e3, 1e5),
"gamma": loguniform(1e-4, 1e-1),
}
clf = RandomizedSearchCV(
SVC(kernel="rbf", class_weight="balanced"), param_grid, n_iter=10
)
clf = clf.fit(X_train_pca, y_train)
print("done in %0.3fs" % (time() - t0))
print("Best estimator found by grid search:")
print(clf.best_estimator_)
```
```
Fitting the classifier to the training set
done in 5.647s
Best estimator found by grid search:
SVC(C=76823.03433306453, class_weight='balanced', gamma=0.003418945823095797)
```
Quantitative evaluation of the model quality on the test set
```
print("Predicting people's names on the test set")
t0 = time()
y_pred = clf.predict(X_test_pca)
print("done in %0.3fs" % (time() - t0))
print(classification_report(y_test, y_pred, target_names=target_names))
ConfusionMatrixDisplay.from_estimator(
clf, X_test_pca, y_test, display_labels=target_names, xticks_rotation="vertical"
)
plt.tight_layout()
plt.show()
```
```
Predicting people's names on the test set
done in 0.044s
precision recall f1-score support
Ariel Sharon 0.83 0.77 0.80 13
Colin Powell 0.87 0.90 0.89 60
Donald Rumsfeld 0.72 0.67 0.69 27
George W Bush 0.90 0.95 0.93 146
Gerhard Schroeder 0.78 0.84 0.81 25
Hugo Chavez 1.00 0.53 0.70 15
Tony Blair 0.85 0.81 0.83 36
accuracy 0.87 322
macro avg 0.85 0.78 0.81 322
weighted avg 0.87 0.87 0.86 322
```
Qualitative evaluation of the predictions using matplotlib
```
def plot_gallery(images, titles, h, w, n_row=3, n_col=4):
"""Helper function to plot a gallery of portraits"""
plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))
plt.subplots_adjust(bottom=0, left=0.01, right=0.99, top=0.90, hspace=0.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.title(titles[i], size=12)
plt.xticks(())
plt.yticks(())
```
Plot the result of the prediction on a portion of the test set
```
def title(y_pred, y_test, target_names, i):
pred_name = target_names[y_pred[i]].rsplit(" ", 1)[-1]
true_name = target_names[y_test[i]].rsplit(" ", 1)[-1]
return "predicted: %s\ntrue: %s" % (pred_name, true_name)
prediction_titles = [
title(y_pred, y_test, target_names, i) for i in range(y_pred.shape[0])
]
plot_gallery(X_test, prediction_titles, h, w)
```
Plot the gallery of the most significant eigenfaces
```
eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w)
plt.show()
```
The face recognition problem would be much more effectively solved by training convolutional neural networks, but this family of models is outside the scope of the scikit-learn library. Interested readers should instead try to use PyTorch or TensorFlow to implement such models.
**Total running time of the script:** ( 0 minutes 26.769 seconds)
[`Download Python source code: plot_face_recognition.py`](https://scikit-learn.org/1.1/_downloads/b3a994b2ad66fe78bcedaf151ab78b07/plot_face_recognition.py)
[`Download Jupyter notebook: plot_face_recognition.ipynb`](https://scikit-learn.org/1.1/_downloads/23e3d7fa2388aef4e9a60c4a6caf166d/plot_face_recognition.ipynb)
scikit_learn Wikipedia principal eigenvector Note
Click [here](#sphx-glr-download-auto-examples-applications-wikipedia-principal-eigenvector-py) to download the full example code or to run this example in your browser via Binder
Wikipedia principal eigenvector
===============================
A classical way to assess the relative importance of vertices in a graph is to compute the principal eigenvector of the adjacency matrix so as to assign to each vertex the values of the components of the first eigenvector as a centrality score:
<https://en.wikipedia.org/wiki/Eigenvector_centrality>
On the graph of webpages and links those values are called the PageRank scores by Google.
The goal of this example is to analyze the graph of links inside wikipedia articles to rank articles by relative importance according to this eigenvector centrality.
The traditional way to compute the principal eigenvector is to use the power iteration method:
<https://en.wikipedia.org/wiki/Power_iteration>
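For intuition, a minimal power-iteration sketch on a toy adjacency matrix might look like the code below; this is not part of the example script, which relies on randomized SVD and on the PageRank-style `centrality_scores` function further down.
```
import numpy as np

# Toy directed graph: adjacency[i, j] = 1 if there is a link from i to j.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

v = np.ones(A.shape[0]) / A.shape[0]  # uniform initial guess
for _ in range(100):
    v_new = A.T @ v                   # accumulate scores along incoming links
    v_new /= np.linalg.norm(v_new)
    if np.allclose(v, v_new, atol=1e-12):
        break
    v = v_new
print(v)  # approximates the principal eigenvector (centrality scores)
```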
Here the computation is achieved thanks to Martinsson’s Randomized SVD algorithm implemented in scikit-learn.
The graph data is fetched from the DBpedia dumps. DBpedia is an extraction of the latent structured data of the Wikipedia content.
```
# Author: Olivier Grisel <[email protected]>
# License: BSD 3 clause
from bz2 import BZ2File
import os
from datetime import datetime
from pprint import pprint
from time import time
import numpy as np
from scipy import sparse
from sklearn.decomposition import randomized_svd
from urllib.request import urlopen
```
Download data, if not already on disk
-------------------------------------
```
redirects_url = "http://downloads.dbpedia.org/3.5.1/en/redirects_en.nt.bz2"
redirects_filename = redirects_url.rsplit("/", 1)[1]
page_links_url = "http://downloads.dbpedia.org/3.5.1/en/page_links_en.nt.bz2"
page_links_filename = page_links_url.rsplit("/", 1)[1]
resources = [
(redirects_url, redirects_filename),
(page_links_url, page_links_filename),
]
for url, filename in resources:
if not os.path.exists(filename):
print("Downloading data from '%s', please wait..." % url)
opener = urlopen(url)
open(filename, "wb").write(opener.read())
print()
```
Loading the redirect files
--------------------------
```
def index(redirects, index_map, k):
"""Find the index of an article name after redirect resolution"""
k = redirects.get(k, k)
return index_map.setdefault(k, len(index_map))
DBPEDIA_RESOURCE_PREFIX_LEN = len("http://dbpedia.org/resource/")
SHORTNAME_SLICE = slice(DBPEDIA_RESOURCE_PREFIX_LEN + 1, -1)
def short_name(nt_uri):
"""Remove the < and > URI markers and the common URI prefix"""
return nt_uri[SHORTNAME_SLICE]
def get_redirects(redirects_filename):
"""Parse the redirections and build a transitively closed map out of it"""
redirects = {}
print("Parsing the NT redirect file")
for l, line in enumerate(BZ2File(redirects_filename)):
split = line.split()
if len(split) != 4:
print("ignoring malformed line: " + line)
continue
redirects[short_name(split[0])] = short_name(split[2])
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
# compute the transitive closure
print("Computing the transitive closure of the redirect relation")
for l, source in enumerate(redirects.keys()):
transitive_target = None
target = redirects[source]
seen = {source}
while True:
transitive_target = target
target = redirects.get(target)
if target is None or target in seen:
break
seen.add(target)
redirects[source] = transitive_target
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
return redirects
```
Computing the Adjacency matrix
------------------------------
```
def get_adjacency_matrix(redirects_filename, page_links_filename, limit=None):
"""Extract the adjacency graph as a scipy sparse matrix
Redirects are resolved first.
Returns X, the scipy sparse adjacency matrix, redirects as python
dict from article names to article names and index_map a python dict
from article names to python int (article indexes).
"""
print("Computing the redirect map")
redirects = get_redirects(redirects_filename)
print("Computing the integer index map")
index_map = dict()
links = list()
for l, line in enumerate(BZ2File(page_links_filename)):
split = line.split()
if len(split) != 4:
print("ignoring malformed line: " + line)
continue
i = index(redirects, index_map, short_name(split[0]))
j = index(redirects, index_map, short_name(split[2]))
links.append((i, j))
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
if limit is not None and l >= limit - 1:
break
print("Computing the adjacency matrix")
X = sparse.lil_matrix((len(index_map), len(index_map)), dtype=np.float32)
for i, j in links:
X[i, j] = 1.0
del links
print("Converting to CSR representation")
X = X.tocsr()
print("CSR conversion done")
return X, redirects, index_map
# stop after 5M links to make it possible to work in RAM
X, redirects, index_map = get_adjacency_matrix(
redirects_filename, page_links_filename, limit=5000000
)
names = {i: name for name, i in index_map.items()}
```
Computing Principal Singular Vector using Randomized SVD
--------------------------------------------------------
```
print("Computing the principal singular vectors using randomized_svd")
t0 = time()
U, s, V = randomized_svd(X, 5, n_iter=3)
print("done in %0.3fs" % (time() - t0))
# print the names of the wikipedia related strongest components of the
# principal singular vector which should be similar to the highest eigenvector
print("Top wikipedia pages according to principal singular vectors")
pprint([names[i] for i in np.abs(U.T[0]).argsort()[-10:]])
pprint([names[i] for i in np.abs(V[0]).argsort()[-10:]])
```
Computing Centrality scores
---------------------------
```
def centrality_scores(X, alpha=0.85, max_iter=100, tol=1e-10):
"""Power iteration computation of the principal eigenvector
This method is also known as Google PageRank and the implementation
is based on the one from the NetworkX project (BSD licensed too)
with copyrights by:
Aric Hagberg <[email protected]>
Dan Schult <[email protected]>
Pieter Swart <[email protected]>
"""
n = X.shape[0]
X = X.copy()
incoming_counts = np.asarray(X.sum(axis=1)).ravel()
print("Normalizing the graph")
for i in incoming_counts.nonzero()[0]:
X.data[X.indptr[i] : X.indptr[i + 1]] *= 1.0 / incoming_counts[i]
dangle = np.asarray(np.where(np.isclose(X.sum(axis=1), 0), 1.0 / n, 0)).ravel()
scores = np.full(n, 1.0 / n, dtype=np.float32) # initial guess
for i in range(max_iter):
print("power iteration #%d" % i)
prev_scores = scores
scores = (
alpha * (scores * X + np.dot(dangle, prev_scores))
+ (1 - alpha) * prev_scores.sum() / n
)
# check convergence: normalized l_inf norm
scores_max = np.abs(scores).max()
if scores_max == 0.0:
scores_max = 1.0
err = np.abs(scores - prev_scores).max() / scores_max
print("error: %0.6f" % err)
if err < n * tol:
return scores
return scores
print("Computing principal eigenvector score using a power iteration method")
t0 = time()
scores = centrality_scores(X, max_iter=100)
print("done in %0.3fs" % (time() - t0))
pprint([names[i] for i in np.abs(scores).argsort()[-10:]])
```
**Total running time of the script:** ( 0 minutes 0.000 seconds)
[`Download Python source code: wikipedia_principal_eigenvector.py`](https://scikit-learn.org/1.1/_downloads/51bc3899ceeec0ecf99c5f72ff1fd241/wikipedia_principal_eigenvector.py)
[`Download Jupyter notebook: wikipedia_principal_eigenvector.ipynb`](https://scikit-learn.org/1.1/_downloads/948a4dfa149766b475b1cf2515f289d1/wikipedia_principal_eigenvector.ipynb)
scikit_learn Out-of-core classification of text documents Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-out-of-core-classification-py) to download the full example code or to run this example in your browser via Binder
Out-of-core classification of text documents
============================================
This is an example showing how scikit-learn can be used for classification using an out-of-core approach: learning from data that doesn’t fit into main memory. We make use of an online classifier, i.e., one that supports the partial\_fit method, which will be fed with batches of examples. To guarantee that the feature space remains the same over time, we leverage a HashingVectorizer that will project each example into the same feature space. This is especially useful in the case of text classification where new features (words) may appear in each batch.
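Before diving into the full Reuters pipeline below, here is a minimal sketch of that pattern on two made-up mini-batches: the `HashingVectorizer` projects every batch into the same fixed-size feature space, so `partial_fit` can be called repeatedly.
```
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier()
all_classes = np.array([0, 1])  # must be declared up front for partial_fit

mini_batches = [
    (["cheap offer buy now", "minutes of the board meeting"], [1, 0]),
    (["limited offer cheap cheap", "quarterly earnings report"], [1, 0]),
]
for docs, labels in mini_batches:
    X_batch = vectorizer.transform(docs)  # same feature space for every batch
    clf.partial_fit(X_batch, labels, classes=all_classes)
```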
```
# Authors: Eustache Diemert <[email protected]>
# @FedericoV <https://github.com/FedericoV/>
# License: BSD 3 clause
from glob import glob
import itertools
import os.path
import re
import tarfile
import time
import sys
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
from html.parser import HTMLParser
from urllib.request import urlretrieve
from sklearn.datasets import get_data_home
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
def _not_in_sphinx():
# Hack to detect whether we are running by the sphinx builder
return "__file__" in globals()
```
Reuters Dataset related routines
--------------------------------
The dataset used in this example is Reuters-21578 as provided by the UCI ML repository. It will be automatically downloaded and uncompressed on first run.
```
class ReutersParser(HTMLParser):
"""Utility class to parse a SGML file and yield documents one at a time."""
def __init__(self, encoding="latin-1"):
HTMLParser.__init__(self)
self._reset()
self.encoding = encoding
def handle_starttag(self, tag, attrs):
method = "start_" + tag
getattr(self, method, lambda x: None)(attrs)
def handle_endtag(self, tag):
method = "end_" + tag
getattr(self, method, lambda: None)()
def _reset(self):
self.in_title = 0
self.in_body = 0
self.in_topics = 0
self.in_topic_d = 0
self.title = ""
self.body = ""
self.topics = []
self.topic_d = ""
def parse(self, fd):
self.docs = []
for chunk in fd:
self.feed(chunk.decode(self.encoding))
for doc in self.docs:
yield doc
self.docs = []
self.close()
def handle_data(self, data):
if self.in_body:
self.body += data
elif self.in_title:
self.title += data
elif self.in_topic_d:
self.topic_d += data
def start_reuters(self, attributes):
pass
def end_reuters(self):
self.body = re.sub(r"\s+", r" ", self.body)
self.docs.append(
{"title": self.title, "body": self.body, "topics": self.topics}
)
self._reset()
def start_title(self, attributes):
self.in_title = 1
def end_title(self):
self.in_title = 0
def start_body(self, attributes):
self.in_body = 1
def end_body(self):
self.in_body = 0
def start_topics(self, attributes):
self.in_topics = 1
def end_topics(self):
self.in_topics = 0
def start_d(self, attributes):
self.in_topic_d = 1
def end_d(self):
self.in_topic_d = 0
self.topics.append(self.topic_d)
self.topic_d = ""
def stream_reuters_documents(data_path=None):
"""Iterate over documents of the Reuters dataset.
The Reuters archive will automatically be downloaded and uncompressed if
the `data_path` directory does not exist.
Documents are represented as dictionaries with 'body' (str),
'title' (str), 'topics' (list(str)) keys.
"""
DOWNLOAD_URL = (
"http://archive.ics.uci.edu/ml/machine-learning-databases/"
"reuters21578-mld/reuters21578.tar.gz"
)
ARCHIVE_FILENAME = "reuters21578.tar.gz"
if data_path is None:
data_path = os.path.join(get_data_home(), "reuters")
if not os.path.exists(data_path):
"""Download the dataset."""
print("downloading dataset (once and for all) into %s" % data_path)
os.mkdir(data_path)
def progress(blocknum, bs, size):
total_sz_mb = "%.2f MB" % (size / 1e6)
current_sz_mb = "%.2f MB" % ((blocknum * bs) / 1e6)
if _not_in_sphinx():
sys.stdout.write("\rdownloaded %s / %s" % (current_sz_mb, total_sz_mb))
archive_path = os.path.join(data_path, ARCHIVE_FILENAME)
urlretrieve(DOWNLOAD_URL, filename=archive_path, reporthook=progress)
if _not_in_sphinx():
sys.stdout.write("\r")
print("untarring Reuters dataset...")
tarfile.open(archive_path, "r:gz").extractall(data_path)
print("done.")
parser = ReutersParser()
for filename in glob(os.path.join(data_path, "*.sgm")):
for doc in parser.parse(open(filename, "rb")):
yield doc
```
Main
----
Create the vectorizer and limit the number of features to a reasonable maximum
```
vectorizer = HashingVectorizer(
decode_error="ignore", n_features=2**18, alternate_sign=False
)
# Iterator over parsed Reuters SGML files.
data_stream = stream_reuters_documents()
# We learn a binary classification between the "acq" class and all the others.
# "acq" was chosen as it is more or less evenly distributed in the Reuters
# files. For other datasets, one should take care of creating a test set with
# a realistic portion of positive instances.
all_classes = np.array([0, 1])
positive_class = "acq"
# Here are some classifiers that support the `partial_fit` method
partial_fit_classifiers = {
"SGD": SGDClassifier(max_iter=5),
"Perceptron": Perceptron(),
"NB Multinomial": MultinomialNB(alpha=0.01),
"Passive-Aggressive": PassiveAggressiveClassifier(),
}
def get_minibatch(doc_iter, size, pos_class=positive_class):
"""Extract a minibatch of examples, return a tuple X_text, y.
Note: size is before excluding invalid docs with no topics assigned.
"""
data = [
("{title}\n\n{body}".format(**doc), pos_class in doc["topics"])
for doc in itertools.islice(doc_iter, size)
if doc["topics"]
]
if not len(data):
return np.asarray([], dtype=int), np.asarray([], dtype=int)
X_text, y = zip(*data)
return X_text, np.asarray(y, dtype=int)
def iter_minibatches(doc_iter, minibatch_size):
"""Generator of minibatches."""
X_text, y = get_minibatch(doc_iter, minibatch_size)
while len(X_text):
yield X_text, y
X_text, y = get_minibatch(doc_iter, minibatch_size)
# test data statistics
test_stats = {"n_test": 0, "n_test_pos": 0}
# First we hold out a number of examples to estimate accuracy
n_test_documents = 1000
tick = time.time()
X_test_text, y_test = get_minibatch(data_stream, 1000)
parsing_time = time.time() - tick
tick = time.time()
X_test = vectorizer.transform(X_test_text)
vectorizing_time = time.time() - tick
test_stats["n_test"] += len(y_test)
test_stats["n_test_pos"] += sum(y_test)
print("Test set is %d documents (%d positive)" % (len(y_test), sum(y_test)))
def progress(cls_name, stats):
"""Report progress information, return a string."""
duration = time.time() - stats["t0"]
s = "%20s classifier : \t" % cls_name
s += "%(n_train)6d train docs (%(n_train_pos)6d positive) " % stats
s += "%(n_test)6d test docs (%(n_test_pos)6d positive) " % test_stats
s += "accuracy: %(accuracy).3f " % stats
s += "in %.2fs (%5d docs/s)" % (duration, stats["n_train"] / duration)
return s
cls_stats = {}
for cls_name in partial_fit_classifiers:
stats = {
"n_train": 0,
"n_train_pos": 0,
"accuracy": 0.0,
"accuracy_history": [(0, 0)],
"t0": time.time(),
"runtime_history": [(0, 0)],
"total_fit_time": 0.0,
}
cls_stats[cls_name] = stats
get_minibatch(data_stream, n_test_documents)
# Discard test set
# We will feed the classifier with mini-batches of 1000 documents; this means
# we have at most 1000 docs in memory at any time. The smaller the document
# batch, the bigger the relative overhead of the partial fit methods.
minibatch_size = 1000
# Create an iterator over mini-batches built from the Reuters document stream.
minibatch_iterators = iter_minibatches(data_stream, minibatch_size)
total_vect_time = 0.0
# Main loop : iterate on mini-batches of examples
for i, (X_train_text, y_train) in enumerate(minibatch_iterators):
tick = time.time()
X_train = vectorizer.transform(X_train_text)
total_vect_time += time.time() - tick
for cls_name, cls in partial_fit_classifiers.items():
tick = time.time()
# update estimator with examples in the current mini-batch
cls.partial_fit(X_train, y_train, classes=all_classes)
# accumulate test accuracy stats
cls_stats[cls_name]["total_fit_time"] += time.time() - tick
cls_stats[cls_name]["n_train"] += X_train.shape[0]
cls_stats[cls_name]["n_train_pos"] += sum(y_train)
tick = time.time()
cls_stats[cls_name]["accuracy"] = cls.score(X_test, y_test)
cls_stats[cls_name]["prediction_time"] = time.time() - tick
acc_history = (cls_stats[cls_name]["accuracy"], cls_stats[cls_name]["n_train"])
cls_stats[cls_name]["accuracy_history"].append(acc_history)
run_history = (
cls_stats[cls_name]["accuracy"],
total_vect_time + cls_stats[cls_name]["total_fit_time"],
)
cls_stats[cls_name]["runtime_history"].append(run_history)
if i % 3 == 0:
print(progress(cls_name, cls_stats[cls_name]))
if i % 3 == 0:
print("\n")
```
```
downloading dataset (once and for all) into /home/runner/scikit_learn_data/reuters
untarring Reuters dataset...
done.
Test set is 969 documents (155 positive)
SGD classifier : 988 train docs ( 122 positive) 969 test docs ( 155 positive) accuracy: 0.843 in 0.61s ( 1618 docs/s)
Perceptron classifier : 988 train docs ( 122 positive) 969 test docs ( 155 positive) accuracy: 0.915 in 0.61s ( 1611 docs/s)
NB Multinomial classifier : 988 train docs ( 122 positive) 969 test docs ( 155 positive) accuracy: 0.840 in 0.62s ( 1589 docs/s)
Passive-Aggressive classifier : 988 train docs ( 122 positive) 969 test docs ( 155 positive) accuracy: 0.901 in 0.62s ( 1583 docs/s)
SGD classifier : 3918 train docs ( 441 positive) 969 test docs ( 155 positive) accuracy: 0.937 in 1.62s ( 2419 docs/s)
Perceptron classifier : 3918 train docs ( 441 positive) 969 test docs ( 155 positive) accuracy: 0.923 in 1.62s ( 2416 docs/s)
NB Multinomial classifier : 3918 train docs ( 441 positive) 969 test docs ( 155 positive) accuracy: 0.842 in 1.63s ( 2404 docs/s)
Passive-Aggressive classifier : 3918 train docs ( 441 positive) 969 test docs ( 155 positive) accuracy: 0.950 in 1.63s ( 2400 docs/s)
SGD classifier : 6298 train docs ( 692 positive) 969 test docs ( 155 positive) accuracy: 0.949 in 2.52s ( 2499 docs/s)
Perceptron classifier : 6298 train docs ( 692 positive) 969 test docs ( 155 positive) accuracy: 0.941 in 2.52s ( 2497 docs/s)
NB Multinomial classifier : 6298 train docs ( 692 positive) 969 test docs ( 155 positive) accuracy: 0.857 in 2.53s ( 2490 docs/s)
Passive-Aggressive classifier : 6298 train docs ( 692 positive) 969 test docs ( 155 positive) accuracy: 0.947 in 2.53s ( 2488 docs/s)
SGD classifier : 8910 train docs ( 1014 positive) 969 test docs ( 155 positive) accuracy: 0.954 in 3.49s ( 2553 docs/s)
Perceptron classifier : 8910 train docs ( 1014 positive) 969 test docs ( 155 positive) accuracy: 0.947 in 3.49s ( 2552 docs/s)
NB Multinomial classifier : 8910 train docs ( 1014 positive) 969 test docs ( 155 positive) accuracy: 0.870 in 3.50s ( 2546 docs/s)
Passive-Aggressive classifier : 8910 train docs ( 1014 positive) 969 test docs ( 155 positive) accuracy: 0.941 in 3.50s ( 2544 docs/s)
SGD classifier : 11435 train docs ( 1361 positive) 969 test docs ( 155 positive) accuracy: 0.945 in 4.44s ( 2574 docs/s)
Perceptron classifier : 11435 train docs ( 1361 positive) 969 test docs ( 155 positive) accuracy: 0.946 in 4.44s ( 2572 docs/s)
NB Multinomial classifier : 11435 train docs ( 1361 positive) 969 test docs ( 155 positive) accuracy: 0.886 in 4.45s ( 2568 docs/s)
Passive-Aggressive classifier : 11435 train docs ( 1361 positive) 969 test docs ( 155 positive) accuracy: 0.962 in 4.45s ( 2567 docs/s)
SGD classifier : 14344 train docs ( 1676 positive) 969 test docs ( 155 positive) accuracy: 0.955 in 5.46s ( 2625 docs/s)
Perceptron classifier : 14344 train docs ( 1676 positive) 969 test docs ( 155 positive) accuracy: 0.959 in 5.46s ( 2624 docs/s)
NB Multinomial classifier : 14344 train docs ( 1676 positive) 969 test docs ( 155 positive) accuracy: 0.901 in 5.47s ( 2621 docs/s)
Passive-Aggressive classifier : 14344 train docs ( 1676 positive) 969 test docs ( 155 positive) accuracy: 0.943 in 5.47s ( 2620 docs/s)
SGD classifier : 17260 train docs ( 2066 positive) 969 test docs ( 155 positive) accuracy: 0.960 in 6.47s ( 2665 docs/s)
Perceptron classifier : 17260 train docs ( 2066 positive) 969 test docs ( 155 positive) accuracy: 0.948 in 6.48s ( 2664 docs/s)
NB Multinomial classifier : 17260 train docs ( 2066 positive) 969 test docs ( 155 positive) accuracy: 0.913 in 6.48s ( 2661 docs/s)
Passive-Aggressive classifier : 17260 train docs ( 2066 positive) 969 test docs ( 155 positive) accuracy: 0.953 in 6.49s ( 2660 docs/s)
```
Plot results
------------
The plot represents the learning curve of the classifier: the evolution of classification accuracy over the course of the mini-batches. Accuracy is measured on the first 1000 samples, held out as a validation set.
To limit the memory consumption, we queue examples up to a fixed amount before feeding them to the learner.
```
def plot_accuracy(x, y, x_legend):
"""Plot accuracy as a function of x."""
x = np.array(x)
y = np.array(y)
plt.title("Classification accuracy as a function of %s" % x_legend)
plt.xlabel("%s" % x_legend)
plt.ylabel("Accuracy")
plt.grid(True)
plt.plot(x, y)
rcParams["legend.fontsize"] = 10
cls_names = list(sorted(cls_stats.keys()))
# Plot accuracy evolution
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with #examples
accuracy, n_examples = zip(*stats["accuracy_history"])
plot_accuracy(n_examples, accuracy, "training examples (#)")
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc="best")
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with runtime
accuracy, runtime = zip(*stats["runtime_history"])
plot_accuracy(runtime, accuracy, "runtime (s)")
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc="best")
# Plot fitting times
plt.figure()
fig = plt.gcf()
cls_runtime = [stats["total_fit_time"] for cls_name, stats in sorted(cls_stats.items())]
cls_runtime.append(total_vect_time)
cls_names.append("Vectorization")
bar_colors = ["b", "g", "r", "c", "m", "y"]
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5, color=bar_colors)
ax.set_xticks(np.linspace(0, len(cls_names) - 1, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=10)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel("runtime (s)")
ax.set_title("Training Times")
def autolabel(rectangles):
"""attach some text vi autolabel on rectangles."""
for rect in rectangles:
height = rect.get_height()
ax.text(
rect.get_x() + rect.get_width() / 2.0,
1.05 * height,
"%.4f" % height,
ha="center",
va="bottom",
)
plt.setp(plt.xticks()[1], rotation=30)
autolabel(rectangles)
plt.tight_layout()
plt.show()
# Plot prediction times
plt.figure()
cls_runtime = []
cls_names = list(sorted(cls_stats.keys()))
for cls_name, stats in sorted(cls_stats.items()):
cls_runtime.append(stats["prediction_time"])
cls_runtime.append(parsing_time)
cls_names.append("Read/Parse\n+Feat.Extr.")
cls_runtime.append(vectorizing_time)
cls_names.append("Hashing\n+Vect.")
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5, color=bar_colors)
ax.set_xticks(np.linspace(0, len(cls_names) - 1, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=8)
plt.setp(plt.xticks()[1], rotation=30)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel("runtime (s)")
ax.set_title("Prediction Times (%d instances)" % n_test_documents)
autolabel(rectangles)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 8.477 seconds)
[`Download Python source code: plot_out_of_core_classification.py`](https://scikit-learn.org/1.1/_downloads/f7c999465d2f8d68e0c04bec778aa48e/plot_out_of_core_classification.py)
[`Download Jupyter notebook: plot_out_of_core_classification.ipynb`](https://scikit-learn.org/1.1/_downloads/c08598f3ffe66017f7cad294026ee0b9/plot_out_of_core_classification.ipynb)
scikit_learn Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-topics-extraction-with-nmf-lda-py) to download the full example code or to run this example in your browser via Binder
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
=======================================================================================
This is an example of applying [`NMF`](../../modules/generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") and [`LatentDirichletAllocation`](../../modules/generated/sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation "sklearn.decomposition.LatentDirichletAllocation") on a corpus of documents to extract additive models of the topic structure of the corpus. The output is a plot of topics, each represented as a bar plot of its top few words, sized by their weights.
Non-negative Matrix Factorization is applied with two different objective functions: the Frobenius norm, and the generalized Kullback-Leibler divergence. The latter is equivalent to Probabilistic Latent Semantic Indexing.
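For reference, and leaving aside the small regularization terms used in the code below, the two objectives minimized over the non-negative factors W and H are (a sketch of the standard formulations):
```
\min_{W, H \ge 0} \ \tfrac{1}{2}\,\lVert X - WH\rVert_{\mathrm{Fro}}^{2}
\qquad\text{and}\qquad
\min_{W, H \ge 0} \ \sum_{i,j}\Big(X_{ij}\log\frac{X_{ij}}{(WH)_{ij}} - X_{ij} + (WH)_{ij}\Big)
```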
The default parameters (n\_samples / n\_features / n\_components) should make the example runnable in a few tens of seconds. You can try to increase the dimensions of the problem, but be aware that the time complexity of NMF is polynomial in the problem size. For LDA, the time complexity is proportional to (n\_samples \* iterations).
* 
* 
* 
* 
* 
```
Loading dataset...
done in 1.015s.
Extracting tf-idf features for NMF...
done in 0.232s.
Extracting tf features for LDA...
done in 0.235s.
Fitting the NMF model (Frobenius norm) with tf-idf features, n_samples=2000 and n_features=1000...
done in 0.068s.
Fitting the NMF model (generalized Kullback-Leibler divergence) with tf-idf features, n_samples=2000 and n_features=1000...
done in 1.011s.
Fitting the MiniBatchNMF model (Frobenius norm) with tf-idf features, n_samples=2000 and n_features=1000, batch_size=128...
done in 0.081s.
Fitting the MiniBatchNMF model (generalized Kullback-Leibler divergence) with tf-idf features, n_samples=2000 and n_features=1000, batch_size=128...
done in 0.226s.
Fitting LDA models with tf features, n_samples=2000 and n_features=1000...
done in 2.480s.
```
```
# Author: Olivier Grisel <[email protected]>
# Lars Buitinck
# Chyi-Kwei Yau <[email protected]>
# License: BSD 3 clause
from time import time
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, MiniBatchNMF, LatentDirichletAllocation
from sklearn.datasets import fetch_20newsgroups
n_samples = 2000
n_features = 1000
n_components = 10
n_top_words = 20
batch_size = 128
init = "nndsvda"
def plot_top_words(model, feature_names, n_top_words, title):
fig, axes = plt.subplots(2, 5, figsize=(30, 15), sharex=True)
axes = axes.flatten()
for topic_idx, topic in enumerate(model.components_):
top_features_ind = topic.argsort()[: -n_top_words - 1 : -1]
top_features = [feature_names[i] for i in top_features_ind]
weights = topic[top_features_ind]
ax = axes[topic_idx]
ax.barh(top_features, weights, height=0.7)
ax.set_title(f"Topic {topic_idx +1}", fontdict={"fontsize": 30})
ax.invert_yaxis()
ax.tick_params(axis="both", which="major", labelsize=20)
for i in "top right left".split():
ax.spines[i].set_visible(False)
fig.suptitle(title, fontsize=40)
plt.subplots_adjust(top=0.90, bottom=0.05, wspace=0.90, hspace=0.3)
plt.show()
# Load the 20 newsgroups dataset and vectorize it. We use a few heuristics
# to filter out useless terms early on: the posts are stripped of headers,
# footers and quoted replies, and common English words, words occurring in
# only one document or in at least 95% of the documents are removed.
print("Loading dataset...")
t0 = time()
data, _ = fetch_20newsgroups(
shuffle=True,
random_state=1,
remove=("headers", "footers", "quotes"),
return_X_y=True,
)
data_samples = data[:n_samples]
print("done in %0.3fs." % (time() - t0))
# Use tf-idf features for NMF.
print("Extracting tf-idf features for NMF...")
tfidf_vectorizer = TfidfVectorizer(
max_df=0.95, min_df=2, max_features=n_features, stop_words="english"
)
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(
max_df=0.95, min_df=2, max_features=n_features, stop_words="english"
)
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print()
# Fit the NMF model
print(
"Fitting the NMF model (Frobenius norm) with tf-idf features, "
"n_samples=%d and n_features=%d..." % (n_samples, n_features)
)
t0 = time()
nmf = NMF(
n_components=n_components,
random_state=1,
init=init,
beta_loss="frobenius",
alpha_W=0.00005,
alpha_H=0.00005,
l1_ratio=1,
).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
tfidf_feature_names = tfidf_vectorizer.get_feature_names_out()
plot_top_words(
nmf, tfidf_feature_names, n_top_words, "Topics in NMF model (Frobenius norm)"
)
# Fit the NMF model
print(
"\n" * 2,
"Fitting the NMF model (generalized Kullback-Leibler "
"divergence) with tf-idf features, n_samples=%d and n_features=%d..."
% (n_samples, n_features),
)
t0 = time()
nmf = NMF(
n_components=n_components,
random_state=1,
init=init,
beta_loss="kullback-leibler",
solver="mu",
max_iter=1000,
alpha_W=0.00005,
alpha_H=0.00005,
l1_ratio=0.5,
).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
tfidf_feature_names = tfidf_vectorizer.get_feature_names_out()
plot_top_words(
nmf,
tfidf_feature_names,
n_top_words,
"Topics in NMF model (generalized Kullback-Leibler divergence)",
)
# Fit the MiniBatchNMF model
print(
"\n" * 2,
"Fitting the MiniBatchNMF model (Frobenius norm) with tf-idf "
"features, n_samples=%d and n_features=%d, batch_size=%d..."
% (n_samples, n_features, batch_size),
)
t0 = time()
mbnmf = MiniBatchNMF(
n_components=n_components,
random_state=1,
batch_size=batch_size,
init=init,
beta_loss="frobenius",
alpha_W=0.00005,
alpha_H=0.00005,
l1_ratio=0.5,
).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
tfidf_feature_names = tfidf_vectorizer.get_feature_names_out()
plot_top_words(
mbnmf,
tfidf_feature_names,
n_top_words,
"Topics in MiniBatchNMF model (Frobenius norm)",
)
# Fit the MiniBatchNMF model
print(
"\n" * 2,
"Fitting the MiniBatchNMF model (generalized Kullback-Leibler "
"divergence) with tf-idf features, n_samples=%d and n_features=%d, "
"batch_size=%d..." % (n_samples, n_features, batch_size),
)
t0 = time()
mbnmf = MiniBatchNMF(
n_components=n_components,
random_state=1,
batch_size=batch_size,
init=init,
beta_loss="kullback-leibler",
alpha_W=0.00005,
alpha_H=0.00005,
l1_ratio=0.5,
).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
tfidf_feature_names = tfidf_vectorizer.get_feature_names_out()
plot_top_words(
mbnmf,
tfidf_feature_names,
n_top_words,
"Topics in MiniBatchNMF model (generalized Kullback-Leibler divergence)",
)
print(
"\n" * 2,
"Fitting LDA models with tf features, n_samples=%d and n_features=%d..."
% (n_samples, n_features),
)
lda = LatentDirichletAllocation(
n_components=n_components,
max_iter=5,
learning_method="online",
learning_offset=50.0,
random_state=0,
)
t0 = time()
lda.fit(tf)
print("done in %0.3fs." % (time() - t0))
tf_feature_names = tf_vectorizer.get_feature_names_out()
plot_top_words(lda, tf_feature_names, n_top_words, "Topics in LDA model")
```
**Total running time of the script:** ( 0 minutes 9.834 seconds)
[`Download Python source code: plot_topics_extraction_with_nmf_lda.py`](https://scikit-learn.org/1.1/_downloads/2b2bebba7f9fb4d03b9c12d63c8b44ad/plot_topics_extraction_with_nmf_lda.py)
[`Download Jupyter notebook: plot_topics_extraction_with_nmf_lda.ipynb`](https://scikit-learn.org/1.1/_downloads/b26574ccf9c31e12ab2afd8d683f3279/plot_topics_extraction_with_nmf_lda.ipynb)
scikit_learn Species distribution modeling Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-species-distribution-modeling-py) to download the full example code or to run this example in your browser via Binder
Species distribution modeling
=============================
Modeling species’ geographic distributions is an important problem in conservation biology. In this example we model the geographic distribution of two South American mammals given past observations and 14 environmental variables. Since we have only positive examples (there are no unsuccessful observations), we cast this problem as a density estimation problem and use the [`OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") as our modeling tool. The dataset is provided by Phillips et al. (2006). If available, the example uses [basemap](https://matplotlib.org/basemap/) to plot the coastlines and national boundaries of South America.
The two species are:
* [“Bradypus variegatus”](http://www.iucnredlist.org/details/3038/0) , the Brown-throated Sloth.
* [“Microryzomys minutus”](http://www.iucnredlist.org/details/13408/0) , also known as the Forest Small Rice Rat, a rodent that lives in Peru, Colombia, Ecuador, and Venezuela.
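A minimal sketch of this presence-only idea on synthetic 2-D points (not the species data used below) is to fit the one-class SVM on positive observations alone and score new locations with its decision function:
```
# Hedged sketch: synthetic presence-only data, not the actual species coverages.
import numpy as np
from sklearn.svm import OneClassSVM
rng = np.random.RandomState(0)
presence = rng.normal(loc=0.0, scale=0.5, size=(200, 2))     # positive samples only
query_points = rng.uniform(low=-2.0, high=2.0, size=(5, 2))  # hypothetical locations
clf = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5).fit(presence)
scores = clf.decision_function(query_points)  # higher = more typical of the fitted density
print(scores)
```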
References
----------
* [“Maximum entropy modeling of species geographic distributions”](http://rob.schapire.net/papers/ecolmod.pdf) S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling, 190:231-259, 2006.
```
________________________________________________________________________________
Modeling distribution of species 'bradypus variegatus'
- fit OneClassSVM ... done.
- plot coastlines from coverage
- predict species distribution
Area under the ROC curve : 0.868443
________________________________________________________________________________
Modeling distribution of species 'microryzomys minutus'
- fit OneClassSVM ... done.
- plot coastlines from coverage
- predict species distribution
Area under the ROC curve : 0.993919
time elapsed: 11.03s
```
```
# Authors: Peter Prettenhofer <[email protected]>
# Jake Vanderplas <[email protected]>
#
# License: BSD 3 clause
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import Bunch
from sklearn.datasets import fetch_species_distributions
from sklearn import svm, metrics
# if basemap is available, we'll use it.
# otherwise, we'll improvise later...
try:
from mpl_toolkits.basemap import Basemap
basemap = True
except ImportError:
basemap = False
def construct_grids(batch):
"""Construct the map grid from the batch object
Parameters
----------
batch : Batch object
The object returned by :func:`fetch_species_distributions`
Returns
-------
(xgrid, ygrid) : 1-D arrays
The grid corresponding to the values in batch.coverages
"""
# x,y coordinates for corner cells
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
ymin = batch.y_left_lower_corner + batch.grid_size
ymax = ymin + (batch.Ny * batch.grid_size)
# x coordinates of the grid cells
xgrid = np.arange(xmin, xmax, batch.grid_size)
# y coordinates of the grid cells
ygrid = np.arange(ymin, ymax, batch.grid_size)
return (xgrid, ygrid)
def create_species_bunch(species_name, train, test, coverages, xgrid, ygrid):
"""Create a bunch with information about a particular organism
This will use the test/train record arrays to extract the
data specific to the given species name.
"""
bunch = Bunch(name=" ".join(species_name.split("_")[:2]))
species_name = species_name.encode("ascii")
points = dict(test=test, train=train)
for label, pts in points.items():
# choose points associated with the desired species
pts = pts[pts["species"] == species_name]
bunch["pts_%s" % label] = pts
# determine coverage values for each of the training & testing points
ix = np.searchsorted(xgrid, pts["dd long"])
iy = np.searchsorted(ygrid, pts["dd lat"])
bunch["cov_%s" % label] = coverages[:, -iy, ix].T
return bunch
def plot_species_distribution(
species=("bradypus_variegatus_0", "microryzomys_minutus_0")
):
"""
Plot the species distribution.
"""
if len(species) > 2:
print(
"Note: when more than two species are provided,"
" only the first two will be used"
)
t0 = time()
# Load the compressed data
data = fetch_species_distributions()
# Set up the data grid
xgrid, ygrid = construct_grids(data)
# The grid in x,y coordinates
X, Y = np.meshgrid(xgrid, ygrid[::-1])
# create a bunch for each species
BV_bunch = create_species_bunch(
species[0], data.train, data.test, data.coverages, xgrid, ygrid
)
MM_bunch = create_species_bunch(
species[1], data.train, data.test, data.coverages, xgrid, ygrid
)
# background points (grid coordinates) for evaluation
np.random.seed(13)
background_points = np.c_[
np.random.randint(low=0, high=data.Ny, size=10000),
np.random.randint(low=0, high=data.Nx, size=10000),
].T
# We'll make use of the fact that coverages[6] has measurements at all
# land points. This will help us decide between land and water.
land_reference = data.coverages[6]
# Fit, predict, and plot for each species.
for i, species in enumerate([BV_bunch, MM_bunch]):
print("_" * 80)
print("Modeling distribution of species '%s'" % species.name)
# Standardize features
mean = species.cov_train.mean(axis=0)
std = species.cov_train.std(axis=0)
train_cover_std = (species.cov_train - mean) / std
# Fit OneClassSVM
print(" - fit OneClassSVM ... ", end="")
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
clf.fit(train_cover_std)
print("done.")
# Plot map of South America
plt.subplot(1, 2, i + 1)
if basemap:
print(" - plot coastlines using basemap")
m = Basemap(
projection="cyl",
llcrnrlat=Y.min(),
urcrnrlat=Y.max(),
llcrnrlon=X.min(),
urcrnrlon=X.max(),
resolution="c",
)
m.drawcoastlines()
m.drawcountries()
else:
print(" - plot coastlines from coverage")
plt.contour(
X, Y, land_reference, levels=[-9998], colors="k", linestyles="solid"
)
plt.xticks([])
plt.yticks([])
print(" - predict species distribution")
# Predict species distribution using the training data
Z = np.ones((data.Ny, data.Nx), dtype=np.float64)
# We'll predict only for the land points.
idx = np.where(land_reference > -9999)
coverages_land = data.coverages[:, idx[0], idx[1]].T
pred = clf.decision_function((coverages_land - mean) / std)
Z *= pred.min()
Z[idx[0], idx[1]] = pred
levels = np.linspace(Z.min(), Z.max(), 25)
Z[land_reference == -9999] = -9999
# plot contours of the prediction
plt.contourf(X, Y, Z, levels=levels, cmap=plt.cm.Reds)
plt.colorbar(format="%.2f")
# scatter training/testing points
plt.scatter(
species.pts_train["dd long"],
species.pts_train["dd lat"],
s=2**2,
c="black",
marker="^",
label="train",
)
plt.scatter(
species.pts_test["dd long"],
species.pts_test["dd lat"],
s=2**2,
c="black",
marker="x",
label="test",
)
plt.legend()
plt.title(species.name)
plt.axis("equal")
# Compute AUC with regards to background points
pred_background = Z[background_points[0], background_points[1]]
pred_test = clf.decision_function((species.cov_test - mean) / std)
scores = np.r_[pred_test, pred_background]
y = np.r_[np.ones(pred_test.shape), np.zeros(pred_background.shape)]
fpr, tpr, thresholds = metrics.roc_curve(y, scores)
roc_auc = metrics.auc(fpr, tpr)
plt.text(-35, -70, "AUC: %.3f" % roc_auc, ha="right")
print("\n Area under the ROC curve : %f" % roc_auc)
print("\ntime elapsed: %.2fs" % (time() - t0))
plot_species_distribution()
plt.show()
```
**Total running time of the script:** ( 0 minutes 11.200 seconds)
[`Download Python source code: plot_species_distribution_modeling.py`](https://scikit-learn.org/1.1/_downloads/69878e8e2864920aa874c5a68cecf1d3/plot_species_distribution_modeling.py)
[`Download Jupyter notebook: plot_species_distribution_modeling.ipynb`](https://scikit-learn.org/1.1/_downloads/b5d1ec88ae06ced89813c50d00effe51/plot_species_distribution_modeling.ipynb)
scikit_learn Model Complexity Influence Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-model-complexity-influence-py) to download the full example code or to run this example in your browser via Binder
Model Complexity Influence
==========================
Demonstrate how model complexity influences both prediction accuracy and computational performance.
We will be using two datasets:
* [Diabetes dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#diabetes-dataset) for regression. This dataset consists of 10 measurements taken from diabetes patients. The task is to predict disease progression;
* [The 20 newsgroups text dataset](https://scikit-learn.org/1.1/datasets/real_world.html#newsgroups-dataset) for classification. This dataset consists of newsgroup posts. The task is to predict which of the 20 topics each post is about.
We will model the complexity influence on three different estimators:
* [`SGDClassifier`](../../modules/generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") (for classification data) which implements stochastic gradient descent learning;
* [`NuSVR`](../../modules/generated/sklearn.svm.nusvr#sklearn.svm.NuSVR "sklearn.svm.NuSVR") (for regression data) which implements Nu support vector regression;
* [`GradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") builds an additive model in a forward stage-wise fashion. Notice that [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") is much faster than [`GradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") starting with intermediate datasets (`n_samples >= 10_000`), which is not the case for this example.
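As a side note, a minimal sketch of trying `HistGradientBoostingRegressor` on the same kind of regression data might look like the following (illustrative only; the benchmark below keeps `GradientBoostingRegressor`):
```
# Illustrative sketch only; not part of the benchmark below.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
hgb = HistGradientBoostingRegressor(max_iter=100, random_state=0)
hgb.fit(X_train, y_train)
print("R^2 on the test set:", hgb.score(X_test, y_test))
```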
We make the model complexity vary through the choice of relevant model parameters in each of our selected models. Next, we will measure the influence on both computational performance (latency) and predictive power (MSE or Hamming Loss).
```
# Authors: Eustache Diemert <[email protected]>
# Maria Telenczuk <https://github.com/maikia>
# Guillaume Lemaitre <[email protected]>
# License: BSD 3 clause
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.svm import NuSVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import hamming_loss
# Initialize random generator
np.random.seed(0)
```
Load the data
-------------
First we load both datasets.
Note
We are using [`fetch_20newsgroups_vectorized`](../../modules/generated/sklearn.datasets.fetch_20newsgroups_vectorized#sklearn.datasets.fetch_20newsgroups_vectorized "sklearn.datasets.fetch_20newsgroups_vectorized") to download the 20 newsgroups dataset. It returns ready-to-use features.
Note
`X` of the 20 newsgroups dataset is a sparse matrix, while `X` of the diabetes dataset is a numpy array.
```
def generate_data(case):
"""Generate regression/classification data."""
if case == "regression":
X, y = datasets.load_diabetes(return_X_y=True)
train_size = 0.8
elif case == "classification":
X, y = datasets.fetch_20newsgroups_vectorized(subset="all", return_X_y=True)
train_size = 0.4 # to make the example run faster
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=train_size, random_state=0
)
data = {"X_train": X_train, "X_test": X_test, "y_train": y_train, "y_test": y_test}
return data
regression_data = generate_data("regression")
classification_data = generate_data("classification")
```
Benchmark influence
-------------------
Next, we can calculate the influence of the parameters on the given estimator. In each round, we set the estimator with the new value of `changing_param` and collect the prediction times, prediction performance, and complexities to see how those changes affect the estimator. We calculate the complexity using the `complexity_computer` passed as a parameter.
```
def benchmark_influence(conf):
"""
Benchmark influence of `changing_param` on both MSE and latency.
"""
prediction_times = []
prediction_powers = []
complexities = []
for param_value in conf["changing_param_values"]:
conf["tuned_params"][conf["changing_param"]] = param_value
estimator = conf["estimator"](**conf["tuned_params"])
print("Benchmarking %s" % estimator)
estimator.fit(conf["data"]["X_train"], conf["data"]["y_train"])
conf["postfit_hook"](estimator)
complexity = conf["complexity_computer"](estimator)
complexities.append(complexity)
start_time = time.time()
for _ in range(conf["n_samples"]):
y_pred = estimator.predict(conf["data"]["X_test"])
elapsed_time = (time.time() - start_time) / float(conf["n_samples"])
prediction_times.append(elapsed_time)
pred_score = conf["prediction_performance_computer"](
conf["data"]["y_test"], y_pred
)
prediction_powers.append(pred_score)
print(
"Complexity: %d | %s: %.4f | Pred. Time: %fs\n"
% (
complexity,
conf["prediction_performance_label"],
pred_score,
elapsed_time,
)
)
return prediction_powers, prediction_times, complexities
```
Choose parameters
-----------------
We choose the parameters for each of our estimators by making a dictionary with all the necessary values. `changing_param` is the name of the parameter which will vary in each estimator. Complexity will be defined by the `complexity_label` and calculated using `complexity_computer`. Also note that depending on the estimator type we are passing different data.
```
def _count_nonzero_coefficients(estimator):
a = estimator.coef_.toarray()
return np.count_nonzero(a)
configurations = [
{
"estimator": SGDClassifier,
"tuned_params": {
"penalty": "elasticnet",
"alpha": 0.001,
"loss": "modified_huber",
"fit_intercept": True,
"tol": 1e-1,
"n_iter_no_change": 2,
},
"changing_param": "l1_ratio",
"changing_param_values": [0.25, 0.5, 0.75, 0.9],
"complexity_label": "non_zero coefficients",
"complexity_computer": _count_nonzero_coefficients,
"prediction_performance_computer": hamming_loss,
"prediction_performance_label": "Hamming Loss (Misclassification Ratio)",
"postfit_hook": lambda x: x.sparsify(),
"data": classification_data,
"n_samples": 5,
},
{
"estimator": NuSVR,
"tuned_params": {"C": 1e3, "gamma": 2**-15},
"changing_param": "nu",
"changing_param_values": [0.05, 0.1, 0.2, 0.35, 0.5],
"complexity_label": "n_support_vectors",
"complexity_computer": lambda x: len(x.support_vectors_),
"data": regression_data,
"postfit_hook": lambda x: x,
"prediction_performance_computer": mean_squared_error,
"prediction_performance_label": "MSE",
"n_samples": 15,
},
{
"estimator": GradientBoostingRegressor,
"tuned_params": {
"loss": "squared_error",
"learning_rate": 0.05,
"max_depth": 2,
},
"changing_param": "n_estimators",
"changing_param_values": [10, 25, 50, 75, 100],
"complexity_label": "n_trees",
"complexity_computer": lambda x: x.n_estimators,
"data": regression_data,
"postfit_hook": lambda x: x,
"prediction_performance_computer": mean_squared_error,
"prediction_performance_label": "MSE",
"n_samples": 15,
},
]
```
Run the code and plot the results
---------------------------------
We defined all the functions required to run our benchmark. Now, we will loop over the different configurations that we defined previously. Subsequently, we can analyze the plots obtained from the benchmark: relaxing the `L1` penalty in the SGD classifier reduces the prediction error but leads to an increase in the training time. We can draw a similar analysis regarding the training time, which increases with the number of support vectors with a Nu-SVR. However, we observe that there is an optimal number of support vectors which reduces the prediction error. Indeed, too few support vectors lead to an under-fitted model while too many support vectors lead to an over-fitted model. The exact same conclusion can be drawn for the gradient-boosting model. The only difference with the Nu-SVR is that having too many trees in the ensemble is not as detrimental.
```
def plot_influence(conf, mse_values, prediction_times, complexities):
"""
Plot influence of model complexity on both accuracy and latency.
"""
fig = plt.figure()
fig.subplots_adjust(right=0.75)
# first axes (prediction error)
ax1 = fig.add_subplot(111)
line1 = ax1.plot(complexities, mse_values, c="tab:blue", ls="-")[0]
ax1.set_xlabel("Model Complexity (%s)" % conf["complexity_label"])
y1_label = conf["prediction_performance_label"]
ax1.set_ylabel(y1_label)
ax1.spines["left"].set_color(line1.get_color())
ax1.yaxis.label.set_color(line1.get_color())
ax1.tick_params(axis="y", colors=line1.get_color())
# second axes (latency)
ax2 = fig.add_subplot(111, sharex=ax1, frameon=False)
line2 = ax2.plot(complexities, prediction_times, c="tab:orange", ls="-")[0]
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
y2_label = "Time (s)"
ax2.set_ylabel(y2_label)
ax1.spines["right"].set_color(line2.get_color())
ax2.yaxis.label.set_color(line2.get_color())
ax2.tick_params(axis="y", colors=line2.get_color())
plt.legend(
(line1, line2), ("prediction error", "prediction latency"), loc="upper right"
)
plt.title(
"Influence of varying '%s' on %s"
% (conf["changing_param"], conf["estimator"].__name__)
)
for conf in configurations:
prediction_performances, prediction_times, complexities = benchmark_influence(conf)
plot_influence(conf, prediction_performances, prediction_times, complexities)
plt.show()
```
```
Benchmarking SGDClassifier(alpha=0.001, l1_ratio=0.25, loss='modified_huber',
n_iter_no_change=2, penalty='elasticnet', tol=0.1)
Complexity: 4948 | Hamming Loss (Misclassification Ratio): 0.2675 | Pred. Time: 0.061210s
Benchmarking SGDClassifier(alpha=0.001, l1_ratio=0.5, loss='modified_huber',
n_iter_no_change=2, penalty='elasticnet', tol=0.1)
Complexity: 1847 | Hamming Loss (Misclassification Ratio): 0.3264 | Pred. Time: 0.046076s
Benchmarking SGDClassifier(alpha=0.001, l1_ratio=0.75, loss='modified_huber',
n_iter_no_change=2, penalty='elasticnet', tol=0.1)
Complexity: 997 | Hamming Loss (Misclassification Ratio): 0.3383 | Pred. Time: 0.037848s
Benchmarking SGDClassifier(alpha=0.001, l1_ratio=0.9, loss='modified_huber',
n_iter_no_change=2, penalty='elasticnet', tol=0.1)
Complexity: 802 | Hamming Loss (Misclassification Ratio): 0.3582 | Pred. Time: 0.034874s
Benchmarking NuSVR(C=1000.0, gamma=3.0517578125e-05, nu=0.05)
Complexity: 18 | MSE: 5558.7313 | Pred. Time: 0.000150s
Benchmarking NuSVR(C=1000.0, gamma=3.0517578125e-05, nu=0.1)
Complexity: 36 | MSE: 5289.8022 | Pred. Time: 0.000221s
Benchmarking NuSVR(C=1000.0, gamma=3.0517578125e-05, nu=0.2)
Complexity: 72 | MSE: 5193.8353 | Pred. Time: 0.000378s
Benchmarking NuSVR(C=1000.0, gamma=3.0517578125e-05, nu=0.35)
Complexity: 124 | MSE: 5131.3279 | Pred. Time: 0.000602s
Benchmarking NuSVR(C=1000.0, gamma=3.0517578125e-05)
Complexity: 178 | MSE: 5149.0779 | Pred. Time: 0.000839s
Benchmarking GradientBoostingRegressor(learning_rate=0.05, max_depth=2, n_estimators=10)
Complexity: 10 | MSE: 4066.4812 | Pred. Time: 0.000107s
Benchmarking GradientBoostingRegressor(learning_rate=0.05, max_depth=2, n_estimators=25)
Complexity: 25 | MSE: 3551.1723 | Pred. Time: 0.000121s
Benchmarking GradientBoostingRegressor(learning_rate=0.05, max_depth=2, n_estimators=50)
Complexity: 50 | MSE: 3445.2171 | Pred. Time: 0.000158s
Benchmarking GradientBoostingRegressor(learning_rate=0.05, max_depth=2, n_estimators=75)
Complexity: 75 | MSE: 3433.0358 | Pred. Time: 0.000190s
Benchmarking GradientBoostingRegressor(learning_rate=0.05, max_depth=2)
Complexity: 100 | MSE: 3456.0602 | Pred. Time: 0.000223s
```
Conclusion
----------
In conclusion, we can deduce the following insights:
* a more complex (or more expressive) model requires a longer training time;
* a more complex model does not guarantee a lower prediction error.
These aspects are related to model generalization and avoiding model under-fitting or over-fitting.
**Total running time of the script:** ( 0 minutes 17.183 seconds)
[`Download Python source code: plot_model_complexity_influence.py`](https://scikit-learn.org/1.1/_downloads/ddd79923ba48c7f71fb17697baa1a22b/plot_model_complexity_influence.py)
[`Download Jupyter notebook: plot_model_complexity_influence.ipynb`](https://scikit-learn.org/1.1/_downloads/3ed102fa8211c8d36f2331f0c5e1dcef/plot_model_complexity_influence.ipynb)
scikit_learn Visualizing the stock market structure Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-stock-market-py) to download the full example code or to run this example in your browser via Binder
Visualizing the stock market structure
======================================
This example employs several unsupervised learning techniques to extract the stock market structure from variations in historical quotes.
The quantity that we use is the daily variation in quote price: quotes that are linked tend to fluctuate in relation to each other during a day.
```
# Author: Gael Varoquaux [email protected]
# License: BSD 3 clause
```
Retrieve the data from Internet
-------------------------------
The data is from 2003 to 2008. This period is reasonably calm: not too long ago (so that we get high-tech firms) and before the 2008 crash. This kind of historical data can be obtained from APIs such as quandl.com and alphavantage.co.
```
import sys
import numpy as np
import pandas as pd
symbol_dict = {
"TOT": "Total",
"XOM": "Exxon",
"CVX": "Chevron",
"COP": "ConocoPhillips",
"VLO": "Valero Energy",
"MSFT": "Microsoft",
"IBM": "IBM",
"TWX": "Time Warner",
"CMCSA": "Comcast",
"CVC": "Cablevision",
"YHOO": "Yahoo",
"DELL": "Dell",
"HPQ": "HP",
"AMZN": "Amazon",
"TM": "Toyota",
"CAJ": "Canon",
"SNE": "Sony",
"F": "Ford",
"HMC": "Honda",
"NAV": "Navistar",
"NOC": "Northrop Grumman",
"BA": "Boeing",
"KO": "Coca Cola",
"MMM": "3M",
"MCD": "McDonald's",
"PEP": "Pepsi",
"K": "Kellogg",
"UN": "Unilever",
"MAR": "Marriott",
"PG": "Procter Gamble",
"CL": "Colgate-Palmolive",
"GE": "General Electrics",
"WFC": "Wells Fargo",
"JPM": "JPMorgan Chase",
"AIG": "AIG",
"AXP": "American express",
"BAC": "Bank of America",
"GS": "Goldman Sachs",
"AAPL": "Apple",
"SAP": "SAP",
"CSCO": "Cisco",
"TXN": "Texas Instruments",
"XRX": "Xerox",
"WMT": "Wal-Mart",
"HD": "Home Depot",
"GSK": "GlaxoSmithKline",
"PFE": "Pfizer",
"SNY": "Sanofi-Aventis",
"NVS": "Novartis",
"KMB": "Kimberly-Clark",
"R": "Ryder",
"GD": "General Dynamics",
"RTN": "Raytheon",
"CVS": "CVS",
"CAT": "Caterpillar",
"DD": "DuPont de Nemours",
}
symbols, names = np.array(sorted(symbol_dict.items())).T
quotes = []
for symbol in symbols:
print("Fetching quote history for %r" % symbol, file=sys.stderr)
url = (
"https://raw.githubusercontent.com/scikit-learn/examples-data/"
"master/financial-data/{}.csv"
)
quotes.append(pd.read_csv(url.format(symbol)))
close_prices = np.vstack([q["close"] for q in quotes])
open_prices = np.vstack([q["open"] for q in quotes])
# The daily variations of the quotes are what carry the most information
variation = close_prices - open_prices
```
```
Fetching quote history for 'AAPL'
Fetching quote history for 'AIG'
Fetching quote history for 'AMZN'
Fetching quote history for 'AXP'
Fetching quote history for 'BA'
Fetching quote history for 'BAC'
Fetching quote history for 'CAJ'
Fetching quote history for 'CAT'
Fetching quote history for 'CL'
Fetching quote history for 'CMCSA'
Fetching quote history for 'COP'
Fetching quote history for 'CSCO'
Fetching quote history for 'CVC'
Fetching quote history for 'CVS'
Fetching quote history for 'CVX'
Fetching quote history for 'DD'
Fetching quote history for 'DELL'
Fetching quote history for 'F'
Fetching quote history for 'GD'
Fetching quote history for 'GE'
Fetching quote history for 'GS'
Fetching quote history for 'GSK'
Fetching quote history for 'HD'
Fetching quote history for 'HMC'
Fetching quote history for 'HPQ'
Fetching quote history for 'IBM'
Fetching quote history for 'JPM'
Fetching quote history for 'K'
Fetching quote history for 'KMB'
Fetching quote history for 'KO'
Fetching quote history for 'MAR'
Fetching quote history for 'MCD'
Fetching quote history for 'MMM'
Fetching quote history for 'MSFT'
Fetching quote history for 'NAV'
Fetching quote history for 'NOC'
Fetching quote history for 'NVS'
Fetching quote history for 'PEP'
Fetching quote history for 'PFE'
Fetching quote history for 'PG'
Fetching quote history for 'R'
Fetching quote history for 'RTN'
Fetching quote history for 'SAP'
Fetching quote history for 'SNE'
Fetching quote history for 'SNY'
Fetching quote history for 'TM'
Fetching quote history for 'TOT'
Fetching quote history for 'TWX'
Fetching quote history for 'TXN'
Fetching quote history for 'UN'
Fetching quote history for 'VLO'
Fetching quote history for 'WFC'
Fetching quote history for 'WMT'
Fetching quote history for 'XOM'
Fetching quote history for 'XRX'
Fetching quote history for 'YHOO'
```
Learning a graph structure
--------------------------
We use sparse inverse covariance estimation to find which quotes are correlated conditionally on the others. Specifically, sparse inverse covariance gives us a graph, that is, a list of connections. For each symbol, the symbols that it is connected to are those useful to explain its fluctuations.
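Roughly, and up to the exact parametrization used by scikit-learn, the graphical lasso estimates a sparse precision matrix Theta from the empirical covariance (here, correlation) matrix S by solving (a sketch):
```
\hat{\Theta} = \underset{\Theta \succ 0}{\arg\min}\ \operatorname{tr}(S\Theta) - \log\det\Theta + \alpha\,\lVert\Theta\rVert_{1}
```
where the l1 penalty is applied to the off-diagonal entries. Zeros in the estimated precision matrix correspond to pairs of stocks that are conditionally independent given all the others, which is what defines the edges of the graph.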
```
from sklearn import covariance
alphas = np.logspace(-1.5, 1, num=10)
edge_model = covariance.GraphicalLassoCV(alphas=alphas)
# standardize the time series: using correlations rather than covariance
# the former is more efficient for structure recovery
X = variation.copy().T
X /= X.std(axis=0)
edge_model.fit(X)
```
```
GraphicalLassoCV(alphas=array([ 0.03162278, 0.05994843, 0.11364637, 0.21544347, 0.40842387,
0.77426368, 1.46779927, 2.7825594 , 5.27499706, 10. ]))
```
Clustering using affinity propagation
-------------------------------------
We use clustering to group together quotes that behave similarly. Here, amongst the [various clustering techniques](../../modules/clustering#clustering) available in scikit-learn, we use [Affinity Propagation](../../modules/clustering#affinity-propagation) as it does not enforce equal-size clusters and it can automatically choose the number of clusters from the data.
Note that this gives us a different indication than the graph, as the graph reflects conditional relations between variables, while the clustering reflects marginal properties: variables clustered together can be considered as having a similar impact at the level of the full stock market.
```
from sklearn import cluster
_, labels = cluster.affinity_propagation(edge_model.covariance_, random_state=0)
n_labels = labels.max()
for i in range(n_labels + 1):
print(f"Cluster {i + 1}: {', '.join(names[labels == i])}")
```
```
Cluster 1: Apple, Amazon, Yahoo
Cluster 2: Comcast, Cablevision, Time Warner
Cluster 3: ConocoPhillips, Chevron, Total, Valero Energy, Exxon
Cluster 4: Cisco, Dell, HP, IBM, Microsoft, SAP, Texas Instruments
Cluster 5: Boeing, General Dynamics, Northrop Grumman, Raytheon
Cluster 6: AIG, American express, Bank of America, Caterpillar, CVS, DuPont de Nemours, Ford, General Electrics, Goldman Sachs, Home Depot, JPMorgan Chase, Marriott, McDonald's, 3M, Ryder, Wells Fargo, Wal-Mart
Cluster 7: GlaxoSmithKline, Novartis, Pfizer, Sanofi-Aventis, Unilever
Cluster 8: Kellogg, Coca Cola, Pepsi
Cluster 9: Colgate-Palmolive, Kimberly-Clark, Procter Gamble
Cluster 10: Canon, Honda, Navistar, Sony, Toyota, Xerox
```
Embedding in 2D space
---------------------
For visualization purposes, we need to lay out the different symbols on a 2D canvas. For this we use [Manifold learning](../../modules/manifold#manifold) techniques to retrieve a 2D embedding. We use a dense eigen\_solver to achieve reproducibility (arpack is initialized with random vectors that we don't control). In addition, we use a large number of neighbors to capture the large-scale structure.
```
# Finding a low-dimension embedding for visualization: find the best position of
# the nodes (the stocks) on a 2D plane
from sklearn import manifold
node_position_model = manifold.LocallyLinearEmbedding(
n_components=2, eigen_solver="dense", n_neighbors=6
)
embedding = node_position_model.fit_transform(X.T).T
```
Visualization
-------------
The outputs of the 3 models are combined in a 2D graph where nodes represent the stocks and edges the links between them:
* cluster labels are used to define the color of the nodes
* the sparse covariance model is used to display the strength of the edges
* the 2D embedding is used to position the nodes in the plane
This example has a fair amount of visualization-related code, as visualization is crucial here to display the graph. One of the challenges is to position the labels while minimizing overlap. For this we use a heuristic based on the direction of the nearest neighbor along each axis.
```
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
plt.figure(1, facecolor="w", figsize=(10, 8))
plt.clf()
ax = plt.axes([0.0, 0.0, 1.0, 1.0])
plt.axis("off")
# Plot the graph of partial correlations
partial_correlations = edge_model.precision_.copy()
d = 1 / np.sqrt(np.diag(partial_correlations))
partial_correlations *= d
partial_correlations *= d[:, np.newaxis]
non_zero = np.abs(np.triu(partial_correlations, k=1)) > 0.02
# Plot the nodes using the coordinates of our embedding
plt.scatter(
embedding[0], embedding[1], s=100 * d**2, c=labels, cmap=plt.cm.nipy_spectral
)
# Plot the edges
start_idx, end_idx = np.where(non_zero)
# a sequence of (*line0*, *line1*, *line2*), where::
# linen = (x0, y0), (x1, y1), ... (xm, ym)
segments = [
[embedding[:, start], embedding[:, stop]] for start, stop in zip(start_idx, end_idx)
]
values = np.abs(partial_correlations[non_zero])
lc = LineCollection(
segments, zorder=0, cmap=plt.cm.hot_r, norm=plt.Normalize(0, 0.7 * values.max())
)
lc.set_array(values)
lc.set_linewidths(15 * values)
ax.add_collection(lc)
# Add a label to each node. The challenge here is that we want to
# position the labels to avoid overlap with other labels
for index, (name, label, (x, y)) in enumerate(zip(names, labels, embedding.T)):
dx = x - embedding[0]
dx[index] = 1
dy = y - embedding[1]
dy[index] = 1
this_dx = dx[np.argmin(np.abs(dy))]
this_dy = dy[np.argmin(np.abs(dx))]
if this_dx > 0:
horizontalalignment = "left"
x = x + 0.002
else:
horizontalalignment = "right"
x = x - 0.002
if this_dy > 0:
verticalalignment = "bottom"
y = y + 0.002
else:
verticalalignment = "top"
y = y - 0.002
plt.text(
x,
y,
name,
size=10,
horizontalalignment=horizontalalignment,
verticalalignment=verticalalignment,
bbox=dict(
facecolor="w",
edgecolor=plt.cm.nipy_spectral(label / float(n_labels)),
alpha=0.6,
),
)
plt.xlim(
embedding[0].min() - 0.15 * embedding[0].ptp(),
embedding[0].max() + 0.10 * embedding[0].ptp(),
)
plt.ylim(
embedding[1].min() - 0.03 * embedding[1].ptp(),
embedding[1].max() + 0.03 * embedding[1].ptp(),
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.604 seconds)
[`Download Python source code: plot_stock_market.py`](https://scikit-learn.org/1.1/_downloads/70af03f765a8f15d7c1d63e836e68590/plot_stock_market.py)
[`Download Jupyter notebook: plot_stock_market.ipynb`](https://scikit-learn.org/1.1/_downloads/2840d928d4f93cd381486b35c2031752/plot_stock_market.ipynb)
scikit_learn Libsvm GUI Note
Click [here](#sphx-glr-download-auto-examples-applications-svm-gui-py) to download the full example code or to run this example in your browser via Binder
Libsvm GUI
==========
A simple graphical frontend for Libsvm mainly intended for didactic purposes. You can create data points by pointing and clicking and visualize the decision region induced by different kernels and parameter settings.
To create positive examples click the left mouse button; to create negative examples click the right button.
If all examples are from the same class, it uses a one-class SVM.
```
# Author: Peter Prettenhoer <[email protected]>
#
# License: BSD 3 clause
import matplotlib
matplotlib.use("TkAgg")
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
try:
from matplotlib.backends.backend_tkagg import NavigationToolbar2Tk
except ImportError:
# NavigationToolbar2TkAgg was deprecated in matplotlib 2.2
from matplotlib.backends.backend_tkagg import (
NavigationToolbar2TkAgg as NavigationToolbar2Tk,
)
from matplotlib.figure import Figure
from matplotlib.contour import ContourSet
import sys
import numpy as np
import tkinter as Tk
from sklearn import svm
from sklearn.datasets import dump_svmlight_file
y_min, y_max = -50, 50
x_min, x_max = -50, 50
class Model:
"""The Model which hold the data. It implements the
observable in the observer pattern and notifies the
registered observers on change event.
"""
def __init__(self):
self.observers = []
self.surface = None
self.data = []
self.cls = None
self.surface_type = 0
def changed(self, event):
"""Notify the observers."""
for observer in self.observers:
observer.update(event, self)
def add_observer(self, observer):
"""Register an observer."""
self.observers.append(observer)
def set_surface(self, surface):
self.surface = surface
def dump_svmlight_file(self, file):
data = np.array(self.data)
X = data[:, 0:2]
y = data[:, 2]
dump_svmlight_file(X, y, file)
class Controller:
def __init__(self, model):
self.model = model
self.kernel = Tk.IntVar()
self.surface_type = Tk.IntVar()
# Whether or not a model has been fitted
self.fitted = False
def fit(self):
print("fit the model")
train = np.array(self.model.data)
X = train[:, 0:2]
y = train[:, 2]
C = float(self.complexity.get())
gamma = float(self.gamma.get())
coef0 = float(self.coef0.get())
degree = int(self.degree.get())
kernel_map = {0: "linear", 1: "rbf", 2: "poly"}
if len(np.unique(y)) == 1:
clf = svm.OneClassSVM(
kernel=kernel_map[self.kernel.get()],
gamma=gamma,
coef0=coef0,
degree=degree,
)
clf.fit(X)
else:
clf = svm.SVC(
kernel=kernel_map[self.kernel.get()],
C=C,
gamma=gamma,
coef0=coef0,
degree=degree,
)
clf.fit(X, y)
if hasattr(clf, "score"):
print("Accuracy:", clf.score(X, y) * 100)
X1, X2, Z = self.decision_surface(clf)
self.model.clf = clf
self.model.set_surface((X1, X2, Z))
self.model.surface_type = self.surface_type.get()
self.fitted = True
self.model.changed("surface")
def decision_surface(self, cls):
delta = 1
x = np.arange(x_min, x_max + delta, delta)
y = np.arange(y_min, y_max + delta, delta)
X1, X2 = np.meshgrid(x, y)
Z = cls.decision_function(np.c_[X1.ravel(), X2.ravel()])
Z = Z.reshape(X1.shape)
return X1, X2, Z
def clear_data(self):
self.model.data = []
self.fitted = False
self.model.changed("clear")
def add_example(self, x, y, label):
self.model.data.append((x, y, label))
self.model.changed("example_added")
# update decision surface if already fitted.
self.refit()
def refit(self):
"""Refit the model if already fitted."""
if self.fitted:
self.fit()
class View:
"""Test docstring."""
def __init__(self, root, controller):
f = Figure()
ax = f.add_subplot(111)
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlim((x_min, x_max))
ax.set_ylim((y_min, y_max))
canvas = FigureCanvasTkAgg(f, master=root)
try:
canvas.draw()
except AttributeError:
# support for matplotlib (1.*)
canvas.show()
canvas.get_tk_widget().pack(side=Tk.TOP, fill=Tk.BOTH, expand=1)
canvas._tkcanvas.pack(side=Tk.TOP, fill=Tk.BOTH, expand=1)
canvas.mpl_connect("button_press_event", self.onclick)
toolbar = NavigationToolbar2Tk(canvas, root)
toolbar.update()
self.controllbar = ControllBar(root, controller)
self.f = f
self.ax = ax
self.canvas = canvas
self.controller = controller
self.contours = []
self.c_labels = None
self.plot_kernels()
def plot_kernels(self):
self.ax.text(-50, -60, "Linear: $u^T v$")
self.ax.text(-20, -60, r"RBF: $\exp (-\gamma \| u-v \|^2)$")
self.ax.text(10, -60, r"Poly: $(\gamma \, u^T v + r)^d$")
def onclick(self, event):
if event.xdata and event.ydata:
if event.button == 1:
self.controller.add_example(event.xdata, event.ydata, 1)
elif event.button == 3:
self.controller.add_example(event.xdata, event.ydata, -1)
def update_example(self, model, idx):
x, y, l = model.data[idx]
if l == 1:
color = "w"
elif l == -1:
color = "k"
self.ax.plot([x], [y], "%so" % color, scalex=0.0, scaley=0.0)
def update(self, event, model):
if event == "examples_loaded":
for i in range(len(model.data)):
self.update_example(model, i)
if event == "example_added":
self.update_example(model, -1)
if event == "clear":
self.ax.clear()
self.ax.set_xticks([])
self.ax.set_yticks([])
self.contours = []
self.c_labels = None
self.plot_kernels()
if event == "surface":
self.remove_surface()
self.plot_support_vectors(model.clf.support_vectors_)
self.plot_decision_surface(model.surface, model.surface_type)
self.canvas.draw()
def remove_surface(self):
"""Remove old decision surface."""
if len(self.contours) > 0:
for contour in self.contours:
if isinstance(contour, ContourSet):
for lineset in contour.collections:
lineset.remove()
else:
contour.remove()
self.contours = []
def plot_support_vectors(self, support_vectors):
"""Plot the support vectors by placing circles over the
corresponding data points and adds the circle collection
to the contours list."""
cs = self.ax.scatter(
support_vectors[:, 0],
support_vectors[:, 1],
s=80,
edgecolors="k",
facecolors="none",
)
self.contours.append(cs)
def plot_decision_surface(self, surface, type):
X1, X2, Z = surface
if type == 0:
levels = [-1.0, 0.0, 1.0]
linestyles = ["dashed", "solid", "dashed"]
colors = "k"
self.contours.append(
self.ax.contour(X1, X2, Z, levels, colors=colors, linestyles=linestyles)
)
elif type == 1:
self.contours.append(
self.ax.contourf(
X1, X2, Z, 10, cmap=matplotlib.cm.bone, origin="lower", alpha=0.85
)
)
self.contours.append(
self.ax.contour(X1, X2, Z, [0.0], colors="k", linestyles=["solid"])
)
else:
raise ValueError("surface type unknown")
class ControllBar:
def __init__(self, root, controller):
fm = Tk.Frame(root)
kernel_group = Tk.Frame(fm)
Tk.Radiobutton(
kernel_group,
text="Linear",
variable=controller.kernel,
value=0,
command=controller.refit,
).pack(anchor=Tk.W)
Tk.Radiobutton(
kernel_group,
text="RBF",
variable=controller.kernel,
value=1,
command=controller.refit,
).pack(anchor=Tk.W)
Tk.Radiobutton(
kernel_group,
text="Poly",
variable=controller.kernel,
value=2,
command=controller.refit,
).pack(anchor=Tk.W)
kernel_group.pack(side=Tk.LEFT)
valbox = Tk.Frame(fm)
controller.complexity = Tk.StringVar()
controller.complexity.set("1.0")
c = Tk.Frame(valbox)
Tk.Label(c, text="C:", anchor="e", width=7).pack(side=Tk.LEFT)
Tk.Entry(c, width=6, textvariable=controller.complexity).pack(side=Tk.LEFT)
c.pack()
controller.gamma = Tk.StringVar()
controller.gamma.set("0.01")
g = Tk.Frame(valbox)
Tk.Label(g, text="gamma:", anchor="e", width=7).pack(side=Tk.LEFT)
Tk.Entry(g, width=6, textvariable=controller.gamma).pack(side=Tk.LEFT)
g.pack()
controller.degree = Tk.StringVar()
controller.degree.set("3")
d = Tk.Frame(valbox)
Tk.Label(d, text="degree:", anchor="e", width=7).pack(side=Tk.LEFT)
Tk.Entry(d, width=6, textvariable=controller.degree).pack(side=Tk.LEFT)
d.pack()
controller.coef0 = Tk.StringVar()
controller.coef0.set("0")
r = Tk.Frame(valbox)
Tk.Label(r, text="coef0:", anchor="e", width=7).pack(side=Tk.LEFT)
Tk.Entry(r, width=6, textvariable=controller.coef0).pack(side=Tk.LEFT)
r.pack()
valbox.pack(side=Tk.LEFT)
cmap_group = Tk.Frame(fm)
Tk.Radiobutton(
cmap_group,
text="Hyperplanes",
variable=controller.surface_type,
value=0,
command=controller.refit,
).pack(anchor=Tk.W)
Tk.Radiobutton(
cmap_group,
text="Surface",
variable=controller.surface_type,
value=1,
command=controller.refit,
).pack(anchor=Tk.W)
cmap_group.pack(side=Tk.LEFT)
train_button = Tk.Button(fm, text="Fit", width=5, command=controller.fit)
train_button.pack()
fm.pack(side=Tk.LEFT)
Tk.Button(fm, text="Clear", width=5, command=controller.clear_data).pack(
side=Tk.LEFT
)
def get_parser():
from optparse import OptionParser
op = OptionParser()
op.add_option(
"--output",
action="store",
type="str",
dest="output",
help="Path where to dump data.",
)
return op
def main(argv):
op = get_parser()
opts, args = op.parse_args(argv[1:])
root = Tk.Tk()
model = Model()
controller = Controller(model)
root.wm_title("Scikit-learn Libsvm GUI")
view = View(root, controller)
model.add_observer(view)
Tk.mainloop()
if opts.output:
model.dump_svmlight_file(opts.output)
if __name__ == "__main__":
main(sys.argv)
```
**Total running time of the script:** ( 0 minutes 0.000 seconds)
[`Download Python source code: svm_gui.py`](https://scikit-learn.org/1.1/_downloads/d889921befb295d1231ec003639ee4ed/svm_gui.py)
[`Download Jupyter notebook: svm_gui.ipynb`](https://scikit-learn.org/1.1/_downloads/c5bc3466ea81a166c26f41af11f3ae64/svm_gui.ipynb)
scikit_learn Image denoising using kernel PCA Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-digits-denoising-py) to download the full example code or to run this example in your browser via Binder
Image denoising using kernel PCA
================================
This example shows how to use [`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") to denoise images. In short, we take advantage of the approximation function learned during `fit` to reconstruct the original image.
We will compare the results with an exact reconstruction using [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA").
We will use the USPS digits dataset to reproduce the results presented in Sect. 4 of [[1]](#id2).
```
# Authors: Guillaume Lemaitre <[email protected]>
# Licence: BSD 3 clause
```
Load the dataset via OpenML
---------------------------
The USPS digits dataset is available on OpenML. We use [`fetch_openml`](../../modules/generated/sklearn.datasets.fetch_openml#sklearn.datasets.fetch_openml "sklearn.datasets.fetch_openml") to get this dataset. In addition, we normalize the dataset such that all pixel values are in the range (0, 1).
```
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
X, y = fetch_openml(data_id=41082, as_frame=False, return_X_y=True)
X = MinMaxScaler().fit_transform(X)
```
The idea will be to learn a PCA basis (with and without a kernel) on noisy images and then use these models to reconstruct and denoise these images.
Thus, we split our dataset into a training set of 1,000 samples and a testing set of 100 samples. These images are noise-free and we will use them to evaluate the efficiency of the denoising approaches. In addition, we create a copy of the original dataset and add Gaussian noise.
The idea of this application is to show that we can denoise corrupted images by learning a PCA basis on some uncorrupted images. We will use both a PCA and a kernel-based PCA to solve this problem.
```
X_train, X_test, y_train, y_test = train_test_split(
X, y, stratify=y, random_state=0, train_size=1_000, test_size=100
)
rng = np.random.RandomState(0)
noise = rng.normal(scale=0.25, size=X_test.shape)
X_test_noisy = X_test + noise
noise = rng.normal(scale=0.25, size=X_train.shape)
X_train_noisy = X_train + noise
```
In addition, we will create a helper function to qualitatively assess the image reconstruction by plotting the test images.
```
import matplotlib.pyplot as plt
def plot_digits(X, title):
"""Small helper function to plot 100 digits."""
fig, axs = plt.subplots(nrows=10, ncols=10, figsize=(8, 8))
for img, ax in zip(X, axs.ravel()):
ax.imshow(img.reshape((16, 16)), cmap="Greys")
ax.axis("off")
fig.suptitle(title, fontsize=24)
```
In addition, we will use the mean squared error (MSE) to quantitatively assess the image reconstruction.
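Concretely, with n test images of d pixels each, the score reported below is the mean over all pixels of the squared differences (as computed by `np.mean`):
```
\mathrm{MSE} = \frac{1}{n\,d}\sum_{i=1}^{n}\lVert x_i - \hat{x}_i\rVert_2^2
```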
Let’s first have a look to see the difference between noise-free and noisy images. We will check the test set in this regard.
```
plot_digits(X_test, "Uncorrupted test images")
plot_digits(
X_test_noisy, f"Noisy test images\nMSE: {np.mean((X_test - X_test_noisy) ** 2):.2f}"
)
```
Learn the `PCA` basis
---------------------
We can now learn our PCA basis using both a linear PCA and a kernel PCA that uses a radial basis function (RBF) kernel.
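For reference, the RBF kernel used here is
```
k(x, y) = \exp\left(-\gamma\,\lVert x - y\rVert^{2}\right)
```
with `gamma=1e-3` in the kernel PCA below.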
```
from sklearn.decomposition import PCA, KernelPCA
pca = PCA(n_components=32)
kernel_pca = KernelPCA(
n_components=400, kernel="rbf", gamma=1e-3, fit_inverse_transform=True, alpha=5e-3
)
pca.fit(X_train_noisy)
_ = kernel_pca.fit(X_train_noisy)
```
Reconstruct and denoise test images
-----------------------------------
Now, we can transform and reconstruct the noisy test set. Since we used fewer components than the number of original features, we will get an approximation of the original set. Indeed, by dropping the components that explain the least variance in PCA, we hope to remove the noise. Similar thinking happens in kernel PCA; however, we expect a better reconstruction because we use a non-linear kernel to learn the PCA basis and a kernel ridge to learn the mapping function.
```
X_reconstructed_kernel_pca = kernel_pca.inverse_transform(
kernel_pca.transform(X_test_noisy)
)
X_reconstructed_pca = pca.inverse_transform(pca.transform(X_test_noisy))
```
```
plot_digits(X_test, "Uncorrupted test images")
plot_digits(
X_reconstructed_pca,
f"PCA reconstruction\nMSE: {np.mean((X_test - X_reconstructed_pca) ** 2):.2f}",
)
plot_digits(
X_reconstructed_kernel_pca,
"Kernel PCA reconstruction\n"
f"MSE: {np.mean((X_test - X_reconstructed_kernel_pca) ** 2):.2f}",
)
```
PCA has a lower MSE than kernel PCA. However, the qualitative analysis might not favor PCA over kernel PCA. We observe that kernel PCA is able to remove background noise and provide a smoother image.
However, it should be noted that the results of the denoising with kernel PCA will depend on the parameters `n_components`, `gamma`, and `alpha`.
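As a minimal sketch of such a sensitivity check (reusing `X_train_noisy`, `X_test_noisy` and `X_test` defined above, and a small hypothetical grid of `gamma` values), one could compare reconstruction MSEs directly:
```
# Hedged sketch: a small, hypothetical grid; not a tuned or exhaustive search.
import numpy as np
from sklearn.decomposition import KernelPCA
for gamma in (1e-4, 1e-3, 1e-2):
    kpca = KernelPCA(
        n_components=400,
        kernel="rbf",
        gamma=gamma,
        fit_inverse_transform=True,
        alpha=5e-3,
    ).fit(X_train_noisy)
    X_rec = kpca.inverse_transform(kpca.transform(X_test_noisy))
    print(f"gamma={gamma:g}  MSE={np.mean((X_test - X_rec) ** 2):.4f}")
```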
**Total running time of the script:** ( 0 minutes 17.833 seconds)
[`Download Python source code: plot_digits_denoising.py`](https://scikit-learn.org/1.1/_downloads/32173eb704d697c23dffbbf3fd74942a/plot_digits_denoising.py)
[`Download Jupyter notebook: plot_digits_denoising.ipynb`](https://scikit-learn.org/1.1/_downloads/f499e804840a40d11222872e84726eef/plot_digits_denoising.ipynb)
scikit_learn Compressive sensing: tomography reconstruction with L1 prior (Lasso) Note
Click [here](#sphx-glr-download-auto-examples-applications-plot-tomography-l1-reconstruction-py) to download the full example code or to run this example in your browser via Binder
Compressive sensing: tomography reconstruction with L1 prior (Lasso)
====================================================================
This example shows the reconstruction of an image from a set of parallel projections, acquired along different angles. Such a dataset is acquired in **computed tomography** (CT).
Without any prior information on the sample, the number of projections required to reconstruct the image is of the order of the linear size `l` of the image (in pixels). For simplicity we consider here a sparse image, where only pixels on the boundary of objects have a non-zero value. Such data could correspond, for example, to a cellular material. Note however that most images are sparse in a different basis, such as the Haar wavelets. Since only `l/7` projections are acquired, it is necessary to use the prior information available on the sample (its sparsity): this is an example of **compressive sensing**.
The tomography projection operation is a linear transformation. In addition to the data-fidelity term corresponding to a linear regression, we penalize the L1 norm of the image to account for its sparsity. The resulting optimization problem is called the [Lasso](../../modules/linear_model#lasso). We use the class [`Lasso`](../../modules/generated/sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso"), which uses the coordinate descent algorithm. Importantly, this implementation is more computationally efficient on a sparse matrix than the projection operator used here.
The reconstruction with L1 penalization gives a result with zero error (all pixels are successfully labeled with 0 or 1), even though noise was added to the projections; this claim is checked numerically in the short sketch after the script below. In comparison, an L2 penalization ([`Ridge`](../../modules/generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge")) produces a large number of labeling errors for the pixels, and important artifacts are observed on the reconstructed image, in contrast to the L1 penalization. Note in particular the circular artifact separating the pixels in the corners, which have contributed to fewer projections than the central disk.
```
# Author: Emmanuelle Gouillart <[email protected]>
# License: BSD 3 clause
import numpy as np
from scipy import sparse
from scipy import ndimage
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
import matplotlib.pyplot as plt
def _weights(x, dx=1, orig=0):
x = np.ravel(x)
floor_x = np.floor((x - orig) / dx).astype(np.int64)
alpha = (x - orig - floor_x * dx) / dx
return np.hstack((floor_x, floor_x + 1)), np.hstack((1 - alpha, alpha))
def _generate_center_coordinates(l_x):
X, Y = np.mgrid[:l_x, :l_x].astype(np.float64)
center = l_x / 2.0
X += 0.5 - center
Y += 0.5 - center
return X, Y
def build_projection_operator(l_x, n_dir):
"""Compute the tomography design matrix.
Parameters
----------
l_x : int
linear size of image array
n_dir : int
number of angles at which projections are acquired.
Returns
-------
p : sparse matrix of shape (n_dir l_x, l_x**2)
"""
X, Y = _generate_center_coordinates(l_x)
angles = np.linspace(0, np.pi, n_dir, endpoint=False)
data_inds, weights, camera_inds = [], [], []
data_unravel_indices = np.arange(l_x**2)
data_unravel_indices = np.hstack((data_unravel_indices, data_unravel_indices))
for i, angle in enumerate(angles):
Xrot = np.cos(angle) * X - np.sin(angle) * Y
inds, w = _weights(Xrot, dx=1, orig=X.min())
mask = np.logical_and(inds >= 0, inds < l_x)
weights += list(w[mask])
camera_inds += list(inds[mask] + i * l_x)
data_inds += list(data_unravel_indices[mask])
proj_operator = sparse.coo_matrix((weights, (camera_inds, data_inds)))
return proj_operator
def generate_synthetic_data():
"""Synthetic binary data"""
rs = np.random.RandomState(0)
n_pts = 36
x, y = np.ogrid[0:l, 0:l]
mask_outer = (x - l / 2.0) ** 2 + (y - l / 2.0) ** 2 < (l / 2.0) ** 2
mask = np.zeros((l, l))
points = l * rs.rand(2, n_pts)
mask[(points[0]).astype(int), (points[1]).astype(int)] = 1
mask = ndimage.gaussian_filter(mask, sigma=l / n_pts)
res = np.logical_and(mask > mask.mean(), mask_outer)
return np.logical_xor(res, ndimage.binary_erosion(res))
# Generate synthetic images, and projections
l = 128
proj_operator = build_projection_operator(l, l // 7)
data = generate_synthetic_data()
proj = proj_operator @ data.ravel()[:, np.newaxis]
proj += 0.15 * np.random.randn(*proj.shape)
# Reconstruction with L2 (Ridge) penalization
rgr_ridge = Ridge(alpha=0.2)
rgr_ridge.fit(proj_operator, proj.ravel())
rec_l2 = rgr_ridge.coef_.reshape(l, l)
# Reconstruction with L1 (Lasso) penalization
# the best value of alpha was determined using cross validation
# with LassoCV
rgr_lasso = Lasso(alpha=0.001)
rgr_lasso.fit(proj_operator, proj.ravel())
rec_l1 = rgr_lasso.coef_.reshape(l, l)
plt.figure(figsize=(8, 3.3))
plt.subplot(131)
plt.imshow(data, cmap=plt.cm.gray, interpolation="nearest")
plt.axis("off")
plt.title("original image")
plt.subplot(132)
plt.imshow(rec_l2, cmap=plt.cm.gray, interpolation="nearest")
plt.title("L2 penalization")
plt.axis("off")
plt.subplot(133)
plt.imshow(rec_l1, cmap=plt.cm.gray, interpolation="nearest")
plt.title("L1 penalization")
plt.axis("off")
plt.subplots_adjust(hspace=0.01, wspace=0.01, top=1, bottom=0, left=0, right=1)
plt.show()
```
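The labeling-error claim above can be verified numerically. The following short sketch (not part of the original example) binarizes the two reconstructions at an assumed threshold of 0.5 and compares them with the ground-truth image `data` from the script above.
```
# Hedged sketch: quantify labeling errors of the L1 and L2 reconstructions.
# Reuses data, rec_l1 and rec_l2 from the script above; the 0.5 threshold
# used to binarize the reconstructions is an assumption.
import numpy as np

for name, rec in [("L2 (Ridge)", rec_l2), ("L1 (Lasso)", rec_l1)]:
    labels = rec > 0.5  # binarize the reconstructed pixel values
    error_rate = np.mean(labels != data)
    print(f"{name}: {error_rate:.2%} of pixels mislabeled")
```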
**Total running time of the script:** ( 0 minutes 9.318 seconds)
[`Download Python source code: plot_tomography_l1_reconstruction.py`](https://scikit-learn.org/1.1/_downloads/c0cf10731954dbd148230cf322eb6fd7/plot_tomography_l1_reconstruction.py)
[`Download Jupyter notebook: plot_tomography_l1_reconstruction.ipynb`](https://scikit-learn.org/1.1/_downloads/3daf4e9ab9d86061e19a11d997a09779/plot_tomography_l1_reconstruction.ipynb)
scikit_learn Gaussian Mixture Model Ellipsoids Note
Click [here](#sphx-glr-download-auto-examples-mixture-plot-gmm-py) to download the full example code or to run this example in your browser via Binder
Gaussian Mixture Model Ellipsoids
=================================
Plot the confidence ellipsoids of a mixture of two Gaussians obtained with Expectation Maximisation (`GaussianMixture` class) and Variational Inference (`BayesianGaussianMixture` class models with a Dirichlet process prior).
Both models have access to five components with which to fit the data. Note that the Expectation Maximisation model will necessarily use all five components, while the Variational Inference model will effectively only use as many as are needed for a good fit. Here we can see that the Expectation Maximisation model splits some components arbitrarily, because it is trying to fit too many components, while the Dirichlet Process model adapts its number of states automatically.
This example doesn’t show it, as we’re in a low-dimensional space, but another advantage of the Dirichlet process model is that it can fit full covariance matrices effectively even when there are fewer examples per cluster than there are dimensions in the data, due to the regularization properties of the inference algorithm.
```
import itertools
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
color_iter = itertools.cycle(["navy", "c", "cornflowerblue", "gold", "darkorange"])
def plot_results(X, Y_, means, covariances, index, title):
splot = plt.subplot(2, 1, 1 + index)
for i, (mean, covar, color) in enumerate(zip(means, covariances, color_iter)):
v, w = linalg.eigh(covar)
v = 2.0 * np.sqrt(2.0) * np.sqrt(v)
u = w[0] / linalg.norm(w[0])
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
if not np.any(Y_ == i):
continue
plt.scatter(X[Y_ == i, 0], X[Y_ == i, 1], 0.8, color=color)
# Plot an ellipse to show the Gaussian component
angle = np.arctan(u[1] / u[0])
angle = 180.0 * angle / np.pi # convert to degrees
ell = mpl.patches.Ellipse(mean, v[0], v[1], 180.0 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
plt.xlim(-9.0, 5.0)
plt.ylim(-3.0, 6.0)
plt.xticks(())
plt.yticks(())
plt.title(title)
# Number of samples per component
n_samples = 500
# Generate random sample, two components
np.random.seed(0)
C = np.array([[0.0, -0.1], [1.7, 0.4]])
X = np.r_[
np.dot(np.random.randn(n_samples, 2), C),
0.7 * np.random.randn(n_samples, 2) + np.array([-6, 3]),
]
# Fit a Gaussian mixture with EM using five components
gmm = mixture.GaussianMixture(n_components=5, covariance_type="full").fit(X)
plot_results(X, gmm.predict(X), gmm.means_, gmm.covariances_, 0, "Gaussian Mixture")
# Fit a Dirichlet process Gaussian mixture using five components
dpgmm = mixture.BayesianGaussianMixture(n_components=5, covariance_type="full").fit(X)
plot_results(
X,
dpgmm.predict(X),
dpgmm.means_,
dpgmm.covariances_,
1,
"Bayesian Gaussian Mixture with a Dirichlet process prior",
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.153 seconds)
[`Download Python source code: plot_gmm.py`](https://scikit-learn.org/1.1/_downloads/ef0baa719e32d13119d996e7e1f4a7dd/plot_gmm.py)
[`Download Jupyter notebook: plot_gmm.ipynb`](https://scikit-learn.org/1.1/_downloads/df17584e78fbab6ddca8edf79def6190/plot_gmm.ipynb)
scikit_learn Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture Note
Click [here](#sphx-glr-download-auto-examples-mixture-plot-concentration-prior-py) to download the full example code or to run this example in your browser via Binder
Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture
========================================================================
This example plots the ellipsoids obtained from a toy dataset (mixture of three Gaussians) fitted by the `BayesianGaussianMixture` class models with a Dirichlet distribution prior (`weight_concentration_prior_type='dirichlet_distribution'`) and a Dirichlet process prior (`weight_concentration_prior_type='dirichlet_process'`). On each figure, we plot the results for three different values of the weight concentration prior.
The `BayesianGaussianMixture` class can adapt its number of mixture components automatically. The parameter `weight_concentration_prior` has a direct link with the resulting number of components with non-zero weights. Specifying a low value for the concentration prior will make the model put most of the weight on a few components and set the remaining components' weights very close to zero. High values of the concentration prior will allow a larger number of components to be active in the mixture.
The Dirichlet process prior allows defining an infinite number of components and automatically selects the number of active components: a component is activated only if it is necessary.
By contrast, the classical finite mixture model with a Dirichlet distribution prior favors more uniformly weighted components and therefore tends to divide natural clusters into unnecessary sub-components.
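Before looking at the full example, this effect can be sketched in a few lines: fit the model with a very low and a very high concentration prior and count the components whose weights are not negligible. The toy data and the 1% weight threshold below are illustrative assumptions, not part of the original example.
```
# Hedged sketch: count effectively active components for two values of
# weight_concentration_prior. The toy data and threshold are assumptions.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X_demo = np.vstack(
    [rng.normal(loc, 0.3, size=(200, 2)) for loc in ([-2, 0], [0, 0], [2, 0])]
)

for prior in (0.001, 1000):
    bgmm = BayesianGaussianMixture(
        n_components=6,
        weight_concentration_prior_type="dirichlet_process",
        weight_concentration_prior=prior,
        max_iter=500,
        random_state=0,
    ).fit(X_demo)
    n_active = np.sum(bgmm.weights_ > 0.01)
    print(f"weight_concentration_prior={prior}: {n_active} active components")
```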
```
# Author: Thierry Guillemot <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from sklearn.mixture import BayesianGaussianMixture
def plot_ellipses(ax, weights, means, covars):
for n in range(means.shape[0]):
eig_vals, eig_vecs = np.linalg.eigh(covars[n])
unit_eig_vec = eig_vecs[0] / np.linalg.norm(eig_vecs[0])
angle = np.arctan2(unit_eig_vec[1], unit_eig_vec[0])
# Ellipse needs degrees
angle = 180 * angle / np.pi
# eigenvector normalization
eig_vals = 2 * np.sqrt(2) * np.sqrt(eig_vals)
ell = mpl.patches.Ellipse(
means[n], eig_vals[0], eig_vals[1], 180 + angle, edgecolor="black"
)
ell.set_clip_box(ax.bbox)
ell.set_alpha(weights[n])
ell.set_facecolor("#56B4E9")
ax.add_artist(ell)
def plot_results(ax1, ax2, estimator, X, y, title, plot_title=False):
ax1.set_title(title)
ax1.scatter(X[:, 0], X[:, 1], s=5, marker="o", color=colors[y], alpha=0.8)
ax1.set_xlim(-2.0, 2.0)
ax1.set_ylim(-3.0, 3.0)
ax1.set_xticks(())
ax1.set_yticks(())
plot_ellipses(ax1, estimator.weights_, estimator.means_, estimator.covariances_)
ax2.get_xaxis().set_tick_params(direction="out")
ax2.yaxis.grid(True, alpha=0.7)
for k, w in enumerate(estimator.weights_):
ax2.bar(
k,
w,
width=0.9,
color="#56B4E9",
zorder=3,
align="center",
edgecolor="black",
)
ax2.text(k, w + 0.007, "%.1f%%" % (w * 100.0), horizontalalignment="center")
ax2.set_xlim(-0.6, 2 * n_components - 0.4)
ax2.set_ylim(0.0, 1.1)
ax2.tick_params(axis="y", which="both", left=False, right=False, labelleft=False)
ax2.tick_params(axis="x", which="both", top=False)
if plot_title:
ax1.set_ylabel("Estimated Mixtures")
ax2.set_ylabel("Weight of each component")
# Parameters of the dataset
random_state, n_components, n_features = 2, 3, 2
colors = np.array(["#0072B2", "#F0E442", "#D55E00"])
covars = np.array(
[[[0.7, 0.0], [0.0, 0.1]], [[0.5, 0.0], [0.0, 0.1]], [[0.5, 0.0], [0.0, 0.1]]]
)
samples = np.array([200, 500, 200])
means = np.array([[0.0, -0.70], [0.0, 0.0], [0.0, 0.70]])
# mean_precision_prior= 0.8 to minimize the influence of the prior
estimators = [
(
"Finite mixture with a Dirichlet distribution\nprior and " r"$\gamma_0=$",
BayesianGaussianMixture(
weight_concentration_prior_type="dirichlet_distribution",
n_components=2 * n_components,
reg_covar=0,
init_params="random",
max_iter=1500,
mean_precision_prior=0.8,
random_state=random_state,
),
[0.001, 1, 1000],
),
(
"Infinite mixture with a Dirichlet process\n prior and" r"$\gamma_0=$",
BayesianGaussianMixture(
weight_concentration_prior_type="dirichlet_process",
n_components=2 * n_components,
reg_covar=0,
init_params="random",
max_iter=1500,
mean_precision_prior=0.8,
random_state=random_state,
),
[1, 1000, 100000],
),
]
# Generate data
rng = np.random.RandomState(random_state)
X = np.vstack(
[
rng.multivariate_normal(means[j], covars[j], samples[j])
for j in range(n_components)
]
)
y = np.concatenate([np.full(samples[j], j, dtype=int) for j in range(n_components)])
# Plot results in two different figures
for title, estimator, concentrations_prior in estimators:
plt.figure(figsize=(4.7 * 3, 8))
plt.subplots_adjust(
bottom=0.04, top=0.90, hspace=0.05, wspace=0.05, left=0.03, right=0.99
)
gs = gridspec.GridSpec(3, len(concentrations_prior))
for k, concentration in enumerate(concentrations_prior):
estimator.weight_concentration_prior = concentration
estimator.fit(X)
plot_results(
plt.subplot(gs[0:2, k]),
plt.subplot(gs[2, k]),
estimator,
X,
y,
r"%s$%.1e$" % (title, concentration),
plot_title=k == 0,
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.623 seconds)
[`Download Python source code: plot_concentration_prior.py`](https://scikit-learn.org/1.1/_downloads/ac9ba06dbb8ef65d14cff69663ec06f2/plot_concentration_prior.py)
[`Download Jupyter notebook: plot_concentration_prior.ipynb`](https://scikit-learn.org/1.1/_downloads/28df2114703b91f224d70205e9b75a7d/plot_concentration_prior.ipynb)
scikit_learn Gaussian Mixture Model Sine Curve Note
Click [here](#sphx-glr-download-auto-examples-mixture-plot-gmm-sin-py) to download the full example code or to run this example in your browser via Binder
Gaussian Mixture Model Sine Curve
=================================
This example demonstrates the behavior of Gaussian mixture models fit on data that was not sampled from a mixture of Gaussian random variables. The dataset is formed by 100 points loosely spaced following a noisy sine curve. There is therefore no ground truth value for the number of Gaussian components.
The first model is a classical Gaussian Mixture Model with 10 components fit with the Expectation-Maximization algorithm.
The second model is a Bayesian Gaussian Mixture Model with a Dirichlet process prior fit with variational inference. The low value of the concentration prior makes the model favor a lower number of active components. This model “decides” to focus its modeling power on the big picture of the structure of the dataset: groups of points with alternating directions modeled by non-diagonal covariance matrices. Those alternating directions roughly capture the alternating nature of the original sine signal.
The third model is also a Bayesian Gaussian mixture model with a Dirichlet process prior but this time the value of the concentration prior is higher giving the model more liberty to model the fine-grained structure of the data. The result is a mixture with a larger number of active components that is similar to the first model where we arbitrarily decided to fix the number of components to 10.
Which model is the best is a matter of subjective judgment: do we want to favor models that only capture the big picture to summarize and explain most of the structure of the data while ignoring the details or do we prefer models that closely follow the high density regions of the signal?
The last two panels show how we can sample from the last two models. The resulting sample distributions do not look exactly like the original data distribution. The difference primarily stems from the approximation error we made by using a model that assumes that the data was generated by a finite number of Gaussian components instead of a continuous noisy sine curve.
```
import itertools
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
color_iter = itertools.cycle(["navy", "c", "cornflowerblue", "gold", "darkorange"])
def plot_results(X, Y, means, covariances, index, title):
splot = plt.subplot(5, 1, 1 + index)
for i, (mean, covar, color) in enumerate(zip(means, covariances, color_iter)):
v, w = linalg.eigh(covar)
v = 2.0 * np.sqrt(2.0) * np.sqrt(v)
u = w[0] / linalg.norm(w[0])
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
if not np.any(Y == i):
continue
plt.scatter(X[Y == i, 0], X[Y == i, 1], 0.8, color=color)
# Plot an ellipse to show the Gaussian component
angle = np.arctan(u[1] / u[0])
angle = 180.0 * angle / np.pi # convert to degrees
ell = mpl.patches.Ellipse(mean, v[0], v[1], 180.0 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
plt.xlim(-6.0, 4.0 * np.pi - 6.0)
plt.ylim(-5.0, 5.0)
plt.title(title)
plt.xticks(())
plt.yticks(())
def plot_samples(X, Y, n_components, index, title):
plt.subplot(5, 1, 4 + index)
for i, color in zip(range(n_components), color_iter):
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
if not np.any(Y == i):
continue
plt.scatter(X[Y == i, 0], X[Y == i, 1], 0.8, color=color)
plt.xlim(-6.0, 4.0 * np.pi - 6.0)
plt.ylim(-5.0, 5.0)
plt.title(title)
plt.xticks(())
plt.yticks(())
# Parameters
n_samples = 100
# Generate random sample following a sine curve
np.random.seed(0)
X = np.zeros((n_samples, 2))
step = 4.0 * np.pi / n_samples
for i in range(X.shape[0]):
x = i * step - 6.0
X[i, 0] = x + np.random.normal(0, 0.1)
X[i, 1] = 3.0 * (np.sin(x) + np.random.normal(0, 0.2))
plt.figure(figsize=(10, 10))
plt.subplots_adjust(
bottom=0.04, top=0.95, hspace=0.2, wspace=0.05, left=0.03, right=0.97
)
# Fit a Gaussian mixture with EM using ten components
gmm = mixture.GaussianMixture(
n_components=10, covariance_type="full", max_iter=100
).fit(X)
plot_results(
X, gmm.predict(X), gmm.means_, gmm.covariances_, 0, "Expectation-maximization"
)
dpgmm = mixture.BayesianGaussianMixture(
n_components=10,
covariance_type="full",
weight_concentration_prior=1e-2,
weight_concentration_prior_type="dirichlet_process",
mean_precision_prior=1e-2,
covariance_prior=1e0 * np.eye(2),
init_params="random",
max_iter=100,
random_state=2,
).fit(X)
plot_results(
X,
dpgmm.predict(X),
dpgmm.means_,
dpgmm.covariances_,
1,
"Bayesian Gaussian mixture models with a Dirichlet process prior "
r"for $\gamma_0=0.01$.",
)
X_s, y_s = dpgmm.sample(n_samples=2000)
plot_samples(
X_s,
y_s,
dpgmm.n_components,
0,
"Gaussian mixture with a Dirichlet process prior "
r"for $\gamma_0=0.01$ sampled with $2000$ samples.",
)
dpgmm = mixture.BayesianGaussianMixture(
n_components=10,
covariance_type="full",
weight_concentration_prior=1e2,
weight_concentration_prior_type="dirichlet_process",
mean_precision_prior=1e-2,
covariance_prior=1e0 * np.eye(2),
init_params="kmeans",
max_iter=100,
random_state=2,
).fit(X)
plot_results(
X,
dpgmm.predict(X),
dpgmm.means_,
dpgmm.covariances_,
2,
"Bayesian Gaussian mixture models with a Dirichlet process prior "
r"for $\gamma_0=100$",
)
X_s, y_s = dpgmm.sample(n_samples=2000)
plot_samples(
X_s,
y_s,
dpgmm.n_components,
1,
"Gaussian mixture with a Dirichlet process prior "
r"for $\gamma_0=100$ sampled with $2000$ samples.",
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.438 seconds)
[`Download Python source code: plot_gmm_sin.py`](https://scikit-learn.org/1.1/_downloads/2d0762e90c243d288bfdebfc1ddb009e/plot_gmm_sin.py)
[`Download Jupyter notebook: plot_gmm_sin.ipynb`](https://scikit-learn.org/1.1/_downloads/82b113d6a139ff1d978ca9188dc8f15c/plot_gmm_sin.ipynb)
scikit_learn Gaussian Mixture Model Selection Note
Click [here](#sphx-glr-download-auto-examples-mixture-plot-gmm-selection-py) to download the full example code or to run this example in your browser via Binder
Gaussian Mixture Model Selection
================================
This example shows that model selection can be performed with Gaussian Mixture Models using [information-theoretic criteria (BIC)](../../modules/linear_model#aic-bic). Model selection concerns both the covariance type and the number of components in the model. In that case, AIC also provides the right result (not shown to save time), but BIC is better suited if the problem is to identify the right model. Unlike Bayesian procedures, such inferences are prior-free.
In that case, the model with 2 components and full covariance (which corresponds to the true generative model) is selected.
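Since the AIC variant is mentioned above but not shown, here is a small standalone sketch (an illustration on toy data, not the example code below) that selects the number of components by AIC:
```
# Hedged sketch: select the number of components by AIC on toy data.
# The data generation below is an assumption for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X_toy = np.vstack(
    [rng.normal(0, 1, size=(300, 2)), rng.normal(5, 1, size=(300, 2))]
)

aics = {
    n: GaussianMixture(n_components=n, covariance_type="full", random_state=0)
    .fit(X_toy)
    .aic(X_toy)
    for n in range(1, 7)
}
print("AIC selects", min(aics, key=aics.get), "components")
```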
```
import numpy as np
import itertools
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
# Number of samples per component
n_samples = 500
# Generate random sample, two components
np.random.seed(0)
C = np.array([[0.0, -0.1], [1.7, 0.4]])
X = np.r_[
np.dot(np.random.randn(n_samples, 2), C),
0.7 * np.random.randn(n_samples, 2) + np.array([-6, 3]),
]
lowest_bic = np.infty
bic = []
n_components_range = range(1, 7)
cv_types = ["spherical", "tied", "diag", "full"]
for cv_type in cv_types:
for n_components in n_components_range:
# Fit a Gaussian mixture with EM
gmm = mixture.GaussianMixture(
n_components=n_components, covariance_type=cv_type
)
gmm.fit(X)
bic.append(gmm.bic(X))
if bic[-1] < lowest_bic:
lowest_bic = bic[-1]
best_gmm = gmm
bic = np.array(bic)
color_iter = itertools.cycle(["navy", "turquoise", "cornflowerblue", "darkorange"])
clf = best_gmm
bars = []
# Plot the BIC scores
plt.figure(figsize=(8, 6))
spl = plt.subplot(2, 1, 1)
for i, (cv_type, color) in enumerate(zip(cv_types, color_iter)):
xpos = np.array(n_components_range) + 0.2 * (i - 2)
bars.append(
plt.bar(
xpos,
bic[i * len(n_components_range) : (i + 1) * len(n_components_range)],
width=0.2,
color=color,
)
)
plt.xticks(n_components_range)
plt.ylim([bic.min() * 1.01 - 0.01 * bic.max(), bic.max()])
plt.title("BIC score per model")
xpos = (
np.mod(bic.argmin(), len(n_components_range))
+ 0.65
+ 0.2 * np.floor(bic.argmin() / len(n_components_range))
)
plt.text(xpos, bic.min() * 0.97 + 0.03 * bic.max(), "*", fontsize=14)
spl.set_xlabel("Number of components")
spl.legend([b[0] for b in bars], cv_types)
# Plot the winner
splot = plt.subplot(2, 1, 2)
Y_ = clf.predict(X)
for i, (mean, cov, color) in enumerate(zip(clf.means_, clf.covariances_, color_iter)):
v, w = linalg.eigh(cov)
if not np.any(Y_ == i):
continue
plt.scatter(X[Y_ == i, 0], X[Y_ == i, 1], 0.8, color=color)
# Plot an ellipse to show the Gaussian component
angle = np.arctan2(w[0][1], w[0][0])
angle = 180.0 * angle / np.pi # convert to degrees
v = 2.0 * np.sqrt(2.0) * np.sqrt(v)
ell = mpl.patches.Ellipse(mean, v[0], v[1], 180.0 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
plt.xticks(())
plt.yticks(())
plt.title(
f"Selected GMM: {best_gmm.covariance_type} model, "
f"{best_gmm.n_components} components"
)
plt.subplots_adjust(hspace=0.35, bottom=0.02)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.223 seconds)
[`Download Python source code: plot_gmm_selection.py`](https://scikit-learn.org/1.1/_downloads/61adcadacdb5cfe445011b0b0d065d44/plot_gmm_selection.py)
[`Download Jupyter notebook: plot_gmm_selection.ipynb`](https://scikit-learn.org/1.1/_downloads/ed5e2dba642062278ab833dd7617cfe0/plot_gmm_selection.ipynb)
scikit_learn GMM covariances Note
Click [here](#sphx-glr-download-auto-examples-mixture-plot-gmm-covariances-py) to download the full example code or to run this example in your browser via Binder
GMM covariances
===============
Demonstration of several covariance types for Gaussian mixture models.
See [Gaussian mixture models](../../modules/mixture#gmm) for more information on the estimator.
Although GMMs are often used for clustering, we can compare the obtained clusters with the actual classes from the dataset. We initialize the means of the Gaussians with the means of the classes from the training set to make this comparison valid.
We plot predicted labels on both training and held out test data using a variety of GMM covariance types on the iris dataset. We compare GMMs with spherical, diagonal, full, and tied covariance matrices in increasing order of performance. Although one would expect full covariance to perform best in general, it is prone to overfitting on small datasets and does not generalize well to held out test data.
On the plots, train data is shown as dots, while test data is shown as crosses. The iris dataset is four-dimensional. Only the first two dimensions are shown here, and thus some points are separated in other dimensions.
```
# Author: Ron Weiss <[email protected]>, Gael Varoquaux
# Modified by Thierry Guillemot <[email protected]>
# License: BSD 3 clause
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import StratifiedKFold
colors = ["navy", "turquoise", "darkorange"]
def make_ellipses(gmm, ax):
for n, color in enumerate(colors):
if gmm.covariance_type == "full":
covariances = gmm.covariances_[n][:2, :2]
elif gmm.covariance_type == "tied":
covariances = gmm.covariances_[:2, :2]
elif gmm.covariance_type == "diag":
covariances = np.diag(gmm.covariances_[n][:2])
elif gmm.covariance_type == "spherical":
covariances = np.eye(gmm.means_.shape[1]) * gmm.covariances_[n]
v, w = np.linalg.eigh(covariances)
u = w[0] / np.linalg.norm(w[0])
angle = np.arctan2(u[1], u[0])
angle = 180 * angle / np.pi # convert to degrees
v = 2.0 * np.sqrt(2.0) * np.sqrt(v)
ell = mpl.patches.Ellipse(
gmm.means_[n, :2], v[0], v[1], 180 + angle, color=color
)
ell.set_clip_box(ax.bbox)
ell.set_alpha(0.5)
ax.add_artist(ell)
ax.set_aspect("equal", "datalim")
iris = datasets.load_iris()
# Break up the dataset into non-overlapping training (75%) and testing
# (25%) sets.
skf = StratifiedKFold(n_splits=4)
# Only take the first fold.
train_index, test_index = next(iter(skf.split(iris.data, iris.target)))
X_train = iris.data[train_index]
y_train = iris.target[train_index]
X_test = iris.data[test_index]
y_test = iris.target[test_index]
n_classes = len(np.unique(y_train))
# Try GMMs using different types of covariances.
estimators = {
cov_type: GaussianMixture(
n_components=n_classes, covariance_type=cov_type, max_iter=20, random_state=0
)
for cov_type in ["spherical", "diag", "tied", "full"]
}
n_estimators = len(estimators)
plt.figure(figsize=(3 * n_estimators // 2, 6))
plt.subplots_adjust(
bottom=0.01, top=0.95, hspace=0.15, wspace=0.05, left=0.01, right=0.99
)
for index, (name, estimator) in enumerate(estimators.items()):
# Since we have class labels for the training data, we can
# initialize the GMM parameters in a supervised manner.
estimator.means_init = np.array(
[X_train[y_train == i].mean(axis=0) for i in range(n_classes)]
)
# Train the other parameters using the EM algorithm.
estimator.fit(X_train)
h = plt.subplot(2, n_estimators // 2, index + 1)
make_ellipses(estimator, h)
for n, color in enumerate(colors):
data = iris.data[iris.target == n]
plt.scatter(
data[:, 0], data[:, 1], s=0.8, color=color, label=iris.target_names[n]
)
# Plot the test data with crosses
for n, color in enumerate(colors):
data = X_test[y_test == n]
plt.scatter(data[:, 0], data[:, 1], marker="x", color=color)
y_train_pred = estimator.predict(X_train)
train_accuracy = np.mean(y_train_pred.ravel() == y_train.ravel()) * 100
plt.text(0.05, 0.9, "Train accuracy: %.1f" % train_accuracy, transform=h.transAxes)
y_test_pred = estimator.predict(X_test)
test_accuracy = np.mean(y_test_pred.ravel() == y_test.ravel()) * 100
plt.text(0.05, 0.8, "Test accuracy: %.1f" % test_accuracy, transform=h.transAxes)
plt.xticks(())
plt.yticks(())
plt.title(name)
plt.legend(scatterpoints=1, loc="lower right", prop=dict(size=12))
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.183 seconds)
[`Download Python source code: plot_gmm_covariances.py`](https://scikit-learn.org/1.1/_downloads/22c1b876aa7bf8b912208cbfed5299c7/plot_gmm_covariances.py)
[`Download Jupyter notebook: plot_gmm_covariances.ipynb`](https://scikit-learn.org/1.1/_downloads/471829dadf19abf3dd2b87b08c9ffc92/plot_gmm_covariances.ipynb)
scikit_learn Density Estimation for a Gaussian mixture Note
Click [here](#sphx-glr-download-auto-examples-mixture-plot-gmm-pdf-py) to download the full example code or to run this example in your browser via Binder
Density Estimation for a Gaussian mixture
=========================================
Plot the density estimation of a mixture of two Gaussians. Data is generated from two Gaussians with different centers and covariance matrices.
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from sklearn import mixture
n_samples = 300
# generate random sample, two components
np.random.seed(0)
# generate spherical data centered on (20, 20)
shifted_gaussian = np.random.randn(n_samples, 2) + np.array([20, 20])
# generate zero centered stretched Gaussian data
C = np.array([[0.0, -0.7], [3.5, 0.7]])
stretched_gaussian = np.dot(np.random.randn(n_samples, 2), C)
# concatenate the two datasets into the final training set
X_train = np.vstack([shifted_gaussian, stretched_gaussian])
# fit a Gaussian Mixture Model with two components
clf = mixture.GaussianMixture(n_components=2, covariance_type="full")
clf.fit(X_train)
# display predicted scores by the model as a contour plot
x = np.linspace(-20.0, 30.0)
y = np.linspace(-20.0, 40.0)
X, Y = np.meshgrid(x, y)
XX = np.array([X.ravel(), Y.ravel()]).T
Z = -clf.score_samples(XX)
Z = Z.reshape(X.shape)
CS = plt.contour(
X, Y, Z, norm=LogNorm(vmin=1.0, vmax=1000.0), levels=np.logspace(0, 3, 10)
)
CB = plt.colorbar(CS, shrink=0.8, extend="both")
plt.scatter(X_train[:, 0], X_train[:, 1], 0.8)
plt.title("Negative log-likelihood predicted by a GMM")
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.105 seconds)
[`Download Python source code: plot_gmm_pdf.py`](https://scikit-learn.org/1.1/_downloads/6bcd24982ec4dfd9c0179937957b9bbd/plot_gmm_pdf.py)
[`Download Jupyter notebook: plot_gmm_pdf.ipynb`](https://scikit-learn.org/1.1/_downloads/35ec767dbdc0119a20d1fd2022298627/plot_gmm_pdf.ipynb)
scikit_learn GMM Initialization Methods Note
Click [here](#sphx-glr-download-auto-examples-mixture-plot-gmm-init-py) to download the full example code or to run this example in your browser via Binder
GMM Initialization Methods
==========================
Examples of the different methods of initialization in Gaussian Mixture Models
See [Gaussian mixture models](../../modules/mixture#gmm) for more information on the estimator.
Here we generate some sample data with four easy to identify clusters. The purpose of this example is to show the four different methods for the initialization parameter *init\_param*.
The four initializations are *kmeans* (default), *random*, *random\_from\_data* and *k-means++*.
Orange diamonds represent the initialization centers for the GMM generated by the *init\_param* method. The rest of the data is represented as crosses, and the colouring represents the eventual associated classification after the GMM has finished.
The numbers in the top right of each subplot represent the number of iterations taken for the GaussianMixture to converge and the relative time taken for the initialization part of the algorithm to run. Methods with shorter initialization times tend to require a greater number of iterations to converge.
The initialization time is the ratio of the time taken for that method versus the time taken for the default *kmeans* method. As you can see, all three alternative methods take less time to initialize than *kmeans*.
In this example, when initialized with *random\_from\_data* or *random*, the model takes more iterations to converge. Here *k-means++* does a good job of combining a low initialization time with a low number of GaussianMixture iterations to converge.
```
# Author: Gordon Walsh <[email protected]>
# Data generation code from Jake Vanderplas <[email protected]>
import matplotlib.pyplot as plt
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.utils.extmath import row_norms
from sklearn.datasets._samples_generator import make_blobs
from timeit import default_timer as timer
print(__doc__)
# Generate some data
X, y_true = make_blobs(n_samples=4000, centers=4, cluster_std=0.60, random_state=0)
X = X[:, ::-1]
n_samples = 4000
n_components = 4
x_squared_norms = row_norms(X, squared=True)
def get_initial_means(X, init_params, r):
# Run a GaussianMixture with max_iter=0 to output the initalization means
gmm = GaussianMixture(
n_components=4, init_params=init_params, tol=1e-9, max_iter=0, random_state=r
).fit(X)
return gmm.means_
methods = ["kmeans", "random_from_data", "k-means++", "random"]
colors = ["navy", "turquoise", "cornflowerblue", "darkorange"]
times_init = {}
relative_times = {}
plt.figure(figsize=(4 * len(methods) // 2, 6))
plt.subplots_adjust(
bottom=0.1, top=0.9, hspace=0.15, wspace=0.05, left=0.05, right=0.95
)
for n, method in enumerate(methods):
r = np.random.RandomState(seed=1234)
plt.subplot(2, len(methods) // 2, n + 1)
start = timer()
ini = get_initial_means(X, method, r)
end = timer()
init_time = end - start
gmm = GaussianMixture(
n_components=4, means_init=ini, tol=1e-9, max_iter=2000, random_state=r
).fit(X)
times_init[method] = init_time
for i, color in enumerate(colors):
data = X[gmm.predict(X) == i]
plt.scatter(data[:, 0], data[:, 1], color=color, marker="x")
plt.scatter(
ini[:, 0], ini[:, 1], s=75, marker="D", c="orange", lw=1.5, edgecolors="black"
)
relative_times[method] = times_init[method] / times_init[methods[0]]
plt.xticks(())
plt.yticks(())
plt.title(method, loc="left", fontsize=12)
plt.title(
"Iter %i | Init Time %.2fx" % (gmm.n_iter_, relative_times[method]),
loc="right",
fontsize=10,
)
plt.suptitle("GMM iterations and relative time taken to initialize")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.565 seconds)
[`Download Python source code: plot_gmm_init.py`](https://scikit-learn.org/1.1/_downloads/d192b2d76067841f677150ddd9ea42bf/plot_gmm_init.py)
[`Download Jupyter notebook: plot_gmm_init.ipynb`](https://scikit-learn.org/1.1/_downloads/4969738e4f46f5753249be090ab7b3f5/plot_gmm_init.ipynb)
scikit_learn Visualization of MLP weights on MNIST Note
Click [here](#sphx-glr-download-auto-examples-neural-networks-plot-mnist-filters-py) to download the full example code or to run this example in your browser via Binder
Visualization of MLP weights on MNIST
=====================================
Sometimes looking at the learned coefficients of a neural network can provide insight into the learning behavior. For example if weights look unstructured, maybe some were not used at all, or if very large coefficients exist, maybe regularization was too low or the learning rate too high.
This example shows how to plot some of the first layer weights in an MLPClassifier trained on the MNIST dataset.
The input data consists of 28x28 pixel handwritten digits, leading to 784 features in the dataset. Therefore the first layer weight matrix has the shape (784, hidden\_layer\_sizes[0]). We can therefore visualize a single column of the weight matrix as a 28x28 pixel image.
To make the example run faster, we use very few hidden units and train only for a very short time. Training longer would result in weights with a much smoother spatial appearance. The example will throw a warning because it doesn’t converge; in this case, that is what we want, because of resource usage constraints on the Continuous Integration infrastructure that is used to build this documentation on a regular basis.
```
Iteration 1, loss = 0.44139186
Iteration 2, loss = 0.19174891
Iteration 3, loss = 0.13983521
Iteration 4, loss = 0.11378556
Iteration 5, loss = 0.09443967
Iteration 6, loss = 0.07846529
Iteration 7, loss = 0.06506307
Iteration 8, loss = 0.05534985
Training set score: 0.986429
Test set score: 0.953061
```
```
import warnings
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
# Load data from https://www.openml.org/d/554
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0
# Split data into train partition and test partition
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.7)
mlp = MLPClassifier(
hidden_layer_sizes=(40,),
max_iter=8,
alpha=1e-4,
solver="sgd",
verbose=10,
random_state=1,
learning_rate_init=0.2,
)
# this example won't converge because of resource usage constraints on
# our Continuous Integration infrastructure, so we catch the warning and
# ignore it here
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=ConvergenceWarning, module="sklearn")
mlp.fit(X_train, y_train)
print("Training set score: %f" % mlp.score(X_train, y_train))
print("Test set score: %f" % mlp.score(X_test, y_test))
fig, axes = plt.subplots(4, 4)
# use global min / max to ensure all weights are shown on the same scale
vmin, vmax = mlp.coefs_[0].min(), mlp.coefs_[0].max()
for coef, ax in zip(mlp.coefs_[0].T, axes.ravel()):
ax.matshow(coef.reshape(28, 28), cmap=plt.cm.gray, vmin=0.5 * vmin, vmax=0.5 * vmax)
ax.set_xticks(())
ax.set_yticks(())
plt.show()
```
**Total running time of the script:** ( 0 minutes 18.424 seconds)
[`Download Python source code: plot_mnist_filters.py`](https://scikit-learn.org/1.1/_downloads/7534058b2748ca58f7594203b7723a0e/plot_mnist_filters.py)
[`Download Jupyter notebook: plot_mnist_filters.ipynb`](https://scikit-learn.org/1.1/_downloads/6522aa1dd16bb328d88cb09cbc08eded/plot_mnist_filters.ipynb)
scikit_learn Varying regularization in Multi-layer Perceptron Note
Click [here](#sphx-glr-download-auto-examples-neural-networks-plot-mlp-alpha-py) to download the full example code or to run this example in your browser via Binder
Varying regularization in Multi-layer Perceptron
================================================
A comparison of different values for regularization parameter ‘alpha’ on synthetic datasets. The plot shows that different alphas yield different decision functions.
Alpha is a parameter for the regularization term, aka penalty term, that combats overfitting by constraining the size of the weights. Increasing alpha may fix high variance (a sign of overfitting) by encouraging smaller weights, resulting in a decision boundary with less curvature. Similarly, decreasing alpha may fix high bias (a sign of underfitting) by allowing larger weights, potentially resulting in a more complicated decision boundary.

```
# Author: Issam H. Laradji
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
h = 0.02 # step size in the mesh
alphas = np.logspace(-1, 1, 5)
classifiers = []
names = []
for alpha in alphas:
classifiers.append(
make_pipeline(
StandardScaler(),
MLPClassifier(
solver="lbfgs",
alpha=alpha,
random_state=1,
max_iter=2000,
early_stopping=True,
hidden_layer_sizes=[10, 10],
),
)
)
names.append(f"alpha {alpha:.2f}")
X, y = make_classification(
n_features=2, n_redundant=0, n_informative=2, random_state=0, n_clusters_per_class=1
)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [
make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable,
]
figure = plt.figure(figsize=(17, 9))
i = 1
# iterate over datasets
for X, y in datasets:
# split into training and test part
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=42
)
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(["#FF0000", "#0000FF"])
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max] x [y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.column_stack([xx.ravel(), yy.ravel()]))
else:
Z = clf.predict_proba(np.column_stack([xx.ravel(), yy.ravel()]))[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=0.8)
# Plot also the training points
ax.scatter(
X_train[:, 0],
X_train[:, 1],
c=y_train,
cmap=cm_bright,
edgecolors="black",
s=25,
)
# and testing points
ax.scatter(
X_test[:, 0],
X_test[:, 1],
c=y_test,
cmap=cm_bright,
alpha=0.6,
edgecolors="black",
s=25,
)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name)
ax.text(
xx.max() - 0.3,
yy.min() + 0.3,
f"{score:.3f}".lstrip("0"),
size=15,
horizontalalignment="right",
)
i += 1
figure.subplots_adjust(left=0.02, right=0.98)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.713 seconds)
[`Download Python source code: plot_mlp_alpha.py`](https://scikit-learn.org/1.1/_downloads/8af613cc7180fed274715982abcd696d/plot_mlp_alpha.py)
[`Download Jupyter notebook: plot_mlp_alpha.ipynb`](https://scikit-learn.org/1.1/_downloads/7ce809adad0d67b96c1df3b7fcd74567/plot_mlp_alpha.ipynb)
scikit_learn Restricted Boltzmann Machine features for digit classification Note
Click [here](#sphx-glr-download-auto-examples-neural-networks-plot-rbm-logistic-classification-py) to download the full example code or to run this example in your browser via Binder
Restricted Boltzmann Machine features for digit classification
==============================================================
For greyscale image data where pixel values can be interpreted as degrees of blackness on a white background, like handwritten digit recognition, the Bernoulli Restricted Boltzmann machine model ([`BernoulliRBM`](../../modules/generated/sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM "sklearn.neural_network.BernoulliRBM")) can perform effective non-linear feature extraction.
```
# Authors: Yann N. Dauphin, Vlad Niculae, Gabriel Synnaeve
# License: BSD
```
Generate data
-------------
In order to learn good latent representations from a small dataset, we artificially generate more labeled data by perturbing the training data with linear shifts of 1 pixel in each direction.
```
import numpy as np
from scipy.ndimage import convolve
from sklearn import datasets
from sklearn.preprocessing import minmax_scale
from sklearn.model_selection import train_test_split
def nudge_dataset(X, Y):
"""
This produces a dataset 5 times bigger than the original one,
by moving the 8x8 images in X around by 1px to left, right, down, up
"""
direction_vectors = [
[[0, 1, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [1, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 1], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 1, 0]],
]
def shift(x, w):
return convolve(x.reshape((8, 8)), mode="constant", weights=w).ravel()
X = np.concatenate(
[X] + [np.apply_along_axis(shift, 1, X, vector) for vector in direction_vectors]
)
Y = np.concatenate([Y for _ in range(5)], axis=0)
return X, Y
X, y = datasets.load_digits(return_X_y=True)
X = np.asarray(X, "float32")
X, Y = nudge_dataset(X, y)
X = minmax_scale(X, feature_range=(0, 1)) # 0-1 scaling
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
```
Models definition
-----------------
We build a classification pipeline with a BernoulliRBM feature extractor and a [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") classifier.
```
from sklearn import linear_model
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
logistic = linear_model.LogisticRegression(solver="newton-cg", tol=1)
rbm = BernoulliRBM(random_state=0, verbose=True)
rbm_features_classifier = Pipeline(steps=[("rbm", rbm), ("logistic", logistic)])
```
Training
--------
The hyperparameters of the entire model (learning rate, hidden layer size, regularization) were optimized by grid search, but the search is not reproduced here because of runtime constraints.
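For reference, a rough sketch of how such a search could be set up on the pipeline defined above is given below; the parameter grid and the number of folds are illustrative assumptions, and the fit is left commented out because it is slow.
```
# Hedged sketch of the omitted hyperparameter search on the pipeline defined
# above (rbm_features_classifier). The grid values below are assumptions.
from sklearn.model_selection import GridSearchCV

param_grid = {
    "rbm__learning_rate": [0.01, 0.06, 0.1],
    "rbm__n_components": [50, 100, 200],
    "logistic__C": [1000, 6000, 10000],
}
search = GridSearchCV(rbm_features_classifier, param_grid, cv=3, n_jobs=-1)
# search.fit(X_train, Y_train)  # slow; left commented out on purpose
# print(search.best_params_)
```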
```
from sklearn.base import clone
# Hyper-parameters. These were set by cross-validation,
# using a GridSearchCV. Here we are not performing cross-validation to
# save time.
rbm.learning_rate = 0.06
rbm.n_iter = 10
# More components tend to give better prediction performance, but larger
# fitting time
rbm.n_components = 100
logistic.C = 6000
# Training RBM-Logistic Pipeline
rbm_features_classifier.fit(X_train, Y_train)
# Training the Logistic regression classifier directly on the pixel
raw_pixel_classifier = clone(logistic)
raw_pixel_classifier.C = 100.0
raw_pixel_classifier.fit(X_train, Y_train)
```
```
[BernoulliRBM] Iteration 1, pseudo-likelihood = -25.57, time = 0.09s
[BernoulliRBM] Iteration 2, pseudo-likelihood = -23.68, time = 0.13s
[BernoulliRBM] Iteration 3, pseudo-likelihood = -22.88, time = 0.13s
[BernoulliRBM] Iteration 4, pseudo-likelihood = -21.91, time = 0.13s
[BernoulliRBM] Iteration 5, pseudo-likelihood = -21.79, time = 0.13s
[BernoulliRBM] Iteration 6, pseudo-likelihood = -20.96, time = 0.12s
[BernoulliRBM] Iteration 7, pseudo-likelihood = -20.88, time = 0.12s
[BernoulliRBM] Iteration 8, pseudo-likelihood = -20.50, time = 0.12s
[BernoulliRBM] Iteration 9, pseudo-likelihood = -20.34, time = 0.12s
[BernoulliRBM] Iteration 10, pseudo-likelihood = -20.21, time = 0.12s
```
```
LogisticRegression(C=100.0, solver='newton-cg', tol=1)
```
Evaluation
----------
```
from sklearn import metrics
Y_pred = rbm_features_classifier.predict(X_test)
print(
"Logistic regression using RBM features:\n%s\n"
% (metrics.classification_report(Y_test, Y_pred))
)
```
```
Logistic regression using RBM features:
precision recall f1-score support
0 1.00 0.98 0.99 174
1 0.90 0.92 0.91 184
2 0.93 0.95 0.94 166
3 0.94 0.89 0.92 194
4 0.95 0.94 0.94 186
5 0.94 0.91 0.93 181
6 0.98 0.97 0.97 207
7 0.94 0.99 0.97 154
8 0.90 0.89 0.90 182
9 0.88 0.92 0.90 169
accuracy 0.94 1797
macro avg 0.94 0.94 0.94 1797
weighted avg 0.94 0.94 0.94 1797
```
```
Y_pred = raw_pixel_classifier.predict(X_test)
print(
"Logistic regression using raw pixel features:\n%s\n"
% (metrics.classification_report(Y_test, Y_pred))
)
```
```
Logistic regression using raw pixel features:
precision recall f1-score support
0 0.90 0.93 0.91 174
1 0.60 0.57 0.58 184
2 0.75 0.85 0.80 166
3 0.77 0.78 0.78 194
4 0.82 0.84 0.83 186
5 0.77 0.77 0.77 181
6 0.91 0.87 0.89 207
7 0.85 0.88 0.87 154
8 0.68 0.58 0.62 182
9 0.73 0.77 0.75 169
accuracy 0.78 1797
macro avg 0.78 0.78 0.78 1797
weighted avg 0.78 0.78 0.78 1797
```
The features extracted by the BernoulliRBM help improve the classification accuracy with respect to the logistic regression on raw pixels.
Plotting
--------
```
import matplotlib.pyplot as plt
plt.figure(figsize=(4.2, 4))
for i, comp in enumerate(rbm.components_):
plt.subplot(10, 10, i + 1)
plt.imshow(comp.reshape((8, 8)), cmap=plt.cm.gray_r, interpolation="nearest")
plt.xticks(())
plt.yticks(())
plt.suptitle("100 components extracted by RBM", fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.536 seconds)
[`Download Python source code: plot_rbm_logistic_classification.py`](https://scikit-learn.org/1.1/_downloads/74cf42fe2afd38be640572601152dbe6/plot_rbm_logistic_classification.py)
[`Download Jupyter notebook: plot_rbm_logistic_classification.ipynb`](https://scikit-learn.org/1.1/_downloads/64788a28f138e0d4ba98dbbbb9116be5/plot_rbm_logistic_classification.ipynb)
scikit_learn Compare Stochastic learning strategies for MLPClassifier Note
Click [here](#sphx-glr-download-auto-examples-neural-networks-plot-mlp-training-curves-py) to download the full example code or to run this example in your browser via Binder
Compare Stochastic learning strategies for MLPClassifier
========================================================
This example visualizes some training loss curves for different stochastic learning strategies, including SGD and Adam. Because of time constraints, we use several small datasets, for which L-BFGS might be more suitable. The general trend shown in these examples seems to carry over to larger datasets, however.
Note that those results can be highly dependent on the value of `learning_rate_init`; a small sketch illustrating this sensitivity follows the script below.
```
learning on dataset iris
training: constant learning-rate
Training set score: 0.980000
Training set loss: 0.096950
training: constant with momentum
Training set score: 0.980000
Training set loss: 0.049530
training: constant with Nesterov's momentum
Training set score: 0.980000
Training set loss: 0.049540
training: inv-scaling learning-rate
Training set score: 0.360000
Training set loss: 0.978444
training: inv-scaling with momentum
Training set score: 0.860000
Training set loss: 0.503452
training: inv-scaling with Nesterov's momentum
Training set score: 0.860000
Training set loss: 0.504185
training: adam
Training set score: 0.980000
Training set loss: 0.045311
learning on dataset digits
training: constant learning-rate
Training set score: 0.956038
Training set loss: 0.243802
training: constant with momentum
Training set score: 0.992766
Training set loss: 0.041297
training: constant with Nesterov's momentum
Training set score: 0.993879
Training set loss: 0.042898
training: inv-scaling learning-rate
Training set score: 0.638843
Training set loss: 1.855465
training: inv-scaling with momentum
Training set score: 0.912632
Training set loss: 0.290584
training: inv-scaling with Nesterov's momentum
Training set score: 0.909293
Training set loss: 0.318387
training: adam
Training set score: 0.991653
Training set loss: 0.045934
learning on dataset circles
training: constant learning-rate
Training set score: 0.840000
Training set loss: 0.601052
training: constant with momentum
Training set score: 0.940000
Training set loss: 0.157334
training: constant with Nesterov's momentum
Training set score: 0.940000
Training set loss: 0.154453
training: inv-scaling learning-rate
Training set score: 0.500000
Training set loss: 0.692470
training: inv-scaling with momentum
Training set score: 0.500000
Training set loss: 0.689143
training: inv-scaling with Nesterov's momentum
Training set score: 0.500000
Training set loss: 0.689751
training: adam
Training set score: 0.940000
Training set loss: 0.150527
learning on dataset moons
training: constant learning-rate
Training set score: 0.850000
Training set loss: 0.341523
training: constant with momentum
Training set score: 0.850000
Training set loss: 0.336188
training: constant with Nesterov's momentum
Training set score: 0.850000
Training set loss: 0.335919
training: inv-scaling learning-rate
Training set score: 0.500000
Training set loss: 0.689015
training: inv-scaling with momentum
Training set score: 0.830000
Training set loss: 0.512595
training: inv-scaling with Nesterov's momentum
Training set score: 0.830000
Training set loss: 0.513034
training: adam
Training set score: 0.930000
Training set loss: 0.170087
```
```
import warnings
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn import datasets
from sklearn.exceptions import ConvergenceWarning
# different learning rate schedules and momentum parameters
params = [
{
"solver": "sgd",
"learning_rate": "constant",
"momentum": 0,
"learning_rate_init": 0.2,
},
{
"solver": "sgd",
"learning_rate": "constant",
"momentum": 0.9,
"nesterovs_momentum": False,
"learning_rate_init": 0.2,
},
{
"solver": "sgd",
"learning_rate": "constant",
"momentum": 0.9,
"nesterovs_momentum": True,
"learning_rate_init": 0.2,
},
{
"solver": "sgd",
"learning_rate": "invscaling",
"momentum": 0,
"learning_rate_init": 0.2,
},
{
"solver": "sgd",
"learning_rate": "invscaling",
"momentum": 0.9,
"nesterovs_momentum": True,
"learning_rate_init": 0.2,
},
{
"solver": "sgd",
"learning_rate": "invscaling",
"momentum": 0.9,
"nesterovs_momentum": False,
"learning_rate_init": 0.2,
},
{"solver": "adam", "learning_rate_init": 0.01},
]
labels = [
"constant learning-rate",
"constant with momentum",
"constant with Nesterov's momentum",
"inv-scaling learning-rate",
"inv-scaling with momentum",
"inv-scaling with Nesterov's momentum",
"adam",
]
plot_args = [
{"c": "red", "linestyle": "-"},
{"c": "green", "linestyle": "-"},
{"c": "blue", "linestyle": "-"},
{"c": "red", "linestyle": "--"},
{"c": "green", "linestyle": "--"},
{"c": "blue", "linestyle": "--"},
{"c": "black", "linestyle": "-"},
]
def plot_on_dataset(X, y, ax, name):
# for each dataset, plot learning for each learning strategy
print("\nlearning on dataset %s" % name)
ax.set_title(name)
X = MinMaxScaler().fit_transform(X)
mlps = []
if name == "digits":
# digits is larger but converges fairly quickly
max_iter = 15
else:
max_iter = 400
for label, param in zip(labels, params):
print("training: %s" % label)
mlp = MLPClassifier(random_state=0, max_iter=max_iter, **param)
# some parameter combinations will not converge as can be seen on the
# plots so they are ignored here
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore", category=ConvergenceWarning, module="sklearn"
)
mlp.fit(X, y)
mlps.append(mlp)
print("Training set score: %f" % mlp.score(X, y))
print("Training set loss: %f" % mlp.loss_)
for mlp, label, args in zip(mlps, labels, plot_args):
ax.plot(mlp.loss_curve_, label=label, **args)
fig, axes = plt.subplots(2, 2, figsize=(15, 10))
# load / generate some toy datasets
iris = datasets.load_iris()
X_digits, y_digits = datasets.load_digits(return_X_y=True)
data_sets = [
(iris.data, iris.target),
(X_digits, y_digits),
datasets.make_circles(noise=0.2, factor=0.5, random_state=1),
datasets.make_moons(noise=0.3, random_state=0),
]
for ax, data, name in zip(
axes.ravel(), data_sets, ["iris", "digits", "circles", "moons"]
):
plot_on_dataset(*data, ax=ax, name=name)
fig.legend(ax.get_lines(), labels, ncol=3, loc="upper center")
plt.show()
```
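As noted at the top of this example, the curves depend strongly on `learning_rate_init`. A minimal sketch illustrating this sensitivity (not part of the original example; the two values compared below are arbitrary assumptions):
```
# Hedged sketch: sensitivity of the final training loss to learning_rate_init
# on the iris dataset. The two values compared below are arbitrary choices.
from sklearn import datasets
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

X_iris, y_iris = datasets.load_iris(return_X_y=True)
X_iris = MinMaxScaler().fit_transform(X_iris)

for lr in (0.2, 0.002):
    mlp = MLPClassifier(
        solver="sgd",
        learning_rate="constant",
        momentum=0.9,
        learning_rate_init=lr,
        max_iter=400,
        random_state=0,
    ).fit(X_iris, y_iris)  # may emit a ConvergenceWarning for the small value
    print(f"learning_rate_init={lr}: final training loss {mlp.loss_:.3f}")
```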
**Total running time of the script:** ( 0 minutes 2.756 seconds)
[`Download Python source code: plot_mlp_training_curves.py`](https://scikit-learn.org/1.1/_downloads/1a0eed524b181183d59e57a5a9df565b/plot_mlp_training_curves.py)
[`Download Jupyter notebook: plot_mlp_training_curves.ipynb`](https://scikit-learn.org/1.1/_downloads/646b2c083be2d0f557b369a51cec966d/plot_mlp_training_curves.ipynb)
scikit_learn Release Highlights for scikit-learn 0.22 Note
Click [here](#sphx-glr-download-auto-examples-release-highlights-plot-release-highlights-0-22-0-py) to download the full example code or to run this example in your browser via Binder
Release Highlights for scikit-learn 0.22
========================================
We are pleased to announce the release of scikit-learn 0.22, which comes with many bug fixes and new features! We detail below a few of the major features of this release. For an exhaustive list of all the changes, please refer to the [release notes](https://scikit-learn.org/1.1/whats_new/v0.22.html#changes-0-22).
To install the latest version (with pip):
```
pip install --upgrade scikit-learn
```
or with conda:
```
conda install -c conda-forge scikit-learn
```
New plotting API
----------------
A new plotting API is available for creating visualizations. This new API allows for quickly adjusting the visuals of a plot without involving any recomputation. It is also possible to add different plots to the same figure. The following example illustrates [`plot_roc_curve`](../../modules/generated/sklearn.metrics.plot_roc_curve#sklearn.metrics.plot_roc_curve "sklearn.metrics.plot_roc_curve"), but other plot utilities are also supported, such as [`plot_partial_dependence`](../../modules/generated/sklearn.inspection.plot_partial_dependence#sklearn.inspection.plot_partial_dependence "sklearn.inspection.plot_partial_dependence"), [`plot_precision_recall_curve`](../../modules/generated/sklearn.metrics.plot_precision_recall_curve#sklearn.metrics.plot_precision_recall_curve "sklearn.metrics.plot_precision_recall_curve"), and [`plot_confusion_matrix`](../../modules/generated/sklearn.metrics.plot_confusion_matrix#sklearn.metrics.plot_confusion_matrix "sklearn.metrics.plot_confusion_matrix"). Read more about this new API in the [User Guide](https://scikit-learn.org/1.1/visualizations.html#visualizations).
```
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import plot_roc_curve
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt
X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
svc = SVC(random_state=42)
svc.fit(X_train, y_train)
rfc = RandomForestClassifier(random_state=42)
rfc.fit(X_train, y_train)
svc_disp = plot_roc_curve(svc, X_test, y_test)
rfc_disp = plot_roc_curve(rfc, X_test, y_test, ax=svc_disp.ax_)
rfc_disp.figure_.suptitle("ROC curve comparison")
plt.show()
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:87: FutureWarning: Function plot_roc_curve is deprecated; Function :func:`plot_roc_curve` is deprecated in 1.0 and will be removed in 1.2. Use one of the class methods: :meth:`sklearn.metrics.RocCurveDisplay.from_predictions` or :meth:`sklearn.metrics.RocCurveDisplay.from_estimator`.
warnings.warn(msg, category=FutureWarning)
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:87: FutureWarning: Function plot_roc_curve is deprecated; Function :func:`plot_roc_curve` is deprecated in 1.0 and will be removed in 1.2. Use one of the class methods: :meth:`sklearn.metrics.RocCurveDisplay.from_predictions` or :meth:`sklearn.metrics.RocCurveDisplay.from_estimator`.
warnings.warn(msg, category=FutureWarning)
```
Stacking Classifier and Regressor
---------------------------------
[`StackingClassifier`](../../modules/generated/sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier") and [`StackingRegressor`](../../modules/generated/sklearn.ensemble.stackingregressor#sklearn.ensemble.StackingRegressor "sklearn.ensemble.StackingRegressor") allow you to have a stack of estimators with a final classifier or a regressor. Stacked generalization consists of stacking the outputs of individual estimators and using a classifier to compute the final prediction. Stacking makes it possible to exploit the strength of each individual estimator by using their outputs as the input of a final estimator. Base estimators are fitted on the full `X` while the final estimator is trained using cross-validated predictions of the base estimators obtained with `cross_val_predict`.
Read more in the [User Guide](../../modules/ensemble#stacking).
```
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
X, y = load_iris(return_X_y=True)
estimators = [
("rf", RandomForestClassifier(n_estimators=10, random_state=42)),
("svr", make_pipeline(StandardScaler(), LinearSVC(random_state=42))),
]
clf = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression())
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
clf.fit(X_train, y_train).score(X_test, y_test)
```
```
0.9473684210526315
```
Permutation-based feature importance
------------------------------------
The [`inspection.permutation_importance`](../../modules/generated/sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance") can be used to get an estimate of the importance of each feature, for any fitted estimator:
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
X, y = make_classification(random_state=0, n_features=5, n_informative=3)
feature_names = np.array([f"x_{i}" for i in range(X.shape[1])])
rf = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0, n_jobs=2)
fig, ax = plt.subplots()
sorted_idx = result.importances_mean.argsort()
ax.boxplot(
result.importances[sorted_idx].T, vert=False, labels=feature_names[sorted_idx]
)
ax.set_title("Permutation Importance of each feature")
ax.set_ylabel("Features")
fig.tight_layout()
plt.show()
```
Native support for missing values for gradient boosting
-------------------------------------------------------
The [`ensemble.HistGradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`ensemble.HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") now have native support for missing values (NaNs). This means that there is no need for imputing data when training or predicting.
```
from sklearn.ensemble import HistGradientBoostingClassifier
X = np.array([0, 1, 2, np.nan]).reshape(-1, 1)
y = [0, 0, 1, 1]
gbdt = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
print(gbdt.predict(X))
```
```
[0 0 1 1]
```
Precomputed sparse nearest neighbors graph
------------------------------------------
Most estimators based on nearest neighbors graphs now accept precomputed sparse graphs as input, to reuse the same graph for multiple estimator fits. To use this feature in a pipeline, one can use the `memory` parameter, along with one of the two new transformers, [`neighbors.KNeighborsTransformer`](../../modules/generated/sklearn.neighbors.kneighborstransformer#sklearn.neighbors.KNeighborsTransformer "sklearn.neighbors.KNeighborsTransformer") and [`neighbors.RadiusNeighborsTransformer`](../../modules/generated/sklearn.neighbors.radiusneighborstransformer#sklearn.neighbors.RadiusNeighborsTransformer "sklearn.neighbors.RadiusNeighborsTransformer"). The precomputation can also be performed by custom estimators to use alternative implementations, such as approximate nearest neighbors methods. See more details in the [User Guide](../../modules/neighbors#neighbors-transformer).
```
from tempfile import TemporaryDirectory
from sklearn.neighbors import KNeighborsTransformer
from sklearn.manifold import Isomap
from sklearn.pipeline import make_pipeline
X, y = make_classification(random_state=0)
with TemporaryDirectory(prefix="sklearn_cache_") as tmpdir:
estimator = make_pipeline(
KNeighborsTransformer(n_neighbors=10, mode="distance"),
Isomap(n_neighbors=10, metric="precomputed"),
memory=tmpdir,
)
estimator.fit(X)
# We can decrease the number of neighbors and the graph will not be
# recomputed.
estimator.set_params(isomap__n_neighbors=5)
estimator.fit(X)
```
KNN Based Imputation
--------------------
We now support imputation for completing missing values using k-Nearest Neighbors.
Each sample’s missing values are imputed using the mean value from the `n_neighbors` nearest neighbors found in the training set. Two samples are considered close if the features that neither sample is missing are close. By default, a Euclidean distance metric that supports missing values, `nan_euclidean_distances`, is used to find the nearest neighbors.
Read more in the [User Guide](../../modules/impute#knnimpute).
```
from sklearn.impute import KNNImputer
X = [[1, 2, np.nan], [3, 4, 3], [np.nan, 6, 5], [8, 8, 7]]
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```
```
[[1. 2. 4. ]
[3. 4. 3. ]
[5.5 6. 5. ]
[8. 8. 7. ]]
```
Tree pruning
------------
It is now possible to prune most tree-based estimators once the trees are built. The pruning is based on minimal cost-complexity. Read more in the [User Guide](../../modules/tree#minimal-cost-complexity-pruning) for details.
```
X, y = make_classification(random_state=0)
rf = RandomForestClassifier(random_state=0, ccp_alpha=0).fit(X, y)
print(
"Average number of nodes without pruning {:.1f}".format(
np.mean([e.tree_.node_count for e in rf.estimators_])
)
)
rf = RandomForestClassifier(random_state=0, ccp_alpha=0.05).fit(X, y)
print(
"Average number of nodes with pruning {:.1f}".format(
np.mean([e.tree_.node_count for e in rf.estimators_])
)
)
```
```
Average number of nodes without pruning 22.3
Average number of nodes with pruning 6.4
```
Retrieve dataframes from OpenML
-------------------------------
[`datasets.fetch_openml`](../../modules/generated/sklearn.datasets.fetch_openml#sklearn.datasets.fetch_openml "sklearn.datasets.fetch_openml") can now return a pandas dataframe and thus properly handle datasets with heterogeneous data:
```
from sklearn.datasets import fetch_openml
titanic = fetch_openml("titanic", version=1, as_frame=True)
print(titanic.data.head()[["pclass", "embarked"]])
```
```
pclass embarked
0 1.0 S
1 1.0 S
2 1.0 S
3 1.0 S
4 1.0 S
```
Checking scikit-learn compatibility of an estimator
---------------------------------------------------
Developers can check the compatibility of their scikit-learn compatible estimators using [`check_estimator`](../../modules/generated/sklearn.utils.estimator_checks.check_estimator#sklearn.utils.estimator_checks.check_estimator "sklearn.utils.estimator_checks.check_estimator"). For instance, the `check_estimator(LinearSVC())` passes.
We now provide a `pytest` specific decorator which allows `pytest` to run all checks independently and report the checks that are failing.
Note
This entry was slightly updated in version 0.24, where passing classes isn’t supported anymore: pass instances instead.
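For instance, the single check mentioned above can be run directly; this is a minimal sketch (only the `check_estimator(LinearSVC())` call itself comes from the text):
```
from sklearn.svm import LinearSVC
from sklearn.utils.estimator_checks import check_estimator

# Runs the battery of common estimator checks; it raises an exception as soon
# as a check fails and returns None when the estimator is compatible.
check_estimator(LinearSVC())
```
The `pytest` decorator is used as follows: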
```
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils.estimator_checks import parametrize_with_checks
@parametrize_with_checks([LogisticRegression(), DecisionTreeRegressor()])
def test_sklearn_compatible_estimator(estimator, check):
check(estimator)
```
ROC AUC now supports multiclass classification
----------------------------------------------
The `roc_auc_score` function can also be used in multi-class classification. Two averaging strategies are currently supported: the one-vs-one algorithm computes the average of the pairwise ROC AUC scores, and the one-vs-rest algorithm computes the average of the ROC AUC scores for each class against all other classes. In both cases, the multiclass ROC AUC scores are computed from the probability estimates that a sample belongs to a particular class according to the model. The OvO and OvR algorithms support weighting uniformly (`average='macro'`) and weighting by the prevalence (`average='weighted'`).
Read more in the [User Guide](../../modules/model_evaluation#roc-metrics).
```
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
X, y = make_classification(n_classes=4, n_informative=16)
clf = SVC(decision_function_shape="ovo", probability=True).fit(X, y)
print(roc_auc_score(y, clf.predict_proba(X), multi_class="ovo"))
```
```
0.9925333333333333
```
**Total running time of the script:** ( 0 minutes 1.082 seconds)
[`Download Python source code: plot_release_highlights_0_22_0.py`](https://scikit-learn.org/1.1/_downloads/50040ae12dd16e7d2e79135d7793c17e/plot_release_highlights_0_22_0.py)
[`Download Jupyter notebook: plot_release_highlights_0_22_0.ipynb`](https://scikit-learn.org/1.1/_downloads/df790541d4c6bdebcc75018a2459467a/plot_release_highlights_0_22_0.ipynb)
scikit_learn Release Highlights for scikit-learn 0.24 Note
Click [here](#sphx-glr-download-auto-examples-release-highlights-plot-release-highlights-0-24-0-py) to download the full example code or to run this example in your browser via Binder
Release Highlights for scikit-learn 0.24
========================================
We are pleased to announce the release of scikit-learn 0.24! Many bug fixes and improvements were added, as well as some new key features. We detail below a few of the major features of this release. **For an exhaustive list of all the changes**, please refer to the [release notes](https://scikit-learn.org/1.1/whats_new/v0.24.html#changes-0-24).
To install the latest version (with pip):
```
pip install --upgrade scikit-learn
```
or with conda:
```
conda install -c conda-forge scikit-learn
```
Successive Halving estimators for tuning hyper-parameters
---------------------------------------------------------
Successive Halving, a state-of-the-art method, is now available to explore the space of the parameters and identify their best combination. [`HalvingGridSearchCV`](../../modules/generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") and [`HalvingRandomSearchCV`](../../modules/generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV") can be used as drop-in replacements for [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") and [`RandomizedSearchCV`](../../modules/generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV"). Successive Halving is an iterative selection process illustrated in the figure below. The first iteration is run with a small amount of resources, where the resource typically corresponds to the number of training samples, but can also be an arbitrary integer parameter such as `n_estimators` in a random forest. Only a subset of the parameter candidates are selected for the next iteration, which will be run with an increasing amount of allocated resources. Only a subset of candidates will last until the end of the iteration process, and the best parameter candidate is the one that has the highest score on the last iteration.
Read more in the [User Guide](../../modules/grid_search#successive-halving-user-guide) (note: the Successive Halving estimators are still [experimental](https://scikit-learn.org/1.1/glossary.html#term-experimental)).
```
import numpy as np
from scipy.stats import randint
from sklearn.experimental import enable_halving_search_cv # noqa
from sklearn.model_selection import HalvingRandomSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
rng = np.random.RandomState(0)
X, y = make_classification(n_samples=700, random_state=rng)
clf = RandomForestClassifier(n_estimators=10, random_state=rng)
param_dist = {
"max_depth": [3, None],
"max_features": randint(1, 11),
"min_samples_split": randint(2, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"],
}
rsh = HalvingRandomSearchCV(
estimator=clf, param_distributions=param_dist, factor=2, random_state=rng
)
rsh.fit(X, y)
rsh.best_params_
```
```
{'bootstrap': True, 'criterion': 'gini', 'max_depth': None, 'max_features': 10, 'min_samples_split': 10}
```
Native support for categorical features in HistGradientBoosting estimators
--------------------------------------------------------------------------
[`HistGradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") now have native support for categorical features: they can consider splits on non-ordered, categorical data. Read more in the [User Guide](../../modules/ensemble#categorical-support-gbdt).
The plot shows that the new native support for categorical features leads to fitting times that are comparable to models where the categories are treated as ordered quantities, i.e. simply ordinal-encoded. Native support is also more expressive than both one-hot encoding and ordinal encoding. However, to use the new `categorical_features` parameter, it is still required to preprocess the data within a pipeline as demonstrated in this [example](../ensemble/plot_gradient_boosting_categorical#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-categorical-py).
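As a minimal, hedged sketch of what such a pipeline could look like (the column names and toy data below are illustrative and not part of the original example), the categorical column is ordinal-encoded inside the pipeline and then flagged through `categorical_features` by its position in the transformed array:
```
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.RandomState(0)
X = pd.DataFrame(
    {
        "color": rng.choice(["red", "green", "blue"], size=100),  # categorical
        "size": rng.rand(100),  # numerical
    }
)
y = (X["color"] == "red").astype(int)

# Ordinal-encode the categorical column, pass the numerical one through.
preprocessor = make_column_transformer(
    (OrdinalEncoder(), ["color"]), remainder="passthrough"
)
model = make_pipeline(
    preprocessor,
    # After the transformer, column 0 holds the encoded "color" feature.
    HistGradientBoostingClassifier(categorical_features=[0]),
)
model.fit(X, y)
```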
Improved performances of HistGradientBoosting estimators
--------------------------------------------------------
The memory footprint of [`ensemble.HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") and [`ensemble.HistGradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") has been significantly improved during calls to `fit`. In addition, histogram initialization is now done in parallel which results in slight speed improvements. See more in the [Benchmark page](https://scikit-learn.org/scikit-learn-benchmarks/).
New self-training meta-estimator
--------------------------------
A new self-training implementation, based on [Yarowsky’s algorithm](https://doi.org/10.3115/981658.981684), can now be used with any classifier that implements [predict\_proba](https://scikit-learn.org/1.1/glossary.html#term-predict_proba). The sub-classifier will behave as a semi-supervised classifier, allowing it to learn from unlabeled data. Read more in the [User guide](../../modules/semi_supervised#self-training).
```
import numpy as np
from sklearn import datasets
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC
rng = np.random.RandomState(42)
iris = datasets.load_iris()
random_unlabeled_points = rng.rand(iris.target.shape[0]) < 0.3
iris.target[random_unlabeled_points] = -1
svc = SVC(probability=True, gamma="auto")
self_training_model = SelfTrainingClassifier(svc)
self_training_model.fit(iris.data, iris.target)
```
```
SelfTrainingClassifier(base_estimator=SVC(gamma='auto', probability=True))
```
New SequentialFeatureSelector transformer
-----------------------------------------
A new iterative transformer to select features is available: [`SequentialFeatureSelector`](../../modules/generated/sklearn.feature_selection.sequentialfeatureselector#sklearn.feature_selection.SequentialFeatureSelector "sklearn.feature_selection.SequentialFeatureSelector"). Sequential Feature Selection can add features one at a time (forward selection) or remove features from the list of the available features (backward selection), based on a cross-validated score maximization. See the [User Guide](../../modules/feature_selection#sequential-feature-selection).
```
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True, as_frame=True)
feature_names = X.columns
knn = KNeighborsClassifier(n_neighbors=3)
sfs = SequentialFeatureSelector(knn, n_features_to_select=2)
sfs.fit(X, y)
print(
"Features selected by forward sequential selection: "
f"{feature_names[sfs.get_support()].tolist()}"
)
```
```
Features selected by forward sequential selection: ['sepal length (cm)', 'petal width (cm)']
```
New PolynomialCountSketch kernel approximation function
-------------------------------------------------------
The new [`PolynomialCountSketch`](../../modules/generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch") approximates a polynomial expansion of a feature space when used with linear models, but uses much less memory than [`PolynomialFeatures`](../../modules/generated/sklearn.preprocessing.polynomialfeatures#sklearn.preprocessing.PolynomialFeatures "sklearn.preprocessing.PolynomialFeatures").
```
from sklearn.datasets import fetch_covtype
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.kernel_approximation import PolynomialCountSketch
from sklearn.linear_model import LogisticRegression
X, y = fetch_covtype(return_X_y=True)
pipe = make_pipeline(
MinMaxScaler(),
PolynomialCountSketch(degree=2, n_components=300),
LogisticRegression(max_iter=1000),
)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=5000, test_size=10000, random_state=42
)
pipe.fit(X_train, y_train).score(X_test, y_test)
```
```
0.7358
```
For comparison, here is the score of a linear baseline for the same data:
```
linear_baseline = make_pipeline(MinMaxScaler(), LogisticRegression(max_iter=1000))
linear_baseline.fit(X_train, y_train).score(X_test, y_test)
```
```
0.7137
```
Individual Conditional Expectation plots
----------------------------------------
A new kind of partial dependence plot is available: the Individual Conditional Expectation (ICE) plot. ICE plots visualize the dependence of the prediction on a feature for each sample separately, with one line per sample. See the [User Guide](../../modules/partial_dependence#individual-conditional)
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import fetch_california_housing
from sklearn.inspection import plot_partial_dependence
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
features = ["MedInc", "AveOccup", "HouseAge", "AveRooms"]
est = RandomForestRegressor(n_estimators=10)
est.fit(X, y)
display = plot_partial_dependence(
est,
X,
features,
kind="individual",
subsample=50,
n_jobs=3,
grid_resolution=20,
random_state=0,
)
display.figure_.suptitle(
"Partial dependence of house value on non-location features\n"
"for the California housing dataset, with BayesianRidge"
)
display.figure_.subplots_adjust(hspace=0.3)
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:87: FutureWarning: Function plot_partial_dependence is deprecated; Function `plot_partial_dependence` is deprecated in 1.0 and will be removed in 1.2. Use PartialDependenceDisplay.from_estimator instead
warnings.warn(msg, category=FutureWarning)
```
New Poisson splitting criterion for DecisionTreeRegressor
---------------------------------------------------------
The integration of Poisson regression estimation continues from version 0.23. [`DecisionTreeRegressor`](../../modules/generated/sklearn.tree.decisiontreeregressor#sklearn.tree.DecisionTreeRegressor "sklearn.tree.DecisionTreeRegressor") now supports a new `'poisson'` splitting criterion. Setting `criterion="poisson"` might be a good choice if your target is a count or a frequency.
```
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
import numpy as np
n_samples, n_features = 1000, 20
rng = np.random.RandomState(0)
X = rng.randn(n_samples, n_features)
# positive integer target correlated with X[:, 5] with many zeros:
y = rng.poisson(lam=np.exp(X[:, 5]) / 2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=rng)
regressor = DecisionTreeRegressor(criterion="poisson", random_state=0)
regressor.fit(X_train, y_train)
```
```
DecisionTreeRegressor(criterion='poisson', random_state=0)
```
New documentation improvements
------------------------------
New examples and documentation pages have been added, in a continuous effort to improve the understanding of machine learning practices:
* a new section about [common pitfalls and recommended practices](https://scikit-learn.org/1.1/common_pitfalls.html#common-pitfalls),
* an example illustrating how to [statistically compare the performance of models](../model_selection/plot_grid_search_stats#sphx-glr-auto-examples-model-selection-plot-grid-search-stats-py) evaluated using [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV"),
* an example on how to [interpret coefficients of linear models](../inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py),
* an [example](../cross_decomposition/plot_pcr_vs_pls#sphx-glr-auto-examples-cross-decomposition-plot-pcr-vs-pls-py) comparing Principal Component Regression and Partial Least Squares.
**Total running time of the script:** ( 1 minutes 13.459 seconds)
[`Download Python source code: plot_release_highlights_0_24_0.py`](https://scikit-learn.org/1.1/_downloads/d20432c5cccdd8b208184031b83a8cf9/plot_release_highlights_0_24_0.py)
[`Download Jupyter notebook: plot_release_highlights_0_24_0.ipynb`](https://scikit-learn.org/1.1/_downloads/2e6d841155147c9fbce4ea3837b924b9/plot_release_highlights_0_24_0.ipynb)
scikit_learn Release Highlights for scikit-learn 1.1 Note
Click [here](#sphx-glr-download-auto-examples-release-highlights-plot-release-highlights-1-1-0-py) to download the full example code or to run this example in your browser via Binder
Release Highlights for scikit-learn 1.1
=======================================
We are pleased to announce the release of scikit-learn 1.1! Many bug fixes and improvements were added, as well as some new key features. We detail below a few of the major features of this release. **For an exhaustive list of all the changes**, please refer to the [release notes](https://scikit-learn.org/1.1/whats_new/v1.1.html#changes-1-1).
To install the latest version (with pip):
```
pip install --upgrade scikit-learn
```
or with conda:
```
conda install -c conda-forge scikit-learn
```
Quantile loss in ensemble.HistGradientBoostingRegressor
-------------------------------------------------------
[`ensemble.HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") can model quantiles with `loss="quantile"` and the new parameter `quantile`.
```
from sklearn.ensemble import HistGradientBoostingRegressor
import numpy as np
import matplotlib.pyplot as plt
# Simple regression function for X * cos(X)
rng = np.random.RandomState(42)
X_1d = np.linspace(0, 10, num=2000)
X = X_1d.reshape(-1, 1)
y = X_1d * np.cos(X_1d) + rng.normal(scale=X_1d / 3)
quantiles = [0.95, 0.5, 0.05]
parameters = dict(loss="quantile", max_bins=32, max_iter=50)
hist_quantiles = {
f"quantile={quantile:.2f}": HistGradientBoostingRegressor(
**parameters, quantile=quantile
).fit(X, y)
for quantile in quantiles
}
fig, ax = plt.subplots()
ax.plot(X_1d, y, "o", alpha=0.5, markersize=1)
for quantile, hist in hist_quantiles.items():
ax.plot(X_1d, hist.predict(X), label=quantile)
_ = ax.legend(loc="lower left")
```
`get_feature_names_out` Available in all Transformers
------------------------------------------------------
[get\_feature\_names\_out](https://scikit-learn.org/1.1/glossary.html#term-get_feature_names_out) is now available in all Transformers. This enables [`pipeline.Pipeline`](../../modules/generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") to construct the output feature names for more complex pipelines:
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
numeric_features = ["age", "fare"]
numeric_transformer = make_pipeline(SimpleImputer(strategy="median"), StandardScaler())
categorical_features = ["embarked", "pclass"]
preprocessor = ColumnTransformer(
[
("num", numeric_transformer, numeric_features),
(
"cat",
OneHotEncoder(handle_unknown="ignore", sparse=False),
categorical_features,
),
],
verbose_feature_names_out=False,
)
log_reg = make_pipeline(preprocessor, SelectKBest(k=7), LogisticRegression())
log_reg.fit(X, y)
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('num',
Pipeline(steps=[('simpleimputer',
SimpleImputer(strategy='median')),
('standardscaler',
StandardScaler())]),
['age', 'fare']),
('cat',
OneHotEncoder(handle_unknown='ignore',
sparse=False),
['embarked', 'pclass'])],
verbose_feature_names_out=False)),
('selectkbest', SelectKBest(k=7)),
('logisticregression', LogisticRegression())])
```
Here we slice the pipeline to include all the steps but the last one. The output feature names of this pipeline slice are the features put into logistic regression. These names correspond directly to the coefficients in the logistic regression:
```
import pandas as pd
log_reg_input_features = log_reg[:-1].get_feature_names_out()
pd.Series(log_reg[-1].coef_.ravel(), index=log_reg_input_features).plot.bar()
plt.tight_layout()
```
Grouping infrequent categories in `OneHotEncoder`
-------------------------------------------------
`OneHotEncoder` supports aggregating infrequent categories into a single output for each feature. The parameters to enable the gathering of infrequent categories are `min_frequency` and `max_categories`. See the [User Guide](../../modules/preprocessing#one-hot-encoder-infrequent-categories) for more details.
```
from sklearn.preprocessing import OneHotEncoder
import numpy as np
X = np.array(
[["dog"] * 5 + ["cat"] * 20 + ["rabbit"] * 10 + ["snake"] * 3], dtype=object
).T
enc = OneHotEncoder(min_frequency=6, sparse=False).fit(X)
enc.infrequent_categories_
```
```
[array(['dog', 'snake'], dtype=object)]
```
Since dog and snake are infrequent categories, they are grouped together when transformed:
```
encoded = enc.transform(np.array([["dog"], ["snake"], ["cat"], ["rabbit"]]))
pd.DataFrame(encoded, columns=enc.get_feature_names_out())
```
| | x0\_cat | x0\_rabbit | x0\_infrequent\_sklearn |
| --- | --- | --- | --- |
| 0 | 0.0 | 0.0 | 1.0 |
| 1 | 0.0 | 0.0 | 1.0 |
| 2 | 1.0 | 0.0 | 0.0 |
| 3 | 0.0 | 1.0 | 0.0 |
Performance improvements
------------------------
Reductions on pairwise distances for dense float64 datasets have been refactored to better take advantage of non-blocking thread parallelism. For example, [`neighbors.NearestNeighbors.kneighbors`](../../modules/generated/sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors.kneighbors "sklearn.neighbors.NearestNeighbors.kneighbors") and [`neighbors.NearestNeighbors.radius_neighbors`](../../modules/generated/sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors.radius_neighbors "sklearn.neighbors.NearestNeighbors.radius_neighbors") can respectively be up to ×20 and ×5 faster than previously. In summary, the following functions and estimators now benefit from improved performance:
* [`metrics.pairwise_distances_argmin`](../../modules/generated/sklearn.metrics.pairwise_distances_argmin#sklearn.metrics.pairwise_distances_argmin "sklearn.metrics.pairwise_distances_argmin")
* [`metrics.pairwise_distances_argmin_min`](../../modules/generated/sklearn.metrics.pairwise_distances_argmin_min#sklearn.metrics.pairwise_distances_argmin_min "sklearn.metrics.pairwise_distances_argmin_min")
* [`cluster.AffinityPropagation`](../../modules/generated/sklearn.cluster.affinitypropagation#sklearn.cluster.AffinityPropagation "sklearn.cluster.AffinityPropagation")
* [`cluster.Birch`](../../modules/generated/sklearn.cluster.birch#sklearn.cluster.Birch "sklearn.cluster.Birch")
* [`cluster.MeanShift`](../../modules/generated/sklearn.cluster.meanshift#sklearn.cluster.MeanShift "sklearn.cluster.MeanShift")
* [`cluster.OPTICS`](../../modules/generated/sklearn.cluster.optics#sklearn.cluster.OPTICS "sklearn.cluster.OPTICS")
* [`cluster.SpectralClustering`](../../modules/generated/sklearn.cluster.spectralclustering#sklearn.cluster.SpectralClustering "sklearn.cluster.SpectralClustering")
* [`feature_selection.mutual_info_regression`](../../modules/generated/sklearn.feature_selection.mutual_info_regression#sklearn.feature_selection.mutual_info_regression "sklearn.feature_selection.mutual_info_regression")
* [`neighbors.KNeighborsClassifier`](../../modules/generated/sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier")
* [`neighbors.KNeighborsRegressor`](../../modules/generated/sklearn.neighbors.kneighborsregressor#sklearn.neighbors.KNeighborsRegressor "sklearn.neighbors.KNeighborsRegressor")
* [`neighbors.RadiusNeighborsClassifier`](../../modules/generated/sklearn.neighbors.radiusneighborsclassifier#sklearn.neighbors.RadiusNeighborsClassifier "sklearn.neighbors.RadiusNeighborsClassifier")
* [`neighbors.RadiusNeighborsRegressor`](../../modules/generated/sklearn.neighbors.radiusneighborsregressor#sklearn.neighbors.RadiusNeighborsRegressor "sklearn.neighbors.RadiusNeighborsRegressor")
* [`neighbors.LocalOutlierFactor`](../../modules/generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor")
* [`neighbors.NearestNeighbors`](../../modules/generated/sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors "sklearn.neighbors.NearestNeighbors")
* [`manifold.Isomap`](../../modules/generated/sklearn.manifold.isomap#sklearn.manifold.Isomap "sklearn.manifold.Isomap")
* [`manifold.LocallyLinearEmbedding`](../../modules/generated/sklearn.manifold.locallylinearembedding#sklearn.manifold.LocallyLinearEmbedding "sklearn.manifold.LocallyLinearEmbedding")
* [`manifold.TSNE`](../../modules/generated/sklearn.manifold.tsne#sklearn.manifold.TSNE "sklearn.manifold.TSNE")
* [`manifold.trustworthiness`](../../modules/generated/sklearn.manifold.trustworthiness#sklearn.manifold.trustworthiness "sklearn.manifold.trustworthiness")
* [`semi_supervised.LabelPropagation`](../../modules/generated/sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation "sklearn.semi_supervised.LabelPropagation")
* [`semi_supervised.LabelSpreading`](../../modules/generated/sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading "sklearn.semi_supervised.LabelSpreading")
To know more about the technical details of this work, you can read [this suite of blog posts](https://blog.scikit-learn.org/technical/performances/).
Moreover, the computation of loss functions has been refactored using Cython resulting in performance improvements for the following estimators:
* [`linear_model.LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression")
* [`linear_model.GammaRegressor`](../../modules/generated/sklearn.linear_model.gammaregressor#sklearn.linear_model.GammaRegressor "sklearn.linear_model.GammaRegressor")
* [`linear_model.PoissonRegressor`](../../modules/generated/sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor "sklearn.linear_model.PoissonRegressor")
* [`linear_model.TweedieRegressor`](../../modules/generated/sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor "sklearn.linear_model.TweedieRegressor")
MiniBatchNMF: an online version of NMF
--------------------------------------
The new class [`decomposition.MiniBatchNMF`](../../modules/generated/sklearn.decomposition.minibatchnmf#sklearn.decomposition.MiniBatchNMF "sklearn.decomposition.MiniBatchNMF") implements a faster but less accurate version of non-negative matrix factorization ([`decomposition.NMF`](../../modules/generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF")). `MiniBatchNMF` divides the data into mini-batches and optimizes the NMF model in an online manner by cycling over the mini-batches, making it better suited for large datasets. In particular, it implements `partial_fit`, which can be used for online learning when the data is not readily available from the start, or when the data does not fit into memory.
```
import numpy as np
from sklearn.decomposition import MiniBatchNMF
rng = np.random.RandomState(0)
n_samples, n_features, n_components = 10, 10, 5
true_W = rng.uniform(size=(n_samples, n_components))
true_H = rng.uniform(size=(n_components, n_features))
X = true_W @ true_H
nmf = MiniBatchNMF(n_components=n_components, random_state=0)
for _ in range(10):
nmf.partial_fit(X)
W = nmf.transform(X)
H = nmf.components_
X_reconstructed = W @ H
print(
f"relative reconstruction error: ",
f"{np.sum((X - X_reconstructed) ** 2) / np.sum(X**2):.5f}",
)
```
```
relative reconstruction error: 0.00364
```
BisectingKMeans: divide and cluster
-----------------------------------
The new class [`cluster.BisectingKMeans`](../../modules/generated/sklearn.cluster.bisectingkmeans#sklearn.cluster.BisectingKMeans "sklearn.cluster.BisectingKMeans") is a variant of `KMeans`, using divisive hierarchical clustering. Instead of creating all centroids at once, centroids are picked progressively based on a previous clustering: a cluster is split into two new clusters repeatedly until the target number of clusters is reached, giving a hierarchical structure to the clustering.
```
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, BisectingKMeans
import matplotlib.pyplot as plt
X, _ = make_blobs(n_samples=1000, centers=2, random_state=0)
km = KMeans(n_clusters=5, random_state=0).fit(X)
bisect_km = BisectingKMeans(n_clusters=5, random_state=0).fit(X)
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].scatter(X[:, 0], X[:, 1], s=10, c=km.labels_)
ax[0].scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], s=20, c="r")
ax[0].set_title("KMeans")
ax[1].scatter(X[:, 0], X[:, 1], s=10, c=bisect_km.labels_)
ax[1].scatter(
bisect_km.cluster_centers_[:, 0], bisect_km.cluster_centers_[:, 1], s=20, c="r"
)
_ = ax[1].set_title("BisectingKMeans")
```
**Total running time of the script:** ( 0 minutes 9.704 seconds)
[`Download Python source code: plot_release_highlights_1_1_0.py`](https://scikit-learn.org/1.1/_downloads/4cf0456267ced0f869a458ef4776d4c5/plot_release_highlights_1_1_0.py)
[`Download Jupyter notebook: plot_release_highlights_1_1_0.ipynb`](https://scikit-learn.org/1.1/_downloads/68fdea23e50d165632d4bd4e36453cd5/plot_release_highlights_1_1_0.ipynb)
scikit_learn Release Highlights for scikit-learn 0.23 Note
Click [here](#sphx-glr-download-auto-examples-release-highlights-plot-release-highlights-0-23-0-py) to download the full example code or to run this example in your browser via Binder
Release Highlights for scikit-learn 0.23
========================================
We are pleased to announce the release of scikit-learn 0.23! Many bug fixes and improvements were added, as well as some new key features. We detail below a few of the major features of this release. **For an exhaustive list of all the changes**, please refer to the [release notes](https://scikit-learn.org/1.1/whats_new/v0.23.html#changes-0-23).
To install the latest version (with pip):
```
pip install --upgrade scikit-learn
```
or with conda:
```
conda install -c conda-forge scikit-learn
```
Generalized Linear Models, and Poisson loss for gradient boosting
-----------------------------------------------------------------
Long-awaited Generalized Linear Models with non-normal loss functions are now available. In particular, three new regressors were implemented: [`PoissonRegressor`](../../modules/generated/sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor "sklearn.linear_model.PoissonRegressor"), [`GammaRegressor`](../../modules/generated/sklearn.linear_model.gammaregressor#sklearn.linear_model.GammaRegressor "sklearn.linear_model.GammaRegressor"), and [`TweedieRegressor`](../../modules/generated/sklearn.linear_model.tweedieregressor#sklearn.linear_model.TweedieRegressor "sklearn.linear_model.TweedieRegressor"). The Poisson regressor can be used to model positive integer counts, or relative frequencies. Read more in the [User Guide](../../modules/linear_model#generalized-linear-regression). Additionally, [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") supports a new ‘poisson’ loss as well.
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import PoissonRegressor
from sklearn.ensemble import HistGradientBoostingRegressor
n_samples, n_features = 1000, 20
rng = np.random.RandomState(0)
X = rng.randn(n_samples, n_features)
# positive integer target correlated with X[:, 5] with many zeros:
y = rng.poisson(lam=np.exp(X[:, 5]) / 2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=rng)
glm = PoissonRegressor()
gbdt = HistGradientBoostingRegressor(loss="poisson", learning_rate=0.01)
glm.fit(X_train, y_train)
gbdt.fit(X_train, y_train)
print(glm.score(X_test, y_test))
print(gbdt.score(X_test, y_test))
```
```
0.3577618906572577
0.42425183539869404
```
Rich visual representation of estimators
----------------------------------------
Estimators can now be visualized in notebooks by enabling the `display='diagram'` option. This is particularly useful to summarise the structure of pipelines and other composite estimators, with interactivity to provide detail. Click on the example image below to expand Pipeline elements. See [Visualizing Composite Estimators](../../modules/compose#visualizing-composite-estimators) for how you can use this feature.
```
from sklearn import set_config
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
set_config(display="diagram")
num_proc = make_pipeline(SimpleImputer(strategy="median"), StandardScaler())
cat_proc = make_pipeline(
SimpleImputer(strategy="constant", fill_value="missing"),
OneHotEncoder(handle_unknown="ignore"),
)
preprocessor = make_column_transformer(
(num_proc, ("feat1", "feat3")), (cat_proc, ("feat0", "feat2"))
)
clf = make_pipeline(preprocessor, LogisticRegression())
clf
```
```
Pipeline(steps=[('columntransformer',
ColumnTransformer(transformers=[('pipeline-1',
Pipeline(steps=[('simpleimputer',
SimpleImputer(strategy='median')),
('standardscaler',
StandardScaler())]),
('feat1', 'feat3')),
('pipeline-2',
Pipeline(steps=[('simpleimputer',
SimpleImputer(fill_value='missing',
strategy='constant')),
('onehotencoder',
OneHotEncoder(handle_unknown='ignore'))]),
('feat0', 'feat2'))])),
('logisticregression', LogisticRegression())])
```
Scalability and stability improvements to KMeans
------------------------------------------------
The [`KMeans`](../../modules/generated/sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans") estimator was entirely re-worked, and it is now significantly faster and more stable. In addition, the Elkan algorithm is now compatible with sparse matrices. The estimator uses OpenMP based parallelism instead of relying on joblib, so the `n_jobs` parameter has no effect anymore. For more details on how to control the number of threads, please refer to our [Parallelism](https://scikit-learn.org/1.1/computing/parallelism.html#parallelism) notes.
```
import scipy
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import completeness_score
rng = np.random.RandomState(0)
X, y = make_blobs(random_state=rng)
X = scipy.sparse.csr_matrix(X)
X_train, X_test, _, y_test = train_test_split(X, y, random_state=rng)
kmeans = KMeans(algorithm="elkan").fit(X_train)
print(completeness_score(kmeans.predict(X_test), y_test))
```
```
0.6560362663398502
```
Improvements to the histogram-based Gradient Boosting estimators
----------------------------------------------------------------
Various improvements were made to [`HistGradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor"). On top of the Poisson loss mentioned above, these estimators now support [sample weights](../../modules/ensemble#sw-hgbdt). Also, an automatic early-stopping criterion was added: early-stopping is enabled by default when the number of samples exceeds 10k. Finally, users can now define [monotonic constraints](../../modules/ensemble#monotonic-cst-gbdt) to constrain the predictions based on the variations of specific features. In the following example, we construct a target that is generally positively correlated with the first feature, with some noise. Applying monotonic constraints allows the prediction to capture the global effect of the first feature, instead of fitting the noise.
```
import numpy as np
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.inspection import plot_partial_dependence
from sklearn.ensemble import HistGradientBoostingRegressor
n_samples = 500
rng = np.random.RandomState(0)
X = rng.randn(n_samples, 2)
noise = rng.normal(loc=0.0, scale=0.01, size=n_samples)
y = 5 * X[:, 0] + np.sin(10 * np.pi * X[:, 0]) - noise
gbdt_no_cst = HistGradientBoostingRegressor().fit(X, y)
gbdt_cst = HistGradientBoostingRegressor(monotonic_cst=[1, 0]).fit(X, y)
disp = plot_partial_dependence(
gbdt_no_cst,
X,
features=[0],
feature_names=["feature 0"],
line_kw={"linewidth": 4, "label": "unconstrained", "color": "tab:blue"},
)
plot_partial_dependence(
gbdt_cst,
X,
features=[0],
line_kw={"linewidth": 4, "label": "constrained", "color": "tab:orange"},
ax=disp.axes_,
)
disp.axes_[0, 0].plot(
X[:, 0], y, "o", alpha=0.5, zorder=-1, label="samples", color="tab:green"
)
disp.axes_[0, 0].set_ylim(-3, 3)
disp.axes_[0, 0].set_xlim(-1, 1)
plt.legend()
plt.show()
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/utils/deprecation.py:87: FutureWarning: Function plot_partial_dependence is deprecated; Function `plot_partial_dependence` is deprecated in 1.0 and will be removed in 1.2. Use PartialDependenceDisplay.from_estimator instead
warnings.warn(msg, category=FutureWarning)
```
Sample-weight support for Lasso and ElasticNet
----------------------------------------------
The two linear regressors [`Lasso`](../../modules/generated/sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso") and [`ElasticNet`](../../modules/generated/sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet "sklearn.linear_model.ElasticNet") now support sample weights.
```
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
import numpy as np
n_samples, n_features = 1000, 20
rng = np.random.RandomState(0)
X, y = make_regression(n_samples, n_features, random_state=rng)
sample_weight = rng.rand(n_samples)
X_train, X_test, y_train, y_test, sw_train, sw_test = train_test_split(
X, y, sample_weight, random_state=rng
)
reg = Lasso()
reg.fit(X_train, y_train, sample_weight=sw_train)
print(reg.score(X_test, y_test, sw_test))
```
```
0.999791942438998
```
**Total running time of the script:** ( 0 minutes 0.606 seconds)
[`Download Python source code: plot_release_highlights_0_23_0.py`](https://scikit-learn.org/1.1/_downloads/4f07b03421908788913e044918d8ed1e/plot_release_highlights_0_23_0.py)
[`Download Jupyter notebook: plot_release_highlights_0_23_0.ipynb`](https://scikit-learn.org/1.1/_downloads/923fcad5e07de1ce7dc8dcbd7327c178/plot_release_highlights_0_23_0.ipynb)
scikit_learn Release Highlights for scikit-learn 1.0 Note
Click [here](#sphx-glr-download-auto-examples-release-highlights-plot-release-highlights-1-0-0-py) to download the full example code or to run this example in your browser via Binder
Release Highlights for scikit-learn 1.0
=======================================
We are very pleased to announce the release of scikit-learn 1.0! The library has been stable for quite some time; releasing version 1.0 recognizes that and signals it to our users. This release does not include any breaking changes apart from the usual two-release deprecation cycle. Going forward, we will do our best to keep this pattern.
This release includes some new key features as well as many improvements and bug fixes. We detail below a few of the major features of this release. **For an exhaustive list of all the changes**, please refer to the [release notes](https://scikit-learn.org/1.1/whats_new/v1.0.html#changes-1-0).
To install the latest version (with pip):
```
pip install --upgrade scikit-learn
```
or with conda:
```
conda install -c conda-forge scikit-learn
```
Keyword and positional arguments
--------------------------------
The scikit-learn API exposes many functions and methods which have many input parameters. For example, before this release, one could instantiate a [`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") as:
```
HistGradientBoostingRegressor("squared_error", 0.1, 100, 31, None,
20, 0.0, 255, None, None, False, "auto", "loss", 0.1, 10, 1e-7,
0, None)
```
Understanding the above code requires the reader to go to the API documentation and to check each and every parameter for its position and its meaning. To improve the readability of code written with scikit-learn, users now have to provide most parameters by name, as keyword arguments, instead of as positional arguments. For example, the above code would be:
```
HistGradientBoostingRegressor(
loss="squared_error",
learning_rate=0.1,
max_iter=100,
max_leaf_nodes=31,
max_depth=None,
min_samples_leaf=20,
l2_regularization=0.0,
max_bins=255,
categorical_features=None,
monotonic_cst=None,
warm_start=False,
early_stopping="auto",
scoring="loss",
validation_fraction=0.1,
n_iter_no_change=10,
tol=1e-7,
verbose=0,
random_state=None,
)
```
which is much more readable. Positional arguments have been deprecated since version 0.23 and will now raise a `TypeError`. A limited number of positional arguments are still allowed in some cases, for example in [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), where `PCA(10)` is still allowed, but `PCA(10, False)` is not allowed.
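A quick, hedged illustration of this behaviour, mirroring the `PCA` example from the text:
```
from sklearn.decomposition import PCA

PCA(10)  # a single positional argument is still accepted
try:
    PCA(10, False)  # further positional arguments now raise a TypeError
except TypeError as exc:
    print(exc)
```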
Spline Transformers
-------------------
One way to add nonlinear terms to a dataset’s feature set is to generate spline basis functions for continuous/numerical features with the new [`SplineTransformer`](../../modules/generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer"). Splines are piecewise polynomials, parametrized by their polynomial degree and the positions of the knots. The [`SplineTransformer`](../../modules/generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer") implements a B-spline basis.
The following code shows splines in action, for more information, please refer to the [User Guide](../../modules/preprocessing#spline-transformer).
```
import numpy as np
from sklearn.preprocessing import SplineTransformer
X = np.arange(5).reshape(5, 1)
spline = SplineTransformer(degree=2, n_knots=3)
spline.fit_transform(X)
```
```
array([[0.5 , 0.5 , 0. , 0. ],
[0.125, 0.75 , 0.125, 0. ],
[0. , 0.5 , 0.5 , 0. ],
[0. , 0.125, 0.75 , 0.125],
[0. , 0. , 0.5 , 0.5 ]])
```
Quantile Regressor
------------------
Quantile regression estimates the median or other quantiles of \(y\) conditional on \(X\), while ordinary least squares (OLS) estimates the conditional mean.
As a linear model, the new [`QuantileRegressor`](../../modules/generated/sklearn.linear_model.quantileregressor#sklearn.linear_model.QuantileRegressor "sklearn.linear_model.QuantileRegressor") gives linear predictions \(\hat{y}(w, X) = Xw\) for the \(q\)-th quantile, \(q \in (0, 1)\). The weights or coefficients \(w\) are then found by the following minimization problem:
\[\min_{w} \frac{1}{n_{\text{samples}}} \sum_i PB_q(y_i - X_i w) + \alpha ||w||_1.\] This consists of the pinball loss (also known as linear loss), see also [`mean_pinball_loss`](../../modules/generated/sklearn.metrics.mean_pinball_loss#sklearn.metrics.mean_pinball_loss "sklearn.metrics.mean_pinball_loss"),
\[\begin{split}PB_q(t) = q \max(t, 0) + (1 - q) \max(-t, 0) = \begin{cases} q t, & t > 0, \\ 0, & t = 0, \\ (q - 1) t, & t < 0 \end{cases}\end{split}\] and the L1 penalty controlled by parameter `alpha`, similar to [`linear_model.Lasso`](../../modules/generated/sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso").
Please check the following example to see how it works, and the [User Guide](../../modules/linear_model#quantile-regression) for more details.
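As a minimal, self-contained sketch (using assumed synthetic data, not the example referenced above), the estimator can be fitted for several quantiles:
```
# Minimal sketch with assumed synthetic data.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.RandomState(0)
X = rng.uniform(1, 10, size=(100, 1))
# heteroscedastic noise: the spread of y grows with X
y = 2 * X.ravel() + rng.normal(scale=X.ravel())

# alpha=0 disables the L1 penalty; `quantile` selects the target quantile q.
median = QuantileRegressor(quantile=0.5, alpha=0).fit(X, y)
q90 = QuantileRegressor(quantile=0.9, alpha=0).fit(X, y)
print(median.coef_, q90.coef_)
```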
Feature Names Support
---------------------
When an estimator is passed a [pandas’ dataframe](https://pandas.pydata.org/docs/user_guide/dsintro.html#dataframe) during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit), the estimator will set a `feature_names_in_` attribute containing the feature names. Note that feature names support is only enabled when the column names in the dataframe are all strings. `feature_names_in_` is used to check that the column names of the dataframe passed to non-[fit](https://scikit-learn.org/1.1/glossary.html#term-fit) methods, such as [predict](https://scikit-learn.org/1.1/glossary.html#term-predict), are consistent with the features seen in [fit](https://scikit-learn.org/1.1/glossary.html#term-fit):
```
from sklearn.preprocessing import StandardScaler
import pandas as pd
X = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=["a", "b", "c"])
scaler = StandardScaler().fit(X)
scaler.feature_names_in_
```
```
array(['a', 'b', 'c'], dtype=object)
```
Support for [get\_feature\_names\_out](https://scikit-learn.org/1.1/glossary.html#term-get_feature_names_out) is available for transformers that already had [get\_feature\_names](https://scikit-learn.org/1.1/glossary.html#term-get_feature_names) and for transformers with a one-to-one correspondence between input and output, such as [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler"). [get\_feature\_names\_out](https://scikit-learn.org/1.1/glossary.html#term-get_feature_names_out) support will be added to all other transformers in future releases. Additionally, [`compose.ColumnTransformer.get_feature_names_out`](../../modules/generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer.get_feature_names_out "sklearn.compose.ColumnTransformer.get_feature_names_out") is available to combine the feature names of its transformers:
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
import pandas as pd
X = pd.DataFrame({"pet": ["dog", "cat", "fish"], "age": [3, 7, 1]})
preprocessor = ColumnTransformer(
[
("numerical", StandardScaler(), ["age"]),
("categorical", OneHotEncoder(), ["pet"]),
],
verbose_feature_names_out=False,
).fit(X)
preprocessor.get_feature_names_out()
```
```
array(['age', 'pet_cat', 'pet_dog', 'pet_fish'], dtype=object)
```
When this `preprocessor` is used with a pipeline, the feature names used by the classifier are obtained by slicing and calling [get\_feature\_names\_out](https://scikit-learn.org/1.1/glossary.html#term-get_feature_names_out):
```
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
y = [1, 0, 1]
pipe = make_pipeline(preprocessor, LogisticRegression())
pipe.fit(X, y)
pipe[:-1].get_feature_names_out()
```
```
array(['age', 'pet_cat', 'pet_dog', 'pet_fish'], dtype=object)
```
A more flexible plotting API
----------------------------
[`metrics.ConfusionMatrixDisplay`](../../modules/generated/sklearn.metrics.confusionmatrixdisplay#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay"), [`metrics.PrecisionRecallDisplay`](../../modules/generated/sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay "sklearn.metrics.PrecisionRecallDisplay"), [`metrics.DetCurveDisplay`](../../modules/generated/sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay "sklearn.metrics.DetCurveDisplay"), and [`inspection.PartialDependenceDisplay`](../../modules/generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay "sklearn.inspection.PartialDependenceDisplay") now expose two class methods: `from_estimator` and `from_predictions` which allow users to create a plot given the predictions or an estimator. This means the corresponding `plot_*` functions are deprecated. Please check [example one](../model_selection/plot_confusion_matrix#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py) and [example two](../classification/plot_digits_classification#sphx-glr-auto-examples-classification-plot-digits-classification-py) for how to use the new plotting functionalities.
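As a brief sketch (with assumed synthetic data, not one of the linked examples), the same display can be built either from a fitted estimator or from precomputed predictions:
```
# Minimal sketch with assumed synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# Build the display directly from the estimator...
ConfusionMatrixDisplay.from_estimator(clf, X_test, y_test)
# ...or from predictions computed elsewhere.
ConfusionMatrixDisplay.from_predictions(y_test, clf.predict(X_test))
```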
Online One-Class SVM
--------------------
The new class [`SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") implements an online linear version of the One-Class SVM using a stochastic gradient descent. Combined with kernel approximation techniques, [`SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") can be used to approximate the solution of a kernelized One-Class SVM, implemented in [`OneClassSVM`](../../modules/generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM"), with a fit time complexity linear in the number of samples. Note that the complexity of a kernelized One-Class SVM is at best quadratic in the number of samples. [`SGDOneClassSVM`](../../modules/generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") is thus well suited for datasets with a large number of training samples (> 10,000) for which the SGD variant can be several orders of magnitude faster. Please check this [example](../miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py) to see how it’s used, and the [User Guide](../../modules/sgd#sgd-online-one-class-svm) for more details.
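A minimal sketch (with assumed random data) of the combination described above, pairing a kernel approximation with the new linear estimator:
```
# Minimal sketch with assumed random data.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDOneClassSVM
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(42)
X = rng.randn(10_000, 2)

# Nystroem approximates an RBF kernel feature map; SGDOneClassSVM then
# fits a linear One-Class SVM in that approximate feature space.
clf = make_pipeline(
    Nystroem(gamma=0.1, random_state=42),
    SGDOneClassSVM(nu=0.05, random_state=42),
).fit(X)
is_inlier = clf.predict(X)  # +1 for inliers, -1 for outliers
```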
Histogram-based Gradient Boosting Models are now stable
-------------------------------------------------------
[`HistGradientBoostingRegressor`](../../modules/generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") and [`HistGradientBoostingClassifier`](../../modules/generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") are no longer experimental and can simply be imported and used as:
```
from sklearn.ensemble import HistGradientBoostingClassifier
```
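For instance (a minimal sketch with assumed synthetic data), no experimental-feature import is required anymore:
```
# Minimal sketch with assumed synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_classification(n_samples=1_000, random_state=0)
clf = HistGradientBoostingClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```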
New documentation improvements
------------------------------
This release includes many documentation improvements. Out of over 2100 merged pull requests, about 800 are improvements to our documentation.
**Total running time of the script:** ( 0 minutes 0.014 seconds)
[`Download Python source code: plot_release_highlights_1_0_0.py`](https://scikit-learn.org/1.1/_downloads/7be24b5c175982744260a2ced8398a5a/plot_release_highlights_1_0_0.py)
[`Download Jupyter notebook: plot_release_highlights_1_0_0.ipynb`](https://scikit-learn.org/1.1/_downloads/f2677dfc7ce435a13dbd8d5fc7298413/plot_release_highlights_1_0_0.ipynb)
scikit_learn Map data to a normal distribution Note
Click [here](#sphx-glr-download-auto-examples-preprocessing-plot-map-data-to-normal-py) to download the full example code or to run this example in your browser via Binder
Map data to a normal distribution
=================================
This example demonstrates the use of the Box-Cox and Yeo-Johnson transforms through [`PowerTransformer`](../../modules/generated/sklearn.preprocessing.powertransformer#sklearn.preprocessing.PowerTransformer "sklearn.preprocessing.PowerTransformer") to map data from various distributions to a normal distribution.
The power transform is useful as a transformation in modeling problems where homoscedasticity and normality are desired. Below are examples of Box-Cox and Yeo-Johnson applied to six different probability distributions: Lognormal, Chi-squared, Weibull, Gaussian, Uniform, and Bimodal.
Note that the transformations successfully map the data to a normal distribution when applied to certain datasets, but are ineffective with others. This highlights the importance of visualizing the data before and after transformation.
Also note that even though Box-Cox seems to perform better than Yeo-Johnson for lognormal and chi-squared distributions, keep in mind that Box-Cox does not support inputs with negative values.
For comparison, we also add the output from [`QuantileTransformer`](../../modules/generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer"). It can force any arbitrary distribution into a gaussian, provided that there are enough training samples (thousands). Because it is a non-parametric method, it is harder to interpret than the parametric ones (Box-Cox and Yeo-Johnson).
On “small” datasets (less than a few hundred points), the quantile transformer is prone to overfitting. The use of the power transform is then recommended.
```
# Author: Eric Chang <[email protected]>
# Nicolas Hug <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PowerTransformer
from sklearn.preprocessing import QuantileTransformer
from sklearn.model_selection import train_test_split
N_SAMPLES = 1000
FONT_SIZE = 6
BINS = 30
rng = np.random.RandomState(304)
bc = PowerTransformer(method="box-cox")
yj = PowerTransformer(method="yeo-johnson")
# n_quantiles is set to the training set size rather than the default value
# to avoid a warning being raised by this example
qt = QuantileTransformer(
n_quantiles=500, output_distribution="normal", random_state=rng
)
size = (N_SAMPLES, 1)
# lognormal distribution
X_lognormal = rng.lognormal(size=size)
# chi-squared distribution
df = 3
X_chisq = rng.chisquare(df=df, size=size)
# weibull distribution
a = 50
X_weibull = rng.weibull(a=a, size=size)
# gaussian distribution
loc = 100
X_gaussian = rng.normal(loc=loc, size=size)
# uniform distribution
X_uniform = rng.uniform(low=0, high=1, size=size)
# bimodal distribution
loc_a, loc_b = 100, 105
X_a, X_b = rng.normal(loc=loc_a, size=size), rng.normal(loc=loc_b, size=size)
X_bimodal = np.concatenate([X_a, X_b], axis=0)
# create plots
distributions = [
("Lognormal", X_lognormal),
("Chi-squared", X_chisq),
("Weibull", X_weibull),
("Gaussian", X_gaussian),
("Uniform", X_uniform),
("Bimodal", X_bimodal),
]
colors = ["#D81B60", "#0188FF", "#FFC107", "#B7A2FF", "#000000", "#2EC5AC"]
fig, axes = plt.subplots(nrows=8, ncols=3, figsize=plt.figaspect(2))
axes = axes.flatten()
axes_idxs = [
(0, 3, 6, 9),
(1, 4, 7, 10),
(2, 5, 8, 11),
(12, 15, 18, 21),
(13, 16, 19, 22),
(14, 17, 20, 23),
]
axes_list = [(axes[i], axes[j], axes[k], axes[l]) for (i, j, k, l) in axes_idxs]
for distribution, color, axes in zip(distributions, colors, axes_list):
name, X = distribution
X_train, X_test = train_test_split(X, test_size=0.5)
# perform power transforms and quantile transform
X_trans_bc = bc.fit(X_train).transform(X_test)
lmbda_bc = round(bc.lambdas_[0], 2)
X_trans_yj = yj.fit(X_train).transform(X_test)
lmbda_yj = round(yj.lambdas_[0], 2)
X_trans_qt = qt.fit(X_train).transform(X_test)
ax_original, ax_bc, ax_yj, ax_qt = axes
ax_original.hist(X_train, color=color, bins=BINS)
ax_original.set_title(name, fontsize=FONT_SIZE)
ax_original.tick_params(axis="both", which="major", labelsize=FONT_SIZE)
for ax, X_trans, meth_name, lmbda in zip(
(ax_bc, ax_yj, ax_qt),
(X_trans_bc, X_trans_yj, X_trans_qt),
("Box-Cox", "Yeo-Johnson", "Quantile transform"),
(lmbda_bc, lmbda_yj, None),
):
ax.hist(X_trans, color=color, bins=BINS)
title = "After {}".format(meth_name)
if lmbda is not None:
title += "\n$\\lambda$ = {}".format(lmbda)
ax.set_title(title, fontsize=FONT_SIZE)
ax.tick_params(axis="both", which="major", labelsize=FONT_SIZE)
ax.set_xlim([-3.5, 3.5])
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.332 seconds)
[`Download Python source code: plot_map_data_to_normal.py`](https://scikit-learn.org/1.1/_downloads/4a586e9878904f51e624d94c03b3beff/plot_map_data_to_normal.py)
[`Download Jupyter notebook: plot_map_data_to_normal.ipynb`](https://scikit-learn.org/1.1/_downloads/fc26dc0b9d906466315a49a860435409/plot_map_data_to_normal.ipynb)
scikit_learn Using KBinsDiscretizer to discretize continuous features Note
Click [here](#sphx-glr-download-auto-examples-preprocessing-plot-discretization-py) to download the full example code or to run this example in your browser via Binder
Using KBinsDiscretizer to discretize continuous features
========================================================
The example compares the prediction results of linear regression (a linear model) and a decision tree (a tree-based model) with and without discretization of real-valued features.
As shown in the result before discretization, the linear model is fast to build and relatively straightforward to interpret, but it can only model linear relationships, while the decision tree can build a much more complex model of the data. One way to make a linear model more powerful on continuous data is to use discretization (also known as binning). In the example, we discretize the feature and one-hot encode the transformed data. Note that if the bins are not reasonably wide, the risk of overfitting increases substantially, so the discretizer parameters should usually be tuned under cross validation.
After discretization, linear regression and the decision tree make exactly the same prediction. As features are constant within each bin, any model must predict the same value for all points within a bin. Compared with the result before discretization, the linear model becomes much more flexible while the decision tree becomes much less flexible. Note that binning features generally has no beneficial effect for tree-based models, as these models can learn to split up the data anywhere.
```
# Author: Andreas Müller
# Hanmin Qin <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.tree import DecisionTreeRegressor
# construct the dataset
rnd = np.random.RandomState(42)
X = rnd.uniform(-3, 3, size=100)
y = np.sin(X) + rnd.normal(size=len(X)) / 3
X = X.reshape(-1, 1)
# transform the dataset with KBinsDiscretizer
enc = KBinsDiscretizer(n_bins=10, encode="onehot")
X_binned = enc.fit_transform(X)
# predict with original dataset
fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True, figsize=(10, 4))
line = np.linspace(-3, 3, 1000, endpoint=False).reshape(-1, 1)
reg = LinearRegression().fit(X, y)
ax1.plot(line, reg.predict(line), linewidth=2, color="green", label="linear regression")
reg = DecisionTreeRegressor(min_samples_split=3, random_state=0).fit(X, y)
ax1.plot(line, reg.predict(line), linewidth=2, color="red", label="decision tree")
ax1.plot(X[:, 0], y, "o", c="k")
ax1.legend(loc="best")
ax1.set_ylabel("Regression output")
ax1.set_xlabel("Input feature")
ax1.set_title("Result before discretization")
# predict with transformed dataset
line_binned = enc.transform(line)
reg = LinearRegression().fit(X_binned, y)
ax2.plot(
line,
reg.predict(line_binned),
linewidth=2,
color="green",
linestyle="-",
label="linear regression",
)
reg = DecisionTreeRegressor(min_samples_split=3, random_state=0).fit(X_binned, y)
ax2.plot(
line,
reg.predict(line_binned),
linewidth=2,
color="red",
linestyle=":",
label="decision tree",
)
ax2.plot(X[:, 0], y, "o", c="k")
ax2.vlines(enc.bin_edges_[0], *plt.gca().get_ylim(), linewidth=1, alpha=0.2)
ax2.legend(loc="best")
ax2.set_xlabel("Input feature")
ax2.set_title("Result after discretization")
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.139 seconds)
[`Download Python source code: plot_discretization.py`](https://scikit-learn.org/1.1/_downloads/7341736ba71d0e04b4b71061cfe9b78e/plot_discretization.py)
[`Download Jupyter notebook: plot_discretization.ipynb`](https://scikit-learn.org/1.1/_downloads/c4b61c2bcffecd3661bffe3c79ef6e0b/plot_discretization.ipynb)
scikit_learn Feature discretization Note
Click [here](#sphx-glr-download-auto-examples-preprocessing-plot-discretization-classification-py) to download the full example code or to run this example in your browser via Binder
Feature discretization
======================
A demonstration of feature discretization on synthetic classification datasets. Feature discretization decomposes each feature into a set of bins, here equally distributed in width. The discrete values are then one-hot encoded, and given to a linear classifier. This preprocessing enables a non-linear behavior even though the classifier is linear.
In this example, the first two rows represent linearly non-separable datasets (moons and concentric circles), while the third is approximately linearly separable. On the two linearly non-separable datasets, feature discretization largely increases the performance of linear classifiers. On the linearly separable dataset, feature discretization decreases the performance of linear classifiers. Two non-linear classifiers are also shown for comparison.
This example should be taken with a grain of salt, as the intuition conveyed does not necessarily carry over to real datasets. Particularly in high-dimensional spaces, data can more easily be separated linearly. Moreover, using feature discretization and one-hot encoding increases the number of features, which easily leads to overfitting when the number of samples is small.
The plots show training points in solid colors and testing points semi-transparent. The lower right shows the classification accuracy on the test set.

```
dataset 0
---------
LogisticRegression: 0.86
LinearSVC: 0.86
KBinsDiscretizer + LogisticRegression: 0.86
KBinsDiscretizer + LinearSVC: 0.94
GradientBoostingClassifier: 0.90
SVC: 0.94
dataset 1
---------
LogisticRegression: 0.40
LinearSVC: 0.40
KBinsDiscretizer + LogisticRegression: 0.78
KBinsDiscretizer + LinearSVC: 0.80
GradientBoostingClassifier: 0.84
SVC: 0.84
dataset 2
---------
LogisticRegression: 0.98
LinearSVC: 0.96
KBinsDiscretizer + LogisticRegression: 0.94
KBinsDiscretizer + LinearSVC: 0.94
GradientBoostingClassifier: 0.94
SVC: 0.98
```
```
# Code source: Tom Dupré la Tour
# Adapted from plot_classifier_comparison by Gaël Varoquaux and Andreas Müller
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils._testing import ignore_warnings
from sklearn.exceptions import ConvergenceWarning
h = 0.02 # step size in the mesh
def get_name(estimator):
name = estimator.__class__.__name__
if name == "Pipeline":
name = [get_name(est[1]) for est in estimator.steps]
name = " + ".join(name)
return name
# list of (estimator, param_grid), where param_grid is used in GridSearchCV
# The parameter spaces in this example are limited to a narrow band to reduce
# its runtime. In a real use case, a broader search space for the algorithms
# should be used.
classifiers = [
(
make_pipeline(StandardScaler(), LogisticRegression(random_state=0)),
{"logisticregression__C": np.logspace(-1, 1, 3)},
),
(
make_pipeline(StandardScaler(), LinearSVC(random_state=0)),
{"linearsvc__C": np.logspace(-1, 1, 3)},
),
(
make_pipeline(
StandardScaler(),
KBinsDiscretizer(encode="onehot"),
LogisticRegression(random_state=0),
),
{
"kbinsdiscretizer__n_bins": np.arange(5, 8),
"logisticregression__C": np.logspace(-1, 1, 3),
},
),
(
make_pipeline(
StandardScaler(),
KBinsDiscretizer(encode="onehot"),
LinearSVC(random_state=0),
),
{
"kbinsdiscretizer__n_bins": np.arange(5, 8),
"linearsvc__C": np.logspace(-1, 1, 3),
},
),
(
make_pipeline(
StandardScaler(), GradientBoostingClassifier(n_estimators=5, random_state=0)
),
{"gradientboostingclassifier__learning_rate": np.logspace(-2, 0, 5)},
),
(
make_pipeline(StandardScaler(), SVC(random_state=0)),
{"svc__C": np.logspace(-1, 1, 3)},
),
]
names = [get_name(e).replace("StandardScaler + ", "") for e, _ in classifiers]
n_samples = 100
datasets = [
make_moons(n_samples=n_samples, noise=0.2, random_state=0),
make_circles(n_samples=n_samples, noise=0.2, factor=0.5, random_state=1),
make_classification(
n_samples=n_samples,
n_features=2,
n_redundant=0,
n_informative=2,
random_state=2,
n_clusters_per_class=1,
),
]
fig, axes = plt.subplots(
nrows=len(datasets), ncols=len(classifiers) + 1, figsize=(21, 9)
)
cm_piyg = plt.cm.PiYG
cm_bright = ListedColormap(["#b30065", "#178000"])
# iterate over datasets
for ds_cnt, (X, y) in enumerate(datasets):
print(f"\ndataset {ds_cnt}\n---------")
# split into training and test part
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=42
)
# create the grid for background colors
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# plot the dataset first
ax = axes[ds_cnt, 0]
if ds_cnt == 0:
ax.set_title("Input data")
# plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k")
# and testing points
ax.scatter(
X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k"
)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
# iterate over classifiers
for est_idx, (name, (estimator, param_grid)) in enumerate(zip(names, classifiers)):
ax = axes[ds_cnt, est_idx + 1]
clf = GridSearchCV(estimator=estimator, param_grid=param_grid)
with ignore_warnings(category=ConvergenceWarning):
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
print(f"{name}: {score:.2f}")
# plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]*[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.column_stack([xx.ravel(), yy.ravel()]))
else:
Z = clf.predict_proba(np.column_stack([xx.ravel(), yy.ravel()]))[:, 1]
# put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm_piyg, alpha=0.8)
# plot the training points
ax.scatter(
X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k"
)
# and testing points
ax.scatter(
X_test[:, 0],
X_test[:, 1],
c=y_test,
cmap=cm_bright,
edgecolors="k",
alpha=0.6,
)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
if ds_cnt == 0:
ax.set_title(name.replace(" + ", "\n"))
ax.text(
0.95,
0.06,
(f"{score:.2f}").lstrip("0"),
size=15,
bbox=dict(boxstyle="round", alpha=0.8, facecolor="white"),
transform=ax.transAxes,
horizontalalignment="right",
)
plt.tight_layout()
# Add suptitles above the figure
plt.subplots_adjust(top=0.90)
suptitles = [
"Linear classifiers",
"Feature discretization and linear classifiers",
"Non-linear classifiers",
]
for i, suptitle in zip([1, 3, 5], suptitles):
ax = axes[0, i]
ax.text(
1.05,
1.25,
suptitle,
transform=ax.transAxes,
horizontalalignment="center",
size="x-large",
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 2.740 seconds)
[`Download Python source code: plot_discretization_classification.py`](https://scikit-learn.org/1.1/_downloads/74caedf3eb449b80f3f00e66c1c576bd/plot_discretization_classification.py)
[`Download Jupyter notebook: plot_discretization_classification.ipynb`](https://scikit-learn.org/1.1/_downloads/aa8e07ce1b796a15ada1d9f0edce48b5/plot_discretization_classification.ipynb)
scikit_learn Compare the effect of different scalers on data with outliers Note
Click [here](#sphx-glr-download-auto-examples-preprocessing-plot-all-scaling-py) to download the full example code or to run this example in your browser via Binder
Compare the effect of different scalers on data with outliers
=============================================================
Feature 0 (median income in a block) and feature 5 (average house occupancy) of the [California Housing dataset](https://scikit-learn.org/1.1/datasets/real_world.html#california-housing-dataset) have very different scales and contain some very large outliers. These two characteristics make it difficult to visualize the data and, more importantly, can degrade the predictive performance of many machine learning algorithms. Unscaled data can also slow down or even prevent the convergence of many gradient-based estimators.
Indeed, many estimators are designed with the assumption that each feature takes values close to zero or, more importantly, that all features vary on comparable scales. In particular, metric-based and gradient-based estimators often assume approximately standardized data (centered features with unit variances). A notable exception is decision tree-based estimators, which are robust to arbitrary scaling of the data.
This example uses different scalers, transformers, and normalizers to bring the data within a pre-defined range.
Scalers are linear (or more precisely affine) transformers and differ from each other in the way they estimate the parameters used to shift and scale each feature.
[`QuantileTransformer`](../../modules/generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer") provides non-linear transformations in which distances between marginal outliers and inliers are shrunk. [`PowerTransformer`](../../modules/generated/sklearn.preprocessing.powertransformer#sklearn.preprocessing.PowerTransformer "sklearn.preprocessing.PowerTransformer") provides non-linear transformations in which data is mapped to a normal distribution to stabilize variance and minimize skewness.
Unlike the previous transformations, normalization refers to a per sample transformation instead of a per feature transformation.
The following code is a bit verbose, feel free to jump directly to the analysis of the [results](#results).
```
# Author: Raghav RV <[email protected]>
# Guillaume Lemaitre <[email protected]>
# Thomas Unterthiner
# License: BSD 3 clause
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import cm
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import minmax_scale
from sklearn.preprocessing import MaxAbsScaler
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import QuantileTransformer
from sklearn.preprocessing import PowerTransformer
from sklearn.datasets import fetch_california_housing
dataset = fetch_california_housing()
X_full, y_full = dataset.data, dataset.target
feature_names = dataset.feature_names
feature_mapping = {
"MedInc": "Median income in block",
"HousAge": "Median house age in block",
"AveRooms": "Average number of rooms",
"AveBedrms": "Average number of bedrooms",
"Population": "Block population",
"AveOccup": "Average house occupancy",
"Latitude": "House block latitude",
"Longitude": "House block longitude",
}
# Take only 2 features to make visualization easier
# Feature MedInc has a long tail distribution.
# Feature AveOccup has a few but very large outliers.
features = ["MedInc", "AveOccup"]
features_idx = [feature_names.index(feature) for feature in features]
X = X_full[:, features_idx]
distributions = [
("Unscaled data", X),
("Data after standard scaling", StandardScaler().fit_transform(X)),
("Data after min-max scaling", MinMaxScaler().fit_transform(X)),
("Data after max-abs scaling", MaxAbsScaler().fit_transform(X)),
(
"Data after robust scaling",
RobustScaler(quantile_range=(25, 75)).fit_transform(X),
),
(
"Data after power transformation (Yeo-Johnson)",
PowerTransformer(method="yeo-johnson").fit_transform(X),
),
(
"Data after power transformation (Box-Cox)",
PowerTransformer(method="box-cox").fit_transform(X),
),
(
"Data after quantile transformation (uniform pdf)",
QuantileTransformer(output_distribution="uniform").fit_transform(X),
),
(
"Data after quantile transformation (gaussian pdf)",
QuantileTransformer(output_distribution="normal").fit_transform(X),
),
("Data after sample-wise L2 normalizing", Normalizer().fit_transform(X)),
]
# scale the output between 0 and 1 for the colorbar
y = minmax_scale(y_full)
# plasma does not exist in matplotlib < 1.5
cmap = getattr(cm, "plasma_r", cm.hot_r)
def create_axes(title, figsize=(16, 6)):
fig = plt.figure(figsize=figsize)
fig.suptitle(title)
# define the axis for the first plot
left, width = 0.1, 0.22
bottom, height = 0.1, 0.7
bottom_h = height + 0.15
left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.1]
rect_histy = [left_h, bottom, 0.05, height]
ax_scatter = plt.axes(rect_scatter)
ax_histx = plt.axes(rect_histx)
ax_histy = plt.axes(rect_histy)
# define the axis for the zoomed-in plot
left = width + left + 0.2
left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.1]
rect_histy = [left_h, bottom, 0.05, height]
ax_scatter_zoom = plt.axes(rect_scatter)
ax_histx_zoom = plt.axes(rect_histx)
ax_histy_zoom = plt.axes(rect_histy)
# define the axis for the colorbar
left, width = width + left + 0.13, 0.01
rect_colorbar = [left, bottom, width, height]
ax_colorbar = plt.axes(rect_colorbar)
return (
(ax_scatter, ax_histy, ax_histx),
(ax_scatter_zoom, ax_histy_zoom, ax_histx_zoom),
ax_colorbar,
)
def plot_distribution(axes, X, y, hist_nbins=50, title="", x0_label="", x1_label=""):
ax, hist_X1, hist_X0 = axes
ax.set_title(title)
ax.set_xlabel(x0_label)
ax.set_ylabel(x1_label)
# The scatter plot
colors = cmap(y)
ax.scatter(X[:, 0], X[:, 1], alpha=0.5, marker="o", s=5, lw=0, c=colors)
# Removing the top and the right spine for aesthetics
# make nice axis layout
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines["left"].set_position(("outward", 10))
ax.spines["bottom"].set_position(("outward", 10))
# Histogram for axis X1 (feature 5)
hist_X1.set_ylim(ax.get_ylim())
hist_X1.hist(
X[:, 1], bins=hist_nbins, orientation="horizontal", color="grey", ec="grey"
)
hist_X1.axis("off")
# Histogram for axis X0 (feature 0)
hist_X0.set_xlim(ax.get_xlim())
hist_X0.hist(
X[:, 0], bins=hist_nbins, orientation="vertical", color="grey", ec="grey"
)
hist_X0.axis("off")
```
Two plots will be shown for each scaler/normalizer/transformer. The left figure will show a scatter plot of the full data set, while the right figure will consider only 99% of the data set, excluding the marginal outliers. In addition, the marginal distributions for each feature will be shown on the sides of the scatter plot.
```
def make_plot(item_idx):
title, X = distributions[item_idx]
ax_zoom_out, ax_zoom_in, ax_colorbar = create_axes(title)
axarr = (ax_zoom_out, ax_zoom_in)
plot_distribution(
axarr[0],
X,
y,
hist_nbins=200,
x0_label=feature_mapping[features[0]],
x1_label=feature_mapping[features[1]],
title="Full data",
)
# zoom-in
zoom_in_percentile_range = (0, 99)
cutoffs_X0 = np.percentile(X[:, 0], zoom_in_percentile_range)
cutoffs_X1 = np.percentile(X[:, 1], zoom_in_percentile_range)
non_outliers_mask = np.all(X > [cutoffs_X0[0], cutoffs_X1[0]], axis=1) & np.all(
X < [cutoffs_X0[1], cutoffs_X1[1]], axis=1
)
plot_distribution(
axarr[1],
X[non_outliers_mask],
y[non_outliers_mask],
hist_nbins=50,
x0_label=feature_mapping[features[0]],
x1_label=feature_mapping[features[1]],
title="Zoom-in",
)
norm = mpl.colors.Normalize(y_full.min(), y_full.max())
mpl.colorbar.ColorbarBase(
ax_colorbar,
cmap=cmap,
norm=norm,
orientation="vertical",
label="Color mapping for values of y",
)
```
Original data
-------------
Each transformation is plotted showing two transformed features, with the left plot showing the entire dataset, and the right zoomed-in to show the dataset without the marginal outliers. A large majority of the samples are compacted to a specific range, [0, 10] for the median income and [0, 6] for the average house occupancy. Note that there are some marginal outliers (some blocks have an average occupancy of more than 1200). Therefore, a specific pre-processing can be very beneficial depending on the application. In the following, we present some insights and behaviors of those pre-processing methods in the presence of marginal outliers.
```
make_plot(0)
```
StandardScaler
--------------
[`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") removes the mean and scales the data to unit variance. The scaling shrinks the range of the feature values as shown in the left figure below. However, the outliers have an influence when computing the empirical mean and standard deviation. Note in particular that because the outliers on each feature have different magnitudes, the spread of the transformed data on each feature is very different: most of the data lie in the [-2, 4] range for the transformed median income feature while the same data is squeezed in the smaller [-0.2, 0.2] range for the transformed average house occupancy.
[`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") therefore cannot guarantee balanced feature scales in the presence of outliers.
```
make_plot(1)
```
MinMaxScaler
------------
[`MinMaxScaler`](../../modules/generated/sklearn.preprocessing.minmaxscaler#sklearn.preprocessing.MinMaxScaler "sklearn.preprocessing.MinMaxScaler") rescales the data set such that all feature values are in the range [0, 1] as shown in the right panel below. However, this scaling compresses all inliers into the narrow range [0, 0.005] for the transformed average house occupancy.
Both [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") and [`MinMaxScaler`](../../modules/generated/sklearn.preprocessing.minmaxscaler#sklearn.preprocessing.MinMaxScaler "sklearn.preprocessing.MinMaxScaler") are very sensitive to the presence of outliers.
```
make_plot(2)
```
MaxAbsScaler
------------
[`MaxAbsScaler`](../../modules/generated/sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler") is similar to [`MinMaxScaler`](../../modules/generated/sklearn.preprocessing.minmaxscaler#sklearn.preprocessing.MinMaxScaler "sklearn.preprocessing.MinMaxScaler") except that the values are mapped across several ranges depending on whether negative OR positive values are present. If only positive values are present, the range is [0, 1]. If only negative values are present, the range is [-1, 0]. If both negative and positive values are present, the range is [-1, 1]. On positive only data, both [`MinMaxScaler`](../../modules/generated/sklearn.preprocessing.minmaxscaler#sklearn.preprocessing.MinMaxScaler "sklearn.preprocessing.MinMaxScaler") and [`MaxAbsScaler`](../../modules/generated/sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler") behave similarly. [`MaxAbsScaler`](../../modules/generated/sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler") therefore also suffers from the presence of large outliers.
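To make the mapping concrete, here is a minimal sketch on a small assumed toy array (separate from the `make_plot` figures of this example):
```
# Minimal sketch on an assumed toy array: each feature is divided by its
# maximum absolute value.
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

X_toy = np.array([[-4.0, 2.0],
                  [2.0, 8.0],
                  [1.0, 10.0]])
print(MaxAbsScaler().fit_transform(X_toy))
# The first feature (mixed signs) ends up in [-1, 1],
# the second (strictly positive) in [0, 1].
```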
```
make_plot(3)
```
RobustScaler
------------
Unlike the previous scalers, the centering and scaling statistics of [`RobustScaler`](../../modules/generated/sklearn.preprocessing.robustscaler#sklearn.preprocessing.RobustScaler "sklearn.preprocessing.RobustScaler") are based on percentiles and are therefore not influenced by a small number of very large marginal outliers. Consequently, the resulting ranges of the transformed feature values are larger than for the previous scalers and, more importantly, approximately similar: for both features most of the transformed values lie in a [-2, 3] range as seen in the zoomed-in figure. Note that the outliers themselves are still present in the transformed data. If a separate outlier clipping is desirable, a non-linear transformation is required (see below).
```
make_plot(4)
```
PowerTransformer
----------------
[`PowerTransformer`](../../modules/generated/sklearn.preprocessing.powertransformer#sklearn.preprocessing.PowerTransformer "sklearn.preprocessing.PowerTransformer") applies a power transformation to each feature to make the data more Gaussian-like in order to stabilize variance and minimize skewness. Currently the Yeo-Johnson and Box-Cox transforms are supported and the optimal scaling factor is determined via maximum likelihood estimation in both methods. By default, [`PowerTransformer`](../../modules/generated/sklearn.preprocessing.powertransformer#sklearn.preprocessing.PowerTransformer "sklearn.preprocessing.PowerTransformer") applies zero-mean, unit variance normalization. Note that Box-Cox can only be applied to strictly positive data. Income and average house occupancy happen to be strictly positive, but if negative values are present the Yeo-Johnson transform is preferred.
```
make_plot(5)
make_plot(6)
```
QuantileTransformer (uniform output)
------------------------------------
[`QuantileTransformer`](../../modules/generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer") applies a non-linear transformation such that the probability density function of each feature will be mapped to a uniform or Gaussian distribution. In this case, all the data, including outliers, will be mapped to a uniform distribution with the range [0, 1], making outliers indistinguishable from inliers.
[`RobustScaler`](../../modules/generated/sklearn.preprocessing.robustscaler#sklearn.preprocessing.RobustScaler "sklearn.preprocessing.RobustScaler") and [`QuantileTransformer`](../../modules/generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer") are robust to outliers in the sense that adding or removing outliers in the training set will yield approximately the same transformation. But contrary to [`RobustScaler`](../../modules/generated/sklearn.preprocessing.robustscaler#sklearn.preprocessing.RobustScaler "sklearn.preprocessing.RobustScaler"), [`QuantileTransformer`](../../modules/generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer") will also automatically collapse any outliers by setting them to the a priori defined range boundaries (0 and 1). This can result in saturation artifacts for extreme values.
```
make_plot(7)
```
 QuantileTransformer (Gaussian output)
-------------------------------------
To map to a Gaussian distribution, set the parameter `output_distribution='normal'`.
```
make_plot(8)
```
 Normalizer
----------
The [`Normalizer`](../../modules/generated/sklearn.preprocessing.normalizer#sklearn.preprocessing.Normalizer "sklearn.preprocessing.Normalizer") rescales the vector for each sample to have unit norm, independently of the distribution of the samples. This can be seen in both figures below, where all samples are mapped onto the unit circle. In our example the two selected features have only positive values; therefore the transformed data only lie in the positive quadrant. This would not be the case if some original features had a mix of positive and negative values.
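As a minimal sketch on an assumed toy array (separate from the figures of this example), each sample is divided by its own L2 norm:
```
# Minimal sketch on an assumed toy array.
import numpy as np
from sklearn.preprocessing import Normalizer

print(Normalizer().fit_transform(np.array([[3.0, 4.0], [1.0, 0.0]])))
# [[0.6 0.8]
#  [1.  0. ]]
```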
```
make_plot(9)
plt.show()
```
**Total running time of the script:** ( 0 minutes 7.237 seconds)
[`Download Python source code: plot_all_scaling.py`](https://scikit-learn.org/1.1/_downloads/24475810034a0d0d190a9de0f87d72b5/plot_all_scaling.py)
[`Download Jupyter notebook: plot_all_scaling.ipynb`](https://scikit-learn.org/1.1/_downloads/e60e99adef360baabc49b925646a39d9/plot_all_scaling.ipynb)
scikit_learn Demonstrating the different strategies of KBinsDiscretizer Note
Click [here](#sphx-glr-download-auto-examples-preprocessing-plot-discretization-strategies-py) to download the full example code or to run this example in your browser via Binder
Demonstrating the different strategies of KBinsDiscretizer
==========================================================
This example presents the different strategies implemented in KBinsDiscretizer:
* ‘uniform’: The discretization is uniform in each feature, which means that the bin widths are constant in each dimension.
* ‘quantile’: The discretization is done on the quantiled values, which means that each bin has approximately the same number of samples.
* ‘kmeans’: The discretization is based on the centroids of a KMeans clustering procedure.
The plot shows the regions where the discretized encoding is constant.
```
# Author: Tom Dupré la Tour
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.datasets import make_blobs
strategies = ["uniform", "quantile", "kmeans"]
n_samples = 200
centers_0 = np.array([[0, 0], [0, 5], [2, 4], [8, 8]])
centers_1 = np.array([[0, 0], [3, 1]])
# construct the datasets
random_state = 42
X_list = [
np.random.RandomState(random_state).uniform(-3, 3, size=(n_samples, 2)),
make_blobs(
n_samples=[
n_samples // 10,
n_samples * 4 // 10,
n_samples // 10,
n_samples * 4 // 10,
],
cluster_std=0.5,
centers=centers_0,
random_state=random_state,
)[0],
make_blobs(
n_samples=[n_samples // 5, n_samples * 4 // 5],
cluster_std=0.5,
centers=centers_1,
random_state=random_state,
)[0],
]
figure = plt.figure(figsize=(14, 9))
i = 1
for ds_cnt, X in enumerate(X_list):
ax = plt.subplot(len(X_list), len(strategies) + 1, i)
ax.scatter(X[:, 0], X[:, 1], edgecolors="k")
if ds_cnt == 0:
ax.set_title("Input data", size=14)
xx, yy = np.meshgrid(
np.linspace(X[:, 0].min(), X[:, 0].max(), 300),
np.linspace(X[:, 1].min(), X[:, 1].max(), 300),
)
grid = np.c_[xx.ravel(), yy.ravel()]
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# transform the dataset with KBinsDiscretizer
for strategy in strategies:
enc = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy=strategy)
enc.fit(X)
grid_encoded = enc.transform(grid)
ax = plt.subplot(len(X_list), len(strategies) + 1, i)
# horizontal stripes
horizontal = grid_encoded[:, 0].reshape(xx.shape)
ax.contourf(xx, yy, horizontal, alpha=0.5)
# vertical stripes
vertical = grid_encoded[:, 1].reshape(xx.shape)
ax.contourf(xx, yy, vertical, alpha=0.5)
ax.scatter(X[:, 0], X[:, 1], edgecolors="k")
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
if ds_cnt == 0:
ax.set_title("strategy='%s'" % (strategy,), size=14)
i += 1
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.593 seconds)
[`Download Python source code: plot_discretization_strategies.py`](https://scikit-learn.org/1.1/_downloads/43e84df0b93ff974da370e8da900f2ee/plot_discretization_strategies.py)
[`Download Jupyter notebook: plot_discretization_strategies.ipynb`](https://scikit-learn.org/1.1/_downloads/adc9be3b7acc279025dad9ee4ce92038/plot_discretization_strategies.ipynb)
scikit_learn Importance of Feature Scaling Note
Click [here](#sphx-glr-download-auto-examples-preprocessing-plot-scaling-importance-py) to download the full example code or to run this example in your browser via Binder
Importance of Feature Scaling
=============================
Feature scaling through standardization (or Z-score normalization) can be an important preprocessing step for many machine learning algorithms. Standardization involves rescaling the features such that they have the properties of a standard normal distribution with a mean of zero and a standard deviation of one.
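As a minimal sketch (on an assumed toy array, separate from the example below), standardization computes z = (x - mean) / std for each feature:
```
# Minimal sketch on an assumed toy array.
import numpy as np
from sklearn.preprocessing import StandardScaler

X_toy = np.array([[1.0, 100.0], [2.0, 110.0], [3.0, 120.0]])
X_std = StandardScaler().fit_transform(X_toy)
print(X_std.mean(axis=0))  # ~[0. 0.]
print(X_std.std(axis=0))   # ~[1. 1.]
```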
While many algorithms (such as SVM, K-nearest neighbors, and logistic regression) require features to be normalized, intuitively we can think of Principal Component Analysis (PCA) as being a prime example of when normalization is important. In PCA we are interested in the components that maximize the variance. If one component (e.g. human height) varies less than another (e.g. weight) because of their respective scales (meters vs. kilos), PCA might determine that the direction of maximal variance more closely corresponds with the ‘weight’ axis, if those features are not scaled. As a change in height of one meter can be considered much more important than the change in weight of one kilogram, this is clearly incorrect.
To illustrate this, [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") is performed comparing the use of data with [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") applied, to unscaled data. The results are visualized and a clear difference noted. For the 1st principal component of the unscaled data, feature #13 dominates the direction, being a whole two orders of magnitude above the other features. In contrast, for the scaled version of the data, the orders of magnitude are roughly the same across all the features.
The dataset used is the Wine Dataset available at UCI. This dataset has continuous features that are heterogeneous in scale due to differing properties that they measure (e.g. alcohol content and malic acid).
The transformed data is then used to train a naive Bayes classifier, and a clear difference in prediction accuracies is observed: the dataset scaled before PCA vastly outperforms the unscaled version.
```
Prediction accuracy for the normal test dataset with PCA
81.48%
Prediction accuracy for the standardized test dataset with PCA
98.15%
PC 1 without scaling:
[ 1.76e-03 -8.36e-04 1.55e-04 -5.31e-03 2.02e-02 1.02e-03 1.53e-03
-1.12e-04 6.31e-04 2.33e-03 1.54e-04 7.43e-04 1.00e+00]
PC 1 with scaling:
[ 0.13 -0.26 -0.01 -0.23 0.16 0.39 0.42 -0.28 0.33 -0.11 0.3 0.38
0.28]
```
```
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_wine
from sklearn.pipeline import make_pipeline
# Code source: Tyler Lanigan <[email protected]>
# Sebastian Raschka <[email protected]>
# License: BSD 3 clause
RANDOM_STATE = 42
FIG_SIZE = (10, 7)
features, target = load_wine(return_X_y=True)
# Make a train/test split using 30% test size
X_train, X_test, y_train, y_test = train_test_split(
features, target, test_size=0.30, random_state=RANDOM_STATE
)
# Fit to data and predict using pipelined GNB and PCA
unscaled_clf = make_pipeline(PCA(n_components=2), GaussianNB())
unscaled_clf.fit(X_train, y_train)
pred_test = unscaled_clf.predict(X_test)
# Fit to data and predict using pipelined scaling, GNB and PCA
std_clf = make_pipeline(StandardScaler(), PCA(n_components=2), GaussianNB())
std_clf.fit(X_train, y_train)
pred_test_std = std_clf.predict(X_test)
# Show prediction accuracies in scaled and unscaled data.
print("\nPrediction accuracy for the normal test dataset with PCA")
print(f"{accuracy_score(y_test, pred_test):.2%}\n")
print("\nPrediction accuracy for the standardized test dataset with PCA")
print(f"{accuracy_score(y_test, pred_test_std):.2%}\n")
# Extract PCA from pipeline
pca = unscaled_clf.named_steps["pca"]
pca_std = std_clf.named_steps["pca"]
# Show first principal components
print(f"\nPC 1 without scaling:\n{pca.components_[0]}")
print(f"\nPC 1 with scaling:\n{pca_std.components_[0]}")
# Use PCA without and with scale on X_train data for visualization.
X_train_transformed = pca.transform(X_train)
scaler = std_clf.named_steps["standardscaler"]
scaled_X_train = scaler.transform(X_train)
X_train_std_transformed = pca_std.transform(scaled_X_train)
# visualize standardized vs. untouched dataset with PCA performed
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=FIG_SIZE)
target_classes = range(0, 3)
colors = ("blue", "red", "green")
markers = ("^", "s", "o")
for target_class, color, marker in zip(target_classes, colors, markers):
ax1.scatter(
x=X_train_transformed[y_train == target_class, 0],
y=X_train_transformed[y_train == target_class, 1],
color=color,
label=f"class {target_class}",
alpha=0.5,
marker=marker,
)
ax2.scatter(
x=X_train_std_transformed[y_train == target_class, 0],
y=X_train_std_transformed[y_train == target_class, 1],
color=color,
label=f"class {target_class}",
alpha=0.5,
marker=marker,
)
ax1.set_title("Training dataset after PCA")
ax2.set_title("Standardized training dataset after PCA")
for ax in (ax1, ax2):
ax.set_xlabel("1st principal component")
ax.set_ylabel("2nd principal component")
ax.legend(loc="upper right")
ax.grid()
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.175 seconds)
[`Download Python source code: plot_scaling_importance.py`](https://scikit-learn.org/1.1/_downloads/4ef6a0e5e8f2fe6463d63928373e5f91/plot_scaling_importance.py)
[`Download Jupyter notebook: plot_scaling_importance.ipynb`](https://scikit-learn.org/1.1/_downloads/c9688d36cfbf43a68f3613b58110ceaa/plot_scaling_importance.ipynb)
scikit_learn Classification of text documents using sparse features Note
Click [here](#sphx-glr-download-auto-examples-text-plot-document-classification-20newsgroups-py) to download the full example code or to run this example in your browser via Binder
Classification of text documents using sparse features
======================================================
This is an example showing how scikit-learn can be used to classify documents by topics using a [Bag of Words approach](https://en.wikipedia.org/wiki/Bag-of-words_model). This example uses a Tf-idf-weighted document-term sparse matrix to encode the features and demonstrates various classifiers that can efficiently handle sparse matrices.
For document analysis via an unsupervised learning approach, see the example script [Clustering text documents using k-means](plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py).
```
# Author: Peter Prettenhofer <[email protected]>
# Olivier Grisel <[email protected]>
# Mathieu Blondel <[email protected]>
# Arturo Amor <[email protected]>
# Lars Buitinck
# License: BSD 3 clause
```
Loading and vectorizing the 20 newsgroups text dataset
------------------------------------------------------
We define a function to load data from [The 20 newsgroups text dataset](https://scikit-learn.org/1.1/datasets/real_world.html#newsgroups-dataset), which comprises around 18,000 newsgroups posts on 20 topics split in two subsets: one for training (or development) and the other one for testing (or for performance evaluation). Note that, by default, the text samples contain some message metadata such as `'headers'`, `'footers'` (signatures) and `'quotes'` to other posts. The `fetch_20newsgroups` function therefore accepts a parameter named `remove` to attempt stripping such information that can make the classification problem “too easy”. This is achieved using simple heuristics that are neither perfect nor standard, hence disabled by default.
```
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from time import time
categories = [
"alt.atheism",
"talk.religion.misc",
"comp.graphics",
"sci.space",
]
def size_mb(docs):
return sum(len(s.encode("utf-8")) for s in docs) / 1e6
def load_dataset(verbose=False, remove=()):
"""Load and vectorize the 20 newsgroups dataset."""
data_train = fetch_20newsgroups(
subset="train",
categories=categories,
shuffle=True,
random_state=42,
remove=remove,
)
data_test = fetch_20newsgroups(
subset="test",
categories=categories,
shuffle=True,
random_state=42,
remove=remove,
)
# order of labels in `target_names` can be different from `categories`
target_names = data_train.target_names
# split target in a training set and a test set
y_train, y_test = data_train.target, data_test.target
# Extracting features from the training data using a sparse vectorizer
t0 = time()
vectorizer = TfidfVectorizer(
sublinear_tf=True, max_df=0.5, min_df=5, stop_words="english"
)
X_train = vectorizer.fit_transform(data_train.data)
duration_train = time() - t0
# Extracting features from the test data using the same vectorizer
t0 = time()
X_test = vectorizer.transform(data_test.data)
duration_test = time() - t0
feature_names = vectorizer.get_feature_names_out()
if verbose:
# compute size of loaded data
data_train_size_mb = size_mb(data_train.data)
data_test_size_mb = size_mb(data_test.data)
print(
f"{len(data_train.data)} documents - "
f"{data_train_size_mb:.2f}MB (training set)"
)
print(f"{len(data_test.data)} documents - {data_test_size_mb:.2f}MB (test set)")
print(f"{len(target_names)} categories")
print(
f"vectorize training done in {duration_train:.3f}s "
f"at {data_train_size_mb / duration_train:.3f}MB/s"
)
print(f"n_samples: {X_train.shape[0]}, n_features: {X_train.shape[1]}")
print(
f"vectorize testing done in {duration_test:.3f}s "
f"at {data_test_size_mb / duration_test:.3f}MB/s"
)
print(f"n_samples: {X_test.shape[0]}, n_features: {X_test.shape[1]}")
return X_train, X_test, y_train, y_test, feature_names, target_names
```
Analysis of a bag-of-words document classifier
----------------------------------------------
We will now train a classifier twice, once on the text samples including metadata and once after stripping the metadata. For both cases we will analyze the classification errors on a test set using a confusion matrix and inspect the coefficients that define the classification function of the trained models.
### Model without metadata stripping
We start by using the custom function `load_dataset` to load the data without metadata stripping.
```
X_train, X_test, y_train, y_test, feature_names, target_names = load_dataset(
verbose=True
)
```
```
2034 documents - 3.98MB (training set)
1353 documents - 2.87MB (test set)
4 categories
vectorize training done in 0.352s at 11.305MB/s
n_samples: 2034, n_features: 7831
vectorize testing done in 0.227s at 12.649MB/s
n_samples: 1353, n_features: 7831
```
Our first model is an instance of the [`RidgeClassifier`](../../modules/generated/sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier "sklearn.linear_model.RidgeClassifier") class. This is a linear classification model that uses the mean squared error on {-1, 1} encoded targets, one for each possible class. Contrary to [`LogisticRegression`](../../modules/generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression"), [`RidgeClassifier`](../../modules/generated/sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier "sklearn.linear_model.RidgeClassifier") does not provide probabilistic predictions (no `predict_proba` method), but it is often faster to train.
```
from sklearn.linear_model import RidgeClassifier
clf = RidgeClassifier(tol=1e-2, solver="sparse_cg")
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
```
We plot the confusion matrix of this classifier to find if there is a pattern in the classification errors.
```
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
fig, ax = plt.subplots(figsize=(10, 5))
ConfusionMatrixDisplay.from_predictions(y_test, pred, ax=ax)
ax.xaxis.set_ticklabels(target_names)
ax.yaxis.set_ticklabels(target_names)
_ = ax.set_title(
f"Confusion Matrix for {clf.__class__.__name__}\non the original documents"
)
```
The confusion matrix highlights that documents of the `alt.atheism` class are often confused with documents of the `talk.religion.misc` class and vice versa, which is expected since the two topics are semantically related.
We also observe that some documents of the `sci.space` class can be misclassified as `comp.graphics` while the converse is much rarer. A manual inspection of those badly classified documents would be required to get some insight into this asymmetry. It could be that the vocabulary of the space topic is more specific than the vocabulary for computer graphics.
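Such an inspection is not part of the original analysis, but a minimal sketch could look as follows. It reuses `categories`, `y_test`, `pred` and `target_names` from above and re-fetches the raw test documents with the same parameters as in `load_dataset`, assuming this makes their order match `y_test`:
```
# Hypothetical inspection snippet: print a snippet of the sci.space posts
# that the classifier predicted as comp.graphics.
import numpy as np
from sklearn.datasets import fetch_20newsgroups

data_test = fetch_20newsgroups(
    subset="test", categories=categories, shuffle=True, random_state=42
)
space_idx = target_names.index("sci.space")
graphics_idx = target_names.index("comp.graphics")
misclassified = np.where((y_test == space_idx) & (pred == graphics_idx))[0]
print(f"{len(misclassified)} sci.space documents predicted as comp.graphics")
for doc_idx in misclassified[:2]:
    print("-" * 40)
    print(data_test.data[doc_idx][:500])  # only show the first 500 characters
```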
We can gain a deeper understanding of how this classifier makes its decisions by looking at the words with the highest average feature effects:
```
import pandas as pd
import numpy as np
def plot_feature_effects():
# learned coefficients weighted by frequency of appearance
average_feature_effects = clf.coef_ * np.asarray(X_train.mean(axis=0)).ravel()
for i, label in enumerate(target_names):
top5 = np.argsort(average_feature_effects[i])[-5:][::-1]
if i == 0:
top = pd.DataFrame(feature_names[top5], columns=[label])
top_indices = top5
else:
top[label] = feature_names[top5]
top_indices = np.concatenate((top_indices, top5), axis=None)
top_indices = np.unique(top_indices)
predictive_words = feature_names[top_indices]
# plot feature effects
bar_size = 0.25
padding = 0.75
y_locs = np.arange(len(top_indices)) * (4 * bar_size + padding)
fig, ax = plt.subplots(figsize=(10, 8))
for i, label in enumerate(target_names):
ax.barh(
y_locs + (i - 2) * bar_size,
average_feature_effects[i, top_indices],
height=bar_size,
label=label,
)
ax.set(
yticks=y_locs,
yticklabels=predictive_words,
ylim=[
0 - 4 * bar_size,
len(top_indices) * (4 * bar_size + padding) - 4 * bar_size,
],
)
ax.legend(loc="lower right")
print("top 5 keywords per class:")
print(top)
return ax
_ = plot_feature_effects().set_title("Average feature effect on the original data")
```
```
top 5 keywords per class:
alt.atheism comp.graphics sci.space talk.religion.misc
0 keith graphics space christian
1 god university nasa com
2 atheists thanks orbit god
3 people does moon morality
4 caltech image access people
```
We can observe that the most predictive words are often strongly positively associated with a single class and negatively associated with all the other classes. Most of those positive associations are quite easy to interpret. However, some words such as `"god"` and `"people"` are positively associated with both `"talk.religion.misc"` and `"alt.atheism"`, as those two classes expectedly share some common vocabulary. Notice however that there are also words such as `"christian"` and `"morality"` that are only positively associated with `"talk.religion.misc"`. Furthermore, in this version of the dataset, the word `"caltech"` is one of the top predictive features for atheism because of metadata pollution in the dataset, such as the email addresses of the senders of previous emails in the discussion, as can be seen below:
```
data_train = fetch_20newsgroups(
subset="train", categories=categories, shuffle=True, random_state=42
)
for doc in data_train.data:
if "caltech" in doc:
print(doc)
break
```
```
From: [email protected] (Jon Livesey)
Subject: Re: Morality? (was Re: <Political Atheists?)
Organization: sgi
Lines: 93
Distribution: world
NNTP-Posting-Host: solntze.wpd.sgi.com
In article <[email protected]>, [email protected] (Keith Allan Schneider) writes:
|> [email protected] (Jon Livesey) writes:
|>
|> >>>Explain to me
|> >>>how instinctive acts can be moral acts, and I am happy to listen.
|> >>For example, if it were instinctive not to murder...
|> >
|> >Then not murdering would have no moral significance, since there
|> >would be nothing voluntary about it.
|>
|> See, there you go again, saying that a moral act is only significant
|> if it is "voluntary." Why do you think this?
If you force me to do something, am I morally responsible for it?
|>
|> And anyway, humans have the ability to disregard some of their instincts.
Well, make up your mind. Is it to be "instinctive not to murder"
or not?
|>
|> >>So, only intelligent beings can be moral, even if the bahavior of other
|> >>beings mimics theirs?
|> >
|> >You are starting to get the point. Mimicry is not necessarily the
|> >same as the action being imitated. A Parrot saying "Pretty Polly"
|> >isn't necessarily commenting on the pulchritude of Polly.
|>
|> You are attaching too many things to the term "moral," I think.
|> Let's try this: is it "good" that animals of the same species
|> don't kill each other. Or, do you think this is right?
It's not even correct. Animals of the same species do kill
one another.
|>
|> Or do you think that animals are machines, and that nothing they do
|> is either right nor wrong?
Sigh. I wonder how many times we have been round this loop.
I think that instinctive bahaviour has no moral significance.
I am quite prepared to believe that higher animals, such as
primates, have the beginnings of a moral sense, since they seem
to exhibit self-awareness.
|>
|>
|> >>Animals of the same species could kill each other arbitarily, but
|> >>they don't.
|> >
|> >They do. I and other posters have given you many examples of exactly
|> >this, but you seem to have a very short memory.
|>
|> Those weren't arbitrary killings. They were slayings related to some
|> sort of mating ritual or whatnot.
So what? Are you trying to say that some killing in animals
has a moral significance and some does not? Is this your
natural morality>
|>
|> >>Are you trying to say that this isn't an act of morality because
|> >>most animals aren't intelligent enough to think like we do?
|> >
|> >I'm saying:
|> > "There must be the possibility that the organism - it's not
|> > just people we are talking about - can consider alternatives."
|> >
|> >It's right there in the posting you are replying to.
|>
|> Yes it was, but I still don't understand your distinctions. What
|> do you mean by "consider?" Can a small child be moral? How about
|> a gorilla? A dolphin? A platypus? Where is the line drawn? Does
|> the being need to be self aware?
Are you blind? What do you think that this sentence means?
"There must be the possibility that the organism - it's not
just people we are talking about - can consider alternatives."
What would that imply?
|>
|> What *do* you call the mechanism which seems to prevent animals of
|> the same species from (arbitrarily) killing each other? Don't
|> you find the fact that they don't at all significant?
I find the fact that they do to be significant.
jon.
```
Such headers, signature footers (and quoted metadata from previous messages) can be considered side information that artificially reveals the newsgroup by identifying the registered members. One would rather want the text classifier to learn only from the “main content” of each document instead of relying on the leaked identity of the writers.
### Model with metadata stripping
The `remove` option of the 20 newsgroups dataset loader in scikit-learn makes it possible to heuristically filter out some of this unwanted metadata that makes the classification problem artificially easier. Be aware that such filtering of the text contents is far from perfect.
Let us try to leverage this option to train a text classifier that does not rely too much on this kind of metadata to make its decisions:
```
(
X_train,
X_test,
y_train,
y_test,
feature_names,
target_names,
) = load_dataset(remove=("headers", "footers", "quotes"))
clf = RidgeClassifier(tol=1e-2, solver="sparse_cg")
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
fig, ax = plt.subplots(figsize=(10, 5))
ConfusionMatrixDisplay.from_predictions(y_test, pred, ax=ax)
ax.xaxis.set_ticklabels(target_names)
ax.yaxis.set_ticklabels(target_names)
_ = ax.set_title(
f"Confusion Matrix for {clf.__class__.__name__}\non filtered documents"
)
```
By looking at the confusion matrix, it is more evident that the scores of the model trained with metadata were over-optimistic. The classification problem without access to the metadata is less accurate but more representative of the intended text classification problem.
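To quantify this over-optimism, one can compare the test accuracy of the two models directly. The following sketch only computes the score of the model just fitted on the filtered documents; `y_test_original` and `pred_original` in the comment are hypothetical names for labels and predictions one would have stored before re-loading the dataset:
```
from sklearn.metrics import accuracy_score

print(f"Accuracy on filtered documents: {accuracy_score(y_test, pred):.3f}")
# For a side-by-side comparison, store the predictions of the model trained on
# the original documents before re-loading the dataset, e.g.:
# print(f"Accuracy on original documents: {accuracy_score(y_test_original, pred_original):.3f}")
```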
```
_ = plot_feature_effects().set_title("Average feature effects on filtered documents")
```
```
top 5 keywords per class:
alt.atheism comp.graphics sci.space talk.religion.misc
0 don graphics space god
1 people file like christian
2 say thanks nasa jesus
3 religion image orbit christians
4 post does launch wrong
```
In the next section we keep the dataset without metadata to compare several classifiers.
Benchmarking classifiers
------------------------
Scikit-learn provides many different kinds of classification algorithms. In this section we will train a selection of those classifiers on the same text classification problem and measure both their generalization performance (accuracy on the test set) and their computation performance (speed), both at training time and testing time. For this purpose we define the following benchmarking utilities:
```
from sklearn.utils.extmath import density
from sklearn import metrics
def benchmark(clf, custom_name=False):
print("_" * 80)
print("Training: ")
print(clf)
t0 = time()
clf.fit(X_train, y_train)
train_time = time() - t0
print(f"train time: {train_time:.3}s")
t0 = time()
pred = clf.predict(X_test)
test_time = time() - t0
print(f"test time: {test_time:.3}s")
score = metrics.accuracy_score(y_test, pred)
print(f"accuracy: {score:.3}")
if hasattr(clf, "coef_"):
print(f"dimensionality: {clf.coef_.shape[1]}")
print(f"density: {density(clf.coef_)}")
print()
print()
if custom_name:
clf_descr = str(custom_name)
else:
clf_descr = clf.__class__.__name__
return clf_descr, score, train_time, test_time
```
We now train and test 8 different classification models on the dataset and collect performance results for each of them. The goal of this study is to highlight the computation/accuracy trade-offs of different types of classifiers for such a multi-class text classification problem.
Notice that the most important hyperparameter values were tuned using a grid search procedure that is not shown in this notebook for the sake of simplicity.
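The actual tuning procedure is not part of this example; a minimal sketch of what such a search could look like for the ridge model is given below. The parameter grid is purely illustrative, not the one used to pick the values above:
```
# Illustrative hyperparameter search (not the authors' procedure): grid-search
# the ridge regularization strength with 5-fold cross-validation.
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0]}  # hypothetical grid
grid = GridSearchCV(
    RidgeClassifier(solver="sparse_cg"),
    param_grid,
    scoring="accuracy",
    cv=5,
    n_jobs=-1,
)
grid.fit(X_train, y_train)
print(grid.best_params_, f"mean CV accuracy: {grid.best_score_:.3f}")
```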
```
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import ComplementNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import NearestCentroid
from sklearn.ensemble import RandomForestClassifier
results = []
for clf, name in (
(LogisticRegression(C=5, max_iter=1000), "Logistic Regression"),
(RidgeClassifier(alpha=1.0, solver="sparse_cg"), "Ridge Classifier"),
(KNeighborsClassifier(n_neighbors=100), "kNN"),
(RandomForestClassifier(), "Random Forest"),
# L2 penalty Linear SVC
(LinearSVC(C=0.1, dual=False, max_iter=1000), "Linear SVC"),
# L2 penalty Linear SGD
(
SGDClassifier(
loss="log_loss", alpha=1e-4, n_iter_no_change=3, early_stopping=True
),
"log-loss SGD",
),
# NearestCentroid (aka Rocchio classifier)
(NearestCentroid(), "NearestCentroid"),
# Sparse naive Bayes classifier
(ComplementNB(alpha=0.1), "Complement naive Bayes"),
):
print("=" * 80)
print(name)
results.append(benchmark(clf, name))
```
```
================================================================================
Logistic Regression
________________________________________________________________________________
Training:
LogisticRegression(C=5, max_iter=1000)
train time: 0.308s
test time: 0.000539s
accuracy: 0.773
dimensionality: 5316
density: 1.0
================================================================================
Ridge Classifier
________________________________________________________________________________
Training:
RidgeClassifier(solver='sparse_cg')
train time: 0.0281s
test time: 0.000501s
accuracy: 0.76
dimensionality: 5316
density: 1.0
================================================================================
kNN
________________________________________________________________________________
Training:
KNeighborsClassifier(n_neighbors=100)
train time: 0.000659s
test time: 0.108s
accuracy: 0.753
================================================================================
Random Forest
________________________________________________________________________________
Training:
RandomForestClassifier()
train time: 1.14s
test time: 0.0634s
accuracy: 0.704
================================================================================
Linear SVC
________________________________________________________________________________
Training:
LinearSVC(C=0.1, dual=False)
train time: 0.0285s
test time: 0.000546s
accuracy: 0.752
dimensionality: 5316
density: 1.0
================================================================================
log-loss SGD
________________________________________________________________________________
Training:
SGDClassifier(early_stopping=True, loss='log_loss', n_iter_no_change=3)
train time: 0.0214s
test time: 0.000479s
accuracy: 0.762
dimensionality: 5316
density: 1.0
================================================================================
NearestCentroid
________________________________________________________________________________
Training:
NearestCentroid()
train time: 0.00249s
test time: 0.000795s
accuracy: 0.748
================================================================================
Complement naive Bayes
________________________________________________________________________________
Training:
ComplementNB(alpha=0.1)
train time: 0.00145s
test time: 0.000409s
accuracy: 0.779
```
Plot accuracy, training and test time of each classifier
--------------------------------------------------------
The scatter plots show the trade-off between the test accuracy and the training and testing time of each classifier.
```
indices = np.arange(len(results))
results = [[x[i] for x in results] for i in range(4)]
clf_names, score, training_time, test_time = results
training_time = np.array(training_time)
test_time = np.array(test_time)
fig, ax1 = plt.subplots(figsize=(10, 8))
ax1.scatter(score, training_time, s=60)
ax1.set(
title="Score-training time trade-off",
yscale="log",
xlabel="test accuracy",
ylabel="training time (s)",
)
fig, ax2 = plt.subplots(figsize=(10, 8))
ax2.scatter(score, test_time, s=60)
ax2.set(
title="Score-test time trade-off",
yscale="log",
xlabel="test accuracy",
ylabel="test time (s)",
)
for i, txt in enumerate(clf_names):
ax1.annotate(txt, (score[i], training_time[i]))
ax2.annotate(txt, (score[i], test_time[i]))
```
The naive Bayes model has the best trade-off between score and training/testing time, while Random Forest is slow to train, expensive to predict, and has comparatively bad accuracy. This is expected: for high-dimensional prediction problems, linear models are often better suited as most problems become linearly separable when the feature space has 10,000 dimensions or more.
The difference in training speed and accuracy of the linear models can be explained by the choice of the loss function they optimize and the kind of regularization they use. Be aware that some linear models with the same loss but a different solver or regularization configuration may yield different fitting times and test accuracy. We can observe on the second plot that once trained, all linear models have approximately the same prediction speed, which is expected because they all implement the same prediction function.
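For instance, one could reuse the `benchmark` helper defined above to compare two ridge configurations that only differ by their solver; the choice of solvers below is illustrative and not part of the original study:
```
# Illustrative comparison of the same ridge loss fitted with two different
# solvers, reusing the `benchmark` helper defined above.
for solver in ("sparse_cg", "lsqr"):
    benchmark(
        RidgeClassifier(alpha=1.0, solver=solver),
        custom_name=f"Ridge ({solver})",
    )
```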
KNeighborsClassifier has a relatively low accuracy and has the highest testing time. The long prediction time is also expected: for each prediction the model has to compute the pairwise distances between the testing sample and each document in the training set, which is computationally expensive. Furthermore, the “curse of dimensionality” harms the ability of this model to yield competitive accuracy in the high dimensional feature space of text classification problems.
**Total running time of the script:** ( 0 minutes 5.886 seconds)
[`Download Python source code: plot_document_classification_20newsgroups.py`](https://scikit-learn.org/1.1/_downloads/b95d2be91f59162cfa269bdb32134d31/plot_document_classification_20newsgroups.py)
[`Download Jupyter notebook: plot_document_classification_20newsgroups.ipynb`](https://scikit-learn.org/1.1/_downloads/7ff1697c60d48929305821f39296dbb9/plot_document_classification_20newsgroups.ipynb)
scikit_learn Clustering text documents using k-means Note
Click [here](#sphx-glr-download-auto-examples-text-plot-document-clustering-py) to download the full example code or to run this example in your browser via Binder
Clustering text documents using k-means
=======================================
This is an example showing how the scikit-learn API can be used to cluster documents by topics using a [Bag of Words approach](https://en.wikipedia.org/wiki/Bag-of-words_model).
Two algorithms are demoed: [`KMeans`](../../modules/generated/sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans") and its more scalable variant, [`MiniBatchKMeans`](../../modules/generated/sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans "sklearn.cluster.MiniBatchKMeans"). Additionally, latent semantic analysis is used to reduce dimensionality and discover latent patterns in the data.
This example uses two different text vectorizers: a [`TfidfVectorizer`](../../modules/generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") and a [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer"). See the example notebook [FeatureHasher and DictVectorizer Comparison](plot_hashing_vs_dict_vectorizer#sphx-glr-auto-examples-text-plot-hashing-vs-dict-vectorizer-py) for more information on vectorizers and a comparison of their processing times.
For document analysis via a supervised learning approach, see the example script [Classification of text documents using sparse features](plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py).
```
# Author: Peter Prettenhofer <[email protected]>
# Lars Buitinck
# Olivier Grisel <[email protected]>
# Arturo Amor <[email protected]>
# License: BSD 3 clause
```
Loading text data
-----------------
We load data from [The 20 newsgroups text dataset](https://scikit-learn.org/1.1/datasets/real_world.html#newsgroups-dataset), which comprises around 18,000 newsgroups posts on 20 topics. For illustrative purposes and to reduce the computational cost, we select a subset of 4 topics only accounting for around 3,400 documents. See the example [Classification of text documents using sparse features](plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py) to gain intuition on the overlap of such topics.
Notice that, by default, the text samples contain some message metadata such as `"headers"`, `"footers"` (signatures) and `"quotes"` to other posts. We use the `remove` parameter from [`fetch_20newsgroups`](../../modules/generated/sklearn.datasets.fetch_20newsgroups#sklearn.datasets.fetch_20newsgroups "sklearn.datasets.fetch_20newsgroups") to strip those features and have a more sensible clustering problem.
```
import numpy as np
from sklearn.datasets import fetch_20newsgroups
categories = [
"alt.atheism",
"talk.religion.misc",
"comp.graphics",
"sci.space",
]
dataset = fetch_20newsgroups(
remove=("headers", "footers", "quotes"),
subset="all",
categories=categories,
shuffle=True,
random_state=42,
)
labels = dataset.target
unique_labels, category_sizes = np.unique(labels, return_counts=True)
true_k = unique_labels.shape[0]
print(f"{len(dataset.data)} documents - {true_k} categories")
```
```
3387 documents - 4 categories
```
Quantifying the quality of clustering results
---------------------------------------------
In this section we define a function to score different clustering pipelines using several metrics.
Clustering algorithms are fundamentally unsupervised learning methods. However, since we happen to have class labels for this specific dataset, it is possible to use evaluation metrics that leverage this “supervised” ground truth information to quantify the quality of the resulting clusters. Examples of such metrics are the following:
* homogeneity, which quantifies how much clusters contain only members of a single class;
* completeness, which quantifies how much members of a given class are assigned to the same clusters;
* V-measure, the harmonic mean of completeness and homogeneity;
* Rand-Index, which measures how frequently pairs of data points are grouped consistently according to the result of the clustering algorithm and the ground truth class assignment;
* Adjusted Rand-Index, a chance-adjusted Rand-Index such that a random cluster assignment has an ARI of 0.0 in expectation.
If the ground truth labels are not known, evaluation can only be performed using the model's own results. In that case, the Silhouette Coefficient comes in handy.
For more reference, see [Clustering performance evaluation](../../modules/clustering#clustering-evaluation).
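As a quick illustration of how the label-based metrics behave, here is a minimal sketch on hand-crafted labels (the toy labels are made up for this example and are unrelated to the dataset):
```
# Toy illustration of the label-based clustering metrics: the clusters are
# perfectly homogeneous, but class 1 is split across two clusters, so
# completeness (and hence the V-measure) is below 1.
from sklearn import metrics

truth = [0, 0, 0, 1, 1, 1]
clusters = [0, 0, 0, 1, 1, 2]
print(f"Homogeneity:   {metrics.homogeneity_score(truth, clusters):.2f}")
print(f"Completeness:  {metrics.completeness_score(truth, clusters):.2f}")
print(f"V-measure:     {metrics.v_measure_score(truth, clusters):.2f}")
print(f"Adjusted Rand: {metrics.adjusted_rand_score(truth, clusters):.2f}")
```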
```
from collections import defaultdict
from sklearn import metrics
from time import time
evaluations = []
evaluations_std = []
def fit_and_evaluate(km, X, name=None, n_runs=5):
name = km.__class__.__name__ if name is None else name
train_times = []
scores = defaultdict(list)
for seed in range(n_runs):
km.set_params(random_state=seed)
t0 = time()
km.fit(X)
train_times.append(time() - t0)
scores["Homogeneity"].append(metrics.homogeneity_score(labels, km.labels_))
scores["Completeness"].append(metrics.completeness_score(labels, km.labels_))
scores["V-measure"].append(metrics.v_measure_score(labels, km.labels_))
scores["Adjusted Rand-Index"].append(
metrics.adjusted_rand_score(labels, km.labels_)
)
scores["Silhouette Coefficient"].append(
metrics.silhouette_score(X, km.labels_, sample_size=2000)
)
train_times = np.asarray(train_times)
print(f"clustering done in {train_times.mean():.2f} ± {train_times.std():.2f} s ")
evaluation = {
"estimator": name,
"train_time": train_times.mean(),
}
evaluation_std = {
"estimator": name,
"train_time": train_times.std(),
}
for score_name, score_values in scores.items():
mean_score, std_score = np.mean(score_values), np.std(score_values)
print(f"{score_name}: {mean_score:.3f} ± {std_score:.3f}")
evaluation[score_name] = mean_score
evaluation_std[score_name] = std_score
evaluations.append(evaluation)
evaluations_std.append(evaluation_std)
```
K-means clustering on text features
-----------------------------------
Two feature extraction methods are used in this example:
* [`TfidfVectorizer`](../../modules/generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") uses an in-memory vocabulary (a Python dict) to map the most frequent words to feature indices and hence compute a word occurrence frequency (sparse) matrix. The word frequencies are then reweighted using the Inverse Document Frequency (IDF) vector collected feature-wise over the corpus.
* [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") hashes word occurrences to a fixed dimensional space, possibly with collisions. The word count vectors are then normalized to each have l2-norm equal to one (projected onto the Euclidean unit sphere), which seems to be important for k-means to work in high-dimensional space.
Furthermore it is possible to post-process those extracted features using dimensionality reduction. We will explore the impact of those choices on the clustering quality in the following.
### Feature Extraction using TfidfVectorizer
We first benchmark the estimators using a dictionary vectorizer along with an IDF normalization as provided by [`TfidfVectorizer`](../../modules/generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer").
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(
max_df=0.5,
min_df=5,
stop_words="english",
)
t0 = time()
X_tfidf = vectorizer.fit_transform(dataset.data)
print(f"vectorization done in {time() - t0:.3f} s")
print(f"n_samples: {X_tfidf.shape[0]}, n_features: {X_tfidf.shape[1]}")
```
```
vectorization done in 0.372 s
n_samples: 3387, n_features: 7929
```
After ignoring terms that appear in more than 50% of the documents (as set by `max_df=0.5`) and terms that are not present in at least 5 documents (set by `min_df=5`), the resulting number of unique terms `n_features` is around 8,000. We can additionally quantify the sparsity of the `X_tfidf` matrix as the fraction of non-zero entries divided by the total number of elements.
```
print(f"{X_tfidf.nnz / np.prod(X_tfidf.shape):.3f}")
```
```
0.007
```
We find that around 0.7% of the entries of the `X_tfidf` matrix are non-zero.
### Clustering sparse data with k-means
As both [`KMeans`](../../modules/generated/sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans") and [`MiniBatchKMeans`](../../modules/generated/sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans "sklearn.cluster.MiniBatchKMeans") optimize a non-convex objective function, their clustering is not guaranteed to be optimal for a given random init. Even further, on sparse high-dimensional data such as text vectorized using the Bag of Words approach, k-means can initialize centroids on extremely isolated data points. Those data points can remain their own centroids throughout the optimization.
The following code illustrates how the previous phenomenon can sometimes lead to highly imbalanced clusters, depending on the random initialization:
```
from sklearn.cluster import KMeans
for seed in range(5):
kmeans = KMeans(
n_clusters=true_k,
max_iter=100,
n_init=1,
random_state=seed,
).fit(X_tfidf)
cluster_ids, cluster_sizes = np.unique(kmeans.labels_, return_counts=True)
print(f"Number of elements asigned to each cluster: {cluster_sizes}")
print()
print(
"True number of documents in each category according to the class labels: "
f"{category_sizes}"
)
```
```
Number of elements assigned to each cluster: [ 1 1 3384 1]
Number of elements assigned to each cluster: [1733 717 238 699]
Number of elements assigned to each cluster: [1115 256 1417 599]
Number of elements assigned to each cluster: [1695 649 446 597]
Number of elements assigned to each cluster: [ 254 2117 459 557]
True number of documents in each category according to the class labels: [799 973 987 628]
```
To avoid this problem, one possibility is to increase the number of runs with independent random initializations `n_init`. In that case the clustering with the best inertia (objective function of k-means) is chosen.
```
kmeans = KMeans(
n_clusters=true_k,
max_iter=100,
n_init=5,
)
fit_and_evaluate(kmeans, X_tfidf, name="KMeans\non tf-idf vectors")
```
```
clustering done in 0.16 ± 0.04 s
Homogeneity: 0.347 ± 0.009
Completeness: 0.397 ± 0.006
V-measure: 0.370 ± 0.007
Adjusted Rand-Index: 0.197 ± 0.014
Silhouette Coefficient: 0.007 ± 0.001
```
All those clustering evaluation metrics have a maximum value of 1.0 (for a perfect clustering result). Higher values are better. Values of the Adjusted Rand-Index close to 0.0 correspond to a random labeling. Notice from the scores above that the cluster assignment is indeed well above chance level, but the overall quality can certainly improve.
Keep in mind that the class labels may not accurately reflect the document topics and therefore metrics that use labels are not necessarily the best to evaluate the quality of our clustering pipeline.
### Performing dimensionality reduction using LSA
Setting `n_init=1` is still viable as long as the dimension of the vectorized space is reduced first to make k-means more stable. For this purpose we use [`TruncatedSVD`](../../modules/generated/sklearn.decomposition.truncatedsvd#sklearn.decomposition.TruncatedSVD "sklearn.decomposition.TruncatedSVD"), which works on term count/tf-idf matrices. Since SVD results are not normalized, we redo the normalization to improve the [`KMeans`](../../modules/generated/sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans") result. Using SVD to reduce the dimensionality of TF-IDF document vectors is often known as [latent semantic analysis](https://en.wikipedia.org/wiki/Latent_semantic_analysis) (LSA) in the information retrieval and text mining literature.
```
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
lsa = make_pipeline(TruncatedSVD(n_components=100), Normalizer(copy=False))
t0 = time()
X_lsa = lsa.fit_transform(X_tfidf)
explained_variance = lsa[0].explained_variance_ratio_.sum()
print(f"LSA done in {time() - t0:.3f} s")
print(f"Explained variance of the SVD step: {explained_variance * 100:.1f}%")
```
```
LSA done in 0.365 s
Explained variance of the SVD step: 18.4%
```
Using a single initialization means the processing time will be reduced for both [`KMeans`](../../modules/generated/sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans") and [`MiniBatchKMeans`](../../modules/generated/sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans "sklearn.cluster.MiniBatchKMeans").
```
kmeans = KMeans(
n_clusters=true_k,
max_iter=100,
n_init=1,
)
fit_and_evaluate(kmeans, X_lsa, name="KMeans\nwith LSA on tf-idf vectors")
```
```
clustering done in 0.02 ± 0.00 s
Homogeneity: 0.393 ± 0.009
Completeness: 0.420 ± 0.020
V-measure: 0.405 ± 0.012
Adjusted Rand-Index: 0.342 ± 0.034
Silhouette Coefficient: 0.030 ± 0.002
```
We can observe that clustering on the LSA representation of the documents is significantly faster (both because of `n_init=1` and because the dimensionality of the LSA feature space is much smaller). Furthermore, all the clustering evaluation metrics have improved. We repeat the experiment with [`MiniBatchKMeans`](../../modules/generated/sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans "sklearn.cluster.MiniBatchKMeans").
```
from sklearn.cluster import MiniBatchKMeans
minibatch_kmeans = MiniBatchKMeans(
n_clusters=true_k,
n_init=1,
init_size=1000,
batch_size=1000,
)
fit_and_evaluate(
minibatch_kmeans,
X_lsa,
name="MiniBatchKMeans\nwith LSA on tf-idf vectors",
)
```
```
clustering done in 0.02 ± 0.00 s
Homogeneity: 0.348 ± 0.070
Completeness: 0.381 ± 0.019
V-measure: 0.361 ± 0.050
Adjusted Rand-Index: 0.330 ± 0.106
Silhouette Coefficient: 0.024 ± 0.006
```
### Top terms per cluster
Since [`TfidfVectorizer`](../../modules/generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") can be inverted we can identify the cluster centers, which provide an intuition of the most influential words **for each cluster**. See the example script [Classification of text documents using sparse features](plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py) for a comparison with the most predictive words **for each target class**.
```
original_space_centroids = lsa[0].inverse_transform(kmeans.cluster_centers_)
order_centroids = original_space_centroids.argsort()[:, ::-1]
terms = vectorizer.get_feature_names_out()
for i in range(true_k):
print(f"Cluster {i}: ", end="")
for ind in order_centroids[i, :10]:
print(f"{terms[ind]} ", end="")
print()
```
```
Cluster 0: think just don people like know want say good really
Cluster 1: god people jesus say religion did does christian said evidence
Cluster 2: thanks graphics image know edu does files file program mail
Cluster 3: space launch orbit earth shuttle like nasa moon mission time
```
### HashingVectorizer
An alternative vectorization can be done using a [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") instance, which does not provide IDF weighting as this is a stateless model (the fit method does nothing). When IDF weighting is needed it can be added by pipelining the [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") output to a [`TfidfTransformer`](../../modules/generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") instance. In this case we also add LSA to the pipeline to reduce the dimensionality and sparsity of the hashed vector space.
```
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
lsa_vectorizer = make_pipeline(
HashingVectorizer(stop_words="english", n_features=50_000),
TfidfTransformer(),
TruncatedSVD(n_components=100, random_state=0),
Normalizer(copy=False),
)
t0 = time()
X_hashed_lsa = lsa_vectorizer.fit_transform(dataset.data)
print(f"vectorization done in {time() - t0:.3f} s")
```
```
vectorization done in 2.086 s
```
One can observe that the LSA step takes a relatively long time to fit, especially with hashed vectors. The reason is that a hashed space is typically large (set to `n_features=50_000` in this example). One can try lowering the number of features at the expense of having a larger fraction of features with hash collisions as shown in the example notebook [FeatureHasher and DictVectorizer Comparison](plot_hashing_vs_dict_vectorizer#sphx-glr-auto-examples-text-plot-hashing-vs-dict-vectorizer-py).
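As a minimal sketch of that trade-off, one could simply rebuild the pipeline defined above with a smaller hashed space; the value of `n_features` below is arbitrary and only meant for illustration:
```
# Illustrative trade-off: a smaller hashed space is faster to reduce with LSA
# but induces more hash collisions. Reuses the imports and `dataset` from above.
smaller_lsa_vectorizer = make_pipeline(
    HashingVectorizer(stop_words="english", n_features=2**14),
    TfidfTransformer(),
    TruncatedSVD(n_components=100, random_state=0),
    Normalizer(copy=False),
)
t0 = time()
smaller_lsa_vectorizer.fit_transform(dataset.data)
print(f"vectorization done in {time() - t0:.3f} s")
```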
We now fit and evaluate the `kmeans` and `minibatch_kmeans` instances on this hashed-lsa-reduced data:
```
fit_and_evaluate(kmeans, X_hashed_lsa, name="KMeans\nwith LSA on hashed vectors")
```
```
clustering done in 0.02 ± 0.01 s
Homogeneity: 0.395 ± 0.012
Completeness: 0.445 ± 0.014
V-measure: 0.419 ± 0.013
Adjusted Rand-Index: 0.325 ± 0.012
Silhouette Coefficient: 0.030 ± 0.001
```
```
fit_and_evaluate(
minibatch_kmeans,
X_hashed_lsa,
name="MiniBatchKMeans\nwith LSA on hashed vectors",
)
```
```
clustering done in 0.02 ± 0.00 s
Homogeneity: 0.353 ± 0.045
Completeness: 0.358 ± 0.043
V-measure: 0.356 ± 0.044
Adjusted Rand-Index: 0.316 ± 0.066
Silhouette Coefficient: 0.025 ± 0.003
```
Both methods lead to good results that are similar to running the same models on the traditional LSA vectors (without hashing).
Clustering evaluation summary
-----------------------------
```
import pandas as pd
import matplotlib.pyplot as plt
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(16, 6), sharey=True)
df = pd.DataFrame(evaluations[::-1]).set_index("estimator")
df_std = pd.DataFrame(evaluations_std[::-1]).set_index("estimator")
df.drop(
["train_time"],
axis="columns",
).plot.barh(ax=ax0, xerr=df_std)
ax0.set_xlabel("Clustering scores")
ax0.set_ylabel("")
df["train_time"].plot.barh(ax=ax1, xerr=df_std["train_time"])
ax1.set_xlabel("Clustering time (s)")
plt.tight_layout()
```
[`KMeans`](../../modules/generated/sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans") and [`MiniBatchKMeans`](../../modules/generated/sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans "sklearn.cluster.MiniBatchKMeans") suffer from the phenomenon called the [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality) for high-dimensional datasets such as text data. That is the reason why the overall scores improve when using LSA. Using LSA-reduced data also improves the stability and reduces the clustering time, though keep in mind that the LSA step itself takes a long time, especially with hashed vectors.
The Silhouette Coefficient is defined between -1 and 1. In all cases we obtain values close to 0 (even if they improve a bit after using LSA) because its definition requires measuring distances, in contrast with other evaluation metrics such as the V-measure and the Adjusted Rand-Index, which are only based on cluster assignments rather than distances. Notice that strictly speaking, one should not compare the Silhouette Coefficient between spaces of different dimension, due to the different notions of distance they imply.
The homogeneity, completeness and hence V-measure metrics do not yield a baseline with regard to random labeling: this means that depending on the number of samples, clusters and ground truth classes, a completely random labeling will not always yield the same values. In particular, random labeling won't yield zero scores, especially when the number of clusters is large. This problem can safely be ignored when the number of samples is more than a thousand and the number of clusters is less than 10, which is the case in the present example. For smaller sample sizes or a larger number of clusters it is safer to use an adjusted index such as the Adjusted Rand-Index (ARI). See the example [Adjustment for chance in clustering performance evaluation](../cluster/plot_adjusted_for_chance_measures#sphx-glr-auto-examples-cluster-plot-adjusted-for-chance-measures-py) for a demo on the effect of random labeling.
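This behaviour can be reproduced with a minimal sketch on purely random labels; the sample and cluster counts below are chosen arbitrarily to exaggerate the effect:
```
# Minimal sketch: with random labels, the Adjusted Rand-Index stays close to
# zero while the V-measure does not, especially with few samples and many clusters.
import numpy as np
from sklearn import metrics

rng = np.random.RandomState(0)
random_truth = rng.randint(10, size=100)
random_clusters = rng.randint(10, size=100)
print(f"V-measure:     {metrics.v_measure_score(random_truth, random_clusters):.3f}")
print(f"Adjusted Rand: {metrics.adjusted_rand_score(random_truth, random_clusters):.3f}")
```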
The size of the error bars shows that [`MiniBatchKMeans`](../../modules/generated/sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans "sklearn.cluster.MiniBatchKMeans") is less stable than [`KMeans`](../../modules/generated/sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans") for this relatively small dataset. It is more interesting to use when the number of samples is much bigger, but it can come at the expense of a small degradation in clustering quality compared to the traditional k-means algorithm.
**Total running time of the script:** ( 0 minutes 7.415 seconds)
[`Download Python source code: plot_document_clustering.py`](https://scikit-learn.org/1.1/_downloads/ba68199eea858ec04949b2c6c65147e0/plot_document_clustering.py)
[`Download Jupyter notebook: plot_document_clustering.ipynb`](https://scikit-learn.org/1.1/_downloads/751db3d5e6b909ff00972495eaae53df/plot_document_clustering.ipynb)
scikit_learn FeatureHasher and DictVectorizer Comparison Note
Click [here](#sphx-glr-download-auto-examples-text-plot-hashing-vs-dict-vectorizer-py) to download the full example code or to run this example in your browser via Binder
FeatureHasher and DictVectorizer Comparison
===========================================
In this example we illustrate text vectorization, which is the process of representing non-numerical input data (such as dictionaries or text documents) as vectors of real numbers.
We first compare [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") and [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") by using both methods to vectorize text documents that are preprocessed (tokenized) with the help of a custom Python function.
Later we introduce and analyze the text-specific vectorizers [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer"), [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") and [`TfidfVectorizer`](../../modules/generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") that handle both the tokenization and the assembling of the feature matrix within a single class.
The objective of the example is to demonstrate the usage of text vectorization API and to compare their processing time. See the example scripts [Classification of text documents using sparse features](plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py) and [Clustering text documents using k-means](plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py) for actual learning on text documents.
```
# Author: Lars Buitinck
# Olivier Grisel <[email protected]>
# Arturo Amor <[email protected]>
# License: BSD 3 clause
```
Load Data
---------
We load data from [The 20 newsgroups text dataset](https://scikit-learn.org/1.1/datasets/real_world.html#newsgroups-dataset), which comprises around 18,000 newsgroups posts on 20 topics split into two subsets: one for training and one for testing. For the sake of simplicity and to reduce the computational cost, we select a subset of 7 topics and use the training set only.
```
from sklearn.datasets import fetch_20newsgroups
categories = [
"alt.atheism",
"comp.graphics",
"comp.sys.ibm.pc.hardware",
"misc.forsale",
"rec.autos",
"sci.space",
"talk.religion.misc",
]
print("Loading 20 newsgroups training data")
raw_data, _ = fetch_20newsgroups(subset="train", categories=categories, return_X_y=True)
data_size_mb = sum(len(s.encode("utf-8")) for s in raw_data) / 1e6
print(f"{len(raw_data)} documents - {data_size_mb:.3f}MB")
```
```
Loading 20 newsgroups training data
3803 documents - 6.245MB
```
Define preprocessing functions
------------------------------
A token may be a word, part of a word, or anything delimited by spaces or symbols in a string. Here we define a function that extracts the tokens using a simple regular expression (regex) that matches Unicode word characters. This includes most characters that can be part of a word in any language, as well as numbers and the underscore:
```
import re
def tokenize(doc):
"""Extract tokens from doc.
This uses a simple regex that matches word characters to break strings
into tokens. For a more principled approach, see CountVectorizer or
TfidfVectorizer.
"""
return (tok.lower() for tok in re.findall(r"\w+", doc))
list(tokenize("This is a simple example, isn't it?"))
```
```
['this', 'is', 'a', 'simple', 'example', 'isn', 't', 'it']
```
We define an additional function that counts the (frequency of) occurrence of each token in a given document. It returns a frequency dictionary to be used by the vectorizers.
```
from collections import defaultdict
def token_freqs(doc):
"""Extract a dict mapping tokens from doc to their occurrences."""
freq = defaultdict(int)
for tok in tokenize(doc):
freq[tok] += 1
return freq
token_freqs("That is one example, but this is another one")
```
```
defaultdict(<class 'int'>, {'that': 1, 'is': 2, 'one': 2, 'example': 1, 'but': 1, 'this': 1, 'another': 1})
```
Observe in particular that the repeated token `"is"` is counted twice.
Breaking a text document into word tokens, potentially losing the order information between the words in a sentence, is often called a [Bag of Words representation](https://en.wikipedia.org/wiki/Bag-of-words_model).
DictVectorizer
--------------
First we benchmark the [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer"), then we compare it to [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") as both of them receive dictionaries as input.
```
from time import time
from sklearn.feature_extraction import DictVectorizer
dict_count_vectorizers = defaultdict(list)
t0 = time()
vectorizer = DictVectorizer()
vectorizer.fit_transform(token_freqs(d) for d in raw_data)
duration = time() - t0
dict_count_vectorizers["vectorizer"].append(
vectorizer.__class__.__name__ + "\non freq dicts"
)
dict_count_vectorizers["speed"].append(data_size_mb / duration)
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
print(f"Found {len(vectorizer.get_feature_names_out())} unique terms")
```
```
done in 0.989 s at 6.3 MB/s
Found 47928 unique terms
```
The actual mapping from text token to column index is explicitly stored in the `.vocabulary_` attribute which is a potentially very large Python dictionary:
```
type(vectorizer.vocabulary_)
```
```
len(vectorizer.vocabulary_)
```
```
47928
```
```
vectorizer.vocabulary_["example"]
```
```
19145
```
FeatureHasher
-------------
Dictionaries take up a large amount of storage space and grow in size as the training set grows. Instead of growing the vectors along with a dictionary, feature hashing builds a vector of pre-defined length by applying a hash function `h` to the features (e.g., tokens), then using the hash values directly as feature indices and updating the resulting vector at those indices. When the feature space is not large enough, hashing functions tend to map distinct values to the same hash code (hash collisions). As a result, it is impossible to determine what object generated any particular hash code.
Because of this, it is impossible to recover the original tokens from the feature matrix, and the best approach to estimate the number of unique terms in the original dictionary is to count the number of active columns in the encoded feature matrix. For such a purpose we define the following function:
```
import numpy as np
def n_nonzero_columns(X):
"""Number of columns with at least one non-zero value in a CSR matrix.
This is useful to count the number of features columns that are effectively
active when using the FeatureHasher.
"""
return len(np.unique(X.nonzero()[1]))
```
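Before benchmarking on the full corpus, a toy sketch can make hash collisions concrete; the tiny `n_features` value below is deliberately chosen to force collisions and is not a realistic setting:
```
# Toy illustration of hash collisions: with a deliberately tiny hashed space,
# fewer active columns remain than there are distinct tokens.
from sklearn.feature_extraction import FeatureHasher

toy_freqs = token_freqs("red green blue red yellow purple orange cyan")
toy_X = FeatureHasher(n_features=4).transform([toy_freqs])
print(f"{len(toy_freqs)} distinct tokens hashed into {n_nonzero_columns(toy_X)} active columns")
```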
The default number of features for the [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") is 2\*\*20. Here we set `n_features = 2**18` to illustrate hash collisions.
**FeatureHasher on frequency dictionaries**
```
from sklearn.feature_extraction import FeatureHasher
t0 = time()
hasher = FeatureHasher(n_features=2**18)
X = hasher.transform(token_freqs(d) for d in raw_data)
duration = time() - t0
dict_count_vectorizers["vectorizer"].append(
hasher.__class__.__name__ + "\non freq dicts"
)
dict_count_vectorizers["speed"].append(data_size_mb / duration)
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
print(f"Found {n_nonzero_columns(X)} unique tokens")
```
```
done in 0.544 s at 11.5 MB/s
Found 43873 unique tokens
```
The number of unique tokens when using the [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") is lower than the number obtained using the [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer"). This is due to hash collisions.
The number of collisions can be reduced by increasing the feature space. Notice that the speed of the vectorizer does not change significantly when setting a large number of features, though it results in larger coefficient dimensions and hence requires more memory to store them, even if a majority of them are inactive.
```
t0 = time()
hasher = FeatureHasher(n_features=2**22)
X = hasher.transform(token_freqs(d) for d in raw_data)
duration = time() - t0
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
print(f"Found {n_nonzero_columns(X)} unique tokens")
```
```
done in 0.546 s at 11.4 MB/s
Found 47668 unique tokens
```
We confirm that the number of unique tokens gets closer to the number of unique terms found by the [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer").
**FeatureHasher on raw tokens**
Alternatively, one can set `input_type="string"` in the [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") to vectorize the strings output directly from the customized `tokenize` function. This is equivalent to passing a dictionary with an implied frequency of 1 for each feature name.
```
t0 = time()
hasher = FeatureHasher(n_features=2**18, input_type="string")
X = hasher.transform(tokenize(d) for d in raw_data)
duration = time() - t0
dict_count_vectorizers["vectorizer"].append(
hasher.__class__.__name__ + "\non raw tokens"
)
dict_count_vectorizers["speed"].append(data_size_mb / duration)
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
print(f"Found {n_nonzero_columns(X)} unique tokens")
```
```
done in 0.483 s at 12.9 MB/s
Found 43873 unique tokens
```
We now plot the speed of the above methods for vectorizing.
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12, 6))
y_pos = np.arange(len(dict_count_vectorizers["vectorizer"]))
ax.barh(y_pos, dict_count_vectorizers["speed"], align="center")
ax.set_yticks(y_pos)
ax.set_yticklabels(dict_count_vectorizers["vectorizer"])
ax.invert_yaxis()
_ = ax.set_xlabel("speed (MB/s)")
```
In both cases [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") is approximately twice as fast as [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer"). This is handy when dealing with large amounts of data, with the downside of losing the invertibility of the transformation, which in turn makes the interpretation of a model a more complex task.
The `FeatureHasher` with `input_type="string"` is slightly faster than the variant that works on frequency dicts because it does not count repeated tokens: each token is implicitly counted once, even if it was repeated. Depending on the downstream machine learning task, this can be a limitation or not.
Comparison with special purpose text vectorizers
------------------------------------------------
[`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") accepts raw data as it internally implements tokenization and occurrence counting. It is similar to the [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") when used along with the customized function `token_freqs` as done in the previous section. The difference is that [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") is more flexible. In particular, it accepts various regex patterns through the `token_pattern` parameter.
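For illustration only (this snippet is not part of the benchmark), the default `token_pattern` keeps tokens of at least two word characters, while a custom pattern can also keep single-character tokens, similarly to the `tokenize` function defined earlier:
```
# Illustration of the token_pattern parameter on a toy document.
from sklearn.feature_extraction.text import CountVectorizer

doc = ["This is a simple example, isn't it?"]
default_vec = CountVectorizer().fit(doc)
single_char_vec = CountVectorizer(token_pattern=r"(?u)\b\w+\b").fit(doc)
print(default_vec.get_feature_names_out())
print(single_char_vec.get_feature_names_out())
```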
```
from sklearn.feature_extraction.text import CountVectorizer
t0 = time()
vectorizer = CountVectorizer()
vectorizer.fit_transform(raw_data)
duration = time() - t0
dict_count_vectorizers["vectorizer"].append(vectorizer.__class__.__name__)
dict_count_vectorizers["speed"].append(data_size_mb / duration)
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
print(f"Found {len(vectorizer.get_feature_names_out())} unique terms")
```
```
done in 0.626 s at 10.0 MB/s
Found 47885 unique terms
```
We see that using the [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") implementation is approximately twice as fast as using the [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") along with the simple function we defined for mapping the tokens. The reason is that [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") is optimized by reusing a compiled regular expression for the full training set instead of creating one per document as done in our naive tokenize function.
Now we make a similar experiment with the [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer"), which is equivalent to combining the “hashing trick” implemented by the [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") class and the text preprocessing and tokenization of the [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer").
```
from sklearn.feature_extraction.text import HashingVectorizer
t0 = time()
vectorizer = HashingVectorizer(n_features=2**18)
vectorizer.fit_transform(raw_data)
duration = time() - t0
dict_count_vectorizers["vectorizer"].append(vectorizer.__class__.__name__)
dict_count_vectorizers["speed"].append(data_size_mb / duration)
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
```
```
done in 0.458 s at 13.6 MB/s
```
We can observe that this is the fastest text tokenization strategy so far, assuming that the downstream machine learning task can tolerate a few collisions.
TfidfVectorizer
---------------
In a large text corpus, some words appear with higher frequency (e.g. “the”, “a”, “is” in English) and do not carry meaningful information about the actual contents of a document. If we were to feed the word count data directly to a classifier, those very common terms would shadow the frequencies of rarer yet more informative terms. In order to re-weight the count features into floating point values suitable for usage by a classifier it is very common to use the tf–idf transform as implemented by the [`TfidfTransformer`](../../modules/generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer"). TF stands for “term-frequency” while “tf–idf” means term-frequency times inverse document-frequency.
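As a toy illustration of this reweighting (the corpus below is made up for the example), a term that appears in every document receives a lower idf weight than rarer, more informative terms:
```
# Toy illustration of tf-idf: "the" appears in every document and therefore
# gets a lower idf weight than the rarer terms.
from sklearn.feature_extraction.text import TfidfVectorizer

toy_corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "the rocket reached the orbit",
]
toy_vectorizer = TfidfVectorizer().fit(toy_corpus)
for term, idf in zip(toy_vectorizer.get_feature_names_out(), toy_vectorizer.idf_):
    print(f"{term}: idf={idf:.2f}")
```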
We now benchmark the [`TfidfVectorizer`](../../modules/generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer"), which is equivalent to combining the tokenization and occurrence counting of the [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") along with the normalizing and weighting from a [`TfidfTransformer`](../../modules/generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer").
```
from sklearn.feature_extraction.text import TfidfVectorizer
t0 = time()
vectorizer = TfidfVectorizer()
vectorizer.fit_transform(raw_data)
duration = time() - t0
dict_count_vectorizers["vectorizer"].append(vectorizer.__class__.__name__)
dict_count_vectorizers["speed"].append(data_size_mb / duration)
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
print(f"Found {len(vectorizer.get_feature_names_out())} unique terms")
```
```
done in 0.637 s at 9.8 MB/s
Found 47885 unique terms
```
Summary
-------
Let’s conclude this notebook by summarizing all the recorded processing speeds in a single plot:
```
fig, ax = plt.subplots(figsize=(12, 6))
y_pos = np.arange(len(dict_count_vectorizers["vectorizer"]))
ax.barh(y_pos, dict_count_vectorizers["speed"], align="center")
ax.set_yticks(y_pos)
ax.set_yticklabels(dict_count_vectorizers["vectorizer"])
ax.invert_yaxis()
_ = ax.set_xlabel("speed (MB/s)")
```
Notice from the plot that [`TfidfVectorizer`](../../modules/generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") is slightly slower than [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") because of the extra operation induced by the [`TfidfTransformer`](../../modules/generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer").
Also notice that, by setting the number of features `n_features = 2**18`, the [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") performs better than the [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") at the expense of the invertibility of the transformation due to hash collisions.
We highlight that [`CountVectorizer`](../../modules/generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") and [`HashingVectorizer`](../../modules/generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") perform better than their equivalent [`DictVectorizer`](../../modules/generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") and [`FeatureHasher`](../../modules/generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") on manually tokenized documents since the internal tokenization step of the former vectorizers compiles a regular expression once and then reuses it for all the documents.
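The loss of invertibility can be made concrete: a `CountVectorizer` keeps a vocabulary that maps columns back to tokens, whereas a `HashingVectorizer` is stateless and offers no such mapping. A minimal sketch on a toy corpus (not the benchmark data):
```
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer

toy_corpus = ["the cat sat", "the dog sat"]

# CountVectorizer learns a vocabulary, so columns can be mapped back to tokens.
count_vec = CountVectorizer().fit(toy_corpus)
print(count_vec.get_feature_names_out())

# HashingVectorizer is stateless: transform works without fitting, but there is
# no vocabulary_ or get_feature_names_out to recover the tokens from columns.
X_hashed = HashingVectorizer(n_features=2**10).transform(toy_corpus)
print(X_hashed.shape)
```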
**Total running time of the script:** ( 0 minutes 4.738 seconds)
[`Download Python source code: plot_hashing_vs_dict_vectorizer.py`](https://scikit-learn.org/1.1/_downloads/5775388ede077a05b00514ecbaa17f32/plot_hashing_vs_dict_vectorizer.py)
[`Download Jupyter notebook: plot_hashing_vs_dict_vectorizer.ipynb`](https://scikit-learn.org/1.1/_downloads/06cfc926acb27652fb2aa5bfc583e7cb/plot_hashing_vs_dict_vectorizer.ipynb)
scikit_learn Receiver Operating Characteristic (ROC) with cross validation Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-roc-crossval-py) to download the full example code or to run this example in your browser via Binder
Receiver Operating Characteristic (ROC) with cross validation
=============================================================
Example of Receiver Operating Characteristic (ROC) metric to evaluate classifier output quality using cross-validation.
ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis. This means that the top left corner of the plot is the “ideal” point - a false positive rate of zero, and a true positive rate of one. This is not very realistic, but it does mean that a larger area under the curve (AUC) is usually better.
The “steepness” of ROC curves is also important, since it is ideal to maximize the true positive rate while minimizing the false positive rate.
This example shows the ROC response of different datasets, created from K-fold cross-validation. Taking all of these curves, it is possible to calculate the mean area under curve, and see the variance of the curve when the training set is split into different subsets. This roughly shows how the classifier output is affected by changes in the training data, and how different the splits generated by K-fold cross-validation are from one another.
Note
See also [`sklearn.metrics.roc_auc_score`](../../modules/generated/sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score"),
[`sklearn.model_selection.cross_val_score`](../../modules/generated/sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score"), [Receiver Operating Characteristic (ROC)](plot_roc#sphx-glr-auto-examples-model-selection-plot-roc-py),
Data IO and generation
----------------------
```
import numpy as np
from sklearn import datasets
# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
X, y = X[y != 2], y[y != 2]
n_samples, n_features = X.shape
# Add noisy features
random_state = np.random.RandomState(0)
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
```
Classification and ROC analysis
-------------------------------
```
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.metrics import auc
from sklearn.metrics import RocCurveDisplay
from sklearn.model_selection import StratifiedKFold
# Run classifier with cross-validation and plot ROC curves
cv = StratifiedKFold(n_splits=6)
classifier = svm.SVC(kernel="linear", probability=True, random_state=random_state)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i, (train, test) in enumerate(cv.split(X, y)):
classifier.fit(X[train], y[train])
viz = RocCurveDisplay.from_estimator(
classifier,
X[test],
y[test],
name="ROC fold {}".format(i),
alpha=0.3,
lw=1,
ax=ax,
)
interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr)
interp_tpr[0] = 0.0
tprs.append(interp_tpr)
aucs.append(viz.roc_auc)
ax.plot([0, 1], [0, 1], linestyle="--", lw=2, color="r", label="Chance", alpha=0.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
ax.plot(
mean_fpr,
mean_tpr,
color="b",
label=r"Mean ROC (AUC = %0.2f $\pm$ %0.2f)" % (mean_auc, std_auc),
lw=2,
alpha=0.8,
)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
ax.fill_between(
mean_fpr,
tprs_lower,
tprs_upper,
color="grey",
alpha=0.2,
label=r"$\pm$ 1 std. dev.",
)
ax.set(
xlim=[-0.05, 1.05],
ylim=[-0.05, 1.05],
title="Receiver operating characteristic example",
)
ax.legend(loc="lower right")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.152 seconds)
[`Download Python source code: plot_roc_crossval.py`](https://scikit-learn.org/1.1/_downloads/010337852815f8103ac6cca38a812b3c/plot_roc_crossval.py)
[`Download Jupyter notebook: plot_roc_crossval.ipynb`](https://scikit-learn.org/1.1/_downloads/055e8313e28f2f3b5fd508054dfe5fe0/plot_roc_crossval.ipynb)
scikit_learn Successive Halving Iterations Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-successive-halving-iterations-py) to download the full example code or to run this example in your browser via Binder
Successive Halving Iterations
=============================
This example illustrates how a successive halving search ([`HalvingGridSearchCV`](../../modules/generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") and [`HalvingRandomSearchCV`](../../modules/generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV")) iteratively chooses the best parameter combination out of multiple candidates.
```
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
from scipy.stats import randint
import numpy as np
from sklearn.experimental import enable_halving_search_cv # noqa
from sklearn.model_selection import HalvingRandomSearchCV
from sklearn.ensemble import RandomForestClassifier
```
We first define the parameter space and train a [`HalvingRandomSearchCV`](../../modules/generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV") instance.
```
rng = np.random.RandomState(0)
X, y = datasets.make_classification(n_samples=400, n_features=12, random_state=rng)
clf = RandomForestClassifier(n_estimators=20, random_state=rng)
param_dist = {
"max_depth": [3, None],
"max_features": randint(1, 6),
"min_samples_split": randint(2, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"],
}
rsh = HalvingRandomSearchCV(
estimator=clf, param_distributions=param_dist, factor=2, random_state=rng
)
rsh.fit(X, y)
```
```
HalvingRandomSearchCV(estimator=RandomForestClassifier(n_estimators=20,
random_state=RandomState(MT19937) at 0x7F6E7E761B40),
factor=2,
param_distributions={'bootstrap': [True, False],
'criterion': ['gini', 'entropy'],
'max_depth': [3, None],
'max_features': <scipy.stats._distn_infrastructure.rv_discrete_frozen object at 0x7f6e7e41fc10>,
'min_samples_split': <scipy.stats._distn_infrastructure.rv_discrete_frozen object at 0x7f6dea2acdf0>},
random_state=RandomState(MT19937) at 0x7F6E7E761B40)
```
We can now use the `cv_results_` attribute of the search estimator to inspect and plot the evolution of the search.
```
results = pd.DataFrame(rsh.cv_results_)
results["params_str"] = results.params.apply(str)
results.drop_duplicates(subset=("params_str", "iter"), inplace=True)
mean_scores = results.pivot(
index="iter", columns="params_str", values="mean_test_score"
)
ax = mean_scores.plot(legend=False, alpha=0.6)
labels = [
f"iter={i}\nn_samples={rsh.n_resources_[i]}\nn_candidates={rsh.n_candidates_[i]}"
for i in range(rsh.n_iterations_)
]
ax.set_xticks(range(rsh.n_iterations_))
ax.set_xticklabels(labels, rotation=45, multialignment="left")
ax.set_title("Scores of candidates over iterations")
ax.set_ylabel("mean test score", fontsize=15)
ax.set_xlabel("iterations", fontsize=15)
plt.tight_layout()
plt.show()
```
Number of candidates and amount of resource at each iteration
-------------------------------------------------------------
At the first iteration, a small amount of resources is used. The resource here is the number of samples that the estimators are trained on. All candidates are evaluated.
At the second iteration, only the best half of the candidates is evaluated. The number of allocated resources is doubled: candidates are evaluated on twice as many samples.
This process is repeated until the last iteration, where only 2 candidates are left. The best candidate is the candidate that has the best score at the last iteration.
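The halving schedule can be read directly from the fitted search object. The sketch below reuses the `rsh` instance fitted above (the exact numbers depend on the data and random state):
```
# Print the number of candidates and the number of samples (resources) used at
# each iteration, and the parameters of the winning candidate.
for i in range(rsh.n_iterations_):
    print(
        f"iteration {i}: {rsh.n_candidates_[i]} candidates, "
        f"{rsh.n_resources_[i]} samples each"
    )
print("best parameters:", rsh.best_params_)
```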
**Total running time of the script:** ( 0 minutes 3.973 seconds)
[`Download Python source code: plot_successive_halving_iterations.py`](https://scikit-learn.org/1.1/_downloads/49fae0b4f6ab58738dcbf62236756548/plot_successive_halving_iterations.py)
[`Download Jupyter notebook: plot_successive_halving_iterations.ipynb`](https://scikit-learn.org/1.1/_downloads/23fb33f64b3c23edf25165a3a4f04237/plot_successive_halving_iterations.ipynb)
scikit_learn Precision-Recall Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-precision-recall-py) to download the full example code or to run this example in your browser via Binder
Precision-Recall
================
Example of Precision-Recall metric to evaluate classifier output quality.
Precision-Recall is a useful measure of success of prediction when the classes are very imbalanced. In information retrieval, precision is a measure of result relevancy, while recall is a measure of how many truly relevant results are returned.
The precision-recall curve shows the tradeoff between precision and recall for different thresholds. A high area under the curve represents both high recall and high precision, where high precision relates to a low false positive rate, and high recall relates to a low false negative rate. High scores for both show that the classifier is returning accurate results (high precision), as well as returning a majority of all positive results (high recall).
A system with high recall but low precision returns many results, but most of its predicted labels are incorrect when compared to the training labels. A system with high precision but low recall is just the opposite, returning very few results, but most of its predicted labels are correct when compared to the training labels. An ideal system with high precision and high recall will return many results, with all results labeled correctly.
Precision (\(P\)) is defined as the number of true positives (\(T\_p\)) over the number of true positives plus the number of false positives (\(F\_p\)).
\(P = \frac{T\_p}{T\_p+F\_p}\)
Recall (\(R\)) is defined as the number of true positives (\(T\_p\)) over the number of true positives plus the number of false negatives (\(F\_n\)).
\(R = \frac{T\_p}{T\_p + F\_n}\)
These quantities are also related to the (\(F\_1\)) score, which is defined as the harmonic mean of precision and recall.
\(F1 = 2\frac{P \times R}{P+R}\)
Note that the precision may not decrease with recall. The definition of precision (\(\frac{T\_p}{T\_p + F\_p}\)) shows that lowering the threshold of a classifier may increase the denominator, by increasing the number of results returned. If the threshold was previously set too high, the new results may all be true positives, which will increase precision. If the previous threshold was about right or too low, further lowering the threshold will introduce false positives, decreasing precision.
Recall is defined as \(\frac{T\_p}{T\_p+F\_n}\), where \(T\_p+F\_n\) does not depend on the classifier threshold. This means that lowering the classifier threshold may increase recall, by increasing the number of true positive results. It is also possible that lowering the threshold may leave recall unchanged, while the precision fluctuates.
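As a small numeric illustration of the two previous paragraphs (the counts below are made up for the sake of the example), lowering the threshold turns more samples into predicted positives: the extra true positives raise recall, while the extra false positives can lower precision.
```
# Hypothetical counts before lowering the threshold.
tp, fp, fn = 40, 10, 20
print(f"before: P = {tp / (tp + fp):.2f}, R = {tp / (tp + fn):.2f}")  # P = 0.80, R = 0.67

# Lowering the threshold adds 10 predicted positives: 5 true, 5 false.
tp, fp, fn = tp + 5, fp + 5, fn - 5
print(f"after:  P = {tp / (tp + fp):.2f}, R = {tp / (tp + fn):.2f}")  # P = 0.75, R = 0.75
```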
The relationship between recall and precision can be observed in the stairstep area of the plot - at the edges of these steps a small change in the threshold considerably reduces precision, with only a minor gain in recall.
**Average precision** (AP) summarizes such a plot as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight:
\(\text{AP} = \sum\_n (R\_n - R\_{n-1}) P\_n\)
where \(P\_n\) and \(R\_n\) are the precision and recall at the nth threshold. A pair \((R\_k, P\_k)\) is referred to as an *operating point*.
AP and the trapezoidal area under the operating points ([`sklearn.metrics.auc`](../../modules/generated/sklearn.metrics.auc#sklearn.metrics.auc "sklearn.metrics.auc")) are common ways to summarize a precision-recall curve that lead to different results. Read more in the [User Guide](../../modules/model_evaluation#precision-recall-f-measure-metrics).
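The AP definition above can be checked against `average_precision_score` using the output of `precision_recall_curve`. The sketch below uses made-up binary labels and scores rather than the data of this example:
```
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.55, 0.2])

precision, recall, _ = precision_recall_curve(y_true, y_scores)
# precision_recall_curve returns points in order of decreasing recall, so the
# recall increments of the AP sum are the (negated) successive differences.
ap_manual = -np.sum(np.diff(recall) * precision[:-1])
print(f"manual AP: {ap_manual:.3f}")
print(f"average_precision_score: {average_precision_score(y_true, y_scores):.3f}")
```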
Precision-recall curves are typically used in binary classification to study the output of a classifier. In order to extend the precision-recall curve and average precision to multi-class or multi-label classification, it is necessary to binarize the output. One curve can be drawn per label, but one can also draw a precision-recall curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging).
Note
See also [`sklearn.metrics.average_precision_score`](../../modules/generated/sklearn.metrics.average_precision_score#sklearn.metrics.average_precision_score "sklearn.metrics.average_precision_score"),
[`sklearn.metrics.recall_score`](../../modules/generated/sklearn.metrics.recall_score#sklearn.metrics.recall_score "sklearn.metrics.recall_score"), [`sklearn.metrics.precision_score`](../../modules/generated/sklearn.metrics.precision_score#sklearn.metrics.precision_score "sklearn.metrics.precision_score"), [`sklearn.metrics.f1_score`](../../modules/generated/sklearn.metrics.f1_score#sklearn.metrics.f1_score "sklearn.metrics.f1_score")
In binary classification settings
---------------------------------
### Dataset and model
We will use a Linear SVC classifier to differentiate two types of irises.
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
X, y = load_iris(return_X_y=True)
# Add noisy features
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.concatenate([X, random_state.randn(n_samples, 200 * n_features)], axis=1)
# Limit to the two first classes, and split into training and test
X_train, X_test, y_train, y_test = train_test_split(
X[y < 2], y[y < 2], test_size=0.5, random_state=random_state
)
```
Linear SVC will expect each feature to have a similar range of values. Thus, we will first scale the data using a [`StandardScaler`](../../modules/generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler").
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
classifier = make_pipeline(StandardScaler(), LinearSVC(random_state=random_state))
classifier.fit(X_train, y_train)
```
```
Pipeline(steps=[('standardscaler', StandardScaler()),
('linearsvc',
LinearSVC(random_state=RandomState(MT19937) at 0x7F6E7E761B40))])
```
### Plot the Precision-Recall curve
To plot the precision-recall curve, use [`PrecisionRecallDisplay`](../../modules/generated/sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay "sklearn.metrics.PrecisionRecallDisplay"). Two methods are available, depending on whether you have already computed the predictions of the classifier.
Let’s first plot the precision-recall curve without the classifier predictions. We use [`from_estimator`](../../modules/generated/sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay.from_estimator "sklearn.metrics.PrecisionRecallDisplay.from_estimator") that computes the predictions for us before plotting the curve.
```
from sklearn.metrics import PrecisionRecallDisplay
display = PrecisionRecallDisplay.from_estimator(
classifier, X_test, y_test, name="LinearSVC"
)
_ = display.ax_.set_title("2-class Precision-Recall curve")
```
If we have already computed the estimated probabilities or scores for our model, we can use [`from_predictions`](../../modules/generated/sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay.from_predictions "sklearn.metrics.PrecisionRecallDisplay.from_predictions").
```
y_score = classifier.decision_function(X_test)
display = PrecisionRecallDisplay.from_predictions(y_test, y_score, name="LinearSVC")
_ = display.ax_.set_title("2-class Precision-Recall curve")
```
In multi-label settings
-----------------------
The precision-recall curve does not support the multilabel setting. However, one can decide how to handle this case. We show such an example below.
### Create multi-label data, fit, and predict
We create a multi-label dataset, to illustrate the precision-recall in multi-label settings.
```
from sklearn.preprocessing import label_binarize
# Use label_binarize to be multi-label like settings
Y = label_binarize(y, classes=[0, 1, 2])
n_classes = Y.shape[1]
# Split into training and test
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.5, random_state=random_state
)
```
We use [`OneVsRestClassifier`](../../modules/generated/sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier "sklearn.multiclass.OneVsRestClassifier") for multi-label prediction.
```
from sklearn.multiclass import OneVsRestClassifier
classifier = OneVsRestClassifier(
make_pipeline(StandardScaler(), LinearSVC(random_state=random_state))
)
classifier.fit(X_train, Y_train)
y_score = classifier.decision_function(X_test)
```
### The average precision score in multi-label settings
```
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import average_precision_score
# For each class
precision = dict()
recall = dict()
average_precision = dict()
for i in range(n_classes):
precision[i], recall[i], _ = precision_recall_curve(Y_test[:, i], y_score[:, i])
average_precision[i] = average_precision_score(Y_test[:, i], y_score[:, i])
# A "micro-average": quantifying score on all classes jointly
precision["micro"], recall["micro"], _ = precision_recall_curve(
Y_test.ravel(), y_score.ravel()
)
average_precision["micro"] = average_precision_score(Y_test, y_score, average="micro")
```
### Plot the micro-averaged Precision-Recall curve
```
display = PrecisionRecallDisplay(
recall=recall["micro"],
precision=precision["micro"],
average_precision=average_precision["micro"],
)
display.plot()
_ = display.ax_.set_title("Micro-averaged over all classes")
```
### Plot Precision-Recall curve for each class and iso-f1 curves
```
import matplotlib.pyplot as plt
from itertools import cycle
# setup plot details
colors = cycle(["navy", "turquoise", "darkorange", "cornflowerblue", "teal"])
_, ax = plt.subplots(figsize=(7, 8))
f_scores = np.linspace(0.2, 0.8, num=4)
lines, labels = [], []
for f_score in f_scores:
x = np.linspace(0.01, 1)
y = f_score * x / (2 * x - f_score)
(l,) = plt.plot(x[y >= 0], y[y >= 0], color="gray", alpha=0.2)
plt.annotate("f1={0:0.1f}".format(f_score), xy=(0.9, y[45] + 0.02))
display = PrecisionRecallDisplay(
recall=recall["micro"],
precision=precision["micro"],
average_precision=average_precision["micro"],
)
display.plot(ax=ax, name="Micro-average precision-recall", color="gold")
for i, color in zip(range(n_classes), colors):
display = PrecisionRecallDisplay(
recall=recall[i],
precision=precision[i],
average_precision=average_precision[i],
)
display.plot(ax=ax, name=f"Precision-recall for class {i}", color=color)
# add the legend for the iso-f1 curves
handles, labels = display.ax_.get_legend_handles_labels()
handles.extend([l])
labels.extend(["iso-f1 curves"])
# set the legend and the axes
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.05])
ax.legend(handles=handles, labels=labels, loc="best")
ax.set_title("Extension of Precision-Recall curve to multi-class")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.314 seconds)
[`Download Python source code: plot_precision_recall.py`](https://scikit-learn.org/1.1/_downloads/98161c8b335acb98de356229c1005819/plot_precision_recall.py)
[`Download Jupyter notebook: plot_precision_recall.ipynb`](https://scikit-learn.org/1.1/_downloads/764d061a261a2e06ad21ec9133361b2d/plot_precision_recall.ipynb)
scikit_learn Plotting Learning Curves Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-learning-curve-py) to download the full example code or to run this example in your browser via Binder
Plotting Learning Curves
========================
In the first column, first row the learning curve of a naive Bayes classifier is shown for the digits dataset. Note that the training score and the cross-validation score are both not very good at the end. However, the shape of the curve can be found in more complex datasets very often: the training score is very high at the beginning and decreases, while the cross-validation score is very low at the beginning and increases. In the second column, first row we see the learning curve of an SVM with RBF kernel. We can see clearly that the training score is still around the maximum and the validation score could be increased with more training samples. The plots in the second row show the times required by the models to train with various sizes of the training dataset. The plots in the third row show how the cross-validation score evolves with the time required to train the model.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(
estimator,
title,
X,
y,
axes=None,
ylim=None,
cv=None,
n_jobs=None,
scoring=None,
train_sizes=np.linspace(0.1, 1.0, 5),
):
"""
Generate 3 plots: the test and training learning curve, the training
samples vs fit times curve, the fit times vs score curve.
Parameters
----------
estimator : estimator instance
An estimator instance implementing `fit` and `predict` methods which
will be cloned for each validation.
title : str
Title for the chart.
X : array-like of shape (n_samples, n_features)
Training vector, where ``n_samples`` is the number of samples and
``n_features`` is the number of features.
y : array-like of shape (n_samples) or (n_samples, n_features)
Target relative to ``X`` for classification or regression;
None for unsupervised learning.
axes : array-like of shape (3,), default=None
Axes to use for plotting the curves.
ylim : tuple of shape (2,), default=None
Defines minimum and maximum y-values plotted, e.g. (ymin, ymax).
cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`,
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is not a classifier
or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validators that can be used here.
n_jobs : int or None, default=None
Number of jobs to run in parallel.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
scoring : str or callable, default=None
A str (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
train_sizes : array-like of shape (n_ticks,)
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the ``dtype`` is float, it is regarded
as a fraction of the maximum size of the training set (that is
determined by the selected validation method), i.e. it has to be within
(0, 1]. Otherwise it is interpreted as absolute sizes of the training
sets. Note that for classification the number of samples usually have
to be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
"""
if axes is None:
_, axes = plt.subplots(1, 3, figsize=(20, 5))
axes[0].set_title(title)
if ylim is not None:
axes[0].set_ylim(*ylim)
axes[0].set_xlabel("Training examples")
axes[0].set_ylabel("Score")
train_sizes, train_scores, test_scores, fit_times, _ = learning_curve(
estimator,
X,
y,
scoring=scoring,
cv=cv,
n_jobs=n_jobs,
train_sizes=train_sizes,
return_times=True,
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
fit_times_mean = np.mean(fit_times, axis=1)
fit_times_std = np.std(fit_times, axis=1)
# Plot learning curve
axes[0].grid()
axes[0].fill_between(
train_sizes,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.1,
color="r",
)
axes[0].fill_between(
train_sizes,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.1,
color="g",
)
axes[0].plot(
train_sizes, train_scores_mean, "o-", color="r", label="Training score"
)
axes[0].plot(
train_sizes, test_scores_mean, "o-", color="g", label="Cross-validation score"
)
axes[0].legend(loc="best")
# Plot n_samples vs fit_times
axes[1].grid()
axes[1].plot(train_sizes, fit_times_mean, "o-")
axes[1].fill_between(
train_sizes,
fit_times_mean - fit_times_std,
fit_times_mean + fit_times_std,
alpha=0.1,
)
axes[1].set_xlabel("Training examples")
axes[1].set_ylabel("fit_times")
axes[1].set_title("Scalability of the model")
# Plot fit_time vs score
fit_time_argsort = fit_times_mean.argsort()
fit_time_sorted = fit_times_mean[fit_time_argsort]
test_scores_mean_sorted = test_scores_mean[fit_time_argsort]
test_scores_std_sorted = test_scores_std[fit_time_argsort]
axes[2].grid()
axes[2].plot(fit_time_sorted, test_scores_mean_sorted, "o-")
axes[2].fill_between(
fit_time_sorted,
test_scores_mean_sorted - test_scores_std_sorted,
test_scores_mean_sorted + test_scores_std_sorted,
alpha=0.1,
)
axes[2].set_xlabel("fit_times")
axes[2].set_ylabel("Score")
axes[2].set_title("Performance of the model")
return plt
fig, axes = plt.subplots(3, 2, figsize=(10, 15))
X, y = load_digits(return_X_y=True)
title = "Learning Curves (Naive Bayes)"
# Cross validation with 50 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)
estimator = GaussianNB()
plot_learning_curve(
estimator,
title,
X,
y,
axes=axes[:, 0],
ylim=(0.7, 1.01),
cv=cv,
n_jobs=4,
scoring="accuracy",
)
title = r"Learning Curves (SVM, RBF kernel, $\gamma=0.001$)"
# SVC is more expensive so we do a lower number of CV iterations:
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
estimator = SVC(gamma=0.001)
plot_learning_curve(
estimator, title, X, y, axes=axes[:, 1], ylim=(0.7, 1.01), cv=cv, n_jobs=4
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 2.896 seconds)
[`Download Python source code: plot_learning_curve.py`](https://scikit-learn.org/1.1/_downloads/57163227aeb4c19ca4c69b87a8d1949c/plot_learning_curve.py)
[`Download Jupyter notebook: plot_learning_curve.ipynb`](https://scikit-learn.org/1.1/_downloads/ca0bfe2435d9b3fffe21c713e63d3a6f/plot_learning_curve.ipynb)
scikit_learn Statistical comparison of models using grid search Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-grid-search-stats-py) to download the full example code or to run this example in your browser via Binder
Statistical comparison of models using grid search
==================================================
This example illustrates how to statistically compare the performance of models trained and evaluated using [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV").
We will start by simulating moon shaped data (where the ideal separation between classes is non-linear), adding to it a moderate degree of noise. Data points will belong to one of two possible classes, to be predicted from two features. We will simulate 50 samples for each class:
```
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_moons
X, y = make_moons(noise=0.352, random_state=1, n_samples=100)
sns.scatterplot(
x=X[:, 0], y=X[:, 1], hue=y, marker="o", s=25, edgecolor="k", legend=False
).set_title("Data")
plt.show()
```
We will compare the performance of [`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") estimators that vary on their `kernel` parameter, to decide which choice of this hyper-parameter predicts our simulated data best. We will evaluate the performance of the models using [`RepeatedStratifiedKFold`](../../modules/generated/sklearn.model_selection.repeatedstratifiedkfold#sklearn.model_selection.RepeatedStratifiedKFold "sklearn.model_selection.RepeatedStratifiedKFold"), repeating 10 times a 10-fold stratified cross validation using a different randomization of the data in each repetition. The performance will be evaluated using [`roc_auc_score`](../../modules/generated/sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score").
```
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.svm import SVC
param_grid = [
{"kernel": ["linear"]},
{"kernel": ["poly"], "degree": [2, 3]},
{"kernel": ["rbf"]},
]
svc = SVC(random_state=0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
search = GridSearchCV(estimator=svc, param_grid=param_grid, scoring="roc_auc", cv=cv)
search.fit(X, y)
```
```
GridSearchCV(cv=RepeatedStratifiedKFold(n_repeats=10, n_splits=10, random_state=0),
estimator=SVC(random_state=0),
param_grid=[{'kernel': ['linear']},
{'degree': [2, 3], 'kernel': ['poly']},
{'kernel': ['rbf']}],
scoring='roc_auc')
```
We can now inspect the results of our search, sorted by their `mean_test_score`:
```
import pandas as pd
results_df = pd.DataFrame(search.cv_results_)
results_df = results_df.sort_values(by=["rank_test_score"])
results_df = results_df.set_index(
results_df["params"].apply(lambda x: "_".join(str(val) for val in x.values()))
).rename_axis("kernel")
results_df[["params", "rank_test_score", "mean_test_score", "std_test_score"]]
```
| | params | rank\_test\_score | mean\_test\_score | std\_test\_score |
| --- | --- | --- | --- | --- |
| kernel | | | | |
| rbf | {'kernel': 'rbf'} | 1 | 0.9400 | 0.079297 |
| linear | {'kernel': 'linear'} | 2 | 0.9300 | 0.077846 |
| 3\_poly | {'degree': 3, 'kernel': 'poly'} | 3 | 0.9044 | 0.098776 |
| 2\_poly | {'degree': 2, 'kernel': 'poly'} | 4 | 0.6852 | 0.169106 |
We can see that the estimator using the `'rbf'` kernel performed best, closely followed by `'linear'`. Both estimators with a `'poly'` kernel performed worse, with the one using a two-degree polynomial achieving a much lower performance than all other models.
Usually, the analysis just ends here, but half the story is missing. The output of [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") does not provide information on the certainty of the differences between the models. We don’t know if these are **statistically** significant. To evaluate this, we need to conduct a statistical test. Specifically, to contrast the performance of two models we should statistically compare their AUC scores. There are 100 samples (AUC scores) for each model, as we repeated a 10-fold cross-validation 10 times.
However, the scores of the models are not independent: all models are evaluated on the **same** 100 partitions, increasing the correlation between the performance of the models. Since some partitions of the data can make the classes particularly easy or hard to distinguish for all models, the models’ scores will co-vary.
Let’s inspect this partition effect by plotting the performance of all models in each fold, and calculating the correlation between models across folds:
```
# create df of model scores ordered by performance
model_scores = results_df.filter(regex=r"split\d*_test_score")
# plot 30 examples of dependency between cv fold and AUC scores
fig, ax = plt.subplots()
sns.lineplot(
data=model_scores.transpose().iloc[:30],
dashes=False,
palette="Set1",
marker="o",
alpha=0.5,
ax=ax,
)
ax.set_xlabel("CV test fold", size=12, labelpad=10)
ax.set_ylabel("Model AUC", size=12)
ax.tick_params(bottom=True, labelbottom=False)
plt.show()
# print correlation of AUC scores across folds
print(f"Correlation of models:\n {model_scores.transpose().corr()}")
```
```
Correlation of models:
kernel rbf linear 3_poly 2_poly
kernel
rbf 1.000000 0.882561 0.783392 0.351390
linear 0.882561 1.000000 0.746492 0.298688
3_poly 0.783392 0.746492 1.000000 0.355440
2_poly 0.351390 0.298688 0.355440 1.000000
```
We can observe that the performance of the models highly depends on the fold.
As a consequence, if we assume independence between samples, we will underestimate the variance computed in our statistical tests, increasing the number of false positive errors (i.e. detecting a significant difference between models when none exists) [[1]](#id10).
Several variance-corrected statistical tests have been developed for these cases. In this example we will show how to implement one of them (the so called Nadeau and Bengio’s corrected t-test) under two different statistical frameworks: frequentist and Bayesian.
Comparing two models: frequentist approach
------------------------------------------
We can start by asking: “Is the first model significantly better than the second model (when ranked by `mean_test_score`)?”
To answer this question using a frequentist approach we could run a paired t-test and compute the p-value. This is also known as Diebold-Mariano test in the forecast literature [[5]](#id14). Many variants of such a t-test have been developed to account for the ‘non-independence of samples problem’ described in the previous section. We will use the one proven to obtain the highest replicability scores (which rate how similar the performance of a model is when evaluating it on different random partitions of the same dataset) while maintaining a low rate of false positives and false negatives: the Nadeau and Bengio’s corrected t-test [[2]](#id11) that uses a 10 times repeated 10-fold cross validation [[3]](#id12).
This corrected paired t-test is computed as:
\[t=\frac{\frac{1}{k \cdot r}\sum\_{i=1}^{k}\sum\_{j=1}^{r}x\_{ij}} {\sqrt{(\frac{1}{k \cdot r}+\frac{n\_{test}}{n\_{train}})\hat{\sigma}^2}}\] where \(k\) is the number of folds, \(r\) the number of repetitions in the cross-validation, \(x\) is the difference in performance of the models, \(n\_{test}\) is the number of samples used for testing, \(n\_{train}\) is the number of samples used for training, and \(\hat{\sigma}^2\) represents the variance of the observed differences.
Let’s implement a corrected right-tailed paired t-test to evaluate whether the performance of the first model is significantly better than that of the second model. Our null hypothesis is that the second model performs at least as well as the first model.
```
import numpy as np
from scipy.stats import t
def corrected_std(differences, n_train, n_test):
"""Corrects standard deviation using Nadeau and Bengio's approach.
Parameters
----------
differences : ndarray of shape (n_samples,)
Vector containing the differences in the score metrics of two models.
n_train : int
Number of samples in the training set.
n_test : int
Number of samples in the testing set.
Returns
-------
corrected_std : float
Variance-corrected standard deviation of the set of differences.
"""
# kr = k times r, r times repeated k-fold crossvalidation,
# kr equals the number of times the model was evaluated
kr = len(differences)
corrected_var = np.var(differences, ddof=1) * (1 / kr + n_test / n_train)
corrected_std = np.sqrt(corrected_var)
return corrected_std
def compute_corrected_ttest(differences, df, n_train, n_test):
"""Computes right-tailed paired t-test with corrected variance.
Parameters
----------
differences : array-like of shape (n_samples,)
Vector containing the differences in the score metrics of two models.
df : int
Degrees of freedom.
n_train : int
Number of samples in the training set.
n_test : int
Number of samples in the testing set.
Returns
-------
t_stat : float
Variance-corrected t-statistic.
p_val : float
Variance-corrected p-value.
"""
mean = np.mean(differences)
std = corrected_std(differences, n_train, n_test)
t_stat = mean / std
p_val = t.sf(np.abs(t_stat), df) # right-tailed t-test
return t_stat, p_val
```
```
model_1_scores = model_scores.iloc[0].values # scores of the best model
model_2_scores = model_scores.iloc[1].values # scores of the second-best model
differences = model_1_scores - model_2_scores
n = differences.shape[0] # number of test sets
df = n - 1
n_train = len(list(cv.split(X, y))[0][0])
n_test = len(list(cv.split(X, y))[0][1])
t_stat, p_val = compute_corrected_ttest(differences, df, n_train, n_test)
print(f"Corrected t-value: {t_stat:.3f}\nCorrected p-value: {p_val:.3f}")
```
```
Corrected t-value: 0.750
Corrected p-value: 0.227
```
We can compare the corrected t- and p-values with the uncorrected ones:
```
t_stat_uncorrected = np.mean(differences) / np.sqrt(np.var(differences, ddof=1) / n)
p_val_uncorrected = t.sf(np.abs(t_stat_uncorrected), df)
print(
f"Uncorrected t-value: {t_stat_uncorrected:.3f}\n"
f"Uncorrected p-value: {p_val_uncorrected:.3f}"
)
```
```
Uncorrected t-value: 2.611
Uncorrected p-value: 0.005
```
Using the conventional significance alpha level at `p=0.05`, we observe that the uncorrected t-test concludes that the first model is significantly better than the second.
With the corrected approach, in contrast, we fail to detect this difference.
In the latter case, however, the frequentist approach does not let us conclude that the first and second model have an equivalent performance. If we wanted to make this assertion, we would need to use a Bayesian approach.
Comparing two models: Bayesian approach
---------------------------------------
We can use Bayesian estimation to calculate the probability that the first model is better than the second. Bayesian estimation will output a distribution followed by the mean \(\mu\) of the differences in the performance of two models.
To obtain the posterior distribution we need to define a prior that models our beliefs of how the mean is distributed before looking at the data, and multiply it by a likelihood function that computes how likely our observed differences are, given the values that the mean of differences could take.
Bayesian estimation can be carried out in many forms to answer our question, but in this example we will implement the approach suggested by Benavoli and colleagues [[4]](#id13).
One way of defining our posterior using a closed-form expression is to select a prior conjugate to the likelihood function. Benavoli and colleagues [[4]](#id13) show that when comparing the performance of two classifiers we can model the prior as a Normal-Gamma distribution (with both mean and variance unknown) conjugate to a normal likelihood, to thus express the posterior as a normal distribution. Marginalizing out the variance from this normal posterior, we can define the posterior of the mean parameter as a Student’s t-distribution. Specifically:
\[St(\mu;n-1,\overline{x},(\frac{1}{n}+\frac{n\_{test}}{n\_{train}}) \hat{\sigma}^2)\] where \(n\) is the total number of samples, \(\overline{x}\) represents the mean difference in the scores, \(n\_{test}\) is the number of samples used for testing, \(n\_{train}\) is the number of samples used for training, and \(\hat{\sigma}^2\) represents the variance of the observed differences.
Notice that we are using Nadeau and Bengio’s corrected variance in our Bayesian approach as well.
Let’s compute and plot the posterior:
```
# initialize random variable
t_post = t(
df, loc=np.mean(differences), scale=corrected_std(differences, n_train, n_test)
)
```
Let’s plot the posterior distribution:
```
x = np.linspace(t_post.ppf(0.001), t_post.ppf(0.999), 100)
plt.plot(x, t_post.pdf(x))
plt.xticks(np.arange(-0.04, 0.06, 0.01))
plt.fill_between(x, t_post.pdf(x), 0, facecolor="blue", alpha=0.2)
plt.ylabel("Probability density")
plt.xlabel(r"Mean difference ($\mu$)")
plt.title("Posterior distribution")
plt.show()
```
We can calculate the probability that the first model is better than the second by computing the area under the curve of the posterior distribution from zero to infinity. And also the reverse: we can calculate the probability that the second model is better than the first by computing the area under the curve from minus infinity to zero.
```
better_prob = 1 - t_post.cdf(0)
print(
f"Probability of {model_scores.index[0]} being more accurate than "
f"{model_scores.index[1]}: {better_prob:.3f}"
)
print(
f"Probability of {model_scores.index[1]} being more accurate than "
f"{model_scores.index[0]}: {1 - better_prob:.3f}"
)
```
```
Probability of rbf being more accurate than linear: 0.773
Probability of linear being more accurate than rbf: 0.227
```
In contrast with the frequentist approach, we can compute the probability that one model is better than the other.
Note that we obtained similar results as those in the frequentist approach. Given our choice of priors, we are essentially performing the same computations, but we are allowed to make different assertions.
### Region of Practical Equivalence
Sometimes we are interested in determining the probabilities that our models have an equivalent performance, where “equivalent” is defined in a practical way. A naive approach [[4]](#id13) would be to define estimators as practically equivalent when they differ by less than 1% in their accuracy. But we could also define this practical equivalence taking into account the problem we are trying to solve. For example, a difference of 5% in accuracy would mean an increase of $1000 in sales, and we consider any quantity above that as relevant for our business.
In this example we are going to define the Region of Practical Equivalence (ROPE) to be \([-0.01, 0.01]\). That is, we will consider two models as practically equivalent if they differ by less than 1% in their performance.
To compute the probabilities of the classifiers being practically equivalent, we calculate the area under the curve of the posterior over the ROPE interval:
```
rope_interval = [-0.01, 0.01]
rope_prob = t_post.cdf(rope_interval[1]) - t_post.cdf(rope_interval[0])
print(
f"Probability of {model_scores.index[0]} and {model_scores.index[1]} "
f"being practically equivalent: {rope_prob:.3f}"
)
```
```
Probability of rbf and linear being practically equivalent: 0.432
```
We can plot how the posterior is distributed over the ROPE interval:
```
x_rope = np.linspace(rope_interval[0], rope_interval[1], 100)
plt.plot(x, t_post.pdf(x))
plt.xticks(np.arange(-0.04, 0.06, 0.01))
plt.vlines([-0.01, 0.01], ymin=0, ymax=(np.max(t_post.pdf(x)) + 1))
plt.fill_between(x_rope, t_post.pdf(x_rope), 0, facecolor="blue", alpha=0.2)
plt.ylabel("Probability density")
plt.xlabel(r"Mean difference ($\mu$)")
plt.title("Posterior distribution under the ROPE")
plt.show()
```
As suggested in [[4]](#id13), we can further interpret these probabilities using the same criteria as the frequentist approach: is the probability of falling inside the ROPE bigger than 95% (alpha value of 5%)? In that case we can conclude that both models are practically equivalent.
The Bayesian estimation approach also allows us to compute how uncertain we are about our estimation of the difference. This can be calculated using credible intervals. For a given probability, they show the range of values that the estimated quantity, in our case the mean difference in performance, can take. For example, a 50% credible interval [x, y] tells us that there is a 50% probability that the true (mean) difference of performance between models is between x and y.
Let’s determine the credible intervals of our data using 50%, 75% and 95%:
```
cred_intervals = []
intervals = [0.5, 0.75, 0.95]
for interval in intervals:
cred_interval = list(t_post.interval(interval))
cred_intervals.append([interval, cred_interval[0], cred_interval[1]])
cred_int_df = pd.DataFrame(
cred_intervals, columns=["interval", "lower value", "upper value"]
).set_index("interval")
cred_int_df
```
| | lower value | upper value |
| --- | --- | --- |
| interval | | |
| 0.50 | 0.000977 | 0.019023 |
| 0.75 | -0.005422 | 0.025422 |
| 0.95 | -0.016445 | 0.036445 |
As shown in the table, there is a 50% probability that the true mean difference between models will be between 0.000977 and 0.019023, a 75% probability that it will be between -0.005422 and 0.025422, and a 95% probability that it will be between -0.016445 and 0.036445.
Pairwise comparison of all models: frequentist approach
-------------------------------------------------------
We could also be interested in comparing the performance of all our models evaluated with [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV"). In this case we would be running our statistical test multiple times, which leads us to the [multiple comparisons problem](https://en.wikipedia.org/wiki/Multiple_comparisons_problem).
There are many possible ways to tackle this problem, but a standard approach is to apply a [Bonferroni correction](https://en.wikipedia.org/wiki/Bonferroni_correction). Bonferroni can be computed by multiplying the p-value by the number of comparisons we are testing.
Let’s compare the performance of the models using the corrected t-test:
```
from itertools import combinations
from math import factorial
n_comparisons = factorial(len(model_scores)) / (
factorial(2) * factorial(len(model_scores) - 2)
)
pairwise_t_test = []
for model_i, model_k in combinations(range(len(model_scores)), 2):
model_i_scores = model_scores.iloc[model_i].values
model_k_scores = model_scores.iloc[model_k].values
differences = model_i_scores - model_k_scores
t_stat, p_val = compute_corrected_ttest(differences, df, n_train, n_test)
p_val *= n_comparisons # implement Bonferroni correction
# Bonferroni can output p-values higher than 1
p_val = 1 if p_val > 1 else p_val
pairwise_t_test.append(
[model_scores.index[model_i], model_scores.index[model_k], t_stat, p_val]
)
pairwise_comp_df = pd.DataFrame(
pairwise_t_test, columns=["model_1", "model_2", "t_stat", "p_val"]
).round(3)
pairwise_comp_df
```
| | model\_1 | model\_2 | t\_stat | p\_val |
| --- | --- | --- | --- | --- |
| 0 | rbf | linear | 0.750 | 1.000 |
| 1 | rbf | 3\_poly | 1.657 | 0.302 |
| 2 | rbf | 2\_poly | 4.565 | 0.000 |
| 3 | linear | 3\_poly | 1.111 | 0.807 |
| 4 | linear | 2\_poly | 4.276 | 0.000 |
| 5 | 3\_poly | 2\_poly | 3.851 | 0.001 |
We observe that after correcting for multiple comparisons, the only model that significantly differs from the others is `'2_poly'`. `'rbf'`, the model ranked first by [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV"), does not significantly differ from `'linear'` or `'3_poly'`.
Pairwise comparison of all models: Bayesian approach
----------------------------------------------------
When using Bayesian estimation to compare multiple models, we don’t need to correct for multiple comparisons (for reasons why see [[4]](#id13)).
We can carry out our pairwise comparisons the same way as in the first section:
```
pairwise_bayesian = []
for model_i, model_k in combinations(range(len(model_scores)), 2):
model_i_scores = model_scores.iloc[model_i].values
model_k_scores = model_scores.iloc[model_k].values
differences = model_i_scores - model_k_scores
t_post = t(
df, loc=np.mean(differences), scale=corrected_std(differences, n_train, n_test)
)
worse_prob = t_post.cdf(rope_interval[0])
better_prob = 1 - t_post.cdf(rope_interval[1])
rope_prob = t_post.cdf(rope_interval[1]) - t_post.cdf(rope_interval[0])
pairwise_bayesian.append([worse_prob, better_prob, rope_prob])
pairwise_bayesian_df = pd.DataFrame(
pairwise_bayesian, columns=["worse_prob", "better_prob", "rope_prob"]
).round(3)
pairwise_comp_df = pairwise_comp_df.join(pairwise_bayesian_df)
pairwise_comp_df
```
| | model\_1 | model\_2 | t\_stat | p\_val | worse\_prob | better\_prob | rope\_prob |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | rbf | linear | 0.750 | 1.000 | 0.068 | 0.500 | 0.432 |
| 1 | rbf | 3\_poly | 1.657 | 0.302 | 0.018 | 0.882 | 0.100 |
| 2 | rbf | 2\_poly | 4.565 | 0.000 | 0.000 | 1.000 | 0.000 |
| 3 | linear | 3\_poly | 1.111 | 0.807 | 0.063 | 0.750 | 0.187 |
| 4 | linear | 2\_poly | 4.276 | 0.000 | 0.000 | 1.000 | 0.000 |
| 5 | 3\_poly | 2\_poly | 3.851 | 0.001 | 0.000 | 1.000 | 0.000 |
Using the Bayesian approach we can compute the probability that one model is better than, worse than, or practically equivalent to another.
Results show that the model ranked first by [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV"), `'rbf'`, has approximately a 6.8% chance of being worse than `'linear'`, and a 1.8% chance of being worse than `'3_poly'`. `'rbf'` and `'linear'` have a 43% probability of being practically equivalent, while `'rbf'` and `'3_poly'` have a 10% chance of being so.
Similarly to the conclusions obtained using the frequentist approach, all models have a 100% probability of being better than `'2_poly'`, and none have a practically equivalent performance with the latter.
Take-home messages
------------------
* Small differences in performance measures might easily turn out to be merely due to chance, rather than because one model systematically predicts better than the other. As shown in this example, statistics can tell you how likely that is.
* When statistically comparing the performance of two models evaluated in GridSearchCV, it is necessary to correct the calculated variance, which would otherwise be underestimated since the scores of the models are not independent of each other.
* A frequentist approach that uses a (variance-corrected) paired t-test can tell us if the performance of one model is better than another with a degree of certainty above chance.
* A Bayesian approach can provide the probabilities of one model being better than, worse than, or practically equivalent to another. It can also tell us how confident we are that the true difference between our models falls within a certain range of values.
* If multiple models are statistically compared, a multiple comparisons correction is needed when using the frequentist approach.
**Total running time of the script:** ( 0 minutes 1.095 seconds)
[`Download Python source code: plot_grid_search_stats.py`](https://scikit-learn.org/1.1/_downloads/efb3df90d4ec295fa0dafe6c8b46211b/plot_grid_search_stats.py)
[`Download Jupyter notebook: plot_grid_search_stats.ipynb`](https://scikit-learn.org/1.1/_downloads/2402de18d671ce5087e3760b2540184f/plot_grid_search_stats.ipynb)
scikit_learn Custom refit strategy of a grid search with cross-validation Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-grid-search-digits-py) to download the full example code or to run this example in your browser via Binder
Custom refit strategy of a grid search with cross-validation
============================================================
This example shows how a classifier is optimized by cross-validation, which is done using the [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") object on a development set that comprises only half of the available labeled data.
The performance of the selected hyper-parameters and trained model is then measured on a dedicated evaluation set that was not used during the model selection step.
More details on tools available for model selection can be found in the sections on [Cross-validation: evaluating estimator performance](../../modules/cross_validation#cross-validation) and [Tuning the hyper-parameters of an estimator](../../modules/grid_search#grid-search).
The dataset
-----------
We will work with the `digits` dataset. The goal is to classify handwritten digit images. We transform the problem into a binary classification for easier understanding: the goal is to identify whether a digit is `8` or not.
```
from sklearn import datasets
digits = datasets.load_digits()
```
In order to train a classifier on images, we need to flatten them into vectors. Each image of 8 by 8 pixels needs to be transformed to a vector of 64 pixels. Thus, we will get a final data array of shape `(n_images, n_pixels)`.
```
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
y = digits.target == 8
print(
f"The number of images is {X.shape[0]} and each image contains {X.shape[1]} pixels"
)
```
```
The number of images is 1797 and each image contains 64 pixels
```
As presented in the introduction, the data will be split into a training and a testing set of equal size.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
```
Define our grid-search strategy
-------------------------------
We will select a classifier by searching the best hyper-parameters on folds of the training set. To do this, we need to define the scores to select the best candidate.
```
scores = ["precision", "recall"]
```
We can also define a function to be passed to the `refit` parameter of the [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") instance. It will implement the custom strategy to select the best candidate from the `cv_results_` attribute of the [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV"). Once the candidate is selected, it is automatically refitted by the [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") instance.
Here, the strategy is to short-list the models which are the best in terms of precision and recall. From the selected models, we finally select the fastest model at predicting. Notice that these custom choices are completely arbitrary.
```
import pandas as pd
def print_dataframe(filtered_cv_results):
"""Pretty print for filtered dataframe"""
for mean_precision, std_precision, mean_recall, std_recall, params in zip(
filtered_cv_results["mean_test_precision"],
filtered_cv_results["std_test_precision"],
filtered_cv_results["mean_test_recall"],
filtered_cv_results["std_test_recall"],
filtered_cv_results["params"],
):
print(
f"precision: {mean_precision:0.3f} (±{std_precision:0.03f}),"
f" recall: {mean_recall:0.3f} (±{std_recall:0.03f}),"
f" for {params}"
)
print()
def refit_strategy(cv_results):
"""Define the strategy to select the best estimator.
The strategy defined here is to filter-out all results below a precision threshold
    of 0.98, rank the remaining by recall and keep all models within one standard
deviation of the best by recall. Once these models are selected, we can select the
fastest model to predict.
Parameters
----------
cv_results : dict of numpy (masked) ndarrays
CV results as returned by the `GridSearchCV`.
Returns
-------
best_index : int
The index of the best estimator as it appears in `cv_results`.
"""
# print the info about the grid-search for the different scores
precision_threshold = 0.98
cv_results_ = pd.DataFrame(cv_results)
print("All grid-search results:")
print_dataframe(cv_results_)
# Filter-out all results below the threshold
high_precision_cv_results = cv_results_[
cv_results_["mean_test_precision"] > precision_threshold
]
print(f"Models with a precision higher than {precision_threshold}:")
print_dataframe(high_precision_cv_results)
high_precision_cv_results = high_precision_cv_results[
[
"mean_score_time",
"mean_test_recall",
"std_test_recall",
"mean_test_precision",
"std_test_precision",
"rank_test_recall",
"rank_test_precision",
"params",
]
]
# Select the most performant models in terms of recall
# (within 1 sigma from the best)
best_recall_std = high_precision_cv_results["mean_test_recall"].std()
best_recall = high_precision_cv_results["mean_test_recall"].max()
best_recall_threshold = best_recall - best_recall_std
high_recall_cv_results = high_precision_cv_results[
high_precision_cv_results["mean_test_recall"] > best_recall_threshold
]
print(
"Out of the previously selected high precision models, we keep all the\n"
"the models within one standard deviation of the highest recall model:"
)
print_dataframe(high_recall_cv_results)
# From the best candidates, select the fastest model to predict
fastest_top_recall_high_precision_index = high_recall_cv_results[
"mean_score_time"
].idxmin()
print(
"\nThe selected final model is the fastest to predict out of the previously\n"
"selected subset of best models based on precision and recall.\n"
"Its scoring time is:\n\n"
f"{high_recall_cv_results.loc[fastest_top_recall_high_precision_index]}"
)
return fastest_top_recall_high_precision_index
```
Tuning hyper-parameters
-----------------------
Once we defined our strategy to select the best model, we define the values of the hyper-parameters and create the grid-search instance:
```
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
tuned_parameters = [
{"kernel": ["rbf"], "gamma": [1e-3, 1e-4], "C": [1, 10, 100, 1000]},
{"kernel": ["linear"], "C": [1, 10, 100, 1000]},
]
grid_search = GridSearchCV(
SVC(), tuned_parameters, scoring=scores, refit=refit_strategy
)
grid_search.fit(X_train, y_train)
```
```
All grid-search results:
precision: 1.000 (±0.000), recall: 0.854 (±0.063), for {'C': 1, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.257 (±0.061), for {'C': 1, 'gamma': 0.0001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 10, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 0.968 (±0.039), recall: 0.780 (±0.083), for {'C': 10, 'gamma': 0.0001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 100, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 0.905 (±0.058), recall: 0.889 (±0.074), for {'C': 100, 'gamma': 0.0001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 1000, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 0.904 (±0.058), recall: 0.890 (±0.073), for {'C': 1000, 'gamma': 0.0001, 'kernel': 'rbf'}
precision: 0.695 (±0.073), recall: 0.743 (±0.065), for {'C': 1, 'kernel': 'linear'}
precision: 0.643 (±0.066), recall: 0.757 (±0.066), for {'C': 10, 'kernel': 'linear'}
precision: 0.611 (±0.028), recall: 0.744 (±0.044), for {'C': 100, 'kernel': 'linear'}
precision: 0.618 (±0.039), recall: 0.744 (±0.044), for {'C': 1000, 'kernel': 'linear'}
Models with a precision higher than 0.98:
precision: 1.000 (±0.000), recall: 0.854 (±0.063), for {'C': 1, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.257 (±0.061), for {'C': 1, 'gamma': 0.0001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 10, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 100, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 1000, 'gamma': 0.001, 'kernel': 'rbf'}
Out of the previously selected high precision models, we keep all
the models within one standard deviation of the highest recall model:
precision: 1.000 (±0.000), recall: 0.854 (±0.063), for {'C': 1, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 10, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 100, 'gamma': 0.001, 'kernel': 'rbf'}
precision: 1.000 (±0.000), recall: 0.877 (±0.069), for {'C': 1000, 'gamma': 0.001, 'kernel': 'rbf'}
The selected final model is the fastest to predict out of the previously
selected subset of best models based on precision and recall.
Its scoring time is:
mean_score_time 0.003016
mean_test_recall 0.877206
std_test_recall 0.069196
mean_test_precision 1.0
std_test_precision 0.0
rank_test_recall 3
rank_test_precision 1
params {'C': 100, 'gamma': 0.001, 'kernel': 'rbf'}
Name: 4, dtype: object
```
```
GridSearchCV(estimator=SVC(),
param_grid=[{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001],
'kernel': ['rbf']},
{'C': [1, 10, 100, 1000], 'kernel': ['linear']}],
refit=<function refit_strategy at 0x7f6e7fb80310>,
scoring=['precision', 'recall'])
```
The parameters selected by the grid-search with our custom strategy are:
```
grid_search.best_params_
```
```
{'C': 100, 'gamma': 0.001, 'kernel': 'rbf'}
```
Finally, we evaluate the fine-tuned model on the left-out evaluation set: the `grid_search` object **has automatically been refit** on the full training set with the parameters selected by our custom refit strategy.
We can use the classification report to compute standard classification metrics on the left-out set:
```
from sklearn.metrics import classification_report
y_pred = grid_search.predict(X_test)
print(classification_report(y_test, y_pred))
```
```
precision recall f1-score support
False 0.99 1.00 0.99 807
True 1.00 0.87 0.93 92
accuracy 0.99 899
macro avg 0.99 0.93 0.96 899
weighted avg 0.99 0.99 0.99 899
```
Note
The problem is too easy: the hyperparameter plateau is too flat and the output model is the same for precision and recall with ties in quality.
**Total running time of the script:** ( 0 minutes 9.450 seconds)
[`Download Python source code: plot_grid_search_digits.py`](https://scikit-learn.org/1.1/_downloads/ffc6ef5575b0aa920abd7a8113265839/plot_grid_search_digits.py)
[`Download Jupyter notebook: plot_grid_search_digits.ipynb`](https://scikit-learn.org/1.1/_downloads/f4a89bf823d814fee03a693df158d83a/plot_grid_search_digits.ipynb)
scikit_learn Train error vs Test error Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-train-error-vs-test-error-py) to download the full example code or to run this example in your browser via Binder
Train error vs Test error
=========================
Illustration of how the performance of an estimator on unseen data (test data) is not the same as the performance on training data. As the regularization increases, the performance on the training set decreases, while the performance on the test set is optimal within a range of values of the regularization parameter. The example uses an Elastic-Net regression model, and the performance is measured using the explained variance, a.k.a. R^2.
```
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
```
Generate sample data
--------------------
```
import numpy as np
from sklearn import linear_model
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
n_samples_train, n_samples_test, n_features = 75, 150, 500
X, y, coef = make_regression(
n_samples=n_samples_train + n_samples_test,
n_features=n_features,
n_informative=50,
shuffle=False,
noise=1.0,
coef=True,
)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=n_samples_train, test_size=n_samples_test, shuffle=False
)
```
Compute train and test errors
-----------------------------
```
alphas = np.logspace(-5, 1, 60)
enet = linear_model.ElasticNet(l1_ratio=0.7, max_iter=10000)
train_errors = list()
test_errors = list()
for alpha in alphas:
enet.set_params(alpha=alpha)
enet.fit(X_train, y_train)
train_errors.append(enet.score(X_train, y_train))
test_errors.append(enet.score(X_test, y_test))
i_alpha_optim = np.argmax(test_errors)
alpha_optim = alphas[i_alpha_optim]
print("Optimal regularization parameter : %s" % alpha_optim)
# Estimate the coef_ on full data with optimal regularization parameter
enet.set_params(alpha=alpha_optim)
coef_ = enet.fit(X, y).coef_
```
```
Optimal regularization parameter : 0.00026529484644318975
```
Plot results functions
----------------------
```
import matplotlib.pyplot as plt
plt.subplot(2, 1, 1)
plt.semilogx(alphas, train_errors, label="Train")
plt.semilogx(alphas, test_errors, label="Test")
plt.vlines(
alpha_optim,
plt.ylim()[0],
np.max(test_errors),
color="k",
linewidth=3,
label="Optimum on test",
)
plt.legend(loc="lower left")
plt.ylim([0, 1.2])
plt.xlabel("Regularization parameter")
plt.ylabel("Performance")
# Show estimated coef_ vs true coef
plt.subplot(2, 1, 2)
plt.plot(coef, label="True coef")
plt.plot(coef_, label="Estimated coef")
plt.legend()
plt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.26)
plt.show()
```
**Total running time of the script:** ( 0 minutes 6.616 seconds)
[`Download Python source code: plot_train_error_vs_test_error.py`](https://scikit-learn.org/1.1/_downloads/dcb776e3eb7cce048909ddcd70100917/plot_train_error_vs_test_error.py)
[`Download Jupyter notebook: plot_train_error_vs_test_error.ipynb`](https://scikit-learn.org/1.1/_downloads/b49810e68af99a01e25ba2dfc951b687/plot_train_error_vs_test_error.ipynb)
scikit_learn Receiver Operating Characteristic (ROC) Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-roc-py) to download the full example code or to run this example in your browser via Binder
Receiver Operating Characteristic (ROC)
=======================================
Example of Receiver Operating Characteristic (ROC) metric to evaluate classifier output quality.
ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis. This means that the top left corner of the plot is the “ideal” point - a false positive rate of zero, and a true positive rate of one. This is not very realistic, but it does mean that a larger area under the curve (AUC) is usually better.
The “steepness” of ROC curves is also important, since it is ideal to maximize the true positive rate while minimizing the false positive rate.
ROC curves are typically used in binary classification to study the output of a classifier. In order to extend ROC curve and ROC area to multi-label classification, it is necessary to binarize the output. One ROC curve can be drawn per label, but one can also draw a ROC curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging).
Another evaluation measure for multi-label classification is macro-averaging, which gives equal weight to the classification of each label.
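As a quick, hypothetical illustration of the two averaging schemes described above (independent of the iris data used below), [`sklearn.metrics.roc_auc_score`](../../modules/generated/sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score") accepts both directly on a binarized label matrix:

```
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical binarized labels and scores (3 classes, 4 samples)
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
y_score = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2], [0.3, 0.3, 0.4], [0.6, 0.3, 0.1]])

print(roc_auc_score(y_true, y_score, average="micro"))  # pools all label/score pairs
print(roc_auc_score(y_true, y_score, average="macro"))  # unweighted mean of per-class AUCs
```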
Note
See also [`sklearn.metrics.roc_auc_score`](../../modules/generated/sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score"),
[Receiver Operating Characteristic (ROC) with cross validation](plot_roc_crossval#sphx-glr-auto-examples-model-selection-plot-roc-crossval-py)
```
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_auc_score
# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]
# Add noisy features to make the problem harder
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(
svm.SVC(kernel="linear", probability=True, random_state=random_state)
)
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
```
Plot of a ROC curve for a specific class
```
plt.figure()
lw = 2
plt.plot(
fpr[2],
tpr[2],
color="darkorange",
lw=lw,
label="ROC curve (area = %0.2f)" % roc_auc[2],
)
plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver operating characteristic example")
plt.legend(loc="lower right")
plt.show()
```
Plot ROC curves for the multiclass problem
------------------------------------------
Compute macro-average ROC curve and ROC area
```
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
plt.plot(
fpr["micro"],
tpr["micro"],
label="micro-average ROC curve (area = {0:0.2f})".format(roc_auc["micro"]),
color="deeppink",
linestyle=":",
linewidth=4,
)
plt.plot(
fpr["macro"],
tpr["macro"],
label="macro-average ROC curve (area = {0:0.2f})".format(roc_auc["macro"]),
color="navy",
linestyle=":",
linewidth=4,
)
colors = cycle(["aqua", "darkorange", "cornflowerblue"])
for i, color in zip(range(n_classes), colors):
plt.plot(
fpr[i],
tpr[i],
color=color,
lw=lw,
label="ROC curve of class {0} (area = {1:0.2f})".format(i, roc_auc[i]),
)
plt.plot([0, 1], [0, 1], "k--", lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Some extension of Receiver operating characteristic to multiclass")
plt.legend(loc="lower right")
plt.show()
```
Area under ROC for the multiclass problem
-----------------------------------------
The [`sklearn.metrics.roc_auc_score`](../../modules/generated/sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score") function can be used for multi-class classification. The multi-class One-vs-One scheme compares every unique pairwise combination of classes. In this section, we calculate the AUC using the OvR and OvO schemes. We report a macro average, and a prevalence-weighted average.
```
y_prob = classifier.predict_proba(X_test)
macro_roc_auc_ovo = roc_auc_score(y_test, y_prob, multi_class="ovo", average="macro")
weighted_roc_auc_ovo = roc_auc_score(
y_test, y_prob, multi_class="ovo", average="weighted"
)
macro_roc_auc_ovr = roc_auc_score(y_test, y_prob, multi_class="ovr", average="macro")
weighted_roc_auc_ovr = roc_auc_score(
y_test, y_prob, multi_class="ovr", average="weighted"
)
print(
"One-vs-One ROC AUC scores:\n{:.6f} (macro),\n{:.6f} "
"(weighted by prevalence)".format(macro_roc_auc_ovo, weighted_roc_auc_ovo)
)
print(
"One-vs-Rest ROC AUC scores:\n{:.6f} (macro),\n{:.6f} "
"(weighted by prevalence)".format(macro_roc_auc_ovr, weighted_roc_auc_ovr)
)
```
```
One-vs-One ROC AUC scores:
0.698586 (macro),
0.665839 (weighted by prevalence)
One-vs-Rest ROC AUC scores:
0.698586 (macro),
0.665839 (weighted by prevalence)
```
**Total running time of the script:** ( 0 minutes 0.174 seconds)
[`Download Python source code: plot_roc.py`](https://scikit-learn.org/1.1/_downloads/80fef09514fd851560e999a5b7daa303/plot_roc.py)
[`Download Jupyter notebook: plot_roc.ipynb`](https://scikit-learn.org/1.1/_downloads/40f4aad91af595a370d7582e3a23bed7/plot_roc.ipynb)
scikit_learn Test with permutations the significance of a classification score Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-permutation-tests-for-classification-py) to download the full example code or to run this example in your browser via Binder
Test with permutations the significance of a classification score
=================================================================
This example demonstrates the use of [`permutation_test_score`](../../modules/generated/sklearn.model_selection.permutation_test_score#sklearn.model_selection.permutation_test_score "sklearn.model_selection.permutation_test_score") to evaluate the significance of a cross-validated score using permutations.
```
# Authors: Alexandre Gramfort <[email protected]>
# Lucy Liu
# License: BSD 3 clause
```
Dataset
-------
We will use the [Iris plants dataset](https://scikit-learn.org/1.1/datasets/toy_dataset.html#iris-dataset), which consists of measurements taken from 3 types of irises.
```
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
```
We will also generate some random feature data (i.e., 20 features), uncorrelated with the class labels in the iris dataset.
```
import numpy as np
n_uncorrelated_features = 20
rng = np.random.RandomState(seed=0)
# Use same number of samples as in iris and 20 features
X_rand = rng.normal(size=(X.shape[0], n_uncorrelated_features))
```
Permutation test score
----------------------
Next, we calculate the [`permutation_test_score`](../../modules/generated/sklearn.model_selection.permutation_test_score#sklearn.model_selection.permutation_test_score "sklearn.model_selection.permutation_test_score") using the original iris dataset, whose features strongly predict the labels, and using the randomly generated features together with the iris labels, for which there should be no dependency between features and labels. We use the [`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") classifier and [Accuracy score](../../modules/model_evaluation#accuracy-score) to evaluate the model at each round.
[`permutation_test_score`](../../modules/generated/sklearn.model_selection.permutation_test_score#sklearn.model_selection.permutation_test_score "sklearn.model_selection.permutation_test_score") generates a null distribution by calculating the accuracy of the classifier on 1000 different permutations of the dataset, where the features remain the same but the labels undergo different permutations. This is the distribution for the null hypothesis, which states there is no dependency between the features and labels. An empirical p-value is then calculated as the percentage of permutations for which the score obtained is greater than the score obtained using the original data.
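The empirical p-value computation just described can be sketched directly. This is a hypothetical illustration of the formula on stand-in numbers, not the internal code of `permutation_test_score`:

```
import numpy as np

rng = np.random.RandomState(0)
perm_scores = rng.normal(loc=0.33, scale=0.05, size=1000)  # stand-in null scores
observed_score = 0.95                                      # stand-in original score

# Adding 1 to numerator and denominator avoids a p-value of exactly zero
p_value = (np.sum(perm_scores >= observed_score) + 1) / (len(perm_scores) + 1)
print(p_value)
```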
```
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import permutation_test_score
clf = SVC(kernel="linear", random_state=7)
cv = StratifiedKFold(2, shuffle=True, random_state=0)
score_iris, perm_scores_iris, pvalue_iris = permutation_test_score(
clf, X, y, scoring="accuracy", cv=cv, n_permutations=1000
)
score_rand, perm_scores_rand, pvalue_rand = permutation_test_score(
clf, X_rand, y, scoring="accuracy", cv=cv, n_permutations=1000
)
```
### Original data
Below we plot a histogram of the permutation scores (the null distribution). The red line indicates the score obtained by the classifier on the original data. The score is much better than those obtained by using permuted data and the p-value is thus very low. This indicates that there is a low likelihood that this good score would be obtained by chance alone. It provides evidence that the iris dataset contains real dependency between features and labels and the classifier was able to utilize this to obtain good results.
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.hist(perm_scores_iris, bins=20, density=True)
ax.axvline(score_iris, ls="--", color="r")
score_label = f"Score on original\ndata: {score_iris:.2f}\n(p-value: {pvalue_iris:.3f})"
ax.text(0.7, 10, score_label, fontsize=12)
ax.set_xlabel("Accuracy score")
_ = ax.set_ylabel("Probability")
```
### Random data
Below we plot the null distribution for the randomized data. The permutation scores are similar to those obtained using the original iris dataset because the permutation always destroys any feature-label dependency present. In this case, though, the score obtained on the original randomized data is very poor. This results in a large p-value, confirming that there was no feature-label dependency in the original data.
```
fig, ax = plt.subplots()
ax.hist(perm_scores_rand, bins=20, density=True)
ax.set_xlim(0.13)
ax.axvline(score_rand, ls="--", color="r")
score_label = f"Score on original\ndata: {score_rand:.2f}\n(p-value: {pvalue_rand:.3f})"
ax.text(0.14, 7.5, score_label, fontsize=12)
ax.set_xlabel("Accuracy score")
ax.set_ylabel("Probability")
plt.show()
```
Another possible reason for obtaining a high p-value is that the classifier was not able to use the structure in the data. In this case, the p-value would only be low for classifiers that are able to utilize the dependency present. In our case above, where the data is random, all classifiers would have a high p-value as there is no structure present in the data.
Finally, note that this test has been shown to produce low p-values even if there is only weak structure in the data [[1]](#id2).
**Total running time of the script:** ( 0 minutes 8.658 seconds)
[`Download Python source code: plot_permutation_tests_for_classification.py`](https://scikit-learn.org/1.1/_downloads/be7fb5ee9d7dc0c4fb277110eef3566a/plot_permutation_tests_for_classification.py)
[`Download Jupyter notebook: plot_permutation_tests_for_classification.ipynb`](https://scikit-learn.org/1.1/_downloads/8ed2480d8058dc8515097893b64d815b/plot_permutation_tests_for_classification.ipynb)
scikit_learn Confusion matrix Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-confusion-matrix-py) to download the full example code or to run this example in your browser via Binder
Confusion matrix
================
Example of confusion matrix usage to evaluate the quality of the output of a classifier on the iris data set. The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier. The higher the diagonal values of the confusion matrix the better, indicating many correct predictions.
The figures show the confusion matrix with and without normalization by class support size (number of elements in each class). This kind of normalization can be interesting in case of class imbalance to have a more visual interpretation of which class is being misclassified.
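The per-class ("true") normalization described above can also be obtained directly from `confusion_matrix`. A small hypothetical example, separate from the iris split used below:

```
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 2, 2]  # hypothetical labels
y_pred = [0, 0, 1, 2, 1, 2, 2]  # hypothetical predictions

print(confusion_matrix(y_true, y_pred))                    # raw counts
print(confusion_matrix(y_true, y_pred, normalize="true"))  # each row sums to 1
```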
Here the results are not as good as they could be as our choice for the regularization parameter C was not the best. In real life applications this parameter is usually chosen using [Tuning the hyper-parameters of an estimator](../../modules/grid_search#grid-search).
```
Confusion matrix, without normalization
[[13 0 0]
[ 0 10 6]
[ 0 0 9]]
Normalized confusion matrix
[[1. 0. 0. ]
[0. 0.62 0.38]
[0. 0. 1. ]]
```
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import ConfusionMatrixDisplay
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
class_names = iris.target_names
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Run classifier, using a model that is too regularized (C too low) to see
# the impact on the results
classifier = svm.SVC(kernel="linear", C=0.01).fit(X_train, y_train)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
titles_options = [
("Confusion matrix, without normalization", None),
("Normalized confusion matrix", "true"),
]
for title, normalize in titles_options:
disp = ConfusionMatrixDisplay.from_estimator(
classifier,
X_test,
y_test,
display_labels=class_names,
cmap=plt.cm.Blues,
normalize=normalize,
)
disp.ax_.set_title(title)
print(title)
print(disp.confusion_matrix)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.163 seconds)
[`Download Python source code: plot_confusion_matrix.py`](https://scikit-learn.org/1.1/_downloads/d6fdb17f9d063111bf8547f1af312fc5/plot_confusion_matrix.py)
[`Download Jupyter notebook: plot_confusion_matrix.ipynb`](https://scikit-learn.org/1.1/_downloads/9ad55bf68758018c9961815802c65e18/plot_confusion_matrix.ipynb)
scikit_learn Comparison between grid search and successive halving Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-successive-halving-heatmap-py) to download the full example code or to run this example in your browser via Binder
Comparison between grid search and successive halving
=====================================================
This example compares the parameter search performed by [`HalvingGridSearchCV`](../../modules/generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") and [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV").
```
from time import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn import datasets
from sklearn.model_selection import GridSearchCV
from sklearn.experimental import enable_halving_search_cv # noqa
from sklearn.model_selection import HalvingGridSearchCV
```
We first define the parameter space for an [`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") estimator, and compute the time required to train a [`HalvingGridSearchCV`](../../modules/generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") instance, as well as a [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") instance.
```
rng = np.random.RandomState(0)
X, y = datasets.make_classification(n_samples=1000, random_state=rng)
gammas = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7]
Cs = [1, 10, 100, 1e3, 1e4, 1e5]
param_grid = {"gamma": gammas, "C": Cs}
clf = SVC(random_state=rng)
tic = time()
gsh = HalvingGridSearchCV(
estimator=clf, param_grid=param_grid, factor=2, random_state=rng
)
gsh.fit(X, y)
gsh_time = time() - tic
tic = time()
gs = GridSearchCV(estimator=clf, param_grid=param_grid)
gs.fit(X, y)
gs_time = time() - tic
```
We now plot heatmaps for both search estimators.
```
def make_heatmap(ax, gs, is_sh=False, make_cbar=False):
"""Helper to make a heatmap."""
results = pd.DataFrame.from_dict(gs.cv_results_)
results["params_str"] = results.params.apply(str)
if is_sh:
# SH dataframe: get mean_test_score values for the highest iter
scores_matrix = results.sort_values("iter").pivot_table(
index="param_gamma",
columns="param_C",
values="mean_test_score",
aggfunc="last",
)
else:
scores_matrix = results.pivot(
index="param_gamma", columns="param_C", values="mean_test_score"
)
im = ax.imshow(scores_matrix)
ax.set_xticks(np.arange(len(Cs)))
ax.set_xticklabels(["{:.0E}".format(x) for x in Cs])
ax.set_xlabel("C", fontsize=15)
ax.set_yticks(np.arange(len(gammas)))
ax.set_yticklabels(["{:.0E}".format(x) for x in gammas])
ax.set_ylabel("gamma", fontsize=15)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
if is_sh:
iterations = results.pivot_table(
index="param_gamma", columns="param_C", values="iter", aggfunc="max"
).values
for i in range(len(gammas)):
for j in range(len(Cs)):
ax.text(
j,
i,
iterations[i, j],
ha="center",
va="center",
color="w",
fontsize=20,
)
if make_cbar:
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
cbar_ax.set_ylabel("mean_test_score", rotation=-90, va="bottom", fontsize=15)
fig, axes = plt.subplots(ncols=2, sharey=True)
ax1, ax2 = axes
make_heatmap(ax1, gsh, is_sh=True)
make_heatmap(ax2, gs, make_cbar=True)
ax1.set_title("Successive Halving\ntime = {:.3f}s".format(gsh_time), fontsize=15)
ax2.set_title("GridSearch\ntime = {:.3f}s".format(gs_time), fontsize=15)
plt.show()
```
```
/home/runner/mambaforge/envs/testenv/lib/python3.9/site-packages/pandas/core/algorithms.py:798: FutureWarning: In a future version, the Index constructor will not infer numeric dtypes when passed object-dtype sequences (matching Series behavior)
uniques = Index(uniques)
```
The heatmaps show the mean test score of the parameter combinations for an [`SVC`](../../modules/generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") instance. The [`HalvingGridSearchCV`](../../modules/generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") also shows the iteration at which the combinations were last used. The combinations marked as `0` were only evaluated at the first iteration, while the ones with `5` are the parameter combinations that are considered the best ones.
We can see that the [`HalvingGridSearchCV`](../../modules/generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") class is able to find parameter combinations that are just as accurate as [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV"), in much less time.
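One way to see how the budget was spent, assuming the `gsh` object fitted above, is to inspect the bookkeeping attributes of [`HalvingGridSearchCV`](../../modules/generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV"):

```
print(gsh.n_iterations_)   # number of successive halving iterations actually run
print(gsh.n_candidates_)   # number of candidates evaluated at each iteration
print(gsh.n_resources_)    # number of samples used per candidate at each iteration
```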
**Total running time of the script:** ( 0 minutes 6.821 seconds)
[`Download Python source code: plot_successive_halving_heatmap.py`](https://scikit-learn.org/1.1/_downloads/6383d955c013c730f9d211f15e261f38/plot_successive_halving_heatmap.py)
[`Download Jupyter notebook: plot_successive_halving_heatmap.ipynb`](https://scikit-learn.org/1.1/_downloads/69f1bc3bab6ea5d622c5dd4cbd78227f/plot_successive_halving_heatmap.ipynb)
scikit_learn Detection error tradeoff (DET) curve Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-det-py) to download the full example code or to run this example in your browser via Binder
Detection error tradeoff (DET) curve
====================================
In this example, we compare receiver operating characteristic (ROC) and detection error tradeoff (DET) curves for different classification algorithms for the same classification task.
DET curves are commonly plotted in normal deviate scale. To achieve this, the DET display transforms the error rates returned by [`det_curve`](../../modules/generated/sklearn.metrics.det_curve#sklearn.metrics.det_curve "sklearn.metrics.det_curve") and scales the axes using `scipy.stats.norm`.
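A minimal sketch of that transform on hypothetical scores is shown below; `norm.ppf` is the inverse of the standard normal CDF, so error rates of exactly 0 or 1 map to -inf/+inf:

```
import numpy as np
from scipy.stats import norm
from sklearn.metrics import det_curve

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                    # hypothetical labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.6, 0.3]  # hypothetical scores

fpr, fnr, _ = det_curve(y_true, y_score)
print(norm.ppf(fpr))  # false positive rates on the normal deviate scale
print(norm.ppf(fnr))  # false negative rates on the normal deviate scale
```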
The point of this example is to demonstrate two properties of DET curves, namely:
1. It might be easier to visually assess the overall performance of different classification algorithms using DET curves rather than ROC curves. Due to the linear scale used for plotting ROC curves, different classifiers usually only differ in the top left corner of the graph and appear similar for a large part of the plot. On the other hand, because DET curves represent straight lines in normal deviate scale, they tend to be distinguishable as a whole, and the area of interest spans a large part of the plot.
2. DET curves give the user direct feedback on the detection error tradeoff to aid in operating point analysis. The user can deduce directly from the DET-curve plot at which rate the false-negative error rate will improve when willing to accept an increase in the false-positive error rate (or vice-versa).
The plots in this example compare ROC curves on the left side to corresponding DET curves on the right. There is no particular reason why these classifiers have been chosen for the example plot over other classifiers available in scikit-learn.
Note
* See [`sklearn.metrics.roc_curve`](../../modules/generated/sklearn.metrics.roc_curve#sklearn.metrics.roc_curve "sklearn.metrics.roc_curve") for further information about ROC curves.
* See [`sklearn.metrics.det_curve`](../../modules/generated/sklearn.metrics.det_curve#sklearn.metrics.det_curve "sklearn.metrics.det_curve") for further information about DET curves.
* This example is loosely based on [Classifier comparison](../classification/plot_classifier_comparison#sphx-glr-auto-examples-classification-plot-classifier-comparison-py) example.
```
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import DetCurveDisplay, RocCurveDisplay
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
N_SAMPLES = 1000
classifiers = {
"Linear SVM": make_pipeline(StandardScaler(), LinearSVC(C=0.025)),
"Random Forest": RandomForestClassifier(
max_depth=5, n_estimators=10, max_features=1
),
}
X, y = make_classification(
n_samples=N_SAMPLES,
n_features=2,
n_redundant=0,
n_informative=2,
random_state=1,
n_clusters_per_class=1,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
# prepare plots
fig, [ax_roc, ax_det] = plt.subplots(1, 2, figsize=(11, 5))
for name, clf in classifiers.items():
clf.fit(X_train, y_train)
RocCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax_roc, name=name)
DetCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax_det, name=name)
ax_roc.set_title("Receiver Operating Characteristic (ROC) curves")
ax_det.set_title("Detection Error Tradeoff (DET) curves")
ax_roc.grid(linestyle="--")
ax_det.grid(linestyle="--")
plt.legend()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.155 seconds)
[`Download Python source code: plot_det.py`](https://scikit-learn.org/1.1/_downloads/67703ae8c65716668dd87c31a24a069b/plot_det.py)
[`Download Jupyter notebook: plot_det.ipynb`](https://scikit-learn.org/1.1/_downloads/10bb40e21b74618cdeed618ff1eae595/plot_det.ipynb)
scikit_learn Underfitting vs. Overfitting Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-underfitting-overfitting-py) to download the full example code or to run this example in your browser via Binder
Underfitting vs. Overfitting
============================
This example demonstrates the problems of underfitting and overfitting and how we can use linear regression with polynomial features to approximate nonlinear functions. The plot shows the function that we want to approximate, which is a part of the cosine function. In addition, the samples from the real function and the approximations of different models are displayed. The models have polynomial features of different degrees. We can see that a linear function (polynomial with degree 1) is not sufficient to fit the training samples. This is called **underfitting**. A polynomial of degree 4 approximates the true function almost perfectly. However, for higher degrees the model will **overfit** the training data, i.e. it learns the noise of the training data. We evaluate **overfitting** / **underfitting** quantitatively by using cross-validation. We calculate the mean squared error (MSE) on the validation set; the higher it is, the less likely the model is to generalize correctly from the training data.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
def true_fun(X):
return np.cos(1.5 * np.pi * X)
np.random.seed(0)
n_samples = 30
degrees = [1, 4, 15]
X = np.sort(np.random.rand(n_samples))
y = true_fun(X) + np.random.randn(n_samples) * 0.1
plt.figure(figsize=(14, 5))
for i in range(len(degrees)):
ax = plt.subplot(1, len(degrees), i + 1)
plt.setp(ax, xticks=(), yticks=())
polynomial_features = PolynomialFeatures(degree=degrees[i], include_bias=False)
linear_regression = LinearRegression()
pipeline = Pipeline(
[
("polynomial_features", polynomial_features),
("linear_regression", linear_regression),
]
)
pipeline.fit(X[:, np.newaxis], y)
# Evaluate the models using crossvalidation
scores = cross_val_score(
pipeline, X[:, np.newaxis], y, scoring="neg_mean_squared_error", cv=10
)
X_test = np.linspace(0, 1, 100)
plt.plot(X_test, pipeline.predict(X_test[:, np.newaxis]), label="Model")
plt.plot(X_test, true_fun(X_test), label="True function")
plt.scatter(X, y, edgecolor="b", s=20, label="Samples")
plt.xlabel("x")
plt.ylabel("y")
plt.xlim((0, 1))
plt.ylim((-2, 2))
plt.legend(loc="best")
plt.title(
"Degree {}\nMSE = {:.2e}(+/- {:.2e})".format(
degrees[i], -scores.mean(), scores.std()
)
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.177 seconds)
[`Download Python source code: plot_underfitting_overfitting.py`](https://scikit-learn.org/1.1/_downloads/746a69b3fd67273959a3cd9bfc2a88e8/plot_underfitting_overfitting.py)
[`Download Jupyter notebook: plot_underfitting_overfitting.ipynb`](https://scikit-learn.org/1.1/_downloads/3b25b9158f1c0a1564eb0f4c9e531d46/plot_underfitting_overfitting.ipynb)
scikit_learn Plotting Cross-Validated Predictions Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-cv-predict-py) to download the full example code or to run this example in your browser via Binder
Plotting Cross-Validated Predictions
====================================
This example shows how to use [`cross_val_predict`](../../modules/generated/sklearn.model_selection.cross_val_predict#sklearn.model_selection.cross_val_predict "sklearn.model_selection.cross_val_predict") to visualize prediction errors.
```
from sklearn import datasets
from sklearn.model_selection import cross_val_predict
from sklearn import linear_model
import matplotlib.pyplot as plt
lr = linear_model.LinearRegression()
X, y = datasets.load_diabetes(return_X_y=True)
# cross_val_predict returns an array of the same size as `y` where each entry
# is a prediction obtained by cross validation:
predicted = cross_val_predict(lr, X, y, cv=10)
fig, ax = plt.subplots()
ax.scatter(y, predicted, edgecolors=(0, 0, 0))
ax.plot([y.min(), y.max()], [y.min(), y.max()], "k--", lw=4)
ax.set_xlabel("Measured")
ax.set_ylabel("Predicted")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.075 seconds)
[`Download Python source code: plot_cv_predict.py`](https://scikit-learn.org/1.1/_downloads/ff75d3bf9db66ac1cfeb9d15a1400576/plot_cv_predict.py)
[`Download Jupyter notebook: plot_cv_predict.ipynb`](https://scikit-learn.org/1.1/_downloads/93afb18fff95760fe854241141b4bdae/plot_cv_predict.ipynb)
scikit_learn Visualizing cross-validation behavior in scikit-learn Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-cv-indices-py) to download the full example code or to run this example in your browser via Binder
Visualizing cross-validation behavior in scikit-learn
=====================================================
Choosing the right cross-validation object is a crucial part of fitting a model properly. There are many ways to split data into training and test sets in order to avoid model overfitting, to standardize the number of groups in test sets, etc.
This example visualizes the behavior of several common scikit-learn objects for comparison.
```
from sklearn.model_selection import (
TimeSeriesSplit,
KFold,
ShuffleSplit,
StratifiedKFold,
GroupShuffleSplit,
GroupKFold,
StratifiedShuffleSplit,
StratifiedGroupKFold,
)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
rng = np.random.RandomState(1338)
cmap_data = plt.cm.Paired
cmap_cv = plt.cm.coolwarm
n_splits = 4
```
Visualize our data
------------------
First, we must understand the structure of our data. It has 100 randomly generated input datapoints, 3 classes split unevenly across datapoints, and 10 “groups” split evenly across datapoints.
As we’ll see, some cross-validation objects do specific things with labeled data, others behave differently with grouped data, and others do not use this information.
To begin, we’ll visualize our data.
```
# Generate the class/group data
n_points = 100
X = rng.randn(100, 10)
percentiles_classes = [0.1, 0.3, 0.6]
y = np.hstack([[ii] * int(100 * perc) for ii, perc in enumerate(percentiles_classes)])
# Generate uneven groups
group_prior = rng.dirichlet([2] * 10)
groups = np.repeat(np.arange(10), rng.multinomial(100, group_prior))
def visualize_groups(classes, groups, name):
# Visualize dataset groups
fig, ax = plt.subplots()
ax.scatter(
range(len(groups)),
[0.5] * len(groups),
c=groups,
marker="_",
lw=50,
cmap=cmap_data,
)
ax.scatter(
range(len(groups)),
[3.5] * len(groups),
c=classes,
marker="_",
lw=50,
cmap=cmap_data,
)
ax.set(
ylim=[-1, 5],
yticks=[0.5, 3.5],
yticklabels=["Data\ngroup", "Data\nclass"],
xlabel="Sample index",
)
visualize_groups(y, groups, "no groups")
```
Define a function to visualize cross-validation behavior
--------------------------------------------------------
We’ll define a function that lets us visualize the behavior of each cross-validation object. We’ll perform 4 splits of the data. On each split, we’ll visualize the indices chosen for the training set (in blue) and the test set (in red).
```
def plot_cv_indices(cv, X, y, group, ax, n_splits, lw=10):
"""Create a sample plot for indices of a cross-validation object."""
# Generate the training/testing visualizations for each CV split
for ii, (tr, tt) in enumerate(cv.split(X=X, y=y, groups=group)):
# Fill in indices with the training/test groups
indices = np.array([np.nan] * len(X))
indices[tt] = 1
indices[tr] = 0
# Visualize the results
ax.scatter(
range(len(indices)),
[ii + 0.5] * len(indices),
c=indices,
marker="_",
lw=lw,
cmap=cmap_cv,
vmin=-0.2,
vmax=1.2,
)
# Plot the data classes and groups at the end
ax.scatter(
range(len(X)), [ii + 1.5] * len(X), c=y, marker="_", lw=lw, cmap=cmap_data
)
ax.scatter(
range(len(X)), [ii + 2.5] * len(X), c=group, marker="_", lw=lw, cmap=cmap_data
)
# Formatting
yticklabels = list(range(n_splits)) + ["class", "group"]
ax.set(
yticks=np.arange(n_splits + 2) + 0.5,
yticklabels=yticklabels,
xlabel="Sample index",
ylabel="CV iteration",
ylim=[n_splits + 2.2, -0.2],
xlim=[0, 100],
)
ax.set_title("{}".format(type(cv).__name__), fontsize=15)
return ax
```
Let’s see how it looks for the [`KFold`](../../modules/generated/sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") cross-validation object:
```
fig, ax = plt.subplots()
cv = KFold(n_splits)
plot_cv_indices(cv, X, y, groups, ax, n_splits)
```
```
<AxesSubplot:title={'center':'KFold'}, xlabel='Sample index', ylabel='CV iteration'>
```
As you can see, by default the KFold cross-validation iterator does not take either datapoint class or group into consideration. We can change this by using either:
* `StratifiedKFold` to preserve the percentage of samples for each class.
* `GroupKFold` to ensure that the same group will not appear in two different folds.
* `StratifiedGroupKFold` to keep the constraint of `GroupKFold` while attempting to return stratified folds.
```
cvs = [StratifiedKFold, GroupKFold, StratifiedGroupKFold]
for cv in cvs:
fig, ax = plt.subplots(figsize=(6, 3))
plot_cv_indices(cv(n_splits), X, y, groups, ax, n_splits)
ax.legend(
[Patch(color=cmap_cv(0.8)), Patch(color=cmap_cv(0.02))],
["Testing set", "Training set"],
loc=(1.02, 0.8),
)
# Make the legend fit
plt.tight_layout()
fig.subplots_adjust(right=0.7)
```
Next we’ll visualize this behavior for a number of CV iterators.
Visualize cross-validation indices for many CV objects
------------------------------------------------------
Let’s visually compare the cross validation behavior for many scikit-learn cross-validation objects. Below we will loop through several common cross-validation objects, visualizing the behavior of each.
Note how some use the group/class information while others do not.
```
cvs = [
KFold,
GroupKFold,
ShuffleSplit,
StratifiedKFold,
StratifiedGroupKFold,
GroupShuffleSplit,
StratifiedShuffleSplit,
TimeSeriesSplit,
]
for cv in cvs:
this_cv = cv(n_splits=n_splits)
fig, ax = plt.subplots(figsize=(6, 3))
plot_cv_indices(this_cv, X, y, groups, ax, n_splits)
ax.legend(
[Patch(color=cmap_cv(0.8)), Patch(color=cmap_cv(0.02))],
["Testing set", "Training set"],
loc=(1.02, 0.8),
)
# Make the legend fit
plt.tight_layout()
fig.subplots_adjust(right=0.7)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.887 seconds)
[`Download Python source code: plot_cv_indices.py`](https://scikit-learn.org/1.1/_downloads/f1caa332331b42f32518c03ec8a71341/plot_cv_indices.py)
[`Download Jupyter notebook: plot_cv_indices.ipynb`](https://scikit-learn.org/1.1/_downloads/49cd91d05440a1c88b074430761aeb76/plot_cv_indices.ipynb)
scikit_learn Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-multi-metric-evaluation-py) to download the full example code or to run this example in your browser via Binder
Demonstration of multi-metric evaluation on cross\_val\_score and GridSearchCV
==============================================================================
Multiple metric parameter search can be done by setting the `scoring` parameter to a list of metric scorer names or a dict mapping the scorer names to the scorer callables.
The scores of all the scorers are available in the `cv_results_` dict at keys ending in `'_<scorer_name>'` (`'mean_test_precision'`, `'rank_test_precision'`, etc…)
The `best_estimator_`, `best_index_`, `best_score_` and `best_params_` correspond to the scorer (key) that is set to the `refit` attribute.
```
# Author: Raghav RV <[email protected]>
# License: BSD
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import make_hastie_10_2
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
```
Running `GridSearchCV` using multiple evaluation metrics
--------------------------------------------------------
```
X, y = make_hastie_10_2(n_samples=8000, random_state=42)
# The scorers can be either one of the predefined metric strings or a scorer
# callable, like the one returned by make_scorer
scoring = {"AUC": "roc_auc", "Accuracy": make_scorer(accuracy_score)}
# Setting refit='AUC', refits an estimator on the whole dataset with the
# parameter setting that has the best cross-validated AUC score.
# That estimator is made available at ``gs.best_estimator_`` along with
# parameters like ``gs.best_score_``, ``gs.best_params_`` and
# ``gs.best_index_``
gs = GridSearchCV(
DecisionTreeClassifier(random_state=42),
param_grid={"min_samples_split": range(2, 403, 20)},
scoring=scoring,
refit="AUC",
n_jobs=2,
return_train_score=True,
)
gs.fit(X, y)
results = gs.cv_results_
```
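Following the key naming described above, the per-scorer results of this fitted search can be pulled out directly (a quick illustration added here, not part of the original example):

```
print(results["mean_test_AUC"][:3])       # mean test score for the 'AUC' scorer
print(results["rank_test_Accuracy"][:3])  # rank for the 'Accuracy' scorer
print(gs.best_params_)                    # chosen according to refit="AUC"
```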
Plotting the result
-------------------
```
plt.figure(figsize=(13, 13))
plt.title("GridSearchCV evaluating using multiple scorers simultaneously", fontsize=16)
plt.xlabel("min_samples_split")
plt.ylabel("Score")
ax = plt.gca()
ax.set_xlim(0, 402)
ax.set_ylim(0.73, 1)
# Get the regular numpy array from the MaskedArray
X_axis = np.array(results["param_min_samples_split"].data, dtype=float)
for scorer, color in zip(sorted(scoring), ["g", "k"]):
for sample, style in (("train", "--"), ("test", "-")):
sample_score_mean = results["mean_%s_%s" % (sample, scorer)]
sample_score_std = results["std_%s_%s" % (sample, scorer)]
ax.fill_between(
X_axis,
sample_score_mean - sample_score_std,
sample_score_mean + sample_score_std,
alpha=0.1 if sample == "test" else 0,
color=color,
)
ax.plot(
X_axis,
sample_score_mean,
style,
color=color,
alpha=1 if sample == "test" else 0.7,
label="%s (%s)" % (scorer, sample),
)
best_index = np.nonzero(results["rank_test_%s" % scorer] == 1)[0][0]
best_score = results["mean_test_%s" % scorer][best_index]
# Plot a dotted vertical line at the best score for that scorer marked by x
ax.plot(
[
X_axis[best_index],
]
* 2,
[0, best_score],
linestyle="-.",
color=color,
marker="x",
markeredgewidth=3,
ms=8,
)
# Annotate the best score for that scorer
ax.annotate("%0.2f" % best_score, (X_axis[best_index], best_score + 0.005))
plt.legend(loc="best")
plt.grid(False)
plt.show()
```
**Total running time of the script:** ( 0 minutes 5.970 seconds)
[`Download Python source code: plot_multi_metric_evaluation.py`](https://scikit-learn.org/1.1/_downloads/dedbcc9464f3269f4f012f4bfc7d16da/plot_multi_metric_evaluation.py)
[`Download Jupyter notebook: plot_multi_metric_evaluation.ipynb`](https://scikit-learn.org/1.1/_downloads/f57e1ee55d4c7a51949d5c26b3af07bb/plot_multi_metric_evaluation.ipynb)
scikit_learn Nested versus non-nested cross-validation Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-nested-cross-validation-iris-py) to download the full example code or to run this example in your browser via Binder
Nested versus non-nested cross-validation
=========================================
This example compares non-nested and nested cross-validation strategies on a classifier of the iris data set. Nested cross-validation (CV) is often used to train a model in which hyperparameters also need to be optimized. Nested CV estimates the generalization error of the underlying model and its (hyper)parameter search. Choosing the parameters that maximize non-nested CV biases the model to the dataset, yielding an overly-optimistic score.
Model selection without nested CV uses the same data to tune model parameters and evaluate model performance. Information may thus “leak” into the model and overfit the data. The magnitude of this effect is primarily dependent on the size of the dataset and the stability of the model. See Cawley and Talbot [[1]](#id2) for an analysis of these issues.
To avoid this problem, nested CV effectively uses a series of train/validation/test set splits. In the inner loop (here executed by [`GridSearchCV`](../../modules/generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV")), the score is approximately maximized by fitting a model to each training set, and then directly maximized in selecting (hyper)parameters over the validation set. In the outer loop (here in [`cross_val_score`](../../modules/generated/sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score")), generalization error is estimated by averaging test set scores over several dataset splits.
The example below uses a support vector classifier with a non-linear kernel to build a model with optimized hyperparameters by grid search. We compare the performance of non-nested and nested CV strategies by taking the difference between their scores.
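In its most compact form, the nested structure just described looks like the following sketch (the full example below wraps the same idea in a loop over random trials):

```
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
inner_cv = KFold(n_splits=4, shuffle=True, random_state=0)  # tunes hyper-parameters
outer_cv = KFold(n_splits=4, shuffle=True, random_state=1)  # estimates generalization
search = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10], "gamma": [0.01, 0.1]}, cv=inner_cv)
print(cross_val_score(search, X, y, cv=outer_cv).mean())
```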
```
Average difference of 0.007581 with std. dev. of 0.007833.
```
```
from sklearn.datasets import load_iris
from matplotlib import pyplot as plt
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold
import numpy as np
# Number of random trials
NUM_TRIALS = 30
# Load the dataset
iris = load_iris()
X_iris = iris.data
y_iris = iris.target
# Set up possible values of parameters to optimize over
p_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1]}
# We will use a Support Vector Classifier with "rbf" kernel
svm = SVC(kernel="rbf")
# Arrays to store scores
non_nested_scores = np.zeros(NUM_TRIALS)
nested_scores = np.zeros(NUM_TRIALS)
# Loop for each trial
for i in range(NUM_TRIALS):
# Choose cross-validation techniques for the inner and outer loops,
# independently of the dataset.
# E.g "GroupKFold", "LeaveOneOut", "LeaveOneGroupOut", etc.
inner_cv = KFold(n_splits=4, shuffle=True, random_state=i)
outer_cv = KFold(n_splits=4, shuffle=True, random_state=i)
# Non_nested parameter search and scoring
clf = GridSearchCV(estimator=svm, param_grid=p_grid, cv=outer_cv)
clf.fit(X_iris, y_iris)
non_nested_scores[i] = clf.best_score_
# Nested CV with parameter optimization
clf = GridSearchCV(estimator=svm, param_grid=p_grid, cv=inner_cv)
nested_score = cross_val_score(clf, X=X_iris, y=y_iris, cv=outer_cv)
nested_scores[i] = nested_score.mean()
score_difference = non_nested_scores - nested_scores
print(
"Average difference of {:6f} with std. dev. of {:6f}.".format(
score_difference.mean(), score_difference.std()
)
)
# Plot scores on each trial for nested and non-nested CV
plt.figure()
plt.subplot(211)
(non_nested_scores_line,) = plt.plot(non_nested_scores, color="r")
(nested_line,) = plt.plot(nested_scores, color="b")
plt.ylabel("score", fontsize="14")
plt.legend(
[non_nested_scores_line, nested_line],
["Non-Nested CV", "Nested CV"],
bbox_to_anchor=(0, 0.4, 0.5, 0),
)
plt.title(
"Non-Nested and Nested Cross Validation on Iris Dataset",
x=0.5,
y=1.1,
fontsize="15",
)
# Plot bar chart of the difference.
plt.subplot(212)
difference_plot = plt.bar(range(NUM_TRIALS), score_difference)
plt.xlabel("Individual Trial #")
plt.legend(
[difference_plot],
["Non-Nested CV - Nested CV Score"],
bbox_to_anchor=(0, 1, 0.8, 0),
)
plt.ylabel("score difference", fontsize="14")
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.999 seconds)
[`Download Python source code: plot_nested_cross_validation_iris.py`](https://scikit-learn.org/1.1/_downloads/23614d75e8327ef369659da7d2ed62db/plot_nested_cross_validation_iris.py)
[`Download Jupyter notebook: plot_nested_cross_validation_iris.ipynb`](https://scikit-learn.org/1.1/_downloads/45916745bb89ca49be3a50aa80e65e3f/plot_nested_cross_validation_iris.ipynb)
scikit_learn Comparing randomized search and grid search for hyperparameter estimation Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-randomized-search-py) to download the full example code or to run this example in your browser via Binder
Comparing randomized search and grid search for hyperparameter estimation
=========================================================================
Compare randomized search and grid search for optimizing hyperparameters of a linear SVM with SGD training. All parameters that influence the learning are searched simultaneously (except for the number of estimators, which poses a time / quality tradeoff).
The randomized search and the grid search explore exactly the same space of parameters. The resulting parameter settings are quite similar, while the run time for randomized search is drastically lower.
The performance of the randomized search may be slightly worse; this is likely due to a noise effect and would not carry over to a held-out test set.
Note that in practice, one would not search over this many different parameters simultaneously using grid search, but pick only the ones deemed most important.
```
RandomizedSearchCV took 0.62 seconds for 15 candidates parameter settings.
Model with rank: 1
Mean validation score: 0.993 (std: 0.009)
Parameters: {'alpha': 0.010076570442984415, 'average': False, 'l1_ratio': 0.475034577147909}
Model with rank: 2
Mean validation score: 0.985 (std: 0.015)
Parameters: {'alpha': 0.013207974323115428, 'average': True, 'l1_ratio': 0.6411976931351385}
Model with rank: 3
Mean validation score: 0.983 (std: 0.012)
Parameters: {'alpha': 0.011065149589382038, 'average': False, 'l1_ratio': 0.19811655229892267}
GridSearchCV took 2.95 seconds for 60 candidate parameter settings.
Model with rank: 1
Mean validation score: 0.994 (std: 0.007)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.2222222222222222}
Model with rank: 2
Mean validation score: 0.993 (std: 0.004)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.7777777777777777}
Model with rank: 3
Mean validation score: 0.991 (std: 0.008)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.0}
Model with rank: 3
Mean validation score: 0.991 (std: 0.008)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.1111111111111111}
```
```
import numpy as np
from time import time
import scipy.stats as stats
from sklearn.utils.fixes import loguniform
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
# get some data
X, y = load_digits(return_X_y=True, n_class=3)
# build a classifier
clf = SGDClassifier(loss="hinge", penalty="elasticnet", fit_intercept=True)
# Utility function to report best scores
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results["rank_test_score"] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print(
"Mean validation score: {0:.3f} (std: {1:.3f})".format(
results["mean_test_score"][candidate],
results["std_test_score"][candidate],
)
)
print("Parameters: {0}".format(results["params"][candidate]))
print("")
# specify parameters and distributions to sample from
param_dist = {
"average": [True, False],
"l1_ratio": stats.uniform(0, 1),
"alpha": loguniform(1e-2, 1e0),
}
# run randomized search
n_iter_search = 15
random_search = RandomizedSearchCV(
clf, param_distributions=param_dist, n_iter=n_iter_search
)
start = time()
random_search.fit(X, y)
print(
"RandomizedSearchCV took %.2f seconds for %d candidates parameter settings."
% ((time() - start), n_iter_search)
)
report(random_search.cv_results_)
# use a full grid over all parameters
param_grid = {
"average": [True, False],
"l1_ratio": np.linspace(0, 1, num=10),
"alpha": np.power(10, np.arange(-2, 1, dtype=float)),
}
# run grid search
grid_search = GridSearchCV(clf, param_grid=param_grid)
start = time()
grid_search.fit(X, y)
print(
"GridSearchCV took %.2f seconds for %d candidate parameter settings."
% (time() - start, len(grid_search.cv_results_["params"]))
)
report(grid_search.cv_results_)
```
**Total running time of the script:** ( 0 minutes 3.588 seconds)
[`Download Python source code: plot_randomized_search.py`](https://scikit-learn.org/1.1/_downloads/f6e7c2e766100e8bcbb85bbb947d2893/plot_randomized_search.py)
[`Download Jupyter notebook: plot_randomized_search.ipynb`](https://scikit-learn.org/1.1/_downloads/733ff7845fe2f197ecd0c72afcf23651/plot_randomized_search.ipynb)
scikit_learn Balance model complexity and cross-validated score Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-grid-search-refit-callable-py) to download the full example code or to run this example in your browser via Binder
Balance model complexity and cross-validated score
==================================================
This example balances model complexity and cross-validated score by finding a decent accuracy within 1 standard deviation of the best accuracy score while minimising the number of PCA components [1].
The figure shows the trade-off between cross-validated score and the number of PCA components. The balanced case is when n\_components=10 and accuracy=0.88, which falls into the range within 1 standard deviation of the best accuracy score.
[1] Hastie, T., Tibshirani, R., & Friedman, J. (2001). Model Assessment and Selection. The Elements of Statistical Learning (pp. 219-260). New York, NY, USA: Springer New York Inc.
```
The best_index_ is 2
The n_components selected is 10
The corresponding accuracy score is 0.88
```
```
# Author: Wenhao Zhang <[email protected]>
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
def lower_bound(cv_results):
"""
Calculate the lower bound within 1 standard deviation
of the best `mean_test_scores`.
Parameters
----------
cv_results : dict of numpy(masked) ndarrays
See attribute cv_results_ of `GridSearchCV`
Returns
-------
float
Lower bound within 1 standard deviation of the
best `mean_test_score`.
"""
best_score_idx = np.argmax(cv_results["mean_test_score"])
return (
cv_results["mean_test_score"][best_score_idx]
- cv_results["std_test_score"][best_score_idx]
)
def best_low_complexity(cv_results):
"""
Balance model complexity with cross-validated score.
Parameters
----------
cv_results : dict of numpy(masked) ndarrays
See attribute cv_results_ of `GridSearchCV`.
Return
------
int
Index of a model that has the fewest PCA components
while has its test score within 1 standard deviation of the best
`mean_test_score`.
"""
threshold = lower_bound(cv_results)
candidate_idx = np.flatnonzero(cv_results["mean_test_score"] >= threshold)
best_idx = candidate_idx[
cv_results["param_reduce_dim__n_components"][candidate_idx].argmin()
]
return best_idx
pipe = Pipeline(
[
("reduce_dim", PCA(random_state=42)),
("classify", LinearSVC(random_state=42, C=0.01)),
]
)
param_grid = {"reduce_dim__n_components": [6, 8, 10, 12, 14]}
grid = GridSearchCV(
pipe,
cv=10,
n_jobs=1,
param_grid=param_grid,
scoring="accuracy",
refit=best_low_complexity,
)
X, y = load_digits(return_X_y=True)
grid.fit(X, y)
n_components = grid.cv_results_["param_reduce_dim__n_components"]
test_scores = grid.cv_results_["mean_test_score"]
plt.figure()
plt.bar(n_components, test_scores, width=1.3, color="b")
lower = lower_bound(grid.cv_results_)
plt.axhline(np.max(test_scores), linestyle="--", color="y", label="Best score")
plt.axhline(lower, linestyle="--", color=".5", label="Best score - 1 std")
plt.title("Balance model complexity and cross-validated score")
plt.xlabel("Number of PCA components used")
plt.ylabel("Digit classification accuracy")
plt.xticks(n_components.tolist())
plt.ylim((0, 1.0))
plt.legend(loc="upper left")
best_index_ = grid.best_index_
print("The best_index_ is %d" % best_index_)
print("The n_components selected is %d" % n_components[best_index_])
print(
"The corresponding accuracy score is %.2f"
% grid.cv_results_["mean_test_score"][best_index_]
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.580 seconds)
[`Download Python source code: plot_grid_search_refit_callable.py`](https://scikit-learn.org/1.1/_downloads/02d88d76c60b7397c8c6e221b31568dd/plot_grid_search_refit_callable.py)
[`Download Jupyter notebook: plot_grid_search_refit_callable.ipynb`](https://scikit-learn.org/1.1/_downloads/af8345a01f32fc7a8b3c9693bb4aca30/plot_grid_search_refit_callable.ipynb)
scikit_learn Plotting Validation Curves Note
Click [here](#sphx-glr-download-auto-examples-model-selection-plot-validation-curve-py) to download the full example code or to run this example in your browser via Binder
Plotting Validation Curves
==========================
In this plot you can see the training scores and validation scores of an SVM for different values of the kernel parameter gamma. For very low values of gamma, you can see that both the training score and the validation score are low. This is called underfitting. Medium values of gamma will result in high values for both scores, i.e. the classifier is performing fairly well. If gamma is too high, the classifier will overfit, which means that the training score is good but the validation score is poor.
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
X, y = load_digits(return_X_y=True)
subset_mask = np.isin(y, [1, 2]) # binary classification: 1 vs 2
X, y = X[subset_mask], y[subset_mask]
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(
SVC(),
X,
y,
param_name="gamma",
param_range=param_range,
scoring="accuracy",
n_jobs=2,
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
lw = 2
plt.semilogx(
param_range, train_scores_mean, label="Training score", color="darkorange", lw=lw
)
plt.fill_between(
param_range,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.2,
color="darkorange",
lw=lw,
)
plt.semilogx(
param_range, test_scores_mean, label="Cross-validation score", color="navy", lw=lw
)
plt.fill_between(
param_range,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.2,
color="navy",
lw=lw,
)
plt.legend(loc="best")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.496 seconds)
[`Download Python source code: plot_validation_curve.py`](https://scikit-learn.org/1.1/_downloads/d7ef5ff0bffa701d573ebc3ef124729a/plot_validation_curve.py)
[`Download Jupyter notebook: plot_validation_curve.ipynb`](https://scikit-learn.org/1.1/_downloads/7996e584c563a930d174772f44af2089/plot_validation_curve.ipynb)
scikit_learn Sample pipeline for text feature extraction and evaluation Note
Click [here](#sphx-glr-download-auto-examples-model-selection-grid-search-text-feature-extraction-py) to download the full example code or to run this example in your browser via Binder
Sample pipeline for text feature extraction and evaluation
==========================================================
The dataset used in this example is the 20 newsgroups dataset which will be automatically downloaded and then cached and reused for the document classification example.
You can adjust the number of categories by giving their names to the dataset loader or setting them to None to get all 20 of them.
Here is a sample output of a run on a quad-core machine:
```
Loading 20 newsgroups dataset for categories:
['alt.atheism', 'talk.religion.misc']
1427 documents
2 categories
Performing grid search...
pipeline: ['vect', 'tfidf', 'clf']
parameters:
{'clf__alpha': (1.0000000000000001e-05, 9.9999999999999995e-07),
'clf__max_iter': (10, 50, 80),
'clf__penalty': ('l2', 'elasticnet'),
'tfidf__use_idf': (True, False),
'vect__max_n': (1, 2),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 5000, 10000, 50000)}
done in 1737.030s
Best score: 0.940
Best parameters set:
clf__alpha: 9.9999999999999995e-07
clf__max_iter: 50
clf__penalty: 'elasticnet'
tfidf__use_idf: True
vect__max_n: 2
vect__max_df: 0.75
vect__max_features: 50000
```
```
# Author: Olivier Grisel <[email protected]>
# Peter Prettenhofer <[email protected]>
# Mathieu Blondel <[email protected]>
# License: BSD 3 clause
```
Data loading
------------
```
from pprint import pprint
from time import time
import logging
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
# Load some categories from the training set
categories = [
"alt.atheism",
"talk.religion.misc",
]
# Uncomment the following to do the analysis on all the categories
# categories = None
print("Loading 20 newsgroups dataset for categories:")
print(categories)
data = fetch_20newsgroups(subset="train", categories=categories)
print("%d documents" % len(data.filenames))
print("%d categories" % len(data.target_names))
print()
```
Pipeline with hyperparameter tuning
-----------------------------------
```
# Define a pipeline combining a text feature extractor with a simple classifier
pipeline = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf", SGDClassifier()),
]
)
# Parameters to use for grid search. Uncommenting more parameters will give
# better exploring power but will increase processing time in a combinatorial
# way
parameters = {
"vect__max_df": (0.5, 0.75, 1.0),
# 'vect__max_features': (None, 5000, 10000, 50000),
"vect__ngram_range": ((1, 1), (1, 2)), # unigrams or bigrams
# 'tfidf__use_idf': (True, False),
# 'tfidf__norm': ('l1', 'l2'),
"clf__max_iter": (20,),
"clf__alpha": (0.00001, 0.000001),
"clf__penalty": ("l2", "elasticnet"),
# 'clf__max_iter': (10, 50, 80),
}
# Find the best parameters for both the feature extraction and the
# classifier
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1)
print("Performing grid search...")
print("pipeline:", [name for name, _ in pipeline.steps])
print("parameters:")
pprint(parameters)
t0 = time()
grid_search.fit(data.data, data.target)
print("done in %0.3fs" % (time() - t0))
print()
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
```
**Total running time of the script:** ( 0 minutes 0.000 seconds)
[`Download Python source code: grid_search_text_feature_extraction.py`](https://scikit-learn.org/1.1/_downloads/6a71771766f7ff51a9ac596ae0439d01/grid_search_text_feature_extraction.py)
[`Download Jupyter notebook: grid_search_text_feature_extraction.ipynb`](https://scikit-learn.org/1.1/_downloads/2686c9a8c33b1b0159cc05f207d65b4c/grid_search_text_feature_extraction.ipynb)
scikit_learn Incremental PCA Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-incremental-pca-py) to download the full example code or to run this example in your browser via Binder
Incremental PCA
===============
Incremental principal component analysis (IPCA) is typically used as a replacement for principal component analysis (PCA) when the dataset to be decomposed is too large to fit in memory. IPCA builds a low-rank approximation for the input data using an amount of memory which is independent of the number of input data samples. It is still dependent on the input data features, but changing the batch size allows for control of memory usage.
This example serves as a visual check that IPCA is able to find a similar projection of the data to PCA (up to a sign flip), while only processing a few samples at a time. This can be considered a “toy example”, as IPCA is intended for large datasets which do not fit in main memory, requiring incremental approaches.
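The full example below calls `fit_transform` on data that is already in memory. As a minimal, illustrative sketch (not part of the original example), the out-of-core usage described above would rely on `partial_fit`, feeding the estimator one chunk at a time; the array and the chunking here are stand-ins for data streamed from disk.
```
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X_large = rng.rand(1000, 50)  # stand-in for a dataset too large for memory

ipca = IncrementalPCA(n_components=2)
for chunk in np.array_split(X_large, 10):  # pretend each chunk is loaded separately
    ipca.partial_fit(chunk)

print(ipca.transform(X_large[:5]).shape)  # (5, 2)
```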
```
# Authors: Kyle Kastner
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, IncrementalPCA
iris = load_iris()
X = iris.data
y = iris.target
n_components = 2
ipca = IncrementalPCA(n_components=n_components, batch_size=10)
X_ipca = ipca.fit_transform(X)
pca = PCA(n_components=n_components)
X_pca = pca.fit_transform(X)
colors = ["navy", "turquoise", "darkorange"]
for X_transformed, title in [(X_ipca, "Incremental PCA"), (X_pca, "PCA")]:
plt.figure(figsize=(8, 8))
for color, i, target_name in zip(colors, [0, 1, 2], iris.target_names):
plt.scatter(
X_transformed[y == i, 0],
X_transformed[y == i, 1],
color=color,
lw=2,
label=target_name,
)
if "Incremental" in title:
err = np.abs(np.abs(X_pca) - np.abs(X_ipca)).mean()
plt.title(title + " of iris dataset\nMean absolute unsigned error %.6f" % err)
else:
plt.title(title + " of iris dataset")
plt.legend(loc="best", shadow=False, scatterpoints=1)
plt.axis([-4, 4, -1.5, 1.5])
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.208 seconds)
[`Download Python source code: plot_incremental_pca.py`](https://scikit-learn.org/1.1/_downloads/84013ce9dfea94b493c87cb1413fc3bf/plot_incremental_pca.py)
[`Download Jupyter notebook: plot_incremental_pca.ipynb`](https://scikit-learn.org/1.1/_downloads/1e455640a8ffb0f44808048624f12d3d/plot_incremental_pca.ipynb)
scikit_learn Blind source separation using FastICA Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-ica-blind-source-separation-py) to download the full example code or to run this example in your browser via Binder
Blind source separation using FastICA
=====================================
An example of estimating sources from noisy data.
[Independent component analysis (ICA)](../../modules/decomposition#ica) is used to estimate sources given noisy measurements. Imagine 3 instruments playing simultaneously and 3 microphones recording the mixed signals. ICA is used to recover the sources, i.e. what is played by each instrument. Importantly, PCA fails at recovering our `instruments` since the related signals reflect non-Gaussian processes.
Generate sample data
--------------------
```
import numpy as np
from scipy import signal
np.random.seed(0)
n_samples = 2000
time = np.linspace(0, 8, n_samples)
s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
s2 = np.sign(np.sin(3 * time)) # Signal 2 : square signal
s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: saw tooth signal
S = np.c_[s1, s2, s3]
S += 0.2 * np.random.normal(size=S.shape) # Add noise
S /= S.std(axis=0) # Standardize data
# Mix data
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]]) # Mixing matrix
X = np.dot(S, A.T) # Generate observations
```
Fit ICA and PCA models
----------------------
```
from sklearn.decomposition import FastICA, PCA
# Compute ICA
ica = FastICA(n_components=3)
S_ = ica.fit_transform(X) # Reconstruct signals
A_ = ica.mixing_ # Get estimated mixing matrix
# We can `prove` that the ICA model applies by reverting the unmixing.
assert np.allclose(X, np.dot(S_, A_.T) + ica.mean_)
# For comparison, compute PCA
pca = PCA(n_components=3)
H = pca.fit_transform(X) # Reconstruct signals based on orthogonal components
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/decomposition/_fastica.py:494: FutureWarning: Starting in v1.3, whiten='unit-variance' will be used by default.
warnings.warn(
```
Plot results
------------
```
import matplotlib.pyplot as plt
plt.figure()
models = [X, S, S_, H]
names = [
"Observations (mixed signal)",
"True Sources",
"ICA recovered signals",
"PCA recovered signals",
]
colors = ["red", "steelblue", "orange"]
for ii, (model, name) in enumerate(zip(models, names), 1):
plt.subplot(4, 1, ii)
plt.title(name)
for sig, color in zip(model.T, colors):
plt.plot(sig, color=color)
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.215 seconds)
[`Download Python source code: plot_ica_blind_source_separation.py`](https://scikit-learn.org/1.1/_downloads/b122040dba159b8887080828461a517e/plot_ica_blind_source_separation.py)
[`Download Jupyter notebook: plot_ica_blind_source_separation.ipynb`](https://scikit-learn.org/1.1/_downloads/3018e0d07b27e25beba9764116709763/plot_ica_blind_source_separation.ipynb)
scikit_learn Image denoising using dictionary learning Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-image-denoising-py) to download the full example code or to run this example in your browser via Binder
Image denoising using dictionary learning
=========================================
An example comparing the effect of reconstructing noisy fragments of a raccoon face image, first using online [Dictionary Learning](../../modules/decomposition#dictionarylearning) and then various transform methods.
The dictionary is fitted on the distorted left half of the image, and subsequently used to reconstruct the right half. Note that even better performance could be achieved by fitting to an undistorted (i.e. noiseless) image, but here we start from the assumption that it is not available.
A common practice for evaluating the results of image denoising is by looking at the difference between the reconstruction and the original image. If the reconstruction is perfect this will look like Gaussian noise.
It can be seen from the plots that the result of [Orthogonal Matching Pursuit (OMP)](../../modules/linear_model#omp) with two non-zero coefficients is a bit less biased than when keeping only one (the edges look less prominent). It is, in addition, closer to the ground truth in Frobenius norm.
The result of [Least Angle Regression](../../modules/linear_model#least-angle-regression) is much more strongly biased: the difference is reminiscent of the local intensity value of the original image.
Thresholding is clearly not useful for denoising, but it is here to show that it can produce a suggestive output with very high speed, and thus be useful for other tasks such as object classification, where performance is not necessarily related to visualisation.
Generate distorted image
------------------------
```
import numpy as np
import scipy as sp
try: # SciPy >= 0.16 have face in misc
from scipy.misc import face
face = face(gray=True)
except ImportError:
face = sp.face(gray=True)
# Convert from uint8 representation with values between 0 and 255 to
# a floating point representation with values between 0 and 1.
face = face / 255.0
# downsample for higher speed
face = face[::4, ::4] + face[1::4, ::4] + face[::4, 1::4] + face[1::4, 1::4]
face /= 4.0
height, width = face.shape
# Distort the right half of the image
print("Distorting image...")
distorted = face.copy()
distorted[:, width // 2 :] += 0.075 * np.random.randn(height, width // 2)
```
```
Distorting image...
```
Display the distorted image
---------------------------
```
import matplotlib.pyplot as plt
def show_with_diff(image, reference, title):
"""Helper function to display denoising"""
plt.figure(figsize=(5, 3.3))
plt.subplot(1, 2, 1)
plt.title("Image")
plt.imshow(image, vmin=0, vmax=1, cmap=plt.cm.gray, interpolation="nearest")
plt.xticks(())
plt.yticks(())
plt.subplot(1, 2, 2)
difference = image - reference
plt.title("Difference (norm: %.2f)" % np.sqrt(np.sum(difference**2)))
plt.imshow(
difference, vmin=-0.5, vmax=0.5, cmap=plt.cm.PuOr, interpolation="nearest"
)
plt.xticks(())
plt.yticks(())
plt.suptitle(title, size=16)
plt.subplots_adjust(0.02, 0.02, 0.98, 0.79, 0.02, 0.2)
show_with_diff(distorted, face, "Distorted image")
```
Extract reference patches
-------------------------
```
from time import time
from sklearn.feature_extraction.image import extract_patches_2d
# Extract all reference patches from the left half of the image
print("Extracting reference patches...")
t0 = time()
patch_size = (7, 7)
data = extract_patches_2d(distorted[:, : width // 2], patch_size)
data = data.reshape(data.shape[0], -1)
data -= np.mean(data, axis=0)
data /= np.std(data, axis=0)
print(f"{data.shape[0]} patches extracted in %.2fs." % (time() - t0))
```
```
Extracting reference patches...
22692 patches extracted in 0.01s.
```
Learn the dictionary from reference patches
-------------------------------------------
```
from sklearn.decomposition import MiniBatchDictionaryLearning
print("Learning the dictionary...")
t0 = time()
dico = MiniBatchDictionaryLearning(
# increase to 300 for higher quality results at the cost of slower
# training times.
n_components=50,
batch_size=200,
alpha=1.0,
max_iter=10,
)
V = dico.fit(data).components_
dt = time() - t0
print(f"{dico.n_iter_} iterations / {dico.n_steps_} steps in {dt:.2f}.")
plt.figure(figsize=(4.2, 4))
for i, comp in enumerate(V[:100]):
plt.subplot(10, 10, i + 1)
plt.imshow(comp.reshape(patch_size), cmap=plt.cm.gray_r, interpolation="nearest")
plt.xticks(())
plt.yticks(())
plt.suptitle(
"Dictionary learned from face patches\n"
+ "Train time %.1fs on %d patches" % (dt, len(data)),
fontsize=16,
)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
```
```
Learning the dictionary...
1.0 iterations / 114 steps in 14.16.
```
Extract noisy patches and reconstruct them using the dictionary
---------------------------------------------------------------
```
from sklearn.feature_extraction.image import reconstruct_from_patches_2d
print("Extracting noisy patches... ")
t0 = time()
data = extract_patches_2d(distorted[:, width // 2 :], patch_size)
data = data.reshape(data.shape[0], -1)
intercept = np.mean(data, axis=0)
data -= intercept
print("done in %.2fs." % (time() - t0))
transform_algorithms = [
("Orthogonal Matching Pursuit\n1 atom", "omp", {"transform_n_nonzero_coefs": 1}),
("Orthogonal Matching Pursuit\n2 atoms", "omp", {"transform_n_nonzero_coefs": 2}),
("Least-angle regression\n4 atoms", "lars", {"transform_n_nonzero_coefs": 4}),
("Thresholding\n alpha=0.1", "threshold", {"transform_alpha": 0.1}),
]
reconstructions = {}
for title, transform_algorithm, kwargs in transform_algorithms:
print(title + "...")
reconstructions[title] = face.copy()
t0 = time()
dico.set_params(transform_algorithm=transform_algorithm, **kwargs)
code = dico.transform(data)
patches = np.dot(code, V)
patches += intercept
patches = patches.reshape(len(data), *patch_size)
if transform_algorithm == "threshold":
patches -= patches.min()
patches /= patches.max()
reconstructions[title][:, width // 2 :] = reconstruct_from_patches_2d(
patches, (height, width // 2)
)
dt = time() - t0
print("done in %.2fs." % dt)
show_with_diff(reconstructions[title], face, title + " (time: %.1fs)" % dt)
plt.show()
```
```
Extracting noisy patches...
done in 0.00s.
Orthogonal Matching Pursuit
1 atom...
done in 0.54s.
Orthogonal Matching Pursuit
2 atoms...
done in 1.09s.
Least-angle regression
4 atoms...
done in 8.29s.
Thresholding
alpha=0.1...
done in 0.09s.
```
**Total running time of the script:** ( 0 minutes 25.422 seconds)
[`Download Python source code: plot_image_denoising.py`](https://scikit-learn.org/1.1/_downloads/149ff4a0ff65a845f675cc7a0fcb86ea/plot_image_denoising.py)
[`Download Jupyter notebook: plot_image_denoising.ipynb`](https://scikit-learn.org/1.1/_downloads/f726f6c50f1cc13e1afb7561fa005d16/plot_image_denoising.ipynb)
scikit_learn FastICA on 2D point clouds Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-ica-vs-pca-py) to download the full example code or to run this example in your browser via Binder
FastICA on 2D point clouds
==========================
This example visually compares, in the feature space, the results of two different component analysis techniques.
[Independent component analysis (ICA)](../../modules/decomposition#ica) vs [Principal component analysis (PCA)](../../modules/decomposition#pca).
Representing ICA in the feature space gives the view of ‘geometric ICA’: ICA is an algorithm that finds directions in the feature space corresponding to projections with high non-Gaussianity. These directions need not be orthogonal in the original feature space, but they are orthogonal in the whitened feature space, in which all directions correspond to the same variance.
PCA, on the other hand, finds orthogonal directions in the raw feature space that correspond to directions accounting for maximum variance.
Here we simulate independent sources using a highly non-Gaussian process, two Student's t distributions with a low number of degrees of freedom (top left figure). We mix them to create observations (top right figure). In this raw observation space, directions identified by PCA are represented by orange vectors. We represent the signal in the PCA space, after whitening by the variance corresponding to the PCA vectors (lower left). Running ICA corresponds to finding a rotation in this space to identify the directions of largest non-Gaussianity (lower right).
```
# Authors: Alexandre Gramfort, Gael Varoquaux
# License: BSD 3 clause
```
Generate sample data
--------------------
```
import numpy as np
from sklearn.decomposition import PCA, FastICA
rng = np.random.RandomState(42)
S = rng.standard_t(1.5, size=(20000, 2))
S[:, 0] *= 2.0
# Mix data
A = np.array([[1, 1], [0, 2]]) # Mixing matrix
X = np.dot(S, A.T) # Generate observations
pca = PCA()
S_pca_ = pca.fit(X).transform(X)
ica = FastICA(random_state=rng, whiten="arbitrary-variance")
S_ica_ = ica.fit(X).transform(X) # Estimate the sources
S_ica_ /= S_ica_.std(axis=0)
```
Plot results
------------
```
import matplotlib.pyplot as plt
def plot_samples(S, axis_list=None):
plt.scatter(
S[:, 0], S[:, 1], s=2, marker="o", zorder=10, color="steelblue", alpha=0.5
)
if axis_list is not None:
for axis, color, label in axis_list:
axis /= axis.std()
x_axis, y_axis = axis
plt.quiver(
(0, 0),
(0, 0),
x_axis,
y_axis,
zorder=11,
width=0.01,
scale=6,
color=color,
label=label,
)
plt.hlines(0, -3, 3)
plt.vlines(0, -3, 3)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.xlabel("x")
plt.ylabel("y")
plt.figure()
plt.subplot(2, 2, 1)
plot_samples(S / S.std())
plt.title("True Independent Sources")
axis_list = [(pca.components_.T, "orange", "PCA"), (ica.mixing_, "red", "ICA")]
plt.subplot(2, 2, 2)
plot_samples(X / np.std(X), axis_list=axis_list)
legend = plt.legend(loc="lower right")
legend.set_zorder(100)
plt.title("Observations")
plt.subplot(2, 2, 3)
plot_samples(S_pca_ / np.std(S_pca_, axis=0))
plt.title("PCA recovered signals")
plt.subplot(2, 2, 4)
plot_samples(S_ica_ / np.std(S_ica_))
plt.title("ICA recovered signals")
plt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.36)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.227 seconds)
[`Download Python source code: plot_ica_vs_pca.py`](https://scikit-learn.org/1.1/_downloads/dc313f6a0617a04ceea6108f4cde71b6/plot_ica_vs_pca.py)
[`Download Jupyter notebook: plot_ica_vs_pca.ipynb`](https://scikit-learn.org/1.1/_downloads/4ee88a807e060ca374ab95e0d8d819ed/plot_ica_vs_pca.ipynb)
scikit_learn Faces dataset decompositions Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-faces-decomposition-py) to download the full example code or to run this example in your browser via Binder
Faces dataset decompositions
============================
This example applies to [The Olivetti faces dataset](https://scikit-learn.org/1.1/datasets/real_world.html#olivetti-faces-dataset) different unsupervised matrix decomposition (dimension reduction) methods from the module [`sklearn.decomposition`](../../modules/classes#module-sklearn.decomposition "sklearn.decomposition") (see the documentation chapter [Decomposing signals in components (matrix factorization problems)](../../modules/decomposition#decompositions)).
* Authors: Vlad Niculae, Alexandre Gramfort
* License: BSD 3 clause
Dataset preparation
-------------------
Loading and preprocessing the Olivetti faces dataset.
```
import logging
from numpy.random import RandomState
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn import cluster
from sklearn import decomposition
rng = RandomState(0)
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
faces, _ = fetch_olivetti_faces(return_X_y=True, shuffle=True, random_state=rng)
n_samples, n_features = faces.shape
# Global centering (focus on one feature, centering all samples)
faces_centered = faces - faces.mean(axis=0)
# Local centering (focus on one sample, centering all features)
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
```
```
Dataset consists of 400 faces
```
Define a base function to plot the gallery of faces.
```
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
def plot_gallery(title, images, n_col=n_col, n_row=n_row, cmap=plt.cm.gray):
fig, axs = plt.subplots(
nrows=n_row,
ncols=n_col,
figsize=(2.0 * n_col, 2.3 * n_row),
facecolor="white",
constrained_layout=True,
)
fig.set_constrained_layout_pads(w_pad=0.01, h_pad=0.02, hspace=0, wspace=0)
fig.set_edgecolor("black")
fig.suptitle(title, size=16)
for ax, vec in zip(axs.flat, images):
vmax = max(vec.max(), -vec.min())
im = ax.imshow(
vec.reshape(image_shape),
cmap=cmap,
interpolation="nearest",
vmin=-vmax,
vmax=vmax,
)
ax.axis("off")
fig.colorbar(im, ax=axs, orientation="horizontal", shrink=0.99, aspect=40, pad=0.01)
plt.show()
```
Let’s take a look at our data. Gray color indicates negative values, white indicates positive values.
```
plot_gallery("Faces from dataset", faces_centered[:n_components])
```
Decomposition
-------------
Initialise different estimators for decomposition, fit each of them on all images, and plot some results. Each estimator extracts 6 components as vectors \(h \in \mathbb{R}^{4096}\). We display these vectors in a human-friendly visualisation as 64x64 pixel images.
Read more in the [User Guide](../../modules/decomposition#decompositions).
### Eigenfaces - PCA using randomized SVD
Linear dimensionality reduction using Singular Value Decomposition (SVD) of the data to project it to a lower dimensional space.
Note
The Eigenfaces estimator, via the [`sklearn.decomposition.PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), also provides a scalar `noise_variance_` (the mean of pixelwise variance) that cannot be displayed as an image.
```
pca_estimator = decomposition.PCA(
n_components=n_components, svd_solver="randomized", whiten=True
)
pca_estimator.fit(faces_centered)
plot_gallery(
"Eigenfaces - PCA using randomized SVD", pca_estimator.components_[:n_components]
)
```
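As a small aside (not part of the original example), the scalar mentioned in the note above can simply be inspected on the fitted estimator from the block above:
```
# Illustrative only: `pca_estimator` is the fitted PCA from the previous block.
print("Noise variance estimated by PCA:", pca_estimator.noise_variance_)
```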
### Non-negative components - NMF
Estimate the non-negative original data as the product of two non-negative matrices.
```
nmf_estimator = decomposition.NMF(n_components=n_components, tol=5e-3)
nmf_estimator.fit(faces)  # original non-negative dataset
plot_gallery("Non-negative components - NMF", nmf_estimator.components_[:n_components])
```
### Independent components - FastICA
Independent component analysis separates a multivariate signal into additive subcomponents that are maximally independent.
```
ica_estimator = decomposition.FastICA(
n_components=n_components, max_iter=400, whiten="arbitrary-variance", tol=15e-5
)
ica_estimator.fit(faces_centered)
plot_gallery(
"Independent components - FastICA", ica_estimator.components_[:n_components]
)
```
### Sparse components - MiniBatchSparsePCA
Mini-batch sparse PCA (`MiniBatchSparsePCA`) extracts the set of sparse components that best reconstruct the data. This variant is faster but less accurate than the similar [`sklearn.decomposition.SparsePCA`](../../modules/generated/sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA").
```
batch_pca_estimator = decomposition.MiniBatchSparsePCA(
n_components=n_components, alpha=0.1, n_iter=100, batch_size=3, random_state=rng
)
batch_pca_estimator.fit(faces_centered)
plot_gallery(
"Sparse components - MiniBatchSparsePCA",
batch_pca_estimator.components_[:n_components],
)
```
### Dictionary learning
By default, `MiniBatchDictionaryLearning` divides the data into mini-batches and optimizes in an online manner by cycling over the mini-batches for the specified number of iterations.
```
batch_dict_estimator = decomposition.MiniBatchDictionaryLearning(
n_components=n_components, alpha=0.1, n_iter=50, batch_size=3, random_state=rng
)
batch_dict_estimator.fit(faces_centered)
plot_gallery("Dictionary learning", batch_dict_estimator.components_[:n_components])
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/decomposition/_dict_learning.py:2314: FutureWarning: 'n_iter' is deprecated in version 1.1 and will be removed in version 1.3. Use 'max_iter' instead.
warnings.warn(
```
### Cluster centers - MiniBatchKMeans
`MiniBatchKMeans` is computationally efficient and implements on-line learning with a `partial_fit` method. That is why it could be beneficial to enhance some time-consuming algorithms with `MiniBatchKMeans`.
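The example below uses the ordinary `fit` method. As a minimal sketch (not part of the original example), the `partial_fit` interface mentioned above can be fed one batch at a time; the random batches here merely stand in for data arriving incrementally.
```
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(0)
streaming_kmeans = MiniBatchKMeans(n_clusters=3, random_state=0)
for _ in range(20):              # pretend 20 batches arrive over time
    batch = rng.rand(50, 4)      # each batch: 50 samples, 4 features
    streaming_kmeans.partial_fit(batch)

print(streaming_kmeans.cluster_centers_.shape)  # (3, 4)
```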
```
kmeans_estimator = cluster.MiniBatchKMeans(
n_clusters=n_components,
tol=1e-3,
batch_size=20,
max_iter=50,
random_state=rng,
)
kmeans_estimator.fit(faces_centered)
plot_gallery(
"Cluster centers - MiniBatchKMeans",
kmeans_estimator.cluster_centers_[:n_components],
)
```
### Factor Analysis components - FA
`Factor Analysis` is similar to `PCA` but has the advantage of modelling the variance in every direction of the input space independently (heteroscedastic noise). Read more in the [User Guide](../../modules/decomposition#fa).
```
fa_estimator = decomposition.FactorAnalysis(n_components=n_components, max_iter=20)
fa_estimator.fit(faces_centered)
plot_gallery("Factor Analysis (FA)", fa_estimator.components_[:n_components])
# --- Pixelwise variance
plt.figure(figsize=(3.2, 3.6), facecolor="white", tight_layout=True)
vec = fa_estimator.noise_variance_
vmax = max(vec.max(), -vec.min())
plt.imshow(
vec.reshape(image_shape),
cmap=plt.cm.gray,
interpolation="nearest",
vmin=-vmax,
vmax=vmax,
)
plt.axis("off")
plt.title("Pixelwise variance from \n Factor Analysis (FA)", size=16, wrap=True)
plt.colorbar(orientation="horizontal", shrink=0.8, pad=0.03)
plt.show()
```
Decomposition: Dictionary learning
----------------------------------
In the further section, let’s consider [Dictionary Learning](../../modules/decomposition#dictionarylearning) more precisely. Dictionary learning is a problem that amounts to finding a sparse representation of the input data as a combination of simple elements. These simple elements form a dictionary. It is possible to constrain the dictionary and/or coding coefficients to be positive to match constraints that may be present in the data.
`MiniBatchDictionaryLearning` implements a faster, but less accurate version of the dictionary learning algorithm that is better suited for large datasets. Read more in the [User Guide](../../modules/decomposition#minibatchdictionarylearning).
Plot the same samples from our dataset but with another colormap. Red indicates negative values, blue indicates positive values, and white represents zeros.
```
plot_gallery("Faces from dataset", faces_centered[:n_components], cmap=plt.cm.RdBu)
```
Similar to the previous examples, we change parameters and train the `MiniBatchDictionaryLearning` estimator on all images. Generally, dictionary learning and sparse encoding decompose the input data into the dictionary and the coding coefficients matrices: \(X \approx UV\), where \(X = [x\_1, \ldots, x\_n]\), \(X \in \mathbb{R}^{m×n}\), dictionary \(U \in \mathbb{R}^{m×k}\), coding coefficients \(V \in \mathbb{R}^{k×n}\).
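As a quick, illustrative check of this factorization (not part of the original example), the fitted `batch_dict_estimator` from the earlier dictionary-learning block exposes the learned dictionary as `components_`, while `transform` returns the coding coefficients; in scikit-learn's sample-by-feature convention, the product of the two approximates the input.
```
# Rough sketch only, reusing `batch_dict_estimator` and `faces_centered` from above.
codes = batch_dict_estimator.transform(faces_centered)   # coding coefficients
dictionary = batch_dict_estimator.components_            # learned atoms
reconstruction = codes @ dictionary
print(codes.shape, dictionary.shape, reconstruction.shape)
# e.g. (400, 6) (6, 4096) (400, 4096): the reconstruction matches the input shape
```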
Also below are the results when the dictionary and coding coefficients are positively constrained.
### Dictionary learning - positive dictionary
In the following section we enforce positivity when finding the dictionary.
```
dict_pos_dict_estimator = decomposition.MiniBatchDictionaryLearning(
n_components=n_components,
alpha=0.1,
n_iter=50,
batch_size=3,
random_state=rng,
positive_dict=True,
)
dict_pos_dict_estimator.fit(faces_centered)
plot_gallery(
"Dictionary learning - positive dictionary",
dict_pos_dict_estimator.components_[:n_components],
cmap=plt.cm.RdBu,
)
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/decomposition/_dict_learning.py:2314: FutureWarning: 'n_iter' is deprecated in version 1.1 and will be removed in version 1.3. Use 'max_iter' instead.
warnings.warn(
```
### Dictionary learning - positive code
Below we constrain the coding coefficients as a positive matrix.
```
dict_pos_code_estimator = decomposition.MiniBatchDictionaryLearning(
n_components=n_components,
alpha=0.1,
n_iter=50,
batch_size=3,
fit_algorithm="cd",
random_state=rng,
positive_code=True,
)
dict_pos_code_estimator.fit(faces_centered)
plot_gallery(
"Dictionary learning - positive code",
dict_pos_code_estimator.components_[:n_components],
cmap=plt.cm.RdBu,
)
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/decomposition/_dict_learning.py:2314: FutureWarning: 'n_iter' is deprecated in version 1.1 and will be removed in version 1.3. Use 'max_iter' instead.
warnings.warn(
```
### Dictionary learning - positive dictionary & code
Also below are the results if the dictionary values and coding coefficients are positively constrained.
```
dict_pos_estimator = decomposition.MiniBatchDictionaryLearning(
n_components=n_components,
alpha=0.1,
n_iter=50,
batch_size=3,
fit_algorithm="cd",
random_state=rng,
positive_dict=True,
positive_code=True,
)
dict_pos_estimator.fit(faces_centered)
plot_gallery(
"Dictionary learning - positive dictionary & code",
dict_pos_estimator.components_[:n_components],
cmap=plt.cm.RdBu,
)
```
```
/home/runner/work/scikit-learn/scikit-learn/sklearn/decomposition/_dict_learning.py:2314: FutureWarning: 'n_iter' is deprecated in version 1.1 and will be removed in version 1.3. Use 'max_iter' instead.
warnings.warn(
```
**Total running time of the script:** ( 0 minutes 6.123 seconds)
[`Download Python source code: plot_faces_decomposition.py`](https://scikit-learn.org/1.1/_downloads/4825fc8223d1af0f3b61080c3dea3a62/plot_faces_decomposition.py)
[`Download Jupyter notebook: plot_faces_decomposition.ipynb`](https://scikit-learn.org/1.1/_downloads/fcae36814d8e700024ca855a1eb87ca9/plot_faces_decomposition.ipynb)
scikit_learn Sparse coding with a precomputed dictionary Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-sparse-coding-py) to download the full example code or to run this example in your browser via Binder
Sparse coding with a precomputed dictionary
===========================================
Transform a signal as a sparse combination of Ricker wavelets. This example visually compares different sparse coding methods using the [`SparseCoder`](../../modules/generated/sklearn.decomposition.sparsecoder#sklearn.decomposition.SparseCoder "sklearn.decomposition.SparseCoder") estimator. The Ricker (also known as the Mexican hat or the second derivative of a Gaussian) is not a particularly good kernel to represent piecewise constant signals like this one. It can therefore be seen how much adding atoms of different widths matters, which motivates learning the dictionary to best fit your type of signals.
The richer dictionary on the right is not larger in size; heavier subsampling is performed in order to stay within the same order of magnitude.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import SparseCoder
def ricker_function(resolution, center, width):
"""Discrete sub-sampled Ricker (Mexican hat) wavelet"""
x = np.linspace(0, resolution - 1, resolution)
x = (
(2 / (np.sqrt(3 * width) * np.pi**0.25))
* (1 - (x - center) ** 2 / width**2)
* np.exp(-((x - center) ** 2) / (2 * width**2))
)
return x
def ricker_matrix(width, resolution, n_components):
"""Dictionary of Ricker (Mexican hat) wavelets"""
centers = np.linspace(0, resolution - 1, n_components)
D = np.empty((n_components, resolution))
for i, center in enumerate(centers):
D[i] = ricker_function(resolution, center, width)
D /= np.sqrt(np.sum(D**2, axis=1))[:, np.newaxis]
return D
resolution = 1024
subsampling = 3 # subsampling factor
width = 100
n_components = resolution // subsampling
# Compute a wavelet dictionary
D_fixed = ricker_matrix(width=width, resolution=resolution, n_components=n_components)
D_multi = np.r_[
tuple(
ricker_matrix(width=w, resolution=resolution, n_components=n_components // 5)
for w in (10, 50, 100, 500, 1000)
)
]
# Generate a signal
y = np.linspace(0, resolution - 1, resolution)
first_quarter = y < resolution / 4
y[first_quarter] = 3.0
y[np.logical_not(first_quarter)] = -1.0
# List the different sparse coding methods in the following format:
# (title, transform_algorithm, transform_alpha,
# transform_n_nozero_coefs, color)
estimators = [
("OMP", "omp", None, 15, "navy"),
("Lasso", "lasso_lars", 2, None, "turquoise"),
]
lw = 2
plt.figure(figsize=(13, 6))
for subplot, (D, title) in enumerate(
zip((D_fixed, D_multi), ("fixed width", "multiple widths"))
):
plt.subplot(1, 2, subplot + 1)
plt.title("Sparse coding against %s dictionary" % title)
plt.plot(y, lw=lw, linestyle="--", label="Original signal")
# Do a wavelet approximation
for title, algo, alpha, n_nonzero, color in estimators:
coder = SparseCoder(
dictionary=D,
transform_n_nonzero_coefs=n_nonzero,
transform_alpha=alpha,
transform_algorithm=algo,
)
x = coder.transform(y.reshape(1, -1))
density = len(np.flatnonzero(x))
x = np.ravel(np.dot(x, D))
squared_error = np.sum((y - x) ** 2)
plt.plot(
x,
color=color,
lw=lw,
label="%s: %s nonzero coefs,\n%.2f error" % (title, density, squared_error),
)
# Soft thresholding debiasing
coder = SparseCoder(
dictionary=D, transform_algorithm="threshold", transform_alpha=20
)
x = coder.transform(y.reshape(1, -1))
_, idx = np.where(x != 0)
x[0, idx], _, _, _ = np.linalg.lstsq(D[idx, :].T, y, rcond=None)
x = np.ravel(np.dot(x, D))
squared_error = np.sum((y - x) ** 2)
plt.plot(
x,
color="darkorange",
lw=lw,
label="Thresholding w/ debiasing:\n%d nonzero coefs, %.2f error"
% (len(idx), squared_error),
)
plt.axis("tight")
plt.legend(shadow=False, loc="best")
plt.subplots_adjust(0.04, 0.07, 0.97, 0.90, 0.09, 0.2)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.262 seconds)
[`Download Python source code: plot_sparse_coding.py`](https://scikit-learn.org/1.1/_downloads/2a14e362a70d246e83fa6a89ca069cee/plot_sparse_coding.py)
[`Download Jupyter notebook: plot_sparse_coding.ipynb`](https://scikit-learn.org/1.1/_downloads/c763870a5d2681d445fc65fcf63bba31/plot_sparse_coding.ipynb)
scikit_learn Kernel PCA Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-kernel-pca-py) to download the full example code or to run this example in your browser via Binder
Kernel PCA
==========
This example shows the difference between the Principal Components Analysis ([`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA")) and its kernelized version ([`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA")).
On the one hand, we show that [`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") is able to find a projection of the data which makes them linearly separable, while this is not the case with [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA").
Finally, we show that inverting this projection is an approximation with [`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA"), while it is exact with [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA").
```
# Authors: Mathieu Blondel
# Andreas Mueller
# Guillaume Lemaitre
# License: BSD 3 clause
```
Projecting data: `PCA` vs. `KernelPCA`
--------------------------------------
In this section, we show the advantages of using a kernel when projecting data using a Principal Component Analysis (PCA). We create a dataset made of two nested circles.
```
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
X, y = make_circles(n_samples=1_000, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
```
Let’s have a quick first look at the generated dataset.
```
import matplotlib.pyplot as plt
_, (train_ax, test_ax) = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(8, 4))
train_ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train)
train_ax.set_ylabel("Feature #1")
train_ax.set_xlabel("Feature #0")
train_ax.set_title("Training data")
test_ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test)
test_ax.set_xlabel("Feature #0")
_ = test_ax.set_title("Testing data")
```
The samples from each class cannot be linearly separated: there is no straight line that can split the samples of the inner set from the outer set.
Now, we will use PCA with and without a kernel to see the effect of using such a kernel. The kernel used here is a radial basis function (RBF) kernel.
```
from sklearn.decomposition import PCA, KernelPCA
pca = PCA(n_components=2)
kernel_pca = KernelPCA(
n_components=None, kernel="rbf", gamma=10, fit_inverse_transform=True, alpha=0.1
)
X_test_pca = pca.fit(X_train).transform(X_test)
X_test_kernel_pca = kernel_pca.fit(X_train).transform(X_test)
```
```
fig, (orig_data_ax, pca_proj_ax, kernel_pca_proj_ax) = plt.subplots(
ncols=3, figsize=(14, 4)
)
orig_data_ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test)
orig_data_ax.set_ylabel("Feature #1")
orig_data_ax.set_xlabel("Feature #0")
orig_data_ax.set_title("Testing data")
pca_proj_ax.scatter(X_test_pca[:, 0], X_test_pca[:, 1], c=y_test)
pca_proj_ax.set_ylabel("Principal component #1")
pca_proj_ax.set_xlabel("Principal component #0")
pca_proj_ax.set_title("Projection of testing data\n using PCA")
kernel_pca_proj_ax.scatter(X_test_kernel_pca[:, 0], X_test_kernel_pca[:, 1], c=y_test)
kernel_pca_proj_ax.set_ylabel("Principal component #1")
kernel_pca_proj_ax.set_xlabel("Principal component #0")
_ = kernel_pca_proj_ax.set_title("Projection of testing data\n using KernelPCA")
```
We recall that PCA transforms the data linearly. Intuitively, it means that the coordinate system will be centered, rescaled on each component with respect to its variance, and finally rotated. The obtained data from this transformation is isotropic and can now be projected on its *principal components*.
Thus, looking at the projection made using PCA (i.e. the middle figure), we see that there is no change regarding the scaling; indeed, the data being two concentric circles centered in zero, the original data is already isotropic. However, we can see that the data have been rotated. As a conclusion, we see that such a projection would not help if we defined a linear classifier to distinguish samples from both classes.
Using a kernel allows us to make a non-linear projection. Here, by using an RBF kernel, we expect that the projection will unfold the dataset while approximately preserving the relative distances of pairs of data points that are close to one another in the original space.
We observe such behaviour in the figure on the right: the samples of a given class are closer to each other than the samples from the opposite class, untangling both sample sets. Now, we can use a linear classifier to separate the samples from the two classes.
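As a minimal sketch (not part of the original example) of that last claim, one could fit any linear classifier on the projected training data and compare it with the same model on the raw features; `LogisticRegression` is used here purely for illustration, reusing the variables defined above.
```
from sklearn.linear_model import LogisticRegression

# Linear model on the KernelPCA projection vs. on the raw, non-separable features.
clf_projected = LogisticRegression().fit(kernel_pca.transform(X_train), y_train)
clf_raw = LogisticRegression().fit(X_train, y_train)
print("accuracy on KernelPCA projection:", clf_projected.score(kernel_pca.transform(X_test), y_test))
print("accuracy on raw features:", clf_raw.score(X_test, y_test))
```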
Projecting into the original feature space
------------------------------------------
One particularity to have in mind when using [`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") is related to the reconstruction (i.e. the back projection in the original feature space). With [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), the reconstruction will be exact if `n_components` is the same as the number of original features. This is the case in this example.
We can investigate if we get the original dataset when back projecting with [`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA").
```
X_reconstructed_pca = pca.inverse_transform(pca.transform(X_test))
X_reconstructed_kernel_pca = kernel_pca.inverse_transform(kernel_pca.transform(X_test))
```
```
fig, (orig_data_ax, pca_back_proj_ax, kernel_pca_back_proj_ax) = plt.subplots(
ncols=3, sharex=True, sharey=True, figsize=(13, 4)
)
orig_data_ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test)
orig_data_ax.set_ylabel("Feature #1")
orig_data_ax.set_xlabel("Feature #0")
orig_data_ax.set_title("Original test data")
pca_back_proj_ax.scatter(X_reconstructed_pca[:, 0], X_reconstructed_pca[:, 1], c=y_test)
pca_back_proj_ax.set_xlabel("Feature #0")
pca_back_proj_ax.set_title("Reconstruction via PCA")
kernel_pca_back_proj_ax.scatter(
X_reconstructed_kernel_pca[:, 0], X_reconstructed_kernel_pca[:, 1], c=y_test
)
kernel_pca_back_proj_ax.set_xlabel("Feature #0")
_ = kernel_pca_back_proj_ax.set_title("Reconstruction via KernelPCA")
```
While we see a perfect reconstruction with [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") we observe a different result for [`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA").
Indeed, [`inverse_transform`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.inverse_transform "sklearn.decomposition.KernelPCA.inverse_transform") cannot rely on an analytical back-projection and thus cannot provide an exact reconstruction. Instead, a [`KernelRidge`](../../modules/generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") is internally trained to learn a mapping from the kernelized PCA basis to the original feature space. This method therefore comes with an approximation, introducing small differences when back projecting in the original feature space.
To improve the reconstruction using [`inverse_transform`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.inverse_transform "sklearn.decomposition.KernelPCA.inverse_transform"), one can tune `alpha` in [`KernelPCA`](../../modules/generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA"), the regularization term which controls the reliance on the training data during the training of the mapping.
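As a rough sketch (not part of the original example) of such tuning, one could compare the reconstruction error for a few candidate values of `alpha`, reusing `X_train` and `X_test` from above; the candidate values are arbitrary.
```
import numpy as np
from sklearn.decomposition import KernelPCA

for alpha in (1e-3, 1e-1, 1e1):
    kpca = KernelPCA(
        n_components=None, kernel="rbf", gamma=10, fit_inverse_transform=True, alpha=alpha
    ).fit(X_train)
    X_back = kpca.inverse_transform(kpca.transform(X_test))
    error = np.mean(np.sum((X_test - X_back) ** 2, axis=1))
    print(f"alpha={alpha:g}: mean squared reconstruction error = {error:.4f}")
```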
**Total running time of the script:** ( 0 minutes 0.521 seconds)
[`Download Python source code: plot_kernel_pca.py`](https://scikit-learn.org/1.1/_downloads/023324c27491610e7c0ccff87c59abf9/plot_kernel_pca.py)
[`Download Jupyter notebook: plot_kernel_pca.ipynb`](https://scikit-learn.org/1.1/_downloads/c0a901203201090b01ac6d929a31ce08/plot_kernel_pca.ipynb)
scikit_learn Principal components analysis (PCA) Note
Click [here](#sphx-glr-download-auto-examples-decomposition-plot-pca-3d-py) to download the full example code or to run this example in your browser via Binder
Principal components analysis (PCA)
===================================
These figures aid in illustrating how a point cloud can be very flat in one direction, which is where PCA comes in to choose a direction that is not flat.
```
# Authors: Gael Varoquaux
# Jaques Grobler
# Kevin Hughes
# License: BSD 3 clause
```
Create the data
---------------
```
import numpy as np
from scipy import stats
e = np.exp(1)
np.random.seed(4)
def pdf(x):
return 0.5 * (stats.norm(scale=0.25 / e).pdf(x) + stats.norm(scale=4 / e).pdf(x))
y = np.random.normal(scale=0.5, size=(30000))
x = np.random.normal(scale=0.5, size=(30000))
z = np.random.normal(scale=0.1, size=len(x))
density = pdf(x) * pdf(y)
pdf_z = pdf(5 * z)
density *= pdf_z
a = x + y
b = 2 * y
c = a - b + z
norm = np.sqrt(a.var() + b.var())
a /= norm
b /= norm
```
Plot the figures
----------------
```
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
# unused but required import for doing 3d projections with matplotlib < 3.2
import mpl_toolkits.mplot3d # noqa: F401
def plot_figs(fig_num, elev, azim):
fig = plt.figure(fig_num, figsize=(4, 3))
plt.clf()
ax = fig.add_subplot(111, projection="3d", elev=elev, azim=azim)
ax.set_position([0, 0, 0.95, 1])
ax.scatter(a[::10], b[::10], c[::10], c=density[::10], marker="+", alpha=0.4)
Y = np.c_[a, b, c]
# Using SciPy's SVD, this would be:
# _, pca_score, Vt = scipy.linalg.svd(Y, full_matrices=False)
pca = PCA(n_components=3)
pca.fit(Y)
V = pca.components_.T
x_pca_axis, y_pca_axis, z_pca_axis = 3 * V
x_pca_plane = np.r_[x_pca_axis[:2], -x_pca_axis[1::-1]]
y_pca_plane = np.r_[y_pca_axis[:2], -y_pca_axis[1::-1]]
z_pca_plane = np.r_[z_pca_axis[:2], -z_pca_axis[1::-1]]
x_pca_plane.shape = (2, 2)
y_pca_plane.shape = (2, 2)
z_pca_plane.shape = (2, 2)
ax.plot_surface(x_pca_plane, y_pca_plane, z_pca_plane)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
elev = -40
azim = -80
plot_figs(1, elev, azim)
elev = 30
azim = 20
plot_figs(2, elev, azim)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.150 seconds)
[`Download Python source code: plot_pca_3d.py`](https://scikit-learn.org/1.1/_downloads/6304a55e6fa4d75c8e8d11b4ea9a8679/plot_pca_3d.py)
[`Download Jupyter notebook: plot_pca_3d.ipynb`](https://scikit-learn.org/1.1/_downloads/720e4861bf00cd09b55ae64187ea58be/plot_pca_3d.ipynb)
scikit_learn Factor Analysis (with rotation) to visualize patterns Note
Factor Analysis (with rotation) to visualize patterns
=====================================================
Investigating the Iris dataset, we see that sepal length, petal length and petal width are highly correlated, while sepal width is less redundant. Matrix decomposition techniques can uncover these latent patterns. Applying rotations to the resulting components does not inherently improve the predictive value of the derived latent space, but it can help visualize their structure. Here, for example, the varimax rotation, which is found by maximizing the variance of the squared loadings, finds a structure where the second component only loads positively on sepal width (a numerical check of this criterion follows the output below).
```
# Authors: Jona Sassenhagen
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris
```
Load Iris data
```
data = load_iris()
X = StandardScaler().fit_transform(data["data"])
feature_names = data["feature_names"]
```
Plot covariance of Iris features
```
ax = plt.axes()
im = ax.imshow(np.corrcoef(X.T), cmap="RdBu_r", vmin=-1, vmax=1)
ax.set_xticks([0, 1, 2, 3])
ax.set_xticklabels(list(feature_names), rotation=90)
ax.set_yticks([0, 1, 2, 3])
ax.set_yticklabels(list(feature_names))
plt.colorbar(im).ax.set_ylabel("$r$", rotation=0)
ax.set_title("Iris feature correlation matrix")
plt.tight_layout()
```
Run factor analysis with Varimax rotation
```
n_comps = 2
methods = [
("PCA", PCA()),
("Unrotated FA", FactorAnalysis()),
("Varimax FA", FactorAnalysis(rotation="varimax")),
]
fig, axes = plt.subplots(ncols=len(methods), figsize=(10, 8))
for ax, (method, fa) in zip(axes, methods):
fa.set_params(n_components=n_comps)
fa.fit(X)
components = fa.components_.T
print("\n\n %s :\n" % method)
print(components)
vmax = np.abs(components).max()
ax.imshow(components, cmap="RdBu_r", vmax=vmax, vmin=-vmax)
ax.set_yticks(np.arange(len(feature_names)))
if ax.is_first_col():
ax.set_yticklabels(feature_names)
else:
ax.set_yticklabels([])
ax.set_title(str(method))
ax.set_xticks([0, 1])
ax.set_xticklabels(["Comp. 1", "Comp. 2"])
fig.suptitle("Factors")
plt.tight_layout()
plt.show()
```
```
PCA :
[[ 0.52106591 0.37741762]
[-0.26934744 0.92329566]
[ 0.5804131 0.02449161]
[ 0.56485654 0.06694199]]
/home/runner/work/scikit-learn/scikit-learn/examples/decomposition/plot_varimax_fa.py:72: MatplotlibDeprecationWarning:
The is_first_col function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use ax.get_subplotspec().is_first_col() instead.
if ax.is_first_col():
Unrotated FA :
[[ 0.88096009 -0.4472869 ]
[-0.41691605 -0.55390036]
[ 0.99918858 0.01915283]
[ 0.96228895 0.05840206]]
/home/runner/work/scikit-learn/scikit-learn/examples/decomposition/plot_varimax_fa.py:72: MatplotlibDeprecationWarning:
The is_first_col function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use ax.get_subplotspec().is_first_col() instead.
if ax.is_first_col():
Varimax FA :
[[ 0.98633022 -0.05752333]
[-0.16052385 -0.67443065]
[ 0.90809432 0.41726413]
[ 0.85857475 0.43847489]]
/home/runner/work/scikit-learn/scikit-learn/examples/decomposition/plot_varimax_fa.py:72: MatplotlibDeprecationWarning:
The is_first_col function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use ax.get_subplotspec().is_first_col() instead.
if ax.is_first_col():
```
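As a side note (not part of the original example), the varimax criterion mentioned in the introduction, i.e. the variance of the squared loadings summed over the factors, can be recomputed directly from freshly fitted estimators; it should come out larger for the rotated solution.
```
# Quick check (not part of the original example): the varimax rotation
# increases the criterion it maximizes (sum over factors of the variance
# of the squared loadings).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_iris().data)

def varimax_criterion(loadings):
    # loadings has shape (n_features, n_components)
    return np.var(loadings**2, axis=0).sum()

for rotation in (None, "varimax"):
    fa = FactorAnalysis(n_components=2, rotation=rotation).fit(X)
    print(rotation, round(varimax_criterion(fa.components_.T), 3))
```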
**Total running time of the script:** ( 0 minutes 0.301 seconds)
[`Download Python source code: plot_varimax_fa.py`](https://scikit-learn.org/1.1/_downloads/e953392bd366ce4c91e2993b65d310dd/plot_varimax_fa.py)
[`Download Jupyter notebook: plot_varimax_fa.ipynb`](https://scikit-learn.org/1.1/_downloads/7716523ca12b85a020d7a525dff641cc/plot_varimax_fa.ipynb)
scikit_learn Beta-divergence loss functions Note
Beta-divergence loss functions
==============================
A plot that compares the various Beta-divergence loss functions supported by the Multiplicative-Update (‘mu’) solver in [`NMF`](../../modules/generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF").
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition._nmf import _beta_divergence
x = np.linspace(0.001, 4, 1000)
y = np.zeros(x.shape)
colors = "mbgyr"
for j, beta in enumerate((0.0, 0.5, 1.0, 1.5, 2.0)):
for i, xi in enumerate(x):
y[i] = _beta_divergence(1, xi, 1, beta)
name = "beta = %1.1f" % beta
plt.plot(x, y, label=name, color=colors[j])
plt.xlabel("x")
plt.title("beta-divergence(1, x)")
plt.legend(loc=0)
plt.axis([0, 4, 0, 3])
plt.show()
```
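Note that `_beta_divergence` is a private helper used here only to draw the curves. In practice the loss is selected through the public `beta_loss` parameter of [`NMF`](../../modules/generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") together with the 'mu' solver. The short sketch below is not part of the original example and runs on random non-negative data.
```
# Sketch (not part of the original example): choosing a beta-divergence
# through the public NMF API. The 'mu' solver supports all beta-divergences;
# the 'cd' solver only supports the Frobenius norm.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = rng.rand(20, 10)  # NMF requires non-negative (here strictly positive) data

for beta_loss in ("frobenius", "kullback-leibler", "itakura-saito"):
    nmf = NMF(
        n_components=3,
        solver="mu",
        beta_loss=beta_loss,
        init="random",  # zeros from an 'nndsvd' init cannot be updated by 'mu'
        max_iter=1000,
        random_state=0,
    )
    W = nmf.fit_transform(X)
    # reconstruction_err_ values are not comparable across different losses
    print(f"{beta_loss:>16}: reconstruction_err_ = {nmf.reconstruction_err_:.4f}")
```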
**Total running time of the script:** ( 0 minutes 0.199 seconds)
[`Download Python source code: plot_beta_divergence.py`](https://scikit-learn.org/1.1/_downloads/6d09465eed1ee4ede505244049097627/plot_beta_divergence.py)
[`Download Jupyter notebook: plot_beta_divergence.ipynb`](https://scikit-learn.org/1.1/_downloads/21fba8cadc21699d2b4699b4ccdad10f/plot_beta_divergence.ipynb)
scikit_learn Model selection with Probabilistic PCA and Factor Analysis (FA) Note
Model selection with Probabilistic PCA and Factor Analysis (FA)
===============================================================
Probabilistic PCA and Factor Analysis are probabilistic models. The consequence is that the likelihood of new data can be used for model selection and covariance estimation. Here we compare PCA and FA with cross-validation on low rank data corrupted with homoscedastic noise (noise variance is the same for each feature) or heteroscedastic noise (noise variance is different for each feature). In a second step we compare the model likelihood to the likelihoods obtained from shrinkage covariance estimators.
One can observe that with homoscedastic noise both FA and PCA succeed in recovering the size of the low rank subspace. The likelihood with PCA is higher than with FA in this case. However, PCA fails and overestimates the rank when heteroscedastic noise is present. Under appropriate circumstances (choice of the number of components), the held-out data is more likely for low rank models than for shrinkage models.
The automatic estimation from "Automatic Choice of Dimensionality for PCA" (Thomas P. Minka, NIPS 2000: 598-604) is also compared.
```
# Authors: Alexandre Gramfort
# Denis A. Engemann
# License: BSD 3 clause
```
Create the data
---------------
```
import numpy as np
from scipy import linalg
n_samples, n_features, rank = 500, 25, 5
sigma = 1.0
rng = np.random.RandomState(42)
U, _, _ = linalg.svd(rng.randn(n_features, n_features))
X = np.dot(rng.randn(n_samples, rank), U[:, :rank].T)
# Adding homoscedastic noise
X_homo = X + sigma * rng.randn(n_samples, n_features)
# Adding heteroscedastic noise
sigmas = sigma * rng.rand(n_features) + sigma / 2.0
X_hetero = X + rng.randn(n_samples, n_features) * sigmas
```
Fit the models
--------------
```
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.covariance import ShrunkCovariance, LedoitWolf
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
n_components = np.arange(0, n_features, 5) # options for n_components
def compute_scores(X):
pca = PCA(svd_solver="full")
fa = FactorAnalysis()
pca_scores, fa_scores = [], []
for n in n_components:
pca.n_components = n
fa.n_components = n
pca_scores.append(np.mean(cross_val_score(pca, X)))
fa_scores.append(np.mean(cross_val_score(fa, X)))
return pca_scores, fa_scores
def shrunk_cov_score(X):
shrinkages = np.logspace(-2, 0, 30)
cv = GridSearchCV(ShrunkCovariance(), {"shrinkage": shrinkages})
return np.mean(cross_val_score(cv.fit(X).best_estimator_, X))
def lw_score(X):
return np.mean(cross_val_score(LedoitWolf(), X))
for X, title in [(X_homo, "Homoscedastic Noise"), (X_hetero, "Heteroscedastic Noise")]:
pca_scores, fa_scores = compute_scores(X)
n_components_pca = n_components[np.argmax(pca_scores)]
n_components_fa = n_components[np.argmax(fa_scores)]
pca = PCA(svd_solver="full", n_components="mle")
pca.fit(X)
n_components_pca_mle = pca.n_components_
print("best n_components by PCA CV = %d" % n_components_pca)
print("best n_components by FactorAnalysis CV = %d" % n_components_fa)
print("best n_components by PCA MLE = %d" % n_components_pca_mle)
plt.figure()
plt.plot(n_components, pca_scores, "b", label="PCA scores")
plt.plot(n_components, fa_scores, "r", label="FA scores")
plt.axvline(rank, color="g", label="TRUTH: %d" % rank, linestyle="-")
plt.axvline(
n_components_pca,
color="b",
label="PCA CV: %d" % n_components_pca,
linestyle="--",
)
plt.axvline(
n_components_fa,
color="r",
label="FactorAnalysis CV: %d" % n_components_fa,
linestyle="--",
)
plt.axvline(
n_components_pca_mle,
color="k",
label="PCA MLE: %d" % n_components_pca_mle,
linestyle="--",
)
# compare with other covariance estimators
plt.axhline(
shrunk_cov_score(X),
color="violet",
label="Shrunk Covariance MLE",
linestyle="-.",
)
plt.axhline(
lw_score(X),
color="orange",
label="LedoitWolf MLE" % n_components_pca_mle,
linestyle="-.",
)
plt.xlabel("nb of components")
plt.ylabel("CV scores")
plt.legend(loc="lower right")
plt.title(title)
plt.show()
```
```
best n_components by PCA CV = 5
best n_components by FactorAnalysis CV = 5
best n_components by PCA MLE = 5
best n_components by PCA CV = 20
best n_components by FactorAnalysis CV = 5
best n_components by PCA MLE = 18
```
**Total running time of the script:** ( 0 minutes 2.495 seconds)
[`Download Python source code: plot_pca_vs_fa_model_selection.py`](https://scikit-learn.org/1.1/_downloads/79ed9713970355da938b86bf77fcefa5/plot_pca_vs_fa_model_selection.py)
[`Download Jupyter notebook: plot_pca_vs_fa_model_selection.ipynb`](https://scikit-learn.org/1.1/_downloads/0c988b0c2bea0040fec13fe1055db95c/plot_pca_vs_fa_model_selection.ipynb)
scikit_learn PCA example with Iris Data-set Note
PCA example with Iris Data-set
==============================
Principal Component Analysis applied to the Iris dataset.
See [here](https://en.wikipedia.org/wiki/Iris_flower_data_set) for more information on this dataset.
```
# Code source: Gaël Varoquaux
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import decomposition
from sklearn import datasets
np.random.seed(5)
iris = datasets.load_iris()
X = iris.data
y = iris.target
fig = plt.figure(1, figsize=(4, 3))
plt.clf()
ax = fig.add_subplot(111, projection="3d", elev=48, azim=134)
ax.set_position([0, 0, 0.95, 1])
plt.cla()
pca = decomposition.PCA(n_components=3)
pca.fit(X)
X = pca.transform(X)
for name, label in [("Setosa", 0), ("Versicolour", 1), ("Virginica", 2)]:
ax.text3D(
X[y == label, 0].mean(),
X[y == label, 1].mean() + 1.5,
X[y == label, 2].mean(),
name,
horizontalalignment="center",
bbox=dict(alpha=0.5, edgecolor="w", facecolor="w"),
)
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(float)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap=plt.cm.nipy_spectral, edgecolor="k")
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.074 seconds)
[`Download Python source code: plot_pca_iris.py`](https://scikit-learn.org/1.1/_downloads/1168f82083b3e70f31672e7c33738f8d/plot_pca_iris.py)
[`Download Jupyter notebook: plot_pca_iris.ipynb`](https://scikit-learn.org/1.1/_downloads/46b6a23d83637bf0f381ce9d8c528aa2/plot_pca_iris.ipynb)
scikit_learn Comparison of LDA and PCA 2D projection of Iris dataset Note
Comparison of LDA and PCA 2D projection of Iris dataset
=======================================================
The Iris dataset represents 3 kinds of Iris flowers (Setosa, Versicolour and Virginica) with 4 attributes: sepal length, sepal width, petal length and petal width.
Principal Component Analysis (PCA) applied to this data identifies the combination of attributes (principal components, or directions in the feature space) that account for the most variance in the data. Here we plot the different samples on the first 2 principal components.
Linear Discriminant Analysis (LDA) tries to identify attributes that account for the most variance *between classes*. In particular, LDA, in contrast to PCA, is a supervised method, using known class labels.
```
explained variance ratio (first two components): [0.92461872 0.05306648]
```
```
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
iris = datasets.load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
# Percentage of variance explained for each components
print(
"explained variance ratio (first two components): %s"
% str(pca.explained_variance_ratio_)
)
plt.figure()
colors = ["navy", "turquoise", "darkorange"]
lw = 2
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
plt.scatter(
X_r[y == i, 0], X_r[y == i, 1], color=color, alpha=0.8, lw=lw, label=target_name
)
plt.legend(loc="best", shadow=False, scatterpoints=1)
plt.title("PCA of IRIS dataset")
plt.figure()
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
plt.scatter(
X_r2[y == i, 0], X_r2[y == i, 1], alpha=0.8, color=color, label=target_name
)
plt.legend(loc="best", shadow=False, scatterpoints=1)
plt.title("LDA of IRIS dataset")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.172 seconds)
[`Download Python source code: plot_pca_vs_lda.py`](https://scikit-learn.org/1.1/_downloads/1c72371dcc2add3178503d130a25cd74/plot_pca_vs_lda.py)
[`Download Jupyter notebook: plot_pca_vs_lda.ipynb`](https://scikit-learn.org/1.1/_downloads/24a2d942938af474165a1cab2179d64d/plot_pca_vs_lda.ipynb)
scikit_learn Principal Component Regression vs Partial Least Squares Regression Note
Principal Component Regression vs Partial Least Squares Regression
==================================================================
This example compares [Principal Component Regression](https://en.wikipedia.org/wiki/Principal_component_regression) (PCR) and [Partial Least Squares Regression](https://en.wikipedia.org/wiki/Partial_least_squares_regression) (PLS) on a toy dataset. Our goal is to illustrate how PLS can outperform PCR when the target is strongly correlated with some directions in the data that have a low variance.
PCR is a regressor composed of two steps: first, [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") is applied to the training data, possibly performing dimensionality reduction; then, a regressor (e.g. a linear regressor) is trained on the transformed samples. In [`PCA`](../../modules/generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), the transformation is purely unsupervised, meaning that no information about the targets is used. As a result, PCR may perform poorly in some datasets where the target is strongly correlated with *directions* that have low variance. Indeed, the dimensionality reduction of PCA projects the data into a lower dimensional space where the variance of the projected data is greedily maximized along each axis. Even though the directions with lower variance may have the most predictive power on the target, they will be dropped, and the final regressor will not be able to leverage them.
PLS is both a transformer and a regressor, and it is quite similar to PCR: it also applies a dimensionality reduction to the samples before applying a linear regressor to the transformed data. The main difference with PCR is that the PLS transformation is supervised. Therefore, as we will see in this example, it does not suffer from the issue we just mentioned.
The data
--------
We start by creating a simple dataset with two features. Before we even dive into PCR and PLS, we fit a PCA estimator to display the two principal components of this dataset, i.e. the two directions that explain the most variance in the data.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
rng = np.random.RandomState(0)
n_samples = 500
cov = [[3, 3], [3, 4]]
X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=n_samples)
pca = PCA(n_components=2).fit(X)
plt.scatter(X[:, 0], X[:, 1], alpha=0.3, label="samples")
for i, (comp, var) in enumerate(zip(pca.components_, pca.explained_variance_)):
comp = comp * var # scale component by its variance explanation power
plt.plot(
[0, comp[0]],
[0, comp[1]],
label=f"Component {i}",
linewidth=5,
color=f"C{i + 2}",
)
plt.gca().set(
aspect="equal",
title="2-dimensional dataset with principal components",
xlabel="first feature",
ylabel="second feature",
)
plt.legend()
plt.show()
```
For the purpose of this example, we now define the target `y` such that it is strongly correlated with a direction that has a small variance. To this end, we will project `X` onto the second component, and add some noise to it.
```
y = X.dot(pca.components_[1]) + rng.normal(size=n_samples) / 2
fig, axes = plt.subplots(1, 2, figsize=(10, 3))
axes[0].scatter(X.dot(pca.components_[0]), y, alpha=0.3)
axes[0].set(xlabel="Projected data onto first PCA component", ylabel="y")
axes[1].scatter(X.dot(pca.components_[1]), y, alpha=0.3)
axes[1].set(xlabel="Projected data onto second PCA component", ylabel="y")
plt.tight_layout()
plt.show()
```
Projection on one component and predictive power
------------------------------------------------
We now create two regressors, PCR and PLS, and for illustration purposes we set the number of components to 1. Before feeding the data to the PCA step of PCR, we first standardize it, as recommended by good practice. The PLS estimator has built-in scaling capabilities.
For both models, we plot the projected data onto the first component against the target. In both cases, this projected data is what the regressors will use as training data.
```
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=rng)
pcr = make_pipeline(StandardScaler(), PCA(n_components=1), LinearRegression())
pcr.fit(X_train, y_train)
pca = pcr.named_steps["pca"] # retrieve the PCA step of the pipeline
pls = PLSRegression(n_components=1)
pls.fit(X_train, y_train)
fig, axes = plt.subplots(1, 2, figsize=(10, 3))
axes[0].scatter(pca.transform(X_test), y_test, alpha=0.3, label="ground truth")
axes[0].scatter(
pca.transform(X_test), pcr.predict(X_test), alpha=0.3, label="predictions"
)
axes[0].set(
xlabel="Projected data onto first PCA component", ylabel="y", title="PCR / PCA"
)
axes[0].legend()
axes[1].scatter(pls.transform(X_test), y_test, alpha=0.3, label="ground truth")
axes[1].scatter(
pls.transform(X_test), pls.predict(X_test), alpha=0.3, label="predictions"
)
axes[1].set(xlabel="Projected data onto first PLS component", ylabel="y", title="PLS")
axes[1].legend()
plt.tight_layout()
plt.show()
```
As expected, the unsupervised PCA transformation of PCR has dropped the second component, i.e. the direction with the lowest variance, despite it being the most predictive direction. This is because PCA is a completely unsupervised transformation, and results in the projected data having a low predictive power on the target.
On the other hand, the PLS regressor manages to capture the effect of the direction with the lowest variance, thanks to its use of target information during the transformation: it can recognize that this direction is actually the most predictive. We note that the first PLS component is negatively correlated with the target, which comes from the fact that the signs of eigenvectors are arbitrary.
We also print the R-squared scores of both estimators, which further confirms that PLS is a better alternative than PCR in this case. A negative R-squared indicates that PCR performs worse than a regressor that would simply predict the mean of the target.
```
print(f"PCR r-squared {pcr.score(X_test, y_test):.3f}")
print(f"PLS r-squared {pls.score(X_test, y_test):.3f}")
```
```
PCR r-squared -0.026
PLS r-squared 0.658
```
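To make the baseline mentioned above concrete, the following check (not part of the original example, reusing the train/test split defined earlier) scores a `DummyRegressor` that always predicts the mean of the training target; its R-squared is close to 0 by construction.
```
# Check (not part of the original example): the mean-prediction baseline
# that a negative R-squared is measured against. Assumes X_train, y_train,
# X_test, y_test from the split above.
from sklearn.dummy import DummyRegressor

dummy = DummyRegressor(strategy="mean").fit(X_train, y_train)
print(f"Dummy (mean) r-squared {dummy.score(X_test, y_test):.3f}")
```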
As a final remark, we note that PCR with 2 components performs as well as PLS: this is because, in this case, PCR was able to leverage the second component, which has the most predictive power on the target.
```
pca_2 = make_pipeline(PCA(n_components=2), LinearRegression())
pca_2.fit(X_train, y_train)
print(f"PCR r-squared with 2 components {pca_2.score(X_test, y_test):.3f}")
```
```
PCR r-squared with 2 components 0.673
```
**Total running time of the script:** ( 0 minutes 0.357 seconds)
[`Download Python source code: plot_pcr_vs_pls.py`](https://scikit-learn.org/1.1/_downloads/767218d296758dab2ab1c6e1ab1a07bd/plot_pcr_vs_pls.py)
[`Download Jupyter notebook: plot_pcr_vs_pls.ipynb`](https://scikit-learn.org/1.1/_downloads/0486bf9e537e44cedd2a236d034bcd90/plot_pcr_vs_pls.ipynb)
scikit_learn Compare cross decomposition methods Note
Compare cross decomposition methods
===================================
Simple usage of various cross decomposition algorithms:
* PLSCanonical
* PLSRegression, with multivariate response, a.k.a. PLS2
* PLSRegression, with univariate response, a.k.a. PLS1
* CCA
Given 2 multivariate covarying two-dimensional datasets, X and Y, PLS extracts the ‘directions of covariance’, i.e. the components of each dataset that explain the most shared variance between both datasets. This is apparent on the **scatterplot matrix** display: components 1 in dataset X and dataset Y are maximally correlated (points lie around the first diagonal). This is also true for components 2 in both datasets; however, the correlation across datasets for different components is weak: the point cloud is very spherical.
Dataset based latent variables model
------------------------------------
```
import numpy as np
n = 500
# 2 latent vars:
l1 = np.random.normal(size=n)
l2 = np.random.normal(size=n)
latents = np.array([l1, l1, l2, l2]).T
X = latents + np.random.normal(size=4 * n).reshape((n, 4))
Y = latents + np.random.normal(size=4 * n).reshape((n, 4))
X_train = X[: n // 2]
Y_train = Y[: n // 2]
X_test = X[n // 2 :]
Y_test = Y[n // 2 :]
print("Corr(X)")
print(np.round(np.corrcoef(X.T), 2))
print("Corr(Y)")
print(np.round(np.corrcoef(Y.T), 2))
```
```
Corr(X)
[[ 1. 0.44 -0.06 -0.01]
[ 0.44 1. -0.01 -0.06]
[-0.06 -0.01 1. 0.5 ]
[-0.01 -0.06 0.5 1. ]]
Corr(Y)
[[ 1. 0.47 -0.05 0.02]
[ 0.47 1. -0.01 0.03]
[-0.05 -0.01 1. 0.47]
[ 0.02 0.03 0.47 1. ]]
```
Canonical (symmetric) PLS
-------------------------
### Transform data
```
from sklearn.cross_decomposition import PLSCanonical
plsca = PLSCanonical(n_components=2)
plsca.fit(X_train, Y_train)
X_train_r, Y_train_r = plsca.transform(X_train, Y_train)
X_test_r, Y_test_r = plsca.transform(X_test, Y_test)
```
### Scatter plot of scores
```
import matplotlib.pyplot as plt
# On diagonal plot X vs Y scores on each components
plt.figure(figsize=(12, 8))
plt.subplot(221)
plt.scatter(X_train_r[:, 0], Y_train_r[:, 0], label="train", marker="o", s=25)
plt.scatter(X_test_r[:, 0], Y_test_r[:, 0], label="test", marker="o", s=25)
plt.xlabel("x scores")
plt.ylabel("y scores")
plt.title(
"Comp. 1: X vs Y (test corr = %.2f)"
% np.corrcoef(X_test_r[:, 0], Y_test_r[:, 0])[0, 1]
)
plt.xticks(())
plt.yticks(())
plt.legend(loc="best")
plt.subplot(224)
plt.scatter(X_train_r[:, 1], Y_train_r[:, 1], label="train", marker="o", s=25)
plt.scatter(X_test_r[:, 1], Y_test_r[:, 1], label="test", marker="o", s=25)
plt.xlabel("x scores")
plt.ylabel("y scores")
plt.title(
"Comp. 2: X vs Y (test corr = %.2f)"
% np.corrcoef(X_test_r[:, 1], Y_test_r[:, 1])[0, 1]
)
plt.xticks(())
plt.yticks(())
plt.legend(loc="best")
# Off diagonal plot components 1 vs 2 for X and Y
plt.subplot(222)
plt.scatter(X_train_r[:, 0], X_train_r[:, 1], label="train", marker="*", s=50)
plt.scatter(X_test_r[:, 0], X_test_r[:, 1], label="test", marker="*", s=50)
plt.xlabel("X comp. 1")
plt.ylabel("X comp. 2")
plt.title(
"X comp. 1 vs X comp. 2 (test corr = %.2f)"
% np.corrcoef(X_test_r[:, 0], X_test_r[:, 1])[0, 1]
)
plt.legend(loc="best")
plt.xticks(())
plt.yticks(())
plt.subplot(223)
plt.scatter(Y_train_r[:, 0], Y_train_r[:, 1], label="train", marker="*", s=50)
plt.scatter(Y_test_r[:, 0], Y_test_r[:, 1], label="test", marker="*", s=50)
plt.xlabel("Y comp. 1")
plt.ylabel("Y comp. 2")
plt.title(
"Y comp. 1 vs Y comp. 2 , (test corr = %.2f)"
% np.corrcoef(Y_test_r[:, 0], Y_test_r[:, 1])[0, 1]
)
plt.legend(loc="best")
plt.xticks(())
plt.yticks(())
plt.show()
```
PLS regression, with multivariate response, a.k.a. PLS2
-------------------------------------------------------
```
from sklearn.cross_decomposition import PLSRegression
n = 1000
q = 3
p = 10
X = np.random.normal(size=n * p).reshape((n, p))
B = np.array([[1, 2] + [0] * (p - 2)] * q).T
# each Yj = 1*X1 + 2*X2 + noise
Y = np.dot(X, B) + np.random.normal(size=n * q).reshape((n, q)) + 5
pls2 = PLSRegression(n_components=3)
pls2.fit(X, Y)
print("True B (such that: Y = XB + Err)")
print(B)
# compare pls2.coef_ with B
print("Estimated B")
print(np.round(pls2.coef_, 1))
pls2.predict(X)
```
```
True B (such that: Y = XB + Err)
[[1 1 1]
[2 2 2]
[0 0 0]
[0 0 0]
[0 0 0]
[0 0 0]
[0 0 0]
[0 0 0]
[0 0 0]
[0 0 0]]
Estimated B
/home/runner/work/scikit-learn/scikit-learn/sklearn/cross_decomposition/_pls.py:507: FutureWarning: The attribute `coef_` will be transposed in version 1.3 to be consistent with other linear models in scikit-learn. Currently, `coef_` has a shape of (n_features, n_targets) and in the future it will have a shape of (n_targets, n_features).
warnings.warn(
[[ 1. 1. 1. ]
[ 2. 1.9 1.9]
[-0.1 -0. 0. ]
[ 0. 0. -0. ]
[-0. -0. 0. ]
[ 0. 0. 0. ]
[ 0. 0. 0. ]
[ 0. -0. -0. ]
[ 0. 0. 0. ]
[ 0. 0. 0.1]]
array([[ 3.50210309, 3.55301008, 3.72528805],
[10.03429511, 9.83576671, 9.74902647],
[ 8.03916339, 7.84652988, 7.78629756],
...,
[ 2.11231897, 2.1905275 , 2.33508757],
[ 5.35433161, 5.32686504, 5.39877158],
[ 5.47827435, 5.38004088, 5.35574845]])
```
PLS regression, with univariate response, a.k.a. PLS1
-----------------------------------------------------
```
n = 1000
p = 10
X = np.random.normal(size=n * p).reshape((n, p))
y = X[:, 0] + 2 * X[:, 1] + np.random.normal(size=n * 1) + 5
pls1 = PLSRegression(n_components=3)
pls1.fit(X, y)
# note that the number of components exceeds 1 (the dimension of y)
print("Estimated betas")
print(np.round(pls1.coef_, 1))
```
```
Estimated betas
/home/runner/work/scikit-learn/scikit-learn/sklearn/cross_decomposition/_pls.py:507: FutureWarning: The attribute `coef_` will be transposed in version 1.3 to be consistent with other linear models in scikit-learn. Currently, `coef_` has a shape of (n_features, n_targets) and in the future it will have a shape of (n_targets, n_features).
warnings.warn(
[[ 1.]
[ 2.]
[-0.]
[ 0.]
[-0.]
[ 0.]
[-0.]
[ 0.]
[-0.]
[ 0.]]
```
CCA (PLS mode B with symmetric deflation)
-----------------------------------------
```
from sklearn.cross_decomposition import CCA
cca = CCA(n_components=2)
cca.fit(X_train, Y_train)
X_train_r, Y_train_r = cca.transform(X_train, Y_train)
X_test_r, Y_test_r = cca.transform(X_test, Y_test)
```
**Total running time of the script:** ( 0 minutes 0.201 seconds)
[`Download Python source code: plot_compare_cross_decomposition.py`](https://scikit-learn.org/1.1/_downloads/4a508ac905c7935181ce08bd00a8cda9/plot_compare_cross_decomposition.py)
[`Download Jupyter notebook: plot_compare_cross_decomposition.ipynb`](https://scikit-learn.org/1.1/_downloads/0837676cf643e44f0684e848d0967551/plot_compare_cross_decomposition.ipynb)
scikit_learn Kernel Density Estimate of Species Distributions Note
Kernel Density Estimate of Species Distributions
================================================
This shows an example of a neighbors-based query (in particular a kernel density estimate) on geospatial data, using a Ball Tree built upon the Haversine distance metric, i.e. distances over points in latitude/longitude. The dataset is provided by Phillips et al. (2006). If available, the example uses [basemap](https://matplotlib.org/basemap/) to plot the coast lines and national boundaries of South America.
This example does not perform any learning over the data (see [Species distribution modeling](../applications/plot_species_distribution_modeling#sphx-glr-auto-examples-applications-plot-species-distribution-modeling-py) for an example of classification based on the attributes in this dataset). It simply shows the kernel density estimate of observed data points in geospatial coordinates.
The two species are:
* [“Bradypus variegatus”](https://www.iucnredlist.org/species/3038/47437046) , the Brown-throated Sloth.
* [“Microryzomys minutus”](http://www.iucnredlist.org/details/13408/0) , also known as the Forest Small Rice Rat, a rodent that lives in Peru, Colombia, Ecuador, and Venezuela.
References
----------
* [“Maximum entropy modeling of species geographic distributions”](http://rob.schapire.net/papers/ecolmod.pdf) S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling, 190:231-259, 2006.
```
- computing KDE in spherical coordinates
- plot coastlines from coverage
- computing KDE in spherical coordinates
- plot coastlines from coverage
```
```
# Author: Jake Vanderplas <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_species_distributions
from sklearn.neighbors import KernelDensity
# if basemap is available, we'll use it.
# otherwise, we'll improvise later...
try:
from mpl_toolkits.basemap import Basemap
basemap = True
except ImportError:
basemap = False
def construct_grids(batch):
"""Construct the map grid from the batch object
Parameters
----------
batch : Batch object
The object returned by :func:`fetch_species_distributions`
Returns
-------
(xgrid, ygrid) : 1-D arrays
The grid corresponding to the values in batch.coverages
"""
# x,y coordinates for corner cells
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
ymin = batch.y_left_lower_corner + batch.grid_size
ymax = ymin + (batch.Ny * batch.grid_size)
# x coordinates of the grid cells
xgrid = np.arange(xmin, xmax, batch.grid_size)
# y coordinates of the grid cells
ygrid = np.arange(ymin, ymax, batch.grid_size)
return (xgrid, ygrid)
# Get matrices/arrays of species IDs and locations
data = fetch_species_distributions()
species_names = ["Bradypus Variegatus", "Microryzomys Minutus"]
Xtrain = np.vstack([data["train"]["dd lat"], data["train"]["dd long"]]).T
ytrain = np.array(
[d.decode("ascii").startswith("micro") for d in data["train"]["species"]],
dtype="int",
)
Xtrain *= np.pi / 180.0 # Convert lat/long to radians
# Set up the data grid for the contour plot
xgrid, ygrid = construct_grids(data)
X, Y = np.meshgrid(xgrid[::5], ygrid[::5][::-1])
land_reference = data.coverages[6][::5, ::5]
land_mask = (land_reference > -9999).ravel()
xy = np.vstack([Y.ravel(), X.ravel()]).T
xy = xy[land_mask]
xy *= np.pi / 180.0
# Plot map of South America with distributions of each species
fig = plt.figure()
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05)
for i in range(2):
plt.subplot(1, 2, i + 1)
# construct a kernel density estimate of the distribution
print(" - computing KDE in spherical coordinates")
kde = KernelDensity(
bandwidth=0.04, metric="haversine", kernel="gaussian", algorithm="ball_tree"
)
kde.fit(Xtrain[ytrain == i])
# evaluate only on the land: -9999 indicates ocean
Z = np.full(land_mask.shape[0], -9999, dtype="int")
Z[land_mask] = np.exp(kde.score_samples(xy))
Z = Z.reshape(X.shape)
# plot contours of the density
levels = np.linspace(0, Z.max(), 25)
plt.contourf(X, Y, Z, levels=levels, cmap=plt.cm.Reds)
if basemap:
print(" - plot coastlines using basemap")
m = Basemap(
projection="cyl",
llcrnrlat=Y.min(),
urcrnrlat=Y.max(),
llcrnrlon=X.min(),
urcrnrlon=X.max(),
resolution="c",
)
m.drawcoastlines()
m.drawcountries()
else:
print(" - plot coastlines from coverage")
plt.contour(
X, Y, land_reference, levels=[-9998], colors="k", linestyles="solid"
)
plt.xticks([])
plt.yticks([])
plt.title(species_names[i])
plt.show()
```
**Total running time of the script:** ( 0 minutes 3.381 seconds)
[`Download Python source code: plot_species_kde.py`](https://scikit-learn.org/1.1/_downloads/02a1306a494b46cc56c930ceec6e8c4a/plot_species_kde.py)
[`Download Jupyter notebook: plot_species_kde.ipynb`](https://scikit-learn.org/1.1/_downloads/1c4a422dfa5bd721501d19a2b7e2499b/plot_species_kde.ipynb)
scikit_learn Approximate nearest neighbors in TSNE Note
Approximate nearest neighbors in TSNE
=====================================
This example demonstrates how to chain KNeighborsTransformer and TSNE in a pipeline. It also shows how to wrap the packages `annoy` and `nmslib` to replace KNeighborsTransformer and perform approximate nearest neighbors. These packages can be installed with `pip install annoy nmslib`.
Note: In KNeighborsTransformer we use the definition which includes each training point as its own neighbor in the count of `n_neighbors`, and for compatibility reasons, one extra neighbor is computed when `mode == 'distance'`. Please note that we do the same in the proposed wrappers.
Sample output:
```
Benchmarking on MNIST_2000:
---------------------------
AnnoyTransformer: 0.305 sec
NMSlibTransformer: 0.144 sec
KNeighborsTransformer: 0.090 sec
TSNE with AnnoyTransformer: 2.818 sec
TSNE with NMSlibTransformer: 2.592 sec
TSNE with KNeighborsTransformer: 2.338 sec
TSNE with internal NearestNeighbors: 2.364 sec
Benchmarking on MNIST_10000:
----------------------------
AnnoyTransformer: 2.874 sec
NMSlibTransformer: 1.098 sec
KNeighborsTransformer: 1.264 sec
TSNE with AnnoyTransformer: 16.118 sec
TSNE with NMSlibTransformer: 15.281 sec
TSNE with KNeighborsTransformer: 15.400 sec
TSNE with internal NearestNeighbors: 15.573 sec
```
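The counting convention from the note above can be verified on a small array; the sketch below is not part of the original example and only assumes scikit-learn itself.
```
# Sketch (not part of the original example): with mode="distance" each
# training sample is its own nearest neighbor (at distance 0), so the sparse
# graph stores n_neighbors + 1 entries per row when transforming the
# training data.
import numpy as np
from sklearn.neighbors import KNeighborsTransformer

X = np.random.RandomState(0).randn(20, 3)
Xt = KNeighborsTransformer(n_neighbors=5, mode="distance").fit_transform(X)
print(np.unique(np.diff(Xt.indptr)))  # expected: [6], i.e. n_neighbors + 1
```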
Note that the prediction speed of KNeighborsTransformer was optimized in scikit-learn 1.1; therefore, approximate methods are not necessarily faster, because computing the index takes time and can nullify the gains obtained at prediction time.
```
# Author: Tom Dupre la Tour
#
# License: BSD 3 clause
import time
import sys
try:
import annoy
except ImportError:
print("The package 'annoy' is required to run this example.")
sys.exit()
try:
import nmslib
except ImportError:
print("The package 'nmslib' is required to run this example.")
sys.exit()
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
from scipy.sparse import csr_matrix
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.neighbors import KNeighborsTransformer
from sklearn.utils._testing import assert_array_almost_equal
from sklearn.datasets import fetch_openml
from sklearn.pipeline import make_pipeline
from sklearn.manifold import TSNE
from sklearn.utils import shuffle
class NMSlibTransformer(TransformerMixin, BaseEstimator):
"""Wrapper for using nmslib as sklearn's KNeighborsTransformer"""
def __init__(self, n_neighbors=5, metric="euclidean", method="sw-graph", n_jobs=1):
self.n_neighbors = n_neighbors
self.method = method
self.metric = metric
self.n_jobs = n_jobs
def fit(self, X):
self.n_samples_fit_ = X.shape[0]
# see more metric in the manual
# https://github.com/nmslib/nmslib/tree/master/manual
space = {
"euclidean": "l2",
"cosine": "cosinesimil",
"l1": "l1",
"l2": "l2",
}[self.metric]
self.nmslib_ = nmslib.init(method=self.method, space=space)
self.nmslib_.addDataPointBatch(X)
self.nmslib_.createIndex()
return self
def transform(self, X):
n_samples_transform = X.shape[0]
# For compatibility reasons, as each sample is considered as its own
# neighbor, one extra neighbor will be computed.
n_neighbors = self.n_neighbors + 1
results = self.nmslib_.knnQueryBatch(X, k=n_neighbors, num_threads=self.n_jobs)
indices, distances = zip(*results)
indices, distances = np.vstack(indices), np.vstack(distances)
indptr = np.arange(0, n_samples_transform * n_neighbors + 1, n_neighbors)
kneighbors_graph = csr_matrix(
(distances.ravel(), indices.ravel(), indptr),
shape=(n_samples_transform, self.n_samples_fit_),
)
return kneighbors_graph
class AnnoyTransformer(TransformerMixin, BaseEstimator):
"""Wrapper for using annoy.AnnoyIndex as sklearn's KNeighborsTransformer"""
def __init__(self, n_neighbors=5, metric="euclidean", n_trees=10, search_k=-1):
self.n_neighbors = n_neighbors
self.n_trees = n_trees
self.search_k = search_k
self.metric = metric
def fit(self, X):
self.n_samples_fit_ = X.shape[0]
self.annoy_ = annoy.AnnoyIndex(X.shape[1], metric=self.metric)
for i, x in enumerate(X):
self.annoy_.add_item(i, x.tolist())
self.annoy_.build(self.n_trees)
return self
def transform(self, X):
return self._transform(X)
def fit_transform(self, X, y=None):
return self.fit(X)._transform(X=None)
def _transform(self, X):
"""As `transform`, but handles X is None for faster `fit_transform`."""
n_samples_transform = self.n_samples_fit_ if X is None else X.shape[0]
# For compatibility reasons, as each sample is considered as its own
# neighbor, one extra neighbor will be computed.
n_neighbors = self.n_neighbors + 1
indices = np.empty((n_samples_transform, n_neighbors), dtype=int)
distances = np.empty((n_samples_transform, n_neighbors))
if X is None:
for i in range(self.annoy_.get_n_items()):
ind, dist = self.annoy_.get_nns_by_item(
i, n_neighbors, self.search_k, include_distances=True
)
indices[i], distances[i] = ind, dist
else:
for i, x in enumerate(X):
indices[i], distances[i] = self.annoy_.get_nns_by_vector(
x.tolist(), n_neighbors, self.search_k, include_distances=True
)
indptr = np.arange(0, n_samples_transform * n_neighbors + 1, n_neighbors)
kneighbors_graph = csr_matrix(
(distances.ravel(), indices.ravel(), indptr),
shape=(n_samples_transform, self.n_samples_fit_),
)
return kneighbors_graph
def test_transformers():
"""Test that AnnoyTransformer and KNeighborsTransformer give same results"""
X = np.random.RandomState(42).randn(10, 2)
knn = KNeighborsTransformer()
Xt0 = knn.fit_transform(X)
ann = AnnoyTransformer()
Xt1 = ann.fit_transform(X)
nms = NMSlibTransformer()
Xt2 = nms.fit_transform(X)
assert_array_almost_equal(Xt0.toarray(), Xt1.toarray(), decimal=5)
assert_array_almost_equal(Xt0.toarray(), Xt2.toarray(), decimal=5)
def load_mnist(n_samples):
"""Load MNIST, shuffle the data, and return only n_samples."""
mnist = fetch_openml("mnist_784", as_frame=False)
X, y = shuffle(mnist.data, mnist.target, random_state=2)
return X[:n_samples] / 255, y[:n_samples]
def run_benchmark():
datasets = [
("MNIST_2000", load_mnist(n_samples=2000)),
("MNIST_10000", load_mnist(n_samples=10000)),
]
n_iter = 500
perplexity = 30
metric = "euclidean"
# TSNE requires a certain number of neighbors which depends on the
# perplexity parameter.
# Add one since we include each sample as its own neighbor.
n_neighbors = int(3.0 * perplexity + 1) + 1
tsne_params = dict(
init="random", # pca not supported for sparse matrices
perplexity=perplexity,
method="barnes_hut",
random_state=42,
n_iter=n_iter,
learning_rate="auto",
)
transformers = [
("AnnoyTransformer", AnnoyTransformer(n_neighbors=n_neighbors, metric=metric)),
(
"NMSlibTransformer",
NMSlibTransformer(n_neighbors=n_neighbors, metric=metric),
),
(
"KNeighborsTransformer",
KNeighborsTransformer(
n_neighbors=n_neighbors, mode="distance", metric=metric
),
),
(
"TSNE with AnnoyTransformer",
make_pipeline(
AnnoyTransformer(n_neighbors=n_neighbors, metric=metric),
TSNE(metric="precomputed", **tsne_params),
),
),
(
"TSNE with NMSlibTransformer",
make_pipeline(
NMSlibTransformer(n_neighbors=n_neighbors, metric=metric),
TSNE(metric="precomputed", **tsne_params),
),
),
(
"TSNE with KNeighborsTransformer",
make_pipeline(
KNeighborsTransformer(
n_neighbors=n_neighbors, mode="distance", metric=metric
),
TSNE(metric="precomputed", **tsne_params),
),
),
("TSNE with internal NearestNeighbors", TSNE(metric=metric, **tsne_params)),
]
# init the plot
nrows = len(datasets)
ncols = np.sum([1 for name, model in transformers if "TSNE" in name])
fig, axes = plt.subplots(
nrows=nrows, ncols=ncols, squeeze=False, figsize=(5 * ncols, 4 * nrows)
)
axes = axes.ravel()
i_ax = 0
for dataset_name, (X, y) in datasets:
msg = "Benchmarking on %s:" % dataset_name
print("\n%s\n%s" % (msg, "-" * len(msg)))
for transformer_name, transformer in transformers:
start = time.time()
Xt = transformer.fit_transform(X)
duration = time.time() - start
# print the duration report
longest = np.max([len(name) for name, model in transformers])
whitespaces = " " * (longest - len(transformer_name))
print("%s: %s%.3f sec" % (transformer_name, whitespaces, duration))
# plot TSNE embedding which should be very similar across methods
if "TSNE" in transformer_name:
axes[i_ax].set_title(transformer_name + "\non " + dataset_name)
axes[i_ax].scatter(
Xt[:, 0],
Xt[:, 1],
c=y.astype(np.int32),
alpha=0.2,
cmap=plt.cm.viridis,
)
axes[i_ax].xaxis.set_major_formatter(NullFormatter())
axes[i_ax].yaxis.set_major_formatter(NullFormatter())
axes[i_ax].axis("tight")
i_ax += 1
fig.tight_layout()
plt.show()
if __name__ == "__main__":
test_transformers()
run_benchmark()
```
**Total running time of the script:** ( 0 minutes 0.000 seconds)
[`Download Python source code: approximate_nearest_neighbors.py`](https://scikit-learn.org/1.1/_downloads/dcd99fee3ee8ac76b69a1d2d6f5c7e78/approximate_nearest_neighbors.py)
[`Download Jupyter notebook: approximate_nearest_neighbors.ipynb`](https://scikit-learn.org/1.1/_downloads/73ec8f08ae3ece02509539ee03cc3cdd/approximate_nearest_neighbors.ipynb)
scikit_learn Nearest Centroid Classification Note
Nearest Centroid Classification
===============================
Sample usage of Nearest Centroid classification. It will plot the decision boundaries for each class.
```
None 0.8133333333333334
0.2 0.82
```
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.neighbors import NearestCentroid
from sklearn.inspection import DecisionBoundaryDisplay
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
# we only take the first two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
# Create color maps
cmap_light = ListedColormap(["orange", "cyan", "cornflowerblue"])
cmap_bold = ListedColormap(["darkorange", "c", "darkblue"])
for shrinkage in [None, 0.2]:
# we create an instance of Neighbours Classifier and fit the data.
clf = NearestCentroid(shrink_threshold=shrinkage)
clf.fit(X, y)
y_pred = clf.predict(X)
print(shrinkage, np.mean(y == y_pred))
_, ax = plt.subplots()
DecisionBoundaryDisplay.from_estimator(
clf, X, cmap=cmap_light, ax=ax, response_method="predict"
)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, edgecolor="k", s=20)
plt.title("3-Class classification (shrink_threshold=%r)" % shrinkage)
plt.axis("tight")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.136 seconds)
[`Download Python source code: plot_nearest_centroid.py`](https://scikit-learn.org/1.1/_downloads/1ee82dc6471486cb5b088fc473cd945b/plot_nearest_centroid.py)
[`Download Jupyter notebook: plot_nearest_centroid.ipynb`](https://scikit-learn.org/1.1/_downloads/06ffeb4f0ded6447302acd5a712f8490/plot_nearest_centroid.ipynb)
scikit_learn Caching nearest neighbors Note
Caching nearest neighbors
=========================
This examples demonstrates how to precompute the k nearest neighbors before using them in KNeighborsClassifier. KNeighborsClassifier can compute the nearest neighbors internally, but precomputing them can have several benefits, such as finer parameter control, caching for multiple use, or custom implementations.
Here we use the caching property of pipelines to cache the nearest neighbors graph between multiple fits of KNeighborsClassifier. The first call is slow since it computes the neighbors graph, while subsequent calls are faster as they do not need to recompute the graph. Here the durations are small since the dataset is small, but the gain can be more substantial when the dataset grows larger, or when the grid of parameters to search is large.
```
# Author: Tom Dupre la Tour
#
# License: BSD 3 clause
from tempfile import TemporaryDirectory
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsTransformer, KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_digits
from sklearn.pipeline import Pipeline
X, y = load_digits(return_X_y=True)
n_neighbors_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
# The transformer computes the nearest neighbors graph using the maximum number
# of neighbors necessary in the grid search. The classifier model filters the
# nearest neighbors graph as required by its own n_neighbors parameter.
graph_model = KNeighborsTransformer(n_neighbors=max(n_neighbors_list), mode="distance")
classifier_model = KNeighborsClassifier(metric="precomputed")
# Note that we give `memory` a directory to cache the graph computation
# that will be used several times when tuning the hyperparameters of the
# classifier.
with TemporaryDirectory(prefix="sklearn_graph_cache_") as tmpdir:
full_model = Pipeline(
steps=[("graph", graph_model), ("classifier", classifier_model)], memory=tmpdir
)
param_grid = {"classifier__n_neighbors": n_neighbors_list}
grid_model = GridSearchCV(full_model, param_grid)
grid_model.fit(X, y)
# Plot the results of the grid search.
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].errorbar(
x=n_neighbors_list,
y=grid_model.cv_results_["mean_test_score"],
yerr=grid_model.cv_results_["std_test_score"],
)
axes[0].set(xlabel="n_neighbors", title="Classification accuracy")
axes[1].errorbar(
x=n_neighbors_list,
y=grid_model.cv_results_["mean_fit_time"],
yerr=grid_model.cv_results_["std_fit_time"],
color="r",
)
axes[1].set(xlabel="n_neighbors", title="Fit time (with caching)")
fig.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.877 seconds)
[`Download Python source code: plot_caching_nearest_neighbors.py`](https://scikit-learn.org/1.1/_downloads/7880ee063d9d3845a0ed85afa9948265/plot_caching_nearest_neighbors.py)
[`Download Jupyter notebook: plot_caching_nearest_neighbors.ipynb`](https://scikit-learn.org/1.1/_downloads/2eaa49c025f80c826512eda4a8add5c3/plot_caching_nearest_neighbors.ipynb)
scikit_learn Novelty detection with Local Outlier Factor (LOF) Note
Novelty detection with Local Outlier Factor (LOF)
=================================================
The Local Outlier Factor (LOF) algorithm is an unsupervised anomaly detection method which computes the local density deviation of a given data point with respect to its neighbors. It considers as outliers the samples that have a substantially lower density than their neighbors. This example shows how to use LOF for novelty detection. Note that when LOF is used for novelty detection you MUST not use predict, decision\_function and score\_samples on the training set as this would lead to wrong results. You must only use these methods on new unseen data (which are not in the training set). See [User Guide](../../modules/outlier_detection#outlier-detection): for details on the difference between outlier detection and novelty detection and how to use LOF for outlier detection.
The number of neighbors considered (parameter n\_neighbors) is typically set 1) greater than the minimum number of samples a cluster has to contain, so that other samples can be local outliers relative to this cluster, and 2) smaller than the maximum number of close-by samples that can potentially be local outliers. In practice, such information is generally not available, and taking n\_neighbors=20 appears to work well in general.
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn.neighbors import LocalOutlierFactor
np.random.seed(42)
xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5, 5, 500))
# Generate normal (not abnormal) training observations
X = 0.3 * np.random.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
# Generate new normal (not abnormal) observations
X = 0.3 * np.random.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = np.random.uniform(low=-4, high=4, size=(20, 2))
# fit the model for novelty detection (novelty=True)
clf = LocalOutlierFactor(n_neighbors=20, novelty=True, contamination=0.1)
clf.fit(X_train)
# DO NOT use predict, decision_function and score_samples on X_train as this
# would give wrong results but only on new unseen data (not used in X_train),
# e.g. X_test, X_outliers or the meshgrid
y_pred_test = clf.predict(X_test)
y_pred_outliers = clf.predict(X_outliers)
n_error_test = y_pred_test[y_pred_test == -1].size
n_error_outliers = y_pred_outliers[y_pred_outliers == 1].size
# plot the learned frontier, the points, and the nearest vectors to the plane
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title("Novelty Detection with LOF")
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.PuBu)
a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors="darkred")
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors="palevioletred")
s = 40
b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c="white", s=s, edgecolors="k")
b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c="blueviolet", s=s, edgecolors="k")
c = plt.scatter(X_outliers[:, 0], X_outliers[:, 1], c="gold", s=s, edgecolors="k")
plt.axis("tight")
plt.xlim((-5, 5))
plt.ylim((-5, 5))
plt.legend(
[a.collections[0], b1, b2, c],
[
"learned frontier",
"training observations",
"new regular observations",
"new abnormal observations",
],
loc="upper left",
prop=matplotlib.font_manager.FontProperties(size=11),
)
plt.xlabel(
"errors novel regular: %d/40 ; errors novel abnormal: %d/40"
% (n_error_test, n_error_outliers)
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.682 seconds)
[`Download Python source code: plot_lof_novelty_detection.py`](https://scikit-learn.org/1.1/_downloads/42b321f3e1d8c7a657ebec98c5d6ea0d/plot_lof_novelty_detection.py)
[`Download Jupyter notebook: plot_lof_novelty_detection.ipynb`](https://scikit-learn.org/1.1/_downloads/1ecb1284f0b785cd0011e155bf44657c/plot_lof_novelty_detection.ipynb)
scikit_learn Kernel Density Estimation Note
Kernel Density Estimation
=========================
This example shows how kernel density estimation (KDE), a powerful non-parametric density estimation technique, can be used to learn a generative model for a dataset. With this generative model in place, new samples can be drawn. These new samples reflect the underlying model of the data.
```
best bandwidth: 3.79269019073225
```
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.neighbors import KernelDensity
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
# load the data
digits = load_digits()
# project the 64-dimensional data to a lower dimension
pca = PCA(n_components=15, whiten=False)
data = pca.fit_transform(digits.data)
# use grid search cross-validation to optimize the bandwidth
params = {"bandwidth": np.logspace(-1, 1, 20)}
grid = GridSearchCV(KernelDensity(), params)
grid.fit(data)
print("best bandwidth: {0}".format(grid.best_estimator_.bandwidth))
# use the best estimator to compute the kernel density estimate
kde = grid.best_estimator_
# sample 44 new points from the data
new_data = kde.sample(44, random_state=0)
new_data = pca.inverse_transform(new_data)
# turn data into a 4x11 grid
new_data = new_data.reshape((4, 11, -1))
real_data = digits.data[:44].reshape((4, 11, -1))
# plot real digits and resampled digits
fig, ax = plt.subplots(9, 11, subplot_kw=dict(xticks=[], yticks=[]))
for j in range(11):
ax[4, j].set_visible(False)
for i in range(4):
im = ax[i, j].imshow(
real_data[i, j].reshape((8, 8)), cmap=plt.cm.binary, interpolation="nearest"
)
im.set_clim(0, 16)
im = ax[i + 5, j].imshow(
new_data[i, j].reshape((8, 8)), cmap=plt.cm.binary, interpolation="nearest"
)
im.set_clim(0, 16)
ax[0, 5].set_title("Selection from the input data")
ax[5, 5].set_title('"New" digits drawn from the kernel density model')
plt.show()
```
**Total running time of the script:** ( 0 minutes 4.343 seconds)
[`Download Python source code: plot_digits_kde_sampling.py`](https://scikit-learn.org/1.1/_downloads/02a7bbce3c39c70d62d80e875968e5c6/plot_digits_kde_sampling.py)
[`Download Jupyter notebook: plot_digits_kde_sampling.ipynb`](https://scikit-learn.org/1.1/_downloads/c82bd3e95b1fad595c0ed8d47e58d845/plot_digits_kde_sampling.ipynb)
scikit_learn Neighborhood Components Analysis Illustration Note
Click [here](#sphx-glr-download-auto-examples-neighbors-plot-nca-illustration-py) to download the full example code or to run this example in your browser via Binder
Neighborhood Components Analysis Illustration
=============================================
This example illustrates a learned distance metric that maximizes the nearest neighbors classification accuracy. It provides a visual representation of this metric compared to the original point space. Please refer to the [User Guide](../../modules/neighbors#nca) for more information.
```
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from matplotlib import cm
from scipy.special import logsumexp
```
Original points
---------------
First we create a data set of 9 samples from 3 classes, and plot the points in the original space. For this example, we focus on the classification of point no. 3. The thickness of a link between point no. 3 and another point is proportional to their distance.
```
X, y = make_classification(
n_samples=9,
n_features=2,
n_informative=2,
n_redundant=0,
n_classes=3,
n_clusters_per_class=1,
class_sep=1.0,
random_state=0,
)
plt.figure(1)
ax = plt.gca()
for i in range(X.shape[0]):
ax.text(X[i, 0], X[i, 1], str(i), va="center", ha="center")
ax.scatter(X[i, 0], X[i, 1], s=300, c=cm.Set1(y[[i]]), alpha=0.4)
ax.set_title("Original points")
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.axis("equal") # so that boundaries are displayed correctly as circles
def link_thickness_i(X, i):
diff_embedded = X[i] - X
dist_embedded = np.einsum("ij,ij->i", diff_embedded, diff_embedded)
dist_embedded[i] = np.inf
# compute exponentiated distances (use the log-sum-exp trick to
# avoid numerical instabilities
exp_dist_embedded = np.exp(-dist_embedded - logsumexp(-dist_embedded))
return exp_dist_embedded
def relate_point(X, i, ax):
pt_i = X[i]
for j, pt_j in enumerate(X):
thickness = link_thickness_i(X, i)
if i != j:
line = ([pt_i[0], pt_j[0]], [pt_i[1], pt_j[1]])
ax.plot(*line, c=cm.Set1(y[j]), linewidth=5 * thickness[j])
i = 3
relate_point(X, i, ax)
plt.show()
```
Learning an embedding
---------------------
We use [`NeighborhoodComponentsAnalysis`](../../modules/generated/sklearn.neighbors.neighborhoodcomponentsanalysis#sklearn.neighbors.NeighborhoodComponentsAnalysis "sklearn.neighbors.NeighborhoodComponentsAnalysis") to learn an embedding and plot the points after the transformation. We then take the embedding and find the nearest neighbors.
```
nca = NeighborhoodComponentsAnalysis(max_iter=30, random_state=0)
nca = nca.fit(X, y)
plt.figure(2)
ax2 = plt.gca()
X_embedded = nca.transform(X)
relate_point(X_embedded, i, ax2)
for i in range(len(X)):
ax2.text(X_embedded[i, 0], X_embedded[i, 1], str(i), va="center", ha="center")
ax2.scatter(X_embedded[i, 0], X_embedded[i, 1], s=300, c=cm.Set1(y[[i]]), alpha=0.4)
ax2.set_title("NCA embedding")
ax2.axes.get_xaxis().set_visible(False)
ax2.axes.get_yaxis().set_visible(False)
ax2.axis("equal")
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.149 seconds)
[`Download Python source code: plot_nca_illustration.py`](https://scikit-learn.org/1.1/_downloads/3c0067f1f85f236ccc8a25c1245347e8/plot_nca_illustration.py)
[`Download Jupyter notebook: plot_nca_illustration.ipynb`](https://scikit-learn.org/1.1/_downloads/19e9c0cb24a132133cef3b311caaf199/plot_nca_illustration.ipynb)
scikit_learn Nearest Neighbors regression Note
Click [here](#sphx-glr-download-auto-examples-neighbors-plot-regression-py) to download the full example code or to run this example in your browser via Binder
Nearest Neighbors regression
============================
This example demonstrates the resolution of a regression problem using k-Nearest Neighbors and the interpolation of the target using both barycenter and constant weights.
```
# Author: Alexandre Gramfort <[email protected]>
# Fabian Pedregosa <[email protected]>
#
# License: BSD 3 clause (C) INRIA
```
Generate sample data
--------------------
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors
np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
y = np.sin(X).ravel()
# Add noise to targets
y[::5] += 1 * (0.5 - np.random.rand(8))
```
Fit regression model
--------------------
```
n_neighbors = 5
for i, weights in enumerate(["uniform", "distance"]):
knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
y_ = knn.fit(X, y).predict(T)
plt.subplot(2, 1, i + 1)
plt.scatter(X, y, color="darkorange", label="data")
plt.plot(T, y_, color="navy", label="prediction")
plt.axis("tight")
plt.legend()
plt.title("KNeighborsRegressor (k = %i, weights = '%s')" % (n_neighbors, weights))
plt.tight_layout()
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.127 seconds)
[`Download Python source code: plot_regression.py`](https://scikit-learn.org/1.1/_downloads/437df39fcde24ead7b91917f2133a53c/plot_regression.py)
[`Download Jupyter notebook: plot_regression.ipynb`](https://scikit-learn.org/1.1/_downloads/8daa075623a97eefa5be2c4a6eb55992/plot_regression.ipynb)
scikit_learn Dimensionality Reduction with Neighborhood Components Analysis Note
Click [here](#sphx-glr-download-auto-examples-neighbors-plot-nca-dim-reduction-py) to download the full example code or to run this example in your browser via Binder
Dimensionality Reduction with Neighborhood Components Analysis
==============================================================
Sample usage of Neighborhood Components Analysis for dimensionality reduction.
This example compares different (linear) dimensionality reduction methods applied on the Digits data set. The data set contains images of digits from 0 to 9 with approximately 180 samples of each class. Each image is of dimension 8x8 = 64, and is reduced to a two-dimensional data point.
Principal Component Analysis (PCA) applied to this data identifies the combination of attributes (principal components, or directions in the feature space) that account for the most variance in the data. Here we plot the different samples on the 2 first principal components.
Linear Discriminant Analysis (LDA) tries to identify attributes that account for the most variance *between classes*. In particular, LDA, in contrast to PCA, is a supervised method, using known class labels.
Neighborhood Components Analysis (NCA) tries to find a feature space such that a stochastic nearest neighbor algorithm will give the best accuracy. Like LDA, it is a supervised method.
One can see that NCA enforces a clustering of the data that is visually meaningful despite the large reduction in dimension.
```
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
n_neighbors = 3
random_state = 0
# Load Digits dataset
X, y = datasets.load_digits(return_X_y=True)
# Split into train/test
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, stratify=y, random_state=random_state
)
dim = len(X[0])
n_classes = len(np.unique(y))
# Reduce dimension to 2 with PCA
pca = make_pipeline(StandardScaler(), PCA(n_components=2, random_state=random_state))
# Reduce dimension to 2 with LinearDiscriminantAnalysis
lda = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis(n_components=2))
# Reduce dimension to 2 with NeighborhoodComponentAnalysis
nca = make_pipeline(
StandardScaler(),
NeighborhoodComponentsAnalysis(n_components=2, random_state=random_state),
)
# Use a nearest neighbor classifier to evaluate the methods
knn = KNeighborsClassifier(n_neighbors=n_neighbors)
# Make a list of the methods to be compared
dim_reduction_methods = [("PCA", pca), ("LDA", lda), ("NCA", nca)]
# plt.figure()
for i, (name, model) in enumerate(dim_reduction_methods):
plt.figure()
# plt.subplot(1, 3, i + 1, aspect=1)
# Fit the method's model
model.fit(X_train, y_train)
# Fit a nearest neighbor classifier on the embedded training set
knn.fit(model.transform(X_train), y_train)
# Compute the nearest neighbor accuracy on the embedded test set
acc_knn = knn.score(model.transform(X_test), y_test)
# Embed the data set in 2 dimensions using the fitted model
X_embedded = model.transform(X)
# Plot the projected points and show the evaluation score
plt.scatter(X_embedded[:, 0], X_embedded[:, 1], c=y, s=30, cmap="Set1")
plt.title(
"{}, KNN (k={})\nTest accuracy = {:.2f}".format(name, n_neighbors, acc_knn)
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 1.500 seconds)
[`Download Python source code: plot_nca_dim_reduction.py`](https://scikit-learn.org/1.1/_downloads/4964d8bc6f2b7f1d6bceb91753b362f0/plot_nca_dim_reduction.py)
[`Download Jupyter notebook: plot_nca_dim_reduction.ipynb`](https://scikit-learn.org/1.1/_downloads/e44c57031af568237dc334b0d317a929/plot_nca_dim_reduction.ipynb)
scikit_learn Comparing Nearest Neighbors with and without Neighborhood Components Analysis Note
Click [here](#sphx-glr-download-auto-examples-neighbors-plot-nca-classification-py) to download the full example code or to run this example in your browser via Binder
Comparing Nearest Neighbors with and without Neighborhood Components Analysis
=============================================================================
An example comparing nearest neighbors classification with and without Neighborhood Components Analysis.
It will plot the class decision boundaries given by a Nearest Neighbors classifier when using the Euclidean distance on the original features, versus using the Euclidean distance after the transformation learned by Neighborhood Components Analysis. The latter aims to find a linear transformation that maximises the (stochastic) nearest neighbor classification accuracy on the training set.
```
# License: BSD 3 clause
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline
from sklearn.inspection import DecisionBoundaryDisplay
n_neighbors = 1
dataset = datasets.load_iris()
X, y = dataset.data, dataset.target
# we only take two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = X[:, [0, 2]]
X_train, X_test, y_train, y_test = train_test_split(
X, y, stratify=y, test_size=0.7, random_state=42
)
h = 0.05 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(["#FFAAAA", "#AAFFAA", "#AAAAFF"])
cmap_bold = ListedColormap(["#FF0000", "#00FF00", "#0000FF"])
names = ["KNN", "NCA, KNN"]
classifiers = [
Pipeline(
[
("scaler", StandardScaler()),
("knn", KNeighborsClassifier(n_neighbors=n_neighbors)),
]
),
Pipeline(
[
("scaler", StandardScaler()),
("nca", NeighborhoodComponentsAnalysis()),
("knn", KNeighborsClassifier(n_neighbors=n_neighbors)),
]
),
]
for name, clf in zip(names, classifiers):
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
_, ax = plt.subplots()
DecisionBoundaryDisplay.from_estimator(
clf,
X,
cmap=cmap_light,
alpha=0.8,
ax=ax,
response_method="predict",
plot_method="pcolormesh",
shading="auto",
)
# Plot also the training and testing points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, edgecolor="k", s=20)
plt.title("{} (k = {})".format(name, n_neighbors))
plt.text(
0.9,
0.1,
"{:.2f}".format(score),
size=15,
ha="center",
va="center",
transform=plt.gca().transAxes,
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.458 seconds)
[`Download Python source code: plot_nca_classification.py`](https://scikit-learn.org/1.1/_downloads/b7792f6c26a74369f67bbe6f9ac41edf/plot_nca_classification.py)
[`Download Jupyter notebook: plot_nca_classification.ipynb`](https://scikit-learn.org/1.1/_downloads/54c7ea3b3671861fbfb2161a6f0ab6d0/plot_nca_classification.ipynb)
scikit_learn Simple 1D Kernel Density Estimation Note
Click [here](#sphx-glr-download-auto-examples-neighbors-plot-kde-1d-py) to download the full example code or to run this example in your browser via Binder
Simple 1D Kernel Density Estimation
===================================
This example uses the [`KernelDensity`](../../modules/generated/sklearn.neighbors.kerneldensity#sklearn.neighbors.KernelDensity "sklearn.neighbors.KernelDensity") class to demonstrate the principles of Kernel Density Estimation in one dimension.
The first plot shows one of the problems with using histograms to visualize the density of points in 1D. Intuitively, a histogram can be thought of as a scheme in which a unit “block” is stacked above each point on a regular grid. As the top two panels show, however, the choice of gridding for these blocks can lead to wildly divergent ideas about the underlying shape of the density distribution. If we instead center each block on the point it represents, we get the estimate shown in the bottom left panel. This is a kernel density estimation with a “top hat” kernel. This idea can be generalized to other kernel shapes: the bottom-right panel of the first figure shows a Gaussian kernel density estimate over the same distribution.
Scikit-learn implements efficient kernel density estimation using either a Ball Tree or KD Tree structure, through the [`KernelDensity`](../../modules/generated/sklearn.neighbors.kerneldensity#sklearn.neighbors.KernelDensity "sklearn.neighbors.KernelDensity") estimator. The available kernels are shown in the second figure of this example.
The third figure compares kernel density estimates for a distribution of 100 samples in 1 dimension. Though this example uses 1D distributions, kernel density estimation is easily and efficiently extensible to higher dimensions as well.
```
# Author: Jake Vanderplas <[email protected]>
#
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn.neighbors import KernelDensity
# ----------------------------------------------------------------------
# Plot the progression of histograms to kernels
np.random.seed(1)
N = 20
X = np.concatenate(
(np.random.normal(0, 1, int(0.3 * N)), np.random.normal(5, 1, int(0.7 * N)))
)[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
bins = np.linspace(-5, 10, 10)
fig, ax = plt.subplots(2, 2, sharex=True, sharey=True)
fig.subplots_adjust(hspace=0.05, wspace=0.05)
# histogram 1
ax[0, 0].hist(X[:, 0], bins=bins, fc="#AAAAFF", density=True)
ax[0, 0].text(-3.5, 0.31, "Histogram")
# histogram 2
ax[0, 1].hist(X[:, 0], bins=bins + 0.75, fc="#AAAAFF", density=True)
ax[0, 1].text(-3.5, 0.31, "Histogram, bins shifted")
# tophat KDE
kde = KernelDensity(kernel="tophat", bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 0].fill(X_plot[:, 0], np.exp(log_dens), fc="#AAAAFF")
ax[1, 0].text(-3.5, 0.31, "Tophat Kernel Density")
# Gaussian KDE
kde = KernelDensity(kernel="gaussian", bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 1].fill(X_plot[:, 0], np.exp(log_dens), fc="#AAAAFF")
ax[1, 1].text(-3.5, 0.31, "Gaussian Kernel Density")
for axi in ax.ravel():
axi.plot(X[:, 0], np.full(X.shape[0], -0.01), "+k")
axi.set_xlim(-4, 9)
axi.set_ylim(-0.02, 0.34)
for axi in ax[:, 0]:
axi.set_ylabel("Normalized Density")
for axi in ax[1, :]:
axi.set_xlabel("x")
# ----------------------------------------------------------------------
# Plot all available kernels
X_plot = np.linspace(-6, 6, 1000)[:, None]
X_src = np.zeros((1, 1))
fig, ax = plt.subplots(2, 3, sharex=True, sharey=True)
fig.subplots_adjust(left=0.05, right=0.95, hspace=0.05, wspace=0.05)
def format_func(x, loc):
if x == 0:
return "0"
elif x == 1:
return "h"
elif x == -1:
return "-h"
else:
return "%ih" % x
for i, kernel in enumerate(
["gaussian", "tophat", "epanechnikov", "exponential", "linear", "cosine"]
):
axi = ax.ravel()[i]
log_dens = KernelDensity(kernel=kernel).fit(X_src).score_samples(X_plot)
axi.fill(X_plot[:, 0], np.exp(log_dens), "-k", fc="#AAAAFF")
axi.text(-2.6, 0.95, kernel)
axi.xaxis.set_major_formatter(plt.FuncFormatter(format_func))
axi.xaxis.set_major_locator(plt.MultipleLocator(1))
axi.yaxis.set_major_locator(plt.NullLocator())
axi.set_ylim(0, 1.05)
axi.set_xlim(-2.9, 2.9)
ax[0, 1].set_title("Available Kernels")
# ----------------------------------------------------------------------
# Plot a 1D density example
N = 100
np.random.seed(1)
X = np.concatenate(
(np.random.normal(0, 1, int(0.3 * N)), np.random.normal(5, 1, int(0.7 * N)))
)[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
true_dens = 0.3 * norm(0, 1).pdf(X_plot[:, 0]) + 0.7 * norm(5, 1).pdf(X_plot[:, 0])
fig, ax = plt.subplots()
ax.fill(X_plot[:, 0], true_dens, fc="black", alpha=0.2, label="input distribution")
colors = ["navy", "cornflowerblue", "darkorange"]
kernels = ["gaussian", "tophat", "epanechnikov"]
lw = 2
for color, kernel in zip(colors, kernels):
kde = KernelDensity(kernel=kernel, bandwidth=0.5).fit(X)
log_dens = kde.score_samples(X_plot)
ax.plot(
X_plot[:, 0],
np.exp(log_dens),
color=color,
lw=lw,
linestyle="-",
label="kernel = '{0}'".format(kernel),
)
ax.text(6, 0.38, "N={0} points".format(N))
ax.legend(loc="upper left")
ax.plot(X[:, 0], -0.005 - 0.01 * np.random.random(X.shape[0]), "+k")
ax.set_xlim(-4, 9)
ax.set_ylim(-0.02, 0.4)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.347 seconds)
[`Download Python source code: plot_kde_1d.py`](https://scikit-learn.org/1.1/_downloads/8c65925beb12265775bcceee8e0aa90b/plot_kde_1d.py)
[`Download Jupyter notebook: plot_kde_1d.ipynb`](https://scikit-learn.org/1.1/_downloads/7c2f454ae53819802ecec0f2cacd6d51/plot_kde_1d.ipynb)
scikit_learn Nearest Neighbors Classification Note
Click [here](#sphx-glr-download-auto-examples-neighbors-plot-classification-py) to download the full example code or to run this example in your browser via Binder
Nearest Neighbors Classification
================================
Sample usage of Nearest Neighbors classification. It will plot the decision boundaries for each class.
```
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
from sklearn.inspection import DecisionBoundaryDisplay
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
# we only take the first two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
# Create color maps
cmap_light = ListedColormap(["orange", "cyan", "cornflowerblue"])
cmap_bold = ["darkorange", "c", "darkblue"]
for weights in ["uniform", "distance"]:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
_, ax = plt.subplots()
DecisionBoundaryDisplay.from_estimator(
clf,
X,
cmap=cmap_light,
ax=ax,
response_method="predict",
plot_method="pcolormesh",
xlabel=iris.feature_names[0],
ylabel=iris.feature_names[1],
shading="auto",
)
# Plot also the training points
sns.scatterplot(
x=X[:, 0],
y=X[:, 1],
hue=iris.target_names[y],
palette=cmap_bold,
alpha=1.0,
edgecolor="black",
)
plt.title(
"3-Class classification (k = %i, weights = '%s')" % (n_neighbors, weights)
)
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.469 seconds)
[`Download Python source code: plot_classification.py`](https://scikit-learn.org/1.1/_downloads/8d0cc737ca20800f70d8aa80d8b8fb7d/plot_classification.py)
[`Download Jupyter notebook: plot_classification.ipynb`](https://scikit-learn.org/1.1/_downloads/47f024d726d245e034c7690b4664721f/plot_classification.ipynb)
scikit_learn Outlier detection with Local Outlier Factor (LOF) Note
Click [here](#sphx-glr-download-auto-examples-neighbors-plot-lof-outlier-detection-py) to download the full example code or to run this example in your browser via Binder
Outlier detection with Local Outlier Factor (LOF)
=================================================
The Local Outlier Factor (LOF) algorithm is an unsupervised anomaly detection method which computes the local density deviation of a given data point with respect to its neighbors. It considers as outliers the samples that have a substantially lower density than their neighbors. This example shows how to use LOF for outlier detection which is the default use case of this estimator in scikit-learn. Note that when LOF is used for outlier detection it has no predict, decision\_function and score\_samples methods. See [User Guide](../../modules/outlier_detection#outlier-detection): for details on the difference between outlier detection and novelty detection and how to use LOF for novelty detection.
The number of neighbors considered (parameter n\_neighbors) is typically set 1) greater than the minimum number of samples a cluster has to contain, so that other samples can be local outliers relative to this cluster, and 2) smaller than the maximum number of close by samples that can potentially be local outliers. In practice, such information is generally not available, and taking n\_neighbors=20 appears to work well in general.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import LocalOutlierFactor
np.random.seed(42)
# Generate train data
X_inliers = 0.3 * np.random.randn(100, 2)
X_inliers = np.r_[X_inliers + 2, X_inliers - 2]
# Generate some outliers
X_outliers = np.random.uniform(low=-4, high=4, size=(20, 2))
X = np.r_[X_inliers, X_outliers]
n_outliers = len(X_outliers)
ground_truth = np.ones(len(X), dtype=int)
ground_truth[-n_outliers:] = -1
# fit the model for outlier detection (default)
clf = LocalOutlierFactor(n_neighbors=20, contamination=0.1)
# use fit_predict to compute the predicted labels of the training samples
# (when LOF is used for outlier detection, the estimator has no predict,
# decision_function and score_samples methods).
y_pred = clf.fit_predict(X)
n_errors = (y_pred != ground_truth).sum()
X_scores = clf.negative_outlier_factor_
plt.title("Local Outlier Factor (LOF)")
plt.scatter(X[:, 0], X[:, 1], color="k", s=3.0, label="Data points")
# plot circles with radius proportional to the outlier scores
radius = (X_scores.max() - X_scores) / (X_scores.max() - X_scores.min())
plt.scatter(
X[:, 0],
X[:, 1],
s=1000 * radius,
edgecolors="r",
facecolors="none",
label="Outlier scores",
)
plt.axis("tight")
plt.xlim((-5, 5))
plt.ylim((-5, 5))
plt.xlabel("prediction errors: %d" % (n_errors))
legend = plt.legend(loc="upper left")
legend.legendHandles[0]._sizes = [10]
legend.legendHandles[1]._sizes = [20]
plt.show()
```
**Total running time of the script:** ( 0 minutes 0.073 seconds)
[`Download Python source code: plot_lof_outlier_detection.py`](https://scikit-learn.org/1.1/_downloads/06d58c7fb0650278c0e3ea127bff7167/plot_lof_outlier_detection.py)
[`Download Jupyter notebook: plot_lof_outlier_detection.ipynb`](https://scikit-learn.org/1.1/_downloads/a91fcd567e6bc2540b34f44d599d3531/plot_lof_outlier_detection.ipynb)
scikit_learn 4.1. Partial Dependence and Individual Conditional Expectation plots 4.1. Partial Dependence and Individual Conditional Expectation plots
====================================================================
Partial dependence plots (PDP) and individual conditional expectation (ICE) plots can be used to visualize and analyze interaction between the target response [[1]](#id6) and a set of input features of interest.
Both PDPs [[H2009]](#h2009) and ICEs [[G2015]](#g2015) assume that the input features of interest are independent from the complement features, and this assumption is often violated in practice. Thus, in the case of correlated features, we will create absurd data points to compute the PDP/ICE [[M2019]](#m2019).
4.1.1. Partial dependence plots
--------------------------------
Partial dependence plots (PDP) show the dependence between the target response and a set of input features of interest, marginalizing over the values of all other input features (the ‘complement’ features). Intuitively, we can interpret the partial dependence as the expected target response as a function of the input features of interest.
Due to the limits of human perception, the size of the set of input features of interest must be small (usually one or two); thus, the input features of interest are usually chosen from among the most important features.
The figure below shows two one-way and one two-way partial dependence plots for the California housing dataset, with a [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor"):
One-way PDPs tell us about the interaction between the target response and an input feature of interest (e.g. linear, non-linear). The left plot in the above figure shows the effect of the average occupancy on the median house price; we can clearly see a linear relationship between them when the average occupancy is below 3 persons. Similarly, we could analyze the effect of the house age on the median house price (middle plot). These interpretations are marginal, considering one feature at a time.
PDPs with two input features of interest show the interactions among the two features. For example, the two-variable PDP in the above figure shows the dependence of median house price on joint values of house age and average occupants per household. We can clearly see an interaction between the two features: for an average occupancy greater than two, the house price is nearly independent of the house age, whereas for values less than 2 there is a strong dependence on age.
The [`sklearn.inspection`](classes#module-sklearn.inspection "sklearn.inspection") module provides a convenience function [`from_estimator`](generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay.from_estimator "sklearn.inspection.PartialDependenceDisplay.from_estimator") to create one-way and two-way partial dependence plots. In the below example we show how to create a grid of partial dependence plots: two one-way PDPs for the features `0` and `1` and a two-way PDP between the two features:
```
>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> from sklearn.inspection import PartialDependenceDisplay
>>> X, y = make_hastie_10_2(random_state=0)
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
... max_depth=1, random_state=0).fit(X, y)
>>> features = [0, 1, (0, 1)]
>>> PartialDependenceDisplay.from_estimator(clf, X, features)
<...>
```
You can access the newly created figure and Axes objects using `plt.gcf()` and `plt.gca()`.
For multi-class classification, you need to set the class label for which the PDPs should be created via the `target` argument:
```
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> mc_clf = GradientBoostingClassifier(n_estimators=10,
... max_depth=1).fit(iris.data, iris.target)
>>> features = [3, 2, (3, 2)]
>>> PartialDependenceDisplay.from_estimator(mc_clf, X, features, target=0)
<...>
```
The same parameter `target` is used to specify the target in multi-output regression settings.
If you need the raw values of the partial dependence function rather than the plots, you can use the [`sklearn.inspection.partial_dependence`](generated/sklearn.inspection.partial_dependence#sklearn.inspection.partial_dependence "sklearn.inspection.partial_dependence") function:
```
>>> from sklearn.inspection import partial_dependence
>>> results = partial_dependence(clf, X, [0])
>>> results["average"]
array([[ 2.466..., 2.466..., ...
>>> results["values"]
[array([-1.624..., -1.592..., ...
```
The values at which the partial dependence should be evaluated are directly generated from `X`. For 2-way partial dependence, a 2D-grid of values is generated. The `values` field returned by [`sklearn.inspection.partial_dependence`](generated/sklearn.inspection.partial_dependence#sklearn.inspection.partial_dependence "sklearn.inspection.partial_dependence") gives the actual values used in the grid for each input feature of interest. They also correspond to the axis of the plots.
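For instance, a minimal sketch of inspecting the grid of a two-way partial dependence, reusing the `clf` and `X` fitted above (the shapes shown in the comments assume the default `grid_resolution=100`):
```
from sklearn.inspection import partial_dependence

results_2d = partial_dependence(clf, X, [(0, 1)])
# one 1D grid per input feature of interest; the averaged response is
# evaluated on the Cartesian product of these grids
grid_0, grid_1 = results_2d["values"]
print(grid_0.shape, grid_1.shape)   # roughly (100,) (100,)
print(results_2d["average"].shape)  # (n_outputs, len(grid_0), len(grid_1))
```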
4.1.2. Individual conditional expectation (ICE) plot
-----------------------------------------------------
Similar to a PDP, an individual conditional expectation (ICE) plot shows the dependence between the target function and an input feature of interest. However, unlike a PDP, which shows the average effect of the input feature, an ICE plot visualizes the dependence of the prediction on a feature for each sample separately with one line per sample. Due to the limits of human perception, only one input feature of interest is supported for ICE plots.
The figures below show four ICE plots for the California housing dataset, with a [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor"). The second figure plots the corresponding PD line overlaid on ICE lines.
While the PDPs are good at showing the average effect of the target features, they can obscure a heterogeneous relationship created by interactions. When interactions are present the ICE plot will provide many more insights. For example, we could observe a linear relationship between the median income and the house price in the PD line. However, the ICE lines show that there are some exceptions, where the house price remains constant in some ranges of the median income.
The [`sklearn.inspection`](classes#module-sklearn.inspection "sklearn.inspection") module’s [`PartialDependenceDisplay.from_estimator`](generated/sklearn.inspection.partialdependencedisplay#sklearn.inspection.PartialDependenceDisplay.from_estimator "sklearn.inspection.PartialDependenceDisplay.from_estimator") convenience function can be used to create ICE plots by setting `kind='individual'`. In the example below, we show how to create a grid of ICE plots:
```
>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> from sklearn.inspection import PartialDependenceDisplay
```
```
>>> X, y = make_hastie_10_2(random_state=0)
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
... max_depth=1, random_state=0).fit(X, y)
>>> features = [0, 1]
>>> PartialDependenceDisplay.from_estimator(clf, X, features,
... kind='individual')
<...>
```
In ICE plots it might not be easy to see the average effect of the input feature of interest. Hence, it is recommended to use ICE plots alongside PDPs. They can be plotted together with `kind='both'`.
```
>>> PartialDependenceDisplay.from_estimator(clf, X, features,
... kind='both')
<...>
```
If there are too many lines in an ICE plot, it can be difficult to see differences between individual samples and interpret the model. Centering the ICE at the first value on the x-axis produces centered Individual Conditional Expectation (cICE) plots [[G2015]](#g2015). This puts emphasis on the divergence of individual conditional expectations from the mean line, thus making it easier to explore heterogeneous relationships. cICE plots can be plotted by setting `centered=True`:
```
>>> PartialDependenceDisplay.from_estimator(clf, X, features,
... kind='both', centered=True)
<...>
```
4.1.3. Mathematical Definition
-------------------------------
Let \(X\_S\) be the set of input features of interest (i.e. the `features` parameter) and let \(X\_C\) be its complement.
The partial dependence of the response \(f\) at a point \(x\_S\) is defined as:
\[\begin{split}pd\_{X\_S}(x\_S) &\overset{def}{=} \mathbb{E}\_{X\_C}\left[ f(x\_S, X\_C) \right]\\ &= \int f(x\_S, x\_C) p(x\_C) dx\_C,\end{split}\] where \(f(x\_S, x\_C)\) is the response function ([predict](https://scikit-learn.org/1.1/glossary.html#term-predict), [predict\_proba](https://scikit-learn.org/1.1/glossary.html#term-predict_proba) or [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function)) for a given sample whose values are defined by \(x\_S\) for the features in \(X\_S\), and by \(x\_C\) for the features in \(X\_C\). Note that \(x\_S\) and \(x\_C\) may be tuples.
Computing this integral for various values of \(x\_S\) produces a PDP plot as above. An ICE line is defined as a single \(f(x\_{S}, x\_{C}^{(i)})\) evaluated at \(x\_{S}\).
4.1.4. Computation methods
---------------------------
There are two main methods to approximate the integral above, namely the ‘brute’ and ‘recursion’ methods. The `method` parameter controls which method to use.
The ‘brute’ method is a generic method that works with any estimator. Note that computing ICE plots is only supported with the ‘brute’ method. It approximates the above integral by computing an average over the data `X`:
\[pd\_{X\_S}(x\_S) \approx \frac{1}{n\_\text{samples}} \sum\_{i=1}^n f(x\_S, x\_C^{(i)}),\] where \(x\_C^{(i)}\) is the value of the i-th sample for the features in \(X\_C\). For each value of \(x\_S\), this method requires a full pass over the dataset `X` which is computationally intensive.
Each of the \(f(x\_{S}, x\_{C}^{(i)})\) corresponds to one ICE line evaluated at \(x\_{S}\). Computing this for multiple values of \(x\_{S}\) yields a full ICE line. As one can see, the average of the ICE lines corresponds to the partial dependence line.
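As an illustration, here is a minimal sketch of the ‘brute’ averaging for a single input feature of interest (index `0`), reusing the `clf` and `X` fitted above; the grid construction is deliberately simplified (no percentile clipping) and the response is taken to be `decision_function`:
```
import numpy as np

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
ice_lines = np.empty((len(grid), X.shape[0]))
for k, v in enumerate(grid):
    X_mod = X.copy()
    X_mod[:, 0] = v                              # force the feature of interest to v
    ice_lines[k] = clf.decision_function(X_mod)  # one ICE value per sample
pd_line = ice_lines.mean(axis=1)                 # averaging the ICE lines gives the PDP
```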
The ‘recursion’ method is faster than the ‘brute’ method, but it is only supported for PDP plots by some tree-based estimators. It is computed as follows. For a given point \(x\_S\), a weighted tree traversal is performed: if a split node involves an input feature of interest, the corresponding left or right branch is followed; otherwise both branches are followed, each branch being weighted by the fraction of training samples that entered that branch. Finally, the partial dependence is given by a weighted average of all the visited leaves values.
With the ‘brute’ method, the parameter `X` is used both for generating the grid of values \(x\_S\) and the complement feature values \(x\_C\). However with the ‘recursion’ method, `X` is only used for the grid values: implicitly, the \(x\_C\) values are those of the training data.
By default, the ‘recursion’ method is used for plotting PDPs on tree-based estimators that support it, and ‘brute’ is used for the rest.
Note
While both methods should be close in general, they might differ in some specific settings. The ‘brute’ method assumes the existence of the data points \((x\_S, x\_C^{(i)})\). When the features are correlated, such artificial samples may have a very low probability mass. The ‘brute’ and ‘recursion’ methods will likely disagree regarding the value of the partial dependence, because they will treat these unlikely samples differently. Remember, however, that the primary assumption for interpreting PDPs is that the features should be independent.
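A minimal sketch of selecting the computation method explicitly, again reusing `clf` and `X` from above; for gradient boosting the ‘recursion’ estimate is expected to match the ‘brute’ estimate of the `decision_function` response up to a constant offset, since it ignores the constant initial prediction of the boosting process:
```
from sklearn.inspection import partial_dependence

pd_brute = partial_dependence(clf, X, [0], method="brute",
                              response_method="decision_function")
pd_recursion = partial_dependence(clf, X, [0], method="recursion")
# the two averaged curves share the same grid; check that they only
# differ by a (roughly) constant shift
diff = pd_brute["average"] - pd_recursion["average"]
print(diff.max() - diff.min())
```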
scikit_learn 1.2. Linear and Quadratic Discriminant Analysis 1.2. Linear and Quadratic Discriminant Analysis
===============================================
Linear Discriminant Analysis ([`LinearDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis")) and Quadratic Discriminant Analysis ([`QuadraticDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.quadraticdiscriminantanalysis#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis")) are two classic classifiers, with, as their names suggest, a linear and a quadratic decision surface, respectively.
These classifiers are attractive because they have closed-form solutions that can be easily computed, are inherently multiclass, have proven to work well in practice, and have no hyperparameters to tune.
The plot shows decision boundaries for Linear Discriminant Analysis and Quadratic Discriminant Analysis. The bottom row demonstrates that Linear Discriminant Analysis can only learn linear boundaries, while Quadratic Discriminant Analysis can learn quadratic boundaries and is therefore more flexible.
1.2.1. Dimensionality reduction using Linear Discriminant Analysis
-------------------------------------------------------------------
[`LinearDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis") can be used to perform supervised dimensionality reduction, by projecting the input data to a linear subspace consisting of the directions which maximize the separation between classes (in a precise sense discussed in the mathematics section below). The dimension of the output is necessarily less than the number of classes, so this is in general a rather strong dimensionality reduction, and only makes sense in a multiclass setting.
This is implemented in the `transform` method. The desired dimensionality can be set using the `n_components` parameter. This parameter has no influence on the `fit` and `predict` methods.
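A minimal sketch of supervised dimensionality reduction with LDA, using the Iris dataset for illustration (3 classes, hence at most 2 output dimensions):
```
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit(X, y).transform(X)  # project onto the 2 discriminant directions
print(X_reduced.shape)                  # (150, 2)
```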
1.2.2. Mathematical formulation of the LDA and QDA classifiers
---------------------------------------------------------------
Both LDA and QDA can be derived from simple probabilistic models which model the class conditional distribution of the data \(P(X|y=k)\) for each class \(k\). Predictions can then be obtained by using Bayes’ rule, for each training sample \(x \in \mathcal{R}^d\):
\[P(y=k | x) = \frac{P(x | y=k) P(y=k)}{P(x)} = \frac{P(x | y=k) P(y = k)}{ \sum\_{l} P(x | y=l) \cdot P(y=l)}\] and we select the class \(k\) which maximizes this posterior probability.
More specifically, for linear and quadratic discriminant analysis, \(P(x|y)\) is modeled as a multivariate Gaussian distribution with density:
\[P(x | y=k) = \frac{1}{(2\pi)^{d/2} |\Sigma\_k|^{1/2}}\exp\left(-\frac{1}{2} (x-\mu\_k)^t \Sigma\_k^{-1} (x-\mu\_k)\right)\] where \(d\) is the number of features.
###
1.2.2.1. QDA
According to the model above, the log of the posterior is:
\[\begin{split}\log P(y=k | x) &= \log P(x | y=k) + \log P(y = k) + Cst \\ &= -\frac{1}{2} \log |\Sigma\_k| -\frac{1}{2} (x-\mu\_k)^t \Sigma\_k^{-1} (x-\mu\_k) + \log P(y = k) + Cst,\end{split}\] where the constant term \(Cst\) corresponds to the denominator \(P(x)\), in addition to other constant terms from the Gaussian. The predicted class is the one that maximises this log-posterior.
Note
**Relation with Gaussian Naive Bayes**
If in the QDA model one assumes that the covariance matrices are diagonal, then the inputs are assumed to be conditionally independent in each class, and the resulting classifier is equivalent to the Gaussian Naive Bayes classifier [`naive_bayes.GaussianNB`](generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB").
###
1.2.2.2. LDA
LDA is a special case of QDA, where the Gaussians for each class are assumed to share the same covariance matrix: \(\Sigma\_k = \Sigma\) for all \(k\). This reduces the log posterior to:
\[\log P(y=k | x) = -\frac{1}{2} (x-\mu\_k)^t \Sigma^{-1} (x-\mu\_k) + \log P(y = k) + Cst.\] The term \((x-\mu\_k)^t \Sigma^{-1} (x-\mu\_k)\) corresponds to the [Mahalanobis Distance](https://en.wikipedia.org/wiki/Mahalanobis_distance) between the sample \(x\) and the mean \(\mu\_k\). The Mahalanobis distance tells how close \(x\) is from \(\mu\_k\), while also accounting for the variance of each feature. We can thus interpret LDA as assigning \(x\) to the class whose mean is the closest in terms of Mahalanobis distance, while also accounting for the class prior probabilities.
The log-posterior of LDA can also be written [[3]](#id7) as:
\[\log P(y=k | x) = \omega\_k^t x + \omega\_{k0} + Cst.\] where \(\omega\_k = \Sigma^{-1} \mu\_k\) and \(\omega\_{k0} = -\frac{1}{2} \mu\_k^t\Sigma^{-1}\mu\_k + \log P (y = k)\). These quantities correspond to the `coef_` and `intercept_` attributes, respectively.
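As a quick check of this linear form, the per-class decision scores exposed by the estimator should coincide with \(\omega\_k^t x + \omega\_{k0}\) computed from `coef_` and `intercept_` (the Iris data below is used purely for illustration):
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis().fit(X, y)
scores = X @ lda.coef_.T + lda.intercept_       # omega_k^t x + omega_k0 for each class
assert np.allclose(scores, lda.decision_function(X))
```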
From the above formula, it is clear that LDA has a linear decision surface. In the case of QDA, there are no assumptions on the covariance matrices \(\Sigma\_k\) of the Gaussians, leading to quadratic decision surfaces. See [[1]](#id5) for more details.
1.2.3. Mathematical formulation of LDA dimensionality reduction
----------------------------------------------------------------
First note that the K means \(\mu\_k\) are vectors in \(\mathcal{R}^d\), and they lie in an affine subspace \(H\) of dimension at most \(K - 1\) (2 points lie on a line, 3 points lie on a plane, etc).
As mentioned above, we can interpret LDA as assigning \(x\) to the class whose mean \(\mu\_k\) is the closest in terms of Mahalanobis distance, while also accounting for the class prior probabilities. Alternatively, LDA is equivalent to first *sphering* the data so that the covariance matrix is the identity, and then assigning \(x\) to the closest mean in terms of Euclidean distance (still accounting for the class priors).
Computing Euclidean distances in this d-dimensional space is equivalent to first projecting the data points into \(H\), and computing the distances there (since the other dimensions will contribute equally to each class in terms of distance). In other words, if \(x\) is closest to \(\mu\_k\) in the original space, it will also be the case in \(H\). This shows that, implicit in the LDA classifier, there is a dimensionality reduction by linear projection onto a \(K-1\) dimensional space.
We can reduce the dimension even more, to a chosen \(L\), by projecting onto the linear subspace \(H\_L\) which maximizes the variance of the \(\mu^\*\_k\) after projection (in effect, we are doing a form of PCA for the transformed class means \(\mu^\*\_k\)). This \(L\) corresponds to the `n_components` parameter used in the [`transform`](generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.transform "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.transform") method. See [[1]](#id5) for more details.
1.2.4. Shrinkage and Covariance Estimator
------------------------------------------
Shrinkage is a form of regularization used to improve the estimation of covariance matrices in situations where the number of training samples is small compared to the number of features. In this scenario, the empirical sample covariance is a poor estimator, and shrinkage helps improve the generalization performance of the classifier. Shrinkage LDA can be used by setting the `shrinkage` parameter of the [`LinearDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis") class to ‘auto’. This automatically determines the optimal shrinkage parameter in an analytic way following the lemma introduced by Ledoit and Wolf [[2]](#id6). Note that currently shrinkage only works when setting the `solver` parameter to ‘lsqr’ or ‘eigen’.
The `shrinkage` parameter can also be manually set between 0 and 1. In particular, a value of 0 corresponds to no shrinkage (which means the empirical covariance matrix will be used) and a value of 1 corresponds to complete shrinkage (which means that the diagonal matrix of variances will be used as an estimate for the covariance matrix). Setting this parameter to a value between these two extrema will estimate a shrunk version of the covariance matrix.
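For example, a minimal sketch of both options (remember that shrinkage requires the ‘lsqr’ or ‘eigen’ solver):
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

clf_auto = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # Ledoit-Wolf lemma
clf_fixed = LinearDiscriminantAnalysis(solver="eigen", shrinkage=0.3)   # manual value in [0, 1]
```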
The shrunk Ledoit and Wolf estimator of covariance may not always be the best choice. For example, if the data are normally distributed, the Oracle Approximating Shrinkage estimator [`sklearn.covariance.OAS`](generated/sklearn.covariance.oas#sklearn.covariance.OAS "sklearn.covariance.OAS") yields a smaller Mean Squared Error than the one given by Ledoit and Wolf’s formula used with shrinkage=”auto”. In LDA, the data are assumed to be Gaussian conditionally on the class. If these assumptions hold, using LDA with the OAS estimator of covariance will yield a better classification accuracy than if Ledoit and Wolf or the empirical covariance estimator is used.
The covariance estimator can be chosen with the `covariance_estimator` parameter of the [`discriminant_analysis.LinearDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis") class. A covariance estimator should have a [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) method and a `covariance_` attribute like all covariance estimators in the [`sklearn.covariance`](classes#module-sklearn.covariance "sklearn.covariance") module.
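A minimal sketch of plugging in such an estimator (like shrinkage, `covariance_estimator` is only used by the ‘lsqr’ and ‘eigen’ solvers):
```
from sklearn.covariance import OAS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

clf = LinearDiscriminantAnalysis(solver="lsqr", covariance_estimator=OAS())
```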
1.2.5. Estimation algorithms
-----------------------------
Using LDA and QDA requires computing the log-posterior which depends on the class priors \(P(y=k)\), the class means \(\mu\_k\), and the covariance matrices.
The ‘svd’ solver is the default solver used for [`LinearDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis"), and it is the only available solver for [`QuadraticDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.quadraticdiscriminantanalysis#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis"). It can perform both classification and transform (for LDA). As it does not rely on the calculation of the covariance matrix, the ‘svd’ solver may be preferable in situations where the number of features is large. The ‘svd’ solver cannot be used with shrinkage. For QDA, the use of the SVD solver relies on the fact that the covariance matrix \(\Sigma\_k\) is, by definition, equal to \(\frac{1}{n - 1} X\_k^tX\_k = \frac{1}{n - 1} V S^2 V^t\) where \(V\) comes from the SVD of the (centered) matrix: \(X\_k = U S V^t\). It turns out that we can compute the log-posterior above without having to explicitly compute \(\Sigma\): computing \(S\) and \(V\) via the SVD of \(X\) is enough. For LDA, two SVDs are computed: the SVD of the centered input matrix \(X\) and the SVD of the class-wise mean vectors.
The ‘lsqr’ solver is an efficient algorithm that only works for classification. It needs to explicitly compute the covariance matrix \(\Sigma\), and supports shrinkage and custom covariance estimators. This solver computes the coefficients \(\omega\_k = \Sigma^{-1}\mu\_k\) by solving for \(\Sigma \omega = \mu\_k\), thus avoiding the explicit computation of the inverse \(\Sigma^{-1}\).
The ‘eigen’ solver is based on the optimization of the between class scatter to within class scatter ratio. It can be used for both classification and transform, and it supports shrinkage. However, the ‘eigen’ solver needs to compute the covariance matrix, so it might not be suitable for situations with a high number of features.
scikit_learn 1.10. Decision Trees 1.10. Decision Trees
====================
**Decision Trees (DTs)** are a non-parametric supervised learning method used for [classification](#tree-classification) and [regression](#tree-regression). The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. A tree can be seen as a piecewise constant approximation.
For instance, in the example below, decision trees learn from data to approximate a sine curve with a set of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the fitter the model.
Some advantages of decision trees are:
* Simple to understand and to interpret. Trees can be visualized.
* Requires little data preparation. Other techniques often require data normalization, the creation of dummy variables, and the removal of blank values. Note, however, that this module does not support missing values.
* The cost of using the tree (i.e., predicting data) is logarithmic in the number of data points used to train the tree.
* Able to handle both numerical and categorical data. However, the scikit-learn implementation does not support categorical variables for now. Other techniques are usually specialized in analyzing datasets that have only one type of variable. See [algorithms](#tree-algorithms) for more information.
* Able to handle multi-output problems.
* Uses a white box model. If a given situation is observable in a model, the explanation for the condition is easily explained by boolean logic. By contrast, in a black box model (e.g., in an artificial neural network), results may be more difficult to interpret.
* Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model.
* Performs well even if its assumptions are somewhat violated by the true model from which the data were generated.
The disadvantages of decision trees include:
* Decision-tree learners can create over-complex trees that do not generalize the data well. This is called overfitting. Mechanisms such as pruning, setting the minimum number of samples required at a leaf node or setting the maximum depth of the tree are necessary to avoid this problem.
* Decision trees can be unstable because small variations in the data might result in a completely different tree being generated. This problem is mitigated by using decision trees within an ensemble.
* Predictions of decision trees are neither smooth nor continuous, but piecewise constant approximations as seen in the above figure. Therefore, they are not good at extrapolation.
* The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.
* There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems.
* Decision tree learners create biased trees if some classes dominate. It is therefore recommended to balance the dataset prior to fitting with the decision tree.
1.10.1. Classification
-----------------------
[`DecisionTreeClassifier`](generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") is a class capable of performing multi-class classification on a dataset.
As with other classifiers, [`DecisionTreeClassifier`](generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") takes as input two arrays: an array X, sparse or dense, of shape `(n_samples, n_features)` holding the training samples, and an array Y of integer values, shape `(n_samples,)`, holding the class labels for the training samples:
```
>>> from sklearn import tree
>>> X = [[0, 0], [1, 1]]
>>> Y = [0, 1]
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, Y)
```
After being fitted, the model can then be used to predict the class of samples:
```
>>> clf.predict([[2., 2.]])
array([1])
```
If multiple classes share the same highest probability, the classifier will predict the class with the lowest index amongst those classes.
As an alternative to outputting a specific class, the probability of each class can be predicted, which is the fraction of training samples of the class in a leaf:
```
>>> clf.predict_proba([[2., 2.]])
array([[0., 1.]])
```
[`DecisionTreeClassifier`](generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") is capable of both binary (where the labels are [-1, 1]) classification and multiclass (where the labels are [0, …, K-1]) classification.
Using the Iris dataset, we can construct a tree as follows:
```
>>> from sklearn.datasets import load_iris
>>> from sklearn import tree
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, y)
```
Once trained, you can plot the tree with the [`plot_tree`](generated/sklearn.tree.plot_tree#sklearn.tree.plot_tree "sklearn.tree.plot_tree") function:
```
>>> tree.plot_tree(clf)
[...]
```
We can also export the tree in [Graphviz](https://www.graphviz.org/) format using the [`export_graphviz`](generated/sklearn.tree.export_graphviz#sklearn.tree.export_graphviz "sklearn.tree.export_graphviz") exporter. If you use the [conda](https://conda.io) package manager, the graphviz binaries and the python package can be installed with `conda install python-graphviz`.
Alternatively binaries for graphviz can be downloaded from the graphviz project homepage, and the Python wrapper installed from pypi with `pip install graphviz`.
Below is an example graphviz export of the above tree trained on the entire iris dataset; the results are saved in an output file `iris.pdf`:
```
>>> import graphviz
>>> dot_data = tree.export_graphviz(clf, out_file=None)
>>> graph = graphviz.Source(dot_data)
>>> graph.render("iris")
```
The [`export_graphviz`](generated/sklearn.tree.export_graphviz#sklearn.tree.export_graphviz "sklearn.tree.export_graphviz") exporter also supports a variety of aesthetic options, including coloring nodes by their class (or value for regression) and using explicit variable and class names if desired. Jupyter notebooks also render these plots inline automatically:
```
>>> dot_data = tree.export_graphviz(clf, out_file=None,
... feature_names=iris.feature_names,
... class_names=iris.target_names,
... filled=True, rounded=True,
... special_characters=True)
>>> graph = graphviz.Source(dot_data)
>>> graph
```
Alternatively, the tree can also be exported in textual format with the function [`export_text`](generated/sklearn.tree.export_text#sklearn.tree.export_text "sklearn.tree.export_text"). This method doesn’t require the installation of external libraries and is more compact:
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.tree import export_text
>>> iris = load_iris()
>>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
>>> decision_tree = decision_tree.fit(iris.data, iris.target)
>>> r = export_text(decision_tree, feature_names=iris['feature_names'])
>>> print(r)
|--- petal width (cm) <= 0.80
| |--- class: 0
|--- petal width (cm) > 0.80
| |--- petal width (cm) <= 1.75
| | |--- class: 1
| |--- petal width (cm) > 1.75
| | |--- class: 2
```
1.10.2. Regression
-------------------
Decision trees can also be applied to regression problems, using the [`DecisionTreeRegressor`](generated/sklearn.tree.decisiontreeregressor#sklearn.tree.DecisionTreeRegressor "sklearn.tree.DecisionTreeRegressor") class.
As in the classification setting, the fit method takes arrays X and y as arguments, except that in this case y is expected to have floating point values instead of integer values:
```
>>> from sklearn import tree
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> clf = tree.DecisionTreeRegressor()
>>> clf = clf.fit(X, y)
>>> clf.predict([[1, 1]])
array([0.5])
```
1.10.3. Multi-output problems
------------------------------
A multi-output problem is a supervised learning problem with several outputs to predict, that is when Y is a 2d array of shape `(n_samples, n_outputs)`.
When there is no correlation between the outputs, a very simple way to solve this kind of problem is to build n independent models, i.e. one for each output, and then to use those models to independently predict each one of the n outputs. However, because it is likely that the output values related to the same input are themselves correlated, an often better way is to build a single model capable of predicting simultaneously all n outputs. First, it requires lower training time since only a single estimator is built. Second, the generalization accuracy of the resulting estimator may often be increased.
With regard to decision trees, this strategy can readily be used to support multi-output problems. This requires the following changes:
* Store n output values in leaves, instead of 1;
* Use splitting criteria that compute the average reduction across all n outputs.
This module offers support for multi-output problems by implementing this strategy in both [`DecisionTreeClassifier`](generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier") and [`DecisionTreeRegressor`](generated/sklearn.tree.decisiontreeregressor#sklearn.tree.DecisionTreeRegressor "sklearn.tree.DecisionTreeRegressor"). If a decision tree is fit on an output array Y of shape `(n_samples, n_outputs)` then the resulting estimator will:
* Output n\_output values upon `predict`;
* Output a list of n\_output arrays of class probabilities upon `predict_proba`.
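A minimal sketch of this behaviour on synthetic data (the sine/cosine targets below are illustrative, not part of the API):
```
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(80, 1), axis=0)                      # single input feature
Y = np.column_stack([np.sin(X).ravel(), np.cos(X).ravel()])   # two outputs per sample

regr = DecisionTreeRegressor(max_depth=4).fit(X, Y)
print(regr.predict([[1.0]]).shape)   # (1, 2): one predicted value per output
```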
The use of multi-output trees for regression is demonstrated in [Multi-output Decision Tree Regression](../auto_examples/tree/plot_tree_regression_multioutput#sphx-glr-auto-examples-tree-plot-tree-regression-multioutput-py). In this example, the input X is a single real value and the outputs Y are the sine and cosine of X.
The use of multi-output trees for classification is demonstrated in [Face completion with multi-output estimators](../auto_examples/miscellaneous/plot_multioutput_face_completion#sphx-glr-auto-examples-miscellaneous-plot-multioutput-face-completion-py). In this example, the inputs X are the pixels of the upper half of faces and the outputs Y are the pixels of the lower half of those faces.
1.10.4. Complexity
-------------------
In general, the run time cost to construct a balanced binary tree is \(O(n\_{samples}n\_{features}\log(n\_{samples}))\) and query time \(O(\log(n\_{samples}))\). Although the tree construction algorithm attempts to generate balanced trees, they will not always be balanced. Assuming that the subtrees remain approximately balanced, the cost at each node consists of searching through \(O(n\_{features})\) to find the feature that offers the largest reduction in the impurity criterion, e.g. log loss (which is equivalent to an information gain). This has a cost of \(O(n\_{features}n\_{samples}\log(n\_{samples}))\) at each node, leading to a total cost over the entire tree (by summing the cost at each node) of \(O(n\_{features}n\_{samples}^{2}\log(n\_{samples}))\).
1.10.5. Tips on practical use
------------------------------
* Decision trees tend to overfit on data with a large number of features. Getting the right ratio of samples to number of features is important, since a tree with few samples in high dimensional space is very likely to overfit.
* Consider performing dimensionality reduction ([PCA](decomposition#pca), [ICA](decomposition#ica), or [Feature selection](feature_selection#feature-selection)) beforehand to give your tree a better chance of finding features that are discriminative.
* [Understanding the decision tree structure](../auto_examples/tree/plot_unveil_tree_structure#sphx-glr-auto-examples-tree-plot-unveil-tree-structure-py) will help in gaining more insights about how the decision tree makes predictions, which is important for understanding the important features in the data.
* Visualize your tree as you are training by using the `export` function. Use `max_depth=3` as an initial tree depth to get a feel for how the tree is fitting to your data, and then increase the depth.
* Remember that the number of samples required to populate the tree doubles for each additional level the tree grows to. Use `max_depth` to control the size of the tree to prevent overfitting.
* Use `min_samples_split` or `min_samples_leaf` to ensure that multiple samples inform every decision in the tree, by controlling which splits will be considered. A very small number will usually mean the tree will overfit, whereas a large number will prevent the tree from learning the data. Try `min_samples_leaf=5` as an initial value. If the sample size varies greatly, a float number can be used as percentage in these two parameters. While `min_samples_split` can create arbitrarily small leaves, `min_samples_leaf` guarantees that each leaf has a minimum size, avoiding low-variance, over-fit leaf nodes in regression problems. For classification with few classes, `min_samples_leaf=1` is often the best choice.
Note that `min_samples_split` considers samples directly and independently of `sample_weight`, if provided (e.g. a node with m weighted samples is still treated as having exactly m samples). Consider `min_weight_fraction_leaf` or `min_impurity_decrease` if accounting for sample weights is required at splits.
* Balance your dataset before training to prevent the tree from being biased toward the classes that are dominant. Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (`sample_weight`) for each class to the same value. Also note that weight-based pre-pruning criteria, such as `min_weight_fraction_leaf`, will then be less biased toward dominant classes than criteria that are not aware of the sample weights, like `min_samples_leaf`.
* If the samples are weighted, it will be easier to optimize the tree structure using a weight-based pre-pruning criterion such as `min_weight_fraction_leaf`, which ensures that leaf nodes contain at least a fraction of the overall sum of the sample weights.
* All decision trees use `np.float32` arrays internally. If training data is not in this format, a copy of the dataset will be made.
* If the input matrix X is very sparse, it is recommended to convert to sparse `csc_matrix` before calling fit and sparse `csr_matrix` before calling predict. Training time can be orders of magnitude faster for a sparse matrix input compared to a dense matrix when features have zero values in most of the samples.
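As a minimal sketch of the sparse-input recommendation above (the mostly-zero synthetic data and its shape are illustrative):
```
import numpy as np
from scipy.sparse import csc_matrix, csr_matrix
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.binomial(1, 0.01, size=(200, 50)).astype(np.float32)   # mostly zeros
y = rng.randint(0, 2, size=200)

clf = DecisionTreeClassifier(random_state=0).fit(csc_matrix(X), y)   # CSC format for fit
pred = clf.predict(csr_matrix(X))                                    # CSR format for predict
```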
1.10.6. Tree algorithms: ID3, C4.5, C5.0 and CART
--------------------------------------------------
What are all the various decision tree algorithms and how do they differ from each other? Which one is implemented in scikit-learn?
[ID3](https://en.wikipedia.org/wiki/ID3_algorithm) (Iterative Dichotomiser 3) was developed in 1986 by Ross Quinlan. The algorithm creates a multiway tree, finding for each node (i.e. in a greedy manner) the categorical feature that will yield the largest information gain for categorical targets. Trees are grown to their maximum size and then a pruning step is usually applied to improve the ability of the tree to generalize to unseen data.
C4.5 is the successor to ID3 and removed the restriction that features must be categorical by dynamically defining a discrete attribute (based on numerical variables) that partitions the continuous attribute value into a discrete set of intervals. C4.5 converts the trained trees (i.e. the output of the ID3 algorithm) into sets of if-then rules. The accuracy of each rule is then evaluated to determine the order in which they should be applied. Pruning is done by removing a rule’s precondition if the accuracy of the rule improves without it.
C5.0 is Quinlan’s latest version, released under a proprietary license. It uses less memory and builds smaller rulesets than C4.5 while being more accurate.
CART (Classification and Regression Trees) is very similar to C4.5, but it differs in that it supports numerical target variables (regression) and does not compute rule sets. CART constructs binary trees using the feature and threshold that yield the largest information gain at each node.
scikit-learn uses an optimized version of the CART algorithm; however, the scikit-learn implementation does not support categorical variables for now.
1.10.7. Mathematical formulation
---------------------------------
Given training vectors \(x\_i \in R^n\), i=1,…, l and a label vector \(y \in R^l\), a decision tree recursively partitions the feature space such that the samples with the same labels or similar target values are grouped together.
Let the data at node \(m\) be represented by \(Q\_m\) with \(n\_m\) samples. For each candidate split \(\theta = (j, t\_m)\) consisting of a feature \(j\) and threshold \(t\_m\), partition the data into \(Q\_m^{left}(\theta)\) and \(Q\_m^{right}(\theta)\) subsets
\[ \begin{align}\begin{aligned}Q\_m^{left}(\theta) = \{(x, y) | x\_j \leq t\_m\}\\Q\_m^{right}(\theta) = Q\_m \setminus Q\_m^{left}(\theta)\end{aligned}\end{align} \] The quality of a candidate split of node \(m\) is then computed using an impurity function or loss function \(H()\), the choice of which depends on the task being solved (classification or regression)
\[G(Q\_m, \theta) = \frac{n\_m^{left}}{n\_m} H(Q\_m^{left}(\theta)) + \frac{n\_m^{right}}{n\_m} H(Q\_m^{right}(\theta))\] Select the parameters that minimise the impurity
\[\theta^\* = \operatorname{argmin}\_\theta G(Q\_m, \theta)\] Recurse for subsets \(Q\_m^{left}(\theta^\*)\) and \(Q\_m^{right}(\theta^\*)\) until the maximum allowable depth is reached, \(n\_m < \min\_{samples}\) or \(n\_m = 1\).
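The greedy split search above can be illustrated with a toy sketch for a single feature. This is only a sketch of the idea, not the actual optimized implementation: the impurity function `H` is passed in as a callable, and the Gini impurity used as a default here is defined in the next subsection.
```
import numpy as np

def gini(labels):
    # H(Q) = sum_k p_k (1 - p_k) for the class labels in a node
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p * (1 - p))

def best_split(x, y, H=gini):
    # Greedy search over thresholds t for one feature x, minimising
    # G(Q, theta) = n_left/n * H(Q_left) + n_right/n * H(Q_right)
    best_t, best_G = None, np.inf
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        G = len(left) / len(y) * H(left) + len(right) / len(y) * H(right)
        if G < best_G:
            best_t, best_G = t, G
    return best_t, best_G
```
Note that scikit-learn additionally searches over all features, not just one, and places thresholds at midpoints between observed feature values.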
###
1.10.7.1. Classification criteria
If a target is a classification outcome taking on values 0,1,…,K-1, for node \(m\), let
\[p\_{mk} = \frac{1}{n\_m} \sum\_{y \in Q\_m} I(y = k)\] be the proportion of class k observations in node \(m\). If \(m\) is a terminal node, `predict_proba` for this region is set to \(p\_{mk}\). Common measures of impurity are the following.
Gini:
\[H(Q\_m) = \sum\_k p\_{mk} (1 - p\_{mk})\] Log Loss or Entropy:
\[H(Q\_m) = - \sum\_k p\_{mk} \log(p\_{mk})\] Note
The entropy criterion computes the Shannon entropy of the possible classes. It takes the class frequencies of the training data points that reached a given leaf \(m\) as their probability. Using the **Shannon entropy as tree node splitting criterion is equivalent to minimizing the log loss** (also known as cross-entropy and multinomial deviance) between the true labels \(y\_i\) and the probabilistic predictions \(T\_k(x\_i)\) of the tree model \(T\) for class \(k\).
To see this, first recall that the log loss of a tree model \(T\) computed on a dataset \(D\) is defined as follows:
\[\mathrm{LL}(D, T) = -\frac{1}{n} \sum\_{(x\_i, y\_i) \in D} \sum\_k I(y\_i = k) \log(T\_k(x\_i))\] where \(D\) is a training dataset of \(n\) pairs \((x\_i, y\_i)\).
In a classification tree, the predicted class probabilities within leaf nodes are constant, that is: for all \((x\_i, y\_i) \in Q\_m\), one has: \(T\_k(x\_i) = p\_{mk}\) for each class \(k\).
This property makes it possible to rewrite \(\mathrm{LL}(D, T)\) as the sum of the Shannon entropies computed for each leaf of \(T\) weighted by the number of training data points that reached each leaf:
\[\mathrm{LL}(D, T) = \sum\_{m \in T} \frac{n\_m}{n} H(Q\_m)\]
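This equivalence can be checked numerically; a small sketch (the shallow iris tree is illustrative):
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import log_loss
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0).fit(X, y)

proba = clf.predict_proba(X)   # constant within each leaf: p_mk
leaf_id = clf.apply(X)         # leaf index of each training sample

# sum over leaves of (n_m / n) * H(Q_m), with H the Shannon entropy
weighted_entropy = 0.0
for m in np.unique(leaf_id):
    p = proba[leaf_id == m][0]
    p = p[p > 0]
    weighted_entropy += (leaf_id == m).mean() * -(p * np.log(p)).sum()

print(np.isclose(log_loss(y, proba), weighted_entropy))   # True (up to numerical precision)
```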
###
1.10.7.2. Regression criteria
If the target is a continuous value, then for node \(m\), common criteria to minimize when determining locations for future splits are Mean Squared Error (MSE or L2 error), Poisson deviance, and Mean Absolute Error (MAE or L1 error). MSE and Poisson deviance both set the predicted value of terminal nodes to the learned mean value \(\bar{y}\_m\) of the node, whereas the MAE sets the predicted value of terminal nodes to the median \(median(y)\_m\).
Mean Squared Error:
\[ \begin{align}\begin{aligned}\bar{y}\_m = \frac{1}{n\_m} \sum\_{y \in Q\_m} y\\H(Q\_m) = \frac{1}{n\_m} \sum\_{y \in Q\_m} (y - \bar{y}\_m)^2\end{aligned}\end{align} \] Half Poisson deviance:
\[H(Q\_m) = \frac{1}{n\_m} \sum\_{y \in Q\_m} (y \log\frac{y}{\bar{y}\_m} - y + \bar{y}\_m)\] Setting `criterion="poisson"` might be a good choice if your target is a count or a frequency (count per some unit). In any case, \(y >= 0\) is a necessary condition to use this criterion. Note that it fits much slower than the MSE criterion.
Mean Absolute Error:
\[ \begin{align}\begin{aligned}median(y)\_m = \underset{y \in Q\_m}{\mathrm{median}}(y)\\H(Q\_m) = \frac{1}{n\_m} \sum\_{y \in Q\_m} |y - median(y)\_m|\end{aligned}\end{align} \] Note that it fits much slower than the MSE criterion.
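For count-valued targets, the Poisson criterion mentioned above can be selected directly; a minimal sketch on synthetic counts (the data and depth are illustrative):
```
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 3))
y = rng.poisson(lam=3, size=100)   # non-negative counts, as required by the criterion

reg = DecisionTreeRegressor(criterion="poisson", max_depth=3, random_state=0).fit(X, y)
```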
1.10.8. Minimal Cost-Complexity Pruning
----------------------------------------
Minimal cost-complexity pruning is an algorithm used to prune a tree to avoid over-fitting, described in Chapter 3 of [[BRE]](#bre). This algorithm is parameterized by \(\alpha\ge0\) known as the complexity parameter. The complexity parameter is used to define the cost-complexity measure, \(R\_\alpha(T)\) of a given tree \(T\):
\[R\_\alpha(T) = R(T) + \alpha|\widetilde{T}|\] where \(|\widetilde{T}|\) is the number of terminal nodes in \(T\) and \(R(T)\) is traditionally defined as the total misclassification rate of the terminal nodes. Alternatively, scikit-learn uses the total sample weighted impurity of the terminal nodes for \(R(T)\). As shown above, the impurity of a node depends on the criterion. Minimal cost-complexity pruning finds the subtree of \(T\) that minimizes \(R\_\alpha(T)\).
The cost complexity measure of a single node is \(R\_\alpha(t)=R(t)+\alpha\). The branch, \(T\_t\), is defined to be a tree where node \(t\) is its root. In general, the impurity of a node is greater than the sum of impurities of its terminal nodes, \(R(T\_t)<R(t)\). However, the cost complexity measure of a node, \(t\), and its branch, \(T\_t\), can be equal depending on \(\alpha\). We define the effective \(\alpha\) of a node to be the value where they are equal, \(R\_\alpha(T\_t)=R\_\alpha(t)\) or \(\alpha\_{eff}(t)=\frac{R(t)-R(T\_t)}{|T|-1}\). A non-terminal node with the smallest value of \(\alpha\_{eff}\) is the weakest link and will be pruned. This process stops when the pruned tree’s minimal \(\alpha\_{eff}\) is greater than the `ccp_alpha` parameter.
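In practice, pruning is controlled through the `ccp_alpha` parameter, and candidate effective alphas can be obtained with `cost_complexity_pruning_path`; a brief sketch (the dataset and the choice of alpha are illustrative):
```
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

path = clf.cost_complexity_pruning_path(X, y)        # effective alphas and total leaf impurities
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]   # pick one candidate alpha (illustrative)

pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X, y)
print(pruned.get_n_leaves())   # smaller than the unpruned tree
```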
3.2. Tuning the hyper-parameters of an estimator
================================================
Hyper-parameters are parameters that are not directly learnt within estimators. In scikit-learn they are passed as arguments to the constructor of the estimator classes. Typical examples include `C`, `kernel` and `gamma` for Support Vector Classifier, `alpha` for Lasso, etc.
It is possible and recommended to search the hyper-parameter space for the best [cross validation](cross_validation#cross-validation) score.
Any parameter provided when constructing an estimator may be optimized in this manner. Specifically, to find the names and current values for all parameters for a given estimator, use:
```
estimator.get_params()
```
A search consists of:
* an estimator (regressor or classifier such as `sklearn.svm.SVC()`);
* a parameter space;
* a method for searching or sampling candidates;
* a cross-validation scheme; and
* a [score function](#gridsearch-scoring).
Two generic approaches to parameter search are provided in scikit-learn: for given values, [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") exhaustively considers all parameter combinations, while [`RandomizedSearchCV`](generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV") can sample a given number of candidates from a parameter space with a specified distribution. Both these tools have successive halving counterparts [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") and [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV"), which can be much faster at finding a good parameter combination.
After describing these tools we detail [best practices](#grid-search-tips) applicable to these approaches. Some models allow for specialized, efficient parameter search strategies, outlined in [Alternatives to brute force parameter search](#alternative-cv).
Note that it is common that a small subset of those parameters can have a large impact on the predictive or computation performance of the model while others can be left to their default values. It is recommended to read the docstring of the estimator class to get a finer understanding of their expected behavior, possibly by reading the enclosed reference to the literature.
3.2.1. Exhaustive Grid Search
------------------------------
The grid search provided by [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") exhaustively generates candidates from a grid of parameter values specified with the `param_grid` parameter. For instance, the following `param_grid`:
```
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
```
specifies that two grids should be explored: one with a linear kernel and C values in [1, 10, 100, 1000], and the second one with an RBF kernel, and the cross-product of C values ranging in [1, 10, 100, 1000] and gamma values in [0.001, 0.0001].
The [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") instance implements the usual estimator API: when “fitting” it on a dataset all the possible combinations of parameter values are evaluated and the best combination is retained.
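As a brief sketch, fitting a search over the `param_grid` defined above on synthetic data (the dataset is illustrative):
```
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)   # param_grid as defined above
best = search.best_params_   # best combination found on this (synthetic) data
```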
3.2.2. Randomized Parameter Optimization
-----------------------------------------
While using a grid of parameter settings is currently the most widely used method for parameter optimization, other search methods have more favorable properties. [`RandomizedSearchCV`](generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV") implements a randomized search over parameters, where each setting is sampled from a distribution over possible parameter values. This has two main benefits over an exhaustive search:
* A budget can be chosen independent of the number of parameters and possible values.
* Adding parameters that do not influence the performance does not decrease efficiency.
Specifying how parameters should be sampled is done using a dictionary, very similar to specifying parameters for [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV"). Additionally, a computation budget, being the number of sampled candidates or sampling iterations, is specified using the `n_iter` parameter. For each parameter, either a distribution over possible values or a list of discrete choices (which will be sampled uniformly) can be specified:
```
{'C': scipy.stats.expon(scale=100), 'gamma': scipy.stats.expon(scale=.1),
'kernel': ['rbf'], 'class_weight':['balanced', None]}
```
This example uses the `scipy.stats` module, which contains many useful distributions for sampling parameters, such as `expon`, `gamma`, `uniform` or `randint`.
In principle, any function can be passed that provides a `rvs` (random variate sample) method to sample a value. A call to the `rvs` function should provide independent random samples from possible parameter values on consecutive calls.
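A minimal sketch using the distributions above (the dataset and the `n_iter` value are illustrative):
```
import scipy.stats
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
param_distributions = {
    'C': scipy.stats.expon(scale=100),
    'gamma': scipy.stats.expon(scale=.1),
    'kernel': ['rbf'],
    'class_weight': ['balanced', None],
}
search = RandomizedSearchCV(SVC(), param_distributions, n_iter=20,
                            random_state=0, cv=5).fit(X, y)
```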
Warning
The distributions in `scipy.stats` prior to version scipy 0.16 do not allow specifying a random state. Instead, they use the global numpy random state, which can be seeded via `np.random.seed` or set using `np.random.set_state`. However, beginning with scikit-learn 0.18, the [`sklearn.model_selection`](classes#module-sklearn.model_selection "sklearn.model_selection") module sets the random state provided by the user if scipy >= 0.16 is also available.
For continuous parameters, such as `C` above, it is important to specify a continuous distribution to take full advantage of the randomization. This way, increasing `n_iter` will always lead to a finer search.
A continuous log-uniform random variable is available through `loguniform`. This is a continuous version of log-spaced parameters. For example, to specify `C` above, `loguniform(1, 100)` can be used instead of `[1, 10, 100]` or `np.logspace(0, 2, num=1000)`. This is an alias to [scipy.stats.loguniform](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.loguniform.html).
Mirroring the example above in grid search, we can specify a continuous random variable that is log-uniformly distributed between `1e0` and `1e3`:
```
from sklearn.utils.fixes import loguniform
{'C': loguniform(1e0, 1e3),
'gamma': loguniform(1e-4, 1e-3),
'kernel': ['rbf'],
'class_weight':['balanced', None]}
```
3.2.3. Searching for optimal parameters with successive halving
----------------------------------------------------------------
Scikit-learn also provides the [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") and [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV") estimators that can be used to search a parameter space using successive halving [[1]](#id3) [[2]](#id4). Successive halving (SH) is like a tournament among candidate parameter combinations. SH is an iterative selection process where all candidates (the parameter combinations) are evaluated with a small amount of resources at the first iteration. Only some of these candidates are selected for the next iteration, which will be allocated more resources. For parameter tuning, the resource is typically the number of training samples, but it can also be an arbitrary numeric parameter such as `n_estimators` in a random forest.
As illustrated in the figure below, only a subset of candidates ‘survive’ until the last iteration. These are the candidates that have consistently ranked among the top-scoring candidates across all iterations. Each iteration is allocated an increasing amount of resources per candidate, here the number of samples.
We here briefly describe the main parameters, but each parameter and their interactions are described in more details in the sections below. The `factor` (> 1) parameter controls the rate at which the resources grow, and the rate at which the number of candidates decreases. In each iteration, the number of resources per candidate is multiplied by `factor` and the number of candidates is divided by the same factor. Along with `resource` and `min_resources`, `factor` is the most important parameter to control the search in our implementation, though a value of 3 usually works well. `factor` effectively controls the number of iterations in [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") and the number of candidates (by default) and iterations in [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV"). `aggressive_elimination=True` can also be used if the number of available resources is small. More control is available through tuning the `min_resources` parameter.
These estimators are still **experimental**: their predictions and their API might change without any deprecation cycle. To use them, you need to explicitly import `enable_halving_search_cv`:
```
>>> # explicitly require this experimental feature
>>> from sklearn.experimental import enable_halving_search_cv # noqa
>>> # now you can import normally from model_selection
>>> from sklearn.model_selection import HalvingGridSearchCV
>>> from sklearn.model_selection import HalvingRandomSearchCV
```
###
3.2.3.1. Choosing `min_resources` and the number of candidates
Beside `factor`, the two main parameters that influence the behaviour of a successive halving search are the `min_resources` parameter, and the number of candidates (or parameter combinations) that are evaluated. `min_resources` is the amount of resources allocated at the first iteration for each candidate. The number of candidates is specified directly in [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV"), and is determined from the `param_grid` parameter of [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV").
Consider a case where the resource is the number of samples, and where we have 1000 samples. In theory, with `min_resources=10` and `factor=2`, we are able to run **at most** 7 iterations with the following number of samples: `[10, 20, 40, 80, 160, 320, 640]`.
But depending on the number of candidates, we might run fewer than 7 iterations: if we start with a **small** number of candidates, the last iteration might use fewer than 640 samples, which means not using all the available resources (samples). For example if we start with 5 candidates, we only need 2 iterations: 5 candidates for the first iteration, then `5 // 2 = 2` candidates at the second iteration, after which we know which candidate performs the best (so we don’t need a third one). We would only be using at most 20 samples, which is a waste since we have 1000 samples at our disposal. On the other hand, if we start with a **high** number of candidates, we might end up with a lot of candidates at the last iteration, which may not always be ideal: it means that many candidates will run with the full resources, basically reducing the procedure to standard search.
In the case of [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV"), the number of candidates is set by default such that the last iteration uses as much of the available resources as possible. For [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV"), the number of candidates is determined by the `param_grid` parameter. Changing the value of `min_resources` will impact the number of possible iterations, and as a result will also have an effect on the ideal number of candidates.
Another consideration when choosing `min_resources` is whether or not it is easy to discriminate between good and bad candidates with a small amount of resources. For example, if you need a lot of samples to distinguish between good and bad parameters, a high `min_resources` is recommended. On the other hand if the distinction is clear even with a small amount of samples, then a small `min_resources` may be preferable since it would speed up the computation.
Notice in the example above that the last iteration does not use the maximum amount of resources available: 1000 samples are available, yet only 640 are used, at most. By default, both [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV") and [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") try to use as many resources as possible in the last iteration, with the constraint that this amount of resources must be a multiple of both `min_resources` and `factor` (this constraint will be clear in the next section). [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV") achieves this by sampling the right amount of candidates, while [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") achieves this by properly setting `min_resources`. Please see [Exhausting the available resources](#exhausting-the-resources) for details.
###
3.2.3.2. Amount of resource and number of candidates at each iteration
At any iteration `i`, each candidate is allocated a given amount of resources which we denote `n_resources_i`. This quantity is controlled by the parameters `factor` and `min_resources` as follows (`factor` is strictly greater than 1):
```
n_resources_i = factor**i * min_resources,
```
or equivalently:
```
n_resources_{i+1} = n_resources_i * factor
```
where `min_resources == n_resources_0` is the amount of resources used at the first iteration. `factor` also defines the proportions of candidates that will be selected for the next iteration:
```
n_candidates_i = n_candidates // (factor ** i)
```
or equivalently:
```
n_candidates_0 = n_candidates
n_candidates_{i+1} = n_candidates_i // factor
```
So in the first iteration, we use `min_resources` resources `n_candidates` times. In the second iteration, we use `min_resources * factor` resources `n_candidates // factor` times. The third again multiplies the resources per candidate and divides the number of candidates. This process stops when the maximum amount of resource per candidate is reached, or when we have identified the best candidate. The best candidate is identified at the iteration that is evaluating `factor` or fewer candidates (see just below for an explanation).
Here is an example with `min_resources=3` and `factor=2`, starting with 70 candidates:
| `n_resources_i` | `n_candidates_i` |
| --- | --- |
| 3 (=min\_resources) | 70 (=n\_candidates) |
| 3 \* 2 = 6 | 70 // 2 = 35 |
| 6 \* 2 = 12 | 35 // 2 = 17 |
| 12 \* 2 = 24 | 17 // 2 = 8 |
| 24 \* 2 = 48 | 8 // 2 = 4 |
| 48 \* 2 = 96 | 4 // 2 = 2 |
We can note that:
* the process stops at the first iteration which evaluates `factor=2` candidates: the best candidate is the best out of these 2 candidates. It is not necessary to run an additional iteration, since it would only evaluate one candidate (namely the best one, which we have already identified). For this reason, in general, we want the last iteration to run at most `factor` candidates. If the last iteration evaluates more than `factor` candidates, then this last iteration reduces to a regular search (as in [`RandomizedSearchCV`](generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV") or [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV")).
* each `n_resources_i` is a multiple of both `factor` and `min_resources` (which is confirmed by its definition above).
The amount of resources that is used at each iteration can be found in the `n_resources_` attribute.
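The schedule in the table above can be reproduced with a short loop; a sketch that ignores any `max_resources` cap:
```
factor, n_resources, n_candidates = 2, 3, 70
schedule = []
while True:
    schedule.append((n_resources, n_candidates))
    if n_candidates <= factor:
        break   # the last iteration evaluates at most `factor` candidates
    n_resources *= factor
    n_candidates //= factor
print(schedule)
# [(3, 70), (6, 35), (12, 17), (24, 8), (48, 4), (96, 2)]
```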
###
3.2.3.3. Choosing a resource
By default, the resource is defined in terms of number of samples. That is, each iteration will use an increasing amount of samples to train on. You can however manually specify a parameter to use as the resource with the `resource` parameter. Here is an example where the resource is defined in terms of the number of estimators of a random forest:
```
>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.experimental import enable_halving_search_cv # noqa
>>> from sklearn.model_selection import HalvingGridSearchCV
>>> import pandas as pd
>>>
>>> param_grid = {'max_depth': [3, 5, 10],
... 'min_samples_split': [2, 5, 10]}
>>> base_estimator = RandomForestClassifier(random_state=0)
>>> X, y = make_classification(n_samples=1000, random_state=0)
>>> sh = HalvingGridSearchCV(base_estimator, param_grid, cv=5,
... factor=2, resource='n_estimators',
... max_resources=30).fit(X, y)
>>> sh.best_estimator_
RandomForestClassifier(max_depth=5, n_estimators=24, random_state=0)
```
Note that it is not possible to budget on a parameter that is part of the parameter grid.
###
3.2.3.4. Exhausting the available resources
As mentioned above, the number of resources that is used at each iteration depends on the `min_resources` parameter. If you have a lot of resources available but start with a low number of resources, some of them might be wasted (i.e. not used):
```
>>> from sklearn.datasets import make_classification
>>> from sklearn.svm import SVC
>>> from sklearn.experimental import enable_halving_search_cv # noqa
>>> from sklearn.model_selection import HalvingGridSearchCV
>>> import pandas as pd
>>> param_grid= {'kernel': ('linear', 'rbf'),
... 'C': [1, 10, 100]}
>>> base_estimator = SVC(gamma='scale')
>>> X, y = make_classification(n_samples=1000)
>>> sh = HalvingGridSearchCV(base_estimator, param_grid, cv=5,
... factor=2, min_resources=20).fit(X, y)
>>> sh.n_resources_
[20, 40, 80]
```
The search process will only use 80 resources at most, while our maximum amount of available resources is `n_samples=1000`. Here, we have `min_resources = r_0 = 20`.
For [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV"), by default, the `min_resources` parameter is set to ‘exhaust’. This means that `min_resources` is automatically set such that the last iteration can use as many resources as possible, within the `max_resources` limit:
```
>>> sh = HalvingGridSearchCV(base_estimator, param_grid, cv=5,
... factor=2, min_resources='exhaust').fit(X, y)
>>> sh.n_resources_
[250, 500, 1000]
```
`min_resources` was here automatically set to 250, which results in the last iteration using all the resources. The exact value that is used depends on the number of candidate parameters, on `max_resources` and on `factor`.
For [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV"), exhausting the resources can be done in 2 ways:
* by setting `min_resources='exhaust'`, just like for [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV");
* by setting `n_candidates='exhaust'`.
Both options are mutually exclusive: using `min_resources='exhaust'` requires knowing the number of candidates, and symmetrically `n_candidates='exhaust'` requires knowing `min_resources`.
In general, exhausting the total number of resources leads to a better final candidate parameter, and is slightly more time-intensive.
###
3.2.3.5. Aggressive elimination of candidates
Ideally, we want the last iteration to evaluate `factor` candidates (see [Amount of resource and number of candidates at each iteration](#amount-of-resource-and-number-of-candidates)). We then just have to pick the best one. When the number of available resources is small with respect to the number of candidates, the last iteration may have to evaluate more than `factor` candidates:
```
>>> from sklearn.datasets import make_classification
>>> from sklearn.svm import SVC
>>> from sklearn.experimental import enable_halving_search_cv # noqa
>>> from sklearn.model_selection import HalvingGridSearchCV
>>> import pandas as pd
>>>
>>>
>>> param_grid = {'kernel': ('linear', 'rbf'),
... 'C': [1, 10, 100]}
>>> base_estimator = SVC(gamma='scale')
>>> X, y = make_classification(n_samples=1000)
>>> sh = HalvingGridSearchCV(base_estimator, param_grid, cv=5,
... factor=2, max_resources=40,
... aggressive_elimination=False).fit(X, y)
>>> sh.n_resources_
[20, 40]
>>> sh.n_candidates_
[6, 3]
```
Since we cannot use more than `max_resources=40` resources, the process has to stop at the second iteration which evaluates more than `factor=2` candidates.
Using the `aggressive_elimination` parameter, you can force the search process to end up with fewer than `factor` candidates at the last iteration. To do this, the process will eliminate as many candidates as necessary using `min_resources` resources:
```
>>> sh = HalvingGridSearchCV(base_estimator, param_grid, cv=5,
... factor=2,
... max_resources=40,
... aggressive_elimination=True,
... ).fit(X, y)
>>> sh.n_resources_
[20, 20, 40]
>>> sh.n_candidates_
[6, 3, 2]
```
Notice that we end with 2 candidates at the last iteration since we have eliminated enough candidates during the first iterations, using `n_resources = min_resources = 20`.
###
3.2.3.6. Analyzing results with the `cv_results_` attribute
The `cv_results_` attribute contains useful information for analyzing the results of a search. It can be converted to a pandas dataframe with `df = pd.DataFrame(est.cv_results_)`. The `cv_results_` attribute of [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") and [`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV") is similar to that of [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") and [`RandomizedSearchCV`](generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV"), with additional information related to the successive halving process.
Here is an example with some of the columns of a (truncated) dataframe:
| | iter | n\_resources | mean\_test\_score | params |
| --- | --- | --- | --- | --- |
| 0 | 0 | 125 | 0.983667 | {‘criterion’: ‘log\_loss’, ‘max\_depth’: None, ‘max\_features’: 9, ‘min\_samples\_split’: 5} |
| 1 | 0 | 125 | 0.983667 | {‘criterion’: ‘gini’, ‘max\_depth’: None, ‘max\_features’: 8, ‘min\_samples\_split’: 7} |
| 2 | 0 | 125 | 0.983667 | {‘criterion’: ‘gini’, ‘max\_depth’: None, ‘max\_features’: 10, ‘min\_samples\_split’: 10} |
| 3 | 0 | 125 | 0.983667 | {‘criterion’: ‘log\_loss’, ‘max\_depth’: None, ‘max\_features’: 6, ‘min\_samples\_split’: 6} |
| … | … | … | … | … |
| 15 | 2 | 500 | 0.951958 | {‘criterion’: ‘log\_loss’, ‘max\_depth’: None, ‘max\_features’: 9, ‘min\_samples\_split’: 10} |
| 16 | 2 | 500 | 0.947958 | {‘criterion’: ‘gini’, ‘max\_depth’: None, ‘max\_features’: 10, ‘min\_samples\_split’: 10} |
| 17 | 2 | 500 | 0.951958 | {‘criterion’: ‘gini’, ‘max\_depth’: None, ‘max\_features’: 10, ‘min\_samples\_split’: 4} |
| 18 | 3 | 1000 | 0.961009 | {‘criterion’: ‘log\_loss’, ‘max\_depth’: None, ‘max\_features’: 9, ‘min\_samples\_split’: 10} |
| 19 | 3 | 1000 | 0.955989 | {‘criterion’: ‘gini’, ‘max\_depth’: None, ‘max\_features’: 10, ‘min\_samples\_split’: 4} |
Each row corresponds to a given parameter combination (a candidate) and a given iteration. The iteration is given by the `iter` column. The `n_resources` column tells you how many resources were used.
In the example above, the best parameter combination is `{'criterion': 'log_loss', 'max_depth': None, 'max_features': 9, 'min_samples_split': 10}` since it has reached the last iteration (3) with the highest score: 0.96.
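A short sketch of this kind of analysis, assuming `sh` is a fitted halving search estimator such as those in the examples above:
```
import pandas as pd

df = pd.DataFrame(sh.cv_results_)                # sh: fitted HalvingGridSearchCV / HalvingRandomSearchCV
last_iter = df[df["iter"] == df["iter"].max()]   # candidates that reached the last iteration
best_row = last_iter.sort_values("mean_test_score", ascending=False).iloc[0]
print(best_row["params"], best_row["mean_test_score"])
```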
3.2.4. Tips for parameter search
---------------------------------
###
3.2.4.1. Specifying an objective metric
By default, parameter search uses the `score` function of the estimator to evaluate a parameter setting. These are the [`sklearn.metrics.accuracy_score`](generated/sklearn.metrics.accuracy_score#sklearn.metrics.accuracy_score "sklearn.metrics.accuracy_score") for classification and [`sklearn.metrics.r2_score`](generated/sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score") for regression. For some applications, other scoring functions are better suited (for example in unbalanced classification, the accuracy score is often uninformative). An alternative scoring function can be specified via the `scoring` parameter of most parameter search tools. See [The scoring parameter: defining model evaluation rules](model_evaluation#scoring-parameter) for more details.
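For example, a sketch passing a predefined scorer name (the grid itself is illustrative):
```
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# 'f1_macro' is one of the predefined scorer names
search = GridSearchCV(SVC(), {'C': [1, 10, 100]}, scoring='f1_macro', cv=5)
```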
###
3.2.4.2. Specifying multiple metrics for evaluation
[`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") and [`RandomizedSearchCV`](generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV") allow specifying multiple metrics for the `scoring` parameter.
Multimetric scoring can either be specified as a list of strings of predefined scores names or a dict mapping the scorer name to the scorer function and/or the predefined scorer name(s). See [Using multiple metric evaluation](model_evaluation#multimetric-scoring) for more details.
When specifying multiple metrics, the `refit` parameter must be set to the metric (string) for which the `best_params_` will be found and used to build the `best_estimator_` on the whole dataset. If the search should not be refit, set `refit=False`. Leaving refit to the default value `None` will result in an error when using multiple metrics.
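A brief sketch of multimetric scoring with `refit` (the scorer keys and grid are illustrative):
```
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

scoring = {'acc': 'accuracy', 'bal_acc': 'balanced_accuracy'}
search = GridSearchCV(SVC(), {'C': [1, 10]}, scoring=scoring,
                      refit='bal_acc', cv=5)   # refit on one of the scorer keys
```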
See [Demonstration of multi-metric evaluation on cross\_val\_score and GridSearchCV](../auto_examples/model_selection/plot_multi_metric_evaluation#sphx-glr-auto-examples-model-selection-plot-multi-metric-evaluation-py) for an example usage.
[`HalvingRandomSearchCV`](generated/sklearn.model_selection.halvingrandomsearchcv#sklearn.model_selection.HalvingRandomSearchCV "sklearn.model_selection.HalvingRandomSearchCV") and [`HalvingGridSearchCV`](generated/sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV") do not support multimetric scoring.
###
3.2.4.3. Composite estimators and parameter spaces
[`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") and [`RandomizedSearchCV`](generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV") allow searching over parameters of composite or nested estimators such as [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline"), [`ColumnTransformer`](generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer"), [`VotingClassifier`](generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") or [`CalibratedClassifierCV`](generated/sklearn.calibration.calibratedclassifiercv#sklearn.calibration.CalibratedClassifierCV "sklearn.calibration.CalibratedClassifierCV") using a dedicated `<estimator>__<parameter>` syntax:
```
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.calibration import CalibratedClassifierCV
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_moons
>>> X, y = make_moons()
>>> calibrated_forest = CalibratedClassifierCV(
... base_estimator=RandomForestClassifier(n_estimators=10))
>>> param_grid = {
... 'base_estimator__max_depth': [2, 4, 6, 8]}
>>> search = GridSearchCV(calibrated_forest, param_grid, cv=5)
>>> search.fit(X, y)
GridSearchCV(cv=5,
estimator=CalibratedClassifierCV(...),
param_grid={'base_estimator__max_depth': [2, 4, 6, 8]})
```
Here, `<estimator>` is the parameter name of the nested estimator, in this case `base_estimator`. If the meta-estimator is constructed as a collection of estimators as in `pipeline.Pipeline`, then `<estimator>` refers to the name of the estimator, see [Nested parameters](compose#pipeline-nested-parameters). In practice, there can be several levels of nesting:
```
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.feature_selection import SelectKBest
>>> pipe = Pipeline([
... ('select', SelectKBest()),
... ('model', calibrated_forest)])
>>> param_grid = {
... 'select__k': [1, 2],
... 'model__base_estimator__max_depth': [2, 4, 6, 8]}
>>> search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
```
Please refer to [Pipeline: chaining estimators](compose#pipeline) for performing parameter searches over pipelines.
###
3.2.4.4. Model selection: development and evaluation
Model selection by evaluating various parameter settings can be seen as a way to use the labeled data to “train” the parameters of the grid.
When evaluating the resulting model it is important to do it on held-out samples that were not seen during the grid search process: it is recommended to split the data into a **development set** (to be fed to the [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") instance) and an **evaluation set** to compute performance metrics.
This can be done by using the [`train_test_split`](generated/sklearn.model_selection.train_test_split#sklearn.model_selection.train_test_split "sklearn.model_selection.train_test_split") utility function.
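A minimal sketch of this development/evaluation split (the estimator and grid are illustrative):
```
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, random_state=0)
X_dev, X_eval, y_dev, y_eval = train_test_split(X, y, test_size=0.25, random_state=0)

search = GridSearchCV(SVC(), {'C': [1, 10, 100]}, cv=5).fit(X_dev, y_dev)   # tune on the development set
final_score = search.score(X_eval, y_eval)   # report on the held-out evaluation set
```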
###
3.2.4.5. Parallelism
The parameter search tools evaluate each parameter combination on each data fold independently. Computations can be run in parallel by using the keyword `n_jobs=-1`. See function signature for more details, and also the Glossary entry for [n\_jobs](https://scikit-learn.org/1.1/glossary.html#term-n_jobs).
###
3.2.4.6. Robustness to failure
Some parameter settings may result in a failure to `fit` one or more folds of the data. By default, this will cause the entire search to fail, even if some parameter settings could be fully evaluated. Setting `error_score=0` (or `=np.NaN`) will make the procedure robust to such failure, issuing a warning and setting the score for that fold to 0 (or `NaN`), but completing the search.
3.2.5. Alternatives to brute force parameter search
----------------------------------------------------
###
3.2.5.1. Model specific cross-validation
Some models can fit data for a range of values of some parameter almost as efficiently as fitting the estimator for a single value of the parameter. This feature can be leveraged to perform a more efficient cross-validation used for model selection of this parameter.
The most common parameter amenable to this strategy is the parameter encoding the strength of the regularizer. In this case we say that we compute the **regularization path** of the estimator.
Here is the list of such models:
| | |
| --- | --- |
| [`linear_model.ElasticNetCV`](generated/sklearn.linear_model.elasticnetcv#sklearn.linear_model.ElasticNetCV "sklearn.linear_model.ElasticNetCV")(\*[, l1\_ratio, ...]) | Elastic Net model with iterative fitting along a regularization path. |
| [`linear_model.LarsCV`](generated/sklearn.linear_model.larscv#sklearn.linear_model.LarsCV "sklearn.linear_model.LarsCV")(\*[, fit\_intercept, ...]) | Cross-validated Least Angle Regression model. |
| [`linear_model.LassoCV`](generated/sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV")(\*[, eps, n\_alphas, ...]) | Lasso linear model with iterative fitting along a regularization path. |
| [`linear_model.LassoLarsCV`](generated/sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV")(\*[, fit\_intercept, ...]) | Cross-validated Lasso, using the LARS algorithm. |
| [`linear_model.LogisticRegressionCV`](generated/sklearn.linear_model.logisticregressioncv#sklearn.linear_model.LogisticRegressionCV "sklearn.linear_model.LogisticRegressionCV")(\*[, Cs, ...]) | Logistic Regression CV (aka logit, MaxEnt) classifier. |
| [`linear_model.MultiTaskElasticNetCV`](generated/sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV "sklearn.linear_model.MultiTaskElasticNetCV")(\*[, ...]) | Multi-task L1/L2 ElasticNet with built-in cross-validation. |
| [`linear_model.MultiTaskLassoCV`](generated/sklearn.linear_model.multitasklassocv#sklearn.linear_model.MultiTaskLassoCV "sklearn.linear_model.MultiTaskLassoCV")(\*[, eps, ...]) | Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer. |
| [`linear_model.OrthogonalMatchingPursuitCV`](generated/sklearn.linear_model.orthogonalmatchingpursuitcv#sklearn.linear_model.OrthogonalMatchingPursuitCV "sklearn.linear_model.OrthogonalMatchingPursuitCV")(\*) | Cross-validated Orthogonal Matching Pursuit model (OMP). |
| [`linear_model.RidgeCV`](generated/sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV")([alphas, ...]) | Ridge regression with built-in cross-validation. |
| [`linear_model.RidgeClassifierCV`](generated/sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV "sklearn.linear_model.RidgeClassifierCV")([alphas, ...]) | Ridge classifier with built-in cross-validation. |
###
3.2.5.2. Information Criterion
Some models can offer an information-theoretic closed-form formula of the optimal estimate of the regularization parameter by computing a single regularization path (instead of several when using cross-validation).
Here is the list of models benefiting from the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) for automated model selection:
| | |
| --- | --- |
| [`linear_model.LassoLarsIC`](generated/sklearn.linear_model.lassolarsic#sklearn.linear_model.LassoLarsIC "sklearn.linear_model.LassoLarsIC")([criterion, ...]) | Lasso model fit with Lars using BIC or AIC for model selection. |
###
3.2.5.3. Out of Bag Estimates
When using ensemble methods based upon bagging, i.e. generating new training sets using sampling with replacement, part of the training set remains unused. For each classifier in the ensemble, a different part of the training set is left out.
This left out portion can be used to estimate the generalization error without having to rely on a separate validation set. This estimate comes “for free” as no additional data is needed and can be used for model selection.
This is currently implemented in the following classes:
| | |
| --- | --- |
| [`ensemble.RandomForestClassifier`](generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier")([...]) | A random forest classifier. |
| [`ensemble.RandomForestRegressor`](generated/sklearn.ensemble.randomforestregressor#sklearn.ensemble.RandomForestRegressor "sklearn.ensemble.RandomForestRegressor")([...]) | A random forest regressor. |
| [`ensemble.ExtraTreesClassifier`](generated/sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier")([...]) | An extra-trees classifier. |
| [`ensemble.ExtraTreesRegressor`](generated/sklearn.ensemble.extratreesregressor#sklearn.ensemble.ExtraTreesRegressor "sklearn.ensemble.ExtraTreesRegressor")([n\_estimators, ...]) | An extra-trees regressor. |
| [`ensemble.GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier")(\*[, ...]) | Gradient Boosting for classification. |
| [`ensemble.GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor")(\*[, ...]) | Gradient Boosting for regression. |
6.7. Kernel Approximation
=========================
This submodule contains functions that approximate the feature mappings that correspond to certain kernels, as they are used for example in support vector machines (see [Support Vector Machines](svm#svm)). The following feature functions perform non-linear transformations of the input, which can serve as a basis for linear classification or other algorithms.
The advantage of using approximate explicit feature maps compared to the [kernel trick](https://en.wikipedia.org/wiki/Kernel_trick), which makes use of feature maps implicitly, is that explicit mappings can be better suited for online learning and can significantly reduce the cost of learning with very large datasets. Standard kernelized SVMs do not scale well to large datasets, but using an approximate kernel map it is possible to use much more efficient linear SVMs. In particular, the combination of kernel map approximations with [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") can make non-linear learning on large datasets possible.
Since there has not been much empirical work using approximate embeddings, it is advisable to compare results against exact kernel methods when possible.
See also
[Polynomial regression: extending linear models with basis functions](linear_model#polynomial-regression) for an exact polynomial transformation.
6.7.1. Nystroem Method for Kernel Approximation
------------------------------------------------
The Nystroem method, as implemented in [`Nystroem`](generated/sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem "sklearn.kernel_approximation.Nystroem") is a general method for low-rank approximations of kernels. It achieves this by essentially subsampling the data on which the kernel is evaluated. By default [`Nystroem`](generated/sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem "sklearn.kernel_approximation.Nystroem") uses the `rbf` kernel, but it can use any kernel function or a precomputed kernel matrix. The number of samples used - which is also the dimensionality of the features computed - is given by the parameter `n_components`.
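A brief sketch combining `Nystroem` with a linear classifier in a pipeline (the data and parameter values are illustrative):
```
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = make_pipeline(Nystroem(kernel='rbf', gamma=0.2, n_components=100, random_state=0),
                    SGDClassifier(max_iter=1000, random_state=0))
clf.fit(X, y)
```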
6.7.2. Radial Basis Function Kernel
------------------------------------
The [`RBFSampler`](generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler") constructs an approximate mapping for the radial basis function kernel, also known as *Random Kitchen Sinks* [[RR2007]](#rr2007). This transformation can be used to explicitly model a kernel map, prior to applying a linear algorithm, for example a linear SVM:
```
>>> from sklearn.kernel_approximation import RBFSampler
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> rbf_feature = RBFSampler(gamma=1, random_state=1)
>>> X_features = rbf_feature.fit_transform(X)
>>> clf = SGDClassifier(max_iter=5)
>>> clf.fit(X_features, y)
SGDClassifier(max_iter=5)
>>> clf.score(X_features, y)
1.0
```
The mapping relies on a Monte Carlo approximation to the kernel values. The `fit` function performs the Monte Carlo sampling, whereas the `transform` method performs the mapping of the data. Because of the inherent randomness of the process, results may vary between different calls to the `fit` function.
The transformer takes two main parameters: `n_components`, which is the target dimensionality of the feature transform, and `gamma`, the parameter of the RBF kernel. A higher `n_components` will result in a better approximation of the kernel and will yield results more similar to those produced by a kernel SVM. Note that “fitting” the feature function does not actually depend on the data given to the `fit` function. Only the dimensionality of the data is used.
For a given value of `n_components`, [`RBFSampler`](generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler") is often less accurate than [`Nystroem`](generated/sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem "sklearn.kernel_approximation.Nystroem"). [`RBFSampler`](generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler") is cheaper to compute, though, which makes the use of larger feature spaces more efficient.
Figure: comparing an exact RBF kernel (left) with the approximation (right).
6.7.3. Additive Chi Squared Kernel
-----------------------------------
The additive chi squared kernel is a kernel on histograms, often used in computer vision.
The additive chi squared kernel as used here is given by
\[k(x, y) = \sum\_i \frac{2x\_iy\_i}{x\_i+y\_i}\] This is not exactly the same as `sklearn.metrics.additive_chi2_kernel`. The authors of [[VZ2010]](#vz2010) prefer the version above as it is always positive definite. Since the kernel is additive, it is possible to treat all components \(x\_i\) separately for embedding. This makes it possible to sample the Fourier transform in regular intervals, instead of approximating using Monte Carlo sampling.
The class [`AdditiveChi2Sampler`](generated/sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler "sklearn.kernel_approximation.AdditiveChi2Sampler") implements this component wise deterministic sampling. Each component is sampled \(n\) times, yielding \(2n+1\) dimensions per input dimension (the multiple of two stems from the real and complex part of the Fourier transform). In the literature, \(n\) is usually chosen to be 1 or 2, transforming the dataset to size `n_samples * 5 * n_features` (in the case of \(n=2\)).
The approximate feature map provided by [`AdditiveChi2Sampler`](generated/sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler "sklearn.kernel_approximation.AdditiveChi2Sampler") can be combined with the approximate feature map provided by [`RBFSampler`](generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler") to yield an approximate feature map for the exponentiated chi squared kernel. See [[VZ2010]](#vz2010) for details and [[VVZ2010]](#vvz2010) for the combination with the [`RBFSampler`](generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler").
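As a minimal sketch of the transformer on its own (the digits dataset and `sample_steps=2` are illustrative choices), the 64 input features are expanded to \(64 \times (2 \times 2 + 1) = 320\) output dimensions:

```
>>> from sklearn.datasets import load_digits
>>> from sklearn.kernel_approximation import AdditiveChi2Sampler
>>> X, y = load_digits(return_X_y=True)  # pixel values are non-negative, as required
>>> chi2sampler = AdditiveChi2Sampler(sample_steps=2)
>>> X_transformed = chi2sampler.fit_transform(X)
>>> X_transformed.shape
(1797, 320)
```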
6.7.4. Skewed Chi Squared Kernel
---------------------------------
The skewed chi squared kernel is given by:
\[k(x,y) = \prod\_i \frac{2\sqrt{x\_i+c}\sqrt{y\_i+c}}{x\_i + y\_i + 2c}\] It has properties that are similar to the exponentiated chi squared kernel often used in computer vision, but allows for a simple Monte Carlo approximation of the feature map.
The usage of the [`SkewedChi2Sampler`](generated/sklearn.kernel_approximation.skewedchi2sampler#sklearn.kernel_approximation.SkewedChi2Sampler "sklearn.kernel_approximation.SkewedChi2Sampler") is the same as described above for the [`RBFSampler`](generated/sklearn.kernel_approximation.rbfsampler#sklearn.kernel_approximation.RBFSampler "sklearn.kernel_approximation.RBFSampler"). The only difference is the free parameter, called \(c\). For a motivation for this mapping and the mathematical details see [[LS2010]](#ls2010).
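A minimal sketch of its usage (the toy data and parameter values are illustrative); as for the other samplers, the transformed features can then be passed to a linear model:

```
>>> from sklearn.kernel_approximation import SkewedChi2Sampler
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> chi2_feature = SkewedChi2Sampler(skewedness=.01, n_components=10, random_state=0)
>>> X_features = chi2_feature.fit_transform(X, y)
>>> X_features.shape
(4, 10)
```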
6.7.5. Polynomial Kernel Approximation via Tensor Sketch
---------------------------------------------------------
The [polynomial kernel](metrics#polynomial-kernel) is a popular type of kernel function given by:
\[k(x, y) = (\gamma x^\top y +c\_0)^d\] where:
* `x`, `y` are the input vectors
* `d` is the kernel degree
* `gamma` and `c_0` are, respectively, the scale and offset parameters of the kernel
Intuitively, the feature space of the polynomial kernel of degree `d` consists of all possible degree-`d` products among input features, which enables learning algorithms using this kernel to account for interactions between features.
The TensorSketch [[PP2013]](#pp2013) method, as implemented in [`PolynomialCountSketch`](generated/sklearn.kernel_approximation.polynomialcountsketch#sklearn.kernel_approximation.PolynomialCountSketch "sklearn.kernel_approximation.PolynomialCountSketch"), is a scalable, input data independent method for polynomial kernel approximation. It is based on the concept of Count sketch [[WIKICS]](#wikics) [[CCF2002]](#ccf2002) , a dimensionality reduction technique similar to feature hashing, which instead uses several independent hash functions. TensorSketch obtains a Count Sketch of the outer product of two vectors (or a vector with itself), which can be used as an approximation of the polynomial kernel feature space. In particular, instead of explicitly computing the outer product, TensorSketch computes the Count Sketch of the vectors and then uses polynomial multiplication via the Fast Fourier Transform to compute the Count Sketch of their outer product.
Conveniently, the training phase of TensorSketch simply consists of initializing some random variables. It is thus independent of the input data, i.e. it only depends on the number of input features, but not the data values. In addition, this method can transform samples in \(\mathcal{O}(n\_{\text{samples}}(n\_{\text{features}} + n\_{\text{components}} \log(n\_{\text{components}})))\) time, where \(n\_{\text{components}}\) is the desired output dimension, determined by `n_components`.
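A minimal sketch of the transformer (the toy data and parameter values are illustrative assumptions); as with the other approximate feature maps, the output can be fed to a linear classifier such as `SGDClassifier`:

```
>>> from sklearn.kernel_approximation import PolynomialCountSketch
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> ps = PolynomialCountSketch(degree=3, n_components=100, random_state=1)
>>> X_features = ps.fit_transform(X)
>>> X_features.shape
(4, 100)
>>> clf = SGDClassifier(max_iter=20)
>>> clf.fit(X_features, y)
SGDClassifier(max_iter=20)
```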
6.7.6. Mathematical Details
----------------------------
Kernel methods like support vector machines or kernelized PCA rely on a property of reproducing kernel Hilbert spaces. For any positive definite kernel function \(k\) (a so called Mercer kernel), it is guaranteed that there exists a mapping \(\phi\) into a Hilbert space \(\mathcal{H}\), such that
\[k(x,y) = \langle \phi(x), \phi(y) \rangle\] Where \(\langle \cdot, \cdot \rangle\) denotes the inner product in the Hilbert space.
If an algorithm, such as a linear support vector machine or PCA, relies only on the scalar product of data points \(x\_i\), one may use the value of \(k(x\_i, x\_j)\), which corresponds to applying the algorithm to the mapped data points \(\phi(x\_i)\). The advantage of using \(k\) is that the mapping \(\phi\) never has to be calculated explicitly, allowing for arbitrarily large (even infinite-dimensional) feature spaces.
One drawback of kernel methods is that it might be necessary to store many kernel values \(k(x\_i, x\_j)\) during optimization. If a kernelized classifier is applied to new data \(y\_j\), \(k(x\_i, y\_j)\) needs to be computed to make predictions, possibly for many different \(x\_i\) in the training set.
The classes in this submodule make it possible to approximate the embedding \(\phi\), thereby working explicitly with the representations \(\phi(x\_i)\), which obviates the need to apply the kernel or to store training examples.
scikit_learn 6.3. Preprocessing data 6.3. Preprocessing data
=======================
The `sklearn.preprocessing` package provides several common utility functions and transformer classes to change raw feature vectors into a representation that is more suitable for the downstream estimators.
In general, learning algorithms benefit from standardization of the data set. If some outliers are present in the set, robust scalers or transformers are more appropriate. The behaviors of the different scalers, transformers, and normalizers on a dataset containing marginal outliers are highlighted in [Compare the effect of different scalers on data with outliers](../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py).
6.3.1. Standardization, or mean removal and variance scaling
-------------------------------------------------------------
**Standardization** of datasets is a **common requirement for many machine learning estimators** implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with **zero mean and unit variance**.
In practice we often ignore the shape of the distribution and just transform the data to center it by removing the mean value of each feature, then scale it by dividing non-constant features by their standard deviation.
For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the l1 and l2 regularizers of linear models) may assume that all features are centered around zero or have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.
The [`preprocessing`](classes#module-sklearn.preprocessing "sklearn.preprocessing") module provides the [`StandardScaler`](generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") utility class, which is a quick and easy way to perform the following operation on an array-like dataset:
```
>>> from sklearn import preprocessing
>>> import numpy as np
>>> X_train = np.array([[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]])
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> scaler
StandardScaler()
>>> scaler.mean_
array([1. ..., 0. ..., 0.33...])
>>> scaler.scale_
array([0.81..., 0.81..., 1.24...])
>>> X_scaled = scaler.transform(X_train)
>>> X_scaled
array([[ 0. ..., -1.22..., 1.33...],
[ 1.22..., 0. ..., -0.26...],
[-1.22..., 1.22..., -1.06...]])
```
Scaled data has zero mean and unit variance:
```
>>> X_scaled.mean(axis=0)
array([0., 0., 0.])
>>> X_scaled.std(axis=0)
array([1., 1., 1.])
```
This class implements the `Transformer` API to compute the mean and standard deviation on a training set so as to be able to later re-apply the same transformation on the testing set. This class is hence suitable for use in the early steps of a [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline"):
```
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> X, y = make_classification(random_state=42)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
>>> pipe = make_pipeline(StandardScaler(), LogisticRegression())
>>> pipe.fit(X_train, y_train) # apply scaling on training data
Pipeline(steps=[('standardscaler', StandardScaler()),
('logisticregression', LogisticRegression())])
>>> pipe.score(X_test, y_test) # apply scaling on testing data, without leaking training data.
0.96
```
It is possible to disable either centering or scaling by either passing `with_mean=False` or `with_std=False` to the constructor of [`StandardScaler`](generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler").
###
6.3.1.1. Scaling features to a range
An alternative standardization is scaling features to lie between a given minimum and maximum value, often between zero and one, or so that the maximum absolute value of each feature is scaled to unit size. This can be achieved using [`MinMaxScaler`](generated/sklearn.preprocessing.minmaxscaler#sklearn.preprocessing.MinMaxScaler "sklearn.preprocessing.MinMaxScaler") or [`MaxAbsScaler`](generated/sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler"), respectively.
The motivations for using this scaling include robustness to very small standard deviations of features and the preservation of zero entries in sparse data.
Here is an example to scale a toy data matrix to the `[0, 1]` range:
```
>>> X_train = np.array([[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]])
...
>>> min_max_scaler = preprocessing.MinMaxScaler()
>>> X_train_minmax = min_max_scaler.fit_transform(X_train)
>>> X_train_minmax
array([[0.5 , 0. , 1. ],
[1. , 0.5 , 0.33333333],
[0. , 1. , 0. ]])
```
The same instance of the transformer can then be applied to some new test data unseen during the fit call: the same scaling and shifting operations will be applied to be consistent with the transformation performed on the train data:
```
>>> X_test = np.array([[-3., -1., 4.]])
>>> X_test_minmax = min_max_scaler.transform(X_test)
>>> X_test_minmax
array([[-1.5 , 0. , 1.66666667]])
```
It is possible to introspect the scaler attributes to find out about the exact nature of the transformation learned on the training data:
```
>>> min_max_scaler.scale_
array([0.5 , 0.5 , 0.33...])
>>> min_max_scaler.min_
array([0. , 0.5 , 0.33...])
```
If [`MinMaxScaler`](generated/sklearn.preprocessing.minmaxscaler#sklearn.preprocessing.MinMaxScaler "sklearn.preprocessing.MinMaxScaler") is given an explicit `feature_range=(min, max)` the full formula is:
```
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
```
[`MaxAbsScaler`](generated/sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler") works in a very similar fashion, but scales the data so that the training values lie within the range `[-1, 1]`, by dividing each feature by its maximum absolute value. It is meant for data that is already centered at zero or for sparse data.
Here is how to use the toy data from the previous example with this scaler:
```
>>> X_train = np.array([[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]])
...
>>> max_abs_scaler = preprocessing.MaxAbsScaler()
>>> X_train_maxabs = max_abs_scaler.fit_transform(X_train)
>>> X_train_maxabs
array([[ 0.5, -1. , 1. ],
[ 1. , 0. , 0. ],
[ 0. , 1. , -0.5]])
>>> X_test = np.array([[ -3., -1., 4.]])
>>> X_test_maxabs = max_abs_scaler.transform(X_test)
>>> X_test_maxabs
array([[-1.5, -1. , 2. ]])
>>> max_abs_scaler.scale_
array([2., 1., 2.])
```
###
6.3.1.2. Scaling sparse data
Centering sparse data would destroy the sparseness structure in the data, and thus rarely is a sensible thing to do. However, it can make sense to scale sparse inputs, especially if features are on different scales.
[`MaxAbsScaler`](generated/sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler") was specifically designed for scaling sparse data, and is the recommended way to go about this. However, [`StandardScaler`](generated/sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") can accept `scipy.sparse` matrices as input, as long as `with_mean=False` is explicitly passed to the constructor. Otherwise a `ValueError` will be raised as silently centering would break the sparsity and would often crash the execution by allocating excessive amounts of memory unintentionally. [`RobustScaler`](generated/sklearn.preprocessing.robustscaler#sklearn.preprocessing.RobustScaler "sklearn.preprocessing.RobustScaler") cannot be fitted to sparse inputs, but you can use the `transform` method on sparse inputs.
Note that the scalers accept both Compressed Sparse Rows and Compressed Sparse Columns format (see `scipy.sparse.csr_matrix` and `scipy.sparse.csc_matrix`). Any other sparse input will be **converted to the Compressed Sparse Rows representation**. To avoid unnecessary memory copies, it is recommended to choose the CSR or CSC representation upstream.
Finally, if the centered data is expected to be small enough, explicitly converting the input to an array using the `toarray` method of sparse matrices is another option.
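As a brief sketch (reusing the toy matrix from above, wrapped in a `scipy.sparse` container), [`MaxAbsScaler`](generated/sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler") preserves sparsity because it only divides each feature by a constant:

```
>>> import scipy.sparse as sp
>>> X_sparse = sp.csr_matrix([[ 1., -1.,  2.],
...                           [ 2.,  0.,  0.],
...                           [ 0.,  1., -1.]])
>>> max_abs_scaler = preprocessing.MaxAbsScaler()
>>> X_scaled = max_abs_scaler.fit_transform(X_sparse)  # output stays sparse
>>> X_scaled.toarray()
array([[ 0.5, -1. ,  1. ],
       [ 1. ,  0. ,  0. ],
       [ 0. ,  1. , -0.5]])
```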
###
6.3.1.3. Scaling data with outliers
If your data contains many outliers, scaling using the mean and variance of the data is likely to not work very well. In these cases, you can use [`RobustScaler`](generated/sklearn.preprocessing.robustscaler#sklearn.preprocessing.RobustScaler "sklearn.preprocessing.RobustScaler") as a drop-in replacement instead. It uses more robust estimates for the center and range of your data.
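A brief sketch on a toy matrix (the values are illustrative): the scaler centers each feature with its median and scales it with its interquartile range:

```
>>> X = [[ 1., -2.,  2.],
...      [-2.,  1.,  3.],
...      [ 4.,  1., -2.]]
>>> transformer = preprocessing.RobustScaler().fit(X)
>>> transformer.transform(X)
array([[ 0. , -2. ,  0. ],
       [-1. ,  0. ,  0.4],
       [ 1. ,  0. , -1.6]])
```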
###
6.3.1.4. Centering kernel matrices
If you have a kernel matrix of a kernel \(K\) that computes a dot product in a feature space (possibly implicitly) defined by a function \(\phi(\cdot)\), a [`KernelCenterer`](generated/sklearn.preprocessing.kernelcenterer#sklearn.preprocessing.KernelCenterer "sklearn.preprocessing.KernelCenterer") can transform the kernel matrix so that it contains inner products in the feature space defined by \(\phi\) followed by the removal of the mean in that space. In other words, [`KernelCenterer`](generated/sklearn.preprocessing.kernelcenterer#sklearn.preprocessing.KernelCenterer "sklearn.preprocessing.KernelCenterer") computes the centered Gram matrix associated to a positive semidefinite kernel \(K\).
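A small sketch on a toy linear kernel matrix (the data is illustrative); centering the kernel matrix is equivalent to centering the (possibly implicit) feature map:

```
>>> from sklearn.metrics.pairwise import pairwise_kernels
>>> X = [[ 1., -2.,  2.],
...      [-2.,  1.,  3.],
...      [ 4.,  1., -2.]]
>>> K = pairwise_kernels(X, metric='linear')
>>> K
array([[  9.,   2.,  -2.],
       [  2.,  14., -13.],
       [ -2., -13.,  21.]])
>>> transformer = preprocessing.KernelCenterer().fit(K)
>>> transformer.transform(K)
array([[  5.,   0.,  -5.],
       [  0.,  14., -14.],
       [ -5., -14.,  19.]])
```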
**Mathematical formulation**
We can have a look at the mathematical formulation now that we have the intuition. Let \(K\) be a kernel matrix of shape `(n_samples, n_samples)` computed from \(X\), a data matrix of shape `(n_samples, n_features)`, during the `fit` step. \(K\) is defined by
\[K(X, X) = \phi(X) . \phi(X)^{T}\] \(\phi(X)\) is a function mapping of \(X\) to a Hilbert space. A centered kernel \(\tilde{K}\) is defined as:
\[\tilde{K}(X, X) = \tilde{\phi}(X) . \tilde{\phi}(X)^{T}\] where \(\tilde{\phi}(X)\) results from centering \(\phi(X)\) in the Hilbert space.
Thus, one could compute \(\tilde{K}\) by mapping \(X\) using the function \(\phi(\cdot)\) and centering the data in this new space. However, kernels are often used precisely because they allow algebraic computations that avoid computing this mapping explicitly with \(\phi(\cdot)\). Indeed, one can implicitly center as shown in Appendix B in [[Scholkopf1998]](#scholkopf1998):
\[\tilde{K} = K - 1\_{\text{n}\_{samples}} K - K 1\_{\text{n}\_{samples}} + 1\_{\text{n}\_{samples}} K 1\_{\text{n}\_{samples}}\] \(1\_{\text{n}\_{samples}}\) is a matrix of `(n_samples, n_samples)` where all entries are equal to \(\frac{1}{\text{n}\_{samples}}\). In the `transform` step, the kernel becomes \(K\_{test}(X, Y)\) defined as:
\[K\_{test}(X, Y) = \phi(Y) . \phi(X)^{T}\] \(Y\) is the test dataset of shape `(n_samples_test, n_features)` and thus \(K\_{test}\) is of shape `(n_samples_test, n_samples)`. In this case, centering \(K\_{test}\) is done as:
\[\tilde{K}\_{test}(X, Y) = K\_{test} - 1'\_{\text{n}\_{samples}} K - K\_{test} 1\_{\text{n}\_{samples}} + 1'\_{\text{n}\_{samples}} K 1\_{\text{n}\_{samples}}\] \(1'\_{\text{n}\_{samples}}\) is a matrix of shape `(n_samples_test, n_samples)` where all entries are equal to \(\frac{1}{\text{n}\_{samples}}\).
6.3.2. Non-linear transformation
---------------------------------
Two types of transformations are available: quantile transforms and power transforms. Both quantile and power transforms are based on monotonic transformations of the features and thus preserve the rank of the values along each feature.
Quantile transforms put all features into the same desired distribution based on the formula \(G^{-1}(F(X))\) where \(F\) is the cumulative distribution function of the feature and \(G^{-1}\) the [quantile function](https://en.wikipedia.org/wiki/Quantile_function) of the desired output distribution \(G\). This formula is using the two following facts: (i) if \(X\) is a random variable with a continuous cumulative distribution function \(F\) then \(F(X)\) is uniformly distributed on \([0,1]\); (ii) if \(U\) is a random variable with uniform distribution on \([0,1]\) then \(G^{-1}(U)\) has distribution \(G\). By performing a rank transformation, a quantile transform smooths out unusual distributions and is less influenced by outliers than scaling methods. It does, however, distort correlations and distances within and across features.
Power transforms are a family of parametric transformations that aim to map data from any distribution to as close to a Gaussian distribution as possible.
###
6.3.2.1. Mapping to a Uniform distribution
[`QuantileTransformer`](generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer") provides a non-parametric transformation to map the data to a uniform distribution with values between 0 and 1:
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> quantile_transformer = preprocessing.QuantileTransformer(random_state=0)
>>> X_train_trans = quantile_transformer.fit_transform(X_train)
>>> X_test_trans = quantile_transformer.transform(X_test)
>>> np.percentile(X_train[:, 0], [0, 25, 50, 75, 100])
array([ 4.3, 5.1, 5.8, 6.5, 7.9])
```
This feature corresponds to the sepal length in cm. Once the quantile transformation is applied, those landmarks closely approach the previously defined percentiles:
```
>>> np.percentile(X_train_trans[:, 0], [0, 25, 50, 75, 100])
...
array([ 0.00... , 0.24..., 0.49..., 0.73..., 0.99... ])
```
This can be confirmed on an independent test set, with similar observations:
```
>>> np.percentile(X_test[:, 0], [0, 25, 50, 75, 100])
...
array([ 4.4 , 5.125, 5.75 , 6.175, 7.3 ])
>>> np.percentile(X_test_trans[:, 0], [0, 25, 50, 75, 100])
...
array([ 0.01..., 0.25..., 0.46..., 0.60... , 0.94...])
```
###
6.3.2.2. Mapping to a Gaussian distribution
In many modeling scenarios, normality of the features in a dataset is desirable. Power transforms are a family of parametric, monotonic transformations that aim to map data from any distribution to as close to a Gaussian distribution as possible in order to stabilize variance and minimize skewness.
[`PowerTransformer`](generated/sklearn.preprocessing.powertransformer#sklearn.preprocessing.PowerTransformer "sklearn.preprocessing.PowerTransformer") currently provides two such power transformations, the Yeo-Johnson transform and the Box-Cox transform.
The Yeo-Johnson transform is given by:
\[\begin{split}x\_i^{(\lambda)} = \begin{cases} [(x\_i + 1)^\lambda - 1] / \lambda & \text{if } \lambda \neq 0, x\_i \geq 0, \\[8pt] \ln{(x\_i + 1)} & \text{if } \lambda = 0, x\_i \geq 0 \\[8pt] -[(-x\_i + 1)^{2 - \lambda} - 1] / (2 - \lambda) & \text{if } \lambda \neq 2, x\_i < 0, \\[8pt] - \ln (- x\_i + 1) & \text{if } \lambda = 2, x\_i < 0 \end{cases}\end{split}\] while the Box-Cox transform is given by:
\[\begin{split}x\_i^{(\lambda)} = \begin{cases} \dfrac{x\_i^\lambda - 1}{\lambda} & \text{if } \lambda \neq 0, \\[8pt] \ln{(x\_i)} & \text{if } \lambda = 0, \end{cases}\end{split}\] Box-Cox can only be applied to strictly positive data. In both methods, the transformation is parameterized by \(\lambda\), which is determined through maximum likelihood estimation. Here is an example of using Box-Cox to map samples drawn from a lognormal distribution to a normal distribution:
```
>>> pt = preprocessing.PowerTransformer(method='box-cox', standardize=False)
>>> X_lognormal = np.random.RandomState(616).lognormal(size=(3, 3))
>>> X_lognormal
array([[1.28..., 1.18..., 0.84...],
[0.94..., 1.60..., 0.38...],
[1.35..., 0.21..., 1.09...]])
>>> pt.fit_transform(X_lognormal)
array([[ 0.49..., 0.17..., -0.15...],
[-0.05..., 0.58..., -0.57...],
[ 0.69..., -0.84..., 0.10...]])
```
While the above example sets the `standardize` option to `False`, [`PowerTransformer`](generated/sklearn.preprocessing.powertransformer#sklearn.preprocessing.PowerTransformer "sklearn.preprocessing.PowerTransformer") will apply zero-mean, unit-variance normalization to the transformed output by default.
Below are examples of Box-Cox and Yeo-Johnson applied to various probability distributions. Note that when applied to certain distributions, the power transforms achieve very Gaussian-like results, but with others, they are ineffective. This highlights the importance of visualizing the data before and after transformation.
It is also possible to map data to a normal distribution using [`QuantileTransformer`](generated/sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer") by setting `output_distribution='normal'`. Using the earlier example with the iris dataset:
```
>>> quantile_transformer = preprocessing.QuantileTransformer(
... output_distribution='normal', random_state=0)
>>> X_trans = quantile_transformer.fit_transform(X)
>>> quantile_transformer.quantiles_
array([[4.3, 2. , 1. , 0.1],
[4.4, 2.2, 1.1, 0.1],
[4.4, 2.2, 1.2, 0.1],
...,
[7.7, 4.1, 6.7, 2.5],
[7.7, 4.2, 6.7, 2.5],
[7.9, 4.4, 6.9, 2.5]])
```
Thus the median of the input becomes the mean of the output, centered at 0. The normal output is clipped so that the input’s minimum and maximum — corresponding to the 1e-7 and 1 - 1e-7 quantiles respectively — do not become infinite under the transformation.
6.3.3. Normalization
---------------------
**Normalization** is the process of **scaling individual samples to have unit norm**. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.
This assumption is the base of the [Vector Space Model](https://en.wikipedia.org/wiki/Vector_Space_Model) often used in text classification and clustering contexts.
The function [`normalize`](generated/sklearn.preprocessing.normalize#sklearn.preprocessing.normalize "sklearn.preprocessing.normalize") provides a quick and easy way to perform this operation on a single array-like dataset, either using the `l1`, `l2`, or `max` norms:
```
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> X_normalized = preprocessing.normalize(X, norm='l2')
>>> X_normalized
array([[ 0.40..., -0.40..., 0.81...],
[ 1. ..., 0. ..., 0. ...],
[ 0. ..., 0.70..., -0.70...]])
```
The `preprocessing` module further provides a utility class [`Normalizer`](generated/sklearn.preprocessing.normalizer#sklearn.preprocessing.Normalizer "sklearn.preprocessing.Normalizer") that implements the same operation using the `Transformer` API (even though the `fit` method is useless in this case: the class is stateless as this operation treats samples independently).
This class is hence suitable for use in the early steps of a [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline"):
```
>>> normalizer = preprocessing.Normalizer().fit(X) # fit does nothing
>>> normalizer
Normalizer()
```
The normalizer instance can then be used on sample vectors as any transformer:
```
>>> normalizer.transform(X)
array([[ 0.40..., -0.40..., 0.81...],
[ 1. ..., 0. ..., 0. ...],
[ 0. ..., 0.70..., -0.70...]])
>>> normalizer.transform([[-1., 1., 0.]])
array([[-0.70..., 0.70..., 0. ...]])
```
Note: L2 normalization is also known as spatial sign preprocessing.
6.3.4. Encoding categorical features
-------------------------------------
Often features are not given as continuous values but categorical. For example a person could have features `["male", "female"]`, `["from Europe", "from US", "from Asia"]`, `["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]`. Such features can be efficiently coded as integers, for instance `["male", "from US", "uses Internet Explorer"]` could be expressed as `[0, 1, 3]` while `["female", "from Asia", "uses Chrome"]` would be `[1, 2, 1]`.
To convert categorical features to such integer codes, we can use the [`OrdinalEncoder`](generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder"). This estimator transforms each categorical feature to one new feature of integers (0 to n\_categories - 1):
```
>>> enc = preprocessing.OrdinalEncoder()
>>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
>>> enc.fit(X)
OrdinalEncoder()
>>> enc.transform([['female', 'from US', 'uses Safari']])
array([[0., 1., 1.]])
```
Such integer representation can, however, not be used directly with all scikit-learn estimators, as these expect continuous input, and would interpret the categories as being ordered, which is often not desired (i.e. the set of browsers was ordered arbitrarily).
By default, [`OrdinalEncoder`](generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") will also pass through missing values that are indicated by `np.nan`:
```
>>> enc = preprocessing.OrdinalEncoder()
>>> X = [['male'], ['female'], [np.nan], ['female']]
>>> enc.fit_transform(X)
array([[ 1.],
[ 0.],
[nan],
[ 0.]])
```
[`OrdinalEncoder`](generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") provides a parameter `encoded_missing_value` to encode the missing values without the need to create a pipeline and use [`SimpleImputer`](generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer"):
```
>>> enc = preprocessing.OrdinalEncoder(encoded_missing_value=-1)
>>> X = [['male'], ['female'], [np.nan], ['female']]
>>> enc.fit_transform(X)
array([[ 1.],
[ 0.],
[-1.],
[ 0.]])
```
The above processing is equivalent to the following pipeline:
```
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.impute import SimpleImputer
>>> enc = Pipeline(steps=[
... ("encoder", preprocessing.OrdinalEncoder()),
... ("imputer", SimpleImputer(strategy="constant", fill_value=-1)),
... ])
>>> enc.fit_transform(X)
array([[ 1.],
[ 0.],
[-1.],
[ 0.]])
```
Another possibility to convert categorical features to features that can be used with scikit-learn estimators is to use a one-of-K, also known as one-hot or dummy encoding. This type of encoding can be obtained with the [`OneHotEncoder`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder"), which transforms each categorical feature with `n_categories` possible values into `n_categories` binary features, with one of them 1, and all others 0.
Continuing the example above:
```
>>> enc = preprocessing.OneHotEncoder()
>>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
>>> enc.fit(X)
OneHotEncoder()
>>> enc.transform([['female', 'from US', 'uses Safari'],
... ['male', 'from Europe', 'uses Safari']]).toarray()
array([[1., 0., 0., 1., 0., 1.],
[0., 1., 1., 0., 0., 1.]])
```
By default, the values each feature can take is inferred automatically from the dataset and can be found in the `categories_` attribute:
```
>>> enc.categories_
[array(['female', 'male'], dtype=object), array(['from Europe', 'from US'], dtype=object), array(['uses Firefox', 'uses Safari'], dtype=object)]
```
It is possible to specify this explicitly using the parameter `categories`. There are two genders, four possible continents and four web browsers in our dataset:
```
>>> genders = ['female', 'male']
>>> locations = ['from Africa', 'from Asia', 'from Europe', 'from US']
>>> browsers = ['uses Chrome', 'uses Firefox', 'uses IE', 'uses Safari']
>>> enc = preprocessing.OneHotEncoder(categories=[genders, locations, browsers])
>>> # Note that there are missing categorical values for the 2nd and 3rd
>>> # feature
>>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
>>> enc.fit(X)
OneHotEncoder(categories=[['female', 'male'],
['from Africa', 'from Asia', 'from Europe',
'from US'],
['uses Chrome', 'uses Firefox', 'uses IE',
'uses Safari']])
>>> enc.transform([['female', 'from Asia', 'uses Chrome']]).toarray()
array([[1., 0., 0., 1., 0., 0., 1., 0., 0., 0.]])
```
If there is a possibility that the training data might have missing categorical features, it can often be better to specify `handle_unknown='infrequent_if_exist'` instead of setting the `categories` manually as above. When `handle_unknown='infrequent_if_exist'` is specified and unknown categories are encountered during transform, no error will be raised but the resulting one-hot encoded columns for this feature will be all zeros or considered as an infrequent category if enabled. (`handle_unknown='infrequent_if_exist'` is only supported for one-hot encoding):
```
>>> enc = preprocessing.OneHotEncoder(handle_unknown='infrequent_if_exist')
>>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
>>> enc.fit(X)
OneHotEncoder(handle_unknown='infrequent_if_exist')
>>> enc.transform([['female', 'from Asia', 'uses Chrome']]).toarray()
array([[1., 0., 0., 0., 0., 0.]])
```
It is also possible to encode each column into `n_categories - 1` columns instead of `n_categories` columns by using the `drop` parameter. This parameter allows the user to specify a category for each feature to be dropped. This is useful to avoid co-linearity in the input matrix in some classifiers. Such functionality is useful, for example, when using non-regularized regression ([`LinearRegression`](generated/sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression")), since co-linearity would cause the covariance matrix to be non-invertible:
```
>>> X = [['male', 'from US', 'uses Safari'],
... ['female', 'from Europe', 'uses Firefox']]
>>> drop_enc = preprocessing.OneHotEncoder(drop='first').fit(X)
>>> drop_enc.categories_
[array(['female', 'male'], dtype=object), array(['from Europe', 'from US'], dtype=object),
array(['uses Firefox', 'uses Safari'], dtype=object)]
>>> drop_enc.transform(X).toarray()
array([[1., 1., 1.],
[0., 0., 0.]])
```
One might want to drop one of the two columns only for features with 2 categories. In this case, you can set the parameter `drop='if_binary'`.
```
>>> X = [['male', 'US', 'Safari'],
... ['female', 'Europe', 'Firefox'],
... ['female', 'Asia', 'Chrome']]
>>> drop_enc = preprocessing.OneHotEncoder(drop='if_binary').fit(X)
>>> drop_enc.categories_
[array(['female', 'male'], dtype=object), array(['Asia', 'Europe', 'US'], dtype=object),
array(['Chrome', 'Firefox', 'Safari'], dtype=object)]
>>> drop_enc.transform(X).toarray()
array([[1., 0., 0., 1., 0., 0., 1.],
[0., 0., 1., 0., 0., 1., 0.],
[0., 1., 0., 0., 1., 0., 0.]])
```
In the transformed `X`, the first column is the encoding of the feature with categories “male”/“female”, while the remaining six columns are the encoding of the two features with three categories each.
When `handle_unknown='ignore'` and `drop` is not None, unknown categories will be encoded as all zeros:
```
>>> drop_enc = preprocessing.OneHotEncoder(drop='first',
... handle_unknown='ignore').fit(X)
>>> X_test = [['unknown', 'America', 'IE']]
>>> drop_enc.transform(X_test).toarray()
array([[0., 0., 0., 0., 0.]])
```
All the categories in `X_test` are unknown during transform and will be mapped to all zeros. This means that unknown categories will have the same mapping as the dropped category. [`OneHotEncoder.inverse_transform`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder.inverse_transform "sklearn.preprocessing.OneHotEncoder.inverse_transform") will map all zeros to the dropped category if a category is dropped and `None` if a category is not dropped:
```
>>> drop_enc = preprocessing.OneHotEncoder(drop='if_binary', sparse=False,
... handle_unknown='ignore').fit(X)
>>> X_test = [['unknown', 'America', 'IE']]
>>> X_trans = drop_enc.transform(X_test)
>>> X_trans
array([[0., 0., 0., 0., 0., 0., 0.]])
>>> drop_enc.inverse_transform(X_trans)
array([['female', None, None]], dtype=object)
```
[`OneHotEncoder`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder") supports categorical features with missing values by considering the missing values as an additional category:
```
>>> X = [['male', 'Safari'],
... ['female', None],
... [np.nan, 'Firefox']]
>>> enc = preprocessing.OneHotEncoder(handle_unknown='error').fit(X)
>>> enc.categories_
[array(['female', 'male', nan], dtype=object),
array(['Firefox', 'Safari', None], dtype=object)]
>>> enc.transform(X).toarray()
array([[0., 1., 0., 0., 1., 0.],
[1., 0., 0., 0., 0., 1.],
[0., 0., 1., 1., 0., 0.]])
```
If a feature contains both `np.nan` and `None`, they will be considered separate categories:
```
>>> X = [['Safari'], [None], [np.nan], ['Firefox']]
>>> enc = preprocessing.OneHotEncoder(handle_unknown='error').fit(X)
>>> enc.categories_
[array(['Firefox', 'Safari', None, nan], dtype=object)]
>>> enc.transform(X).toarray()
array([[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.],
[1., 0., 0., 0.]])
```
See [Loading features from dicts](feature_extraction#dict-feature-extraction) for categorical features that are represented as a dict, not as scalars.
###
6.3.4.1. Infrequent categories
[`OneHotEncoder`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder") supports aggregating infrequent categories into a single output for each feature. The parameters to enable the gathering of infrequent categories are `min_frequency` and `max_categories`.
1. `min_frequency` is either an integer greater than or equal to 1, or a float in the interval `(0.0, 1.0)`. If `min_frequency` is an integer, categories with a cardinality smaller than `min_frequency` will be considered infrequent. If `min_frequency` is a float, categories with a cardinality smaller than this fraction of the total number of samples will be considered infrequent. The default value is 1, which means every category is encoded separately.
2. `max_categories` is either `None` or any integer greater than 1. This parameter sets an upper limit to the number of output features for each input feature. `max_categories` includes the feature that combines infrequent categories.
In the following example, the categories `'dog'` and `'snake'` are considered infrequent:
```
>>> X = np.array([['dog'] * 5 + ['cat'] * 20 + ['rabbit'] * 10 +
... ['snake'] * 3], dtype=object).T
>>> enc = preprocessing.OneHotEncoder(min_frequency=6, sparse=False).fit(X)
>>> enc.infrequent_categories_
[array(['dog', 'snake'], dtype=object)]
>>> enc.transform(np.array([['dog'], ['cat'], ['rabbit'], ['snake']]))
array([[0., 0., 1.],
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
```
By setting handle\_unknown to `'infrequent_if_exist'`, unknown categories will be considered infrequent:
```
>>> enc = preprocessing.OneHotEncoder(
... handle_unknown='infrequent_if_exist', sparse=False, min_frequency=6)
>>> enc = enc.fit(X)
>>> enc.transform(np.array([['dragon']]))
array([[0., 0., 1.]])
```
[`OneHotEncoder.get_feature_names_out`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder.get_feature_names_out "sklearn.preprocessing.OneHotEncoder.get_feature_names_out") uses ‘infrequent’ as the infrequent feature name:
```
>>> enc.get_feature_names_out()
array(['x0_cat', 'x0_rabbit', 'x0_infrequent_sklearn'], dtype=object)
```
When `'handle_unknown'` is set to `'infrequent_if_exist'` and an unknown category is encountered in transform:
1. If infrequent category support was not configured or there was no infrequent category during training, the resulting one-hot encoded columns for this feature will be all zeros. In the inverse transform, an unknown category will be denoted as `None`.
2. If there is an infrequent category during training, the unknown category will be considered infrequent. In the inverse transform, ‘infrequent\_sklearn’ will be used to represent the infrequent category.
Infrequent categories can also be configured using `max_categories`. In the following example, we set `max_categories=2` to limit the number of features in the output. This will result in all but the `'cat'` category to be considered infrequent, leading to two features, one for `'cat'` and one for infrequent categories - which are all the others:
```
>>> enc = preprocessing.OneHotEncoder(max_categories=2, sparse=False)
>>> enc = enc.fit(X)
>>> enc.transform([['dog'], ['cat'], ['rabbit'], ['snake']])
array([[0., 1.],
[1., 0.],
[0., 1.],
[0., 1.]])
```
If both `max_categories` and `min_frequency` are non-default values, then categories are selected based on `min_frequency` first and then `max_categories` categories are kept. In the following example, `min_frequency=4` considers only `snake` to be infrequent, but `max_categories=3` forces `dog` to also be considered infrequent:
```
>>> enc = preprocessing.OneHotEncoder(min_frequency=4, max_categories=3, sparse=False)
>>> enc = enc.fit(X)
>>> enc.transform([['dog'], ['cat'], ['rabbit'], ['snake']])
array([[0., 0., 1.],
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
```
If there are infrequent categories with the same cardinality at the cutoff of `max_categories`, then the first `max_categories` are taken based on lexicon ordering. In the following example, “b”, “c”, and “d” have the same cardinality and, with `max_categories=2`, “b” and “c” are infrequent because they have a higher lexicon order.
```
>>> X = np.asarray([["a"] * 20 + ["b"] * 10 + ["c"] * 10 + ["d"] * 10], dtype=object).T
>>> enc = preprocessing.OneHotEncoder(max_categories=3).fit(X)
>>> enc.infrequent_categories_
[array(['b', 'c'], dtype=object)]
```
6.3.5. Discretization
----------------------
[Discretization](https://en.wikipedia.org/wiki/Discretization_of_continuous_features) (otherwise known as quantization or binning) provides a way to partition continuous features into discrete values. Certain datasets with continuous features may benefit from discretization, because discretization can transform the dataset of continuous attributes to one with only nominal attributes.
One-hot encoded discretized features can make a model more expressive, while maintaining interpretability. For instance, pre-processing with a discretizer can introduce nonlinearity to linear models. For more advanced possibilities, in particular smooth ones, see [Generating polynomial features](#generating-polynomial-features) further below.
###
6.3.5.1. K-bins discretization
[`KBinsDiscretizer`](generated/sklearn.preprocessing.kbinsdiscretizer#sklearn.preprocessing.KBinsDiscretizer "sklearn.preprocessing.KBinsDiscretizer") discretizes features into `k` bins:
```
>>> X = np.array([[ -3., 5., 15 ],
... [ 0., 6., 14 ],
... [ 6., 3., 11 ]])
>>> est = preprocessing.KBinsDiscretizer(n_bins=[3, 2, 2], encode='ordinal').fit(X)
```
By default the output is one-hot encoded into a sparse matrix (See [Encoding categorical features](#preprocessing-categorical-features)) and this can be configured with the `encode` parameter. For each feature, the bin edges are computed during `fit` and together with the number of bins, they will define the intervals. Therefore, for the current example, these intervals are defined as:
* feature 1: \({[-\infty, -1), [-1, 2), [2, \infty)}\)
* feature 2: \({[-\infty, 5), [5, \infty)}\)
* feature 3: \({[-\infty, 14), [14, \infty)}\)
Based on these bin intervals, `X` is transformed as follows:
```
>>> est.transform(X)
array([[ 0., 1., 1.],
[ 1., 1., 1.],
[ 2., 0., 0.]])
```
The resulting dataset contains ordinal attributes which can be further used in a [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline").
Discretization is similar to constructing histograms for continuous data. However, histograms focus on counting features which fall into particular bins, whereas discretization focuses on assigning feature values to these bins.
[`KBinsDiscretizer`](generated/sklearn.preprocessing.kbinsdiscretizer#sklearn.preprocessing.KBinsDiscretizer "sklearn.preprocessing.KBinsDiscretizer") implements different binning strategies, which can be selected with the `strategy` parameter. The ‘uniform’ strategy uses constant-width bins. The ‘quantile’ strategy uses the quantile values so that the bins are equally populated for each feature. The ‘kmeans’ strategy defines bins based on a k-means clustering procedure performed on each feature independently.
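As a sketch of how the strategies differ on a skewed toy feature (the values below are illustrative):

```
>>> X = np.array([[0.], [1.], [2.], [3.], [10.]])
>>> uniform = preprocessing.KBinsDiscretizer(
...     n_bins=2, encode='ordinal', strategy='uniform').fit(X)
>>> quantile = preprocessing.KBinsDiscretizer(
...     n_bins=2, encode='ordinal', strategy='quantile').fit(X)
>>> uniform.transform(X).ravel()   # bins of equal width over [0, 10]
array([0., 0., 0., 0., 1.])
>>> quantile.transform(X).ravel()  # bins holding roughly equal numbers of samples
array([0., 0., 1., 1., 1.])
```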
Be aware that one can specify custom bins by passing a callable defining the discretization strategy to [`FunctionTransformer`](generated/sklearn.preprocessing.functiontransformer#sklearn.preprocessing.FunctionTransformer "sklearn.preprocessing.FunctionTransformer"). For instance, we can use the Pandas function [`pandas.cut`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html#pandas.cut "(in pandas v1.5.1)"):
```
>>> import pandas as pd
>>> import numpy as np
>>> bins = [0, 1, 13, 20, 60, np.inf]
>>> labels = ['infant', 'kid', 'teen', 'adult', 'senior citizen']
>>> transformer = preprocessing.FunctionTransformer(
... pd.cut, kw_args={'bins': bins, 'labels': labels, 'retbins': False}
... )
>>> X = np.array([0.2, 2, 15, 25, 97])
>>> transformer.fit_transform(X)
['infant', 'kid', 'teen', 'adult', 'senior citizen']
Categories (5, object): ['infant' < 'kid' < 'teen' < 'adult' < 'senior citizen']
```
###
6.3.5.2. Feature binarization
**Feature binarization** is the process of **thresholding numerical features to get boolean values**. This can be useful for downstream probabilistic estimators that assume the input data is distributed according to a multi-variate [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution). For instance, this is the case for the [`BernoulliRBM`](generated/sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM "sklearn.neural_network.BernoulliRBM").
It is also common among the text processing community to use binary feature values (probably to simplify the probabilistic reasoning) even if normalized counts (a.k.a. term frequencies) or TF-IDF valued features often perform slightly better in practice.
As for the [`Normalizer`](generated/sklearn.preprocessing.normalizer#sklearn.preprocessing.Normalizer "sklearn.preprocessing.Normalizer"), the utility class [`Binarizer`](generated/sklearn.preprocessing.binarizer#sklearn.preprocessing.Binarizer "sklearn.preprocessing.Binarizer") is meant to be used in the early stages of [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline"). The `fit` method does nothing as each sample is treated independently of others:
```
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> binarizer = preprocessing.Binarizer().fit(X) # fit does nothing
>>> binarizer
Binarizer()
>>> binarizer.transform(X)
array([[1., 0., 1.],
[1., 0., 0.],
[0., 1., 0.]])
```
It is possible to adjust the threshold of the binarizer:
```
>>> binarizer = preprocessing.Binarizer(threshold=1.1)
>>> binarizer.transform(X)
array([[0., 0., 1.],
[1., 0., 0.],
[0., 0., 0.]])
```
As for the [`Normalizer`](generated/sklearn.preprocessing.normalizer#sklearn.preprocessing.Normalizer "sklearn.preprocessing.Normalizer") class, the preprocessing module provides a companion function [`binarize`](generated/sklearn.preprocessing.binarize#sklearn.preprocessing.binarize "sklearn.preprocessing.binarize") to be used when the transformer API is not necessary.
Note that the [`Binarizer`](generated/sklearn.preprocessing.binarizer#sklearn.preprocessing.Binarizer "sklearn.preprocessing.Binarizer") is similar to the [`KBinsDiscretizer`](generated/sklearn.preprocessing.kbinsdiscretizer#sklearn.preprocessing.KBinsDiscretizer "sklearn.preprocessing.KBinsDiscretizer") when `k = 2`, and when the bin edge is at the value `threshold`.
6.3.6. Imputation of missing values
------------------------------------
Tools for imputing missing values are discussed at [Imputation of missing values](impute#impute).
6.3.7. Generating polynomial features
--------------------------------------
Often it’s useful to add complexity to a model by considering nonlinear features of the input data. We show two possibilities that are both based on polynomials: The first one uses pure polynomials, the second one uses splines, i.e. piecewise polynomials.
###
6.3.7.1. Polynomial features
A simple and common approach is to use polynomial features, which can capture features’ higher-order and interaction terms. It is implemented in [`PolynomialFeatures`](generated/sklearn.preprocessing.polynomialfeatures#sklearn.preprocessing.PolynomialFeatures "sklearn.preprocessing.PolynomialFeatures"):
```
>>> import numpy as np
>>> from sklearn.preprocessing import PolynomialFeatures
>>> X = np.arange(6).reshape(3, 2)
>>> X
array([[0, 1],
[2, 3],
[4, 5]])
>>> poly = PolynomialFeatures(2)
>>> poly.fit_transform(X)
array([[ 1., 0., 1., 0., 0., 1.],
[ 1., 2., 3., 4., 6., 9.],
[ 1., 4., 5., 16., 20., 25.]])
```
The features of X have been transformed from \((X\_1, X\_2)\) to \((1, X\_1, X\_2, X\_1^2, X\_1X\_2, X\_2^2)\).
In some cases, only interaction terms among features are required, and they can be obtained with the setting `interaction_only=True`:
```
>>> X = np.arange(9).reshape(3, 3)
>>> X
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> poly = PolynomialFeatures(degree=3, interaction_only=True)
>>> poly.fit_transform(X)
array([[ 1., 0., 1., 2., 0., 0., 2., 0.],
[ 1., 3., 4., 5., 12., 15., 20., 60.],
[ 1., 6., 7., 8., 42., 48., 56., 336.]])
```
The features of X have been transformed from \((X\_1, X\_2, X\_3)\) to \((1, X\_1, X\_2, X\_3, X\_1X\_2, X\_1X\_3, X\_2X\_3, X\_1X\_2X\_3)\).
Note that polynomial features are used implicitly in [kernel methods](https://en.wikipedia.org/wiki/Kernel_method) (e.g., [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), [`KernelPCA`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA")) when using polynomial [Kernel functions](svm#svm-kernels).
See [Polynomial and Spline interpolation](../auto_examples/linear_model/plot_polynomial_interpolation#sphx-glr-auto-examples-linear-model-plot-polynomial-interpolation-py) for Ridge regression using created polynomial features.
###
6.3.7.2. Spline transformer
Another way to add nonlinear terms instead of pure polynomials of features is to generate spline basis functions for each feature with the [`SplineTransformer`](generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer"). Splines are piecewise polynomials, parametrized by their polynomial degree and the positions of the knots. The [`SplineTransformer`](generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer") implements a B-spline basis, cf. the references below.
Note
The [`SplineTransformer`](generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer") treats each feature separately, i.e. it won’t give you interaction terms.
Some of the advantages of splines over polynomials are:
* B-splines are very flexible and robust if you keep a fixed low degree, usually 3, and parsimoniously adapt the number of knots. Polynomials would need a higher degree, which leads to the next point.
* B-splines do not have oscillatory behaviour at the boundaries as polynomials do (the higher the degree, the worse). This is known as [Runge’s phenomenon](https://en.wikipedia.org/wiki/Runge%27s_phenomenon).
* B-splines provide good options for extrapolation beyond the boundaries, i.e. beyond the range of fitted values. Have a look at the option `extrapolation`.
* B-splines generate a feature matrix with a banded structure. For a single feature, every row contains only `degree + 1` non-zero elements, which occur consecutively and are even positive. This results in a matrix with good numerical properties, e.g. a low condition number, in sharp contrast to a matrix of polynomials, which goes under the name [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix). A low condition number is important for stable algorithms of linear models.
The following code snippet shows splines in action:
```
>>> import numpy as np
>>> from sklearn.preprocessing import SplineTransformer
>>> X = np.arange(5).reshape(5, 1)
>>> X
array([[0],
[1],
[2],
[3],
[4]])
>>> spline = SplineTransformer(degree=2, n_knots=3)
>>> spline.fit_transform(X)
array([[0.5 , 0.5 , 0. , 0. ],
[0.125, 0.75 , 0.125, 0. ],
[0. , 0.5 , 0.5 , 0. ],
[0. , 0.125, 0.75 , 0.125],
[0. , 0. , 0.5 , 0.5 ]])
```
As `X` is sorted, one can easily see the banded matrix output. Only the three middle diagonals are non-zero for `degree=2`. The higher the degree, the more the splines overlap.
Interestingly, a [`SplineTransformer`](generated/sklearn.preprocessing.splinetransformer#sklearn.preprocessing.SplineTransformer "sklearn.preprocessing.SplineTransformer") of `degree=0` is the same as [`KBinsDiscretizer`](generated/sklearn.preprocessing.kbinsdiscretizer#sklearn.preprocessing.KBinsDiscretizer "sklearn.preprocessing.KBinsDiscretizer") with `encode='onehot-dense'` and `n_bins = n_knots - 1` if `knots = strategy`.
6.3.8. Custom transformers
---------------------------
Often, you will want to convert an existing Python function into a transformer to assist in data cleaning or processing. You can implement a transformer from an arbitrary function with [`FunctionTransformer`](generated/sklearn.preprocessing.functiontransformer#sklearn.preprocessing.FunctionTransformer "sklearn.preprocessing.FunctionTransformer"). For example, to build a transformer that applies a log transformation in a pipeline, do:
```
>>> import numpy as np
>>> from sklearn.preprocessing import FunctionTransformer
>>> transformer = FunctionTransformer(np.log1p, validate=True)
>>> X = np.array([[0, 1], [2, 3]])
>>> # Since FunctionTransformer is no-op during fit, we can call transform directly
>>> transformer.transform(X)
array([[0. , 0.69314718],
[1.09861229, 1.38629436]])
```
You can ensure that `func` and `inverse_func` are the inverse of each other by setting `check_inverse=True` and calling `fit` before `transform`. Please note that if the check fails, a warning is raised; it can be turned into an error with a `filterwarnings` call:
```
>>> import warnings
>>> warnings.filterwarnings("error", message=".*check_inverse*.",
... category=UserWarning, append=False)
```
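For instance, here is a sketch of a round-trip-checked transformer; `np.log1p` and `np.expm1` are exact inverses of each other, so no warning is expected:

```
>>> transformer = FunctionTransformer(np.log1p, inverse_func=np.expm1,
...                                   validate=True, check_inverse=True)
>>> X = np.array([[0, 1], [2, 3]])
>>> _ = transformer.fit(X)  # the inverse check runs on a subset of X during fit
```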
For a full code example that demonstrates using a [`FunctionTransformer`](generated/sklearn.preprocessing.functiontransformer#sklearn.preprocessing.FunctionTransformer "sklearn.preprocessing.FunctionTransformer") to extract features from text data see [Column Transformer with Heterogeneous Data Sources](../auto_examples/compose/plot_column_transformer#sphx-glr-auto-examples-compose-plot-column-transformer-py) and [Time-related feature engineering](../auto_examples/applications/plot_cyclical_feature_engineering#sphx-glr-auto-examples-applications-plot-cyclical-feature-engineering-py).
scikit_learn 2.8. Density Estimation 2.8. Density Estimation
=======================
Density estimation walks the line between unsupervised learning, feature engineering, and data modeling. Some of the most popular and useful density estimation techniques are mixture models such as Gaussian Mixtures ([`GaussianMixture`](generated/sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture "sklearn.mixture.GaussianMixture")), and neighbor-based approaches such as the kernel density estimate ([`KernelDensity`](generated/sklearn.neighbors.kerneldensity#sklearn.neighbors.KernelDensity "sklearn.neighbors.KernelDensity")). Gaussian Mixtures are discussed more fully in the context of [clustering](clustering#clustering), because the technique is also useful as an unsupervised clustering scheme.
Density estimation is a very simple concept, and most people are already familiar with one common density estimation technique: the histogram.
2.8.1. Density Estimation: Histograms
--------------------------------------
A histogram is a simple visualization of data where bins are defined, and the number of data points within each bin is tallied. An example of a histogram can be seen in the upper-left panel of the following figure:
A major problem with histograms, however, is that the choice of binning can have a disproportionate effect on the resulting visualization. Consider the upper-right panel of the above figure. It shows a histogram over the same data, with the bins shifted right. The results of the two visualizations look entirely different, and might lead to different interpretations of the data.
Intuitively, one can also think of a histogram as a stack of blocks, one block per point. By stacking the blocks in the appropriate grid space, we recover the histogram. But what if, instead of stacking the blocks on a regular grid, we center each block on the point it represents and sum the total height at each location? This idea leads to the lower-left visualization. It is perhaps not as clean as a histogram, but the fact that the data drive the block locations means that it is a much better representation of the underlying data.
This visualization is an example of a *kernel density estimation*, in this case with a top-hat kernel (i.e. a square block at each point). We can recover a smoother distribution by using a smoother kernel. The bottom-right plot shows a Gaussian kernel density estimate, in which each point contributes a Gaussian curve to the total. The result is a smooth density estimate which is derived from the data, and functions as a powerful non-parametric model of the distribution of points.
2.8.2. Kernel Density Estimation
---------------------------------
Kernel density estimation in scikit-learn is implemented in the [`KernelDensity`](generated/sklearn.neighbors.kerneldensity#sklearn.neighbors.KernelDensity "sklearn.neighbors.KernelDensity") estimator, which uses the Ball Tree or KD Tree for efficient queries (see [Nearest Neighbors](neighbors#neighbors) for a discussion of these). Though the above example uses a 1D data set for simplicity, kernel density estimation can be performed in any number of dimensions, though in practice the curse of dimensionality causes its performance to degrade in high dimensions.
In the following figure, 100 points are drawn from a bimodal distribution, and the kernel density estimates are shown for three choices of kernels:
It’s clear how the kernel shape affects the smoothness of the resulting distribution. The scikit-learn kernel density estimator can be used as follows:
```
>>> from sklearn.neighbors import KernelDensity
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)
>>> kde.score_samples(X)
array([-0.41075698, -0.41075698, -0.41076071, -0.41075698, -0.41075698,
-0.41076071])
```
Here we have used `kernel='gaussian'`, as seen above. Mathematically, a kernel is a positive function \(K(x;h)\) which is controlled by the bandwidth parameter \(h\). Given this kernel form, the density estimate at a point \(y\) within a group of points \(x\_i; i=1\cdots N\) is given by:
\[\rho\_K(y) = \sum\_{i=1}^{N} K(y - x\_i; h)\] The bandwidth here acts as a smoothing parameter, controlling the tradeoff between bias and variance in the result. A large bandwidth leads to a very smooth (i.e. high-bias) density distribution. A small bandwidth leads to an unsmooth (i.e. high-variance) density distribution.
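To make this tradeoff concrete, the following sketch (not part of the official examples; the data and bandwidth values are arbitrary) fits the same sample with a small and a large bandwidth and compares the log-densities returned by `score_samples`:
```
import numpy as np
from sklearn.neighbors import KernelDensity

# a small 1D sample and an evaluation grid; the values are arbitrary
X = np.array([[-1.0], [-0.5], [0.0], [0.3], [1.2]])
grid = np.linspace(-3, 3, 7).reshape(-1, 1)

# small bandwidth: the estimate follows individual points closely (high variance)
kde_narrow = KernelDensity(kernel='gaussian', bandwidth=0.1).fit(X)
# large bandwidth: the estimate is heavily smoothed (high bias)
kde_wide = KernelDensity(kernel='gaussian', bandwidth=1.0).fit(X)

print(kde_narrow.score_samples(grid))  # log-density values on the grid
print(kde_wide.score_samples(grid))
```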
[`KernelDensity`](generated/sklearn.neighbors.kerneldensity#sklearn.neighbors.KernelDensity "sklearn.neighbors.KernelDensity") implements several common kernel forms, which are shown in the following figure:
The form of these kernels is as follows:
* Gaussian kernel (`kernel = 'gaussian'`)
\(K(x; h) \propto \exp(- \frac{x^2}{2h^2} )\)
* Tophat kernel (`kernel = 'tophat'`)
\(K(x; h) \propto 1\) if \(x < h\)
* Epanechnikov kernel (`kernel = 'epanechnikov'`)
\(K(x; h) \propto 1 - \frac{x^2}{h^2}\)
* Exponential kernel (`kernel = 'exponential'`)
\(K(x; h) \propto \exp(-x/h)\)
* Linear kernel (`kernel = 'linear'`)
\(K(x; h) \propto 1 - x/h\) if \(x < h\)
* Cosine kernel (`kernel = 'cosine'`)
\(K(x; h) \propto \cos(\frac{\pi x}{2h})\) if \(x < h\)
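Selecting one of the kernels listed above is just a matter of passing its name to the `kernel` parameter; a minimal sketch over arbitrary data:
```
import numpy as np
from sklearn.neighbors import KernelDensity

X = np.array([[-1.0], [0.0], [0.5], [2.0]])  # arbitrary 1D data

for kernel in ['gaussian', 'tophat', 'epanechnikov',
               'exponential', 'linear', 'cosine']:
    kde = KernelDensity(kernel=kernel, bandwidth=0.5).fit(X)
    # log-density of the training points under each kernel form
    print(kernel, kde.score_samples(X))
```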
The kernel density estimator can be used with any of the valid distance metrics (see [`DistanceMetric`](generated/sklearn.metrics.distancemetric#sklearn.metrics.DistanceMetric "sklearn.metrics.DistanceMetric") for a list of available metrics), though the results are properly normalized only for the Euclidean metric. One particularly useful metric is the [Haversine distance](https://en.wikipedia.org/wiki/Haversine_formula) which measures the angular distance between points on a sphere. Here is an example of using a kernel density estimate for a visualization of geospatial data, in this case the distribution of observations of two different species on the South American continent:
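A minimal sketch of such a geospatial estimate (the coordinates below are made up; note that the haversine metric expects latitude/longitude in radians and is typically used with the Ball Tree):
```
import numpy as np
from sklearn.neighbors import KernelDensity

# hypothetical (latitude, longitude) observations in degrees
latlon_deg = np.array([[-5.0, -60.0], [-5.5, -61.0], [-20.0, -45.0]])
latlon_rad = np.radians(latlon_deg)  # convert to radians for the haversine metric

kde = KernelDensity(bandwidth=0.05, metric='haversine',
                    kernel='gaussian', algorithm='ball_tree').fit(latlon_rad)
log_density = kde.score_samples(latlon_rad)
```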
One other useful application of kernel density estimation is to learn a non-parametric generative model of a dataset in order to efficiently draw new samples from this generative model. Here is an example of using this process to create a new set of hand-written digits, using a Gaussian kernel learned on a PCA projection of the data:
The “new” data consists of linear combinations of the input data, with weights probabilistically drawn given the KDE model.
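In code, this boils down to fitting a `KernelDensity` model on the (possibly projected) data and calling its `sample` method; a simplified sketch without the PCA step, using random stand-in data:
```
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 2))  # stand-in for the projected training data

kde = KernelDensity(kernel='gaussian', bandwidth=0.3).fit(X)
new_samples = kde.sample(n_samples=5, random_state=0)  # draw new points from the model
```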
scikit_learn 6.4. Imputation of missing values 6.4. Imputation of missing values
=================================
For various reasons, many real world datasets contain missing values, often encoded as blanks, NaNs or other placeholders. Such datasets, however, are incompatible with scikit-learn estimators, which assume that all values in an array are numerical and that all of them carry meaning. A basic strategy to use incomplete datasets is to discard entire rows and/or columns containing missing values. However, this comes at the price of losing data which may be valuable (even though incomplete). A better strategy is to impute the missing values, i.e., to infer them from the known part of the data. See the glossary entry on [imputation](https://scikit-learn.org/1.1/glossary.html#term-imputation).
6.4.1. Univariate vs. Multivariate Imputation
----------------------------------------------
One type of imputation algorithm is univariate, which imputes values in the i-th feature dimension using only non-missing values in that feature dimension (e.g. `impute.SimpleImputer`). By contrast, multivariate imputation algorithms use the entire set of available feature dimensions to estimate the missing values (e.g. `impute.IterativeImputer`).
6.4.2. Univariate feature imputation
-------------------------------------
The [`SimpleImputer`](generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer") class provides basic strategies for imputing missing values. Missing values can be imputed with a provided constant value, or using the statistics (mean, median or most frequent) of each column in which the missing values are located. This class also allows for different missing values encodings.
The following snippet demonstrates how to replace missing values, encoded as `np.nan`, using the mean value of the columns (axis 0) that contain the missing values:
```
>>> import numpy as np
>>> from sklearn.impute import SimpleImputer
>>> imp = SimpleImputer(missing_values=np.nan, strategy='mean')
>>> imp.fit([[1, 2], [np.nan, 3], [7, 6]])
SimpleImputer()
>>> X = [[np.nan, 2], [6, np.nan], [7, 6]]
>>> print(imp.transform(X))
[[4. 2. ]
[6. 3.666...]
[7. 6. ]]
```
The [`SimpleImputer`](generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer") class also supports sparse matrices:
```
>>> import scipy.sparse as sp
>>> X = sp.csc_matrix([[1, 2], [0, -1], [8, 4]])
>>> imp = SimpleImputer(missing_values=-1, strategy='mean')
>>> imp.fit(X)
SimpleImputer(missing_values=-1)
>>> X_test = sp.csc_matrix([[-1, 2], [6, -1], [7, 6]])
>>> print(imp.transform(X_test).toarray())
[[3. 2.]
[6. 3.]
[7. 6.]]
```
Note that this format is not meant to be used to implicitly store missing values in the matrix because it would densify it at transform time. Missing values encoded by 0 must be used with dense input.
The [`SimpleImputer`](generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer") class also supports categorical data represented as string values or pandas categoricals when using the `'most_frequent'` or `'constant'` strategy:
```
>>> import pandas as pd
>>> df = pd.DataFrame([["a", "x"],
... [np.nan, "y"],
... ["a", np.nan],
... ["b", "y"]], dtype="category")
...
>>> imp = SimpleImputer(strategy="most_frequent")
>>> print(imp.fit_transform(df))
[['a' 'x']
['a' 'y']
['a' 'y']
['b' 'y']]
```
6.4.3. Multivariate feature imputation
---------------------------------------
A more sophisticated approach is to use the [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") class, which models each feature with missing values as a function of other features, and uses that estimate for imputation. It does so in an iterated round-robin fashion: at each step, a feature column is designated as output `y` and the other feature columns are treated as inputs `X`. A regressor is fit on `(X, y)` for known `y`. Then, the regressor is used to predict the missing values of `y`. This is done for each feature in an iterative fashion, and then is repeated for `max_iter` imputation rounds. The results of the final imputation round are returned.
Note
This estimator is still **experimental** for now: default parameters or details of behaviour might change without any deprecation cycle. Resolving the following issues would help stabilize [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer"): convergence criteria ([#14338](https://github.com/scikit-learn/scikit-learn/issues/14338)), default estimators ([#13286](https://github.com/scikit-learn/scikit-learn/issues/13286)), and use of random state ([#15611](https://github.com/scikit-learn/scikit-learn/issues/15611)). To use it, you need to explicitly import `enable_iterative_imputer`.
```
>>> import numpy as np
>>> from sklearn.experimental import enable_iterative_imputer
>>> from sklearn.impute import IterativeImputer
>>> imp = IterativeImputer(max_iter=10, random_state=0)
>>> imp.fit([[1, 2], [3, 6], [4, 8], [np.nan, 3], [7, np.nan]])
IterativeImputer(random_state=0)
>>> X_test = [[np.nan, 2], [6, np.nan], [np.nan, 6]]
>>> # the model learns that the second feature is double the first
>>> print(np.round(imp.transform(X_test)))
[[ 1. 2.]
[ 6. 12.]
[ 3. 6.]]
```
Both [`SimpleImputer`](generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer") and [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") can be used in a Pipeline as a way to build a composite estimator that supports imputation. See [Imputing missing values before building an estimator](../auto_examples/impute/plot_missing_values#sphx-glr-auto-examples-impute-plot-missing-values-py).
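A minimal sketch of such a composite estimator (the choice of regressor and the toy data are arbitrary):
```
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import BayesianRidge
from sklearn.pipeline import make_pipeline

X = [[1, 2], [np.nan, 3], [7, 6], [4, np.nan]]
y = [10, 12, 25, 17]  # made-up targets

# impute with the column mean, then fit a regressor on the completed data
model = make_pipeline(SimpleImputer(strategy='mean'), BayesianRidge())
model.fit(X, y)
print(model.predict([[np.nan, 5]]))
```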
###
6.4.3.1. Flexibility of IterativeImputer
There are many well-established imputation packages in the R data science ecosystem: Amelia, mi, mice, missForest, etc. missForest is popular, and turns out to be a particular instance of different sequential imputation algorithms that can all be implemented with [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") by passing in different regressors to be used for predicting missing feature values. In the case of missForest, this regressor is a Random Forest. See [Imputing missing values with variants of IterativeImputer](../auto_examples/impute/plot_iterative_imputer_variants_comparison#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py).
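For instance, a missForest-like imputer might be sketched as follows (the hyperparameters are placeholders, not recommendations):
```
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer

X = [[1, 2], [3, 6], [4, 8], [np.nan, 3], [7, np.nan]]

# use a random forest as the per-feature regressor, as missForest does
imp = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=10, random_state=0),
    max_iter=10, random_state=0)
print(imp.fit_transform(X))
```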
###
6.4.3.2. Multiple vs. Single Imputation
In the statistics community, it is common practice to perform multiple imputations, generating, for example, `m` separate imputations for a single feature matrix. Each of these `m` imputations is then put through the subsequent analysis pipeline (e.g. feature engineering, clustering, regression, classification). The `m` final analysis results (e.g. held-out validation errors) allow the data scientist to obtain understanding of how analytic results may differ as a consequence of the inherent uncertainty caused by the missing values. The above practice is called multiple imputation.
Our implementation of [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") was inspired by the R MICE package (Multivariate Imputation by Chained Equations) [[1]](#id3), but differs from it by returning a single imputation instead of multiple imputations. However, [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") can also be used for multiple imputations by applying it repeatedly to the same dataset with different random seeds when `sample_posterior=True`. See [[2]](#id4), chapter 4 for more discussion on multiple vs. single imputations.
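A rough sketch of that repeated-application procedure (the number of imputations `m` and the downstream analysis are left open):
```
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = [[1, 2], [3, 6], [4, 8], [np.nan, 3], [7, np.nan]]

imputations = []
for seed in range(5):  # m = 5 imputations, chosen arbitrarily
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    imputations.append(imp.fit_transform(X))
# each element of `imputations` would then go through the same analysis pipeline
```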
It is still an open problem as to how useful single vs. multiple imputation is in the context of prediction and classification when the user is not interested in measuring uncertainty due to missing values.
Note that a call to the `transform` method of [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") is not allowed to change the number of samples. Therefore multiple imputations cannot be achieved by a single call to `transform`.
6.4.4. References
------------------
6.4.5. Nearest neighbors imputation
------------------------------------
The [`KNNImputer`](generated/sklearn.impute.knnimputer#sklearn.impute.KNNImputer "sklearn.impute.KNNImputer") class provides imputation for filling in missing values using the k-Nearest Neighbors approach. By default, a Euclidean distance metric that supports missing values, `nan_euclidean_distances`, is used to find the nearest neighbors. Each missing feature is imputed using values from the `n_neighbors` nearest neighbors that have a value for the feature. The features of the neighbors are averaged uniformly or weighted by distance to each neighbor. If a sample has more than one feature missing, then the neighbors for that sample can be different depending on the particular feature being imputed. When the number of available neighbors is less than `n_neighbors` and there are no defined distances to the training set, the training set average for that feature is used during imputation. If there is at least one neighbor with a defined distance, the weighted or unweighted average of the remaining neighbors will be used during imputation. If a feature is always missing in training, it is removed during `transform`. For more information on the methodology, see ref. [[OL2001]](#ol2001).
The following snippet demonstrates how to replace missing values, encoded as `np.nan`, using the mean feature value of the two nearest neighbors of samples with missing values:
```
>>> import numpy as np
>>> from sklearn.impute import KNNImputer
>>> nan = np.nan
>>> X = [[1, 2, nan], [3, 4, 3], [nan, 6, 5], [8, 8, 7]]
>>> imputer = KNNImputer(n_neighbors=2, weights="uniform")
>>> imputer.fit_transform(X)
array([[1. , 2. , 4. ],
[3. , 4. , 3. ],
[5.5, 6. , 5. ],
[8. , 8. , 7. ]])
```
[[OL2001](#id5)] Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown, Trevor Hastie, Robert Tibshirani, David Botstein and Russ B. Altman, Missing value estimation methods for DNA microarrays, BIOINFORMATICS Vol. 17 no. 6, 2001 Pages 520-525.
6.4.6. Marking imputed values
------------------------------
The [`MissingIndicator`](generated/sklearn.impute.missingindicator#sklearn.impute.MissingIndicator "sklearn.impute.MissingIndicator") transformer is useful to transform a dataset into the corresponding binary matrix indicating the presence of missing values in the dataset. This transformation is useful in conjunction with imputation. When using imputation, preserving the information about which values had been missing can be informative. Note that both the [`SimpleImputer`](generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer") and [`IterativeImputer`](generated/sklearn.impute.iterativeimputer#sklearn.impute.IterativeImputer "sklearn.impute.IterativeImputer") have the boolean parameter `add_indicator` (`False` by default) which, when set to `True`, provides a convenient way of stacking the output of the [`MissingIndicator`](generated/sklearn.impute.missingindicator#sklearn.impute.MissingIndicator "sklearn.impute.MissingIndicator") transformer with the output of the imputer.
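A short sketch of the `add_indicator` shortcut (the data are made up); the transformed output contains the imputed columns followed by the indicator columns:
```
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan]])

imp = SimpleImputer(strategy='mean', add_indicator=True)
print(imp.fit_transform(X))  # 2 imputed columns + 2 indicator columns
```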
`NaN` is usually used as the placeholder for missing values. However, it enforces the data type to be float. The parameter `missing_values` allows specifying another placeholder, such as an integer. In the following example, we will use `-1` as the missing value:
```
>>> from sklearn.impute import MissingIndicator
>>> X = np.array([[-1, -1, 1, 3],
... [4, -1, 0, -1],
... [8, -1, 1, 0]])
>>> indicator = MissingIndicator(missing_values=-1)
>>> mask_missing_values_only = indicator.fit_transform(X)
>>> mask_missing_values_only
array([[ True, True, False],
[False, True, True],
[False, True, False]])
```
The `features` parameter is used to choose the features for which the mask is constructed. By default, it is `'missing-only'` which returns the imputer mask of the features containing missing values at `fit` time:
```
>>> indicator.features_
array([0, 1, 3])
```
The `features` parameter can be set to `'all'` to return all features whether or not they contain missing values:
```
>>> indicator = MissingIndicator(missing_values=-1, features="all")
>>> mask_all = indicator.fit_transform(X)
>>> mask_all
array([[ True, True, False, False],
[False, True, False, True],
[False, True, False, False]])
>>> indicator.features_
array([0, 1, 2, 3])
```
When using the [`MissingIndicator`](generated/sklearn.impute.missingindicator#sklearn.impute.MissingIndicator "sklearn.impute.MissingIndicator") in a `Pipeline`, be sure to use the `FeatureUnion` or `ColumnTransformer` to add the indicator features to the regular features. First we obtain the `iris` dataset, and add some missing values to it.
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.impute import SimpleImputer, MissingIndicator
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import FeatureUnion, make_pipeline
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> mask = np.random.randint(0, 2, size=X.shape).astype(bool)
>>> X[mask] = np.nan
>>> X_train, X_test, y_train, _ = train_test_split(X, y, test_size=100,
... random_state=0)
```
Now we create a `FeatureUnion`. All features will be imputed using [`SimpleImputer`](generated/sklearn.impute.simpleimputer#sklearn.impute.SimpleImputer "sklearn.impute.SimpleImputer"), in order to enable classifiers to work with this data. Additionally, it adds the indicator variables from [`MissingIndicator`](generated/sklearn.impute.missingindicator#sklearn.impute.MissingIndicator "sklearn.impute.MissingIndicator").
```
>>> transformer = FeatureUnion(
... transformer_list=[
... ('features', SimpleImputer(strategy='mean')),
... ('indicators', MissingIndicator())])
>>> transformer = transformer.fit(X_train, y_train)
>>> results = transformer.transform(X_test)
>>> results.shape
(100, 8)
```
Of course, we cannot use the transformer to make any predictions. We should wrap this in a `Pipeline` with a classifier (e.g., a `DecisionTreeClassifier`) to be able to make predictions.
```
>>> clf = make_pipeline(transformer, DecisionTreeClassifier())
>>> clf = clf.fit(X_train, y_train)
>>> results = clf.predict(X_test)
>>> results.shape
(100,)
```
6.4.7. Estimators that handle NaN values
-----------------------------------------
Some estimators are designed to handle NaN values without preprocessing. Below is the list of these estimators, classified by type (cluster, regressor, classifier, transformer):
* **Estimators that allow NaN values for type** `regressor`**:**
+ [HistGradientBoostingRegressor](generated/sklearn.ensemble.histgradientboostingregressor)
* **Estimators that allow NaN values for type** `classifier`**:**
+ [HistGradientBoostingClassifier](generated/sklearn.ensemble.histgradientboostingclassifier)
* **Estimators that allow NaN values for type** `transformer`**:**
+ [IterativeImputer](generated/sklearn.impute.iterativeimputer)
+ [KNNImputer](generated/sklearn.impute.knnimputer)
+ [MaxAbsScaler](generated/sklearn.preprocessing.maxabsscaler)
+ [MinMaxScaler](generated/sklearn.preprocessing.minmaxscaler)
+ [MissingIndicator](generated/sklearn.impute.missingindicator)
+ [PowerTransformer](generated/sklearn.preprocessing.powertransformer)
+ [QuantileTransformer](generated/sklearn.preprocessing.quantiletransformer)
+ [RobustScaler](generated/sklearn.preprocessing.robustscaler)
+ [SimpleImputer](generated/sklearn.impute.simpleimputer)
+ [StandardScaler](generated/sklearn.preprocessing.standardscaler)
+ [VarianceThreshold](generated/sklearn.feature_selection.variancethreshold)
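For example, `HistGradientBoostingClassifier` can be fit on data containing NaN values without any imputation step (a small sketch on made-up data):
```
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])

# NaN entries are handled natively by the histogram-based gradient boosting trees
clf = HistGradientBoostingClassifier(max_iter=10).fit(X, y)
print(clf.predict([[np.nan, 3.5]]))
```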
scikit_learn 6.2. Feature extraction 6.2. Feature extraction
=======================
The [`sklearn.feature_extraction`](classes#module-sklearn.feature_extraction "sklearn.feature_extraction") module can be used to extract features in a format supported by machine learning algorithms from datasets consisting of formats such as text and image.
Note
Feature extraction is very different from [Feature selection](feature_selection#feature-selection): the former consists in transforming arbitrary data, such as text or images, into numerical features usable for machine learning. The latter is a machine learning technique applied on these features.
6.2.1. Loading features from dicts
-----------------------------------
The class [`DictVectorizer`](generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") can be used to convert feature arrays represented as lists of standard Python `dict` objects to the NumPy/SciPy representation used by scikit-learn estimators.
While not particularly fast to process, Python’s `dict` has the advantages of being convenient to use, being sparse (absent features need not be stored) and storing feature names in addition to values.
[`DictVectorizer`](generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") implements what is called one-of-K or “one-hot” coding for categorical (aka nominal, discrete) features. Categorical features are “attribute-value” pairs where the value is restricted to a list of discrete possibilities without ordering (e.g. topic identifiers, types of objects, tags, names…).
In the following, “city” is a categorical attribute while “temperature” is a traditional numerical feature:
```
>>> measurements = [
... {'city': 'Dubai', 'temperature': 33.},
... {'city': 'London', 'temperature': 12.},
... {'city': 'San Francisco', 'temperature': 18.},
... ]
>>> from sklearn.feature_extraction import DictVectorizer
>>> vec = DictVectorizer()
>>> vec.fit_transform(measurements).toarray()
array([[ 1., 0., 0., 33.],
[ 0., 1., 0., 12.],
[ 0., 0., 1., 18.]])
>>> vec.get_feature_names_out()
array(['city=Dubai', 'city=London', 'city=San Francisco', 'temperature'], ...)
```
[`DictVectorizer`](generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") accepts multiple string values for one feature, like, e.g., multiple categories for a movie.
Assume a database classifies each movie using some categories (not mandatory) and its year of release.
```
>>> movie_entry = [{'category': ['thriller', 'drama'], 'year': 2003},
... {'category': ['animation', 'family'], 'year': 2011},
... {'year': 1974}]
>>> vec.fit_transform(movie_entry).toarray()
array([[0.000e+00, 1.000e+00, 0.000e+00, 1.000e+00, 2.003e+03],
[1.000e+00, 0.000e+00, 1.000e+00, 0.000e+00, 2.011e+03],
[0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 1.974e+03]])
>>> vec.get_feature_names_out()
array(['category=animation', 'category=drama', 'category=family',
'category=thriller', 'year'], ...)
>>> vec.transform({'category': ['thriller'],
... 'unseen_feature': '3'}).toarray()
array([[0., 0., 0., 1., 0.]])
```
[`DictVectorizer`](generated/sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer") is also a useful representation transformation for training sequence classifiers in Natural Language Processing models that typically work by extracting feature windows around a particular word of interest.
For example, suppose that we have a first algorithm that extracts Part of Speech (PoS) tags that we want to use as complementary tags for training a sequence classifier (e.g. a chunker). The following dict could be such a window of features extracted around the word ‘sat’ in the sentence ‘The cat sat on the mat.’:
```
>>> pos_window = [
... {
... 'word-2': 'the',
... 'pos-2': 'DT',
... 'word-1': 'cat',
... 'pos-1': 'NN',
... 'word+1': 'on',
... 'pos+1': 'PP',
... },
... # in a real application one would extract many such dictionaries
... ]
```
This description can be vectorized into a sparse two-dimensional matrix suitable for feeding into a classifier (maybe after being piped into a [`TfidfTransformer`](generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") for normalization):
```
>>> vec = DictVectorizer()
>>> pos_vectorized = vec.fit_transform(pos_window)
>>> pos_vectorized
<1x6 sparse matrix of type '<... 'numpy.float64'>'
with 6 stored elements in Compressed Sparse ... format>
>>> pos_vectorized.toarray()
array([[1., 1., 1., 1., 1., 1.]])
>>> vec.get_feature_names_out()
array(['pos+1=PP', 'pos-1=NN', 'pos-2=DT', 'word+1=on', 'word-1=cat',
'word-2=the'], ...)
```
As you can imagine, if one extracts such a context around each individual word of a corpus of documents the resulting matrix will be very wide (many one-hot-features) with most of them being valued to zero most of the time. So as to make the resulting data structure able to fit in memory the `DictVectorizer` class uses a `scipy.sparse` matrix by default instead of a `numpy.ndarray`.
6.2.2. Feature hashing
-----------------------
The class [`FeatureHasher`](generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") is a high-speed, low-memory vectorizer that uses a technique known as [feature hashing](https://en.wikipedia.org/wiki/Feature_hashing), or the “hashing trick”. Instead of building a hash table of the features encountered in training, as the vectorizers do, instances of [`FeatureHasher`](generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") apply a hash function to the features to determine their column index in sample matrices directly. The result is increased speed and reduced memory usage, at the expense of inspectability; the hasher does not remember what the input features looked like and has no `inverse_transform` method.
Since the hash function might cause collisions between (unrelated) features, a signed hash function is used and the sign of the hash value determines the sign of the value stored in the output matrix for a feature. This way, collisions are likely to cancel out rather than accumulate error, and the expected mean of any output feature’s value is zero. This mechanism is enabled by default with `alternate_sign=True` and is particularly useful for small hash table sizes (`n_features < 10000`). For large hash table sizes, it can be disabled, to allow the output to be passed to estimators like [`MultinomialNB`](generated/sklearn.naive_bayes.multinomialnb#sklearn.naive_bayes.MultinomialNB "sklearn.naive_bayes.MultinomialNB") or [`chi2`](generated/sklearn.feature_selection.chi2#sklearn.feature_selection.chi2 "sklearn.feature_selection.chi2") feature selectors that expect non-negative inputs.
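As a sketch, disabling the alternating sign to obtain non-negative output might look like this (the feature strings are arbitrary):
```
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=2**18, alternate_sign=False,
                       input_type='string')
X = hasher.transform([['dog', 'cat', 'dog'], ['bird']])
# all stored values are now non-negative, so X can be fed to e.g. MultinomialNB
print(X.min() >= 0)
```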
[`FeatureHasher`](generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") accepts either mappings (like Python’s `dict` and its variants in the `collections` module), `(feature, value)` pairs, or strings, depending on the constructor parameter `input_type`. Mappings are treated as lists of `(feature, value)` pairs, while single strings have an implicit value of 1, so `['feat1', 'feat2', 'feat3']` is interpreted as `[('feat1', 1), ('feat2', 1), ('feat3', 1)]`. If a single feature occurs multiple times in a sample, the associated values will be summed (so `('feat', 2)` and `('feat', 3.5)` become `('feat', 5.5)`). The output from [`FeatureHasher`](generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") is always a `scipy.sparse` matrix in the CSR format.
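A minimal sketch of the mapping and `(feature, value)` pair input types (the feature names and values are arbitrary):
```
from sklearn.feature_extraction import FeatureHasher

# mappings (the default input_type='dict')
h_dict = FeatureHasher(n_features=8, input_type='dict')
X1 = h_dict.transform([{'dog': 1, 'cat': 2}, {'bird': 4}])

# explicit (feature, value) pairs
h_pair = FeatureHasher(n_features=8, input_type='pair')
X2 = h_pair.transform([[('dog', 1), ('cat', 2)], [('bird', 4)]])

print(X1.toarray())
print(X2.toarray())
```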
Feature hashing can be employed in document classification, but unlike [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer"), [`FeatureHasher`](generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") does not do word splitting or any other preprocessing except Unicode-to-UTF-8 encoding; see [Vectorizing a large text corpus with the hashing trick](#hashing-vectorizer), below, for a combined tokenizer/hasher.
As an example, consider a word-level natural language processing task that needs features extracted from `(token, part_of_speech)` pairs. One could use a Python generator function to extract features:
```
def token_features(token, part_of_speech):
if token.isdigit():
yield "numeric"
else:
yield "token={}".format(token.lower())
yield "token,pos={},{}".format(token, part_of_speech)
if token[0].isupper():
yield "uppercase_initial"
if token.isupper():
yield "all_uppercase"
yield "pos={}".format(part_of_speech)
```
Then, the `raw_X` to be fed to `FeatureHasher.transform` can be constructed using:
```
raw_X = (token_features(tok, pos_tagger(tok)) for tok in corpus)
```
and fed to a hasher with:
```
hasher = FeatureHasher(input_type='string')
X = hasher.transform(raw_X)
```
to get a `scipy.sparse` matrix `X`.
Note the use of a generator comprehension, which introduces laziness into the feature extraction: tokens are only processed on demand from the hasher.
###
6.2.2.1. Implementation details
[`FeatureHasher`](generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") uses the signed 32-bit variant of MurmurHash3. As a result (and because of limitations in `scipy.sparse`), the maximum number of features supported is currently \(2^{31} - 1\).
The original formulation of the hashing trick by Weinberger et al. used two separate hash functions \(h\) and \(\xi\) to determine the column index and sign of a feature, respectively. The present implementation works under the assumption that the sign bit of MurmurHash3 is independent of its other bits.
Since a simple modulo is used to transform the hash function to a column index, it is advisable to use a power of two as the `n_features` parameter; otherwise the features will not be mapped evenly to the columns.
6.2.3. Text feature extraction
-------------------------------
###
6.2.3.1. The Bag of Words representation
Text Analysis is a major application field for machine learning algorithms. However the raw data, a sequence of symbols, cannot be fed directly to the algorithms themselves as most of them expect numerical feature vectors with a fixed size rather than the raw text documents with variable length.
In order to address this, scikit-learn provides utilities for the most common ways to extract numerical features from text content, namely:
* **tokenizing** strings and giving an integer id for each possible token, for instance by using white-spaces and punctuation as token separators.
* **counting** the occurrences of tokens in each document.
* **normalizing** and weighting with diminishing importance tokens that occur in the majority of samples / documents.
In this scheme, features and samples are defined as follows:
* each **individual token occurrence frequency** (normalized or not) is treated as a **feature**.
* the vector of all the token frequencies for a given **document** is considered a multivariate **sample**.
A corpus of documents can thus be represented by a matrix with one row per document and one column per token (e.g. word) occurring in the corpus.
We call **vectorization** the general process of turning a collection of text documents into numerical feature vectors. This specific strategy (tokenization, counting and normalization) is called the **Bag of Words** or “Bag of n-grams” representation. Documents are described by word occurrences while completely ignoring the relative position information of the words in the document.
###
6.2.3.2. Sparsity
As most documents will typically use a very small subset of the words used in the corpus, the resulting matrix will have many feature values that are zeros (typically more than 99% of them).
For instance a collection of 10,000 short text documents (such as emails) will use a vocabulary with a size in the order of 100,000 unique words in total while each document will use 100 to 1000 unique words individually.
In order to be able to store such a matrix in memory but also to speed up algebraic matrix / vector operations, implementations will typically use a sparse representation such as the implementations available in the `scipy.sparse` package.
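The degree of sparsity can be checked directly on such a matrix; for instance, with a toy document-term matrix (the counts below are made up):
```
import numpy as np
import scipy.sparse as sp

# a toy document-term count matrix stored in CSR format
X = sp.csr_matrix(np.array([[0, 1, 0, 0, 2],
                            [1, 0, 0, 0, 0],
                            [0, 0, 0, 3, 0]]))
density = X.nnz / (X.shape[0] * X.shape[1])
print(X.nnz, density)  # number of stored elements and fraction of non-zeros
```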
###
6.2.3.3. Common Vectorizer usage
[`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") implements both tokenization and occurrence counting in a single class:
```
>>> from sklearn.feature_extraction.text import CountVectorizer
```
This model has many parameters, however the default values are quite reasonable (please see the [reference documentation](classes#text-feature-extraction-ref) for the details):
```
>>> vectorizer = CountVectorizer()
>>> vectorizer
CountVectorizer()
```
Let’s use it to tokenize and count the word occurrences of a minimalistic corpus of text documents:
```
>>> corpus = [
... 'This is the first document.',
... 'This is the second second document.',
... 'And the third one.',
... 'Is this the first document?',
... ]
>>> X = vectorizer.fit_transform(corpus)
>>> X
<4x9 sparse matrix of type '<... 'numpy.int64'>'
with 19 stored elements in Compressed Sparse ... format>
```
The default configuration tokenizes the string by extracting words of at least 2 letters. The specific function that does this step can be requested explicitly:
```
>>> analyze = vectorizer.build_analyzer()
>>> analyze("This is a text document to analyze.") == (
... ['this', 'is', 'text', 'document', 'to', 'analyze'])
True
```
Each term found by the analyzer during the fit is assigned a unique integer index corresponding to a column in the resulting matrix. This interpretation of the columns can be retrieved as follows:
```
>>> vectorizer.get_feature_names_out()
array(['and', 'document', 'first', 'is', 'one', 'second', 'the',
'third', 'this'], ...)
>>> X.toarray()
array([[0, 1, 1, 1, 0, 0, 1, 0, 1],
[0, 1, 0, 1, 0, 2, 1, 0, 1],
[1, 0, 0, 0, 1, 0, 1, 1, 0],
[0, 1, 1, 1, 0, 0, 1, 0, 1]]...)
```
The converse mapping from feature name to column index is stored in the `vocabulary_` attribute of the vectorizer:
```
>>> vectorizer.vocabulary_.get('document')
1
```
Hence words that were not seen in the training corpus will be completely ignored in future calls to the transform method:
```
>>> vectorizer.transform(['Something completely new.']).toarray()
array([[0, 0, 0, 0, 0, 0, 0, 0, 0]]...)
```
Note that in the previous corpus, the first and the last documents have exactly the same words hence are encoded in equal vectors. In particular we lose the information that the last document is an interrogative form. To preserve some of the local ordering information we can extract 2-grams of words in addition to the 1-grams (individual words):
```
>>> bigram_vectorizer = CountVectorizer(ngram_range=(1, 2),
... token_pattern=r'\b\w+\b', min_df=1)
>>> analyze = bigram_vectorizer.build_analyzer()
>>> analyze('Bi-grams are cool!') == (
... ['bi', 'grams', 'are', 'cool', 'bi grams', 'grams are', 'are cool'])
True
```
The vocabulary extracted by this vectorizer is hence much bigger and can now resolve ambiguities encoded in local positioning patterns:
```
>>> X_2 = bigram_vectorizer.fit_transform(corpus).toarray()
>>> X_2
array([[0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0],
[0, 0, 1, 0, 0, 1, 1, 0, 0, 2, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1]]...)
```
In particular the interrogative form “Is this” is only present in the last document:
```
>>> feature_index = bigram_vectorizer.vocabulary_.get('is this')
>>> X_2[:, feature_index]
array([0, 0, 0, 1]...)
```
####
6.2.3.3.1. Using stop words
Stop words are words like “and”, “the”, “him”, which are presumed to be uninformative in representing the content of a text, and which may be removed to avoid them being construed as signal for prediction. Sometimes, however, similar words are useful for prediction, such as in classifying writing style or personality.
There are several known issues in our provided ‘english’ stop word list. It does not aim to be a general, ‘one-size-fits-all’ solution as some tasks may require a more custom solution. See [[NQY18]](#nqy18) for more details.
Please take care in choosing a stop word list. Popular stop word lists may include words that are highly informative to some tasks, such as *computer*.
You should also make sure that the stop word list has had the same preprocessing and tokenization applied as the one used in the vectorizer. The word *we’ve* is split into *we* and *ve* by CountVectorizer’s default tokenizer, so if *we’ve* is in `stop_words`, but *ve* is not, *ve* will be retained from *we’ve* in transformed text. Our vectorizers will try to identify and warn about some kinds of inconsistencies.
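In practice a stop word list is passed through the `stop_words` parameter, either as the built-in `'english'` list or as an explicit list of words (a short sketch):
```
from sklearn.feature_extraction.text import CountVectorizer

# built-in English list (with the caveats discussed above)
vect_builtin = CountVectorizer(stop_words='english')

# or a custom list, which should match the vectorizer's own
# preprocessing and tokenization
vect_custom = CountVectorizer(stop_words=['and', 'the', 'him'])
```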
###
6.2.3.4. Tf–idf term weighting
In a large text corpus, some words will be very frequent (e.g. “the”, “a”, “is” in English) and hence carry very little meaningful information about the actual contents of the document. If we were to feed the raw count data directly to a classifier those very frequent terms would shadow the frequencies of rarer yet more interesting terms.
In order to re-weight the count features into floating point values suitable for usage by a classifier it is very common to use the tf–idf transform.
Tf means **term-frequency** while tf–idf means term-frequency times **inverse document-frequency**: \(\text{tf-idf(t,d)}=\text{tf(t,d)} \times \text{idf(t)}\).
Using the `TfidfTransformer`’s default settings, `TfidfTransformer(norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)`, the term frequency, the number of times a term occurs in a given document, is multiplied with the idf component, which is computed as
\(\text{idf}(t) = \log{\frac{1 + n}{1+\text{df}(t)}} + 1\),
where \(n\) is the total number of documents in the document set, and \(\text{df}(t)\) is the number of documents in the document set that contain term \(t\). The resulting tf-idf vectors are then normalized by the Euclidean norm:
\(v\_{norm} = \frac{v}{||v||\_2} = \frac{v}{\sqrt{v{\_1}^2 + v{\_2}^2 + \dots + v{\_n}^2}}\).
This was originally a term weighting scheme developed for information retrieval (as a ranking function for search engines results) that has also found good use in document classification and clustering.
The following sections contain further explanations and examples that illustrate how the tf-idfs are computed exactly and how the tf-idfs computed in scikit-learn’s [`TfidfTransformer`](generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") and [`TfidfVectorizer`](generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") differ slightly from the standard textbook notation that defines the idf as
\(\text{idf}(t) = \log{\frac{n}{1+\text{df}(t)}}.\)
In the [`TfidfTransformer`](generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") and [`TfidfVectorizer`](generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") with `smooth_idf=False`, the “1” count is added to the idf instead of the idf’s denominator:
\(\text{idf}(t) = \log{\frac{n}{\text{df}(t)}} + 1\)
This normalization is implemented by the [`TfidfTransformer`](generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") class:
```
>>> from sklearn.feature_extraction.text import TfidfTransformer
>>> transformer = TfidfTransformer(smooth_idf=False)
>>> transformer
TfidfTransformer(smooth_idf=False)
```
Again please see the [reference documentation](classes#text-feature-extraction-ref) for the details on all the parameters.
Let’s take an example with the following counts. The first term is present 100% of the time, hence not very interesting. The two other features are present in less than 50% of the documents, hence probably more representative of the content of the documents:
```
>>> counts = [[3, 0, 1],
... [2, 0, 0],
... [3, 0, 0],
... [4, 0, 0],
... [3, 2, 0],
... [3, 0, 2]]
...
>>> tfidf = transformer.fit_transform(counts)
>>> tfidf
<6x3 sparse matrix of type '<... 'numpy.float64'>'
with 9 stored elements in Compressed Sparse ... format>
>>> tfidf.toarray()
array([[0.81940995, 0. , 0.57320793],
[1. , 0. , 0. ],
[1. , 0. , 0. ],
[1. , 0. , 0. ],
[0.47330339, 0.88089948, 0. ],
[0.58149261, 0. , 0.81355169]])
```
Each row is normalized to have unit Euclidean norm:
\(v\_{norm} = \frac{v}{||v||\_2} = \frac{v}{\sqrt{v{\_1}^2 + v{\_2}^2 + \dots + v{\_n}^2}}\)
For example, we can compute the tf-idf of the first term in the first document in the `counts` array as follows:
\(n = 6\)
\(\text{df}(t)\_{\text{term1}} = 6\)
\(\text{idf}(t)\_{\text{term1}} = \log \frac{n}{\text{df}(t)} + 1 = \log(1)+1 = 1\)
\(\text{tf-idf}\_{\text{term1}} = \text{tf} \times \text{idf} = 3 \times 1 = 3\)
Now, if we repeat this computation for the remaining 2 terms in the document, we get
\(\text{tf-idf}\_{\text{term2}} = 0 \times (\log(6/1)+1) = 0\)
\(\text{tf-idf}\_{\text{term3}} = 1 \times (\log(6/2)+1) \approx 2.0986\)
and the vector of raw tf-idfs:
\(\text{tf-idf}\_{\text{raw}} = [3, 0, 2.0986].\)
Then, applying the Euclidean (L2) norm, we obtain the following tf-idfs for document 1:
\(\frac{[3, 0, 2.0986]}{\sqrt{\big(3^2 + 0^2 + 2.0986^2\big)}} = [ 0.819, 0, 0.573].\)
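This hand computation can be checked with a few lines of NumPy, reproducing the `smooth_idf=False` case above (the counts and document frequencies are taken from the example):
```
import numpy as np

n = 6                                     # number of documents
tf = np.array([3.0, 0.0, 1.0])            # counts in document 1
df = np.array([6, 1, 2])                  # document frequencies of the three terms
idf = np.log(n / df) + 1                  # smooth_idf=False variant
tfidf_raw = tf * idf                      # [3, 0, ~2.0986]
print(tfidf_raw / np.linalg.norm(tfidf_raw))  # approximately [0.819, 0., 0.573]
```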
Furthermore, the default parameter `smooth_idf=True` adds “1” to the numerator and denominator as if an extra document was seen containing every term in the collection exactly once, which prevents zero divisions:
\(\text{idf}(t) = \log{\frac{1 + n}{1+\text{df}(t)}} + 1\)
Using this modification, the tf-idf of the third term in document 1 changes to 1.8473:
\(\text{tf-idf}\_{\text{term3}} = 1 \times (\log(7/3)+1) \approx 1.8473\)
And the L2-normalized tf-idf changes to
\(\frac{[3, 0, 1.8473]}{\sqrt{\big(3^2 + 0^2 + 1.8473^2\big)}} = [0.8515, 0, 0.5243]\):
```
>>> transformer = TfidfTransformer()
>>> transformer.fit_transform(counts).toarray()
array([[0.85151335, 0. , 0.52433293],
[1. , 0. , 0. ],
[1. , 0. , 0. ],
[1. , 0. , 0. ],
[0.55422893, 0.83236428, 0. ],
[0.63035731, 0. , 0.77630514]])
```
The weights of each feature computed by the `fit` method call are stored in a model attribute:
```
>>> transformer.idf_
array([1. ..., 2.25..., 1.84...])
```
As tf–idf is very often used for text features, there is also another class called [`TfidfVectorizer`](generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer") that combines all the options of [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") and [`TfidfTransformer`](generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") in a single model:
```
>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> vectorizer = TfidfVectorizer()
>>> vectorizer.fit_transform(corpus)
<4x9 sparse matrix of type '<... 'numpy.float64'>'
with 19 stored elements in Compressed Sparse ... format>
```
While the tf–idf normalization is often very useful, there might be cases where the binary occurrence markers might offer better features. This can be achieved by using the `binary` parameter of [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer"). In particular, some estimators such as [Bernoulli Naive Bayes](naive_bayes#bernoulli-naive-bayes) explicitly model discrete boolean random variables. Also, very short texts are likely to have noisy tf–idf values while the binary occurrence info is more stable.
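A small sketch of the binary occurrence variant (the two documents are made up):
```
from sklearn.feature_extraction.text import CountVectorizer

docs = ['This is the second second document.', 'And the third one.']
binary_vectorizer = CountVectorizer(binary=True)
print(binary_vectorizer.fit_transform(docs).toarray())
# the repeated word "second" is encoded as 1 rather than 2
```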
As usual the best way to adjust the feature extraction parameters is to use a cross-validated grid search, for instance by pipelining the feature extractor with a classifier:
* [Sample pipeline for text feature extraction and evaluation](../auto_examples/model_selection/grid_search_text_feature_extraction#sphx-glr-auto-examples-model-selection-grid-search-text-feature-extraction-py)
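A compact sketch of such a pipeline and grid search (the parameter values here are arbitrary placeholders, and `text_documents` / `labels` stand in for a labelled corpus):
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('vect', TfidfVectorizer()),
    ('clf', SGDClassifier()),
])
param_grid = {
    'vect__ngram_range': [(1, 1), (1, 2)],
    'vect__use_idf': (True, False),
    'clf__alpha': (1e-4, 1e-5),
}
search = GridSearchCV(pipeline, param_grid, cv=3)
# search.fit(text_documents, labels)
```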
###
6.2.3.5. Decoding text files
Text is made of characters, but files are made of bytes. These bytes represent characters according to some *encoding*. To work with text files in Python, their bytes must be *decoded* to a character set called Unicode. Common encodings are ASCII, Latin-1 (Western Europe), KOI8-R (Russian) and the universal encodings UTF-8 and UTF-16. Many others exist.
Note
An encoding can also be called a ‘character set’, but this term is less accurate: several encodings can exist for a single character set.
The text feature extractors in scikit-learn know how to decode text files, but only if you tell them what encoding the files are in. The [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") takes an `encoding` parameter for this purpose. For modern text files, the correct encoding is probably UTF-8, which is therefore the default (`encoding="utf-8"`).
If the text you are loading is not actually encoded with UTF-8, however, you will get a `UnicodeDecodeError`. The vectorizers can be told to be silent about decoding errors by setting the `decode_error` parameter to either `"ignore"` or `"replace"`. See the documentation for the Python function `bytes.decode` for more details (type `help(bytes.decode)` at the Python prompt).
If you are having trouble decoding text, here are some things to try:
* Find out what the actual encoding of the text is. The file might come with a header or README that tells you the encoding, or there might be some standard encoding you can assume based on where the text comes from.
* You may be able to find out what kind of encoding it is in general using the UNIX command `file`. The Python `chardet` module comes with a script called `chardetect.py` that will guess the specific encoding, though you cannot rely on its guess being correct.
* You could try UTF-8 and disregard the errors. You can decode byte strings with `bytes.decode(errors='replace')` to replace all decoding errors with a meaningless character, or set `decode_error='replace'` in the vectorizer. This may damage the usefulness of your features.
* Real text may come from a variety of sources that may have used different encodings, or even be sloppily decoded in a different encoding than the one it was encoded with. This is common in text retrieved from the Web. The Python package [ftfy](https://github.com/LuminosoInsight/python-ftfy) can automatically sort out some classes of decoding errors, so you could try decoding the unknown text as `latin-1` and then using `ftfy` to fix errors.
* If the text is in a mish-mash of encodings that is simply too hard to sort out (which is the case for the 20 Newsgroups dataset), you can fall back on a simple single-byte encoding such as `latin-1`. Some text may display incorrectly, but at least the same sequence of bytes will always represent the same feature.
For example, the following snippet uses `chardet` (not shipped with scikit-learn, must be installed separately) to figure out the encoding of three texts. It then vectorizes the texts and prints the learned vocabulary. The output is not shown here.
```
>>> import chardet
>>> text1 = b"Sei mir gegr\xc3\xbc\xc3\x9ft mein Sauerkraut"
>>> text2 = b"holdselig sind deine Ger\xfcche"
>>> text3 = b"\xff\xfeA\x00u\x00f\x00 \x00F\x00l\x00\xfc\x00g\x00e\x00l\x00n\x00 \x00d\x00e\x00s\x00 \x00G\x00e\x00s\x00a\x00n\x00g\x00e\x00s\x00,\x00 \x00H\x00e\x00r\x00z\x00l\x00i\x00e\x00b\x00c\x00h\x00e\x00n\x00,\x00 \x00t\x00r\x00a\x00g\x00 \x00i\x00c\x00h\x00 \x00d\x00i\x00c\x00h\x00 \x00f\x00o\x00r\x00t\x00"
>>> decoded = [x.decode(chardet.detect(x)['encoding'])
... for x in (text1, text2, text3)]
>>> v = CountVectorizer().fit(decoded).vocabulary_
>>> for term in v: print(term)
```
(Depending on the version of `chardet`, it might get the first one wrong.)
For an introduction to Unicode and character encodings in general, see Joel Spolsky’s [Absolute Minimum Every Software Developer Must Know About Unicode](https://www.joelonsoftware.com/articles/Unicode.html).
###
6.2.3.6. Applications and examples
The bag of words representation is quite simplistic but surprisingly useful in practice.
In particular in a **supervised setting** it can be successfully combined with fast and scalable linear models to train **document classifiers**, for instance:
* [Classification of text documents using sparse features](../auto_examples/text/plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py)
In an **unsupervised setting** it can be used to group similar documents together by applying clustering algorithms such as [K-means](clustering#k-means):
* [Clustering text documents using k-means](../auto_examples/text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
Finally it is possible to discover the main topics of a corpus by relaxing the hard assignment constraint of clustering, for instance by using [Non-negative matrix factorization (NMF or NNMF)](decomposition#nmf):
* [Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation](../auto_examples/applications/plot_topics_extraction_with_nmf_lda#sphx-glr-auto-examples-applications-plot-topics-extraction-with-nmf-lda-py)
###
6.2.3.7. Limitations of the Bag of Words representation
A collection of unigrams (what bag of words is) cannot capture phrases and multi-word expressions, effectively disregarding any word order dependence. Additionally, the bag of words model doesn’t account for potential misspellings or word derivations.
N-grams to the rescue! Instead of building a simple collection of unigrams (n=1), one might prefer a collection of bigrams (n=2), where occurrences of pairs of consecutive words are counted.
One might alternatively consider a collection of character n-grams, a representation resilient against misspellings and derivations.
For example, let’s say we’re dealing with a corpus of two documents: `['words', 'wprds']`. The second document contains a misspelling of the word ‘words’. A simple bag of words representation would consider these two as very distinct documents, differing in both of the two possible features. A character 2-gram representation, however, would find the documents matching in 4 out of 8 features, which may help the preferred classifier decide better:
```
>>> ngram_vectorizer = CountVectorizer(analyzer='char_wb', ngram_range=(2, 2))
>>> counts = ngram_vectorizer.fit_transform(['words', 'wprds'])
>>> ngram_vectorizer.get_feature_names_out()
array([' w', 'ds', 'or', 'pr', 'rd', 's ', 'wo', 'wp'], ...)
>>> counts.toarray().astype(int)
array([[1, 1, 1, 0, 1, 1, 1, 0],
[1, 1, 0, 1, 1, 1, 0, 1]])
```
In the above example, the `char_wb` analyzer is used, which creates n-grams only from characters inside word boundaries (padded with space on each side). The `char` analyzer, alternatively, creates n-grams that span across words:
```
>>> ngram_vectorizer = CountVectorizer(analyzer='char_wb', ngram_range=(5, 5))
>>> ngram_vectorizer.fit_transform(['jumpy fox'])
<1x4 sparse matrix of type '<... 'numpy.int64'>'
with 4 stored elements in Compressed Sparse ... format>
>>> ngram_vectorizer.get_feature_names_out()
array([' fox ', ' jump', 'jumpy', 'umpy '], ...)
>>> ngram_vectorizer = CountVectorizer(analyzer='char', ngram_range=(5, 5))
>>> ngram_vectorizer.fit_transform(['jumpy fox'])
<1x5 sparse matrix of type '<... 'numpy.int64'>'
with 5 stored elements in Compressed Sparse ... format>
>>> ngram_vectorizer.get_feature_names_out()
array(['jumpy', 'mpy f', 'py fo', 'umpy ', 'y fox'], ...)
```
The word boundaries-aware variant `char_wb` is especially interesting for languages that use white-spaces for word separation as it generates significantly less noisy features than the raw `char` variant in that case. For such languages it can increase both the predictive accuracy and convergence speed of classifiers trained using such features while retaining the robustness with regards to misspellings and word derivations.
While some local positioning information can be preserved by extracting n-grams instead of individual words, bag of words and bag of n-grams destroy most of the inner structure of the document and hence most of the meaning carried by that internal structure.
In order to address the wider task of Natural Language Understanding, the local structure of sentences and paragraphs should thus be taken into account. Many such models will thus be cast as “Structured output” problems which are currently outside of the scope of scikit-learn.
###
6.2.3.8. Vectorizing a large text corpus with the hashing trick
The above vectorization scheme is simple but the fact that it holds an **in-memory mapping from the string tokens to the integer feature indices** (the `vocabulary_` attribute) causes several **problems when dealing with large datasets**:
* the larger the corpus, the larger the vocabulary will grow and hence the memory use too,
* fitting requires the allocation of intermediate data structures of size proportional to that of the original dataset.
* building the word-mapping requires a full pass over the dataset hence it is not possible to fit text classifiers in a strictly online manner.
* pickling and un-pickling vectorizers with a large `vocabulary_` can be very slow (typically much slower than pickling / un-pickling flat data structures such as a NumPy array of the same size),
* it is not easily possible to split the vectorization work into concurrent sub tasks as the `vocabulary_` attribute would have to be a shared state with a fine grained synchronization barrier: the mapping from token string to feature index is dependent on ordering of the first occurrence of each token hence would have to be shared, potentially harming the concurrent workers’ performance to the point of making them slower than the sequential variant.
It is possible to overcome those limitations by combining the “hashing trick” ([Feature hashing](#feature-hashing)) implemented by the [`FeatureHasher`](generated/sklearn.feature_extraction.featurehasher#sklearn.feature_extraction.FeatureHasher "sklearn.feature_extraction.FeatureHasher") class and the text preprocessing and tokenization features of the [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer").
This combination is implemented in [`HashingVectorizer`](generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer"), a transformer class that is mostly API compatible with [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer"). [`HashingVectorizer`](generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") is stateless, meaning that you don’t have to call `fit` on it:
```
>>> from sklearn.feature_extraction.text import HashingVectorizer
>>> hv = HashingVectorizer(n_features=10)
>>> hv.transform(corpus)
<4x10 sparse matrix of type '<... 'numpy.float64'>'
with 16 stored elements in Compressed Sparse ... format>
```
You can see that 16 non-zero feature tokens were extracted in the vector output: this is less than the 19 non-zeros extracted previously by the [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") on the same toy corpus. The discrepancy comes from hash function collisions because of the low value of the `n_features` parameter.
In a real world setting, the `n_features` parameter can be left to its default value of `2 ** 20` (roughly one million possible features). If memory or downstream model size is an issue, selecting a lower value such as `2 ** 18` might help without introducing too many additional collisions on typical text classification tasks.
Note that the dimensionality does not affect the CPU training time of algorithms which operate on CSR matrices (`LinearSVC(dual=True)`, `Perceptron`, `SGDClassifier`, `PassiveAggressive`) but it does for algorithms that work with CSC matrices (`LinearSVC(dual=False)`, `Lasso()`, etc).
Let’s try again with the default setting:
```
>>> hv = HashingVectorizer()
>>> hv.transform(corpus)
<4x1048576 sparse matrix of type '<... 'numpy.float64'>'
with 19 stored elements in Compressed Sparse ... format>
```
We no longer get the collisions, but this comes at the expense of a much larger dimensionality of the output space. Of course, other terms than the 19 used here might still collide with each other.
The [`HashingVectorizer`](generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") also comes with the following limitations:
* it is not possible to invert the model (no `inverse_transform` method), nor to access the original string representation of the features, because of the one-way nature of the hash function that performs the mapping.
* it does not provide IDF weighting as that would introduce statefulness in the model. A [`TfidfTransformer`](generated/sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") can be appended to it in a pipeline if required.
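To address the second limitation, a `TfidfTransformer` can be appended after the hasher in a pipeline. The following is a minimal sketch, not taken from the original text: the two-document `corpus` is made up here, and `alternate_sign=False` is assumed so that the hashed counts stay non-negative before the TF-IDF weighting is applied.

```
# Hedged sketch: stateless hashing followed by (stateful) TF-IDF weighting.
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.pipeline import make_pipeline

corpus = [
    "This is the first document.",
    "This document is the second document.",
]
hashing_tfidf = make_pipeline(
    HashingVectorizer(n_features=2 ** 18, alternate_sign=False),
    TfidfTransformer(),
)
X = hashing_tfidf.fit_transform(corpus)  # fit is only needed for the IDF statistics
print(X.shape)  # (2, 262144) with the settings above
```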
###
6.2.3.9. Performing out-of-core scaling with HashingVectorizer
An interesting development of using a [`HashingVectorizer`](generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") is the ability to perform [out-of-core](https://en.wikipedia.org/wiki/Out-of-core_algorithm) scaling. This means that we can learn from data that does not fit into the computer’s main memory.
A strategy to implement out-of-core scaling is to stream data to the estimator in mini-batches. Each mini-batch is vectorized using [`HashingVectorizer`](generated/sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer") so as to guarantee that the input space of the estimator has always the same dimensionality. The amount of memory used at any time is thus bounded by the size of a mini-batch. Although there is no limit to the amount of data that can be ingested using such an approach, from a practical point of view the learning time is often limited by the CPU time one wants to spend on the task.
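A minimal sketch of this strategy follows; `iter_minibatches` is a hypothetical stand-in for a real streaming source (for instance, files read lazily), and the classifier is an `SGDClassifier` trained incrementally with `partial_fit`:

```
# Hedged sketch of mini-batch, out-of-core text classification.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

def iter_minibatches():
    # Hypothetical stand-in for a real streaming source of (texts, labels).
    yield ["cheap pills, buy now", "meeting moved to 3pm"], [1, 0]
    yield ["you won a prize, click here", "lunch tomorrow?"], [1, 0]

vectorizer = HashingVectorizer(n_features=2 ** 18)  # stateless, no fit needed
classifier = SGDClassifier(random_state=0)
all_classes = np.array([0, 1])  # the full set of labels must be known up front

for texts, labels in iter_minibatches():
    X_batch = vectorizer.transform(texts)  # bounded memory: one mini-batch at a time
    classifier.partial_fit(X_batch, labels, classes=all_classes)
```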
For a full-fledged example of out-of-core scaling in a text classification task see [Out-of-core classification of text documents](../auto_examples/applications/plot_out_of_core_classification#sphx-glr-auto-examples-applications-plot-out-of-core-classification-py).
###
6.2.3.10. Customizing the vectorizer classes
It is possible to customize the behavior by passing a callable to the vectorizer constructor:
```
>>> def my_tokenizer(s):
... return s.split()
...
>>> vectorizer = CountVectorizer(tokenizer=my_tokenizer)
>>> vectorizer.build_analyzer()(u"Some... punctuation!") == (
... ['some...', 'punctuation!'])
True
```
In particular we name:
* `preprocessor`: a callable that takes an entire document as input (as a single string), and returns a possibly transformed version of the document, still as an entire string. This can be used to remove HTML tags, lowercase the entire document, etc.
* `tokenizer`: a callable that takes the output from the preprocessor and splits it into tokens, then returns a list of these.
* `analyzer`: a callable that replaces the preprocessor and tokenizer. The default analyzers all call the preprocessor and tokenizer, but custom analyzers will skip this. N-gram extraction and stop word filtering take place at the analyzer level, so a custom analyzer may have to reproduce these steps.
(Lucene users might recognize these names, but be aware that scikit-learn concepts may not map one-to-one onto Lucene concepts.)
To make the preprocessor, tokenizer and analyzer aware of the model parameters, it is possible to derive from the class and override the `build_preprocessor`, `build_tokenizer` and `build_analyzer` factory methods instead of passing custom functions.
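As a hedged illustration, the sketch below derives from `CountVectorizer` and overrides `build_preprocessor` so that preprocessing depends on a hypothetical `strip_digits` constructor parameter (not part of the library):

```
# Hedged sketch: make preprocessing aware of an extra constructor parameter
# by overriding the build_preprocessor factory method.
import re
from sklearn.feature_extraction.text import CountVectorizer

class DigitAwareVectorizer(CountVectorizer):
    """Hypothetical vectorizer with an extra strip_digits parameter."""

    def __init__(self, strip_digits=True):
        super().__init__()
        self.strip_digits = strip_digits

    def build_preprocessor(self):
        preprocess = super().build_preprocessor()
        if not self.strip_digits:
            return preprocess
        return lambda doc: re.sub(r"\d+", " ", preprocess(doc))

print(DigitAwareVectorizer().build_analyzer()("call me at 555 1234"))
# expected to print something like ['call', 'me', 'at']
```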
Some tips and tricks:
* If documents are pre-tokenized by an external package, then store them in files (or strings) with the tokens separated by whitespace and pass `analyzer=str.split`
* Fancy token-level analysis such as stemming, lemmatizing, compound splitting, filtering based on part-of-speech, etc. are not included in the scikit-learn codebase, but can be added by customizing either the tokenizer or the analyzer. Here’s a `CountVectorizer` with a tokenizer and lemmatizer using [NLTK](https://www.nltk.org/):
```
>>> from nltk import word_tokenize
>>> from nltk.stem import WordNetLemmatizer
>>> class LemmaTokenizer:
... def __init__(self):
... self.wnl = WordNetLemmatizer()
... def __call__(self, doc):
... return [self.wnl.lemmatize(t) for t in word_tokenize(doc)]
...
>>> vect = CountVectorizer(tokenizer=LemmaTokenizer())
```
(Note that this will not filter out punctuation.)
The following example will, for instance, transform some British spelling to American spelling:
```
>>> import re
>>> def to_british(tokens):
... for t in tokens:
... t = re.sub(r"(...)our$", r"\1or", t)
... t = re.sub(r"([bt])re$", r"\1er", t)
... t = re.sub(r"([iy])s(e$|ing|ation)", r"\1z\2", t)
... t = re.sub(r"ogue$", "og", t)
... yield t
...
>>> class CustomVectorizer(CountVectorizer):
... def build_tokenizer(self):
... tokenize = super().build_tokenizer()
... return lambda doc: list(to_british(tokenize(doc)))
...
>>> print(CustomVectorizer().build_analyzer()(u"color colour"))
[...'color', ...'color']
```
A custom tokenizer or analyzer can be used in the same way for other styles of preprocessing; examples include stemming, lemmatization, or normalizing numerical tokens, with the latter illustrated in:
+ [Biclustering documents with the Spectral Co-clustering algorithm](../auto_examples/bicluster/plot_bicluster_newsgroups#sphx-glr-auto-examples-bicluster-plot-bicluster-newsgroups-py)
Customizing the vectorizer can also be useful when handling Asian languages that do not use an explicit word separator such as whitespace.
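In the absence of a dedicated word segmenter for such a language, character n-grams are a rough but serviceable fallback; the sketch below uses only built-in options on a made-up pair of documents:

```
# Hedged fallback for languages without explicit word separators:
# character n-grams avoid the need for a language-specific tokenizer.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["我喜欢机器学习", "机器学习很有趣"]  # toy documents with no whitespace
vect = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vect.fit_transform(docs)
print(vect.get_feature_names_out())
```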
6.2.4. Image feature extraction
--------------------------------
###
6.2.4.1. Patch extraction
The [`extract_patches_2d`](generated/sklearn.feature_extraction.image.extract_patches_2d#sklearn.feature_extraction.image.extract_patches_2d "sklearn.feature_extraction.image.extract_patches_2d") function extracts patches from an image stored as a two-dimensional array, or three-dimensional with color information along the third axis. For rebuilding an image from all its patches, use [`reconstruct_from_patches_2d`](generated/sklearn.feature_extraction.image.reconstruct_from_patches_2d#sklearn.feature_extraction.image.reconstruct_from_patches_2d "sklearn.feature_extraction.image.reconstruct_from_patches_2d"). For example let us generate a 4x4 pixel picture with 3 color channels (e.g. in RGB format):
```
>>> import numpy as np
>>> from sklearn.feature_extraction import image
>>> one_image = np.arange(4 * 4 * 3).reshape((4, 4, 3))
>>> one_image[:, :, 0] # R channel of a fake RGB picture
array([[ 0, 3, 6, 9],
[12, 15, 18, 21],
[24, 27, 30, 33],
[36, 39, 42, 45]])
>>> patches = image.extract_patches_2d(one_image, (2, 2), max_patches=2,
... random_state=0)
>>> patches.shape
(2, 2, 2, 3)
>>> patches[:, :, :, 0]
array([[[ 0, 3],
[12, 15]],
[[15, 18],
[27, 30]]])
>>> patches = image.extract_patches_2d(one_image, (2, 2))
>>> patches.shape
(9, 2, 2, 3)
>>> patches[4, :, :, 0]
array([[15, 18],
[27, 30]])
```
Let us now try to reconstruct the original image from the patches by averaging on overlapping areas:
```
>>> reconstructed = image.reconstruct_from_patches_2d(patches, (4, 4, 3))
>>> np.testing.assert_array_equal(one_image, reconstructed)
```
The [`PatchExtractor`](generated/sklearn.feature_extraction.image.patchextractor#sklearn.feature_extraction.image.PatchExtractor "sklearn.feature_extraction.image.PatchExtractor") class works in the same way as [`extract_patches_2d`](generated/sklearn.feature_extraction.image.extract_patches_2d#sklearn.feature_extraction.image.extract_patches_2d "sklearn.feature_extraction.image.extract_patches_2d"), only it supports multiple images as input. It is implemented as an estimator, so it can be used in pipelines. See:
```
>>> five_images = np.arange(5 * 4 * 4 * 3).reshape(5, 4, 4, 3)
>>> patches = image.PatchExtractor(patch_size=(2, 2)).transform(five_images)
>>> patches.shape
(45, 2, 2, 3)
```
###
6.2.4.2. Connectivity graph of an image
Several estimators in scikit-learn can use connectivity information between features or samples. For instance Ward clustering ([Hierarchical clustering](clustering#hierarchical-clustering)) can cluster together only neighboring pixels of an image, thus forming contiguous patches.
For this purpose, the estimators use a ‘connectivity’ matrix, giving which samples are connected.
The function [`img_to_graph`](generated/sklearn.feature_extraction.image.img_to_graph#sklearn.feature_extraction.image.img_to_graph "sklearn.feature_extraction.image.img_to_graph") returns such a matrix from a 2D or 3D image. Similarly, [`grid_to_graph`](generated/sklearn.feature_extraction.image.grid_to_graph#sklearn.feature_extraction.image.grid_to_graph "sklearn.feature_extraction.image.grid_to_graph") builds a connectivity matrix for images given the shape of these images.
These matrices can be used to impose connectivity in estimators that use connectivity information, such as Ward clustering ([Hierarchical clustering](clustering#hierarchical-clustering)), but also to build precomputed kernels, or similarity matrices.
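As a hedged illustration, a connectivity matrix from `grid_to_graph` can be passed to an agglomerative (Ward) clustering estimator so that only neighboring pixels may be merged; the toy image below is random noise, used only to show the mechanics:

```
# Hedged sketch: constrain Ward clustering of pixels to the image grid.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

image = np.random.RandomState(0).rand(8, 8)      # toy 8x8 grayscale image
connectivity = grid_to_graph(*image.shape)       # sparse adjacency of the pixel grid
ward = AgglomerativeClustering(n_clusters=4, linkage="ward",
                               connectivity=connectivity)
labels = ward.fit_predict(image.reshape(-1, 1))  # one sample per pixel
print(labels.reshape(image.shape))
```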
**Examples**
* [A demo of structured Ward hierarchical clustering on an image of coins](../auto_examples/cluster/plot_coin_ward_segmentation#sphx-glr-auto-examples-cluster-plot-coin-ward-segmentation-py)
* [Spectral clustering for image segmentation](../auto_examples/cluster/plot_segmentation_toy#sphx-glr-auto-examples-cluster-plot-segmentation-toy-py)
* [Feature agglomeration vs. univariate selection](../auto_examples/cluster/plot_feature_agglomeration_vs_univariate_selection#sphx-glr-auto-examples-cluster-plot-feature-agglomeration-vs-univariate-selection-py)
1.11. Ensemble methods
======================
The goal of **ensemble methods** is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator.
Two families of ensemble methods are usually distinguished:
* In **averaging methods**, the driving principle is to build several estimators independently and then to average their predictions. On average, the combined estimator is usually better than any of the single base estimators because its variance is reduced.
**Examples:** [Bagging methods](#bagging), [Forests of randomized trees](#forest), …
* By contrast, in **boosting methods**, base estimators are built sequentially and one tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble.
**Examples:** [AdaBoost](#adaboost), [Gradient Tree Boosting](#gradient-boosting), …
1.11.1. Bagging meta-estimator
-------------------------------
In ensemble algorithms, bagging methods form a class of algorithms which build several instances of a black-box estimator on random subsets of the original training set and then aggregate their individual predictions to form a final prediction. These methods are used as a way to reduce the variance of a base estimator (e.g., a decision tree), by introducing randomization into its construction procedure and then making an ensemble out of it. In many cases, bagging methods constitute a very simple way to improve with respect to a single model, without making it necessary to adapt the underlying base algorithm. As they provide a way to reduce overfitting, bagging methods work best with strong and complex models (e.g., fully developed decision trees), in contrast with boosting methods which usually work best with weak models (e.g., shallow decision trees).
Bagging methods come in many flavours but mostly differ from each other by the way they draw random subsets of the training set:
* When random subsets of the dataset are drawn as random subsets of the samples, then this algorithm is known as Pasting [[B1999]](#b1999).
* When samples are drawn with replacement, then the method is known as Bagging [[B1996]](#b1996).
* When random subsets of the dataset are drawn as random subsets of the features, then the method is known as Random Subspaces [[H1998]](#h1998).
* Finally, when base estimators are built on subsets of both samples and features, then the method is known as Random Patches [[LG2012]](#lg2012).
In scikit-learn, bagging methods are offered as a unified [`BaggingClassifier`](generated/sklearn.ensemble.baggingclassifier#sklearn.ensemble.BaggingClassifier "sklearn.ensemble.BaggingClassifier") meta-estimator (resp. [`BaggingRegressor`](generated/sklearn.ensemble.baggingregressor#sklearn.ensemble.BaggingRegressor "sklearn.ensemble.BaggingRegressor")), taking as input a user-specified base estimator along with parameters specifying the strategy to draw random subsets. In particular, `max_samples` and `max_features` control the size of the subsets (in terms of samples and features), while `bootstrap` and `bootstrap_features` control whether samples and features are drawn with or without replacement. When using a subset of the available samples the generalization accuracy can be estimated with the out-of-bag samples by setting `oob_score=True`. As an example, the snippet below illustrates how to instantiate a bagging ensemble of `KNeighborsClassifier` base estimators, each built on random subsets of 50% of the samples and 50% of the features.
```
>>> from sklearn.ensemble import BaggingClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> bagging = BaggingClassifier(KNeighborsClassifier(),
... max_samples=0.5, max_features=0.5)
```
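As a hedged follow-up to the snippet above, setting `oob_score=True` (together with `bootstrap=True`) exposes an out-of-bag estimate of the generalization accuracy through the `oob_score_` attribute:

```
# Hedged sketch: out-of-bag estimate of a bagging ensemble's accuracy.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
bagging = BaggingClassifier(KNeighborsClassifier(), n_estimators=50,
                            max_samples=0.5, bootstrap=True,
                            oob_score=True, random_state=0).fit(X, y)
print(bagging.oob_score_)  # roughly comparable to a cross-validated accuracy
```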
1.11.2. Forests of randomized trees
------------------------------------
The [`sklearn.ensemble`](classes#module-sklearn.ensemble "sklearn.ensemble") module includes two averaging algorithms based on randomized [decision trees](tree#tree): the RandomForest algorithm and the Extra-Trees method. Both algorithms are perturb-and-combine techniques [[B1998]](#b1998) specifically designed for trees. This means a diverse set of classifiers is created by introducing randomness in the classifier construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers.
As other classifiers, forest classifiers have to be fitted with two arrays: a sparse or dense array X of shape `(n_samples, n_features)` holding the training samples, and an array Y of shape `(n_samples,)` holding the target values (class labels) for the training samples:
```
>>> from sklearn.ensemble import RandomForestClassifier
>>> X = [[0, 0], [1, 1]]
>>> Y = [0, 1]
>>> clf = RandomForestClassifier(n_estimators=10)
>>> clf = clf.fit(X, Y)
```
Like [decision trees](tree#tree), forests of trees also extend to [multi-output problems](tree#tree-multioutput) (if Y is an array of shape `(n_samples, n_outputs)`).
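A minimal sketch of such a multi-output fit, on a made-up dataset where each sample carries two labels:

```
# Hedged sketch: a random forest fitted on a multi-output target.
from sklearn.ensemble import RandomForestClassifier

X = [[0, 0], [1, 1], [0, 1], [1, 0]]
Y = [[0, 1], [1, 0], [0, 0], [1, 1]]   # shape (n_samples, n_outputs)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, Y)
print(clf.predict([[0, 0]]))           # one row per sample, one column per output
```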
###
1.11.2.1. Random Forests
In random forests (see [`RandomForestClassifier`](generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier") and [`RandomForestRegressor`](generated/sklearn.ensemble.randomforestregressor#sklearn.ensemble.RandomForestRegressor "sklearn.ensemble.RandomForestRegressor") classes), each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set.
Furthermore, when splitting each node during the construction of a tree, the best split is found either from all input features or a random subset of size `max_features`. (See the [parameter tuning guidelines](#random-forest-parameters) for more details).
The purpose of these two sources of randomness is to decrease the variance of the forest estimator. Indeed, individual decision trees typically exhibit high variance and tend to overfit. The injected randomness in forests yields decision trees with somewhat decoupled prediction errors. By taking an average of those predictions, some errors can cancel out. Random forests achieve a reduced variance by combining diverse trees, sometimes at the cost of a slight increase in bias. In practice the variance reduction is often significant, hence yielding an overall better model.
In contrast to the original publication [[B2001]](#b2001), the scikit-learn implementation combines classifiers by averaging their probabilistic prediction, instead of letting each classifier vote for a single class.
###
1.11.2.2. Extremely Randomized Trees
In extremely randomized trees (see [`ExtraTreesClassifier`](generated/sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier") and [`ExtraTreesRegressor`](generated/sklearn.ensemble.extratreesregressor#sklearn.ensemble.ExtraTreesRegressor "sklearn.ensemble.ExtraTreesRegressor") classes), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually makes it possible to reduce the variance of the model a bit more, at the expense of a slightly greater increase in bias:
```
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.datasets import make_blobs
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = make_blobs(n_samples=10000, n_features=10, centers=100,
... random_state=0)
>>> clf = DecisionTreeClassifier(max_depth=None, min_samples_split=2,
... random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.98...
>>> clf = RandomForestClassifier(n_estimators=10, max_depth=None,
... min_samples_split=2, random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.999...
>>> clf = ExtraTreesClassifier(n_estimators=10, max_depth=None,
... min_samples_split=2, random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean() > 0.999
True
```
###
1.11.2.3. Parameters
The main parameters to adjust when using these methods are `n_estimators` and `max_features`. The former is the number of trees in the forest. The larger the better, but also the longer it will take to compute. In addition, note that results will stop getting significantly better beyond a critical number of trees. The latter is the size of the random subsets of features to consider when splitting a node. The lower, the greater the reduction of variance, but also the greater the increase in bias. Empirically good default values are `max_features=1.0` or equivalently `max_features=None` (always considering all features instead of a random subset) for regression problems, and `max_features="sqrt"` (using a random subset of size `sqrt(n_features)`) for classification tasks (where `n_features` is the number of features in the data). The default value of `max_features=1.0` is equivalent to bagged trees and more randomness can be achieved by setting smaller values (e.g. 0.3 is a typical default in the literature). Good results are often achieved when setting `max_depth=None` in combination with `min_samples_split=2` (i.e., when fully developing the trees). Bear in mind though that these values are usually not optimal, and might result in models that consume a lot of RAM. The best parameter values should always be cross-validated. In addition, note that in random forests, bootstrap samples are used by default (`bootstrap=True`) while the default strategy for extra-trees is to use the whole dataset (`bootstrap=False`). When using bootstrap sampling the generalization error can be estimated on the left out or out-of-bag samples. This can be enabled by setting `oob_score=True`.
Note
The size of the model with the default parameters is \(O( M \* N \* log (N) )\), where \(M\) is the number of trees and \(N\) is the number of samples. In order to reduce the size of the model, you can change these parameters: `min_samples_split`, `max_leaf_nodes`, `max_depth` and `min_samples_leaf`.
###
1.11.2.4. Parallelization
Finally, this module also features the parallel construction of the trees and the parallel computation of the predictions through the `n_jobs` parameter. If `n_jobs=k` then computations are partitioned into `k` jobs, and run on `k` cores of the machine. If `n_jobs=-1` then all cores available on the machine are used. Note that because of inter-process communication overhead, the speedup might not be linear (i.e., using `k` jobs will unfortunately not be `k` times as fast). Significant speedup can still be achieved though when building a large number of trees, or when building a single tree requires a fair amount of time (e.g., on large datasets).
###
1.11.2.5. Feature importance evaluation
The relative rank (i.e. depth) of a feature used as a decision node in a tree can be used to assess the relative importance of that feature with respect to the predictability of the target variable. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The **expected fraction of the samples** they contribute to can thus be used as an estimate of the **relative importance of the features**. In scikit-learn, the fraction of samples a feature contributes to is combined with the decrease in impurity from splitting them to create a normalized estimate of the predictive power of that feature.
By **averaging** the estimates of predictive ability over several randomized trees one can **reduce the variance** of such an estimate and use it for feature selection. This is known as the mean decrease in impurity, or MDI. Refer to [[L2014]](#l2014) for more information on MDI and feature importance evaluation with Random Forests.
Warning
The impurity-based feature importances computed on tree-based models suffer from two flaws that can lead to misleading conclusions. First they are computed on statistics derived from the training dataset and therefore **do not necessarily inform us on which features are most important to make good predictions on held-out dataset**. Secondly, **they favor high cardinality features**, that is features with many unique values. [Permutation feature importance](permutation_importance#permutation-importance) is an alternative to impurity-based feature importance that does not suffer from these flaws. These two methods of obtaining feature importance are explored in: [Permutation Importance vs Random Forest Feature Importance (MDI)](../auto_examples/inspection/plot_permutation_importance#sphx-glr-auto-examples-inspection-plot-permutation-importance-py).
The following example shows a color-coded representation of the relative importances of each individual pixel for a face recognition task using a [`ExtraTreesClassifier`](generated/sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier") model.
In practice those estimates are stored as an attribute named `feature_importances_` on the fitted model. This is an array with shape `(n_features,)` whose values are positive and sum to 1.0. The higher the value, the more important is the contribution of the matching feature to the prediction function.
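A minimal sketch of reading this attribute from a forest fitted on the iris dataset:

```
# Hedged sketch: impurity-based (MDI) feature importances of a fitted forest.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier

X, y = load_iris(return_X_y=True)
forest = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_
print(importances.shape, np.isclose(importances.sum(), 1.0))
for name, value in zip(load_iris().feature_names, importances):
    print(f"{name}: {value:.3f}")
```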
###
1.11.2.6. Totally Random Trees Embedding
[`RandomTreesEmbedding`](generated/sklearn.ensemble.randomtreesembedding#sklearn.ensemble.RandomTreesEmbedding "sklearn.ensemble.RandomTreesEmbedding") implements an unsupervised transformation of the data. Using a forest of completely random trees, [`RandomTreesEmbedding`](generated/sklearn.ensemble.randomtreesembedding#sklearn.ensemble.RandomTreesEmbedding "sklearn.ensemble.RandomTreesEmbedding") encodes the data by the indices of the leaves a data point ends up in. This index is then encoded in a one-of-K manner, leading to a high dimensional, sparse binary coding. This coding can be computed very efficiently and can then be used as a basis for other learning tasks. The size and sparsity of the code can be influenced by choosing the number of trees and the maximum depth per tree. For each tree in the ensemble, the coding contains one entry of one. The size of the coding is at most `n_estimators * 2 ** max_depth`, the maximum number of leaves in the forest.
As neighboring data points are more likely to lie within the same leaf of a tree, the transformation performs an implicit, non-parametric density estimation.
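A minimal sketch of the resulting sparse coding, on a made-up blob dataset:

```
# Hedged sketch: unsupervised one-of-K leaf encoding with totally random trees.
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomTreesEmbedding

X, _ = make_blobs(n_samples=100, centers=3, random_state=0)
embedder = RandomTreesEmbedding(n_estimators=10, max_depth=3, random_state=0)
X_coded = embedder.fit_transform(X)   # sparse, one non-zero entry per tree
print(X_coded.shape)                  # at most 10 * 2**3 = 80 columns
print(X_coded[0].nnz)                 # 10: one active leaf per tree
```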
See also
[Manifold learning](manifold#manifold) techniques can also be useful to derive non-linear representations of feature space, though these approaches also focus on dimensionality reduction.
1.11.3. AdaBoost
-----------------
The module [`sklearn.ensemble`](classes#module-sklearn.ensemble "sklearn.ensemble") includes the popular boosting algorithm AdaBoost, introduced in 1995 by Freund and Schapire [[FS1995]](#fs1995).
The core principle of AdaBoost is to fit a sequence of weak learners (i.e., models that are only slightly better than random guessing, such as small decision trees) on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction. The data modifications at each so-called boosting iteration consist of applying weights \(w\_1\), \(w\_2\), …, \(w\_N\) to each of the training samples. Initially, those weights are all set to \(w\_i = 1/N\), so that the first step simply trains a weak learner on the original data. For each successive iteration, the sample weights are individually modified and the learning algorithm is reapplied to the reweighted data. At a given step, those training examples that were incorrectly predicted by the boosted model induced at the previous step have their weights increased, whereas the weights are decreased for those that were predicted correctly. As iterations proceed, examples that are difficult to predict receive ever-increasing influence. Each subsequent weak learner is thereby forced to concentrate on the examples that are missed by the previous ones in the sequence [[HTF]](#htf).
AdaBoost can be used both for classification and regression problems:
* For multi-class classification, [`AdaBoostClassifier`](generated/sklearn.ensemble.adaboostclassifier#sklearn.ensemble.AdaBoostClassifier "sklearn.ensemble.AdaBoostClassifier") implements AdaBoost-SAMME and AdaBoost-SAMME.R [[ZZRH2009]](#zzrh2009).
* For regression, [`AdaBoostRegressor`](generated/sklearn.ensemble.adaboostregressor#sklearn.ensemble.AdaBoostRegressor "sklearn.ensemble.AdaBoostRegressor") implements AdaBoost.R2 [[D1997]](#d1997).
###
1.11.3.1. Usage
The following example shows how to fit an AdaBoost classifier with 100 weak learners:
```
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import AdaBoostClassifier
>>> X, y = load_iris(return_X_y=True)
>>> clf = AdaBoostClassifier(n_estimators=100)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.9...
```
The number of weak learners is controlled by the parameter `n_estimators`. The `learning_rate` parameter controls the contribution of the weak learners in the final combination. By default, weak learners are decision stumps. Different weak learners can be specified through the `base_estimator` parameter. The main parameters to tune to obtain good results are `n_estimators` and the complexity of the base estimators (e.g., its depth `max_depth` or minimum required number of samples to consider a split `min_samples_split`).
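A hedged sketch of tuning these together, using the `base_estimator` name documented above to pass a slightly deeper weak learner and a smaller learning rate:

```
# Hedged sketch: AdaBoost with a custom weak learner and a smaller learning rate.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=200, learning_rate=0.5, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```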
1.11.4. Gradient Tree Boosting
-------------------------------
[Gradient Tree Boosting](https://en.wikipedia.org/wiki/Gradient_boosting) or Gradient Boosted Decision Trees (GBDT) is a generalization of boosting to arbitrary differentiable loss functions, see the seminal work of [[Friedman2001]](#friedman2001). GBDT is an accurate and effective off-the-shelf procedure that can be used for both regression and classification problems in a variety of areas including Web search ranking and ecology.
The module [`sklearn.ensemble`](classes#module-sklearn.ensemble "sklearn.ensemble") provides methods for both classification and regression via gradient boosted decision trees.
Note
Scikit-learn 0.21 introduces two new implementations of gradient boosting trees, namely [`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor"), inspired by [LightGBM](https://github.com/Microsoft/LightGBM) (See [[LightGBM]](#lightgbm)).
These histogram-based estimators can be **orders of magnitude faster** than [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") when the number of samples is larger than tens of thousands of samples.
They also have built-in support for missing values, which avoids the need for an imputer.
These estimators are described in more detail below in [Histogram-Based Gradient Boosting](#histogram-based-gradient-boosting).
The following guide focuses on [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor"), which might be preferred for small sample sizes since binning may lead to split points that are too approximate in this setting.
The usage and the parameters of [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") are described below. The 2 most important parameters of these estimators are `n_estimators` and `learning_rate`.
###
1.11.4.1. Classification
[`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") supports both binary and multi-class classification. The following example shows how to fit a gradient boosting classifier with 100 decision stumps as weak learners:
```
>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> X, y = make_hastie_10_2(random_state=0)
>>> X_train, X_test = X[:2000], X[2000:]
>>> y_train, y_test = y[:2000], y[2000:]
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
... max_depth=1, random_state=0).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.913...
```
The number of weak learners (i.e. regression trees) is controlled by the parameter `n_estimators`; [The size of each tree](#gradient-boosting-tree-size) can be controlled either by setting the tree depth via `max_depth` or by setting the number of leaf nodes via `max_leaf_nodes`. The `learning_rate` is a hyper-parameter in the range (0.0, 1.0] that controls overfitting via [shrinkage](#gradient-boosting-shrinkage) .
Note
Classification with more than 2 classes requires the induction of `n_classes` regression trees at each iteration, thus, the total number of induced trees equals `n_classes * n_estimators`. For datasets with a large number of classes we strongly recommend using [`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") as an alternative to [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier").
###
1.11.4.2. Regression
[`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") supports a number of [different loss functions](#gradient-boosting-loss) for regression which can be specified via the argument `loss`; the default loss function for regression is squared error (`'squared_error'`).
```
>>> import numpy as np
>>> from sklearn.metrics import mean_squared_error
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
>>> X_train, X_test = X[:200], X[200:]
>>> y_train, y_test = y[:200], y[200:]
>>> est = GradientBoostingRegressor(
... n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0,
... loss='squared_error'
... ).fit(X_train, y_train)
>>> mean_squared_error(y_test, est.predict(X_test))
5.00...
```
The figure below shows the results of applying [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") with least squares loss and 500 base learners to the diabetes dataset ([`sklearn.datasets.load_diabetes`](generated/sklearn.datasets.load_diabetes#sklearn.datasets.load_diabetes "sklearn.datasets.load_diabetes")). The plot shows the train and test error at each iteration. The train error at each iteration is stored in the `train_score_` attribute of the gradient boosting model. The test error at each iteration can be obtained via the [`staged_predict`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor.staged_predict "sklearn.ensemble.GradientBoostingRegressor.staged_predict") method which returns a generator that yields the predictions at each stage. Plots like these can be used to determine the optimal number of trees (i.e. `n_estimators`) by early stopping.
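A hedged sketch of using `staged_predict` to trace the test error and pick a number of trees; the dataset and split mirror the regression example above:

```
# Hedged sketch: test error per boosting stage via staged_predict.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]
est = GradientBoostingRegressor(n_estimators=500, max_depth=1, learning_rate=0.1,
                                random_state=0).fit(X_train, y_train)
test_errors = [mean_squared_error(y_test, y_pred)
               for y_pred in est.staged_predict(X_test)]
best_n = int(np.argmin(test_errors)) + 1
print(best_n, test_errors[best_n - 1])
```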
###
1.11.4.3. Fitting additional weak-learners
Both [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") and [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") support `warm_start=True` which allows you to add more estimators to an already fitted model.
```
>>> _ = est.set_params(n_estimators=200, warm_start=True) # set warm_start and new nr of trees
>>> _ = est.fit(X_train, y_train) # fit additional 100 trees to est
>>> mean_squared_error(y_test, est.predict(X_test))
3.84...
```
###
1.11.4.4. Controlling the tree size
The size of the regression tree base learners defines the level of variable interactions that can be captured by the gradient boosting model. In general, a tree of depth `h` can capture interactions of order `h` . There are two ways in which the size of the individual regression trees can be controlled.
If you specify `max_depth=h` then complete binary trees of depth `h` will be grown. Such trees will have (at most) `2**h` leaf nodes and `2**h - 1` split nodes.
Alternatively, you can control the tree size by specifying the number of leaf nodes via the parameter `max_leaf_nodes`. In this case, trees will be grown using best-first search where nodes with the highest improvement in impurity will be expanded first. A tree with `max_leaf_nodes=k` has `k - 1` split nodes and thus can model interactions of up to order `max_leaf_nodes - 1` .
We found that `max_leaf_nodes=k` gives comparable results to `max_depth=k-1` but is significantly faster to train at the expense of a slightly higher training error. The parameter `max_leaf_nodes` corresponds to the variable `J` in the chapter on gradient boosting in [[Friedman2001]](#friedman2001) and is related to the parameter `interaction.depth` in R’s gbm package where `max_leaf_nodes == interaction.depth + 1` .
###
1.11.4.5. Mathematical formulation
We first present GBRT for regression, and then detail the classification case.
####
1.11.4.5.1. Regression
GBRT regressors are additive models whose prediction \(\hat{y}\_i\) for a given input \(x\_i\) is of the following form:
\[\hat{y}\_i = F\_M(x\_i) = \sum\_{m=1}^{M} h\_m(x\_i)\] where the \(h\_m\) are estimators called *weak learners* in the context of boosting. Gradient Tree Boosting uses [decision tree regressors](tree#tree) of fixed size as weak learners. The constant M corresponds to the `n_estimators` parameter.
Similar to other boosting algorithms, a GBRT is built in a greedy fashion:
\[F\_m(x) = F\_{m-1}(x) + h\_m(x),\] where the newly added tree \(h\_m\) is fitted in order to minimize a sum of losses \(L\_m\), given the previous ensemble \(F\_{m-1}\):
\[h\_m = \arg\min\_{h} L\_m = \arg\min\_{h} \sum\_{i=1}^{n} l(y\_i, F\_{m-1}(x\_i) + h(x\_i)),\] where \(l(y\_i, F(x\_i))\) is defined by the `loss` parameter, detailed in the next section.
By default, the initial model \(F\_{0}\) is chosen as the constant that minimizes the loss: for a least-squares loss, this is the empirical mean of the target values. The initial model can also be specified via the `init` argument.
Using a first-order Taylor approximation, the value of \(l\) can be approximated as follows:
\[l(y\_i, F\_{m-1}(x\_i) + h\_m(x\_i)) \approx l(y\_i, F\_{m-1}(x\_i)) + h\_m(x\_i) \left[ \frac{\partial l(y\_i, F(x\_i))}{\partial F(x\_i)} \right]\_{F=F\_{m - 1}}.\]
Note
Briefly, a first-order Taylor approximation says that \(l(z) \approx l(a) + (z - a) \frac{\partial l(a)}{\partial a}\). Here, \(z\) corresponds to \(F\_{m - 1}(x\_i) + h\_m(x\_i)\), and \(a\) corresponds to \(F\_{m-1}(x\_i)\)
The quantity \(\left[ \frac{\partial l(y\_i, F(x\_i))}{\partial F(x\_i)} \right]\_{F=F\_{m - 1}}\) is the derivative of the loss with respect to its second parameter, evaluated at \(F\_{m-1}(x)\). It is easy to compute for any given \(F\_{m - 1}(x\_i)\) in a closed form since the loss is differentiable. We will denote it by \(g\_i\).
Removing the constant terms, we have:
\[h\_m \approx \arg\min\_{h} \sum\_{i=1}^{n} h(x\_i) g\_i\] This is minimized if \(h(x\_i)\) is fitted to predict a value that is proportional to the negative gradient \(-g\_i\). Therefore, at each iteration, **the estimator** \(h\_m\) **is fitted to predict the negative gradients of the samples**. The gradients are updated at each iteration. This can be considered as some kind of gradient descent in a functional space.
Note
For some losses, e.g. the least absolute deviation (LAD) where the gradients are \(\pm 1\), the values predicted by a fitted \(h\_m\) are not accurate enough: the tree can only output integer values. As a result, the leaf values of the tree \(h\_m\) are modified once the tree is fitted, such that the leaf values minimize the loss \(L\_m\). The update is loss-dependent: for the LAD loss, the value of a leaf is updated to the median of the samples in that leaf.
####
1.11.4.5.2. Classification
Gradient boosting for classification is very similar to the regression case. However, the sum of the trees \(F\_M(x\_i) = \sum\_m h\_m(x\_i)\) is not homogeneous to a prediction: it cannot be a class, since the trees predict continuous values.
The mapping from the value \(F\_M(x\_i)\) to a class or a probability is loss-dependent. For the log-loss, the probability that \(x\_i\) belongs to the positive class is modeled as \(p(y\_i = 1 | x\_i) = \sigma(F\_M(x\_i))\) where \(\sigma\) is the sigmoid or expit function.
For multiclass classification, K trees (for K classes) are built at each of the \(M\) iterations. The probability that \(x\_i\) belongs to class k is modeled as a softmax of the \(F\_{M,k}(x\_i)\) values.
Note that even for a classification task, the \(h\_m\) sub-estimator is still a regressor, not a classifier. This is because the sub-estimators are trained to predict (negative) *gradients*, which are always continuous quantities.
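For the binary case, this mapping can be checked directly: the probabilities returned by `predict_proba` should be the sigmoid of the raw scores returned by `decision_function`. A hedged sketch, assuming the default log-loss:

```
# Hedged sketch: for binary log-loss, predicted probabilities are the sigmoid
# of the raw additive score returned by decision_function.
import numpy as np
from scipy.special import expit
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_hastie_10_2(random_state=0)
clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X[:2000], y[:2000])
raw = clf.decision_function(X[2000:2005])       # F_M(x_i)
proba = clf.predict_proba(X[2000:2005])[:, 1]   # p(y = +1 | x_i)
print(np.allclose(proba, expit(raw)))           # expected: True
```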
###
1.11.4.6. Loss Functions
The following loss functions are supported and can be specified using the parameter `loss`:
* Regression
+ Squared error (`'squared_error'`): The natural choice for regression due to its superior computational properties. The initial model is given by the mean of the target values.
+ Least absolute deviation (`'lad'`): A robust loss function for regression. The initial model is given by the median of the target values.
+ Huber (`'huber'`): Another robust loss function that combines least squares and least absolute deviation; use `alpha` to control the sensitivity with regards to outliers (see [[Friedman2001]](#friedman2001) for more details).
+ Quantile (`'quantile'`): A loss function for quantile regression. Use `0 < alpha < 1` to specify the quantile. This loss function can be used to create prediction intervals (see [Prediction Intervals for Gradient Boosting Regression](../auto_examples/ensemble/plot_gradient_boosting_quantile#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-quantile-py)).
* Classification
+ Binary log-loss (`'log-loss'`): The binomial negative log-likelihood loss function for binary classification. It provides probability estimates. The initial model is given by the log odds-ratio.
+ Multi-class log-loss (`'log-loss'`): The multinomial negative log-likelihood loss function for multi-class classification with `n_classes` mutually exclusive classes. It provides probability estimates. The initial model is given by the prior probability of each class. At each iteration `n_classes` regression trees have to be constructed which makes GBRT rather inefficient for data sets with a large number of classes.
+ Exponential loss (`'exponential'`): The same loss function as [`AdaBoostClassifier`](generated/sklearn.ensemble.adaboostclassifier#sklearn.ensemble.AdaBoostClassifier "sklearn.ensemble.AdaBoostClassifier"). Less robust to mislabeled examples than `'log-loss'`; can only be used for binary classification.
###
1.11.4.7. Shrinkage via learning rate
[[Friedman2001]](#friedman2001) proposed a simple regularization strategy that scales the contribution of each weak learner by a constant factor \(\nu\):
\[F\_m(x) = F\_{m-1}(x) + \nu h\_m(x)\] The parameter \(\nu\) is also called the **learning rate** because it scales the step length of the gradient descent procedure; it can be set via the `learning_rate` parameter.
The parameter `learning_rate` strongly interacts with the parameter `n_estimators`, the number of weak learners to fit. Smaller values of `learning_rate` require larger numbers of weak learners to maintain a constant training error. Empirical evidence suggests that small values of `learning_rate` favor better test error. [[HTF]](#htf) recommend setting the learning rate to a small constant (e.g. `learning_rate <= 0.1`) and choosing `n_estimators` by early stopping. For a more detailed discussion of the interaction between `learning_rate` and `n_estimators` see [[R2007]](#r2007).
###
1.11.4.8. Subsampling
[[Friedman2002]](#friedman2002) proposed stochastic gradient boosting, which combines gradient boosting with bootstrap averaging (bagging). At each iteration the base classifier is trained on a fraction `subsample` of the available training data. The subsample is drawn without replacement. A typical value of `subsample` is 0.5.
The figure below illustrates the effect of shrinkage and subsampling on the goodness-of-fit of the model. We can clearly see that shrinkage outperforms no-shrinkage. Subsampling with shrinkage can further increase the accuracy of the model. Subsampling without shrinkage, on the other hand, does poorly.
Another strategy to reduce the variance is by subsampling the features analogous to the random splits in [`RandomForestClassifier`](generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier"). The number of subsampled features can be controlled via the `max_features` parameter.
Note
Using a small `max_features` value can significantly decrease the runtime.
Stochastic gradient boosting makes it possible to compute out-of-bag estimates of the test deviance by computing the improvement in deviance on the examples that are not included in the bootstrap sample (i.e. the out-of-bag examples). The improvements are stored in the attribute `oob_improvement_`. `oob_improvement_[i]` holds the improvement in terms of the loss on the OOB samples if you add the i-th stage to the current predictions. Out-of-bag estimates can be used for model selection, for example to determine the optimal number of iterations. OOB estimates are usually very pessimistic, thus we recommend using cross-validation instead and only using OOB if cross-validation is too time consuming.
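A hedged sketch of reading `oob_improvement_` from a subsampled model to pick an iteration count (keeping in mind the pessimism noted above):

```
# Hedged sketch: OOB improvements are available when subsample < 1.0.
import numpy as np
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_hastie_10_2(random_state=0)
est = GradientBoostingClassifier(n_estimators=200, subsample=0.5,
                                 random_state=0).fit(X[:2000], y[:2000])
cumulative_oob = np.cumsum(est.oob_improvement_)
best_iter = int(np.argmax(cumulative_oob)) + 1   # often a pessimistic choice
print(best_iter)
```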
###
1.11.4.9. Interpretation with feature importance
Individual decision trees can be interpreted easily by simply visualizing the tree structure. Gradient boosting models, however, comprise hundreds of regression trees thus they cannot be easily interpreted by visual inspection of the individual trees. Fortunately, a number of techniques have been proposed to summarize and interpret gradient boosting models.
Often features do not contribute equally to predicting the target response; in many situations the majority of the features are in fact irrelevant. When interpreting a model, the first question usually is: what are those important features and how do they contribute to predicting the target response?
Individual decision trees intrinsically perform feature selection by selecting appropriate split points. This information can be used to measure the importance of each feature; the basic idea is: the more often a feature is used in the split points of a tree the more important that feature is. This notion of importance can be extended to decision tree ensembles by simply averaging the impurity-based feature importance of each tree (see [Feature importance evaluation](#random-forest-feature-importance) for more details).
The feature importance scores of a fit gradient boosting model can be accessed via the `feature_importances_` property:
```
>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> X, y = make_hastie_10_2(random_state=0)
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
... max_depth=1, random_state=0).fit(X, y)
>>> clf.feature_importances_
array([0.10..., 0.10..., 0.11..., ...
```
Note that this computation of feature importance is based on entropy, and it is distinct from [`sklearn.inspection.permutation_importance`](generated/sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance") which is based on permutation of the features.
1.11.5. Histogram-Based Gradient Boosting
------------------------------------------
Scikit-learn 0.21 introduced two new implementations of gradient boosting trees, namely [`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor"), inspired by [LightGBM](https://github.com/Microsoft/LightGBM) (See [[LightGBM]](#lightgbm)).
These histogram-based estimators can be **orders of magnitude faster** than [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") when the number of samples is larger than tens of thousands of samples.
They also have built-in support for missing values, which avoids the need for an imputer.
These fast estimators first bin the input samples `X` into integer-valued bins (typically 256 bins) which tremendously reduces the number of splitting points to consider, and allows the algorithm to leverage integer-based data structures (histograms) instead of relying on sorted continuous values when building the trees. The API of these estimators is slightly different, and some of the features from [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor") are not yet supported, for instance some loss functions.
###
1.11.5.1. Usage
Most of the parameters are unchanged from [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor"). One exception is the `max_iter` parameter that replaces `n_estimators`, and controls the number of iterations of the boosting process:
```
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> from sklearn.datasets import make_hastie_10_2
>>> X, y = make_hastie_10_2(random_state=0)
>>> X_train, X_test = X[:2000], X[2000:]
>>> y_train, y_test = y[:2000], y[2000:]
>>> clf = HistGradientBoostingClassifier(max_iter=100).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.8965
```
Available losses for regression are ‘squared\_error’, ‘absolute\_error’, which is less sensitive to outliers, and ‘poisson’, which is well suited to model counts and frequencies. For classification, ‘log\_loss’ is the only option. For binary classification it uses the binary log loss, also known as binomial deviance or binary cross-entropy. For `n_classes >= 3`, it uses the multi-class log loss function, with multinomial deviance and categorical cross-entropy as alternative names. The appropriate loss version is selected based on [y](https://scikit-learn.org/1.1/glossary.html#term-y) passed to [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
The size of the trees can be controlled through the `max_leaf_nodes`, `max_depth`, and `min_samples_leaf` parameters.
The number of bins used to bin the data is controlled with the `max_bins` parameter. Using fewer bins acts as a form of regularization. It is generally recommended to use as many bins as possible, which is the default.
The `l2_regularization` parameter is a regularizer on the loss function and corresponds to \(\lambda\) in equation (2) of [[XGBoost]](#xgboost).
Note that **early-stopping is enabled by default if the number of samples is larger than 10,000**. The early-stopping behaviour is controlled via the `early_stopping`, `scoring`, `validation_fraction`, `n_iter_no_change`, and `tol` parameters. It is possible to early-stop using an arbitrary [scorer](https://scikit-learn.org/1.1/glossary.html#term-scorer), or just the training or validation loss. Note that for technical reasons, using a scorer is significantly slower than using the loss. By default, early-stopping is performed if there are at least 10,000 samples in the training set, using the validation loss.
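A hedged sketch of explicit early stopping on the validation loss; the fitted model exposes the number of iterations actually performed through the `n_iter_` attribute:

```
# Hedged sketch: explicit early stopping for a histogram-based model.
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_hastie_10_2(random_state=0)
clf = HistGradientBoostingClassifier(max_iter=500,
                                     early_stopping=True,
                                     scoring="loss",          # monitor the validation loss
                                     validation_fraction=0.1,
                                     n_iter_no_change=10,
                                     random_state=0).fit(X, y)
print(clf.n_iter_)   # usually well below max_iter once the loss stops improving
```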
###
1.11.5.2. Missing values support
[`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") have built-in support for missing values (NaNs).
During training, the tree grower learns at each split point whether samples with missing values should go to the left or right child, based on the potential gain. When predicting, samples with missing values are assigned to the left or right child accordingly:
```
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> import numpy as np
>>> X = np.array([0, 1, 2, np.nan]).reshape(-1, 1)
>>> y = [0, 0, 1, 1]
>>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
>>> gbdt.predict(X)
array([0, 0, 1, 1])
```
When the missingness pattern is predictive, the splits can be done on whether the feature value is missing or not:
```
>>> X = np.array([0, np.nan, 1, 2, np.nan]).reshape(-1, 1)
>>> y = [0, 1, 0, 0, 1]
>>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1,
... max_depth=2,
... learning_rate=1,
... max_iter=1).fit(X, y)
>>> gbdt.predict(X)
array([0, 1, 0, 0, 1])
```
If no missing values were encountered for a given feature during training, then samples with missing values are mapped to whichever child has the most samples.
###
1.11.5.3. Sample weight support
[`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") support sample weights during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
The following toy example demonstrates how the model ignores the samples with zero sample weights:
```
>>> X = [[1, 0],
... [1, 0],
... [1, 0],
... [0, 1]]
>>> y = [0, 0, 1, 0]
>>> # ignore the first 2 training samples by setting their weight to 0
>>> sample_weight = [0, 0, 1, 1]
>>> gb = HistGradientBoostingClassifier(min_samples_leaf=1)
>>> gb.fit(X, y, sample_weight=sample_weight)
HistGradientBoostingClassifier(...)
>>> gb.predict([[1, 0]])
array([1])
>>> gb.predict_proba([[1, 0]])[0, 1]
0.99...
```
As you can see, the sample `[1, 0]` is comfortably classified as `1` since the first two samples are ignored due to their sample weights.
Implementation detail: taking sample weights into account amounts to multiplying the gradients (and the hessians) by the sample weights. Note that the binning stage (specifically the quantiles computation) does not take the weights into account.
###
1.11.5.4. Categorical Features Support
[`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") have native support for categorical features: they can consider splits on non-ordered, categorical data.
For datasets with categorical features, using the native categorical support is often better than relying on one-hot encoding ([`OneHotEncoder`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder")), because one-hot encoding requires more tree depth to achieve equivalent splits. It is also usually better to rely on the native categorical support rather than to treat categorical features as continuous (ordinal), which happens for ordinal-encoded categorical data, since categories are nominal quantities where order does not matter.
To enable categorical support, a boolean mask can be passed to the `categorical_features` parameter, indicating which feature is categorical. In the following, the first feature will be treated as categorical and the second feature as numerical:
```
>>> gbdt = HistGradientBoostingClassifier(categorical_features=[True, False])
```
Equivalently, one can pass a list of integers indicating the indices of the categorical features:
```
>>> gbdt = HistGradientBoostingClassifier(categorical_features=[0])
```
The cardinality of each categorical feature should be less than the `max_bins` parameter, and each categorical feature is expected to be encoded in `[0, max_bins - 1]`. To that end, it might be useful to pre-process the data with an [`OrdinalEncoder`](generated/sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") as done in [Categorical Feature Support in Gradient Boosting](../auto_examples/ensemble/plot_gradient_boosting_categorical#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-categorical-py).
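A minimal sketch of that pre-processing step is shown below (the column names are hypothetical); unknown categories are encoded as `np.nan` so that they are handled like missing values at prediction time:
```
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder

# toy data with one categorical and one numerical column
X = pd.DataFrame({"colour": ["red", "blue", "red", "green"],
                  "size": [1.0, 2.0, 3.0, 4.0]})
y = [0, 1, 0, 1]

encoder = make_column_transformer(
    (OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=np.nan),
     ["colour"]),
    remainder="passthrough",
)
# the encoded categorical column comes first in the transformed output, hence index 0
model = make_pipeline(
    encoder,
    HistGradientBoostingClassifier(categorical_features=[0], min_samples_leaf=1),
).fit(X, y)
```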
If there are missing values during training, the missing values will be treated as a proper category. If there are no missing values during training, then at prediction time, missing values are mapped to the child node that has the most samples (just like for continuous features). When predicting, categories that were not seen during fit time will be treated as missing values.
**Split finding with categorical features**: The canonical way of considering categorical splits in a tree is to consider all of the \(2^{K - 1} - 1\) partitions, where \(K\) is the number of categories. This can quickly become prohibitive when \(K\) is large. Fortunately, since gradient boosting trees are always regression trees (even for classification problems), there exists a faster strategy that can yield equivalent splits. First, the categories of a feature are sorted according to the variance of the target computed for each category `k`. Once the categories are sorted, one can consider *continuous partitions*, i.e. treat the categories as if they were ordered continuous values (see Fisher [[Fisher1958]](#fisher1958) for a formal proof). As a result, only \(K - 1\) splits need to be considered instead of \(2^{K - 1} - 1\). The initial sorting is a \(\mathcal{O}(K \log(K))\) operation, leading to a total complexity of \(\mathcal{O}(K \log(K) + K)\), instead of \(\mathcal{O}(2^K)\).
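The following toy sketch (purely illustrative, not the library's internal code) shows the idea: rank the categories by a per-category statistic of the target (the mean is used here for simplicity), after which only the \(K - 1\) contiguous partitions of that ordering need to be evaluated:
```
import numpy as np

cats = np.array([0, 0, 1, 1, 2, 2])            # hypothetical categorical feature
y = np.array([0.1, 0.3, 0.9, 1.1, 0.4, 0.6])   # target values

unique_cats = np.unique(cats)
stat = np.array([y[cats == k].mean() for k in unique_cats])
ordered = unique_cats[np.argsort(stat)]        # categories sorted by the statistic

# only K - 1 "continuous" partitions of the sorted categories are candidate splits
candidate_splits = [(set(ordered[:i]), set(ordered[i:]))
                    for i in range(1, len(ordered))]
```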
###
1.11.5.5. Monotonic Constraints
Depending on the problem at hand, you may have prior knowledge indicating that a given feature should in general have a positive (or negative) effect on the target value. For example, all else being equal, a higher credit score should increase the probability of getting approved for a loan. Monotonic constraints allow you to incorporate such prior knowledge into the model.
A positive monotonic constraint is a constraint of the form:
\(x\_1 \leq x\_1' \implies F(x\_1, x\_2) \leq F(x\_1', x\_2)\), where \(F\) is the predictor with two features.
Similarly, a negative monotonic constraint is of the form:
\(x\_1 \leq x\_1' \implies F(x\_1, x\_2) \geq F(x\_1', x\_2)\).
Note that monotonic constraints only constrain the output “all else being equal”. Indeed, the following relation **is not enforced** by a positive constraint: \(x\_1 \leq x\_1' \implies F(x\_1, x\_2) \leq F(x\_1', x\_2')\).
You can specify a monotonic constraint on each feature using the `monotonic_cst` parameter. For each feature, a value of 0 indicates no constraint, while -1 and 1 indicate a negative and positive constraint, respectively:
```
>>> from sklearn.ensemble import HistGradientBoostingRegressor
>>> # positive, negative, and no constraint on the 3 features
>>> gbdt = HistGradientBoostingRegressor(monotonic_cst=[1, -1, 0])
```
In a binary classification context, imposing a monotonic constraint means that the feature is supposed to have a positive / negative effect on the probability of belonging to the positive class. Monotonic constraints are not supported in a multiclass context.
Note
Since categories are unordered quantities, it is not possible to enforce monotonic constraints on categorical features.
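As a quick sanity check of the `monotonic_cst` behaviour described above, the sketch below (synthetic data) fits a regressor with a positive constraint on its single feature and verifies that predictions are non-decreasing along that feature:
```
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=200)

gbdt = HistGradientBoostingRegressor(monotonic_cst=[1]).fit(X, y)

grid = np.linspace(0, 1, 50).reshape(-1, 1)
assert np.all(np.diff(gbdt.predict(grid)) >= 0)   # non-decreasing predictions
```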
###
1.11.5.6. Low-level parallelism
[`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") have implementations that use OpenMP for parallelization through Cython. For more details on how to control the number of threads, please refer to our [Parallelism](https://scikit-learn.org/1.1/computing/parallelism.html#parallelism) notes.
The following parts are parallelized:
* mapping samples from real values to integer-valued bins (finding the bin thresholds is however sequential)
* building histograms is parallelized over features
* finding the best split point at a node is parallelized over features
* during fit, mapping samples into the left and right children is parallelized over samples
* gradient and hessians computations are parallelized over samples
* predicting is parallelized over samples
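As one possible approach to controlling this parallelism (an illustration, not the only mechanism), the OpenMP thread pool can be capped with the `threadpoolctl` package, which ships as a scikit-learn dependency:
```
from threadpoolctl import threadpool_limits
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# limit the OpenMP thread pool to 2 threads for the duration of the fit
with threadpool_limits(limits=2, user_api="openmp"):
    HistGradientBoostingClassifier(max_iter=10).fit(X, y)
```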
###
1.11.5.7. Why it’s faster
The bottleneck of a gradient boosting procedure is building the decision trees. Building a traditional decision tree (as in the other GBDTs [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor")) requires sorting the samples at each node (for each feature). Sorting is needed so that the potential gain of a split point can be computed efficiently. Splitting a single node has thus a complexity of \(\mathcal{O}(n\_\text{features} \times n \log(n))\) where \(n\) is the number of samples at the node.
[`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor"), in contrast, do not require sorting the feature values and instead use a data-structure called a histogram, where the samples are implicitly ordered. Building a histogram has a \(\mathcal{O}(n)\) complexity, so the node splitting procedure has a \(\mathcal{O}(n\_\text{features} \times n)\) complexity, much smaller than the previous one. In addition, instead of considering \(n\) split points, we here consider only `max_bins` split points, which is much smaller.
In order to build histograms, the input data `X` needs to be binned into integer-valued bins. This binning procedure does require sorting the feature values, but it only happens once at the very beginning of the boosting process (not at each node, like in [`GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") and [`GradientBoostingRegressor`](generated/sklearn.ensemble.gradientboostingregressor#sklearn.ensemble.GradientBoostingRegressor "sklearn.ensemble.GradientBoostingRegressor")).
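The sketch below (illustrative only, not the library's internal binner) shows the idea: compute quantile thresholds once, map every sample to an integer bin, and from then on a histogram over the bins can be built in a single \(\mathcal{O}(n)\) pass:
```
import numpy as np

rng = np.random.RandomState(0)
x = rng.normal(size=1000)          # one feature

max_bins = 16
# bin edges from quantiles, computed once before boosting starts
thresholds = np.quantile(x, np.linspace(0, 1, max_bins + 1)[1:-1])
binned = np.searchsorted(thresholds, x)            # integer bin index per sample

hist = np.bincount(binned, minlength=max_bins)     # O(n) histogram of bin counts
```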
Finally, many parts of the implementation of [`HistGradientBoostingClassifier`](generated/sklearn.ensemble.histgradientboostingclassifier#sklearn.ensemble.HistGradientBoostingClassifier "sklearn.ensemble.HistGradientBoostingClassifier") and [`HistGradientBoostingRegressor`](generated/sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") are parallelized.
1.11.6. Voting Classifier
--------------------------
The idea behind the [`VotingClassifier`](generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") is to combine conceptually different machine learning classifiers and use a majority vote or the average predicted probabilities (soft vote) to predict the class labels. Such a classifier can be useful for a set of equally well performing models in order to balance out their individual weaknesses.
###
1.11.6.1. Majority Class Labels (Majority/Hard Voting)
In majority voting, the predicted class label for a particular sample is the class label that represents the majority (mode) of the class labels predicted by each individual classifier.
E.g., if the prediction for a given sample is
* classifier 1 -> class 1
* classifier 2 -> class 1
* classifier 3 -> class 2
the VotingClassifier (with `voting='hard'`) would classify the sample as “class 1” based on the majority class label.
In the case of a tie, the [`VotingClassifier`](generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") will select the class based on ascending sort order. E.g., in the following scenario
* classifier 1 -> class 2
* classifier 2 -> class 1
the class label 1 will be assigned to the sample.
###
1.11.6.2. Usage
The following example shows how to fit the majority rule classifier:
```
>>> from sklearn import datasets
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import VotingClassifier
>>> iris = datasets.load_iris()
>>> X, y = iris.data[:, 1:3], iris.target
>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
>>> clf3 = GaussianNB()
>>> eclf = VotingClassifier(
... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
... voting='hard')
>>> for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']):
... scores = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
... print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
Accuracy: 0.95 (+/- 0.04) [Logistic Regression]
Accuracy: 0.94 (+/- 0.04) [Random Forest]
Accuracy: 0.91 (+/- 0.04) [naive Bayes]
Accuracy: 0.95 (+/- 0.04) [Ensemble]
```
###
1.11.6.3. Weighted Average Probabilities (Soft Voting)
In contrast to majority voting (hard voting), soft voting returns the class label as argmax of the sum of predicted probabilities.
Specific weights can be assigned to each classifier via the `weights` parameter. When weights are provided, the predicted class probabilities for each classifier are collected, multiplied by the classifier weight, and averaged. The final class label is then derived from the class label with the highest average probability.
To illustrate this with a simple example, let’s assume we have 3 classifiers and a 3-class classification problem where we assign equal weights to all classifiers: w1=1, w2=1, w3=1.
The weighted average probabilities for a sample would then be calculated as follows:
| classifier | class 1 | class 2 | class 3 |
| --- | --- | --- | --- |
| classifier 1 | w1 \* 0.2 | w1 \* 0.5 | w1 \* 0.3 |
| classifier 2 | w2 \* 0.6 | w2 \* 0.3 | w2 \* 0.1 |
| classifier 3 | w3 \* 0.3 | w3 \* 0.4 | w3 \* 0.3 |
| weighted average | 0.37 | 0.4 | 0.23 |
Here, the predicted class label is 2, since it has the highest average probability.
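The same arithmetic can be reproduced with a short numpy sketch, averaging the per-classifier probabilities with the given weights and taking the argmax over classes:
```
import numpy as np

proba = np.array([[0.2, 0.5, 0.3],    # classifier 1
                  [0.6, 0.3, 0.1],    # classifier 2
                  [0.3, 0.4, 0.3]])   # classifier 3
weights = np.array([1, 1, 1])

avg = np.average(proba, axis=0, weights=weights)
print(np.round(avg, 2))       # [0.37 0.4  0.23], matching the table above
print(avg.argmax() + 1)       # predicted class: 2
```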
The following example illustrates how the decision regions may change when a soft [`VotingClassifier`](generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") is used based on a linear Support Vector Machine, a Decision Tree, and a K-nearest neighbor classifier:
```
>>> from sklearn import datasets
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.svm import SVC
>>> from itertools import product
>>> from sklearn.ensemble import VotingClassifier
>>> # Loading some example data
>>> iris = datasets.load_iris()
>>> X = iris.data[:, [0, 2]]
>>> y = iris.target
>>> # Training classifiers
>>> clf1 = DecisionTreeClassifier(max_depth=4)
>>> clf2 = KNeighborsClassifier(n_neighbors=7)
>>> clf3 = SVC(kernel='rbf', probability=True)
>>> eclf = VotingClassifier(estimators=[('dt', clf1), ('knn', clf2), ('svc', clf3)],
... voting='soft', weights=[2, 1, 2])
>>> clf1 = clf1.fit(X, y)
>>> clf2 = clf2.fit(X, y)
>>> clf3 = clf3.fit(X, y)
>>> eclf = eclf.fit(X, y)
```
###
1.11.6.4. Using the `VotingClassifier` with `GridSearchCV`
The [`VotingClassifier`](generated/sklearn.ensemble.votingclassifier#sklearn.ensemble.VotingClassifier "sklearn.ensemble.VotingClassifier") can also be used together with [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") in order to tune the hyperparameters of the individual estimators:
```
>>> from sklearn.model_selection import GridSearchCV
>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(random_state=1)
>>> clf3 = GaussianNB()
>>> eclf = VotingClassifier(
... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
... voting='soft'
... )
>>> params = {'lr__C': [1.0, 100.0], 'rf__n_estimators': [20, 200]}
>>> grid = GridSearchCV(estimator=eclf, param_grid=params, cv=5)
>>> grid = grid.fit(iris.data, iris.target)
```
###
1.11.6.5. Usage
In order to predict the class labels based on the predicted class probabilities (every scikit-learn estimator in the VotingClassifier must support the `predict_proba` method), use `voting='soft'`:
```
>>> eclf = VotingClassifier(
... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
... voting='soft'
... )
```
Optionally, weights can be provided for the individual classifiers:
```
>>> eclf = VotingClassifier(
... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
... voting='soft', weights=[2,5,1]
... )
```
1.11.7. Voting Regressor
-------------------------
The idea behind the [`VotingRegressor`](generated/sklearn.ensemble.votingregressor#sklearn.ensemble.VotingRegressor "sklearn.ensemble.VotingRegressor") is to combine conceptually different machine learning regressors and return the average predicted values. Such a regressor can be useful for a set of equally well performing models in order to balance out their individual weaknesses.
###
1.11.7.1. Usage
The following example shows how to fit the VotingRegressor:
```
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import VotingRegressor
>>> # Loading some example data
>>> X, y = load_diabetes(return_X_y=True)
>>> # Training regressors
>>> reg1 = GradientBoostingRegressor(random_state=1)
>>> reg2 = RandomForestRegressor(random_state=1)
>>> reg3 = LinearRegression()
>>> ereg = VotingRegressor(estimators=[('gb', reg1), ('rf', reg2), ('lr', reg3)])
>>> ereg = ereg.fit(X, y)
```
1.11.8. Stacked generalization
-------------------------------
Stacked generalization is a method for combining estimators to reduce their biases [[W1992]](#w1992) [[HTF]](#htf). More precisely, the predictions of each individual estimator are stacked together and used as input to a final estimator to compute the prediction. This final estimator is trained through cross-validation.
The [`StackingClassifier`](generated/sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier") and [`StackingRegressor`](generated/sklearn.ensemble.stackingregressor#sklearn.ensemble.StackingRegressor "sklearn.ensemble.StackingRegressor") provide such strategies which can be applied to classification and regression problems.
The `estimators` parameter corresponds to the list of the estimators which are stacked together in parallel on the input data. It should be given as a list of names and estimators:
```
>>> from sklearn.linear_model import RidgeCV, LassoCV
>>> from sklearn.neighbors import KNeighborsRegressor
>>> estimators = [('ridge', RidgeCV()),
... ('lasso', LassoCV(random_state=42)),
... ('knr', KNeighborsRegressor(n_neighbors=20,
... metric='euclidean'))]
```
The `final_estimator` will use the predictions of the `estimators` as input. It needs to be a classifier or a regressor when using [`StackingClassifier`](generated/sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier") or [`StackingRegressor`](generated/sklearn.ensemble.stackingregressor#sklearn.ensemble.StackingRegressor "sklearn.ensemble.StackingRegressor"), respectively:
```
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.ensemble import StackingRegressor
>>> final_estimator = GradientBoostingRegressor(
... n_estimators=25, subsample=0.5, min_samples_leaf=25, max_features=1,
... random_state=42)
>>> reg = StackingRegressor(
... estimators=estimators,
... final_estimator=final_estimator)
```
To train the `estimators` and `final_estimator`, the `fit` method needs to be called on the training data:
```
>>> from sklearn.datasets import load_diabetes
>>> X, y = load_diabetes(return_X_y=True)
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
... random_state=42)
>>> reg.fit(X_train, y_train)
StackingRegressor(...)
```
During training, the `estimators` are fitted on the whole training data `X_train`. They will be used when calling `predict` or `predict_proba`. To generalize and avoid over-fitting, the `final_estimator` is trained on out-of-sample predictions obtained internally with [`sklearn.model_selection.cross_val_predict`](generated/sklearn.model_selection.cross_val_predict#sklearn.model_selection.cross_val_predict "sklearn.model_selection.cross_val_predict").
For [`StackingClassifier`](generated/sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier"), note that the output of the `estimators` is controlled by the parameter `stack_method`, which is applied to each estimator. This parameter is either a string naming an estimator method, or `'auto'`, which automatically picks the first available method in the order of preference: `predict_proba`, `decision_function` and `predict`.
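For example, with `stack_method='auto'` (the default) a hedged sketch might look like the following: the forest contributes class probabilities, while `LinearSVC`, which has no `predict_proba`, falls back to `decision_function`:
```
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)

clf = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
                ('svc', LinearSVC(random_state=42, max_iter=10000))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method='auto',   # predict_proba for the forest, decision_function for LinearSVC
)
clf.fit(X, y)
```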
A [`StackingRegressor`](generated/sklearn.ensemble.stackingregressor#sklearn.ensemble.StackingRegressor "sklearn.ensemble.StackingRegressor") and [`StackingClassifier`](generated/sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier") can be used as any other regressor or classifier, exposing `predict`, `predict_proba`, and `decision_function` methods, e.g.:
```
>>> y_pred = reg.predict(X_test)
>>> from sklearn.metrics import r2_score
>>> print('R2 score: {:.2f}'.format(r2_score(y_test, y_pred)))
R2 score: 0.53
```
Note that it is also possible to get the output of the stacked `estimators` using the `transform` method:
```
>>> reg.transform(X_test[:5])
array([[142..., 138..., 146...],
[179..., 182..., 151...],
[139..., 132..., 158...],
[286..., 292..., 225...],
[126..., 124..., 164...]])
```
In practice, a stacking predictor predicts as well as the best predictor of the base layer and sometimes even outperforms it by combining the different strengths of these predictors. However, training a stacking predictor is computationally expensive.
Note
For [`StackingClassifier`](generated/sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier"), when using `stack_method_='predict_proba'`, the first column is dropped when the problem is a binary classification problem. Indeed, both probability columns predicted by each estimator are perfectly collinear.
Note
Multiple stacking layers can be achieved by assigning `final_estimator` to a [`StackingClassifier`](generated/sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier") or [`StackingRegressor`](generated/sklearn.ensemble.stackingregressor#sklearn.ensemble.StackingRegressor "sklearn.ensemble.StackingRegressor"):
```
>>> final_layer_rfr = RandomForestRegressor(
... n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42)
>>> final_layer_gbr = GradientBoostingRegressor(
... n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42)
>>> final_layer = StackingRegressor(
... estimators=[('rf', final_layer_rfr),
... ('gbrt', final_layer_gbr)],
... final_estimator=RidgeCV()
... )
>>> multi_layer_regressor = StackingRegressor(
... estimators=[('ridge', RidgeCV()),
... ('lasso', LassoCV(random_state=42)),
... ('knr', KNeighborsRegressor(n_neighbors=20,
... metric='euclidean'))],
... final_estimator=final_layer
... )
>>> multi_layer_regressor.fit(X_train, y_train)
StackingRegressor(...)
>>> print('R2 score: {:.2f}'
... .format(multi_layer_regressor.score(X_test, y_test)))
R2 score: 0.53
```
scikit_learn 2.1. Gaussian mixture models 2.1. Gaussian mixture models
============================
`sklearn.mixture` is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided.
**Two-component Gaussian mixture model:** *data points, and equi-probability surfaces of the model.*
A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.
Scikit-learn implements different classes to estimate Gaussian mixture models that correspond to different estimation strategies, detailed below.
2.1.1. Gaussian Mixture
------------------------
The [`GaussianMixture`](generated/sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture "sklearn.mixture.GaussianMixture") object implements the [expectation-maximization](#expectation-maximization) (EM) algorithm for fitting mixture-of-Gaussian models. It can also draw confidence ellipsoids for multivariate models, and compute the Bayesian Information Criterion to assess the number of clusters in the data. A [`GaussianMixture.fit`](generated/sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.fit "sklearn.mixture.GaussianMixture.fit") method is provided that learns a Gaussian Mixture Model from training data. Given test data, it can assign to each sample the Gaussian it most probably belongs to using the [`GaussianMixture.predict`](generated/sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.predict "sklearn.mixture.GaussianMixture.predict") method.
The [`GaussianMixture`](generated/sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture "sklearn.mixture.GaussianMixture") comes with different options to constrain the covariance of the different classes estimated: spherical, diagonal, tied or full covariance.
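A minimal sketch of this workflow on synthetic data (the covariance type is one of `'spherical'`, `'diag'`, `'tied'` or `'full'`):
```
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(5, 1, size=(100, 2))])

gm = GaussianMixture(n_components=2, covariance_type='full', random_state=0).fit(X)
labels = gm.predict(X)        # hard assignment to the most probable Gaussian
resp = gm.predict_proba(X)    # per-component posterior probabilities
```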
###
2.1.1.1. Pros and cons of class [`GaussianMixture`](generated/sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture "sklearn.mixture.GaussianMixture")
####
2.1.1.1.1. Pros
Speed:
It is the fastest algorithm for learning mixture models.
Agnostic:
As this algorithm maximizes only the likelihood, it will not bias the means towards zero, or bias the cluster sizes to have specific structures that might or might not apply.
####
2.1.1.1.2. Cons
Singularities:
When one has insufficiently many points per mixture, estimating the covariance matrices becomes difficult, and the algorithm is known to diverge and find solutions with infinite likelihood unless one regularizes the covariances artificially.
Number of components:
This algorithm will always use all the components it has access to, needing held-out data or information theoretical criteria to decide how many components to use in the absence of external cues.
###
2.1.1.2. Selecting the number of components in a classical Gaussian Mixture Model
The BIC criterion can be used to select the number of components in a Gaussian Mixture in an efficient way. In theory, it recovers the true number of components only in the asymptotic regime (i.e. if much data is available and assuming that the data was actually generated i.i.d. from a mixture of Gaussian distribution). Note that using a [Variational Bayesian Gaussian mixture](#bgmm) avoids the specification of the number of components for a Gaussian mixture model.
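A minimal sketch of such a selection loop on synthetic data: fit a model per candidate number of components and keep the one with the lowest BIC:
```
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(6, 1, size=(200, 2))])

models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 6)]
bics = [m.bic(X) for m in models]
best = models[int(np.argmin(bics))]
print(best.n_components)      # expected to recover 2 on this toy data
```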
###
2.1.1.3. Estimation algorithm: expectation-maximization
The main difficulty in learning Gaussian mixture models from unlabeled data is that one usually doesn’t know which points came from which latent component (if one has access to this information it gets very easy to fit a separate Gaussian distribution to each set of points). [Expectation-maximization](https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm) is a well-founded statistical algorithm to get around this problem by an iterative process. First one assumes random components (randomly centered on data points, learned from k-means, or even just normally distributed around the origin) and computes for each point a probability of being generated by each component of the model. Then, one tweaks the parameters to maximize the likelihood of the data given those assignments. Repeating this process is guaranteed to always converge to a local optimum.
###
2.1.1.4. Choice of the Initialization Method
There is a choice of four initialization methods (as well as inputting user defined initial means) to generate the initial centers for the model components:
k-means (default)
This applies a traditional k-means clustering algorithm. This can be computationally expensive compared to other initialization methods.
k-means++
This uses the initialization method of k-means clustering: k-means++. This will pick the first center at random from the data. Subsequent centers will be chosen from a weighted distribution of the data favouring points further away from existing centers. k-means++ is the default initialization for k-means so will be quicker than running a full k-means but can still take a significant amount of time for large data sets with many components.
random\_from\_data
This will pick random data points from the input data as the initial centers. This is a very fast method of initialization but can produce non-convergent results if the chosen points are too close to each other.
random
Centers are chosen as a small perturbation away from the mean of all data. This method is simple but can lead to the model taking longer to converge.
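A minimal sketch of choosing one of the methods listed above (assuming the `init_params` parameter accepts the method names with underscores, e.g. `'random_from_data'`):
```
from sklearn.mixture import GaussianMixture

# pick the initial centers directly from the data instead of running k-means
gm = GaussianMixture(n_components=3, init_params='random_from_data', random_state=0)
```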
2.1.2. Variational Bayesian Gaussian Mixture
---------------------------------------------
The [`BayesianGaussianMixture`](generated/sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture "sklearn.mixture.BayesianGaussianMixture") object implements a variant of the Gaussian mixture model with variational inference algorithms. The API is similar to the one defined by [`GaussianMixture`](generated/sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture "sklearn.mixture.GaussianMixture").
###
2.1.2.1. Estimation algorithm: variational inference
Variational inference is an extension of expectation-maximization that maximizes a lower bound on model evidence (including priors) instead of data likelihood. The principle behind variational methods is the same as expectation-maximization (that is, both are iterative algorithms that alternate between finding the probabilities for each point to be generated by each mixture and fitting the mixture to these assigned points), but variational methods add regularization by integrating information from prior distributions. This avoids the singularities often found in expectation-maximization solutions but introduces some subtle biases to the model. Inference is often notably slower, but not usually so much as to render usage impractical.
Due to its Bayesian nature, the variational algorithm needs more hyperparameters than expectation-maximization, the most important of these being the concentration parameter `weight_concentration_prior`. Specifying a low value for the concentration prior will make the model put most of the weight on a few components and set the remaining components’ weights very close to zero. High values of the concentration prior will allow a larger number of components to be active in the mixture.
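A small sketch on synthetic data illustrating that behaviour: with a low concentration prior and a generous `n_components`, most of the weight mass ends up on a couple of components and the rest are driven towards zero:
```
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(6, 1, size=(200, 2))])

bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior=1e-2,
    random_state=0,
).fit(X)
print(np.round(bgm.weights_, 2))   # most weights close to zero, ~2 active components
```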
The implementation of the [`BayesianGaussianMixture`](generated/sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture "sklearn.mixture.BayesianGaussianMixture") class proposes two types of prior for the weights distribution: a finite mixture model with a Dirichlet distribution and an infinite mixture model with the Dirichlet Process. In practice the Dirichlet Process inference algorithm is approximated and uses a truncated distribution with a fixed maximum number of components (called the stick-breaking representation). The number of components actually used almost always depends on the data.
The next figure compares the results obtained for the different types of weight concentration prior (parameter `weight_concentration_prior_type`) for different values of `weight_concentration_prior`. Here, we can see the value of the `weight_concentration_prior` parameter has a strong impact on the effective number of active components obtained. We can also notice that large values for the concentration weight prior lead to more uniform weights when the type of prior is ‘dirichlet\_distribution’ while this is not necessarily the case for the ‘dirichlet\_process’ type (used by default).
The examples below compare Gaussian mixture models with a fixed number of components, to the variational Gaussian mixture models with a Dirichlet process prior. Here, a classical Gaussian mixture is fitted with 5 components on a dataset composed of 2 clusters. We can see that the variational Gaussian mixture with a Dirichlet process prior is able to limit itself to only 2 components whereas the Gaussian mixture fits the data with a fixed number of components that has to be set a priori by the user. In this case the user has selected `n_components=5` which does not match the true generative distribution of this toy dataset. Note that with very little observations, the variational Gaussian mixture models with a Dirichlet process prior can take a conservative stand, and fit only one component.
On the following figure we are fitting a dataset not well-depicted by a Gaussian mixture. Adjusting the `weight_concentration_prior`, parameter of the [`BayesianGaussianMixture`](generated/sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture "sklearn.mixture.BayesianGaussianMixture") controls the number of components used to fit this data. We also present on the last two plots a random sampling generated from the two resulting mixtures.
###
2.1.2.2. Pros and cons of variational inference with [`BayesianGaussianMixture`](generated/sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture "sklearn.mixture.BayesianGaussianMixture")
####
2.1.2.2.1. Pros
Automatic selection:
when `weight_concentration_prior` is small enough and `n_components` is larger than what is found necessary by the model, the Variational Bayesian mixture model has a natural tendency to set some mixture weights values close to zero. This makes it possible to let the model choose a suitable number of effective components automatically. Only an upper bound of this number needs to be provided. Note however that the “ideal” number of active components is very application specific and is typically ill-defined in a data exploration setting.
Less sensitivity to the number of parameters:
unlike finite models, which will almost always use all components as much as they can, and hence will produce wildly different solutions for different numbers of components, the variational inference with a Dirichlet process prior (`weight_concentration_prior_type='dirichlet_process'`) won’t change much with changes to the parameters, leading to more stability and less tuning.
Regularization:
due to the incorporation of prior information, variational solutions have less pathological special cases than expectation-maximization solutions.
####
2.1.2.2.2. Cons
Speed:
the extra parametrization necessary for variational inference makes inference slower, although not by much.
Hyperparameters:
this algorithm needs an extra hyperparameter that might need experimental tuning via cross-validation.
Bias:
there are many implicit biases in the inference algorithms (and also in the Dirichlet process if used), and whenever there is a mismatch between these biases and the data it might be possible to fit better models using a finite mixture.
###
2.1.2.3. The Dirichlet Process
Here we describe variational inference algorithms on Dirichlet process mixture. The Dirichlet process is a prior probability distribution on *clusterings with an infinite, unbounded, number of partitions*. Variational techniques let us incorporate this prior structure on Gaussian mixture models at almost no penalty in inference time, comparing with a finite Gaussian mixture model.
An important question is how the Dirichlet process can use an infinite, unbounded number of clusters and still be consistent. While a full explanation doesn’t fit this manual, one can think of its [stick breaking process](https://en.wikipedia.org/wiki/Dirichlet_process#The_stick-breaking_process) analogy to help understanding it. The stick breaking process is a generative story for the Dirichlet process. We start with a unit-length stick and in each step we break off a portion of the remaining stick. Each time, we associate the length of the piece of the stick to the proportion of points that fall into a group of the mixture. At the end, to represent the infinite mixture, we associate the last remaining piece of the stick to the proportion of points that don’t fall into any of the other groups. The length of each piece is a random variable with probability proportional to the concentration parameter. Smaller values of the concentration will divide the unit length into larger pieces of the stick (defining a more concentrated distribution). Larger concentration values will create smaller pieces of the stick (increasing the number of components with non-zero weights).
Variational inference techniques for the Dirichlet process still work with a finite approximation to this infinite mixture model, but instead of having to specify a priori how many components one wants to use, one just specifies the concentration parameter and an upper bound on the number of mixture components (this upper bound, assuming it is higher than the “true” number of components, affects only algorithmic complexity, not the actual number of components used).
scikit_learn 1.4. Support Vector Machines 1.4. Support Vector Machines
============================
**Support vector machines (SVMs)** are a set of supervised learning methods used for [classification](#svm-classification), [regression](#svm-regression) and [outliers detection](#svm-outlier-detection).
The advantages of support vector machines are:
* Effective in high dimensional spaces.
* Still effective in cases where the number of dimensions is greater than the number of samples.
* Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
* Versatile: different [Kernel functions](#svm-kernels) can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.
The disadvantages of support vector machines include:
* If the number of features is much greater than the number of samples, avoiding over-fitting when choosing [Kernel functions](#svm-kernels) and the regularization term is crucial.
* SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see [Scores and probabilities](#scores-probabilities), below).
The support vector machines in scikit-learn support both dense (`numpy.ndarray` and convertible to that by `numpy.asarray`) and sparse (any `scipy.sparse`) sample vectors as input. However, to use an SVM to make predictions for sparse data, it must have been fit on such data. For optimal performance, use C-ordered `numpy.ndarray` (dense) or `scipy.sparse.csr_matrix` (sparse) with `dtype=float64`.
1.4.1. Classification
----------------------
[`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") and [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") are classes capable of performing binary and multi-class classification on a dataset.
[`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") are similar methods, but accept slightly different sets of parameters and have different mathematical formulations (see section [Mathematical formulation](#svm-mathematical-formulation)). On the other hand, [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") is another (faster) implementation of Support Vector Classification for the case of a linear kernel. Note that [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") does not accept parameter `kernel`, as this is assumed to be linear. It also lacks some of the attributes of [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC"), like `support_`.
As other classifiers, [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") and [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") take as input two arrays: an array `X` of shape `(n_samples, n_features)` holding the training samples, and an array `y` of class labels (strings or integers), of shape `(n_samples)`:
```
>>> from sklearn import svm
>>> X = [[0, 0], [1, 1]]
>>> y = [0, 1]
>>> clf = svm.SVC()
>>> clf.fit(X, y)
SVC()
```
After being fitted, the model can then be used to predict new values:
```
>>> clf.predict([[2., 2.]])
array([1])
```
SVMs decision function (detailed in the [Mathematical formulation](#svm-mathematical-formulation)) depends on some subset of the training data, called the support vectors. Some properties of these support vectors can be found in attributes `support_vectors_`, `support_` and `n_support_`:
```
>>> # get support vectors
>>> clf.support_vectors_
array([[0., 0.],
[1., 1.]])
>>> # get indices of support vectors
>>> clf.support_
array([0, 1]...)
>>> # get number of support vectors for each class
>>> clf.n_support_
array([1, 1]...)
```
###
1.4.1.1. Multi-class classification
[`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") implement the “one-versus-one” approach for multi-class classification. In total, `n_classes * (n_classes - 1) / 2` classifiers are constructed and each one is trained on data from two classes. To provide a consistent interface with other classifiers, the `decision_function_shape` option allows monotonically transforming the results of the “one-versus-one” classifiers to a “one-vs-rest” decision function of shape `(n_samples, n_classes)`.
```
>>> X = [[0], [1], [2], [3]]
>>> Y = [0, 1, 2, 3]
>>> clf = svm.SVC(decision_function_shape='ovo')
>>> clf.fit(X, Y)
SVC(decision_function_shape='ovo')
>>> dec = clf.decision_function([[1]])
>>> dec.shape[1] # 4 classes: 4*3/2 = 6
6
>>> clf.decision_function_shape = "ovr"
>>> dec = clf.decision_function([[1]])
>>> dec.shape[1] # 4 classes
4
```
On the other hand, [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") implements the “one-vs-the-rest” multi-class strategy, thus training `n_classes` models.
```
>>> lin_clf = svm.LinearSVC()
>>> lin_clf.fit(X, Y)
LinearSVC()
>>> dec = lin_clf.decision_function([[1]])
>>> dec.shape[1]
4
```
See [Mathematical formulation](#svm-mathematical-formulation) for a complete description of the decision function.
Note that the [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") also implements an alternative multi-class strategy, the so-called multi-class SVM formulated by Crammer and Singer [[16]](#id18), by using the option `multi_class='crammer_singer'`. In practice, one-vs-rest classification is usually preferred, since the results are mostly similar, but the runtime is significantly less.
For “one-vs-rest” [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") the attributes `coef_` and `intercept_` have the shape `(n_classes, n_features)` and `(n_classes,)` respectively. Each row of the coefficients corresponds to one of the `n_classes` “one-vs-rest” classifiers and similar for the intercepts, in the order of the “one” class.
In the case of “one-vs-one” [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC"), the layout of the attributes is a little more involved. In the case of a linear kernel, the attributes `coef_` and `intercept_` have the shape `(n_classes * (n_classes - 1) / 2, n_features)` and `(n_classes * (n_classes - 1) / 2)` respectively. This is similar to the layout for [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") described above, with each row now corresponding to a binary classifier. The order for classes 0 to n is “0 vs 1”, “0 vs 2”, … “0 vs n”, “1 vs 2”, “1 vs 3”, … “1 vs n”, … “n-1 vs n”.
The shape of `dual_coef_` is `(n_classes-1, n_SV)` with a somewhat hard to grasp layout. The columns correspond to the support vectors involved in any of the `n_classes * (n_classes - 1) / 2` “one-vs-one” classifiers. Each support vector `v` has a dual coefficient in each of the `n_classes - 1` classifiers comparing the class of `v` against another class. Note that some, but not all, of these dual coefficients, may be zero. The `n_classes - 1` entries in each column are these dual coefficients, ordered by the opposing class.
This might be clearer with an example: consider a three class problem with class 0 having three support vectors \(v^{0}\_0, v^{1}\_0, v^{2}\_0\) and class 1 and 2 having two support vectors \(v^{0}\_1, v^{1}\_1\) and \(v^{0}\_2, v^{1}\_2\) respectively. For each support vector \(v^{j}\_i\), there are two dual coefficients. Let’s call the coefficient of support vector \(v^{j}\_i\) in the classifier between classes \(i\) and \(k\) \(\alpha^{j}\_{i,k}\). Then `dual_coef_` looks like this:
| Coefficients for SVs of class 0 | Coefficients for SVs of class 0 | Coefficients for SVs of class 0 | Coefficients for SVs of class 1 | Coefficients for SVs of class 1 | Coefficients for SVs of class 2 | Coefficients for SVs of class 2 |
| --- | --- | --- | --- | --- | --- | --- |
| \(\alpha^{0}\_{0,1}\) | \(\alpha^{1}\_{0,1}\) | \(\alpha^{2}\_{0,1}\) | \(\alpha^{0}\_{1,0}\) | \(\alpha^{1}\_{1,0}\) | \(\alpha^{0}\_{2,0}\) | \(\alpha^{1}\_{2,0}\) |
| \(\alpha^{0}\_{0,2}\) | \(\alpha^{1}\_{0,2}\) | \(\alpha^{2}\_{0,2}\) | \(\alpha^{0}\_{1,2}\) | \(\alpha^{1}\_{1,2}\) | \(\alpha^{0}\_{2,1}\) | \(\alpha^{1}\_{2,1}\) |
###
1.4.1.2. Scores and probabilities
The `decision_function` method of [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") gives per-class scores for each sample (or a single score per sample in the binary case). When the constructor option `probability` is set to `True`, class membership probability estimates (from the methods `predict_proba` and `predict_log_proba`) are enabled. In the binary case, the probabilities are calibrated using Platt scaling [[9]](#id11): logistic regression on the SVM’s scores, fit by an additional cross-validation on the training data. In the multiclass case, this is extended as per [[10]](#id12).
Note
The same probability calibration procedure is available for all estimators via the [`CalibratedClassifierCV`](generated/sklearn.calibration.calibratedclassifiercv#sklearn.calibration.CalibratedClassifierCV "sklearn.calibration.CalibratedClassifierCV") (see [Probability calibration](calibration#calibration)). In the case of [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC"), this procedure is built into [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/), which is used under the hood, so it does not rely on scikit-learn’s [`CalibratedClassifierCV`](generated/sklearn.calibration.calibratedclassifiercv#sklearn.calibration.CalibratedClassifierCV "sklearn.calibration.CalibratedClassifierCV").
The cross-validation involved in Platt scaling is an expensive operation for large datasets. In addition, the probability estimates may be inconsistent with the scores:
* the “argmax” of the scores may not be the argmax of the probabilities
* in binary classification, a sample may be labeled by `predict` as belonging to the positive class even if the output of `predict_proba` is less than 0.5; and similarly, it could be labeled as negative even if the output of `predict_proba` is more than 0.5.
Platt’s method is also known to have theoretical issues. If confidence scores are required, but these do not have to be probabilities, then it is advisable to set `probability=False` and use `decision_function` instead of `predict_proba`.
Please note that when `decision_function_shape='ovr'` and `n_classes > 2`, unlike `decision_function`, the `predict` method does not try to break ties by default. You can set `break_ties=True` for the output of `predict` to be the same as `np.argmax(clf.decision_function(...), axis=1)`; otherwise the first class among the tied classes will always be returned. Keep in mind that tie breaking comes with a computational cost. See [SVM Tie Breaking Example](../auto_examples/svm/plot_svm_tie_breaking#sphx-glr-auto-examples-svm-plot-svm-tie-breaking-py) for an example on tie breaking.
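A short sketch of that option on iris-style data; per the behaviour described above, with `break_ties=True` the output of `predict` matches the argmax of `decision_function`:
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(decision_function_shape='ovr', break_ties=True).fit(X, y)

# identical to predict when break_ties=True (classes are 0, 1, 2 here)
assert np.array_equal(clf.predict(X), np.argmax(clf.decision_function(X), axis=1))
```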
###
1.4.1.3. Unbalanced problems
In problems where it is desired to give more importance to certain classes or certain individual samples, the parameters `class_weight` and `sample_weight` can be used.
[`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") (but not [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC")) implements the parameter `class_weight` in the `fit` method. It’s a dictionary of the form `{class_label : value}`, where value is a floating point number > 0 that sets the parameter `C` of class `class_label` to `C * value`. The figure below illustrates the decision boundary of an unbalanced problem, with and without weight correction.
[`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC"), [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"), [`NuSVR`](generated/sklearn.svm.nusvr#sklearn.svm.NuSVR "sklearn.svm.NuSVR"), [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"), [`LinearSVR`](generated/sklearn.svm.linearsvr#sklearn.svm.LinearSVR "sklearn.svm.LinearSVR") and [`OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") implement also weights for individual samples in the `fit` method through the `sample_weight` parameter. Similar to `class_weight`, this sets the parameter `C` for the i-th example to `C * sample_weight[i]`, which will encourage the classifier to get these samples right. The figure below illustrates the effect of sample weighting on the decision boundary. The size of the circles is proportional to the sample weights:
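A minimal sketch of both weighting mechanisms (the weight values below are arbitrary illustrations, not recommendations):
```
from sklearn import svm

X = [[0, 0], [1, 1], [1, 0], [0, 1]]
y = [0, 0, 1, 1]

# give class 1 ten times the penalty C of class 0
wclf = svm.SVC(class_weight={1: 10}).fit(X, y)

# or weight individual samples instead
sclf = svm.SVC().fit(X, y, sample_weight=[1, 1, 10, 1])
```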
1.4.2. Regression
------------------
The method of Support Vector Classification can be extended to solve regression problems. This method is called Support Vector Regression.
The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by Support Vector Regression depends only on a subset of the training data, because the cost function ignores samples whose prediction is close to their target.
There are three different implementations of Support Vector Regression: [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"), [`NuSVR`](generated/sklearn.svm.nusvr#sklearn.svm.NuSVR "sklearn.svm.NuSVR") and [`LinearSVR`](generated/sklearn.svm.linearsvr#sklearn.svm.LinearSVR "sklearn.svm.LinearSVR"). [`LinearSVR`](generated/sklearn.svm.linearsvr#sklearn.svm.LinearSVR "sklearn.svm.LinearSVR") provides a faster implementation than [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") but only considers the linear kernel, while [`NuSVR`](generated/sklearn.svm.nusvr#sklearn.svm.NuSVR "sklearn.svm.NuSVR") implements a slightly different formulation than [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") and [`LinearSVR`](generated/sklearn.svm.linearsvr#sklearn.svm.LinearSVR "sklearn.svm.LinearSVR"). See [Implementation details](#svm-implementation-details) for further details.
As with the classification classes, the fit method takes the vectors X, y as arguments, except that in this case y is expected to have floating point values instead of integer values:
```
>>> from sklearn import svm
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> regr = svm.SVR()
>>> regr.fit(X, y)
SVR()
>>> regr.predict([[1, 1]])
array([1.5])
```
1.4.3. Density estimation, novelty detection
---------------------------------------------
The class [`OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") implements a One-Class SVM which is used in outlier detection.
See [Novelty and Outlier Detection](outlier_detection#outlier-detection) for the description and usage of OneClassSVM.
1.4.4. Complexity
------------------
Support Vector Machines are powerful tools, but their compute and storage requirements increase rapidly with the number of training vectors. The core of an SVM is a quadratic programming problem (QP), separating support vectors from the rest of the training data. The QP solver used by the [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/)-based implementation scales between \(O(n\_{features} \times n\_{samples}^2)\) and \(O(n\_{features} \times n\_{samples}^3)\) depending on how efficiently the [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) cache is used in practice (dataset dependent). If the data is very sparse \(n\_{features}\) should be replaced by the average number of non-zero features in a sample vector.
For the linear case, the algorithm used in [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") by the [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) implementation is much more efficient than its [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/)-based [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") counterpart and can scale almost linearly to millions of samples and/or features.
1.4.5. Tips on Practical Use
-----------------------------
* **Avoiding data copy**: For [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"), [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") and [`NuSVR`](generated/sklearn.svm.nusvr#sklearn.svm.NuSVR "sklearn.svm.NuSVR"), if the data passed to certain methods is not C-ordered contiguous and double precision, it will be copied before calling the underlying C implementation. You can check whether a given numpy array is C-contiguous by inspecting its `flags` attribute.
For [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") (and [`LogisticRegression`](generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression")) any input passed as a numpy array will be copied and converted to the [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) internal sparse data representation (double precision floats and int32 indices of non-zero components). If you want to fit a large-scale linear classifier without copying a dense numpy C-contiguous double precision array as input, we suggest to use the [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") class instead. The objective function can be configured to be almost the same as the [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") model.
* **Kernel cache size**: For [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"), [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") and [`NuSVR`](generated/sklearn.svm.nusvr#sklearn.svm.NuSVR "sklearn.svm.NuSVR"), the size of the kernel cache has a strong impact on run times for larger problems. If you have enough RAM available, it is recommended to set `cache_size` to a higher value than the default of 200(MB), such as 500(MB) or 1000(MB).
* **Setting C**: `C` is `1` by default and it’s a reasonable default choice. If you have a lot of noisy observations you should decrease it: decreasing C corresponds to more regularization.
[`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") and [`LinearSVR`](generated/sklearn.svm.linearsvr#sklearn.svm.LinearSVR "sklearn.svm.LinearSVR") are less sensitive to `C` when it becomes large, and prediction results stop improving after a certain threshold. Meanwhile, larger `C` values will take more time to train, sometimes up to 10 times longer, as shown in [[11]](#id13).
* Support Vector Machine algorithms are not scale invariant, so **it is highly recommended to scale your data**. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. Note that the *same* scaling must be applied to the test vector to obtain meaningful results. This can be done easily by using a [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline"):
```
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import SVC
>>> clf = make_pipeline(StandardScaler(), SVC())
```
See section [Preprocessing data](preprocessing#preprocessing) for more details on scaling and normalization.
* Regarding the `shrinking` parameter, quoting [[12]](#id14): *We found that if the number of iterations is large, then shrinking can shorten the training time. However, if we loosely solve the optimization problem (e.g., by using a large stopping tolerance), the code without using shrinking may be much faster*
* Parameter `nu` in [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC")/[`OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM")/[`NuSVR`](generated/sklearn.svm.nusvr#sklearn.svm.NuSVR "sklearn.svm.NuSVR") approximates the fraction of training errors and support vectors.
* In [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC"), if the data is unbalanced (e.g. many positive and few negative), set `class_weight='balanced'` and/or try different penalty parameters `C`.
* **Randomness of the underlying implementations**: The underlying implementations of [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC") use a random number generator only to shuffle the data for probability estimation (when `probability` is set to `True`). This randomness can be controlled with the `random_state` parameter. If `probability` is set to `False` these estimators are not random and `random_state` has no effect on the results. The underlying [`OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") implementation is similar to the ones of [`SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC") and [`NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC"). As no probability estimation is provided for [`OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM"), it is not random.
The underlying [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") implementation uses a random number generator to select features when fitting the model with a dual coordinate descent (i.e when `dual` is set to `True`). It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller `tol` parameter. This randomness can also be controlled with the `random_state` parameter. When `dual` is set to `False` the underlying implementation of [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") is not random and `random_state` has no effect on the results.
* Using L1 penalization as provided by `LinearSVC(penalty='l1',
dual=False)` yields a sparse solution, i.e. only a subset of the feature weights is different from zero and contributes to the decision function. Increasing `C` yields a more complex model (more features are selected). The `C` value that yields a “null” model (all weights equal to zero) can be calculated using [`l1_min_c`](generated/sklearn.svm.l1_min_c#sklearn.svm.l1_min_c "sklearn.svm.l1_min_c"), as sketched below.
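A minimal sketch of this behavior; the synthetic dataset and the grid of `C` values are illustrative assumptions:
```
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC, l1_min_c
import numpy as np

X, y = make_classification(n_samples=100, n_features=20, random_state=0)

# Smallest C for which the L1-penalized model is not "null" (all-zero weights).
c_min = l1_min_c(X, y, loss="squared_hinge")

for C in c_min * np.logspace(0, 2, 4):
    clf = LinearSVC(penalty="l1", dual=False, C=C, max_iter=10000).fit(X, y)
    n_selected = int(np.sum(np.abs(clf.coef_) > 1e-10))
    print(f"C={C:.4f}: {n_selected} features selected")
```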
1.4.6. Kernel functions
------------------------
The *kernel function* can be any of the following:
* linear: \(\langle x, x'\rangle\).
* polynomial: \((\gamma \langle x, x'\rangle + r)^d\), where \(d\) is specified by parameter `degree`, \(r\) by `coef0`.
* rbf: \(\exp(-\gamma \|x-x'\|^2)\), where \(\gamma\) is specified by parameter `gamma` and must be greater than 0.
* sigmoid: \(\tanh(\gamma \langle x,x'\rangle + r)\), where \(r\) is specified by `coef0`.
Different kernels are specified by the `kernel` parameter:
```
>>> linear_svc = svm.SVC(kernel='linear')
>>> linear_svc.kernel
'linear'
>>> rbf_svc = svm.SVC(kernel='rbf')
>>> rbf_svc.kernel
'rbf'
```
See also [Kernel Approximation](kernel_approximation#kernel-approximation) for a much faster and more scalable way to use RBF kernels.
###
1.4.6.1. Parameters of the RBF Kernel
When training an SVM with the *Radial Basis Function* (RBF) kernel, two parameters must be considered: `C` and `gamma`. The parameter `C`, common to all SVM kernels, trades off misclassification of training examples against simplicity of the decision surface. A low `C` makes the decision surface smooth, while a high `C` aims at classifying all training examples correctly. `gamma` defines how much influence a single training example has. The larger `gamma` is, the closer other examples must be to be affected.
Proper choice of `C` and `gamma` is critical to the SVM’s performance. One is advised to use [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") with `C` and `gamma` spaced exponentially far apart to choose good values.
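A minimal sketch of such a search, assuming synthetic data and an illustrative choice of grid bounds:
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Exponentially spaced grids for C and gamma.
param_grid = {
    "svc__C": np.logspace(-2, 3, 6),
    "svc__gamma": np.logspace(-4, 1, 6),
}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC()), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```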
###
1.4.6.2. Custom Kernels
You can define your own kernels by either giving the kernel as a python function or by precomputing the Gram matrix.
Classifiers with custom kernels behave the same way as any other classifiers, except that:
* Field `support_vectors_` is now empty; only indices of support vectors are stored in `support_`
* A reference (and not a copy) of the first argument in the `fit()` method is stored for future reference. If that array changes between the use of `fit()` and `predict()` you will have unexpected results.
####
1.4.6.2.1. Using Python functions as kernels
You can use your own defined kernels by passing a function to the `kernel` parameter.
Your kernel must take as arguments two matrices of shape `(n_samples_1, n_features)`, `(n_samples_2, n_features)` and return a kernel matrix of shape `(n_samples_1, n_samples_2)`.
The following code defines a linear kernel and creates a classifier instance that will use that kernel:
```
>>> import numpy as np
>>> from sklearn import svm
>>> def my_kernel(X, Y):
... return np.dot(X, Y.T)
...
>>> clf = svm.SVC(kernel=my_kernel)
```
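The classifier above can then be fitted and used like any other; the small, linearly separable toy data below is an assumption for illustration:
```
import numpy as np
from sklearn import svm

def my_kernel(X, Y):
    return np.dot(X, Y.T)

# Small, linearly separable toy data (illustrative values).
X = np.array([[0., 0.], [0., 1.], [2., 2.], [2., 3.]])
y = np.array([0, 0, 1, 1])

clf = svm.SVC(kernel=my_kernel).fit(X, y)
print(clf.predict(np.array([[3., 3.]])))  # expected to predict class 1
```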
####
1.4.6.2.2. Using the Gram matrix
You can pass pre-computed kernels by using the `kernel='precomputed'` option. You should then pass the Gram matrix instead of X to the `fit` and `predict` methods. The kernel values between *all* training vectors and the test vectors must be provided:
```
>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn import svm
>>> X, y = make_classification(n_samples=10, random_state=0)
>>> X_train , X_test , y_train, y_test = train_test_split(X, y, random_state=0)
>>> clf = svm.SVC(kernel='precomputed')
>>> # linear kernel computation
>>> gram_train = np.dot(X_train, X_train.T)
>>> clf.fit(gram_train, y_train)
SVC(kernel='precomputed')
>>> # predict on training examples
>>> gram_test = np.dot(X_test, X_train.T)
>>> clf.predict(gram_test)
array([0, 1, 0])
```
1.4.7. Mathematical formulation
--------------------------------
A support vector machine constructs a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure below shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”:
In general, when the problem isn’t linearly separable, the support vectors are the samples *within* the margin boundaries.
We recommend [[13]](#id15) and [[14]](#id16) as good references for the theory and practicalities of SVMs.
###
1.4.7.1. SVC
Given training vectors \(x\_i \in \mathbb{R}^p\), i=1,…, n, in two classes, and a vector \(y \in \{1, -1\}^n\), our goal is to find \(w \in \mathbb{R}^p\) and \(b \in \mathbb{R}\) such that the prediction given by \(\text{sign} (w^T\phi(x) + b)\) is correct for most samples.
SVC solves the following primal problem:
\[ \begin{align}\begin{aligned}\min\_ {w, b, \zeta} \frac{1}{2} w^T w + C \sum\_{i=1}^{n} \zeta\_i\\\begin{split}\textrm {subject to } & y\_i (w^T \phi (x\_i) + b) \geq 1 - \zeta\_i,\\ & \zeta\_i \geq 0, i=1, ..., n\end{split}\end{aligned}\end{align} \] Intuitively, we’re trying to maximize the margin (by minimizing \(||w||^2 = w^Tw\)), while incurring a penalty when a sample is misclassified or within the margin boundary. Ideally, the value \(y\_i (w^T \phi (x\_i) + b)\) would be \(\geq 1\) for all samples, which indicates a perfect prediction. But problems are usually not perfectly separable with a hyperplane, so we allow some samples to be at a distance \(\zeta\_i\) from their correct margin boundary. The penalty term `C` controls the strength of this penalty, and as a result, acts as an inverse regularization parameter (see note below).
The dual problem to the primal is
\[ \begin{align}\begin{aligned}\min\_{\alpha} \frac{1}{2} \alpha^T Q \alpha - e^T \alpha\\\begin{split} \textrm {subject to } & y^T \alpha = 0\\ & 0 \leq \alpha\_i \leq C, i=1, ..., n\end{split}\end{aligned}\end{align} \] where \(e\) is the vector of all ones, and \(Q\) is an \(n\) by \(n\) positive semidefinite matrix, \(Q\_{ij} \equiv y\_i y\_j K(x\_i, x\_j)\), where \(K(x\_i, x\_j) = \phi (x\_i)^T \phi (x\_j)\) is the kernel. The terms \(\alpha\_i\) are called the dual coefficients, and they are upper-bounded by \(C\). This dual representation highlights the fact that training vectors are implicitly mapped into a higher (maybe infinite) dimensional space by the function \(\phi\): see [kernel trick](https://en.wikipedia.org/wiki/Kernel_method).
Once the optimization problem is solved, the output of [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function) for a given sample \(x\) becomes:
\[\sum\_{i\in SV} y\_i \alpha\_i K(x\_i, x) + b,\] and the predicted class corresponds to its sign. We only need to sum over the support vectors (i.e. the samples that lie within the margin) because the dual coefficients \(\alpha\_i\) are zero for the other samples.
These parameters can be accessed through the attributes `dual_coef_` which holds the product \(y\_i \alpha\_i\), `support_vectors_` which holds the support vectors, and `intercept_` which holds the independent term \(b\); see the sketch below.
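As a sketch of how these attributes relate to [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function); the synthetic data and the choice of a linear kernel are assumptions for simplicity:
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=50, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

x_new = X[:3]
# dual_coef_ holds y_i * alpha_i for the support vectors; for the linear kernel
# K(x_i, x) is just a dot product with the support vectors.
manual = clf.dual_coef_ @ clf.support_vectors_ @ x_new.T + clf.intercept_
print(np.allclose(manual.ravel(), clf.decision_function(x_new)))  # True
```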
Note
While SVM models derived from [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) and [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) use `C` as regularization parameter, most other estimators use `alpha`. The exact equivalence between the amount of regularization of two models depends on the exact objective function optimized by the model. For example, when the estimator used is [`Ridge`](generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge") regression, the relation between them is given as \(C = \frac{1}{\alpha}\).
###
1.4.7.2. LinearSVC
The primal problem can be equivalently formulated as
\[\min\_ {w, b} \frac{1}{2} w^T w + C \sum\_{i=1}^{n}\max(0, 1 - y\_i (w^T \phi(x\_i) + b)),\] where we make use of the [hinge loss](https://en.wikipedia.org/wiki/Hinge_loss). This is the form that is directly optimized by [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"), but unlike the dual form, this one does not involve inner products between samples, so the famous kernel trick cannot be applied. This is why only the linear kernel is supported by [`LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") (\(\phi\) is the identity function).
###
1.4.7.3. NuSVC
The \(\nu\)-SVC formulation [[15]](#id17) is a reparameterization of the \(C\)-SVC and therefore mathematically equivalent.
We introduce a new parameter \(\nu\) (instead of \(C\)) which controls the number of support vectors and *margin errors*: \(\nu \in (0, 1]\) is an upper bound on the fraction of margin errors and a lower bound of the fraction of support vectors. A margin error corresponds to a sample that lies on the wrong side of its margin boundary: it is either misclassified, or it is correctly classified but does not lie beyond the margin.
###
1.4.7.4. SVR
Given training vectors \(x\_i \in \mathbb{R}^p\), i=1,…, n, and a vector \(y \in \mathbb{R}^n\), \(\varepsilon\)-SVR solves the following primal problem:
\[ \begin{align}\begin{aligned}\min\_ {w, b, \zeta, \zeta^\*} \frac{1}{2} w^T w + C \sum\_{i=1}^{n} (\zeta\_i + \zeta\_i^\*)\\\begin{split}\textrm {subject to } & y\_i - w^T \phi (x\_i) - b \leq \varepsilon + \zeta\_i,\\ & w^T \phi (x\_i) + b - y\_i \leq \varepsilon + \zeta\_i^\*,\\ & \zeta\_i, \zeta\_i^\* \geq 0, i=1, ..., n\end{split}\end{aligned}\end{align} \] Here, we are penalizing samples whose prediction is at least \(\varepsilon\) away from their true target. These samples penalize the objective by \(\zeta\_i\) or \(\zeta\_i^\*\), depending on whether their predictions lie above or below the \(\varepsilon\) tube.
The dual problem is
\[ \begin{align}\begin{aligned}\min\_{\alpha, \alpha^\*} \frac{1}{2} (\alpha - \alpha^\*)^T Q (\alpha - \alpha^\*) + \varepsilon e^T (\alpha + \alpha^\*) - y^T (\alpha - \alpha^\*)\\\begin{split} \textrm {subject to } & e^T (\alpha - \alpha^\*) = 0\\ & 0 \leq \alpha\_i, \alpha\_i^\* \leq C, i=1, ..., n\end{split}\end{aligned}\end{align} \] where \(e\) is the vector of all ones, \(Q\) is an \(n\) by \(n\) positive semidefinite matrix, \(Q\_{ij} \equiv K(x\_i, x\_j) = \phi (x\_i)^T \phi (x\_j)\) is the kernel. Here training vectors are implicitly mapped into a higher (maybe infinite) dimensional space by the function \(\phi\).
The prediction is:
\[\sum\_{i \in SV}(\alpha\_i - \alpha\_i^\*) K(x\_i, x) + b\] These parameters can be accessed through the attributes `dual_coef_` which holds the difference \(\alpha\_i - \alpha\_i^\*\), `support_vectors_` which holds the support vectors, and `intercept_` which holds the independent term \(b\).
###
1.4.7.5. LinearSVR
The primal problem can be equivalently formulated as
\[\min\_ {w, b} \frac{1}{2} w^T w + C \sum\_{i=1}^{n}\max(0, |y\_i - (w^T \phi(x\_i) + b)| - \varepsilon),\] where we make use of the epsilon-insensitive loss, i.e. errors of less than \(\varepsilon\) are ignored. This is the form that is directly optimized by [`LinearSVR`](generated/sklearn.svm.linearsvr#sklearn.svm.LinearSVR "sklearn.svm.LinearSVR").
1.4.8. Implementation details
------------------------------
Internally, we use [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) [[12]](#id14) and [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) [[11]](#id13) to handle all computations. These libraries are wrapped using C and Cython. For a description of the implementation and details of the algorithms used, please refer to their respective papers.
scikit_learn 6.5. Unsupervised dimensionality reduction 6.5. Unsupervised dimensionality reduction
==========================================
If your number of features is high, it may be useful to reduce it with an unsupervised step prior to supervised steps. Many of the [Unsupervised learning](https://scikit-learn.org/1.1/unsupervised_learning.html#unsupervised-learning) methods implement a `transform` method that can be used to reduce the dimensionality. Below we discuss some specific examples of this pattern that are heavily used.
6.5.1. PCA: principal component analysis
-----------------------------------------
[`decomposition.PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") looks for a combination of features that capture well the variance of the original features. See [Decomposing signals in components (matrix factorization problems)](decomposition#decompositions).
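A minimal sketch of using PCA as an unsupervised reduction step before a supervised estimator; the digits dataset, the number of components, and the downstream classifier are illustrative assumptions:
```
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)

# Unsupervised reduction to 30 components, then a supervised classifier.
clf = make_pipeline(PCA(n_components=30), GaussianNB())
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy, for illustration only
```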
6.5.2. Random projections
--------------------------
The module `random_projection` provides several tools for data reduction by random projections. See the relevant section of the documentation: [Random Projection](random_projection#random-projection).
6.5.3. Feature agglomeration
-----------------------------
[`cluster.FeatureAgglomeration`](generated/sklearn.cluster.featureagglomeration#sklearn.cluster.FeatureAgglomeration "sklearn.cluster.FeatureAgglomeration") applies [Hierarchical clustering](clustering#hierarchical-clustering) to group together features that behave similarly.
scikit_learn 2.7. Novelty and Outlier Detection 2.7. Novelty and Outlier Detection
==================================
Many applications require being able to decide whether a new observation belongs to the same distribution as existing observations (it is an *inlier*), or should be considered as different (it is an *outlier*). Often, this ability is used to clean real data sets. Two important distinctions must be made:
outlier detection:
The training data contains outliers which are defined as observations that are far from the others. Outlier detection estimators thus try to fit the regions where the training data is the most concentrated, ignoring the deviant observations.
novelty detection:
The training data is not polluted by outliers and we are interested in detecting whether a **new** observation is an outlier. In this context an outlier is also called a novelty.
Outlier detection and novelty detection are both used for anomaly detection, where one is interested in detecting abnormal or unusual observations. Outlier detection is then also known as unsupervised anomaly detection and novelty detection as semi-supervised anomaly detection. In the context of outlier detection, the outliers/anomalies cannot form a dense cluster as available estimators assume that the outliers/anomalies are located in low density regions. On the contrary, in the context of novelty detection, novelties/anomalies can form a dense cluster as long as they are in a low density region of the training data, considered as normal in this context.
The scikit-learn project provides a set of machine learning tools that can be used both for novelty or outlier detection. This strategy is implemented with objects learning in an unsupervised way from the data:
```
estimator.fit(X_train)
```
new observations can then be sorted as inliers or outliers with a `predict` method:
```
estimator.predict(X_test)
```
Inliers are labeled 1, while outliers are labeled -1. The predict method makes use of a threshold on the raw scoring function computed by the estimator. This scoring function is accessible through the `score_samples` method, while the threshold can be controlled by the `contamination` parameter.
The `decision_function` method is also defined from the scoring function, in such a way that negative values are outliers and non-negative ones are inliers:
```
estimator.decision_function(X_test)
```
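As a concrete sketch of this API; the choice of [`ensemble.IsolationForest`](generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest") and the synthetic data are assumptions for illustration:
```
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X_train = rng.normal(size=(100, 2))
X_test = np.array([[0.1, 0.2], [8.0, 8.0]])  # one inlier-like, one outlier-like point

est = IsolationForest(contamination=0.1, random_state=0).fit(X_train)
print(est.predict(X_test))            # 1 for inliers, -1 for outliers
print(est.decision_function(X_test))  # negative values indicate outliers
print(est.score_samples(X_test))      # raw scoring function
```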
Note that [`neighbors.LocalOutlierFactor`](generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") does not support `predict`, `decision_function` and `score_samples` methods by default but only a `fit_predict` method, as this estimator was originally meant to be applied for outlier detection. The scores of abnormality of the training samples are accessible through the `negative_outlier_factor_` attribute.
If you really want to use [`neighbors.LocalOutlierFactor`](generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") for novelty detection, i.e. predict labels or compute the score of abnormality of new unseen data, you can instantiate the estimator with the `novelty` parameter set to `True` before fitting the estimator. In this case, `fit_predict` is not available.
Warning
**Novelty detection with Local Outlier Factor**
When `novelty` is set to `True` be aware that you must only use `predict`, `decision_function` and `score_samples` on new unseen data and not on the training samples as this would lead to wrong results. I.e., the result of `predict` will not be the same as `fit_predict`. The scores of abnormality of the training samples are always accessible through the `negative_outlier_factor_` attribute.
The behavior of [`neighbors.LocalOutlierFactor`](generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") is summarized in the following table.
| Method | Outlier detection | Novelty detection |
| --- | --- | --- |
| `fit_predict` | OK | Not available |
| `predict` | Not available | Use only on new data |
| `decision_function` | Not available | Use only on new data |
| `score_samples` | Use `negative_outlier_factor_` | Use only on new data |
| `negative_outlier_factor_` | OK | OK |
2.7.1. Overview of outlier detection methods
---------------------------------------------
A comparison of the outlier detection algorithms in scikit-learn. Local Outlier Factor (LOF) does not show a decision boundary in black as it has no predict method to be applied on new data when it is used for outlier detection.
[`ensemble.IsolationForest`](generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest") and [`neighbors.LocalOutlierFactor`](generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") perform reasonably well on the data sets considered here. The [`svm.OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") is known to be sensitive to outliers and thus does not perform very well for outlier detection. That being said, outlier detection in high-dimension, or without any assumptions on the distribution of the inlying data is very challenging. [`svm.OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") may still be used with outlier detection but requires fine-tuning of its hyperparameter `nu` to handle outliers and prevent overfitting. [`linear_model.SGDOneClassSVM`](generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") provides an implementation of a linear One-Class SVM with a linear complexity in the number of samples. This implementation is here used with a kernel approximation technique to obtain results similar to [`svm.OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") which uses a Gaussian kernel by default. Finally, [`covariance.EllipticEnvelope`](generated/sklearn.covariance.ellipticenvelope#sklearn.covariance.EllipticEnvelope "sklearn.covariance.EllipticEnvelope") assumes the data is Gaussian and learns an ellipse. For more details on the different estimators refer to the example [Comparing anomaly detection algorithms for outlier detection on toy datasets](../auto_examples/miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py) and the sections hereunder.
2.7.2. Novelty Detection
-------------------------
Consider a data set of \(n\) observations from the same distribution described by \(p\) features. Consider now that we add one more observation to that data set. Is the new observation so different from the others that we can doubt it is regular? (i.e. does it come from the same distribution?) Or on the contrary, is it so similar to the other that we cannot distinguish it from the original observations? This is the question addressed by the novelty detection tools and methods.
In general, it is about learning a rough, close frontier delimiting the contour of the initial observations’ distribution, plotted in embedding \(p\)-dimensional space. Then, if further observations lie within the frontier-delimited subspace, they are considered as coming from the same population as the initial observations. Otherwise, if they lie outside the frontier, we can say that they are abnormal with a given confidence in our assessment.
The One-Class SVM was introduced by Schölkopf et al. for that purpose and implemented in the [Support Vector Machines](svm#svm) module in the [`svm.OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") object. It requires the choice of a kernel and a scalar parameter to define a frontier. The RBF kernel is usually chosen although there exists no exact formula or algorithm to set its bandwidth parameter. This is the default in the scikit-learn implementation. The `nu` parameter, also known as the margin of the One-Class SVM, corresponds to the probability of finding a new, but regular, observation outside the frontier.
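A minimal sketch of this usage; the synthetic training data and the values of `gamma` and `nu` are illustrative assumptions:
```
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_train = rng.normal(size=(200, 2))          # "regular" observations only
X_new = np.array([[0.0, 0.5], [5.0, 5.0]])   # one regular-looking, one novel point

clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(X_train)
print(clf.predict(X_new))  # 1 for regular observations, -1 for novelties
```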
###
2.7.2.1. Scaling up the One-Class SVM
An online linear version of the One-Class SVM is implemented in [`linear_model.SGDOneClassSVM`](generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM"). This implementation scales linearly with the number of samples and can be used with a kernel approximation to approximate the solution of a kernelized [`svm.OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM") whose complexity is at best quadratic in the number of samples. See section [Online One-Class SVM](sgd#sgd-online-one-class-svm) for more details.
2.7.3. Outlier Detection
-------------------------
Outlier detection is similar to novelty detection in the sense that the goal is to separate a core of regular observations from some polluting ones, called *outliers*. Yet, in the case of outlier detection, we don’t have a clean data set representing the population of regular observations that can be used to train any tool.
###
2.7.3.1. Fitting an elliptic envelope
One common way of performing outlier detection is to assume that the regular data come from a known distribution (e.g. data are Gaussian distributed). From this assumption, we generally try to define the “shape” of the data, and can define outlying observations as observations which stand far enough from the fit shape.
scikit-learn provides an object [`covariance.EllipticEnvelope`](generated/sklearn.covariance.ellipticenvelope#sklearn.covariance.EllipticEnvelope "sklearn.covariance.EllipticEnvelope") that fits a robust covariance estimate to the data, and thus fits an ellipse to the central data points, ignoring points outside the central mode.
For instance, assuming that the inlier data are Gaussian distributed, it will estimate the inlier location and covariance in a robust way (i.e. without being influenced by outliers). The Mahalanobis distances obtained from this estimate are used to derive a measure of outlyingness. This strategy is illustrated below.
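A short sketch, assuming Gaussian inliers with a few injected outliers:
```
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(42)
X_inliers = rng.normal(size=(100, 2))
X_outliers = rng.uniform(low=6, high=8, size=(5, 2))
X = np.vstack([X_inliers, X_outliers])

env = EllipticEnvelope(contamination=0.05).fit(X)
labels = env.predict(X)             # 1 for inliers, -1 for outliers
print(int(np.sum(labels == -1)))    # number of points flagged as outliers
print(env.mahalanobis(X[:3]))       # robust Mahalanobis distances
```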
###
2.7.3.2. Isolation Forest
One efficient way of performing outlier detection in high-dimensional datasets is to use random forests. The [`ensemble.IsolationForest`](generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest") ‘isolates’ observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature.
Since recursive partitioning can be represented by a tree structure, the number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node.
This path length, averaged over a forest of such random trees, is a measure of normality and our decision function.
Random partitioning produces noticeably shorter paths for anomalies. Hence, when a forest of random trees collectively produce shorter path lengths for particular samples, they are highly likely to be anomalies.
The implementation of [`ensemble.IsolationForest`](generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest") is based on an ensemble of [`tree.ExtraTreeRegressor`](generated/sklearn.tree.extratreeregressor#sklearn.tree.ExtraTreeRegressor "sklearn.tree.ExtraTreeRegressor"). Following Isolation Forest original paper, the maximum depth of each tree is set to \(\lceil \log\_2(n) \rceil\) where \(n\) is the number of samples used to build the tree (see (Liu et al., 2008) for more details).
This algorithm is illustrated below.
The [`ensemble.IsolationForest`](generated/sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest") supports `warm_start=True` which allows you to add more trees to an already fitted model:
```
>>> from sklearn.ensemble import IsolationForest
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [0, 0], [-20, 50], [3, 5]])
>>> clf = IsolationForest(n_estimators=10, warm_start=True)
>>> clf.fit(X) # fit 10 trees
>>> clf.set_params(n_estimators=20) # add 10 more trees
>>> clf.fit(X) # fit the added trees
```
###
2.7.3.3. Local Outlier Factor
Another efficient way to perform outlier detection on moderately high dimensional datasets is to use the Local Outlier Factor (LOF) algorithm.
The [`neighbors.LocalOutlierFactor`](generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") (LOF) algorithm computes a score (called local outlier factor) reflecting the degree of abnormality of the observations. It measures the local density deviation of a given data point with respect to its neighbors. The idea is to detect the samples that have a substantially lower density than their neighbors.
In practice the local density is obtained from the k-nearest neighbors. The LOF score of an observation is equal to the ratio of the average local density of its k-nearest neighbors, and its own local density: a normal instance is expected to have a local density similar to that of its neighbors, while abnormal data are expected to have much smaller local density.
The number k of neighbors considered (alias parameter `n_neighbors`) is typically chosen 1) greater than the minimum number of objects a cluster has to contain, so that other objects can be local outliers relative to this cluster, and 2) smaller than the maximum number of close-by objects that can potentially be local outliers. In practice, such information is generally not available, and taking `n_neighbors=20` appears to work well in general. When the proportion of outliers is high (i.e. greater than 10 %, as in the example below), `n_neighbors` should be greater (`n_neighbors=35` in the example below).
The strength of the LOF algorithm is that it takes both local and global properties of datasets into consideration: it can perform well even in datasets where abnormal samples have different underlying densities. The question is not how isolated the sample is, but how isolated it is with respect to the surrounding neighborhood.
When applying LOF for outlier detection, there are no `predict`, `decision_function` and `score_samples` methods but only a `fit_predict` method. The scores of abnormality of the training samples are accessible through the `negative_outlier_factor_` attribute. Note that `predict`, `decision_function` and `score_samples` can be used on new unseen data when LOF is applied for novelty detection, i.e. when the `novelty` parameter is set to `True`, but the result of `predict` may differ from that of `fit_predict`. See [Novelty detection with Local Outlier Factor](#novelty-with-lof).
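For concreteness, a minimal sketch of this usage; the synthetic data and the appended outlier are assumptions:
```
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(size=(100, 2)), [[10.0, 10.0]]])  # append one obvious outlier

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)              # 1 for inliers, -1 for outliers
print(labels[-1])                        # the appended point should be flagged as -1
print(lof.negative_outlier_factor_[-1])  # strongly negative for abnormal samples
```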
This strategy is illustrated below.
2.7.4. Novelty detection with Local Outlier Factor
---------------------------------------------------
To use [`neighbors.LocalOutlierFactor`](generated/sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor") for novelty detection, i.e. predict labels or compute the score of abnormality of new unseen data, you need to instantiate the estimator with the `novelty` parameter set to `True` before fitting the estimator:
```
from sklearn.neighbors import LocalOutlierFactor

lof = LocalOutlierFactor(novelty=True)
lof.fit(X_train)
```
Note that `fit_predict` is not available in this case to avoid inconsistencies.
Warning
**Novelty detection with Local Outlier Factor**
When `novelty` is set to `True` be aware that you must only use `predict`, `decision_function` and `score_samples` on new unseen data and not on the training samples as this would lead to wrong results. I.e., the result of `predict` will not be the same as `fit_predict`. The scores of abnormality of the training samples are always accessible through the `negative_outlier_factor_` attribute.
Novelty detection with Local Outlier Factor is illustrated below.
scikit_learn 1.5. Stochastic Gradient Descent 1.5. Stochastic Gradient Descent
================================
**Stochastic Gradient Descent (SGD)** is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) [Support Vector Machines](https://en.wikipedia.org/wiki/Support_vector_machine) and [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression). Even though SGD has been around in the machine learning community for a long time, it has received a considerable amount of attention just recently in the context of large-scale learning.
SGD has been successfully applied to large-scale and sparse machine learning problems often encountered in text classification and natural language processing. Given that the data is sparse, the classifiers in this module easily scale to problems with more than 10^5 training examples and more than 10^5 features.
Strictly speaking, SGD is merely an optimization technique and does not correspond to a specific family of machine learning models. It is only a *way* to train a model. Often, an instance of [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") or [`SGDRegressor`](generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor") will have an equivalent estimator in the scikit-learn API, potentially using a different optimization technique. For example, using `SGDClassifier(loss='log_loss')` results in logistic regression, i.e. a model equivalent to [`LogisticRegression`](generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") which is fitted via SGD instead of being fitted by one of the other solvers in [`LogisticRegression`](generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression"). Similarly, `SGDRegressor(loss='squared_error', penalty='l2')` and [`Ridge`](generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge") solve the same optimization problem, via different means.
The advantages of Stochastic Gradient Descent are:
* Efficiency.
* Ease of implementation (lots of opportunities for code tuning).
The disadvantages of Stochastic Gradient Descent include:
* SGD requires a number of hyperparameters such as the regularization parameter and the number of iterations.
* SGD is sensitive to feature scaling.
Warning
Make sure you permute (shuffle) your training data before fitting the model or use `shuffle=True` to shuffle after each iteration (used by default). Also, ideally, features should be standardized using e.g. `make_pipeline(StandardScaler(), SGDClassifier())` (see [Pipelines](compose#combining-estimators)).
1.5.1. Classification
----------------------
The class [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") implements a plain stochastic gradient descent learning routine which supports different loss functions and penalties for classification. Below is the decision boundary of a [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") trained with the hinge loss, equivalent to a linear SVM.
Like other classifiers, SGD has to be fitted with two arrays: an array `X` of shape (n\_samples, n\_features) holding the training samples, and an array `y` of shape (n\_samples,) holding the target values (class labels) for the training samples:
```
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0., 0.], [1., 1.]]
>>> y = [0, 1]
>>> clf = SGDClassifier(loss="hinge", penalty="l2", max_iter=5)
>>> clf.fit(X, y)
SGDClassifier(max_iter=5)
```
After being fitted, the model can then be used to predict new values:
```
>>> clf.predict([[2., 2.]])
array([1])
```
SGD fits a linear model to the training data. The `coef_` attribute holds the model parameters:
```
>>> clf.coef_
array([[9.9..., 9.9...]])
```
The `intercept_` attribute holds the intercept (aka offset or bias):
```
>>> clf.intercept_
array([-9.9...])
```
Whether or not the model should use an intercept, i.e. a biased hyperplane, is controlled by the parameter `fit_intercept`.
The signed distance to the hyperplane (computed as the dot product between the coefficients and the input sample, plus the intercept) is given by [`SGDClassifier.decision_function`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.decision_function "sklearn.linear_model.SGDClassifier.decision_function"):
```
>>> clf.decision_function([[2., 2.]])
array([29.6...])
```
The concrete loss function can be set via the `loss` parameter. [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") supports the following loss functions:
* `loss="hinge"`: (soft-margin) linear Support Vector Machine,
* `loss="modified_huber"`: smoothed hinge loss,
* `loss="log_loss"`: logistic regression,
* and all regression losses below. In this case the target is encoded as -1 or 1, and the problem is treated as a regression problem. The predicted class then corresponds to the sign of the predicted target.
Please refer to the [mathematical section below](#sgd-mathematical-formulation) for formulas. The first two loss functions are lazy: they only update the model parameters if an example violates the margin constraint, which makes training very efficient and may result in sparser models (i.e. with more zero coefficients), even when the L2 penalty is used.
Using `loss="log_loss"` or `loss="modified_huber"` enables the `predict_proba` method, which gives a vector of probability estimates \(P(y|x)\) per sample \(x\):
```
>>> clf = SGDClassifier(loss="log_loss", max_iter=5).fit(X, y)
>>> clf.predict_proba([[1., 1.]])
array([[0.00..., 0.99...]])
```
The concrete penalty can be set via the `penalty` parameter. SGD supports the following penalties:
* `penalty="l2"`: L2 norm penalty on `coef_`.
* `penalty="l1"`: L1 norm penalty on `coef_`.
* `penalty="elasticnet"`: Convex combination of L2 and L1; `(1 - l1_ratio) * L2 + l1_ratio * L1`.
The default setting is `penalty="l2"`. The L1 penalty leads to sparse solutions, driving most coefficients to zero. The Elastic Net [[11]](#id15) solves some deficiencies of the L1 penalty in the presence of highly correlated attributes. The parameter `l1_ratio` controls the convex combination of L1 and L2 penalty.
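A brief sketch of selecting the Elastic Net penalty; the toy data and the value of `alpha` are illustrative assumptions, and `l1_ratio=0.15` is the default:
```
from sklearn.linear_model import SGDClassifier

X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
y = [0, 0, 1, 1]

clf = SGDClassifier(penalty="elasticnet", l1_ratio=0.15, alpha=1e-3, max_iter=1000)
clf.fit(X, y)
print(clf.coef_)
```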
[`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") supports multi-class classification by combining multiple binary classifiers in a “one versus all” (OVA) scheme. For each of the \(K\) classes, a binary classifier is learned that discriminates between that and all other \(K-1\) classes. At testing time, we compute the confidence score (i.e. the signed distances to the hyperplane) for each classifier and choose the class with the highest confidence. The Figure below illustrates the OVA approach on the iris dataset. The dashed lines represent the three OVA classifiers; the background colors show the decision surface induced by the three classifiers.
In the case of multi-class classification `coef_` is a two-dimensional array of shape (n\_classes, n\_features) and `intercept_` is a one-dimensional array of shape (n\_classes,). The i-th row of `coef_` holds the weight vector of the OVA classifier for the i-th class; classes are indexed in ascending order (see attribute `classes_`). Note that, in principle, since they allow the creation of a probability model, `loss="log_loss"` and `loss="modified_huber"` are more suitable for one-vs-all classification.
[`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") supports both weighted classes and weighted instances via the fit parameters `class_weight` and `sample_weight`. See the examples below and the docstring of [`SGDClassifier.fit`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.fit "sklearn.linear_model.SGDClassifier.fit") for further information.
[`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") supports averaged SGD (ASGD) [[10]](#id14). Averaging can be enabled by setting `average=True`. ASGD performs the same updates as the regular SGD (see [Mathematical formulation](#sgd-mathematical-formulation)), but instead of using the last value of the coefficients as the `coef_` attribute (i.e. the values of the last update), `coef_` is set instead to the **average** value of the coefficients across all updates. The same is done for the `intercept_` attribute. When using ASGD the learning rate can be larger and even constant, leading on some datasets to a speed up in training time.
For classification with a logistic loss, another variant of SGD with an averaging strategy is available with Stochastic Average Gradient (SAG) algorithm, available as a solver in [`LogisticRegression`](generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression").
1.5.2. Regression
------------------
The class [`SGDRegressor`](generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor") implements a plain stochastic gradient descent learning routine which supports different loss functions and penalties to fit linear regression models. [`SGDRegressor`](generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor") is well suited for regression problems with a large number of training samples (> 10,000); for other problems we recommend [`Ridge`](generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge"), [`Lasso`](generated/sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso"), or [`ElasticNet`](generated/sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet "sklearn.linear_model.ElasticNet").
The concrete loss function can be set via the `loss` parameter. [`SGDRegressor`](generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor") supports the following loss functions:
* `loss="squared_error"`: Ordinary least squares,
* `loss="huber"`: Huber loss for robust regression,
* `loss="epsilon_insensitive"`: linear Support Vector Regression.
Please refer to the [mathematical section below](#sgd-mathematical-formulation) for formulas. The Huber and epsilon-insensitive loss functions can be used for robust regression. The width of the insensitive region has to be specified via the parameter `epsilon`. This parameter depends on the scale of the target variables.
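A minimal sketch of robust regression with the Huber loss; the synthetic data and the value of `epsilon` are illustrative assumptions:
```
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)

reg = make_pipeline(
    StandardScaler(),
    SGDRegressor(loss="huber", epsilon=1.0, penalty="l2", max_iter=1000),
)
reg.fit(X, y)
print(reg.score(X, y))
```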
The `penalty` parameter determines the regularization to be used (see description above in the classification section).
[`SGDRegressor`](generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor") also supports averaged SGD [[10]](#id14) (here again, see description above in the classification section).
For regression with a squared loss and a l2 penalty, another variant of SGD with an averaging strategy is available with Stochastic Average Gradient (SAG) algorithm, available as a solver in [`Ridge`](generated/sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge").
1.5.3. Online One-Class SVM
----------------------------
The class [`sklearn.linear_model.SGDOneClassSVM`](generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") implements an online linear version of the One-Class SVM using a stochastic gradient descent. Combined with kernel approximation techniques, [`sklearn.linear_model.SGDOneClassSVM`](generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") can be used to approximate the solution of a kernelized One-Class SVM, implemented in [`sklearn.svm.OneClassSVM`](generated/sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM"), with a linear complexity in the number of samples. Note that the complexity of a kernelized One-Class SVM is at best quadratic in the number of samples. [`sklearn.linear_model.SGDOneClassSVM`](generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") is thus well suited for datasets with a large number of training samples (> 10,000) for which the SGD variant can be several orders of magnitude faster.
Its implementation is based on the implementation of the stochastic gradient descent. Indeed, the original optimization problem of the One-Class SVM is given by
\[\begin{split}\begin{aligned} \min\_{w, \rho, \xi} & \quad \frac{1}{2}\Vert w \Vert^2 - \rho + \frac{1}{\nu n} \sum\_{i=1}^n \xi\_i \\ \text{s.t.} & \quad \langle w, x\_i \rangle \geq \rho - \xi\_i \quad 1 \leq i \leq n \\ & \quad \xi\_i \geq 0 \quad 1 \leq i \leq n \end{aligned}\end{split}\] where \(\nu \in (0, 1]\) is the user-specified parameter controlling the proportion of outliers and the proportion of support vectors. Getting rid of the slack variables \(\xi\_i\) this problem is equivalent to
\[\min\_{w, \rho} \frac{1}{2}\Vert w \Vert^2 - \rho + \frac{1}{\nu n} \sum\_{i=1}^n \max(0, \rho - \langle w, x\_i \rangle) \, .\] Multiplying by the constant \(\nu\) and introducing the intercept \(b = 1 - \rho\) we obtain the following equivalent optimization problem
\[\min\_{w, b} \frac{\nu}{2}\Vert w \Vert^2 + b\nu + \frac{1}{n} \sum\_{i=1}^n \max(0, 1 - (\langle w, x\_i \rangle + b)) \, .\] This is similar to the optimization problems studied in section [Mathematical formulation](#sgd-mathematical-formulation) with \(y\_i = 1, 1 \leq i \leq n\) and \(\alpha = \nu/2\), \(L\) being the hinge loss function and \(R\) being the L2 norm. We just need to add the term \(b\nu\) in the optimization loop.
As [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") and [`SGDRegressor`](generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor"), [`SGDOneClassSVM`](generated/sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM") supports averaged SGD. Averaging can be enabled by setting `average=True`.
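A sketch of combining a kernel approximation with the linear online One-Class SVM; the synthetic data and the `Nystroem` settings below are assumptions for illustration:
```
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDOneClassSVM
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 2))

clf = make_pipeline(
    Nystroem(gamma=0.5, n_components=100, random_state=0),  # approximate RBF feature map
    SGDOneClassSVM(nu=0.05, random_state=0),
)
clf.fit(X)
print(clf.predict(X[:5]))  # 1 for inliers, -1 for outliers
```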
1.5.4. Stochastic Gradient Descent for sparse data
---------------------------------------------------
Note
The sparse implementation produces slightly different results from the dense implementation, due to a shrunk learning rate for the intercept. See [Implementation details](#implementation-details).
There is built-in support for sparse data given in any matrix in a format supported by [scipy.sparse](https://docs.scipy.org/doc/scipy/reference/sparse.html). For maximum efficiency, however, use the CSR matrix format as defined in [scipy.sparse.csr\_matrix](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html).
1.5.5. Complexity
------------------
The major advantage of SGD is its efficiency, which is basically linear in the number of training examples. If X is a matrix of size (n, p), training has a cost of \(O(k n \bar p)\), where k is the number of iterations (epochs) and \(\bar p\) is the average number of non-zero attributes per sample.
Recent theoretical results, however, show that the runtime to get some desired optimization accuracy does not increase as the training set size increases.
1.5.6. Stopping criterion
--------------------------
The classes [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") and [`SGDRegressor`](generated/sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor") provide two criteria to stop the algorithm when a given level of convergence is reached:
* With `early_stopping=True`, the input data is split into a training set and a validation set. The model is then fitted on the training set, and the stopping criterion is based on the prediction score (using the `score` method) computed on the validation set. The size of the validation set can be changed with the parameter `validation_fraction`.
* With `early_stopping=False`, the model is fitted on the entire input data and the stopping criterion is based on the objective function computed on the training data.
In both cases, the criterion is evaluated once per epoch, and the algorithm stops when the criterion does not improve `n_iter_no_change` times in a row. The improvement is evaluated with absolute tolerance `tol`, and the algorithm stops in any case after a maximum number of iterations `max_iter`.
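A small sketch of the validation-based criterion; the synthetic data and the values of the stopping parameters are illustrative assumptions:
```
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, random_state=0)

clf = SGDClassifier(
    early_stopping=True,      # hold out part of the training data for validation
    validation_fraction=0.2,  # size of the validation split
    n_iter_no_change=5,       # stop after 5 epochs without improvement
    tol=1e-3,                 # absolute improvement tolerance
    max_iter=1000,            # hard cap on the number of epochs
)
clf.fit(X, y)
print(clf.n_iter_)            # number of epochs actually run
```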
1.5.7. Tips on Practical Use
-----------------------------
* Stochastic Gradient Descent is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. Note that the *same* scaling must be applied to the test vector to obtain meaningful results. This can be easily done using `StandardScaler`:
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train) # Don't cheat - fit only on training data
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test) # apply same transformation to test data
# Or better yet: use a pipeline!
from sklearn.pipeline import make_pipeline
est = make_pipeline(StandardScaler(), SGDClassifier())
est.fit(X_train, y_train)  # the classifier in the pipeline also needs the targets
est.predict(X_test)
```
If your attributes have an intrinsic scale (e.g. word frequencies or indicator features) scaling is not needed.
* Finding a reasonable regularization term \(\alpha\) is best done using automatic hyper-parameter search, e.g. [`GridSearchCV`](generated/sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") or [`RandomizedSearchCV`](generated/sklearn.model_selection.randomizedsearchcv#sklearn.model_selection.RandomizedSearchCV "sklearn.model_selection.RandomizedSearchCV"), usually in the range `10.0**-np.arange(1,7)`.
* Empirically, we found that SGD converges after observing approximately 10^6 training samples. Thus, a reasonable first guess for the number of iterations is `max_iter = np.ceil(10**6 / n)`, where `n` is the size of the training set.
* If you apply SGD to features extracted using PCA we found that it is often wise to scale the feature values by some constant `c` such that the average L2 norm of the training data equals one.
* We found that Averaged SGD works best with a larger number of features and a higher `eta0`.
1.5.8. Mathematical formulation
--------------------------------
We describe here the mathematical details of the SGD procedure. A good overview with convergence rates can be found in [[12]](#id16).
Given a set of training examples \((x\_1, y\_1), \ldots, (x\_n, y\_n)\) where \(x\_i \in \mathbf{R}^m\) and \(y\_i \in \mathbf{R}\) (\(y\_i \in \{-1, 1\}\) for classification), our goal is to learn a linear scoring function \(f(x) = w^T x + b\) with model parameters \(w \in \mathbf{R}^m\) and intercept \(b \in \mathbf{R}\). In order to make predictions for binary classification, we simply look at the sign of \(f(x)\). To find the model parameters, we minimize the regularized training error given by
\[E(w,b) = \frac{1}{n}\sum\_{i=1}^{n} L(y\_i, f(x\_i)) + \alpha R(w)\] where \(L\) is a loss function that measures model (mis)fit and \(R\) is a regularization term (aka penalty) that penalizes model complexity; \(\alpha > 0\) is a non-negative hyperparameter that controls the regularization strength.
Different choices for \(L\) entail different classifiers or regressors:
* Hinge (soft-margin): equivalent to Support Vector Classification. \(L(y\_i, f(x\_i)) = \max(0, 1 - y\_i f(x\_i))\).
* Perceptron: \(L(y\_i, f(x\_i)) = \max(0, - y\_i f(x\_i))\).
* Modified Huber: \(L(y\_i, f(x\_i)) = \max(0, 1 - y\_i f(x\_i))^2\) if \(y\_i f(x\_i) > 1\), and \(L(y\_i, f(x\_i)) = -4 y\_i f(x\_i)\) otherwise.
* Log Loss: equivalent to Logistic Regression. \(L(y\_i, f(x\_i)) = \log(1 + \exp (-y\_i f(x\_i)))\).
* Squared Error: Linear regression (Ridge or Lasso depending on \(R\)). \(L(y\_i, f(x\_i)) = \frac{1}{2}(y\_i - f(x\_i))^2\).
* Huber: less sensitive to outliers than least-squares. It is equivalent to least squares when \(|y\_i - f(x\_i)| \leq \varepsilon\), and \(L(y\_i, f(x\_i)) = \varepsilon |y\_i - f(x\_i)| - \frac{1}{2} \varepsilon^2\) otherwise.
* Epsilon-Insensitive: (soft-margin) equivalent to Support Vector Regression. \(L(y\_i, f(x\_i)) = \max(0, |y\_i - f(x\_i)| - \varepsilon)\).
All of the above loss functions can be regarded as an upper bound on the misclassification error (Zero-one loss) as shown in the Figure below.
Popular choices for the regularization term \(R\) (the `penalty` parameter) include:
* L2 norm: \(R(w) := \frac{1}{2} \sum\_{j=1}^{m} w\_j^2 = \frac{1}{2} ||w||\_2^2\),
* L1 norm: \(R(w) := \sum\_{j=1}^{m} |w\_j|\), which leads to sparse solutions.
* Elastic Net: \(R(w) := \frac{\rho}{2} \sum\_{j=1}^{m} w\_j^2 + (1-\rho) \sum\_{j=1}^{m} |w\_j|\), a convex combination of L2 and L1, where \(\rho\) is given by `1 - l1_ratio`.
The Figure below shows the contours of the different regularization terms in a 2-dimensional parameter space (\(m=2\)) when \(R(w) = 1\).
###
1.5.8.1. SGD
Stochastic gradient descent is an optimization method for unconstrained optimization problems. In contrast to (batch) gradient descent, SGD approximates the true gradient of \(E(w,b)\) by considering a single training example at a time.
The class [`SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier") implements a first-order SGD learning routine. The algorithm iterates over the training examples and for each example updates the model parameters according to the update rule given by
\[w \leftarrow w - \eta \left[\alpha \frac{\partial R(w)}{\partial w} + \frac{\partial L(w^T x\_i + b, y\_i)}{\partial w}\right]\] where \(\eta\) is the learning rate which controls the step-size in the parameter space. The intercept \(b\) is updated similarly but without regularization (and with additional decay for sparse matrices, as detailed in [Implementation details](#implementation-details)).
The learning rate \(\eta\) can be either constant or gradually decaying. For classification, the default learning rate schedule (`learning_rate='optimal'`) is given by
\[\eta^{(t)} = \frac {1}{\alpha (t\_0 + t)}\] where \(t\) is the time step (there are a total of `n_samples * n_iter` time steps), \(t\_0\) is determined based on a heuristic proposed by Léon Bottou such that the expected initial updates are comparable with the expected size of the weights (this assuming that the norm of the training samples is approx. 1). The exact definition can be found in `_init_t` in `BaseSGD`.
For regression the default learning rate schedule is inverse scaling (`learning_rate='invscaling'`), given by
\[\eta^{(t)} = \frac{eta\_0}{t^{power\_t}}\] where \(eta\_0\) and \(power\_t\) are hyperparameters chosen by the user via `eta0` and `power_t`, respectively.
For a constant learning rate use `learning_rate='constant'` and use `eta0` to specify the learning rate.
For an adaptively decreasing learning rate, use `learning_rate='adaptive'` and use `eta0` to specify the starting learning rate. When the stopping criterion is reached, the learning rate is divided by 5, and the algorithm does not stop. The algorithm stops when the learning rate goes below 1e-6.
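A short sketch of selecting these schedules; the toy data and the `eta0` values are illustrative assumptions:
```
from sklearn.linear_model import SGDClassifier

X = [[0., 0.], [1., 1.], [2., 0.], [3., 1.]]
y = [0, 1, 0, 1]

constant_clf = SGDClassifier(learning_rate="constant", eta0=0.01).fit(X, y)
adaptive_clf = SGDClassifier(learning_rate="adaptive", eta0=0.1, tol=1e-3).fit(X, y)
print(constant_clf.n_iter_, adaptive_clf.n_iter_)  # epochs run under each schedule
```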
The model parameters can be accessed through the `coef_` and `intercept_` attributes: `coef_` holds the weights \(w\) and `intercept_` holds \(b\).
When using Averaged SGD (with the `average` parameter), `coef_` is set to the average weight across all updates: `coef_` \(= \frac{1}{T} \sum\_{t=0}^{T-1} w^{(t)}\), where \(T\) is the total number of updates, found in the `t_` attribute.
1.5.9. Implementation details
------------------------------
The implementation of SGD is influenced by the `Stochastic Gradient SVM` of [[7]](#id10). Similar to SvmSGD, the weight vector is represented as the product of a scalar and a vector which allows an efficient weight update in the case of L2 regularization. In the case of sparse input `X`, the intercept is updated with a smaller learning rate (multiplied by 0.01) to account for the fact that it is updated more frequently. Training examples are picked up sequentially and the learning rate is lowered after each observed example. We adopted the learning rate schedule from [[8]](#id12). For multi-class classification, a “one versus all” approach is used. We use the truncated gradient algorithm proposed in [[9]](#id13) for L1 regularization (and the Elastic Net). The code is written in Cython.
scikit_learn 2.4. Biclustering 2.4. Biclustering
=================
Biclustering can be performed with the module `sklearn.cluster.bicluster`. Biclustering algorithms simultaneously cluster rows and columns of a data matrix. These clusters of rows and columns are known as biclusters. Each determines a submatrix of the original data matrix with some desired properties.
For instance, given a matrix of shape `(10, 10)`, one possible bicluster with three rows and two columns induces a submatrix of shape `(3, 2)`:
```
>>> import numpy as np
>>> data = np.arange(100).reshape(10, 10)
>>> rows = np.array([0, 2, 3])[:, np.newaxis]
>>> columns = np.array([1, 2])
>>> data[rows, columns]
array([[ 1, 2],
[21, 22],
[31, 32]])
```
For visualization purposes, given a bicluster, the rows and columns of the data matrix may be rearranged to make the bicluster contiguous.
Algorithms differ in how they define biclusters. Some of the common types include:
* constant values, constant rows, or constant columns
* unusually high or low values
* submatrices with low variance
* correlated rows or columns
Algorithms also differ in how rows and columns may be assigned to biclusters, which leads to different bicluster structures. Block diagonal or checkerboard structures occur when rows and columns are divided into partitions.
If each row and each column belongs to exactly one bicluster, then rearranging the rows and columns of the data matrix reveals the biclusters on the diagonal. Here is an example of this structure where biclusters have higher average values than the other rows and columns:
An example of biclusters formed by partitioning rows and columns.
In the checkerboard case, each row belongs to all column clusters, and each column belongs to all row clusters. Here is an example of this structure where the variance of the values within each bicluster is small:
An example of checkerboard biclusters.
After fitting a model, row and column cluster membership can be found in the `rows_` and `columns_` attributes. `rows_[i]` is a binary vector with nonzero entries corresponding to rows that belong to bicluster `i`. Similarly, `columns_[i]` indicates which columns belong to bicluster `i`.
Some models also have `row_labels_` and `column_labels_` attributes. These models partition the rows and columns, such as in the block diagonal and checkerboard bicluster structures.
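As a small illustration (assuming the bicluster estimators are importable from `sklearn.cluster`, as in recent releases), these attributes can be inspected on any fitted model; the random matrix and cluster count below are arbitrary:

```
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.RandomState(0)
data = rng.rand(10, 10)
model = SpectralCoclustering(n_clusters=2, random_state=0).fit(data)

print(model.rows_.shape)      # (n_clusters, n_rows): boolean membership per bicluster
print(model.columns_.shape)   # (n_clusters, n_columns)
print(model.row_labels_)      # partition label of each row
print(model.column_labels_)   # partition label of each column
```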
Note
Biclustering has many other names in different fields including co-clustering, two-mode clustering, two-way clustering, block clustering, coupled two-way clustering, etc. The names of some algorithms, such as the Spectral Co-Clustering algorithm, reflect these alternate names.
2.4.1. Spectral Co-Clustering
------------------------------
The `SpectralCoclustering` algorithm finds biclusters with values higher than those in the corresponding other rows and columns. Each row and each column belongs to exactly one bicluster, so rearranging the rows and columns to make partitions contiguous reveals these high values along the diagonal:
Note
The algorithm treats the input data matrix as a bipartite graph: the rows and columns of the matrix correspond to the two sets of vertices, and each entry corresponds to an edge between a row and a column. The algorithm approximates the normalized cut of this graph to find heavy subgraphs.
###
2.4.1.1. Mathematical formulation
An approximate solution to the optimal normalized cut may be found via the generalized eigenvalue decomposition of the Laplacian of the graph. Usually this would mean working directly with the Laplacian matrix. If the original data matrix \(A\) has shape \(m \times n\), the Laplacian matrix for the corresponding bipartite graph has shape \((m + n) \times (m + n)\). However, in this case it is possible to work directly with \(A\), which is smaller and more efficient.
The input matrix \(A\) is preprocessed as follows:
\[A\_n = R^{-1/2} A C^{-1/2}\] Where \(R\) is the diagonal matrix with entry \(i\) equal to \(\sum\_{j} A\_{ij}\) and \(C\) is the diagonal matrix with entry \(j\) equal to \(\sum\_{i} A\_{ij}\).
The singular value decomposition, \(A\_n = U \Sigma V^\top\), provides the partitions of the rows and columns of \(A\). A subset of the left singular vectors gives the row partitions, and a subset of the right singular vectors gives the column partitions.
The \(\ell = \lceil \log\_2 k \rceil\) singular vectors, starting from the second, provide the desired partitioning information. They are used to form the matrix \(Z\):
\[\begin{split}Z = \begin{bmatrix} R^{-1/2} U \\\\ C^{-1/2} V \end{bmatrix}\end{split}\] where the columns of \(U\) are \(u\_2, \dots, u\_{\ell + 1}\), and similarly for \(V\).
Then the rows of \(Z\) are clustered using [k-means](clustering#k-means). The first `n_rows` labels provide the row partitioning, and the remaining `n_columns` labels provide the column partitioning.
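A compact sketch of the overall workflow, in the spirit of the scikit-learn biclustering examples: generate data with a block-diagonal structure, shuffle it, fit the model, and reorder rows and columns by the learned labels (the sizes and seed are illustrative):

```
import numpy as np
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters

data, rows, columns = make_biclusters(shape=(30, 30), n_clusters=3,
                                      noise=0.5, shuffle=False, random_state=0)
rng = np.random.RandomState(0)
row_idx, col_idx = rng.permutation(30), rng.permutation(30)
shuffled = data[row_idx][:, col_idx]

model = SpectralCoclustering(n_clusters=3, random_state=0).fit(shuffled)

# Sorting by the learned labels makes the recovered biclusters contiguous
reordered = shuffled[np.argsort(model.row_labels_)]
reordered = reordered[:, np.argsort(model.column_labels_)]
```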
2.4.2. Spectral Biclustering
-----------------------------
The `SpectralBiclustering` algorithm assumes that the input data matrix has a hidden checkerboard structure. The rows and columns of a matrix with this structure may be partitioned so that the entries of any bicluster in the Cartesian product of row clusters and column clusters are approximately constant. For instance, if there are two row partitions and three column partitions, each row will belong to three biclusters, and each column will belong to two biclusters.
The algorithm partitions the rows and columns of a matrix so that a corresponding blockwise-constant checkerboard matrix provides a good approximation to the original matrix.
###
2.4.2.1. Mathematical formulation
The input matrix \(A\) is first normalized to make the checkerboard pattern more obvious. There are three possible methods:
1. *Independent row and column normalization*, as in Spectral Co-Clustering. This method makes the rows sum to a constant and the columns sum to a different constant.
2. **Bistochastization**: repeated row and column normalization until convergence. This method makes both rows and columns sum to the same constant.
3. **Log normalization**: the log of the data matrix is computed: \(L = \log A\). Then the column mean \(\overline{L\_{i \cdot}}\), row mean \(\overline{L\_{\cdot j}}\), and overall mean \(\overline{L\_{\cdot \cdot}}\) of \(L\) are computed. The final matrix is computed according to the formula
\[K\_{ij} = L\_{ij} - \overline{L\_{i \cdot}} - \overline{L\_{\cdot j}} + \overline{L\_{\cdot \cdot}}\] After normalizing, the first few singular vectors are computed, just as in the Spectral Co-Clustering algorithm.
If log normalization was used, all the singular vectors are meaningful. However, if independent normalization or bistochastization were used, the first singular vectors, \(u\_1\) and \(v\_1\), are discarded. From now on, the “first” singular vectors refer to \(u\_2 \dots u\_{p+1}\) and \(v\_2 \dots v\_{p+1}\) except in the case of log normalization.
Given these singular vectors, they are ranked according to which can be best approximated by a piecewise-constant vector. The approximations for each vector are found using one-dimensional k-means and scored using the Euclidean distance. A subset of the best left and right singular vectors is selected. Next, the data is projected onto this best subset of singular vectors and clustered.
For instance, if \(p\) singular vectors were calculated, the \(q\) best are found as described, where \(q<p\). Let \(U\) be the matrix with columns the \(q\) best left singular vectors, and similarly \(V\) for the right. To partition the rows, the rows of \(A\) are projected to a \(q\) dimensional space: \(A \* V\). Treating the \(m\) rows of this \(m \times q\) matrix as samples and clustering using k-means yields the row labels. Similarly, projecting the columns to \(A^{\top} \* U\) and clustering this \(n \times q\) matrix yields the column labels.
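A similar sketch for the checkerboard case, matching the two row partitions and three column partitions mentioned above; `make_checkerboard` generates suitable synthetic data and the other values are illustrative:

```
import numpy as np
from sklearn.cluster import SpectralBiclustering
from sklearn.datasets import make_checkerboard

data, rows, columns = make_checkerboard(shape=(30, 30), n_clusters=(2, 3),
                                        noise=1.0, shuffle=False, random_state=0)
rng = np.random.RandomState(0)
shuffled = data[rng.permutation(30)][:, rng.permutation(30)]

model = SpectralBiclustering(n_clusters=(2, 3), method='log', random_state=0)
model.fit(shuffled)

# Rows and columns are partitioned independently; reordering by the labels
# makes the checkerboard structure visible again
reordered = shuffled[np.argsort(model.row_labels_)]
reordered = reordered[:, np.argsort(model.column_labels_)]
```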
2.4.3. Biclustering evaluation
-------------------------------
There are two ways of evaluating a biclustering result: internal and external. Internal measures, such as cluster stability, rely only on the data and the result themselves. Currently there are no internal bicluster measures in scikit-learn. External measures refer to an external source of information, such as the true solution. When working with real data the true solution is usually unknown, but biclustering artificial data may be useful for evaluating algorithms precisely because the true solution is known.
To compare a set of found biclusters to the set of true biclusters, two similarity measures are needed: a similarity measure for individual biclusters, and a way to combine these individual similarities into an overall score.
To compare individual biclusters, several measures have been used. For now, only the Jaccard index is implemented:
\[J(A, B) = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}\] where \(A\) and \(B\) are biclusters and \(|A \cap B|\) is the number of elements in their intersection. The Jaccard index achieves its minimum of 0 when the biclusters do not overlap at all and its maximum of 1 when they are identical.
Several methods have been developed to compare two sets of biclusters. For now, only [`consensus_score`](generated/sklearn.metrics.consensus_score#sklearn.metrics.consensus_score "sklearn.metrics.consensus_score") (Hochreiter et. al., 2010) is available:
1. Compute bicluster similarities for pairs of biclusters, one in each set, using the Jaccard index or a similar measure.
2. Assign biclusters from one set to another in a one-to-one fashion to maximize the sum of their similarities. This step is performed using the Hungarian algorithm.
3. The final sum of similarities is divided by the size of the larger set.
The minimum consensus score, 0, occurs when all pairs of biclusters are totally dissimilar. The maximum score, 1, occurs when both sets are identical.
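A brief sketch of scoring a fit against known ground truth, reusing the kind of setup sketched for Spectral Co-Clustering above; note that the true bicluster indicators must be permuted the same way as the data:

```
import numpy as np
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters
from sklearn.metrics import consensus_score

data, rows, columns = make_biclusters(shape=(30, 30), n_clusters=3,
                                      noise=0.5, shuffle=False, random_state=0)
rng = np.random.RandomState(0)
row_idx, col_idx = rng.permutation(30), rng.permutation(30)

model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(data[row_idx][:, col_idx])

# 1.0 means the found biclusters match the true ones exactly
score = consensus_score(model.biclusters_, (rows[:, row_idx], columns[:, col_idx]))
print(score)
```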
scikit_learn 6.1. Pipelines and composite estimators 6.1. Pipelines and composite estimators
=======================================
Transformers are usually combined with classifiers, regressors or other estimators to build a composite estimator. The most common tool is a [Pipeline](#pipeline). Pipeline is often used in combination with [FeatureUnion](#feature-union) which concatenates the output of transformers into a composite feature space. [TransformedTargetRegressor](#transformed-target-regressor) deals with transforming the [target](https://scikit-learn.org/1.1/glossary.html#term-target) (i.e. log-transform [y](https://scikit-learn.org/1.1/glossary.html#term-y)). In contrast, Pipelines only transform the observed data ([X](https://scikit-learn.org/1.1/glossary.html#term-X)).
6.1.1. Pipeline: chaining estimators
-------------------------------------
[`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") serves multiple purposes here:
Convenience and encapsulation
You only have to call [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) and [predict](https://scikit-learn.org/1.1/glossary.html#term-predict) once on your data to fit a whole sequence of estimators.
Joint parameter selection
You can [grid search](grid_search#grid-search) over parameters of all estimators in the pipeline at once.
Safety
Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
All estimators in a pipeline, except the last one, must be transformers (i.e. must have a [transform](https://scikit-learn.org/1.1/glossary.html#term-transform) method). The last estimator may be any type (transformer, classifier, etc.).
###
6.1.1.1. Usage
####
6.1.1.1.1. Construction
The [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") is built using a list of `(key, value)` pairs, where the `key` is a string containing the name you want to give this step and `value` is an estimator object:
```
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.svm import SVC
>>> from sklearn.decomposition import PCA
>>> estimators = [('reduce_dim', PCA()), ('clf', SVC())]
>>> pipe = Pipeline(estimators)
>>> pipe
Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC())])
```
The utility function [`make_pipeline`](generated/sklearn.pipeline.make_pipeline#sklearn.pipeline.make_pipeline "sklearn.pipeline.make_pipeline") is a shorthand for constructing pipelines; it takes a variable number of estimators and returns a pipeline, filling in the names automatically:
```
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.naive_bayes import MultinomialNB
>>> from sklearn.preprocessing import Binarizer
>>> make_pipeline(Binarizer(), MultinomialNB())
Pipeline(steps=[('binarizer', Binarizer()), ('multinomialnb', MultinomialNB())])
```
####
6.1.1.1.2. Accessing steps
The estimators of a pipeline are stored as a list in the `steps` attribute, but can be accessed by index or name by indexing (with `[idx]`) the Pipeline:
```
>>> pipe.steps[0]
('reduce_dim', PCA())
>>> pipe[0]
PCA()
>>> pipe['reduce_dim']
PCA()
```
Pipeline’s `named_steps` attribute allows accessing steps by name with tab completion in interactive environments:
```
>>> pipe.named_steps.reduce_dim is pipe['reduce_dim']
True
```
A sub-pipeline can also be extracted using the slicing notation commonly used for Python Sequences such as lists or strings (although only a step of 1 is permitted). This is convenient for performing only some of the transformations (or their inverse):
```
>>> pipe[:1]
Pipeline(steps=[('reduce_dim', PCA())])
>>> pipe[-1:]
Pipeline(steps=[('clf', SVC())])
```
####
6.1.1.1.3. Nested parameters
Parameters of the estimators in the pipeline can be accessed using the `<estimator>__<parameter>` syntax:
```
>>> pipe.set_params(clf__C=10)
Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC(C=10))])
```
This is particularly important for doing grid searches:
```
>>> from sklearn.model_selection import GridSearchCV
>>> param_grid = dict(reduce_dim__n_components=[2, 5, 10],
... clf__C=[0.1, 10, 100])
>>> grid_search = GridSearchCV(pipe, param_grid=param_grid)
```
Individual steps may also be replaced as parameters, and non-final steps may be ignored by setting them to `'passthrough'`:
```
>>> from sklearn.linear_model import LogisticRegression
>>> param_grid = dict(reduce_dim=['passthrough', PCA(5), PCA(10)],
... clf=[SVC(), LogisticRegression()],
... clf__C=[0.1, 10, 100])
>>> grid_search = GridSearchCV(pipe, param_grid=param_grid)
```
The estimators of the pipeline can be retrieved by index:
```
>>> pipe[0]
PCA()
```
or by name:
```
>>> pipe['reduce_dim']
PCA()
```
To enable model inspection, [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") has a `get_feature_names_out()` method, just like all transformers. You can use pipeline slicing to get the feature names going into each step:
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectKBest
>>> iris = load_iris()
>>> pipe = Pipeline(steps=[
... ('select', SelectKBest(k=2)),
... ('clf', LogisticRegression())])
>>> pipe.fit(iris.data, iris.target)
Pipeline(steps=[('select', SelectKBest(...)), ('clf', LogisticRegression(...))])
>>> pipe[:-1].get_feature_names_out()
array(['x2', 'x3'], ...)
```
You can also provide custom feature names for the input data using `get_feature_names_out`:
```
>>> pipe[:-1].get_feature_names_out(iris.feature_names)
array(['petal length (cm)', 'petal width (cm)'], ...)
```
###
6.1.1.2. Notes
Calling `fit` on the pipeline is the same as calling `fit` on each estimator in turn, transforming the input and passing it on to the next step. The pipeline has all the methods that the last estimator in the pipeline has, i.e. if the last estimator is a classifier, the [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") can be used as a classifier. If the last estimator is a transformer, again, so is the pipeline.
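A minimal illustration of this behaviour, using the iris data only for convenience; since the last step is a classifier, the pipeline itself exposes `predict` and `score`:

```
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = Pipeline([('reduce_dim', PCA(n_components=2)), ('clf', SVC())])
pipe.fit(X, y)              # fits PCA, transforms X, then fits the SVC
print(pipe.predict(X[:3]))  # delegated to the final SVC
print(pipe.score(X, y))     # likewise provided by the final classifier
```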
###
6.1.1.3. Caching transformers: avoid repeated computation
Fitting transformers may be computationally expensive. With its `memory` parameter set, [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") will cache each transformer after calling `fit`. This feature is used to avoid computing the fit transformers within a pipeline if the parameters and input data are identical. A typical example is the case of a grid search in which the transformers can be fitted only once and reused for each configuration.
The parameter `memory` is needed in order to cache the transformers. `memory` can be either a string containing the directory where to cache the transformers or a [joblib.Memory](https://joblib.readthedocs.io/en/latest/memory.html) object:
```
>>> from tempfile import mkdtemp
>>> from shutil import rmtree
>>> from sklearn.decomposition import PCA
>>> from sklearn.svm import SVC
>>> from sklearn.pipeline import Pipeline
>>> estimators = [('reduce_dim', PCA()), ('clf', SVC())]
>>> cachedir = mkdtemp()
>>> pipe = Pipeline(estimators, memory=cachedir)
>>> pipe
Pipeline(memory=...,
steps=[('reduce_dim', PCA()), ('clf', SVC())])
>>> # Clear the cache directory when you don't need it anymore
>>> rmtree(cachedir)
```
Warning
**Side effect of caching transformers**
When using a [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") without caching enabled, it is possible to inspect the original transformer instance directly:
```
>>> from sklearn.datasets import load_digits
>>> X_digits, y_digits = load_digits(return_X_y=True)
>>> pca1 = PCA()
>>> svm1 = SVC()
>>> pipe = Pipeline([('reduce_dim', pca1), ('clf', svm1)])
>>> pipe.fit(X_digits, y_digits)
Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC())])
>>> # The pca instance can be inspected directly
>>> print(pca1.components_)
[[-1.77484909e-19 ... 4.07058917e-18]]
```
Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. In the following example, accessing the `PCA` instance `pca2` will raise an `AttributeError` since `pca2` will be an unfitted transformer. Instead, use the attribute `named_steps` to inspect estimators within the pipeline:
```
>>> cachedir = mkdtemp()
>>> pca2 = PCA()
>>> svm2 = SVC()
>>> cached_pipe = Pipeline([('reduce_dim', pca2), ('clf', svm2)],
... memory=cachedir)
>>> cached_pipe.fit(X_digits, y_digits)
Pipeline(memory=...,
steps=[('reduce_dim', PCA()), ('clf', SVC())])
>>> print(cached_pipe.named_steps['reduce_dim'].components_)
[[-1.77484909e-19 ... 4.07058917e-18]]
>>> # Remove the cache directory
>>> rmtree(cachedir)
```
6.1.2. Transforming target in regression
-----------------------------------------
[`TransformedTargetRegressor`](generated/sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor "sklearn.compose.TransformedTargetRegressor") transforms the targets `y` before fitting a regression model. The predictions are mapped back to the original space via an inverse transform. It takes as an argument the regressor that will be used for prediction, and the transformer that will be applied to the target variable:
```
>>> import numpy as np
>>> from sklearn.datasets import fetch_california_housing
>>> from sklearn.compose import TransformedTargetRegressor
>>> from sklearn.preprocessing import QuantileTransformer
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.model_selection import train_test_split
>>> X, y = fetch_california_housing(return_X_y=True)
>>> X, y = X[:2000, :], y[:2000] # select a subset of data
>>> transformer = QuantileTransformer(output_distribution='normal')
>>> regressor = LinearRegression()
>>> regr = TransformedTargetRegressor(regressor=regressor,
... transformer=transformer)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> regr.fit(X_train, y_train)
TransformedTargetRegressor(...)
>>> print('R2 score: {0:.2f}'.format(regr.score(X_test, y_test)))
R2 score: 0.61
>>> raw_target_regr = LinearRegression().fit(X_train, y_train)
>>> print('R2 score: {0:.2f}'.format(raw_target_regr.score(X_test, y_test)))
R2 score: 0.59
```
For simple transformations, instead of a Transformer object, a pair of functions can be passed, defining the transformation and its inverse mapping:
```
>>> def func(x):
... return np.log(x)
>>> def inverse_func(x):
... return np.exp(x)
```
Subsequently, the object is created as:
```
>>> regr = TransformedTargetRegressor(regressor=regressor,
... func=func,
... inverse_func=inverse_func)
>>> regr.fit(X_train, y_train)
TransformedTargetRegressor(...)
>>> print('R2 score: {0:.2f}'.format(regr.score(X_test, y_test)))
R2 score: 0.51
```
By default, the provided functions are checked at each fit to be the inverse of each other. However, it is possible to bypass this checking by setting `check_inverse` to `False`:
```
>>> def inverse_func(x):
... return x
>>> regr = TransformedTargetRegressor(regressor=regressor,
... func=func,
... inverse_func=inverse_func,
... check_inverse=False)
>>> regr.fit(X_train, y_train)
TransformedTargetRegressor(...)
>>> print('R2 score: {0:.2f}'.format(regr.score(X_test, y_test)))
R2 score: -1.57
```
Note
The transformation can be triggered by setting either `transformer` or the pair of functions `func` and `inverse_func`. However, setting both options will raise an error.
6.1.3. FeatureUnion: composite feature spaces
----------------------------------------------
[`FeatureUnion`](generated/sklearn.pipeline.featureunion#sklearn.pipeline.FeatureUnion "sklearn.pipeline.FeatureUnion") combines several transformer objects into a new transformer that combines their output. A [`FeatureUnion`](generated/sklearn.pipeline.featureunion#sklearn.pipeline.FeatureUnion "sklearn.pipeline.FeatureUnion") takes a list of transformer objects. During fitting, each of these is fit to the data independently. The transformers are applied in parallel, and the feature matrices they output are concatenated side-by-side into a larger matrix.
When you want to apply different transformations to each field of the data, see the related class [`ColumnTransformer`](generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") (see [user guide](#column-transformer)).
[`FeatureUnion`](generated/sklearn.pipeline.featureunion#sklearn.pipeline.FeatureUnion "sklearn.pipeline.FeatureUnion") serves the same purposes as [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") - convenience and joint parameter estimation and validation.
[`FeatureUnion`](generated/sklearn.pipeline.featureunion#sklearn.pipeline.FeatureUnion "sklearn.pipeline.FeatureUnion") and [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") can be combined to create complex models.
(A [`FeatureUnion`](generated/sklearn.pipeline.featureunion#sklearn.pipeline.FeatureUnion "sklearn.pipeline.FeatureUnion") has no way of checking whether two transformers might produce identical features. It only produces a union when the feature sets are disjoint, and making sure they are is the caller’s responsibility.)
###
6.1.3.1. Usage
A [`FeatureUnion`](generated/sklearn.pipeline.featureunion#sklearn.pipeline.FeatureUnion "sklearn.pipeline.FeatureUnion") is built using a list of `(key, value)` pairs, where the `key` is the name you want to give to a given transformation (an arbitrary string; it only serves as an identifier) and `value` is an estimator object:
```
>>> from sklearn.pipeline import FeatureUnion
>>> from sklearn.decomposition import PCA
>>> from sklearn.decomposition import KernelPCA
>>> estimators = [('linear_pca', PCA()), ('kernel_pca', KernelPCA())]
>>> combined = FeatureUnion(estimators)
>>> combined
FeatureUnion(transformer_list=[('linear_pca', PCA()),
('kernel_pca', KernelPCA())])
```
Like pipelines, feature unions have a shorthand constructor called [`make_union`](generated/sklearn.pipeline.make_union#sklearn.pipeline.make_union "sklearn.pipeline.make_union") that does not require explicit naming of the components.
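A short sketch of this shorthand; the step names are derived automatically from the lowercased class names:

```
from sklearn.decomposition import PCA, KernelPCA
from sklearn.pipeline import make_union

combined = make_union(PCA(), KernelPCA())
# Equivalent to FeatureUnion([('pca', PCA()), ('kernelpca', KernelPCA())])
print(combined.transformer_list)
```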
Like `Pipeline`, individual steps may be replaced using `set_params`, and ignored by setting to `'drop'`:
```
>>> combined.set_params(kernel_pca='drop')
FeatureUnion(transformer_list=[('linear_pca', PCA()),
('kernel_pca', 'drop')])
```
6.1.4. ColumnTransformer for heterogeneous data
------------------------------------------------
Many datasets contain features of different types, say text, floats, and dates, where each type of feature requires separate preprocessing or feature extraction steps. Often it is easiest to preprocess data before applying scikit-learn methods, for example using [pandas](https://pandas.pydata.org/). Processing your data before passing it to scikit-learn might be problematic for one of the following reasons:
1. Incorporating statistics from test data into the preprocessors makes cross-validation scores unreliable (known as *data leakage*), for example in the case of scalers or imputing missing values.
2. You may want to include the parameters of the preprocessors in a [parameter search](grid_search#grid-search).
The [`ColumnTransformer`](generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") helps to perform different transformations for different columns of the data, within a [`Pipeline`](generated/sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") that is safe from data leakage and that can be parametrized. [`ColumnTransformer`](generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") works on arrays, sparse matrices, and [pandas DataFrames](https://pandas.pydata.org/pandas-docs/stable/).
To each column, a different transformation can be applied, such as preprocessing or a specific feature extraction method:
```
>>> import pandas as pd
>>> X = pd.DataFrame(
... {'city': ['London', 'London', 'Paris', 'Sallisaw'],
... 'title': ["His Last Bow", "How Watson Learned the Trick",
... "A Moveable Feast", "The Grapes of Wrath"],
... 'expert_rating': [5, 3, 4, 5],
... 'user_rating': [4, 5, 4, 3]})
```
For this data, we might want to encode the `'city'` column as a categorical variable using [`OneHotEncoder`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder") but apply a [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") to the `'title'` column. As we might use multiple feature extraction methods on the same column, we give each transformer a unique name, say `'city_category'` and `'title_bow'`. By default, the remaining rating columns are ignored (`remainder='drop'`):
```
>>> from sklearn.compose import ColumnTransformer
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> from sklearn.preprocessing import OneHotEncoder
>>> column_trans = ColumnTransformer(
... [('categories', OneHotEncoder(dtype='int'), ['city']),
... ('title_bow', CountVectorizer(), 'title')],
... remainder='drop', verbose_feature_names_out=False)
>>> column_trans.fit(X)
ColumnTransformer(transformers=[('categories', OneHotEncoder(dtype='int'),
['city']),
('title_bow', CountVectorizer(), 'title')],
verbose_feature_names_out=False)
>>> column_trans.get_feature_names_out()
array(['city_London', 'city_Paris', 'city_Sallisaw', 'bow', 'feast',
'grapes', 'his', 'how', 'last', 'learned', 'moveable', 'of', 'the',
'trick', 'watson', 'wrath'], ...)
>>> column_trans.transform(X).toarray()
array([[1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1]]...)
```
In the above example, the [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") expects a 1D array as input and therefore the column was specified as a string (`'title'`). However, [`OneHotEncoder`](generated/sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder"), like most other transformers, expects 2D data; therefore, in that case you need to specify the column as a list of strings (`['city']`).
Apart from a scalar or a single item list, the column selection can be specified as a list of multiple items, an integer array, a slice, a boolean mask, or with a [`make_column_selector`](generated/sklearn.compose.make_column_selector#sklearn.compose.make_column_selector "sklearn.compose.make_column_selector"). The [`make_column_selector`](generated/sklearn.compose.make_column_selector#sklearn.compose.make_column_selector "sklearn.compose.make_column_selector") is used to select columns based on data type or column name:
```
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.compose import make_column_selector
>>> ct = ColumnTransformer([
... ('scale', StandardScaler(),
... make_column_selector(dtype_include=np.number)),
... ('onehot',
... OneHotEncoder(),
... make_column_selector(pattern='city', dtype_include=object))])
>>> ct.fit_transform(X)
array([[ 0.904..., 0. , 1. , 0. , 0. ],
[-1.507..., 1.414..., 1. , 0. , 0. ],
[-0.301..., 0. , 0. , 1. , 0. ],
[ 0.904..., -1.414..., 0. , 0. , 1. ]])
```
Strings can reference columns if the input is a DataFrame, while integers are always interpreted as positional columns.
We can keep the remaining rating columns by setting `remainder='passthrough'`. The values are appended to the end of the transformation:
```
>>> column_trans = ColumnTransformer(
... [('city_category', OneHotEncoder(dtype='int'),['city']),
... ('title_bow', CountVectorizer(), 'title')],
... remainder='passthrough')
>>> column_trans.fit_transform(X)
array([[1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 5, 4],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 3, 5],
[0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 4, 4],
[0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 5, 3]]...)
```
The `remainder` parameter can be set to an estimator to transform the remaining rating columns. The transformed values are appended to the end of the transformation:
```
>>> from sklearn.preprocessing import MinMaxScaler
>>> column_trans = ColumnTransformer(
... [('city_category', OneHotEncoder(), ['city']),
... ('title_bow', CountVectorizer(), 'title')],
... remainder=MinMaxScaler())
>>> column_trans.fit_transform(X)[:, -2:]
array([[1. , 0.5],
[0. , 1. ],
[0.5, 0.5],
[1. , 0. ]])
```
The [`make_column_transformer`](generated/sklearn.compose.make_column_transformer#sklearn.compose.make_column_transformer "sklearn.compose.make_column_transformer") function is available to more easily create a [`ColumnTransformer`](generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") object. Specifically, the names will be given automatically. The equivalent for the above example would be:
```
>>> from sklearn.compose import make_column_transformer
>>> column_trans = make_column_transformer(
... (OneHotEncoder(), ['city']),
... (CountVectorizer(), 'title'),
... remainder=MinMaxScaler())
>>> column_trans
ColumnTransformer(remainder=MinMaxScaler(),
transformers=[('onehotencoder', OneHotEncoder(), ['city']),
('countvectorizer', CountVectorizer(),
'title')])
```
If [`ColumnTransformer`](generated/sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") is fitted with a dataframe and the dataframe only has string column names, then transforming a dataframe will use the column names to select the columns:
```
>>> ct = ColumnTransformer(
... [("scale", StandardScaler(), ["expert_rating"])]).fit(X)
>>> X_new = pd.DataFrame({"expert_rating": [5, 6, 1],
... "ignored_new_col": [1.2, 0.3, -0.1]})
>>> ct.transform(X_new)
array([[ 0.9...],
[ 2.1...],
[-3.9...]])
```
6.1.5. Visualizing Composite Estimators
----------------------------------------
Estimators are displayed with an HTML representation when shown in a Jupyter notebook. This is useful for diagnosing or visualizing a Pipeline with many estimators. This visualization is activated by default:
```
>>> column_trans
```
It can be deactivated by setting the `display` option in [`set_config`](generated/sklearn.set_config#sklearn.set_config "sklearn.set_config") to ‘text’:
```
>>> from sklearn import set_config
>>> set_config(display='text')
>>> # displays text representation in a jupyter context
>>> column_trans
```
An example of the HTML output can be seen in the **HTML representation of Pipeline** section of [Column Transformer with Mixed Types](../auto_examples/compose/plot_column_transformer_mixed_types#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py). As an alternative, the HTML can be written to a file using [`estimator_html_repr`](generated/sklearn.utils.estimator_html_repr#sklearn.utils.estimator_html_repr "sklearn.utils.estimator_html_repr"):
```
>>> from sklearn.utils import estimator_html_repr
>>> with open('my_estimator.html', 'w') as f:
... f.write(estimator_html_repr(clf))
```
| programming_docs |
scikit_learn 6.6. Random Projection 6.6. Random Projection
======================
The [`sklearn.random_projection`](classes#module-sklearn.random_projection "sklearn.random_projection") module implements a simple and computationally efficient way to reduce the dimensionality of the data by trading a controlled amount of accuracy (as additional variance) for faster processing times and smaller model sizes. This module implements two types of unstructured random matrix: [Gaussian random matrix](#gaussian-random-matrix) and [sparse random matrix](#sparse-random-matrix).
The dimensions and distribution of random projection matrices are controlled so as to preserve the pairwise distances between any two samples of the dataset. Thus random projection is a suitable approximation technique for distance-based methods.
6.6.1. The Johnson-Lindenstrauss lemma
---------------------------------------
The main theoretical result behind the efficiency of random projection is the [Johnson-Lindenstrauss lemma (quoting Wikipedia)](https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma):
In mathematics, the Johnson-Lindenstrauss lemma is a result concerning low-distortion embeddings of points from high-dimensional into low-dimensional Euclidean space. The lemma states that a small set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved. The map used for the embedding is at least Lipschitz, and can even be taken to be an orthogonal projection.
Knowing only the number of samples, the [`johnson_lindenstrauss_min_dim`](generated/sklearn.random_projection.johnson_lindenstrauss_min_dim#sklearn.random_projection.johnson_lindenstrauss_min_dim "sklearn.random_projection.johnson_lindenstrauss_min_dim") function conservatively estimates the minimal size of the random subspace needed to guarantee a bounded distortion introduced by the random projection:
```
>>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
>>> johnson_lindenstrauss_min_dim(n_samples=1e6, eps=0.5)
663
>>> johnson_lindenstrauss_min_dim(n_samples=1e6, eps=[0.5, 0.1, 0.01])
array([ 663, 11841, 1112658])
>>> johnson_lindenstrauss_min_dim(n_samples=[1e4, 1e5, 1e6], eps=0.1)
array([ 7894, 9868, 11841])
```
6.6.2. Gaussian random projection
----------------------------------
The [`GaussianRandomProjection`](generated/sklearn.random_projection.gaussianrandomprojection#sklearn.random_projection.GaussianRandomProjection "sklearn.random_projection.GaussianRandomProjection") reduces the dimensionality by projecting the original input space on a randomly generated matrix where components are drawn from the following distribution \(N(0, \frac{1}{n\_{components}})\).
Here is a small excerpt which illustrates how to use the Gaussian random projection transformer:
```
>>> import numpy as np
>>> from sklearn import random_projection
>>> X = np.random.rand(100, 10000)
>>> transformer = random_projection.GaussianRandomProjection()
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)
```
6.6.3. Sparse random projection
--------------------------------
The [`SparseRandomProjection`](generated/sklearn.random_projection.sparserandomprojection#sklearn.random_projection.SparseRandomProjection "sklearn.random_projection.SparseRandomProjection") reduces the dimensionality by projecting the original input space using a sparse random matrix.
Sparse random matrices are an alternative to dense Gaussian random projection matrices, guaranteeing similar embedding quality while being much more memory efficient and allowing faster computation of the projected data.
If we define `s = 1 / density`, the elements of the random matrix are drawn from
\[\begin{split}\left\{ \begin{array}{c c l} -\sqrt{\frac{s}{n\_{\text{components}}}} & & 1 / 2s\\ 0 &\text{with probability} & 1 - 1 / s \\ +\sqrt{\frac{s}{n\_{\text{components}}}} & & 1 / 2s\\ \end{array} \right.\end{split}\] where \(n\_{\text{components}}\) is the size of the projected subspace. By default the density of non zero elements is set to the minimum density as recommended by Ping Li et al.: \(1 / \sqrt{n\_{\text{features}}}\).
Here is a small excerpt which illustrates how to use the sparse random projection transformer:
```
>>> import numpy as np
>>> from sklearn import random_projection
>>> X = np.random.rand(100, 10000)
>>> transformer = random_projection.SparseRandomProjection()
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)
```
6.6.4. Inverse Transform
-------------------------
The random projection transformers have a `compute_inverse_components` parameter. When set to True, after creating the random `components_` matrix during fitting, the transformer computes the pseudo-inverse of this matrix and stores it as `inverse_components_`. The `inverse_components_` matrix has shape \(n\_{features} \times n\_{components}\), and it is always a dense matrix, regardless of whether the components matrix is sparse or dense. So depending on the number of features and components, it may use a lot of memory.
When the `inverse_transform` method is called, it computes the product of the input `X` and the transpose of the inverse components. If the inverse components have been computed during fit, they are reused at each call to `inverse_transform`. Otherwise they are recomputed each time, which can be costly. The result is always dense, even if `X` is sparse.
Here is a small code example which illustrates how to use the inverse transform feature:
```
>>> import numpy as np
>>> from sklearn.random_projection import SparseRandomProjection
>>> X = np.random.rand(100, 10000)
>>> transformer = SparseRandomProjection(
... compute_inverse_components=True
... )
...
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)
>>> X_new_inversed = transformer.inverse_transform(X_new)
>>> X_new_inversed.shape
(100, 10000)
>>> X_new_again = transformer.transform(X_new_inversed)
>>> np.allclose(X_new, X_new_again)
True
```
scikit_learn 2.5. Decomposing signals in components (matrix factorization problems) 2.5. Decomposing signals in components (matrix factorization problems)
======================================================================
2.5.1. Principal component analysis (PCA)
------------------------------------------
###
2.5.1.1. Exact PCA and probabilistic interpretation
PCA is used to decompose a multivariate dataset in a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") is implemented as a *transformer* object that learns \(n\) components in its `fit` method, and can be used on new data to project it on these components.
PCA centers but does not scale the input data for each feature before applying the SVD. The optional parameter `whiten=True` makes it possible to project the data onto the singular space while scaling each component to unit variance. This is often useful if the models down-stream make strong assumptions on the isotropy of the signal: this is for example the case for Support Vector Machines with the RBF kernel and the K-Means clustering algorithm.
Below is an example of the iris dataset, which is comprised of 4 features, projected on the 2 dimensions that explain most variance:
The [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") object also provides a probabilistic interpretation of the PCA that can give a likelihood of data based on the amount of variance it explains. As such it implements a [score](https://scikit-learn.org/1.1/glossary.html#term-score) method that can be used in cross-validation:
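A minimal sketch of using that score in cross-validation; `PCA.score` returns the average log-likelihood of the samples, so it plugs directly into `cross_val_score` (the iris data and component count are illustrative):

```
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
print(cross_val_score(pca, X))  # per-fold average log-likelihood of held-out samples
```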
###
2.5.1.2. Incremental PCA
The [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") object is very useful, but has certain limitations for large datasets. The biggest limitation is that [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") only supports batch processing, which means all of the data to be processed must fit in main memory. The [`IncrementalPCA`](generated/sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA "sklearn.decomposition.IncrementalPCA") object uses a different form of processing and allows for partial computations which almost exactly match the results of [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") while processing the data in a minibatch fashion. [`IncrementalPCA`](generated/sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA "sklearn.decomposition.IncrementalPCA") makes it possible to implement out-of-core Principal Component Analysis either by:
* Using its `partial_fit` method on chunks of data fetched sequentially from the local hard drive or a network database.
* Calling its fit method on a sparse matrix or a memory mapped file using `numpy.memmap`.
[`IncrementalPCA`](generated/sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA "sklearn.decomposition.IncrementalPCA") only stores estimates of component and noise variances, in order to update `explained_variance_ratio_` incrementally. This is why memory usage depends on the number of samples per batch, rather than the number of samples to be processed in the dataset.
As in [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), [`IncrementalPCA`](generated/sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA "sklearn.decomposition.IncrementalPCA") centers but does not scale the input data for each feature before applying the SVD.
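A compact sketch of the `partial_fit` usage described above; the chunking of the iris data merely stands in for chunks fetched from disk or a database:

```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import IncrementalPCA

X, _ = load_iris(return_X_y=True)
ipca = IncrementalPCA(n_components=2)
for chunk in np.array_split(X, 3):  # pretend each chunk arrives separately
    ipca.partial_fit(chunk)
print(ipca.transform(X).shape)      # (150, 2)
```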
###
2.5.1.3. PCA using randomized SVD
It is often interesting to project data to a lower-dimensional space that preserves most of the variance, by dropping the singular vectors associated with lower singular values.
For instance, if we work with 64x64 pixel gray-level pictures for face recognition, the dimensionality of the data is 4096 and it is slow to train an RBF support vector machine on such wide data. Furthermore we know that the intrinsic dimensionality of the data is much lower than 4096 since all pictures of human faces look somewhat alike. The samples lie on a manifold of much lower dimension (say around 200 for instance). The PCA algorithm can be used to linearly transform the data while both reducing the dimensionality and preserving most of the explained variance at the same time.
The class [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") used with the optional parameter `svd_solver='randomized'` is very useful in that case: since we are going to drop most of the singular vectors it is much more efficient to limit the computation to an approximated estimate of the singular vectors we will keep to actually perform the transform.
For instance, the following shows 16 sample portraits (centered around 0.0) from the Olivetti dataset. On the right hand side are the first 16 singular vectors reshaped as portraits. Since we only require the top 16 singular vectors of a dataset with size \(n\_{samples} = 400\) and \(n\_{features} = 64 \times 64 = 4096\), the computation time is less than 1s:
If we note \(n\_{\max} = \max(n\_{\mathrm{samples}}, n\_{\mathrm{features}})\) and \(n\_{\min} = \min(n\_{\mathrm{samples}}, n\_{\mathrm{features}})\), the time complexity of the randomized [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") is \(O(n\_{\max}^2 \cdot n\_{\mathrm{components}})\) instead of \(O(n\_{\max}^2 \cdot n\_{\min})\) for the exact method implemented in [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA").
The memory footprint of randomized [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") is also proportional to \(2 \cdot n\_{\max} \cdot n\_{\mathrm{components}}\) instead of \(n\_{\max} \cdot n\_{\min}\) for the exact method.
Note: the implementation of `inverse_transform` in [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") with `svd_solver='randomized'` is not the exact inverse transform of `transform` even when `whiten=False` (default).
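A small sketch of selecting the randomized solver; the random matrix below merely stands in for the 400 flattened 64x64 Olivetti portraits discussed above:

```
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(400, 4096)  # stand-in for 400 samples with 4096 features
pca = PCA(n_components=16, svd_solver='randomized', random_state=0)
pca.fit(X)
print(pca.components_.shape)  # (16, 4096)
```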
###
2.5.1.4. Sparse principal components analysis (SparsePCA and MiniBatchSparsePCA)
[`SparsePCA`](generated/sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA") is a variant of PCA, with the goal of extracting the set of sparse components that best reconstruct the data.
Mini-batch sparse PCA ([`MiniBatchSparsePCA`](generated/sklearn.decomposition.minibatchsparsepca#sklearn.decomposition.MiniBatchSparsePCA "sklearn.decomposition.MiniBatchSparsePCA")) is a variant of [`SparsePCA`](generated/sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA") that is faster but less accurate. The increased speed is reached by iterating over small chunks of the set of features, for a given number of iterations.
Principal component analysis ([`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA")) has the disadvantage that the components extracted by this method have exclusively dense expressions, i.e. they have non-zero coefficients when expressed as linear combinations of the original variables. This can make interpretation difficult. In many cases, the real underlying components can be more naturally imagined as sparse vectors; for example in face recognition, components might naturally map to parts of faces.
Sparse principal components yield a more parsimonious, interpretable representation, clearly emphasizing which of the original features contribute to the differences between samples.
The following example illustrates 16 components extracted using sparse PCA from the Olivetti faces dataset. It can be seen how the regularization term induces many zeros. Furthermore, the natural structure of the data causes the non-zero coefficients to be vertically adjacent. The model does not enforce this mathematically: each component is a vector \(h \in \mathbf{R}^{4096}\), and there is no notion of vertical adjacency except during the human-friendly visualization as 64x64 pixel images. The fact that the components shown below appear local is the effect of the inherent structure of the data, which makes such local patterns minimize reconstruction error. There exist sparsity-inducing norms that take into account adjacency and different kinds of structure; see [[Jen09]](#jen09) for a review of such methods. For more details on how to use Sparse PCA, see the Examples section, below.
Note that there are many different formulations for the Sparse PCA problem. The one implemented here is based on [[Mrl09]](#mrl09) . The optimization problem solved is a PCA problem (dictionary learning) with an \(\ell\_1\) penalty on the components:
\[\begin{split}(U^\*, V^\*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2} ||X-UV||\_{\text{Fro}}^2+\alpha||V||\_{1,1} \\ \text{subject to } & ||U\_k||\_2 <= 1 \text{ for all } 0 \leq k < n\_{components}\end{split}\] \(||.||\_{\text{Fro}}\) stands for the Frobenius norm and \(||.||\_{1,1}\) stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix. The sparsity-inducing \(||.||\_{1,1}\) matrix norm also prevents learning components from noise when few training samples are available. The degree of penalization (and thus sparsity) can be adjusted through the hyperparameter `alpha`. Small values lead to a gently regularized factorization, while larger values shrink many coefficients to zero.
Note
While in the spirit of an online algorithm, the class [`MiniBatchSparsePCA`](generated/sklearn.decomposition.minibatchsparsepca#sklearn.decomposition.MiniBatchSparsePCA "sklearn.decomposition.MiniBatchSparsePCA") does not implement `partial_fit` because the algorithm is online along the features direction, not the samples direction.
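A brief sketch of extracting sparse components with `SparsePCA`; the random data and the `alpha` value are illustrative and are not taken from the faces example above:

```
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
X = rng.rand(100, 30)
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
X_transformed = spca.fit_transform(X)
print(X_transformed.shape)             # (100, 5)
print(np.mean(spca.components_ == 0))  # fraction of exactly-zero loadings
```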
2.5.2. Kernel Principal Component Analysis (kPCA)
--------------------------------------------------
###
2.5.2.1. Exact Kernel PCA
[`KernelPCA`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") is an extension of PCA which achieves non-linear dimensionality reduction through the use of kernels (see [Pairwise metrics, Affinities and Kernels](metrics#metrics)) [[Scholkopf1997]](#scholkopf1997). It has many applications including denoising, compression and structured prediction (kernel dependency estimation). [`KernelPCA`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") supports both `transform` and `inverse_transform`.
Note
[`KernelPCA.inverse_transform`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.inverse_transform "sklearn.decomposition.KernelPCA.inverse_transform") relies on a kernel ridge to learn the function mapping samples from the PCA basis into the original feature space [[Bakir2003]](#bakir2003). Thus, the reconstruction obtained with [`KernelPCA.inverse_transform`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA.inverse_transform "sklearn.decomposition.KernelPCA.inverse_transform") is an approximation. See the example linked below for more details.
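A minimal sketch of non-linear dimensionality reduction and the approximate reconstruction discussed in the note; the two-circles data and kernel parameters are illustrative:

```
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, _ = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=10,
                 fit_inverse_transform=True, alpha=0.1)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)  # approximate reconstruction
print(X_kpca.shape, X_back.shape)
```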
###
2.5.2.2. Choice of solver for Kernel PCA
While in [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") the number of components is bounded by the number of features, in [`KernelPCA`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") the number of components is bounded by the number of samples. Many real-world datasets have a large number of samples! In these cases finding *all* the components with a full kPCA is a waste of computation time, as data is mostly described by the first few components (e.g. `n_components<=100`). In other words, the centered Gram matrix that is eigendecomposed in the Kernel PCA fitting process has an effective rank that is much smaller than its size. This is a situation where approximate eigensolvers can provide speedup with very low precision loss.
The optional parameter `eigen_solver='randomized'` can be used to *significantly* reduce the computation time when the number of requested `n_components` is small compared with the number of samples. It relies on randomized decomposition methods to find an approximate solution in a shorter time.
The time complexity of the randomized [`KernelPCA`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") is \(O(n\_{\mathrm{samples}}^2 \cdot n\_{\mathrm{components}})\) instead of \(O(n\_{\mathrm{samples}}^3)\) for the exact method implemented with `eigen_solver='dense'`.
The memory footprint of randomized [`KernelPCA`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") is also proportional to \(2 \cdot n\_{\mathrm{samples}} \cdot n\_{\mathrm{components}}\) instead of \(n\_{\mathrm{samples}}^2\) for the exact method.
Note: this technique is the same as in [PCA using randomized SVD](#randomizedpca).
In addition to the above two solvers, `eigen_solver='arpack'` can be used as an alternate way to get an approximate decomposition. In practice, this method only provides reasonable execution times when the number of components to find is extremely small. It is enabled by default when the desired number of components is less than 10 (strict) and the number of samples is more than 200 (strict). See [`KernelPCA`](generated/sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA") for details.
2.5.3. Truncated singular value decomposition and latent semantic analysis
---------------------------------------------------------------------------
[`TruncatedSVD`](generated/sklearn.decomposition.truncatedsvd#sklearn.decomposition.TruncatedSVD "sklearn.decomposition.TruncatedSVD") implements a variant of singular value decomposition (SVD) that only computes the \(k\) largest singular values, where \(k\) is a user-specified parameter.
When truncated SVD is applied to term-document matrices (as returned by [`CountVectorizer`](generated/sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer") or [`TfidfVectorizer`](generated/sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer")), this transformation is known as [latent semantic analysis](https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf) (LSA), because it transforms such matrices to a “semantic” space of low dimensionality. In particular, LSA is known to combat the effects of synonymy and polysemy (both of which roughly mean there are multiple meanings per word), which cause term-document matrices to be overly sparse and exhibit poor similarity under measures such as cosine similarity.
Note
LSA is also known as latent semantic indexing, LSI, though strictly that refers to its use in persistent indexes for information retrieval purposes.
Mathematically, truncated SVD applied to training samples \(X\) produces a low-rank approximation \(X\_k\):
\[X \approx X\_k = U\_k \Sigma\_k V\_k^\top\] After this operation, \(U\_k \Sigma\_k\) is the transformed training set with \(k\) features (called `n_components` in the API).
To also transform a test set \(X\), we multiply it with \(V\_k\):
\[X' = X V\_k\]
Note
Most treatments of LSA in the natural language processing (NLP) and information retrieval (IR) literature swap the axes of the matrix \(X\) so that it has shape `n_features` × `n_samples`. We present LSA in a different way that matches the scikit-learn API better, but the singular values found are the same.
[`TruncatedSVD`](generated/sklearn.decomposition.truncatedsvd#sklearn.decomposition.TruncatedSVD "sklearn.decomposition.TruncatedSVD") is very similar to [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), but differs in that the matrix \(X\) does not need to be centered. When the columnwise (per-feature) means of \(X\) are subtracted from the feature values, truncated SVD on the resulting matrix is equivalent to PCA. In practical terms, this means that the [`TruncatedSVD`](generated/sklearn.decomposition.truncatedsvd#sklearn.decomposition.TruncatedSVD "sklearn.decomposition.TruncatedSVD") transformer accepts `scipy.sparse` matrices without the need to densify them, as densifying may fill up memory even for medium-sized document collections.
While the [`TruncatedSVD`](generated/sklearn.decomposition.truncatedsvd#sklearn.decomposition.TruncatedSVD "sklearn.decomposition.TruncatedSVD") transformer works with any feature matrix, using it on tf–idf matrices is recommended over raw frequency counts in an LSA/document processing setting. In particular, sublinear scaling and inverse document frequency should be turned on (`sublinear_tf=True, use_idf=True`) to bring the feature values closer to a Gaussian distribution, compensating for LSA’s erroneous assumptions about textual data.
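A short sketch of the LSA setup described above; the toy corpus and the component count are illustrative only, and the final L2 normalization step is a common, optional addition:

```
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

corpus = ["the cat sat on the mat",
          "the dog sat on the log",
          "cats and dogs are animals"]
vectorizer = TfidfVectorizer(sublinear_tf=True, use_idf=True)
lsa = make_pipeline(vectorizer, TruncatedSVD(n_components=2), Normalizer(copy=False))
X_lsa = lsa.fit_transform(corpus)
print(X_lsa.shape)  # (3, 2)
```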
2.5.4. Dictionary Learning
---------------------------
###
2.5.4.1. Sparse coding with a precomputed dictionary
The [`SparseCoder`](generated/sklearn.decomposition.sparsecoder#sklearn.decomposition.SparseCoder "sklearn.decomposition.SparseCoder") object is an estimator that can be used to transform signals into sparse linear combination of atoms from a fixed, precomputed dictionary such as a discrete wavelet basis. This object therefore does not implement a `fit` method. The transformation amounts to a sparse coding problem: finding a representation of the data as a linear combination of as few dictionary atoms as possible. All variations of dictionary learning implement the following transform methods, controllable via the `transform_method` initialization parameter:
* Orthogonal matching pursuit ([Orthogonal Matching Pursuit (OMP)](linear_model#omp))
* Least-angle regression ([Least Angle Regression](linear_model#least-angle-regression))
* Lasso computed by least-angle regression
* Lasso using coordinate descent ([Lasso](linear_model#lasso))
* Thresholding
Thresholding is very fast but does not yield accurate reconstructions. Thresholded representations have nonetheless been shown to be useful in the literature for classification tasks. For image reconstruction tasks, orthogonal matching pursuit yields the most accurate, unbiased reconstruction.
The dictionary learning objects offer, via the `split_sign` parameter, the possibility to separate the positive and negative values in the results of sparse coding. This is useful when dictionary learning is used for extracting features that will be used for supervised learning, because it allows the learning algorithm to assign different weights to the negative loadings of a particular atom than to the corresponding positive loadings.
The split code for a single sample has length `2 * n_components` and is constructed using the following rule: First, the regular code of length `n_components` is computed. Then, the first `n_components` entries of the `split_code` are filled with the positive part of the regular code vector. The second half of the split code is filled with the negative part of the code vector, only with a positive sign. Therefore, the split\_code is non-negative.
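A hedged sketch of sparse coding against a fixed, randomly generated dictionary (the dictionary, data, and parameter values are purely illustrative); setting `split_sign=True` yields the non-negative split code described above:

```
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.RandomState(0)
dictionary = rng.randn(15, 64)                      # 15 atoms with 64 features each
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
X = rng.randn(5, 64)                                # 5 signals to encode

coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                    transform_n_nonzero_coefs=3)
code = coder.transform(X)
print(code.shape)                                   # (5, 15)

split_coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                          transform_n_nonzero_coefs=3, split_sign=True)
split_code = split_coder.transform(X)
print(split_code.shape)                             # (5, 30), i.e. 2 * n_components
print(bool((split_code >= 0).all()))                # True: the split code is non-negative
```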
###
2.5.4.2. Generic dictionary learning
Dictionary learning ([`DictionaryLearning`](generated/sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning "sklearn.decomposition.DictionaryLearning")) is a matrix factorization problem that amounts to finding a (usually overcomplete) dictionary that will perform well at sparsely encoding the fitted data.
Representing data as sparse combinations of atoms from an overcomplete dictionary is suggested to be the way the mammalian primary visual cortex works. Consequently, dictionary learning applied on image patches has been shown to give good results in image processing tasks such as image completion, inpainting and denoising, as well as for supervised recognition tasks.
Dictionary learning is an optimization problem solved by alternately updating the sparse code, as a solution to multiple Lasso problems, considering the dictionary fixed, and then updating the dictionary to best fit the sparse code.
\[\begin{split}(U^\*, V^\*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2} ||X-UV||\_{\text{Fro}}^2+\alpha||U||\_{1,1} \\ \text{subject to } & ||V\_k||\_2 \leq 1 \text{ for all } 0 \leq k < n\_{\mathrm{atoms}}\end{split}\]
\(||.||\_{\text{Fro}}\) stands for the Frobenius norm and \(||.||\_{1,1}\) stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix. After using such a procedure to fit the dictionary, the transform is simply a sparse coding step that shares the same implementation with all dictionary learning objects (see [Sparse coding with a precomputed dictionary](#sparsecoder)).
It is also possible to constrain the dictionary and/or code to be positive to match constraints that may be present in the data. Below are the faces with different positivity constraints applied. Red indicates negative values, blue indicates positive values, and white represents zeros.
The following image shows what a dictionary learned from 4x4 pixel image patches, extracted from part of the image of a raccoon face, looks like.
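As an illustrative sketch (the dataset and parameter choices are arbitrary, not taken from the example referenced above), a small dictionary can be learned from the 8x8 digit images shipped with scikit-learn:

```
from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning

X, _ = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixel values to [0, 1]

# positive_dict=True / positive_code=True would add the positivity
# constraints mentioned above.
dico = DictionaryLearning(n_components=36, alpha=1, max_iter=20,
                          transform_algorithm="omp", random_state=0)
code = dico.fit_transform(X[:200])             # sparse codes U, shape (200, 36)
atoms = dico.components_                       # dictionary V, shape (36, 64)
print(code.shape, atoms.shape)
```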
###
2.5.4.3. Mini-batch dictionary learning
[`MiniBatchDictionaryLearning`](generated/sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning "sklearn.decomposition.MiniBatchDictionaryLearning") implements a faster, but less accurate version of the dictionary learning algorithm that is better suited for large datasets.
By default, [`MiniBatchDictionaryLearning`](generated/sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning "sklearn.decomposition.MiniBatchDictionaryLearning") divides the data into mini-batches and optimizes in an online manner by cycling over the mini-batches for the specified number of iterations. However, at the moment it does not implement a stopping condition.
The estimator also implements `partial_fit`, which updates the dictionary by iterating only once over a mini-batch. This can be used for online learning when the data is not readily available from the start, or for when the data does not fit into the memory.
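A minimal sketch of such online usage (batch shapes and parameters are illustrative):

```
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
dl = MiniBatchDictionaryLearning(n_components=10, batch_size=32, random_state=0)

for _ in range(5):                  # pretend batches arrive over time
    batch = rng.randn(32, 20)
    dl.partial_fit(batch)

print(dl.components_.shape)         # (10, 20)
```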
2.5.5. Factor Analysis
-----------------------
In unsupervised learning we only have a dataset \(X = \{x\_1, x\_2, \dots, x\_n \}\). How can this dataset be described mathematically? A very simple `continuous latent variable` model for \(X\) is
\[x\_i = W h\_i + \mu + \epsilon\] The vector \(h\_i\) is called “latent” because it is unobserved. \(\epsilon\) is considered a noise term distributed according to a Gaussian with mean 0 and covariance \(\Psi\) (i.e. \(\epsilon \sim \mathcal{N}(0, \Psi)\)), \(\mu\) is some arbitrary offset vector. Such a model is called “generative” as it describes how \(x\_i\) is generated from \(h\_i\). If we use all the \(x\_i\)’s as columns to form a matrix \(\mathbf{X}\) and all the \(h\_i\)’s as columns of a matrix \(\mathbf{H}\) then we can write (with suitably defined \(\mathbf{M}\) and \(\mathbf{E}\)):
\[\mathbf{X} = W \mathbf{H} + \mathbf{M} + \mathbf{E}\] In other words, we *decomposed* matrix \(\mathbf{X}\).
If \(h\_i\) is given, the above equation automatically implies the following probabilistic interpretation:
\[p(x\_i|h\_i) = \mathcal{N}(Wh\_i + \mu, \Psi)\] For a complete probabilistic model we also need a prior distribution for the latent variable \(h\). The most straightforward assumption (based on the nice properties of the Gaussian distribution) is \(h \sim \mathcal{N}(0, \mathbf{I})\). This yields a Gaussian as the marginal distribution of \(x\):
\[p(x) = \mathcal{N}(\mu, WW^T + \Psi)\] Now, without any further assumptions the idea of having a latent variable \(h\) would be superfluous – \(x\) can be completely modelled with a mean and a covariance. We need to impose some more specific structure on one of these two parameters. A simple additional assumption regards the structure of the error covariance \(\Psi\):
* \(\Psi = \sigma^2 \mathbf{I}\): This assumption leads to the probabilistic model of [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA").
* \(\Psi = \mathrm{diag}(\psi\_1, \psi\_2, \dots, \psi\_n)\): This model is called [`FactorAnalysis`](generated/sklearn.decomposition.factoranalysis#sklearn.decomposition.FactorAnalysis "sklearn.decomposition.FactorAnalysis"), a classical statistical model. The matrix W is sometimes called the “factor loading matrix”.
Both models essentially estimate a Gaussian with a low-rank covariance matrix. Because both models are probabilistic they can be integrated in more complex models, e.g. Mixture of Factor Analysers. One gets very different models (e.g. [`FastICA`](generated/sklearn.decomposition.fastica#sklearn.decomposition.FastICA "sklearn.decomposition.FastICA")) if non-Gaussian priors on the latent variables are assumed.
Factor analysis *can* produce similar components (the columns of its loading matrix) to [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"). However, one can not make any general statements about these components (e.g. whether they are orthogonal):
The main advantage for Factor Analysis over [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") is that it can model the variance in every direction of the input space independently (heteroscedastic noise):
This allows better model selection than probabilistic PCA in the presence of heteroscedastic noise:
Factor Analysis is often followed by a rotation of the factors (with the parameter `rotation`), usually to improve interpretability. For example, Varimax rotation maximizes the sum of the variances of the squared loadings, i.e., it tends to produce sparser factors, which are influenced by only a few features each (the “simple structure”). See e.g., the first example below.
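A short, hedged sketch on the iris data (parameter choices are illustrative) comparing unrotated and varimax-rotated loadings:

```
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis

X, _ = load_iris(return_X_y=True)
fa = FactorAnalysis(n_components=2, rotation=None, random_state=0).fit(X)
fa_varimax = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(X)

print(fa.components_)          # unrotated loading matrix W, shape (2, 4)
print(fa_varimax.components_)  # rotated loadings tend to be sparser
```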
2.5.6. Independent component analysis (ICA)
--------------------------------------------
Independent component analysis separates a multivariate signal into additive subcomponents that are maximally independent. It is implemented in scikit-learn using the [`Fast ICA`](generated/sklearn.decomposition.fastica#sklearn.decomposition.FastICA "sklearn.decomposition.FastICA") algorithm. Typically, ICA is not used for reducing dimensionality but for separating superimposed signals. Since the ICA model does not include a noise term, for the model to be correct, whitening must be applied. This can be done internally using the whiten argument or manually using one of the PCA variants.
It is classically used to separate mixed signals (a problem known as *blind source separation*), as in the example below:
ICA can also be used as yet another non-linear decomposition that finds components with some sparsity:
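A minimal blind source separation sketch with two synthetic sources (the signals and mixing matrix are made up):

```
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                          # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))                 # source 2: square wave
S = np.c_[s1, s2] + 0.1 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 1.0], [0.5, 2.0]])      # mixing matrix
X = S @ A.T                                 # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                # estimated (unmixed) sources
print(S_est.shape)                          # (2000, 2)
```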
2.5.7. Non-negative matrix factorization (NMF or NNMF)
-------------------------------------------------------
###
2.5.7.1. NMF with the Frobenius norm
[`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") [[1]](#id13) is an alternative approach to decomposition that assumes that the data and the components are non-negative. [`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") can be plugged in instead of [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA") or its variants, in the cases where the data matrix does not contain negative values. It finds a decomposition of samples \(X\) into two matrices \(W\) and \(H\) of non-negative elements, by optimizing the distance \(d\) between \(X\) and the matrix product \(WH\). The most widely used distance function is the squared Frobenius norm, which is an obvious extension of the Euclidean norm to matrices:
\[d\_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||\_{\mathrm{Fro}}^2 = \frac{1}{2} \sum\_{i,j} (X\_{ij} - {Y}\_{ij})^2\] Unlike [`PCA`](generated/sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA"), the representation of a vector is obtained in an additive fashion, by superimposing the components, without subtracting. Such additive models are efficient for representing images and text.
It has been observed in [Hoyer, 2004] [[2]](#id14) that, when carefully constrained, [`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") can produce a parts-based representation of the dataset, resulting in interpretable models. The following example displays 16 sparse components found by [`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") from the images in the Olivetti faces dataset, in comparison with the PCA eigenfaces.
The `init` attribute determines the initialization method applied, which has a great impact on the performance of the method. [`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") implements the method Nonnegative Double Singular Value Decomposition. NNDSVD [[4]](#id15) is based on two SVD processes, one approximating the data matrix, the other approximating positive sections of the resulting partial SVD factors utilizing an algebraic property of unit rank matrices. The basic NNDSVD algorithm is better fit for sparse factorization. Its variants NNDSVDa (in which all zeros are set equal to the mean of all elements of the data), and NNDSVDar (in which the zeros are set to random perturbations less than the mean of the data divided by 100) are recommended in the dense case.
Note that the Multiplicative Update (‘mu’) solver cannot update zeros present in the initialization, so it leads to poorer results when used jointly with the basic NNDSVD algorithm which introduces a lot of zeros; in this case, NNDSVDa or NNDSVDar should be preferred.
[`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") can also be initialized with correctly scaled random non-negative matrices by setting `init="random"`. An integer seed or a `RandomState` can also be passed to `random_state` to control reproducibility.
In [`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF"), L1 and L2 priors can be added to the loss function in order to regularize the model. The L2 prior uses the Frobenius norm, while the L1 prior uses an elementwise L1 norm. As in `ElasticNet`, we control the combination of L1 and L2 with the `l1_ratio` (\(\rho\)) parameter, and the intensity of the regularization with the `alpha_W` and `alpha_H` (\(\alpha\_W\) and \(\alpha\_H\)) parameters. The priors are scaled by the number of samples (\(n\\_samples\)) for `H` and the number of features (\(n\\_features\)) for `W` to keep their impact balanced with respect to one another and to the data fit term as independent as possible of the size of the training set. Then the priors terms are:
\[(\alpha\_W \rho ||W||\_1 + \frac{\alpha\_W(1-\rho)}{2} ||W||\_{\mathrm{Fro}} ^ 2) \* n\\_features + (\alpha\_H \rho ||H||\_1 + \frac{\alpha\_H(1-\rho)}{2} ||H||\_{\mathrm{Fro}} ^ 2) \* n\\_samples\] and the regularized objective function is:
\[d\_{\mathrm{Fro}}(X, WH) + (\alpha\_W \rho ||W||\_1 + \frac{\alpha\_W(1-\rho)}{2} ||W||\_{\mathrm{Fro}} ^ 2) \* n\\_features + (\alpha\_H \rho ||H||\_1 + \frac{\alpha\_H(1-\rho)}{2} ||H||\_{\mathrm{Fro}} ^ 2) \* n\\_samples\]
###
2.5.7.2. NMF with a beta-divergence
As described previously, the most widely used distance function is the squared Frobenius norm, which is an obvious extension of the Euclidean norm to matrices:
\[d\_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||\_{\mathrm{Fro}}^2 = \frac{1}{2} \sum\_{i,j} (X\_{ij} - {Y}\_{ij})^2\] Other distance functions can be used in NMF as, for example, the (generalized) Kullback-Leibler (KL) divergence, also referred to as I-divergence:
\[d\_{KL}(X, Y) = \sum\_{i,j} (X\_{ij} \log(\frac{X\_{ij}}{Y\_{ij}}) - X\_{ij} + Y\_{ij})\] Or, the Itakura-Saito (IS) divergence:
\[d\_{IS}(X, Y) = \sum\_{i,j} (\frac{X\_{ij}}{Y\_{ij}} - \log(\frac{X\_{ij}}{Y\_{ij}}) - 1)\] These three distances are special cases of the beta-divergence family, with \(\beta = 2, 1, 0\) respectively [[6]](#id17). The beta-divergence is defined by:
\[d\_{\beta}(X, Y) = \sum\_{i,j} \frac{1}{\beta(\beta - 1)}(X\_{ij}^\beta + (\beta-1)Y\_{ij}^\beta - \beta X\_{ij} Y\_{ij}^{\beta - 1})\] Note that this definition is not valid if \(\beta \in (0; 1)\), yet it can be continuously extended to the definitions of \(d\_{KL}\) and \(d\_{IS}\) respectively.
[`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF") implements two solvers, using Coordinate Descent (‘cd’) [[5]](#id16), and Multiplicative Update (‘mu’) [[6]](#id17). The ‘mu’ solver can optimize every beta-divergence, including of course the Frobenius norm (\(\beta=2\)), the (generalized) Kullback-Leibler divergence (\(\beta=1\)) and the Itakura-Saito divergence (\(\beta=0\)). Note that for \(\beta \in (1; 2)\), the ‘mu’ solver is significantly faster than for other values of \(\beta\). Note also that with a negative (or 0, i.e. ‘itakura-saito’) \(\beta\), the input matrix cannot contain zero values.
The ‘cd’ solver can only optimize the Frobenius norm. Due to the underlying non-convexity of NMF, the different solvers may converge to different minima, even when optimizing the same distance function.
NMF is best used with the `fit_transform` method, which returns the matrix W. The matrix H is stored into the fitted model in the `components_` attribute; the method `transform` will decompose a new matrix X\_new based on these stored components:
```
>>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import NMF
>>> model = NMF(n_components=2, init='random', random_state=0)
>>> W = model.fit_transform(X)
>>> H = model.components_
>>> X_new = np.array([[1, 0], [1, 6.1], [1, 0], [1, 4], [3.2, 1], [0, 4]])
>>> W_new = model.transform(X_new)
```
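For example, a hedged sketch (re-declaring the toy matrix for completeness) that minimizes the generalized Kullback-Leibler divergence with the ‘mu’ solver and an NNDSVDa initialization:

```
import numpy as np
from sklearn.decomposition import NMF

X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
kl_model = NMF(n_components=2, init="nndsvda", solver="mu",
               beta_loss="kullback-leibler", max_iter=1000, random_state=0)
W = kl_model.fit_transform(X)
H = kl_model.components_
```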
###
2.5.7.3. Mini-batch Non Negative Matrix Factorization
[`MiniBatchNMF`](generated/sklearn.decomposition.minibatchnmf#sklearn.decomposition.MiniBatchNMF "sklearn.decomposition.MiniBatchNMF") [[7]](#id18) implements a faster, but less accurate version of non-negative matrix factorization (i.e. [`NMF`](generated/sklearn.decomposition.nmf#sklearn.decomposition.NMF "sklearn.decomposition.NMF")), better suited for large datasets.
By default, [`MiniBatchNMF`](generated/sklearn.decomposition.minibatchnmf#sklearn.decomposition.MiniBatchNMF "sklearn.decomposition.MiniBatchNMF") divides the data into mini-batches and optimizes the NMF model in an online manner by cycling over the mini-batches for the specified number of iterations. The `batch_size` parameter controls the size of the batches.
In order to speed up the mini-batch algorithm it is also possible to scale past batches, giving them less importance than newer batches. This is done by introducing a so-called forgetting factor, controlled by the `forget_factor` parameter.
The estimator also implements `partial_fit`, which updates `H` by iterating only once over a mini-batch. This can be used for online learning when the data is not readily available from the start, or when the data does not fit into memory.
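A minimal sketch of such online usage with `partial_fit` (data and parameters are illustrative):

```
import numpy as np
from sklearn.decomposition import MiniBatchNMF

rng = np.random.RandomState(0)
mbnmf = MiniBatchNMF(n_components=5, batch_size=48, random_state=0)

for _ in range(10):                 # batches arriving over time
    batch = rng.rand(48, 20)        # non-negative data
    mbnmf.partial_fit(batch)

print(mbnmf.components_.shape)      # H, shape (5, 20)
```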
2.5.8. Latent Dirichlet Allocation (LDA)
-----------------------------------------
Latent Dirichlet Allocation is a generative probabilistic model for collections of discrete data such as text corpora. It is also a topic model that is used for discovering abstract topics from a collection of documents.
The graphical model of LDA is a three-level generative model:
A note on the notation used in the graphical model above, which follows Hoffman et al. (2013):
* The corpus is a collection of \(D\) documents.
* A document is a sequence of \(N\) words.
* There are \(K\) topics in the corpus.
* The boxes represent repeated sampling.
In the graphical model, each node is a random variable and has a role in the generative process. A shaded node indicates an observed variable and an unshaded node indicates a hidden (latent) variable. In this case, words in the corpus are the only data that we observe. The latent variables determine the random mixture of topics in the corpus and the distribution of words in the documents. The goal of LDA is to use the observed words to infer the hidden topic structure.
When modeling text corpora, the model assumes the following generative process for a corpus with \(D\) documents and \(K\) topics, with \(K\) corresponding to `n_components` in the API:
1. For each topic \(k \in K\), draw \(\beta\_k \sim \mathrm{Dirichlet}(\eta)\). This provides a distribution over the words, i.e. the probability of a word appearing in topic \(k\). \(\eta\) corresponds to `topic_word_prior`.
2. For each document \(d \in D\), draw the topic proportions \(\theta\_d \sim \mathrm{Dirichlet}(\alpha)\). \(\alpha\) corresponds to `doc_topic_prior`.
3. For each word \(i\) in document \(d\):
1. Draw the topic assignment \(z\_{di} \sim \mathrm{Multinomial} (\theta\_d)\)
2. Draw the observed word \(w\_{ij} \sim \mathrm{Multinomial} (\beta\_{z\_{di}})\)
For parameter estimation, the posterior distribution is:
\[p(z, \theta, \beta |w, \alpha, \eta) = \frac{p(z, \theta, \beta|\alpha, \eta)}{p(w|\alpha, \eta)}\] Since the posterior is intractable, the variational Bayes method uses a simpler distribution \(q(z,\theta,\beta | \lambda, \phi, \gamma)\) to approximate it; the variational parameters \(\lambda\), \(\phi\), \(\gamma\) are optimized to maximize the Evidence Lower Bound (ELBO):
\[\log\: P(w | \alpha, \eta) \geq L(w,\phi,\gamma,\lambda) \overset{\triangle}{=} E\_{q}[\log\:p(w,z,\theta,\beta|\alpha,\eta)] - E\_{q}[\log\:q(z, \theta, \beta)]\] Maximizing ELBO is equivalent to minimizing the Kullback-Leibler(KL) divergence between \(q(z,\theta,\beta)\) and the true posterior \(p(z, \theta, \beta |w, \alpha, \eta)\).
[`LatentDirichletAllocation`](generated/sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation "sklearn.decomposition.LatentDirichletAllocation") implements the online variational Bayes algorithm and supports both online and batch update methods. While the batch method updates variational variables after each full pass through the data, the online method updates variational variables from mini-batch data points.
Note
Although the online method is guaranteed to converge to a local optimum point, the quality of the optimum point and the speed of convergence may depend on mini-batch size and attributes related to learning rate setting.
When [`LatentDirichletAllocation`](generated/sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation "sklearn.decomposition.LatentDirichletAllocation") is applied on a “document-term” matrix, the matrix will be decomposed into a “topic-term” matrix and a “document-topic” matrix. While the “topic-term” matrix is stored as `components_` in the model, the “document-topic” matrix can be computed by the `transform` method.
[`LatentDirichletAllocation`](generated/sklearn.decomposition.latentdirichletallocation#sklearn.decomposition.LatentDirichletAllocation "sklearn.decomposition.LatentDirichletAllocation") also implements the `partial_fit` method, which can be used when data is fetched sequentially.
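A short, hedged sketch on a toy corpus (the documents and `n_components` value are made up):

```
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "the stock market fell today",
    "investors fear a market crash",
    "the team won the football match",
    "the coach praised the football players",
]

counts = CountVectorizer().fit_transform(corpus)    # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)               # document-topic matrix
print(doc_topic.shape)                              # (4, 2)
print(lda.components_.shape)                        # topic-term matrix
```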
See also [Dimensionality reduction](neighbors#nca-dim-reduction) for dimensionality reduction with Neighborhood Components Analysis.
scikit_learn 1.3. Kernel ridge regression 1.3. Kernel ridge regression
============================
Kernel ridge regression (KRR) [[M2012]](#m2012) combines [Ridge regression and classification](linear_model#ridge-regression) (linear least squares with l2-norm regularization) with the [kernel trick](https://en.wikipedia.org/wiki/Kernel_method). It thus learns a linear function in the space induced by the respective kernel and the data. For non-linear kernels, this corresponds to a non-linear function in the original space.
The form of the model learned by [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") is identical to support vector regression ([`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR")). However, different loss functions are used: KRR uses squared error loss while support vector regression uses \(\epsilon\)-insensitive loss, both combined with l2 regularization. In contrast to [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"), fitting [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") can be done in closed-form and is typically faster for medium-sized datasets. On the other hand, the learned model is non-sparse and thus slower than [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"), which learns a sparse model for \(\epsilon > 0\), at prediction-time.
The following figure compares [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") and [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") on an artificial dataset, which consists of a sinusoidal target function and strong noise added to every fifth datapoint. The learned model of [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") and [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") is plotted, where both complexity/regularization and bandwidth of the RBF kernel have been optimized using grid-search. The learned functions are very similar; however, fitting [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") is approximately seven times faster than fitting [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") (both with grid-search). However, prediction of 100000 target values is more than three times faster with [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") since it has learned a sparse model using only approximately 1/3 of the 100 training datapoints as support vectors.
The next figure compares the time for fitting and prediction of [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") and [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") for different sizes of the training set. Fitting [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") is faster than [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") for medium-sized training sets (less than 1000 samples); however, for larger training sets [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") scales better. With regard to prediction time, [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR") is faster than [`KernelRidge`](generated/sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge") for all sizes of the training set because of the learned sparse solution. Note that the degree of sparsity and thus the prediction time depends on the parameters \(\epsilon\) and \(C\) of the [`SVR`](generated/sklearn.svm.svr#sklearn.svm.SVR "sklearn.svm.SVR"); \(\epsilon = 0\) would correspond to a dense model.
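A minimal sketch along the lines of the comparison above (the data generation and hyperparameters are illustrative, not the grid-searched values from the figures):

```
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = 5 * rng.rand(100, 1)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(X.shape[0] // 5))   # strong noise on every fifth target

krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X, y)
svr = SVR(kernel="rbf", C=1.0, gamma=0.1, epsilon=0.1).fit(X, y)

print(krr.predict(X[:3]))                         # dense model: uses all training points
print(svr.predict(X[:3]), len(svr.support_))      # sparse model: only the support vectors
```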
scikit_learn 1.12. Multiclass and multioutput algorithms 1.12. Multiclass and multioutput algorithms
===========================================
This section of the user guide covers functionality related to multi-learning problems, including [multiclass](https://scikit-learn.org/1.1/glossary.html#term-multiclass), [multilabel](https://scikit-learn.org/1.1/glossary.html#term-multilabel), and [multioutput](https://scikit-learn.org/1.1/glossary.html#term-multioutput) classification and regression.
The modules in this section implement [meta-estimators](https://scikit-learn.org/1.1/glossary.html#term-meta-estimators), which require a base estimator to be provided in their constructor. Meta-estimators extend the functionality of the base estimator to support multi-learning problems, which is accomplished by transforming the multi-learning problem into a set of simpler problems, then fitting one estimator per problem.
This section covers two modules: [`sklearn.multiclass`](classes#module-sklearn.multiclass "sklearn.multiclass") and [`sklearn.multioutput`](classes#module-sklearn.multioutput "sklearn.multioutput"). The chart below demonstrates the problem types that each module is responsible for, and the corresponding meta-estimators that each module provides.
The table below provides a quick reference on the differences between problem types. More detailed explanations can be found in subsequent sections of this guide.
| | Number of targets | Target cardinality | Valid [`type_of_target`](generated/sklearn.utils.multiclass.type_of_target#sklearn.utils.multiclass.type_of_target "sklearn.utils.multiclass.type_of_target") |
| --- | --- | --- | --- |
| Multiclass classification | 1 | >2 | ‘multiclass’ |
| Multilabel classification | >1 | 2 (0 or 1) | ‘multilabel-indicator’ |
| Multiclass-multioutput classification | >1 | >2 | ‘multiclass-multioutput’ |
| Multioutput regression | >1 | Continuous | ‘continuous-multioutput’ |
Below is a summary of scikit-learn estimators that have multi-learning support built-in, grouped by strategy. You don’t need the meta-estimators provided by this section if you’re using one of these estimators. However, meta-estimators can provide additional strategies beyond what is built-in:
* **Inherently multiclass:**
+ [`naive_bayes.BernoulliNB`](generated/sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB "sklearn.naive_bayes.BernoulliNB")
+ [`tree.DecisionTreeClassifier`](generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier")
+ [`tree.ExtraTreeClassifier`](generated/sklearn.tree.extratreeclassifier#sklearn.tree.ExtraTreeClassifier "sklearn.tree.ExtraTreeClassifier")
+ [`ensemble.ExtraTreesClassifier`](generated/sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier")
+ [`naive_bayes.GaussianNB`](generated/sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB")
+ [`neighbors.KNeighborsClassifier`](generated/sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier")
+ [`semi_supervised.LabelPropagation`](generated/sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation "sklearn.semi_supervised.LabelPropagation")
+ [`semi_supervised.LabelSpreading`](generated/sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading "sklearn.semi_supervised.LabelSpreading")
+ [`discriminant_analysis.LinearDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis")
+ [`svm.LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") (setting multi\_class=”crammer\_singer”)
+ [`linear_model.LogisticRegression`](generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") (setting multi\_class=”multinomial”)
+ [`linear_model.LogisticRegressionCV`](generated/sklearn.linear_model.logisticregressioncv#sklearn.linear_model.LogisticRegressionCV "sklearn.linear_model.LogisticRegressionCV") (setting multi\_class=”multinomial”)
+ [`neural_network.MLPClassifier`](generated/sklearn.neural_network.mlpclassifier#sklearn.neural_network.MLPClassifier "sklearn.neural_network.MLPClassifier")
+ [`neighbors.NearestCentroid`](generated/sklearn.neighbors.nearestcentroid#sklearn.neighbors.NearestCentroid "sklearn.neighbors.NearestCentroid")
+ [`discriminant_analysis.QuadraticDiscriminantAnalysis`](generated/sklearn.discriminant_analysis.quadraticdiscriminantanalysis#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis")
+ [`neighbors.RadiusNeighborsClassifier`](generated/sklearn.neighbors.radiusneighborsclassifier#sklearn.neighbors.RadiusNeighborsClassifier "sklearn.neighbors.RadiusNeighborsClassifier")
+ [`ensemble.RandomForestClassifier`](generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier")
+ [`linear_model.RidgeClassifier`](generated/sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier "sklearn.linear_model.RidgeClassifier")
+ [`linear_model.RidgeClassifierCV`](generated/sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV "sklearn.linear_model.RidgeClassifierCV")
* **Multiclass as One-Vs-One:**
+ [`svm.NuSVC`](generated/sklearn.svm.nusvc#sklearn.svm.NuSVC "sklearn.svm.NuSVC")
+ [`svm.SVC`](generated/sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC")
+ [`gaussian_process.GaussianProcessClassifier`](generated/sklearn.gaussian_process.gaussianprocessclassifier#sklearn.gaussian_process.GaussianProcessClassifier "sklearn.gaussian_process.GaussianProcessClassifier") (setting multi\_class = “one\_vs\_one”)
* **Multiclass as One-Vs-The-Rest:**
+ [`ensemble.GradientBoostingClassifier`](generated/sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier")
+ [`gaussian_process.GaussianProcessClassifier`](generated/sklearn.gaussian_process.gaussianprocessclassifier#sklearn.gaussian_process.GaussianProcessClassifier "sklearn.gaussian_process.GaussianProcessClassifier") (setting multi\_class = “one\_vs\_rest”)
+ [`svm.LinearSVC`](generated/sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") (setting multi\_class=”ovr”)
+ [`linear_model.LogisticRegression`](generated/sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") (setting multi\_class=”ovr”)
+ [`linear_model.LogisticRegressionCV`](generated/sklearn.linear_model.logisticregressioncv#sklearn.linear_model.LogisticRegressionCV "sklearn.linear_model.LogisticRegressionCV") (setting multi\_class=”ovr”)
+ [`linear_model.SGDClassifier`](generated/sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier")
+ [`linear_model.Perceptron`](generated/sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron "sklearn.linear_model.Perceptron")
+ [`linear_model.PassiveAggressiveClassifier`](generated/sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier "sklearn.linear_model.PassiveAggressiveClassifier")
* **Support multilabel:**
+ [`tree.DecisionTreeClassifier`](generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier")
+ [`tree.ExtraTreeClassifier`](generated/sklearn.tree.extratreeclassifier#sklearn.tree.ExtraTreeClassifier "sklearn.tree.ExtraTreeClassifier")
+ [`ensemble.ExtraTreesClassifier`](generated/sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier")
+ [`neighbors.KNeighborsClassifier`](generated/sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier")
+ [`neural_network.MLPClassifier`](generated/sklearn.neural_network.mlpclassifier#sklearn.neural_network.MLPClassifier "sklearn.neural_network.MLPClassifier")
+ [`neighbors.RadiusNeighborsClassifier`](generated/sklearn.neighbors.radiusneighborsclassifier#sklearn.neighbors.RadiusNeighborsClassifier "sklearn.neighbors.RadiusNeighborsClassifier")
+ [`ensemble.RandomForestClassifier`](generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier")
+ [`linear_model.RidgeClassifier`](generated/sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier "sklearn.linear_model.RidgeClassifier")
+ [`linear_model.RidgeClassifierCV`](generated/sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV "sklearn.linear_model.RidgeClassifierCV")
* **Support multiclass-multioutput:**
+ [`tree.DecisionTreeClassifier`](generated/sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier")
+ [`tree.ExtraTreeClassifier`](generated/sklearn.tree.extratreeclassifier#sklearn.tree.ExtraTreeClassifier "sklearn.tree.ExtraTreeClassifier")
+ [`ensemble.ExtraTreesClassifier`](generated/sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier")
+ [`neighbors.KNeighborsClassifier`](generated/sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier")
+ [`neighbors.RadiusNeighborsClassifier`](generated/sklearn.neighbors.radiusneighborsclassifier#sklearn.neighbors.RadiusNeighborsClassifier "sklearn.neighbors.RadiusNeighborsClassifier")
+ [`ensemble.RandomForestClassifier`](generated/sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier")
1.12.1. Multiclass classification
----------------------------------
Warning
All classifiers in scikit-learn do multiclass classification out-of-the-box. You don’t need to use the [`sklearn.multiclass`](classes#module-sklearn.multiclass "sklearn.multiclass") module unless you want to experiment with different multiclass strategies.
**Multiclass classification** is a classification task with more than two classes. Each sample can only be labeled as one class.
For example, classification using features extracted from a set of images of fruit, where each image may either be of an orange, an apple, or a pear. Each image is one sample and is labeled as one of the 3 possible classes. Multiclass classification makes the assumption that each sample is assigned to one and only one label - one sample cannot, for example, be both a pear and an apple.
While all scikit-learn classifiers are capable of multiclass classification, the meta-estimators offered by [`sklearn.multiclass`](classes#module-sklearn.multiclass "sklearn.multiclass") permit changing the way they handle more than two classes because this may have an effect on classifier performance (either in terms of generalization error or required computational resources).
###
1.12.1.1. Target format
Valid [multiclass](https://scikit-learn.org/1.1/glossary.html#term-multiclass) representations for [`type_of_target`](generated/sklearn.utils.multiclass.type_of_target#sklearn.utils.multiclass.type_of_target "sklearn.utils.multiclass.type_of_target") (`y`) are:
* 1d or column vector containing more than two discrete values. An example of a vector `y` for 4 samples:
```
>>> import numpy as np
>>> y = np.array(['apple', 'pear', 'apple', 'orange'])
>>> print(y)
['apple' 'pear' 'apple' 'orange']
```
* Dense or sparse [binary](https://scikit-learn.org/1.1/glossary.html#term-binary) matrix of shape `(n_samples, n_classes)` with a single sample per row, where each column represents one class. An example of both a dense and sparse [binary](https://scikit-learn.org/1.1/glossary.html#term-binary) matrix `y` for 4 samples, where the columns, in order, are apple, orange, and pear:
```
>>> import numpy as np
>>> from sklearn.preprocessing import LabelBinarizer
>>> y = np.array(['apple', 'pear', 'apple', 'orange'])
>>> y_dense = LabelBinarizer().fit_transform(y)
>>> print(y_dense)
[[1 0 0]
[0 0 1]
[1 0 0]
[0 1 0]]
>>> from scipy import sparse
>>> y_sparse = sparse.csr_matrix(y_dense)
>>> print(y_sparse)
(0, 0) 1
(1, 2) 1
(2, 0) 1
(3, 1) 1
```
For more information about [`LabelBinarizer`](generated/sklearn.preprocessing.labelbinarizer#sklearn.preprocessing.LabelBinarizer "sklearn.preprocessing.LabelBinarizer"), refer to [Transforming the prediction target (y)](preprocessing_targets#preprocessing-targets).
###
1.12.1.2. OneVsRestClassifier
The **one-vs-rest** strategy, also known as **one-vs-all**, is implemented in [`OneVsRestClassifier`](generated/sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier "sklearn.multiclass.OneVsRestClassifier"). The strategy consists in fitting one classifier per class. For each classifier, the class is fitted against all the other classes. In addition to its computational efficiency (only `n_classes` classifiers are needed), one advantage of this approach is its interpretability. Since each class is represented by one and only one classifier, it is possible to gain knowledge about the class by inspecting its corresponding classifier. This is the most commonly used strategy and is a fair default choice.
Below is an example of multiclass learning using OvR:
```
>>> from sklearn import datasets
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = datasets.load_iris(return_X_y=True)
>>> OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
```
[`OneVsRestClassifier`](generated/sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier "sklearn.multiclass.OneVsRestClassifier") also supports multilabel classification. To use this feature, feed the classifier an indicator matrix, in which cell [i, j] indicates the presence of label j in sample i.
###
1.12.1.3. OneVsOneClassifier
[`OneVsOneClassifier`](generated/sklearn.multiclass.onevsoneclassifier#sklearn.multiclass.OneVsOneClassifier "sklearn.multiclass.OneVsOneClassifier") constructs one classifier per pair of classes. At prediction time, the class which received the most votes is selected. In the event of a tie (among two classes with an equal number of votes), it selects the class with the highest aggregate classification confidence by summing over the pair-wise classification confidence levels computed by the underlying binary classifiers.
Since it requires fitting `n_classes * (n_classes - 1) / 2` classifiers, this method is usually slower than one-vs-the-rest, due to its O(n\_classes^2) complexity. However, this method may be advantageous for algorithms such as kernel algorithms which don’t scale well with `n_samples`. This is because each individual learning problem only involves a small subset of the data whereas, with one-vs-the-rest, the complete dataset is used `n_classes` times. The decision function is the result of a monotonic transformation of the one-versus-one classification.
Below is an example of multiclass learning using OvO:
```
>>> from sklearn import datasets
>>> from sklearn.multiclass import OneVsOneClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = datasets.load_iris(return_X_y=True)
>>> OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
```
###
1.12.1.4. OutputCodeClassifier
Error-Correcting Output Code-based strategies are fairly different from one-vs-the-rest and one-vs-one. With these strategies, each class is represented in a Euclidean space, where each dimension can only be 0 or 1. Another way to put it is that each class is represented by a binary code (an array of 0 and 1). The matrix which keeps track of the location/code of each class is called the code book. The code size is the dimensionality of the aforementioned space. Intuitively, each class should be represented by a code as unique as possible and a good code book should be designed to optimize classification accuracy. In this implementation, we simply use a randomly-generated code book as advocated in [[3]](#id3) although more elaborate methods may be added in the future.
At fitting time, one binary classifier per bit in the code book is fitted. At prediction time, the classifiers are used to project new points in the class space and the class closest to the points is chosen.
In [`OutputCodeClassifier`](generated/sklearn.multiclass.outputcodeclassifier#sklearn.multiclass.OutputCodeClassifier "sklearn.multiclass.OutputCodeClassifier"), the `code_size` attribute allows the user to control the number of classifiers which will be used. It is a percentage of the total number of classes.
A number between 0 and 1 will require fewer classifiers than one-vs-the-rest. In theory, `log2(n_classes) / n_classes` is sufficient to represent each class unambiguously. However, in practice, it may not lead to good accuracy since `log2(n_classes)` is much smaller than `n_classes`.
A number greater than 1 will require more classifiers than one-vs-the-rest. In this case, some classifiers will in theory correct for the mistakes made by other classifiers, hence the name “error-correcting”. In practice, however, this may not happen as classifier mistakes will typically be correlated. The error-correcting output codes have a similar effect to bagging.
Below is an example of multiclass learning using Output-Codes:
```
>>> from sklearn import datasets
>>> from sklearn.multiclass import OutputCodeClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = datasets.load_iris(return_X_y=True)
>>> clf = OutputCodeClassifier(LinearSVC(random_state=0),
... code_size=2, random_state=0)
>>> clf.fit(X, y).predict(X)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1,
1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
```
1.12.2. Multilabel classification
----------------------------------
**Multilabel classification** (closely related to **multioutput** **classification**) is a classification task labeling each sample with `m` labels from `n_classes` possible classes, where `m` can be 0 to `n_classes` inclusive. This can be thought of as predicting properties of a sample that are not mutually exclusive. Formally, a binary output is assigned to each class, for every sample. Positive classes are indicated with 1 and negative classes with 0 or -1. It is thus comparable to running `n_classes` binary classification tasks, for example with [`MultiOutputClassifier`](generated/sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier "sklearn.multioutput.MultiOutputClassifier"). This approach treats each label independently whereas multilabel classifiers *may* treat the multiple classes simultaneously, accounting for correlated behavior among them.
For example, prediction of the topics relevant to a text document or video. The document or video may be about one of ‘religion’, ‘politics’, ‘finance’ or ‘education’, several of the topic classes or all of the topic classes.
###
1.12.2.1. Target format
A valid representation of [multilabel](https://scikit-learn.org/1.1/glossary.html#term-multilabel) `y` is either a dense or sparse [binary](https://scikit-learn.org/1.1/glossary.html#term-binary) matrix of shape `(n_samples, n_classes)`. Each column represents a class. The `1`’s in each row denote the positive classes a sample has been labeled with. An example of a dense matrix `y` for 3 samples:
```
>>> y = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]])
>>> print(y)
[[1 0 0 1]
[0 0 1 1]
[0 0 0 0]]
```
Dense binary matrices can also be created using [`MultiLabelBinarizer`](generated/sklearn.preprocessing.multilabelbinarizer#sklearn.preprocessing.MultiLabelBinarizer "sklearn.preprocessing.MultiLabelBinarizer"). For more information, refer to [Transforming the prediction target (y)](preprocessing_targets#preprocessing-targets).
An example of the same `y` in sparse matrix form:
```
>>> y_sparse = sparse.csr_matrix(y)
>>> print(y_sparse)
(0, 0) 1
(0, 3) 1
(1, 2) 1
(1, 3) 1
```
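As a hedged sketch (the label sets are made up), such a matrix can also be built directly from per-sample label sets with `MultiLabelBinarizer`:

```
from sklearn.preprocessing import MultiLabelBinarizer

y_sets = [{"politics", "finance"}, {"education"}, set()]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_sets)
print(mlb.classes_)   # columns are sorted: ['education' 'finance' 'politics']
print(Y)              # [[0 1 1], [1 0 0], [0 0 0]]
```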
###
1.12.2.2. MultiOutputClassifier
Multilabel classification support can be added to any classifier with [`MultiOutputClassifier`](generated/sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier "sklearn.multioutput.MultiOutputClassifier"). This strategy consists of fitting one classifier per target. This allows multiple target variable classifications. The purpose of this class is to extend estimators to be able to estimate a series of target functions (f1,f2,f3…,fn) that are trained on a single X predictor matrix to predict a series of responses (y1,y2,y3…,yn).
You can find a usage example for [`MultiOutputClassifier`](generated/sklearn.multioutput.multioutputclassifier#sklearn.multioutput.MultiOutputClassifier "sklearn.multioutput.MultiOutputClassifier") as part of the section on [Multiclass-multioutput classification](#multiclass-multioutput-classification) since it is a generalization of multilabel classification to multiclass outputs instead of binary outputs.
###
1.12.2.3. ClassifierChain
Classifier chains (see [`ClassifierChain`](generated/sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain "sklearn.multioutput.ClassifierChain")) are a way of combining a number of binary classifiers into a single multi-label model that is capable of exploiting correlations among targets.
For a multi-label classification problem with N classes, N binary classifiers are assigned an integer between 0 and N-1. These integers define the order of models in the chain. Each classifier is then fit on the available training data plus the true labels of the classes whose models were assigned a lower number.
When predicting, the true labels will not be available. Instead the predictions of each model are passed on to the subsequent models in the chain to be used as features.
Clearly the order of the chain is important. The first model in the chain has no information about the other labels while the last model in the chain has features indicating the presence of all of the other labels. In general one does not know the optimal ordering of the models in the chain so typically many randomly ordered chains are fit and their predictions are averaged together.
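A minimal sketch of a single randomly ordered chain on synthetic multilabel data (the base estimator and parameters are illustrative):

```
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=100, n_classes=4, random_state=0)
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0)
chain.fit(X, Y)
print(chain.predict(X[:3]).shape)   # (3, 4): one binary label per class
```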
1.12.3. Multiclass-multioutput classification
----------------------------------------------
**Multiclass-multioutput classification** (also known as **multitask classification**) is a classification task which labels each sample with a set of **non-binary** properties. Both the number of properties and the number of classes per property are greater than 2. A single estimator thus handles several joint classification tasks. This is both a generalization of the multi*label* classification task, which only considers binary attributes, as well as a generalization of the multi*class* classification task, where only one property is considered.
For example, classification of the properties “type of fruit” and “colour” for a set of images of fruit. The property “type of fruit” has the possible classes: “apple”, “pear” and “orange”. The property “colour” has the possible classes: “green”, “red”, “yellow” and “orange”. Each sample is an image of a fruit, a label is output for both properties and each label is one of the possible classes of the corresponding property.
Note that all classifiers handling multiclass-multioutput (also known as multitask classification) tasks support the multilabel classification task as a special case. Multitask classification is similar to the multioutput classification task with different model formulations. For more information, see the relevant estimator documentation.
Below is an example of multiclass-multioutput classification:
```
>>> from sklearn.datasets import make_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.utils import shuffle
>>> import numpy as np
>>> X, y1 = make_classification(n_samples=10, n_features=100,
... n_informative=30, n_classes=3,
... random_state=1)
>>> y2 = shuffle(y1, random_state=1)
>>> y3 = shuffle(y1, random_state=2)
>>> Y = np.vstack((y1, y2, y3)).T
>>> n_samples, n_features = X.shape # 10,100
>>> n_outputs = Y.shape[1] # 3
>>> n_classes = 3
>>> forest = RandomForestClassifier(random_state=1)
>>> multi_target_forest = MultiOutputClassifier(forest, n_jobs=2)
>>> multi_target_forest.fit(X, Y).predict(X)
array([[2, 2, 0],
[1, 2, 1],
[2, 1, 0],
[0, 0, 2],
[0, 2, 1],
[0, 0, 2],
[1, 1, 0],
[1, 1, 1],
[0, 0, 2],
[2, 0, 0]])
```
Warning
At present, no metric in [`sklearn.metrics`](classes#module-sklearn.metrics "sklearn.metrics") supports the multiclass-multioutput classification task.
###
1.12.3.1. Target format
A valid representation of [multioutput](https://scikit-learn.org/1.1/glossary.html#term-multioutput) `y` is a dense matrix of shape `(n_samples, n_classes)` of class labels, i.e. a column-wise concatenation of 1d [multiclass](https://scikit-learn.org/1.1/glossary.html#term-multiclass) variables. An example of `y` for 3 samples:
```
>>> y = np.array([['apple', 'green'], ['orange', 'orange'], ['pear', 'green']])
>>> print(y)
[['apple' 'green']
['orange' 'orange']
['pear' 'green']]
```
1.12.4. Multioutput regression
-------------------------------
**Multioutput regression** predicts multiple numerical properties for each sample. Each property is a numerical variable and the number of properties to be predicted for each sample is greater than or equal to 2. Some estimators that support multioutput regression are faster than just running `n_output` estimators.
For example, prediction of both wind speed and wind direction, in degrees, using data obtained at a certain location. Each sample would be data obtained at one location and both wind speed and direction would be output for each sample.
###
1.12.4.1. Target format
A valid representation of [multioutput](https://scikit-learn.org/1.1/glossary.html#term-multioutput) `y` is a dense matrix of shape `(n_samples, n_output)` of floats, i.e. a column-wise concatenation of [continuous](https://scikit-learn.org/1.1/glossary.html#term-continuous) variables. An example of `y` for 3 samples:
```
>>> y = np.array([[31.4, 94], [40.5, 109], [25.0, 30]])
>>> print(y)
[[ 31.4 94. ]
[ 40.5 109. ]
[ 25. 30. ]]
```
###
1.12.4.2. MultiOutputRegressor
Multioutput regression support can be added to any regressor with [`MultiOutputRegressor`](generated/sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor"). This strategy consists of fitting one regressor per target. Since each target is represented by exactly one regressor it is possible to gain knowledge about the target by inspecting its corresponding regressor. As [`MultiOutputRegressor`](generated/sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor") fits one regressor per target it can not take advantage of correlations between targets.
Below is an example of multioutput regression:
```
>>> from sklearn.datasets import make_regression
>>> from sklearn.multioutput import MultiOutputRegressor
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_regression(n_samples=10, n_targets=3, random_state=1)
>>> MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X, y).predict(X)
array([[-154.75474165, -147.03498585, -50.03812219],
[ 7.12165031, 5.12914884, -81.46081961],
[-187.8948621 , -100.44373091, 13.88978285],
[-141.62745778, 95.02891072, -191.48204257],
[ 97.03260883, 165.34867495, 139.52003279],
[ 123.92529176, 21.25719016, -7.84253 ],
[-122.25193977, -85.16443186, -107.12274212],
[ -30.170388 , -94.80956739, 12.16979946],
[ 140.72667194, 176.50941682, -17.50447799],
[ 149.37967282, -81.15699552, -5.72850319]])
```
###
1.12.4.3. RegressorChain
Regressor chains (see [`RegressorChain`](generated/sklearn.multioutput.regressorchain#sklearn.multioutput.RegressorChain "sklearn.multioutput.RegressorChain")) are analogous to [`ClassifierChain`](generated/sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain "sklearn.multioutput.ClassifierChain") as a way of combining a number of regressions into a single multi-target model that is capable of exploiting correlations among targets.
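A minimal sketch on synthetic multi-target data (the base estimator and chain ordering are illustrative):

```
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import RegressorChain

X, y = make_regression(n_samples=50, n_targets=3, random_state=0)
chain = RegressorChain(Ridge(), order=[0, 1, 2])   # fixed chain order
chain.fit(X, y)
print(chain.predict(X[:2]).shape)                  # (2, 3)
```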
| programming_docs |