---
title: Metric types
sort_rank: 2
---
# Metric types
The Prometheus client libraries offer four core metric types. These are
currently only differentiated in the client libraries (to enable APIs tailored
to the usage of the specific types) and in the wire protocol. The Prometheus
server does not yet make use of the type information and flattens all data into
untyped time series. This may change in the future.
## Counter
A _counter_ is a cumulative metric that represents a single [monotonically
increasing counter](https://en.wikipedia.org/wiki/Monotonic_function) whose
value can only increase or be reset to zero on restart. For example, you can
use a counter to represent the number of requests served, tasks completed, or
errors.
Do not use a counter to expose a value that can decrease. For example, do not
use a counter for the number of currently running processes; instead use a gauge.
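As an illustration of these semantics only (this is a minimal sketch, not the client library API — see the links below for real usage), a counter can be modeled in Go like this:

```go
package main

import "fmt"

// counter is a minimal sketch of counter semantics: the value can only
// increase, or be reset to zero (e.g. on process restart).
type counter struct{ value float64 }

func (c *counter) Inc() { c.value++ }

// Add rejects negative increments, since a counter must never decrease.
func (c *counter) Add(v float64) {
	if v < 0 {
		panic("counter cannot decrease")
	}
	c.value += v
}

// Reset models the restart case: the value starts again from zero.
func (c *counter) Reset() { c.value = 0 }

func (c *counter) Value() float64 { return c.value }

func main() {
	var served counter // e.g. total requests served
	served.Inc()
	served.Add(4)
	fmt.Println(served.Value()) // 5
}
```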
Client library usage documentation for counters:
* [Go](http://godoc.org/github.com/prometheus/client_golang/prometheus#Counter)
* [Java](https://github.com/prometheus/client_java#counter)
* [Python](https://prometheus.github.io/client_python/instrumenting/counter/)
* [Ruby](https://github.com/prometheus/client_ruby#counter)
* [.Net](https://github.com/prometheus-net/prometheus-net#counters)
* [Rust](https://docs.rs/prometheus-client/latest/prometheus_client/metrics/counter/index.html)
## Gauge
A _gauge_ is a metric that represents a single numerical value that can
arbitrarily go up and down.
Gauges are typically used for measured values like temperatures or current
memory usage, but also "counts" that can go up and down, like the number of
concurrent requests.
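By contrast with a counter, a gauge can move in both directions. As a minimal sketch of these semantics (not the client library API):

```go
package main

import "fmt"

// gauge is a minimal sketch of gauge semantics: the value can be set
// directly and can go up and down arbitrarily.
type gauge struct{ value float64 }

func (g *gauge) Set(v float64)  { g.value = v }
func (g *gauge) Inc()           { g.value++ }
func (g *gauge) Dec()           { g.value-- }
func (g *gauge) Value() float64 { return g.value }

func main() {
	var inFlight gauge // e.g. number of concurrent requests
	inFlight.Inc()     // a request arrives
	inFlight.Inc()     // another request arrives
	inFlight.Dec()     // one request finishes
	fmt.Println(inFlight.Value()) // 1
}
```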
Client library usage documentation for gauges:
* [Go](http://godoc.org/github.com/prometheus/client_golang/prometheus#Gauge)
* [Java](https://github.com/prometheus/client_java#gauge)
* [Python](https://prometheus.github.io/client_python/instrumenting/gauge/)
* [Ruby](https://github.com/prometheus/client_ruby#gauge)
* [.Net](https://github.com/prometheus-net/prometheus-net#gauges)
* [Rust](https://docs.rs/prometheus-client/latest/prometheus_client/metrics/gauge/index.html)
## Histogram
A _histogram_ samples observations (usually things like request durations or
response sizes) and counts them in configurable buckets. It also provides a sum
of all observed values.
A histogram with a base metric name of `<basename>` exposes multiple time series
during a scrape:
* cumulative counters for the observation buckets, exposed as `<basename>_bucket{le="<upper inclusive bound>"}`
* the **total sum** of all observed values, exposed as `<basename>_sum`
* the **count** of events that have been observed, exposed as `<basename>_count` (identical to `<basename>_bucket{le="+Inf"}` above)
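How these series relate can be sketched in Go (this mirrors the exposition format described above, not the client_golang API; the bucket bounds are made up for illustration):

```go
package main

import (
	"fmt"
	"math"
)

// histogram is a minimal sketch of the exposition above: cumulative
// le-buckets plus a sum and count of observations.
type histogram struct {
	bounds []float64 // upper inclusive bounds, ending with +Inf
	counts []uint64  // cumulative per-bucket counts (_bucket series)
	sum    float64   // _sum series
	count  uint64    // _count series
}

func newHistogram(bounds []float64) *histogram {
	bounds = append(bounds, math.Inf(1)) // implicit +Inf bucket
	return &histogram{bounds: bounds, counts: make([]uint64, len(bounds))}
}

// observe increments every bucket whose upper bound is >= v, so counts
// are cumulative and the +Inf bucket always equals the total count.
func (h *histogram) observe(v float64) {
	for i, le := range h.bounds {
		if v <= le {
			h.counts[i]++
		}
	}
	h.sum += v
	h.count++
}

func main() {
	h := newHistogram([]float64{0.1, 0.5, 1})
	for _, v := range []float64{0.05, 0.3, 2.0} {
		h.observe(v)
	}
	// _bucket{le="0.1"} 1, {le="0.5"} 2, {le="1"} 2, {le="+Inf"} 3
	fmt.Println(h.counts, h.count)
}
```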
Use the
[`histogram_quantile()` function](/docs/prometheus/latest/querying/functions/#histogram_quantile)
to calculate quantiles from histograms or even aggregations of histograms. A
histogram is also suitable to calculate an
[Apdex score](http://en.wikipedia.org/wiki/Apdex). When operating on buckets,
remember that the histogram is
[cumulative](https://en.wikipedia.org/wiki/Histogram#Cumulative_histogram). See
[histograms and summaries](/docs/practices/histograms) for details of histogram
usage and differences to [summaries](#summary).
NOTE: Beginning with Prometheus v2.40, there is experimental support for native
histograms. A native histogram requires only one time series, which includes a
dynamic number of buckets in addition to the sum and count of
observations. Native histograms allow much higher resolution at a fraction of
the cost. Detailed documentation will follow once native histograms are closer
to becoming a stable feature.
Client library usage documentation for histograms:
* [Go](http://godoc.org/github.com/prometheus/client_golang/prometheus#Histogram)
* [Java](https://github.com/prometheus/client_java#histogram)
* [Python](https://prometheus.github.io/client_python/instrumenting/histogram/)
* [Ruby](https://github.com/prometheus/client_ruby#histogram)
* [.Net](https://github.com/prometheus-net/prometheus-net#histogram)
* [Rust](https://docs.rs/prometheus-client/latest/prometheus_client/metrics/histogram/index.html)
## Summary
Similar to a _histogram_, a _summary_ samples observations (usually things like
request durations and response sizes). While it also provides a total count of
observations and a sum of all observed values, it calculates configurable
quantiles over a sliding time window.
A summary with a base metric name of `<basename>` exposes multiple time series
during a scrape:
* streaming **φ-quantiles** (0 ≤ φ ≤ 1) of observed events, exposed as `<basename>{quantile="<φ>"}`
* the **total sum** of all observed values, exposed as `<basename>_sum`
* the **count** of events that have been observed, exposed as `<basename>_count`
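What a φ-quantile means can be illustrated with a naive nearest-rank sketch. Real client libraries compute streaming quantiles with bounded error over a sliding window instead of retaining every observation; the function below is illustrative only:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// quantile returns the nearest-rank φ-quantile of the observations in
// the current window. Naive sketch: it keeps and sorts all samples.
func quantile(phi float64, obs []float64) float64 {
	sorted := append([]float64(nil), obs...)
	sort.Float64s(sorted)
	rank := int(math.Ceil(phi*float64(len(sorted)))) - 1
	if rank < 0 {
		rank = 0
	}
	return sorted[rank]
}

func main() {
	// Hypothetical request durations observed in the window, in seconds.
	durations := []float64{0.1, 0.2, 0.3, 0.4, 1.5}
	fmt.Println(quantile(0.5, durations)) // 0.3 (the median)
	fmt.Println(quantile(0.9, durations)) // 1.5
}
```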
See [histograms and summaries](/docs/practices/histograms) for
detailed explanations of φ-quantiles, summary usage, and differences
to [histograms](#histogram).
Client library usage documentation for summaries:
* [Go](http://godoc.org/github.com/prometheus/client_golang/prometheus#Summary)
* [Java](https://github.com/prometheus/client_java#summary)
* [Python](https://prometheus.github.io/client_python/instrumenting/summary/)
* [Ruby](https://github.com/prometheus/client_ruby#summary)
* [.Net](https://github.com/prometheus-net/prometheus-net#summary)
---
title: Understanding metric types
sort_rank: 2
---
# Types of metrics
Prometheus supports four types of metrics, which are
- Counter
- Gauge
- Histogram
- Summary
## Counter
A counter is a metric whose value can only increase or be reset to zero; it can never drop below its previous value. It can be used for metrics like the number of requests, the number of errors, etc.
Type the query below into the query bar and click Execute.
`go_gc_duration_seconds_count`
![Counter](/assets/tutorial/counter_example.png)
The `rate()` function in PromQL takes the history of a metric over a time frame and calculates how fast the value is increasing per second. `rate()` is applicable to counters only.
`rate(go_gc_duration_seconds_count[5m])`
![Rate Counter](/assets/tutorial/rate_example.png)
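The arithmetic behind `rate()` can be sketched for a single pair of counter samples. The real function also handles counter resets across many samples and extrapolates to the range boundaries; this is an illustrative sketch only:

```go
package main

import "fmt"

// rate computes the per-second increase between two counter samples
// taken `seconds` apart, mirroring what rate() does for one pair of
// points.
func rate(v1, v2, seconds float64) float64 {
	if v2 < v1 { // counter reset: the counter started again from zero
		v1 = 0
	}
	return (v2 - v1) / seconds
}

func main() {
	// Two scrapes 300s (5m) apart: the GC count went from 120 to 180.
	fmt.Println(rate(120, 180, 300)) // 0.2 increases per second
}
```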
## Gauge
Gauge is a number which can either go up or down. It can be used for metrics like the number of pods in a cluster, the number of events in a queue, etc.
`go_memstats_heap_alloc_bytes`
![Gauge](/assets/tutorial/gauge_example.png)
PromQL functions like `max_over_time`, `min_over_time` and `avg_over_time` can be used on gauge metrics.
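What these `*_over_time` functions compute can be sketched as plain aggregations over the samples that fall inside the selected range (illustrative only; the sample values below are made up):

```go
package main

import "fmt"

// maxOverTime mirrors max_over_time: the largest sample in the range.
func maxOverTime(samples []float64) float64 {
	m := samples[0]
	for _, v := range samples[1:] {
		if v > m {
			m = v
		}
	}
	return m
}

// avgOverTime mirrors avg_over_time: the mean of samples in the range.
func avgOverTime(samples []float64) float64 {
	sum := 0.0
	for _, v := range samples {
		sum += v
	}
	return sum / float64(len(samples))
}

func main() {
	// Hypothetical heap usage samples scraped during the range, in bytes.
	heap := []float64{1.0e6, 3.0e6, 2.0e6}
	fmt.Println(maxOverTime(heap)) // 3e+06
	fmt.Println(avgOverTime(heap)) // 2e+06
}
```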
## Histogram
A histogram is a more complex metric type than the previous two. A histogram counts observed values into configurable buckets, whose boundaries are chosen by the developer. A common example is the time it takes to reply to a request, called latency.
Example: Let's assume we want to observe the time taken to process API requests. Instead of storing the request time for each request, histograms allow us to store them in buckets. We define buckets for time taken, for example `le 0.3` (less than or equal to 0.3), `le 0.5`, `le 0.7`, `le 1`, and `le 1.2`. Once the time taken for a request is measured, it is added to the count of every bucket whose upper boundary is greater than or equal to the measured value.
Let's say Request 1 for endpoint “/ping” takes 0.25s. The count values for the buckets will be:
> /ping
| Bucket | Count |
| --------- | ----- |
| 0 - 0.3 | 1 |
| 0 - 0.5 | 1 |
| 0 - 0.7 | 1 |
| 0 - 1 | 1 |
| 0 - 1.2 | 1 |
| 0 - +Inf | 1 |
Note: +Inf bucket is added by default.
(Since the histogram is cumulative, 1 is added to every bucket whose upper boundary is greater than or equal to the measured value.)
Request 2 for endpoint “/ping” takes 0.4s. The count values for the buckets will now be:
> /ping
| Bucket | Count |
| --------- | ----- |
| 0 - 0.3 | 1 |
| 0 - 0.5 | 2 |
| 0 - 0.7 | 2 |
| 0 - 1 | 2 |
| 0 - 1.2 | 2 |
| 0 - +Inf | 2 |
Since 0.4 is greater than 0.3 but less than or equal to 0.5, the 0.5 bucket and every larger bucket increase their counts.
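The bucket-filling rule described above can be sketched in Go. The `bucketCounts` helper below is hypothetical (not part of any Prometheus library) and reproduces the two tables for requests taking 0.25s and 0.4s:

```go
package main

import "fmt"

// bucketCounts applies the cumulative rule from the tables above: each
// observation increments every bucket whose upper bound is greater than
// or equal to the value; the last slot is the implicit +Inf bucket.
func bucketCounts(bounds []float64, observations []float64) []int {
	counts := make([]int, len(bounds)+1)
	for _, v := range observations {
		for i, le := range bounds {
			if v <= le {
				counts[i]++
			}
		}
		counts[len(bounds)]++ // +Inf catches every observation
	}
	return counts
}

func main() {
	bounds := []float64{0.3, 0.5, 0.7, 1, 1.2}
	// Request 1 takes 0.25s, request 2 takes 0.4s.
	fmt.Println(bucketCounts(bounds, []float64{0.25, 0.4})) // [1 2 2 2 2 2]
}
```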
Let's explore a histogram metric from the Prometheus UI and apply a few functions.
`prometheus_http_request_duration_seconds_bucket{handler="/graph"}`
![Histogram](/assets/tutorial/histogram_example.png)
The `histogram_quantile()` function can be used to calculate quantiles from a histogram:
`histogram_quantile(0.9, prometheus_http_request_duration_seconds_bucket{handler="/graph"})`
![Histogram Quantile](/assets/tutorial/histogram_quantile_example.png)
The graph shows that the 90th percentile is 0.09. To find the quantile over the last 5m, combine `histogram_quantile()` with `rate()` and a range selector:
`histogram_quantile(0.9, rate(prometheus_http_request_duration_seconds_bucket{handler="/graph"}[5m]))`
![Histogram Quantile Rate](/assets/tutorial/histogram_rate_example.png)
## Summary
Summaries also measure events and are an alternative to histograms. They are cheaper but lose more data. They are calculated on the application level, hence aggregation of metrics from multiple instances of the same process is not possible. They are used when the buckets of a metric are not known beforehand, but it is highly recommended to use histograms over summaries whenever possible.
In this tutorial, we covered the types of metrics in detail and a few PromQL operations like `rate()`, `histogram_quantile()`, etc.
---
title: Getting Started with Prometheus
sort_rank: 1
---
# What is Prometheus?
Prometheus is a systems monitoring and alerting toolkit. It was open-sourced by SoundCloud in 2012 and is the second project, after Kubernetes, both to join and to graduate within the Cloud Native Computing Foundation. Prometheus stores all metrics data as time series, i.e. metric values are stored along with the timestamp at which they were recorded. Optional key-value pairs called labels can also be stored along with metrics.
# What are metrics and why are they important?
Metrics, in layperson terms, are a standard for measurement. What we want to measure varies from application to application: for a web server it can be request times, for a database it can be CPU usage or the number of active connections, etc.
Metrics play an important role in understanding why your application is working in a certain way. If you run a web application and someone tells you that the application is slow, you will need some information to find out what is happening. For example, the application can become slow when the number of requests is high. If you have the request count metric, you can spot the reason and increase the number of servers to handle the load. Whenever you define metrics for your application, put on your detective hat and ask this question: **what information will be important for me to debug if an issue occurs in my application?**
# Basic Architecture of Prometheus
The basic components of a Prometheus setup are:
- Prometheus Server (the server which scrapes and stores the metrics data).
- Targets to be scraped, for example an instrumented application that exposes its metrics, or an exporter that exposes metrics of another application.
- Alertmanager to raise alerts based on preset rules.
(Note: apart from these, Prometheus has a push gateway, which is not covered here.)
![Architecture](/assets/tutorial/architecture.png)
Let's consider a web server as an example application, from which we want to extract a certain metric, like the number of API calls processed by the web server. We add instrumentation code using a Prometheus client library and expose the metric information. Now that our web server exposes its metrics, we can configure Prometheus to scrape it: Prometheus fetches the metrics from the web server, which is listening on xyz IP address at port 7500, at a specific time interval, say every minute.
At 11:00:00, when I make the server public for consumption, the application calculates the request count and exposes it; Prometheus simultaneously scrapes the count metric and stores the value as 0.
By 11:01:00 one request is processed. The instrumentation logic in the server increments the count to 1. When Prometheus scrapes the metric the value of count is 1 now.
By 11:02:00 two more requests are processed and the request count is now 1 + 2 = 3. Metrics are scraped and stored in the same way.
The user can control the frequency at which metrics are scraped by Prometheus.
| Time Stamp | Request Count (metric) |
| ---------- | ---------------------- |
| 11:00:00 | 0 |
| 11:01:00 | 1 |
| 11:02:00 | 3 |
(Note: This table is just a representation for understanding purposes. Prometheus doesn’t store the values in this exact format)
Prometheus also has an API which allows querying the metrics that have been stored by scraping. This API is used to query the metrics, create dashboards/charts on them, etc. PromQL is the language used to query these metrics.
A simple line chart created on the request count metric will look like this:
![Graph](/assets/tutorial/sample_graph.png)
One can scrape multiple useful metrics to understand what is happening in the application and create multiple charts on them. Group the charts into a dashboard and use it to get an overview of the application.
# Show me how it is done
Let's get our hands dirty and set up Prometheus. Prometheus is written in [Go](https://golang.org/) and all you need is the binary compiled for your operating system. Download the binary corresponding to your operating system from [here](https://prometheus.io/download/) and add it to your path.
Prometheus exposes its own metrics which can be consumed by itself or another Prometheus server.
Now that we have Prometheus installed, the next step is to run it. All we need is the binary and a configuration file. Prometheus uses YAML files for configuration.
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
```
In the above configuration file we have set the `scrape_interval`, i.e. how frequently we want Prometheus to scrape the metrics. We have added `scrape_configs`, which has a job name and a target to scrape metrics from. Prometheus listens on port 9090 by default, so we add it to the targets.
> prometheus --config.file=prometheus.yml
<iframe width="560" height="315" src="https://www.youtube.com/embed/ioa0eISf1Q0" frameborder="0" allowfullscreen></iframe>
Now we have Prometheus up and running and scraping its own metrics every 15s. Prometheus has standard exporters available to export metrics. Next we will run a node exporter, which is an exporter for machine metrics, and scrape it with Prometheus. ([Download node metrics exporter.](https://prometheus.io/download/#node_exporter))
Run the node exporter in a terminal.
`./node_exporter`
![Node exporter](/assets/tutorial/node_exporter.png)
Next, add the node exporter to the list of `scrape_configs`:
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: node_exporter
    static_configs:
      - targets: ["localhost:9100"]
```
<iframe width="560" height="315" src="https://www.youtube.com/embed/hM5bp53C7Y8" frameborder="0" allowfullscreen></iframe>
In this tutorial we discussed what metrics are and why they are important, the basic architecture of Prometheus, and how to run Prometheus.
---
title: Instrumenting HTTP server written in Go
sort_rank: 3
---
In this tutorial we will create a simple Go HTTP server and instrument it by adding a counter
metric to keep count of the total number of requests processed by the server.
Here we have a simple HTTP server with `/ping` endpoint which returns `pong` as response.
```go
package main

import (
	"fmt"
	"net/http"
)

func ping(w http.ResponseWriter, req *http.Request) {
	fmt.Fprintf(w, "pong")
}

func main() {
	http.HandleFunc("/ping", ping)
	http.ListenAndServe(":8090", nil)
}
```
Compile and run the server:
```bash
go build server.go
./server
```
Now open `http://localhost:8090/ping` in your browser and you should see `pong`.
[](/assets/tutorial/server.png)
Now let's add a metric to the server to instrument the number of requests made to the ping endpoint. The counter metric type is suitable for this, as we know the request count doesn't go down and only increases.
Create a Prometheus counter
```go
var pingCounter = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "ping_request_count",
Help: "No of request handled by Ping handler",
},
)
```
Next, let's update the ping handler to increase the count of the counter using `pingCounter.Inc()`.
```go
func ping(w http.ResponseWriter, req *http.Request) {
pingCounter.Inc()
fmt.Fprintf(w, "pong")
}
```
Then register the counter with the default registry and expose the metrics.
```go
func main() {
prometheus.MustRegister(pingCounter)
http.HandleFunc("/ping", ping)
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(":8090", nil)
}
```
The `prometheus.MustRegister` function registers the `pingCounter` with the default registry.
To expose the metrics, the Go Prometheus client library provides the `promhttp` package.
`promhttp.Handler()` returns an `http.Handler` that exposes the metrics registered in the default registry.
The complete sample code, with the required `client_golang` imports, looks like this:
```go
package main
import (
"fmt"
"net/http"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
var pingCounter = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "ping_request_count",
Help: "No of request handled by Ping handler",
},
)
func ping(w http.ResponseWriter, req *http.Request) {
pingCounter.Inc()
fmt.Fprintf(w, "pong")
}
func main() {
prometheus.MustRegister(pingCounter)
http.HandleFunc("/ping", ping)
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(":8090", nil)
}
```
Run the example
```sh
go mod init prom_example
go mod tidy
go run server.go
```
Now hit the `localhost:8090/ping` endpoint a couple of times; sending a request to `localhost:8090/metrics` will then show the metrics.
[](/assets/tutorial/ping_metric.png)
Here `ping_request_count` shows that the `/ping` endpoint was called 3 times.
The default registry comes with a collector for Go runtime metrics, which is why we see other metrics like `go_threads`, `go_goroutines`, etc.
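After a few pings, the counter appears in the scrape output in the Prometheus text exposition format. A scrape after three pings would include lines like these (illustrative; the surrounding runtime metrics and exact value will vary):

```
# HELP ping_request_count No of request handled by Ping handler
# TYPE ping_request_count counter
ping_request_count 3
```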
We have built our first metric exporter. Let’s update our Prometheus config to scrape the metrics from our server.
```yaml
global:
scrape_interval: 15s
scrape_configs:
- job_name: prometheus
static_configs:
- targets: ["localhost:9090"]
- job_name: simple_server
static_configs:
- targets: ["localhost:8090"]
```
```bash
prometheus --config.file=prometheus.yml
```
<iframe width="560" height="315" src="https://www.youtube.com/embed/yQIWgZoiW0o" frameborder="0" allowfullscreen></iframe> | prometheus | title Instrumenting HTTP server written in Go sort rank 3 In this tutorial we will create a simple Go HTTP server and instrumentation it by adding a counter metric to keep count of the total number of requests processed by the server Here we have a simple HTTP server with ping endpoint which returns pong as response go package main import fmt net http func ping w http ResponseWriter req http Request fmt Fprintf w pong func main http HandleFunc ping ping http ListenAndServe 8090 nil Compile and run the server bash go build server go server Now open http localhost 8090 ping in your browser and you must see pong Server assets tutorial server png assets tutorial server png Now lets add a metric to the server which will instrument the number of requests made to the ping endpoint the counter metric type is suitable for this as we know the request count doesn t go down and only increases Create a Prometheus counter go var pingCounter prometheus NewCounter prometheus CounterOpts Name ping request count Help No of request handled by Ping handler Next lets update the ping Handler to increase the count of the counter using pingCounter Inc go func ping w http ResponseWriter req http Request pingCounter Inc fmt Fprintf w pong Then register the counter to the Default Register and expose the metrics go func main prometheus MustRegister pingCounter http HandleFunc ping ping http Handle metrics promhttp Handler http ListenAndServe 8090 nil The prometheus MustRegister function registers the pingCounter to the default Register To expose the metrics the Go Prometheus client library provides the promhttp package promhttp Handler provides a http Handler which exposes the metrics registered in the Default Register The sample code depends on the go package main import fmt net http github com prometheus client golang prometheus github com prometheus 
title: Remote write tuning
sort_rank: 8
---
# Remote write tuning
Prometheus implements sane defaults for remote write, but many users have
different requirements and would like to optimize their remote settings.
This page describes the tuning parameters available via the [remote write
configuration](/docs/prometheus/latest/configuration/configuration/#remote_write).
## Remote write characteristics
Each remote write destination starts a queue which reads from the write-ahead
log (WAL), writes the samples into an in-memory queue owned by a shard, which
then sends a request to the configured endpoint. The flow of data looks like:
```
|--> queue (shard_1) --> remote endpoint
WAL --|--> queue (shard_...) --> remote endpoint
|--> queue (shard_n) --> remote endpoint
```
When one shard backs up and fills its queue, Prometheus will block reading from
the WAL into any shards. Failures will be retried without loss of data unless
the remote endpoint remains down for more than 2 hours. After 2 hours, the WAL
will be compacted and data that has not been sent will be lost.
During operation, Prometheus will continuously calculate the optimal number of
shards to use based on the incoming sample rate, number of outstanding samples
not sent, and time taken to send each sample.
### Resource usage
Using remote write increases the memory footprint of Prometheus. Most users
report ~25% increased memory usage, but that number is dependent on the shape
of the data. For each series in the WAL, the remote write code caches a mapping
of series ID to label values, so large amounts of series churn can
significantly increase memory usage.
In addition to the series cache, each shard and its queue increases memory
usage. Shard memory is proportional to the `number of shards * (capacity +
max_samples_per_send)`. When tuning, consider reducing `max_shards` alongside
increases to `capacity` and `max_samples_per_send` to avoid inadvertently
running out of memory. The default values for `capacity: 10000` and
`max_samples_per_send: 2000` will constrain shard memory usage to less than 2
MB per shard.
Remote write will also increase CPU and network usage. However, for the same
reasons as above, it is difficult to predict by how much. It is generally a
good practice to check for CPU and network saturation if your Prometheus server
falls behind sending samples via remote write
(`prometheus_remote_storage_samples_pending`).
## Parameters
All the relevant parameters are found under the `queue_config` section of the
remote write configuration.
### `capacity`
Capacity controls how many samples are queued in memory per shard before
blocking reading from the WAL. Once the WAL is blocked, samples cannot be
appended to any shards and all throughput will cease.
Capacity should be high enough to avoid blocking other shards in most
cases, but too much capacity can cause excess memory consumption and longer
times to clear queues during resharding. It is recommended to set capacity
to 3-10 times `max_samples_per_send`.
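As a sketch of that rule of thumb, a `queue_config` using the defaults quoted above keeps `capacity` at 5x `max_samples_per_send`, within the 3-10x guidance (the endpoint URL here is a placeholder):

```yaml
remote_write:
  - url: "https://remote-storage.example.com/api/v1/write"
    queue_config:
      capacity: 10000            # 5x max_samples_per_send
      max_samples_per_send: 2000
```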
### `max_shards`
Max shards configures the maximum number of shards, or parallelism, Prometheus
will use for each remote write queue. Prometheus will try not to use too many
shards, but if the queue falls behind the remote write component will increase
the number of shards up to max shards to increase throughput. Unless remote
writing to a very slow endpoint, it is unlikely that `max_shards` should be
increased beyond the default. However, it may be necessary to reduce max shards
if there is potential to overwhelm the remote endpoint, or to reduce memory
usage when data is backed up.
### `min_shards`
Min shards configures the minimum number of shards used by Prometheus, and is
the number of shards used when remote write starts. If remote write falls
behind, Prometheus will automatically scale up the number of shards so most
users do not have to adjust this parameter. However, increasing min shards will
allow Prometheus to avoid falling behind at the beginning while calculating the
required number of shards.
### `max_samples_per_send`
Max samples per send can be adjusted depending on the backend in use. Many
systems work very well by sending more samples per batch without a significant
increase in latency. Other backends will have issues if trying to send a large
number of samples in each request. The default value is small enough to work for
most systems.
### `batch_send_deadline`
Batch send deadline sets the maximum amount of time between sends for a single
shard. Even if a shard's queue has not reached `max_samples_per_send`, a
request will be sent. Batch send deadline can be increased for low volume
systems that are not latency sensitive in order to increase request efficiency.
### `min_backoff`
Min backoff controls the minimum amount of time to wait before retrying a failed
request. Increasing the backoff spreads out requests when a remote endpoint
comes back online. The backoff interval is doubled for each failed request, up
to `max_backoff`.
### `max_backoff`
Max backoff controls the maximum amount of time to wait before retrying a failed
request. | prometheus | title Remote write tuning sort rank 8 Remote write tuning Prometheus implements sane defaults for remote write but many users have different requirements and would like to optimize their remote settings This page describes the tuning parameters available via the remote write configuration docs prometheus latest configuration configuration remote write Remote write characteristics Each remote write destination starts a queue which reads from the write ahead log WAL writes the samples into an in memory queue owned by a shard which then sends a request to the configured endpoint The flow of data looks like queue shard 1 remote endpoint WAL queue shard remote endpoint queue shard n remote endpoint When one shard backs up and fills its queue Prometheus will block reading from the WAL into any shards Failures will be retried without loss of data unless the remote endpoint remains down for more than 2 hours After 2 hours the WAL will be compacted and data that has not been sent will be lost During operation Prometheus will continuously calculate the optimal number of shards to use based on the incoming sample rate number of outstanding samples not sent and time taken to send each sample Resource usage Using remote write increases the memory footprint of Prometheus Most users report 25 increased memory usage but that number is dependent on the shape of the data For each series in the WAL the remote write code caches a mapping of series ID to label values causing large amounts of series churn to significantly increase memory usage In addition to the series cache each shard and its queue increases memory usage Shard memory is proportional to the number of shards capacity max samples per send When tuning consider reducing max shards alongside increases to capacity and max samples per send to avoid inadvertently running out of memory The default values for capacity 10000 and max samples per send 2000 will constrain shard memory usage to less than 2 MB 
title: Histograms and summaries
sort_rank: 4
---
NOTE: This document predates native histograms (added as an experimental
feature in Prometheus v2.40). Once native histograms are closer to becoming a
stable feature, this document will be thoroughly updated.
# Histograms and summaries
Histograms and summaries are more complex metric types. Not only does
a single histogram or summary create a multitude of time series, it is
also more difficult to use these metric types correctly. This section
helps you to pick and configure the appropriate metric type for your
use case.
## Library support
First of all, check the library support for
[histograms](/docs/concepts/metric_types/#histogram) and
[summaries](/docs/concepts/metric_types/#summary).
Some libraries support only one of the two types, or they support summaries
only in a limited fashion (lacking [quantile calculation](#quantiles)).
## Count and sum of observations
Histograms and summaries both sample observations, typically request
durations or response sizes. They track the number of observations
*and* the sum of the observed values, allowing you to calculate the
*average* of the observed values. Note that the number of observations
(showing up in Prometheus as a time series with a `_count` suffix) is
inherently a counter (as described above, it only goes up). The sum of
observations (showing up as a time series with a `_sum` suffix)
behaves like a counter, too, as long as there are no negative
observations. Obviously, request durations or response sizes are
never negative. In principle, however, you can use summaries and
histograms to observe negative values (e.g. temperatures in
centigrade). In that case, the sum of observations can go down, so you
cannot apply `rate()` to it anymore. In those rare cases where you need to
apply `rate()` and cannot avoid negative observations, you can use two
separate summaries, one for positive and one for negative observations
(the latter with inverted sign), and combine the results later with suitable
PromQL expressions.
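As a sketch of that workaround, with two hypothetical summaries `temperature_pos` and `temperature_neg` (the latter observing negative values with inverted sign), the rate of the overall sum can be recombined as:

```
rate(temperature_pos_sum[5m]) - rate(temperature_neg_sum[5m])
```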
To calculate the average request duration during the last 5 minutes
from a histogram or summary called `http_request_duration_seconds`,
use the following expression:
rate(http_request_duration_seconds_sum[5m])
/
rate(http_request_duration_seconds_count[5m])
## Apdex score
A straightforward use of histograms (but not summaries) is to count
observations falling into particular buckets of observation
values.
You might have an SLO to serve 95% of requests within 300ms. In that
case, configure a histogram to have a bucket with an upper limit of
0.3 seconds. You can then directly express the relative amount of
requests served within 300ms and easily alert if the value drops below
0.95. The following expression calculates it by job for the requests
served in the last 5 minutes. The request durations were collected with
a histogram called `http_request_duration_seconds`.
sum(rate(http_request_duration_seconds_bucket{le="0.3"}[5m])) by (job)
/
sum(rate(http_request_duration_seconds_count[5m])) by (job)
You can approximate the well-known [Apdex
score](http://en.wikipedia.org/wiki/Apdex) in a similar way. Configure
a bucket with the target request duration as the upper bound and
another bucket with the tolerated request duration (usually 4 times
the target request duration) as the upper bound. Example: The target
request duration is 300ms. The tolerable request duration is 1.2s. The
following expression yields the Apdex score for each job over the last
5 minutes:
(
sum(rate(http_request_duration_seconds_bucket{le="0.3"}[5m])) by (job)
+
sum(rate(http_request_duration_seconds_bucket{le="1.2"}[5m])) by (job)
) / 2 / sum(rate(http_request_duration_seconds_count[5m])) by (job)
Note that we divide the sum of both buckets. The reason is that the histogram
buckets are
[cumulative](https://en.wikipedia.org/wiki/Histogram#Cumulative_histogram). The
`le="0.3"` bucket is also contained in the `le="1.2"` bucket; dividing it by 2
corrects for that.
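To see why halving works, note that the PromQL expression above is algebraically identical to the classic Apdex formula (satisfied requests plus half the tolerating requests, over total). A small stdlib-only Go check with made-up counts:

```go
package main

import "fmt"

// apdexFromCumulative mirrors the PromQL expression above:
// (cumulative count <= target + cumulative count <= tolerated) / 2 / total.
func apdexFromCumulative(leTarget, leTolerated, total float64) float64 {
	return (leTarget + leTolerated) / 2 / total
}

// apdexClassic is the textbook score: (satisfied + tolerating/2) / total,
// where "tolerating" requests exceed the target but stay within the
// tolerated duration.
func apdexClassic(leTarget, leTolerated, total float64) float64 {
	tolerating := leTolerated - leTarget
	return (leTarget + tolerating/2) / total
}

func main() {
	// 700 of 1000 requests within 300ms, 900 within 1.2s (cumulative).
	fmt.Println(apdexFromCumulative(700, 900, 1000)) // 0.8
	fmt.Println(apdexClassic(700, 900, 1000))        // 0.8
}
```

Since the `le="1.2"` bucket already contains the satisfied requests, adding the `le="0.3"` bucket counts them twice; dividing by 2 restores the half-weight for the tolerating requests.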
The calculation does not exactly match the traditional Apdex score, as it
includes errors in the satisfied and tolerable parts of the calculation.
## Quantiles
You can use both summaries and histograms to calculate so-called φ-quantiles,
where 0 ≤ φ ≤ 1. The φ-quantile is the observation value that ranks at number
φ*N among the N observations. Examples for φ-quantiles: The 0.5-quantile is
known as the median. The 0.95-quantile is the 95th percentile.
The essential difference between summaries and histograms is that summaries
calculate streaming φ-quantiles on the client side and expose them directly,
while histograms expose bucketed observation counts and the calculation of
quantiles from the buckets of a histogram happens on the server side using the
[`histogram_quantile()`
function](/docs/prometheus/latest/querying/functions/#histogram_quantile).
The two approaches have a number of different implications:
| | Histogram | Summary
|---|-----------|---------
| Required configuration | Pick buckets suitable for the expected range of observed values. | Pick desired φ-quantiles and sliding window. Other φ-quantiles and sliding windows cannot be calculated later.
| Client performance | Observations are very cheap as they only need to increment counters. | Observations are expensive due to the streaming quantile calculation.
| Server performance | The server has to calculate quantiles. You can use [recording rules](/docs/prometheus/latest/configuration/recording_rules/#recording-rules) should the ad-hoc calculation take too long (e.g. in a large dashboard). | Low server-side cost.
| Number of time series (in addition to the `_sum` and `_count` series) | One time series per configured bucket. | One time series per configured quantile.
| Quantile error (see below for details) | Error is limited in the dimension of observed values by the width of the relevant bucket. | Error is limited in the dimension of φ by a configurable value.
| Specification of φ-quantile and sliding time-window | Ad-hoc with [Prometheus expressions](/docs/prometheus/latest/querying/functions/#histogram_quantile). | Preconfigured by the client.
| Aggregation | Ad-hoc with [Prometheus expressions](/docs/prometheus/latest/querying/functions/#histogram_quantile). | In general [not aggregatable](http://latencytipoftheday.blogspot.de/2014/06/latencytipoftheday-you-cant-average.html).
Note the importance of the last item in the table. Let us return to
the SLO of serving 95% of requests within 300ms. This time, you do not
want to display the percentage of requests served within 300ms, but
instead the 95th percentile, i.e. the request duration within which
you have served 95% of requests. To do that, you can either configure
a summary with a 0.95-quantile and (for example) a 5-minute decay
time, or you configure a histogram with a few buckets around the 300ms
mark, e.g. `{le="0.1"}`, `{le="0.2"}`, `{le="0.3"}`, and
`{le="0.45"}`. If your service runs replicated with a number of
instances, you will collect request durations from every single one of
them, and then you want to aggregate everything into an overall 95th
percentile. However, aggregating the precomputed quantiles from a
summary rarely makes sense. In this particular case, averaging the
quantiles yields statistically nonsensical values.
avg(http_request_duration_seconds{quantile="0.95"}) // BAD!
Using histograms, the aggregation is perfectly possible with the
[`histogram_quantile()`
function](/docs/prometheus/latest/querying/functions/#histogram_quantile).
histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) // GOOD.
Furthermore, should your SLO change and you now want to plot the 90th
percentile, or you want to take into account the last 10 minutes
instead of the last 5 minutes, you only have to adjust the expression
above and you do not need to reconfigure the clients.
## Errors of quantile estimation
Quantiles, whether calculated client-side or server-side, are
estimated. It is important to understand the errors of that
estimation.
Continuing the histogram example from above, imagine your usual
request durations are almost all very close to 220ms, or in other
words, if you could plot the "true" histogram, you would see a very
sharp spike at 220ms. In the Prometheus histogram metric as configured
above, almost all observations, and therefore also the 95th percentile,
will fall into the bucket labeled `{le="0.3"}`, i.e. the bucket from
200ms to 300ms. The histogram implementation guarantees that the true
95th percentile is somewhere between 200ms and 300ms. To return a
single value (rather than an interval), it applies linear
interpolation, which yields 295ms in this case. The calculated
quantile gives you the impression that you are close to breaching the
SLO, but in reality, the 95th percentile is a tiny bit above 220ms,
a quite comfortable distance to your SLO.
Next step in our thought experiment: A change in backend routing
adds a fixed amount of 100ms to all request durations. Now the request
duration has its sharp spike at 320ms and almost all observations will
fall into the bucket from 300ms to 450ms. The 95th percentile is
calculated to be 442.5ms, although the correct value is close to
320ms. While you are only a tiny bit outside of your SLO, the
calculated 95th quantile looks much worse.
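Both numbers can be reproduced with the linear interpolation described above. The following stdlib-only Go sketch mimics, in simplified form, what `histogram_quantile()` does inside the bucket containing the requested rank (the real function additionally requires a `+Inf` bucket and handles several edge cases):

```go
package main

import "fmt"

// bucket is a cumulative histogram bucket: an upper bound (le) and the
// cumulative count of observations <= that bound.
type bucket struct {
	le    float64
	count float64
}

// quantile finds the bucket containing the q-rank and interpolates
// linearly within it, as histogram_quantile() does (simplified sketch).
func quantile(q float64, buckets []bucket) float64 {
	total := buckets[len(buckets)-1].count
	rank := q * total
	lowerBound, lowerCount := 0.0, 0.0
	for _, b := range buckets {
		if b.count >= rank {
			return lowerBound + (b.le-lowerBound)*(rank-lowerCount)/(b.count-lowerCount)
		}
		lowerBound, lowerCount = b.le, b.count
	}
	return lowerBound
}

func main() {
	// All 1000 observations fall between 200ms and 300ms (spike at 220ms).
	spikeAt220 := []bucket{{0.1, 0}, {0.2, 0}, {0.3, 1000}, {0.45, 1000}}
	fmt.Printf("%.4f\n", quantile(0.95, spikeAt220)) // 0.2950

	// After adding 100ms: all observations between 300ms and 450ms.
	spikeAt320 := []bucket{{0.1, 0}, {0.2, 0}, {0.3, 0}, {0.45, 1000}}
	fmt.Printf("%.4f\n", quantile(0.95, spikeAt320)) // 0.4425
}
```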
A summary would have had no problem calculating the correct percentile
value in both cases, at least if it uses an appropriate algorithm on
the client side (like the [one used by the Go
client](http://dimacs.rutgers.edu/~graham/pubs/slides/bquant-long.pdf)).
Unfortunately, you cannot use a summary if you need to aggregate the
observations from a number of instances.
Luckily, due to your appropriate choice of bucket boundaries, even in
this contrived example of very sharp spikes in the distribution of
observed values, the histogram was able to identify correctly if you
were within or outside of your SLO. Also, the closer the actual value
of the quantile is to our SLO (or in other words, the value we are
actually most interested in), the more accurate the calculated value
becomes.
Let us now modify the experiment once more. In the new setup, the
distribution of request durations has a spike at 150ms, but it is not
quite as sharp as before and only comprises 90% of the
observations. 10% of the observations are evenly spread out in a long
tail between 150ms and 450ms. With that distribution, the 95th
percentile happens to be exactly at our SLO of 300ms. With the
histogram, the calculated value is accurate, as the value of the 95th
percentile happens to coincide with one of the bucket boundaries. Even
slightly different values would still be accurate as the (contrived)
even distribution within the relevant buckets is exactly what the
linear interpolation within a bucket assumes.
The error of the quantile reported by a summary gets more interesting
now. The error of the quantile in a summary is configured in the
dimension of φ. In our case we might have configured 0.95±0.01,
i.e. the calculated value will be between the 94th and 96th
percentile. The 94th quantile with the distribution described above is
270ms, the 96th quantile is 330ms. The calculated value of the 95th
percentile reported by the summary can be anywhere in the interval
between 270ms and 330ms, which unfortunately is all the difference
between clearly within the SLO vs. clearly outside the SLO.
The bottom line is: If you use a summary, you control the error in the
dimension of φ. If you use a histogram, you control the error in the
dimension of the observed value (via choosing the appropriate bucket
layout). With a broad distribution, small changes in φ result in
large deviations in the observed value. With a sharp distribution, a
small interval of observed values covers a large interval of φ.
Two rules of thumb:
1. If you need to aggregate, choose histograms.
2. Otherwise, choose a histogram if you have an idea of the range
and distribution of values that will be observed. Choose a
summary if you need an accurate quantile, no matter what the
range and distribution of the values is.
## What can I do if my client library does not support the metric type I need?
Implement it! [Code contributions are welcome](/community/). In general, we
expect histograms to be more urgently needed than summaries. Histograms are
also easier to implement in a client library, so we recommend implementing
histograms first, if in doubt. | prometheus | title Histograms and summaries sort rank 4 NOTE This document predates native histograms added as an experimental feature in Prometheus v2 40 Once native histograms are closer to becoming a stable feature this document will be thoroughly updated Histograms and summaries Histograms and summaries are more complex metric types Not only does a single histogram or summary create a multitude of time series it is also more difficult to use these metric types correctly This section helps you to pick and configure the appropriate metric type for your use case Library support First of all check the library support for histograms docs concepts metric types histogram and summaries docs concepts metric types summary Some libraries support only one of the two types or they support summaries only in a limited fashion lacking quantile calculation quantiles Count and sum of observations Histograms and summaries both sample observations typically request durations or response sizes They track the number of observations and the sum of the observed values allowing you to calculate the average of the observed values Note that the number of observations showing up in Prometheus as a time series with a count suffix is inherently a counter as described above it only goes up The sum of observations showing up as a time series with a sum suffix behaves like a counter too as long as there are no negative observations Obviously request durations or response sizes are never negative In principle however you can use summaries and histograms to observe negative values e g temperatures in centigrade In that case the sum of observations can go down so you cannot apply rate to it anymore In those rare cases where you need to apply rate and cannot avoid negative observations you can use two separate summaries one for positive and one for negative observations the latter with inverted sign and combine the results later with suitable PromQL expressions To 
will fall into the bucket labeled le 0 3 i e the bucket from 200ms to 300ms The histogram implementation guarantees that the true 95th percentile is somewhere between 200ms and 300ms To return a single value rather than an interval it applies linear interpolation which yields 295ms in this case The calculated quantile gives you the impression that you are close to breaching the SLO but in reality the 95th percentile is a tiny bit above 220ms a quite comfortable distance to your SLO Next step in our thought experiment A change in backend routing adds a fixed amount of 100ms to all request durations Now the request duration has its sharp spike at 320ms and almost all observations will fall into the bucket from 300ms to 450ms The 95th percentile is calculated to be 442 5ms although the correct value is close to 320ms While you are only a tiny bit outside of your SLO the calculated 95th quantile looks much worse A summary would have had no problem calculating the correct percentile value in both cases at least if it uses an appropriate algorithm on the client side like the one used by the Go client http dimacs rutgers edu graham pubs slides bquant long pdf Unfortunately you cannot use a summary if you need to aggregate the observations from a number of instances Luckily due to your appropriate choice of bucket boundaries even in this contrived example of very sharp spikes in the distribution of observed values the histogram was able to identify correctly if you were within or outside of your SLO Also the closer the actual value of the quantile is to our SLO or in other words the value we are actually most interested in the more accurate the calculated value becomes Let us now modify the experiment once more In the new setup the distributions of request durations has a spike at 150ms but it is not quite as sharp as before and only comprises 90 of the observations 10 of the observations are evenly spread out in a long tail between 150ms and 450ms With that distribution 
the 95th percentile happens to be exactly at our SLO of 300ms With the histogram the calculated value is accurate as the value of the 95th percentile happens to coincide with one of the bucket boundaries Even slightly different values would still be accurate as the contrived even distribution within the relevant buckets is exactly what the linear interpolation within a bucket assumes The error of the quantile reported by a summary gets more interesting now The error of the quantile in a summary is configured in the dimension of In our case we might have configured 0 95 0 01 i e the calculated value will be between the 94th and 96th percentile The 94th quantile with the distribution described above is 270ms the 96th quantile is 330ms The calculated value of the 95th percentile reported by the summary can be anywhere in the interval between 270ms and 330ms which unfortunately is all the difference between clearly within the SLO vs clearly outside the SLO The bottom line is If you use a summary you control the error in the dimension of If you use a histogram you control the error in the dimension of the observed value via choosing the appropriate bucket layout With a broad distribution small changes in result in large deviations in the observed value With a sharp distribution a small interval of observed values covers a large interval of Two rules of thumb 1 If you need to aggregate choose histograms 2 Otherwise choose a histogram if you have an idea of the range and distribution of values that will be observed Choose a summary if you need an accurate quantile no matter what the range and distribution of the values is What can I do if my client library does not support the metric type I need Implement it Code contributions are welcome community In general we expect histograms to be more urgently needed than summaries Histograms are also easier to implement in a client library so we recommend to implement histograms first if in doubt |
---
title: Instrumentation
sort_rank: 3
---
# Instrumentation
This page provides an opinionated set of guidelines for instrumenting your code.
## How to instrument
The short answer is to instrument everything. Every library, subsystem and
service should have at least a few metrics to give you a rough idea of how it is
performing.
Instrumentation should be an integral part of your code. Instantiate the metric
classes in the same file you use them. This makes going from alert to console to code
easy when you are chasing an error.
### The three types of services
For monitoring purposes, services can generally be broken down into three types:
online-serving, offline-processing, and batch jobs. There is overlap between
them, but every service tends to fit well into one of these categories.
#### Online-serving systems
An online-serving system is one where a human or another system is expecting an
immediate response. For example, most database and HTTP requests fall into
this category.
The key metrics in such a system are the number of performed queries, errors,
and latency. The number of in-progress requests can also be useful.
For counting failed queries, see section [Failures](#failures) below.
Online-serving systems should be monitored on both the client and server side.
If the two sides see different behaviors, that is very useful information for debugging.
If a service has many clients, it is not practical for the service to track them
individually, so clients have to rely on their own stats.
Be consistent in whether you count queries when they start or when they end.
When they end is suggested, as it will line up with the error and latency stats,
and tends to be easier to code.
#### Offline processing
For offline processing, no one is actively waiting for a response, and batching
of work is common. There may also be multiple stages of processing.
For each stage, track the items coming in, how many are in progress, the last
time you processed something, and how many items were sent out. If batching, you
should also track batches going in and out.
Knowing the last time that a system processed something is useful for detecting if it has stalled,
but it is very localised information. A better approach is to send a heartbeat
through the system: some dummy item that gets passed all the way through
and includes the timestamp when it was inserted. Each stage can export the most
recent heartbeat timestamp it has seen, letting you know how long items are
taking to propagate through the system. For systems that do not have quiet
periods where no processing occurs, an explicit heartbeat may not be needed.
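A heartbeat stage can be sketched with the Python client as follows — the metric name and item format are assumptions for illustration:

```python
import time
from prometheus_client import CollectorRegistry, Gauge

registry = CollectorRegistry()

HEARTBEAT_TS = Gauge(
    "pipeline_last_heartbeat_timestamp_seconds",
    "Insertion timestamp of the most recent heartbeat seen by this stage.",
    registry=registry)

def process(item):
    if item.get("heartbeat"):
        # Export the timestamp carried by the heartbeat itself,
        # not the time since it was inserted.
        HEARTBEAT_TS.set(item["inserted_at"])
        return
    # ... normal processing ...

process({"heartbeat": True, "inserted_at": time.time() - 3.0})
```

The propagation delay to this stage is then available in PromQL as `time() - pipeline_last_heartbeat_timestamp_seconds`.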
#### Batch jobs
There is a fuzzy line between offline-processing and batch jobs, as offline
processing may be done in batch jobs. Batch jobs are distinguished by the
fact that they do not run continuously, which makes scraping them difficult.
The key metric of a batch job is the last time it succeeded. It is also useful to track
how long each major stage of the job took, the overall runtime and the last
time the job completed (successful or failed). These are all gauges, and should
be [pushed to a PushGateway](/docs/instrumenting/pushing/).
There are generally also some overall job-specific statistics that would be
useful to track, such as the total number of records processed.
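A sketch of a batch job exporting these gauges and pushing them, assuming a Pushgateway at `localhost:9091` (the record count is a placeholder):

```python
import time
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
LAST_SUCCESS = Gauge("batch_last_success_timestamp_seconds",
                     "Unix time the job last succeeded.", registry=registry)
DURATION = Gauge("batch_duration_seconds",
                 "How long the last run took.", registry=registry)
RECORDS = Gauge("batch_records_processed",
                "Records processed in the last run.", registry=registry)

start = time.time()
# ... run the job ...
RECORDS.set(12345)
DURATION.set(time.time() - start)
LAST_SUCCESS.set_to_current_time()

try:
    push_to_gateway("localhost:9091", job="nightly_batch", registry=registry)
except OSError:
    pass  # no Pushgateway reachable in this sketch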
For batch jobs that take more than a few minutes to run, it is useful to also
scrape them using pull-based monitoring. This lets you track the same metrics over time
as for other types of jobs, such as resource usage and latency when talking to other
systems. This can aid debugging if the job starts to get slow.
For batch jobs that run very often (say, more often than every 15 minutes), you should
consider converting them into daemons and handling them as offline-processing jobs.
### Subsystems
In addition to the three main types of services, systems have sub-parts that
should also be monitored.
#### Libraries
Libraries should provide instrumentation with no additional configuration
required by users.
If it is a library used to access some resource outside of the process (for example,
network, disk, or IPC), track the overall query count, errors (if errors are possible)
and latency at a minimum.
Depending on how heavy the library is, track internal errors and
latency within the library itself, and any general statistics you think may be
useful.
A library may be used by multiple independent parts of an application against
different resources, so take care to distinguish uses with labels where
appropriate. For example, a database connection pool should distinguish the databases
it is talking to, whereas there is no need to differentiate
between users of a DNS client library.
#### Logging
As a general rule, for every line of logging code you should also have a
counter that is incremented. If you find an interesting log message, you want to
be able to see how often it has been happening and for how long.
If there are multiple closely-related log messages in the same function (for example,
different branches of an if or switch statement), it can sometimes make sense to
increment a single counter for all of them.
It is also generally useful to export the total number of info/error/warning
lines that were logged by the application as a whole, and check for significant
differences as part of your release process.
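One way to wire this up in Python is a `logging.Handler` that increments a counter for every record — a sketch, with an assumed metric name:

```python
import logging
from prometheus_client import CollectorRegistry, Counter

registry = CollectorRegistry()
LOG_LINES = Counter("log_lines", "Lines logged, by level.",
                    ["level"], registry=registry)

class CountingHandler(logging.Handler):
    """Increments log_lines for every record that is logged."""
    def emit(self, record):
        LOG_LINES.labels(level=record.levelname.lower()).inc()

logger = logging.getLogger("myapp")
logger.addHandler(CountingHandler())

logger.warning("queue is backing up")
logger.error("upstream unreachable")
```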
#### Failures
Failures should be handled similarly to logging. Every time there is a failure, a
counter should be incremented. Unlike logging, the error may also bubble up to a
more general error counter depending on how your code is structured.
When reporting failures, you should generally have some other metric
representing the total number of attempts. This makes the failure ratio easy to calculate.
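A sketch of the attempts-plus-failures pattern in Python (metric names are illustrative):

```python
from prometheus_client import CollectorRegistry, Counter

registry = CollectorRegistry()
ATTEMPTS = Counter("db_queries", "Total database queries attempted.",
                   registry=registry)
FAILURES = Counter("db_query_failures", "Total database queries that failed.",
                   registry=registry)

def query(fail=False):
    ATTEMPTS.inc()  # every attempt is counted
    try:
        if fail:
            raise RuntimeError("connection lost")
    except RuntimeError:
        FAILURES.inc()  # failures are counted separately

query()
query(fail=True)
```

The failure ratio is then easy to express in PromQL, for example `rate(db_query_failures_total[5m]) / rate(db_queries_total[5m])`.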
#### Threadpools
For any sort of threadpool, the key metrics are the number of queued requests, the number of
threads in use, the total number of threads, the number of tasks processed, and how long they took.
It is also useful to track how long things were waiting in the queue.
#### Caches
The key metrics for a cache are total queries, hits, overall latency and then
the query count, errors and latency of whatever online-serving system the cache is in front of.
#### Collectors
When implementing a non-trivial custom metrics collector, it is advised to export a
gauge for how long the collection took in seconds and another for the number of
errors encountered.
This is one of the two cases when it is okay to export a duration as a gauge
rather than a summary or a histogram, the other being batch job durations. This
is because both represent information about that particular push/scrape, rather
than tracking multiple durations over time.
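A custom collector exporting these two gauges might look like the following sketch in Python, where the external system and metric names are assumptions:

```python
import time
from prometheus_client import CollectorRegistry
from prometheus_client.core import GaugeMetricFamily

class MyCollector:
    """Gathers stats from some external system on every scrape."""
    def collect(self):
        errors = 0
        start = time.time()
        try:
            pass  # ... gather stats from the external system here ...
        except Exception:
            errors += 1
        # Duration and errors describe this particular scrape,
        # so gauges are appropriate.
        yield GaugeMetricFamily(
            "my_collector_collect_errors",
            "Errors encountered during the last collection.", value=errors)
        yield GaugeMetricFamily(
            "my_collector_duration_seconds",
            "How long the last collection took.", value=time.time() - start)

registry = CollectorRegistry()
registry.register(MyCollector())
```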
## Things to watch out for
There are some general things to be aware of when doing monitoring, and also
Prometheus-specific ones in particular.
### Use labels
Few monitoring systems have the notion of labels and an expression language to
take advantage of them, so it takes a bit of getting used to.
When you have multiple metrics that you want to add/average/sum, they should
usually be one metric with labels rather than multiple metrics.
For example, rather than `http_responses_500_total` and `http_responses_403_total`,
create a single metric called `http_responses_total` with a `code` label
for the HTTP response code. You can then process the entire metric as one in
rules and graphs.
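With the Python client, this is a single counter with a `code` label rather than one counter per status code:

```python
from prometheus_client import CollectorRegistry, Counter

registry = CollectorRegistry()

# One metric with a label, instead of http_responses_500_total,
# http_responses_403_total, and so on.
RESPONSES = Counter("http_responses", "HTTP responses, by status code.",
                    ["code"], registry=registry)

RESPONSES.labels(code="200").inc()
RESPONSES.labels(code="500").inc()
```

In PromQL, `sum(rate(http_responses_total[5m]))` then covers all codes at once, while `http_responses_total{code="500"}` selects a single one.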
As a rule of thumb, no part of a metric name should ever be procedurally
generated (use labels instead). The one exception is when proxying metrics
from another monitoring/instrumentation system.
See also the [naming](/docs/practices/naming/) section.
### Do not overuse labels
Each labelset is an additional time series that has RAM, CPU, disk, and network
costs. Usually the overhead is negligible, but in scenarios with lots of
metrics and hundreds of labelsets across hundreds of servers, this can add up
quickly.
As a general guideline, try to keep the cardinality of your metrics below 10,
and for metrics that exceed that, aim to limit them to a handful across your
whole system. The vast majority of your metrics should have no labels.
If you have a metric that has a cardinality over 100 or the potential to grow
that large, investigate alternate solutions such as reducing the number of
dimensions or moving the analysis away from monitoring and to a general-purpose
processing system.
To give you a better idea of the underlying numbers, let's look at node\_exporter.
node\_exporter exposes metrics for every mounted filesystem. Every node will have
in the tens of timeseries for, say, `node_filesystem_avail`. If you have
10,000 nodes, you will end up with roughly 100,000 timeseries for
`node_filesystem_avail`, which is fine for Prometheus to handle.
If you were to now add quota per user, you would quickly reach a double digit
number of millions with 10,000 users on 10,000 nodes. This is too much for the
current implementation of Prometheus. Even with smaller numbers, there's an
opportunity cost as you can't have other, potentially more useful metrics on
this machine any more.
If you are unsure, start with no labels and add more labels over time as
concrete use cases arise.
### Counter vs. gauge, summary vs. histogram
It is important to know which of the four main metric types to use for
a given metric.
To pick between counter and gauge, there is a simple rule of thumb: if
the value can go down, it is a gauge.
Counters can only go up (and reset, such as when a process restarts). They are
useful for accumulating the number of events, or the amount of something at
each event. For example, the total number of HTTP requests, or the total number of
bytes sent in HTTP requests. Raw counters are rarely useful. Use the
`rate()` function to get the per-second rate at which they are increasing.
Gauges can be set, go up, and go down. They are useful for snapshots of state,
such as in-progress requests, free/total memory, or temperature. You should
never take a `rate()` of a gauge.
Summaries and histograms are more complex metric types discussed in
[their own section](/docs/practices/histograms/).
### Timestamps, not time since
If you want to track the amount of time since something happened, export the
Unix timestamp at which it happened - not the time since it happened.
With the timestamp exported, you can use the expression `time() - my_timestamp_metric` to
calculate the time since the event, removing the need for update logic and
protecting you against the update logic getting stuck.
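A sketch in Python, with an assumed metric name:

```python
import time
from prometheus_client import CollectorRegistry, Gauge

registry = CollectorRegistry()
LAST_RELOAD = Gauge("config_last_reload_success_timestamp_seconds",
                    "Unix time of the last successful config reload.",
                    registry=registry)

# Export the timestamp itself; the age is computed at query time with
#   time() - config_last_reload_success_timestamp_seconds
LAST_RELOAD.set_to_current_time()  # equivalent to .set(time.time())
```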
### Inner loops
In general, the additional resource cost of instrumentation is far outweighed by
the benefits it brings to operations and development.
For code which is performance-critical or called more than 100k times a second
inside a given process, you may wish to take some care as to how many metrics
you update.
A Java counter takes
[12-17ns](https://github.com/prometheus/client_java/blob/master/benchmark/README.md)
to increment depending on contention. Other languages will have similar
performance. If that amount of time is significant for your inner loop, limit
the number of metrics you increment in the inner loop and avoid labels (or
cache the result of the label lookup, for example, the return value of `With()`
in Go or `labels()` in Java) where possible.
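In Python the same caching technique applies: `labels()` returns a child that can be looked up once outside the loop. A sketch with illustrative names:

```python
from prometheus_client import CollectorRegistry, Counter

registry = CollectorRegistry()
OPS = Counter("loop_operations", "Operations performed in the hot loop.",
              ["kind"], registry=registry)

# Look the labelled child up once, outside the loop, instead of
# calling OPS.labels(...) on every iteration.
fast_ops = OPS.labels(kind="fast")
for _ in range(100_000):
    fast_ops.inc()
```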
Beware also of metric updates involving time or durations, as getting the time
may involve a syscall. As with all matters involving performance-critical code,
benchmarks are the best way to determine the impact of any given change.
### Avoid missing metrics
Time series that are not present until something happens are difficult
to deal with, as the usual simple operations are no longer sufficient to
correctly handle them. To avoid this, export a default value such as `0` for
any time series you know may exist in advance.
Most Prometheus client libraries (including Go, Java, and Python) will
automatically export a `0` for you for metrics with no labels.
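For labelled metrics you can achieve the same effect by touching every label value you know in advance at startup, so the time series exist at `0` before the first event. A sketch with an assumed metric:

```python
from prometheus_client import CollectorRegistry, Counter

registry = CollectorRegistry()
RESULTS = Counter("task_results", "Task results, by outcome.",
                  ["outcome"], registry=registry)

# Pre-initialize every label value known in advance, so the
# time series exist (at 0) before anything happens.
for outcome in ("success", "failure"):
    RESULTS.labels(outcome=outcome)
```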
---
title: Recording rules
sort_rank: 6
---
# Recording rules
A consistent naming scheme for [recording rules](/docs/prometheus/latest/configuration/recording_rules/)
makes it easier to interpret the meaning of a rule at a glance. It also avoids
mistakes by making incorrect or meaningless calculations stand out.
This page documents proper naming conventions and aggregation for recording rules.
## Naming
* Recording rules should be of the general form `level:metric:operations`.
* `level` represents the aggregation level and labels of the rule output.
* `metric` is the metric name and should be unchanged other than stripping `_total` off counters when using `rate()` or `irate()`.
* `operations` is a list of operations that were applied to the metric, newest operation first.
Keeping the metric name unchanged makes it easy to know what a metric is and
easy to find in the codebase.
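As a concrete reading of the scheme, the rule name `path:requests:rate5m` from the examples below breaks down as:

```
path  :  requests  :  rate5m
level    metric       operations
```

Here `path` says the output is aggregated to the path level, `requests` is the original metric name (with `_total` stripped because `rate()` is applied), and `rate5m` is the newest operation applied.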
To keep the operations clean, `sum` is omitted if there are other operations,
as aggregation with `sum()` is assumed; the `path:`-level rules in the examples
below aggregate with `sum()` but keep only `rate5m` in the name. Associative
operations can be merged (for example, `min_min` is the same as `min`).
If there is no obvious operation to use, use `sum`. When taking a ratio by
doing division, separate the metrics using `_per_` and call the operation
`ratio`.
## Aggregation
* When aggregating up ratios, aggregate up the numerator and denominator
separately and then divide.
* Do not take the average of a ratio or average of an
average, as that is not statistically valid.
* When aggregating up the `_count` and `_sum` of a Summary and dividing to
calculate average observation size, treating it as a ratio would be unwieldy.
Instead keep the metric name without the `_count` or `_sum` suffix and replace
the `rate` in the operation with `mean`. This represents the average
observation size over that time period.
* Always specify a `without` clause with the labels you are aggregating away.
This is to preserve all the other labels such as `job`, which will avoid
conflicts and give you more useful metrics and alerts.
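To see why averaging ratios is invalid, consider a hypothetical job where instance `a` served 1 request (1 failure) and instance `b` served 999 requests (0 failures):

```
failure ratio of a:    1 / 1   = 1.0
failure ratio of b:    0 / 999 = 0.0

avg of the ratios:     (1.0 + 0.0) / 2     = 0.5    (misleading)
sum / sum (correct):   (1 + 0) / (1 + 999) = 0.001
```

Aggregating the numerator and denominator separately and then dividing weights each instance by its traffic, which is what you almost always want.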
## Examples
_Note the indentation style with outdented operators on their own line between
two vectors. To make this style possible in YAML, [block quotes with an
indentation indicator](https://yaml.org/spec/1.2/spec.html#style/block/scalar)
(e.g. `|2`) are used._
Aggregating up requests per second that has a `path` label:
```
- record: instance_path:requests:rate5m
expr: rate(requests_total{job="myjob"}[5m])
- record: path:requests:rate5m
expr: sum without (instance)(instance_path:requests:rate5m{job="myjob"})
```
Calculating a request failure ratio and aggregating up to the job-level failure ratio:
```
- record: instance_path:request_failures:rate5m
expr: rate(request_failures_total{job="myjob"}[5m])
- record: instance_path:request_failures_per_requests:ratio_rate5m
expr: |2
instance_path:request_failures:rate5m{job="myjob"}
/
instance_path:requests:rate5m{job="myjob"}
# Aggregate up numerator and denominator, then divide to get path-level ratio.
- record: path:request_failures_per_requests:ratio_rate5m
expr: |2
sum without (instance)(instance_path:request_failures:rate5m{job="myjob"})
/
sum without (instance)(instance_path:requests:rate5m{job="myjob"})
# No labels left from instrumentation or distinguishing instances,
# so we use 'job' as the level.
- record: job:request_failures_per_requests:ratio_rate5m
expr: |2
sum without (instance, path)(instance_path:request_failures:rate5m{job="myjob"})
/
sum without (instance, path)(instance_path:requests:rate5m{job="myjob"})
```
Calculating average latency over a time period from a Summary:
```
- record: instance_path:request_latency_seconds_count:rate5m
expr: rate(request_latency_seconds_count{job="myjob"}[5m])
- record: instance_path:request_latency_seconds_sum:rate5m
expr: rate(request_latency_seconds_sum{job="myjob"}[5m])
- record: instance_path:request_latency_seconds:mean5m
expr: |2
instance_path:request_latency_seconds_sum:rate5m{job="myjob"}
/
instance_path:request_latency_seconds_count:rate5m{job="myjob"}
# Aggregate up numerator and denominator, then divide.
- record: path:request_latency_seconds:mean5m
expr: |2
sum without (instance)(instance_path:request_latency_seconds_sum:rate5m{job="myjob"})
/
sum without (instance)(instance_path:request_latency_seconds_count:rate5m{job="myjob"})
```
Calculating the average query rate across instances and paths is done using the
`avg()` function:
```
- record: job:request_latency_seconds_count:avg_rate5m
  expr: avg without (instance, path)(instance_path:request_latency_seconds_count:rate5m{job="myjob"})
```
Notice that when aggregating, the labels in the `without` clause are removed
from the level of the output metric name compared to the input metric names.
When there is no aggregation, the levels always match. If this is not the case,
a mistake has likely been made in the rules.
---
title: Metric and label naming
sort_rank: 1
---
# Metric and label naming
The metric and label conventions presented in this document are not required
for using Prometheus, but can serve as both a style-guide and a collection of
best practices. Individual organizations may want to approach some of these
practices, e.g. naming conventions, differently.
## Metric names
A metric name...
* ...must comply with the [data model](/docs/concepts/data_model/#metric-names-and-labels) for valid characters.
* ...should have a (single-word) application prefix relevant to the domain the
metric belongs to. The prefix is sometimes referred to as `namespace` by
client libraries. For metrics specific to an application, the prefix is
usually the application name itself. Sometimes, however, metrics are more
generic, like standardized metrics exported by client libraries. Examples:
* <code><b>prometheus</b>\_notifications\_total</code>
(specific to the Prometheus server)
* <code><b>process</b>\_cpu\_seconds\_total</code>
(exported by many client libraries)
* <code><b>http</b>\_request\_duration\_seconds</code>
(for all HTTP requests)
* ...must have a single unit (i.e. do not mix seconds with milliseconds, or seconds with bytes).
* ...should use base units (e.g. seconds, bytes, meters - not milliseconds, megabytes, kilometers). See below for a list of base units.
* ...should have a suffix describing the unit, in plural form. Note that an accumulating count has `total` as a suffix, in addition to the unit if applicable. Also note that this applies to units in the narrow sense (like the units in the table below), but not to countable things in general. For example, <code>connections</code> or <code>notifications</code> are not considered units for this rule and do not have to be at the end of the metric name. (See also examples in the next paragraph.)
* <code>http\_request\_duration\_<b>seconds</b></code>
* <code>node\_memory\_usage\_<b>bytes</b></code>
* <code>http\_requests\_<b>total</b></code>
(for a unit-less accumulating count)
* <code>process\_cpu\_<b>seconds\_total</b></code>
(for an accumulating count with unit)
* <code>foobar_build<b>\_info</b></code>
(for a pseudo-metric that provides [metadata](https://www.robustperception.io/exposing-the-software-version-to-prometheus) about the running binary)
* <code>data\_pipeline\_last\_record\_processed\_<b>timestamp_seconds</b></code>
(for a timestamp that tracks the time of the latest record processed in a data processing pipeline)
* ...may order its name components in a way that leads to convenient grouping when a list of metric names is sorted lexicographically, as long as all the other rules are followed. The following examples have the common name components first so that all the related metrics are sorted together:
* <code>prometheus\_tsdb\_head\_truncations\_closed\_total</code>
* <code>prometheus\_tsdb\_head\_truncations\_established\_total</code>
* <code>prometheus\_tsdb\_head\_truncations\_failed\_total</code>
* <code>prometheus\_tsdb\_head\_truncations\_total</code>
  The following examples are also valid, but follow a different trade-off. They are easier to read individually, but unrelated metrics like <code>prometheus\_tsdb\_head\_series</code> might get sorted in between.
* <code>prometheus\_tsdb\_head\_closed\_truncations\_total</code>
* <code>prometheus\_tsdb\_head\_established\_truncations\_total</code>
* <code>prometheus\_tsdb\_head\_failed\_truncations\_total</code>
* <code>prometheus\_tsdb\_head\_truncations\_total</code>
* ...should represent the same logical thing-being-measured across all label
dimensions.
* request duration
* bytes of data transfer
* instantaneous resource usage as a percentage
As a rule of thumb, either the `sum()` or the `avg()` over all dimensions of a
given metric should be meaningful (though not necessarily useful). If it is not
meaningful, split the data up into multiple metrics. For example, having the
capacity of various queues in one metric is good, while mixing the capacity of a
queue with the current number of elements in the queue is not.
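Sketching that queue example in exposition format (metric and label names are hypothetical):

```
# Good: sum(queue_capacity) is meaningful - the total capacity across queues.
queue_capacity{queue="ingest"} 500
queue_capacity{queue="export"} 200

# Bad: summing these mixes a capacity with an element count.
queue_stats{queue="ingest",type="capacity"} 500
queue_stats{queue="ingest",type="size"} 42
```

In the second form, `sum(queue_stats)` produces a number with no physical meaning, which is the sign that the data belongs in two separate metrics.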
## Labels
Use labels to differentiate the characteristics of the thing that is being measured:
* `api_http_requests_total` - differentiate request types: `operation="create|update|delete"`
* `api_request_duration_seconds` - differentiate request stages: `stage="extract|transform|load"`
Do not put the label names in the metric name, as this introduces redundancy
and will cause confusion if the respective labels are aggregated away.
CAUTION: Remember that every unique combination of key-value label
pairs represents a new time series, which can dramatically increase the amount
of data stored. Do not use labels to store dimensions with high cardinality
(many different label values), such as user IDs, email addresses, or other
unbounded sets of values.
## Base units
Prometheus does not have any units hard coded. For better compatibility, base
units should be used. The following lists some metric families with their base unit.
The list is not exhaustive.
| Family | Base unit | Remark |
| -------| --------- | ------ |
| Time | seconds | |
| Temperature | celsius | _celsius_ is preferred over _kelvin_ for practical reasons. _kelvin_ is acceptable as a base unit in special cases like color temperature or where temperature has to be absolute. |
| Length | meters | |
| Bytes | bytes | |
| Bits | bytes | To avoid confusion combining different metrics, always use _bytes_, even where _bits_ appear more common. |
| Percent | ratio | Values are 0–1 (rather than 0–100). `ratio` is only used as a suffix for names like `disk_usage_ratio`. The usual metric name follows the pattern `A_per_B`. |
| Voltage | volts | |
| Electric current | amperes | |
| Energy | joules | |
| Power | | Prefer exporting a counter of joules, then `rate(joules[5m])` gives you power in watts. |
| Mass | grams | _grams_ is preferred over _kilograms_ to avoid issues with the _kilo_ prefix. |
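The remark in the Power row can be made concrete with PromQL (the metric name is hypothetical):

```
# device_energy_joules_total: a counter of joules consumed.
# Its per-second rate over 5 minutes is the average power in watts.
rate(device_energy_joules_total[5m])
```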
---
title: Integrations
sort_rank: 5
---
# Integrations
In addition to [client libraries](/docs/instrumenting/clientlibs/) and
[exporters and related libraries](/docs/instrumenting/exporters/), there are
numerous other generic integration points in Prometheus. This page lists some
of the integrations with these.
Not all integrations are listed here, due to overlapping functionality or still
being in development. The [exporter default
port](https://github.com/prometheus/prometheus/wiki/Default-port-allocations)
wiki page also happens to include a few non-exporter integrations that fit in
these categories.
## File Service Discovery
For service discovery mechanisms not natively supported by Prometheus,
[file-based service discovery](/docs/operating/configuration/#%3Cfile_sd_config%3E) provides an interface for integrating.
* [Kuma](https://github.com/kumahq/kuma/tree/master/app/kuma-prometheus-sd)
* [Lightsail](https://github.com/n888/prometheus-lightsail-sd)
* [Netbox](https://github.com/FlxPeters/netbox-prometheus-sd)
* [Packet](https://github.com/packethost/prometheus-packet-sd)
* [Scaleway](https://github.com/scaleway/prometheus-scw-sd)
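For reference, such tools write a file of target groups that Prometheus watches for changes; a minimal JSON sketch with hypothetical addresses:

```
[
  {
    "targets": ["10.1.2.3:9100", "10.1.2.4:9100"],
    "labels": {"env": "prod", "job": "node"}
  }
]
```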
## Remote Endpoints and Storage
The [remote write](/docs/operating/configuration/#remote_write) and [remote read](/docs/operating/configuration/#remote_read)
features of Prometheus allow transparently sending and receiving samples. This
is primarily intended for long term storage. It is recommended that you perform
careful evaluation of any solution in this space to confirm it can handle your
data volumes.
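Both features are enabled in the Prometheus configuration; a minimal sketch pointing at a hypothetical adapter endpoint:

```
remote_write:
  - url: "https://storage.example.org/api/v1/write"

remote_read:
  - url: "https://storage.example.org/api/v1/read"
```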
* [AppOptics](https://github.com/solarwinds/prometheus2appoptics): write
* [AWS Timestream](https://github.com/dpattmann/prometheus-timestream-adapter): read and write
* [Azure Data Explorer](https://github.com/cosh/PrometheusToAdx): read and write
* [Azure Event Hubs](https://github.com/bryanklewis/prometheus-eventhubs-adapter): write
* [Chronix](https://github.com/ChronixDB/chronix.ingester): write
* [Cortex](https://github.com/cortexproject/cortex): read and write
* [CrateDB](https://github.com/crate/crate_adapter): read and write
* [Elasticsearch](https://www.elastic.co/guide/en/beats/metricbeat/master/metricbeat-metricset-prometheus-remote_write.html): write
* [Gnocchi](https://gnocchi.osci.io/prometheus.html): write
* [Google BigQuery](https://github.com/KohlsTechnology/prometheus_bigquery_remote_storage_adapter): read and write
* [Google Cloud Spanner](https://github.com/google/truestreet): read and write
* [Grafana Mimir](https://github.com/grafana/mimir): read and write
* [Graphite](https://github.com/prometheus/prometheus/tree/main/documentation/examples/remote_storage/remote_storage_adapter): write
* [GreptimeDB](https://github.com/GreptimeTeam/greptimedb): read and write
* [InfluxDB](https://docs.influxdata.com/influxdb/v1.8/supported_protocols/prometheus): read and write
* [Instana](https://www.instana.com/docs/ecosystem/prometheus/#remote-write): write
* [IRONdb](https://github.com/circonus-labs/irondb-prometheus-adapter): read and write
* [Kafka](https://github.com/Telefonica/prometheus-kafka-adapter): write
* [M3DB](https://m3db.io/docs/integrations/prometheus/): read and write
* [Mezmo](https://docs.mezmo.com/telemetry-pipelines/prometheus-remote-write-pipeline-source): write
* [New Relic](https://docs.newrelic.com/docs/set-or-remove-your-prometheus-remote-write-integration): write
* [OpenTSDB](https://github.com/prometheus/prometheus/tree/main/documentation/examples/remote_storage/remote_storage_adapter): write
* [QuasarDB](https://doc.quasardb.net/master/user-guide/integration/prometheus.html): read and write
* [SignalFx](https://github.com/signalfx/metricproxy#prometheus): write
* [Splunk](https://github.com/kebe7jun/ropee): read and write
* [Sysdig Monitor](https://docs.sysdig.com/en/docs/installation/prometheus-remote-write/): write
* [TiKV](https://github.com/bragfoo/TiPrometheus): read and write
* [Thanos](https://github.com/thanos-io/thanos): read and write
* [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics): write
* [Wavefront](https://github.com/wavefrontHQ/prometheus-storage-adapter): write
[Prom-migrator](https://github.com/timescale/promscale/tree/master/migration-tool/cmd/prom-migrator) is a tool for migrating data between remote storage systems.
## Alertmanager Webhook Receiver
For notification mechanisms not natively supported by the Alertmanager, the
[webhook receiver](/docs/alerting/configuration/#webhook_config) allows for integration.
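In the Alertmanager configuration, such an integration is wired up as a webhook receiver; a minimal sketch with a hypothetical bridge URL:

```
receivers:
  - name: "example-webhook"
    webhook_configs:
      - url: "http://alert-bridge.example.org/notify"
```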
* [alertmanager-webhook-logger](https://github.com/tomtom-international/alertmanager-webhook-logger): logs alerts
* [Alertsnitch](https://gitlab.com/yakshaving.art/alertsnitch): saves alerts to a MySQL database
* [All Quiet](https://allquiet.app/integrations/inbound/prometheus): on-call & incident management
* [Asana](https://gitlab.com/lupudu/alertmanager-asana-bridge)
* [AWS SNS](https://github.com/DataReply/alertmanager-sns-forwarder)
* [Better Uptime](https://docs.betteruptime.com/integrations/prometheus)
* [Canopsis](https://git.canopsis.net/canopsis-connectors/connector-prometheus2canopsis)
* [DingTalk](https://github.com/timonwong/prometheus-webhook-dingtalk)
* [Discord](https://github.com/benjojo/alertmanager-discord)
* [GitLab](https://docs.gitlab.com/ee/operations/metrics/alerts.html#external-prometheus-instances)
* [Gotify](https://github.com/DRuggeri/alertmanager_gotify_bridge)
* [GELF](https://github.com/b-com-software-basis/alertmanager2gelf)
* [Heii On-Call](https://heiioncall.com/guides/prometheus-integration)
* [Icinga2](https://github.com/vshn/signalilo)
* [iLert](https://docs.ilert.com/integrations/prometheus)
* [IRC Bot](https://github.com/multimfi/bot)
* [JIRAlert](https://github.com/free/jiralert)
* [Matrix](https://github.com/matrix-org/go-neb)
* [Phabricator / Maniphest](https://github.com/knyar/phalerts)
* [prom2teams](https://github.com/idealista/prom2teams): forwards notifications to Microsoft Teams
* [Ansible Tower](https://github.com/pja237/prom2tower): call Ansible Tower (AWX) API on alerts (launch jobs etc.)
* [Signal](https://github.com/dgl/alertmanager-webhook-signald)
* [SIGNL4](https://www.signl4.com/blog/portfolio_item/prometheus-alertmanager-mobile-alert-notification-duty-schedule-escalation)
* [Simplepush](https://codeberg.org/stealth/alertpush)
* [SMS](https://github.com/messagebird/sachet): supports [multiple providers](https://github.com/messagebird/sachet/blob/master/examples/config.yaml)
* [SNMP traps](https://github.com/maxwo/snmp_notifier)
* [Squadcast](https://support.squadcast.com/docs/prometheus)
* [STOMP](https://github.com/thewillyhuman/alertmanager-stomp-forwarder)
* [Telegram bot](https://github.com/inCaller/prometheus_bot)
* [xMatters](https://github.com/xmatters/xm-labs-prometheus)
* [XMPP Bot](https://github.com/jelmer/prometheus-xmpp-alerts)
* [Zenduty](https://docs.zenduty.com/docs/prometheus/)
* [Zoom](https://github.com/Code2Life/nodess-apps/tree/master/src/zoom-alert-2.0)
## Management
Prometheus does not include configuration management functionality, allowing
you to integrate it with your existing systems or build on top of it.
* [Prometheus Operator](https://github.com/coreos/prometheus-operator): Manages Prometheus on top of Kubernetes
* [Promgen](https://github.com/line/promgen): Web UI and configuration generator for Prometheus and Alertmanager
## Other
* [Alert analysis](https://github.com/m0nikasingh/am2ch): Stores alerts into a ClickHouse database and provides alert analysis dashboards
* [karma](https://github.com/prymitive/karma): alert dashboard
* [PushProx](https://github.com/RobustPerception/PushProx): Proxy to traverse NAT and similar network setups
* [Promdump](https://github.com/ihcsim/promdump): kubectl plugin to dump and restore data blocks
* [Promregator](https://github.com/promregator/promregator): discovery and scraping for Cloud Foundry applications
* [pint](https://github.com/cloudflare/pint): Prometheus rule linter
config yaml SNMP traps https github com maxwo snmp notifier Squadcast https support squadcast com docs prometheus STOMP https github com thewillyhuman alertmanager stomp forwarder Telegram bot https github com inCaller prometheus bot xMatters https github com xmatters xm labs prometheus XMPP Bot https github com jelmer prometheus xmpp alerts Zenduty https docs zenduty com docs prometheus Zoom https github com Code2Life nodess apps tree master src zoom alert 2 0 Management Prometheus does not include configuration management functionality allowing you to integrate it with your existing systems or build on top of it Prometheus Operator https github com coreos prometheus operator Manages Prometheus on top of Kubernetes Promgen https github com line promgen Web UI and configuration generator for Prometheus and Alertmanager Other Alert analysis https github com m0nikasingh am2ch Stores alerts into a ClickHouse database and provides alert analysis dashboards karma https github com prymitive karma alert dashboard PushProx https github com RobustPerception PushProx Proxy to transverse NAT and similar network setups Promdump https github com ihcsim promdump kubectl plugin to dump and restore data blocks Promregator https github com promregator promregator discovery and scraping for Cloud Foundry applications pint https github com cloudflare pint Prometheus rule linter |
prometheus sortrank 4 title Security Prometheus is a sophisticated system with many components and many integrations with other systems It can be deployed in a variety of trusted and untrusted environments Security Model | ---
title: Security
sort_rank: 4
---
# Security Model
Prometheus is a sophisticated system with many components and many integrations
with other systems. It can be deployed in a variety of trusted and untrusted
environments.
This page describes the general security assumptions of Prometheus and the
attack vectors that some configurations may enable.
As with any complex system, it is near certain that bugs will be found, some of
them security-relevant. If you find a _security bug_ please report it
privately to the maintainers listed in the MAINTAINERS of the relevant
repository and CC [email protected]. We will fix the issue as soon
as possible and coordinate a release date with you. You will be able to choose
if you want public acknowledgement of your effort and if you want to be
mentioned by name.
### Automated security scanners
Special note for security scanner users: Please be mindful with the reports produced.
Most scanners are generic and produce lots of false positives. More and more
reports are being sent to us, and it takes a significant amount of work to go
through all of them and reply with the care you expect. This problem is particularly
bad with Go and NPM dependency scanners.
As a courtesy to us and our time, we would ask you not to submit raw reports.
Instead, please submit them with an analysis outlining which specific results
are applicable to us and why.
Prometheus is maintained by volunteers, not by a company. Therefore, fixing
security issues is done on a best-effort basis. We strive to release security
fixes within 7 days for: Prometheus, Alertmanager, Node Exporter,
Blackbox Exporter, and Pushgateway.
## Prometheus
It is presumed that untrusted users have access to the Prometheus HTTP endpoint
and logs. They have access to all time series information contained in the
database, plus a variety of operational/debugging information.
It is also presumed that only trusted users have the ability to change the
command line, configuration file, rule files and other aspects of the runtime
environment of Prometheus and other components.
Which targets Prometheus scrapes, how often and with what other settings is
determined entirely via the configuration file. The administrator may
decide to use information from service discovery systems, which combined with
relabelling may grant some of this control to anyone who can modify data in
that service discovery system.
Scraped targets may be run by untrusted users. It should not by default be
possible for a target to expose data that impersonates a different target. The
`honor_labels` option removes this protection, as can certain relabelling
setups.
As of Prometheus 2.0, the `--web.enable-admin-api` flag controls access to the
administrative HTTP API which includes functionality such as deleting time
series. This is disabled by default. If enabled, administrative and mutating
functionality will be accessible under the `/api/*/admin/` paths. The
`--web.enable-lifecycle` flag controls HTTP reloads and shutdowns of
Prometheus. This is also disabled by default. If enabled, they will be
accessible under the `/-/reload` and `/-/quit` paths.
In Prometheus 1.x, `/-/reload` and using `DELETE` on `/api/v1/series` are
accessible to anyone with access to the HTTP API. The `/-/quit` endpoint is
disabled by default, but can be enabled with the `-web.enable-remote-shutdown`
flag.
The remote read feature allows anyone with HTTP access to send queries to the
remote read endpoint. If, for example, PromQL queries ended up being run
directly against a relational database, then anyone with the ability to send
queries to Prometheus (such as via Grafana) could run arbitrary SQL against
that database.
## Alertmanager
Any user with access to the Alertmanager HTTP endpoint has access to its data.
They can create and resolve alerts. They can create, modify and delete
silences.
Where notifications are sent to is determined by the configuration file. With
certain templating setups it is possible for notifications to end up at an
alert-defined destination. For example, if notifications use an alert label as
the destination email address, anyone who can send alerts to the Alertmanager
can send notifications to any email address. If the alert-defined destination
is a templatable secret field, anyone with access to either Prometheus or
Alertmanager will be able to view the secrets.
Any secret fields which are templatable are intended for routing notifications
in the above use case. They are not intended as a way for secrets to be
separated out from the configuration files using the template file feature. Any
secrets stored in template files could be exfiltrated by anyone able to
configure receivers in the Alertmanager configuration file. For example, in
large setups, each team might have an Alertmanager configuration file fragment
which they fully control; these fragments are then combined into the full
final configuration file.
## Pushgateway
Any user with access to the Pushgateway HTTP endpoint can create, modify and
delete the metrics contained within. As the Pushgateway is usually scraped with
`honor_labels` enabled, this means anyone with access to the Pushgateway can
create any time series in Prometheus.
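As an illustration of that trust relationship, a minimal scrape configuration for a Pushgateway (the target hostname is a placeholder) typically sets `honor_labels: true`, which is exactly what lets pushed labels override the labels Prometheus would otherwise attach:

```yaml
scrape_configs:
  - job_name: 'pushgateway'
    # honor_labels lets the pushed job/instance labels win over the ones
    # Prometheus would attach, which is the usual setup for the Pushgateway
    # and the reason for the caveat above.
    honor_labels: true
    static_configs:
      - targets: ['pushgateway.example.org:9091']
```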
The `--web.enable-admin-api` flag controls access to the
administrative HTTP API, which includes functionality such as wiping all the existing
metric groups. This is disabled by default. If enabled, administrative
functionality will be accessible under the `/api/*/admin/` paths.
## Exporters
Exporters generally only talk to one configured instance with a preset set of
commands/requests, which cannot be expanded via their HTTP endpoint.
There are also exporters such as the SNMP and Blackbox exporters that take
their targets from URL parameters. Thus anyone with HTTP access to these
exporters can make them send requests to arbitrary endpoints. As they also
support client-side authentication, this could lead to a leak of secrets such
as HTTP Basic Auth passwords or SNMP community strings. Challenge-response
authentication mechanisms such as TLS are not affected by this.
## Client Libraries
Client libraries are intended to be included in users' applications.
If using a client-library-provided HTTP handler, it should not be possible for
malicious requests that reach that handler to cause issues beyond those
resulting from additional load and failed scrapes.
## Authentication, Authorization, and Encryption
Prometheus and most exporters support TLS, including authentication of clients
via TLS client certificates. Details on configuring Prometheus can be found in the
[TLS encryption guide](https://prometheus.io/docs/guides/tls-encryption/).
The Go projects share the same TLS library, based on the
Go [crypto/tls](https://golang.org/pkg/crypto/tls) library.
We default to TLS 1.2 as the minimum version. Our policy regarding this is based on
[Qualys SSL Labs](https://www.ssllabs.com/) recommendations, where we strive to
achieve a grade 'A' with a default configuration and correctly provided
certificates, while sticking as closely as possible to the upstream Go defaults.
Achieving that grade provides a balance between perfect security and usability.
TLS will be added to Java exporters in the future.
If you have special TLS needs, like a different cipher suite or older TLS
version, you can tune the minimum TLS version and the ciphers, as long as the
cipher is not [marked as insecure](https://golang.org/pkg/crypto/tls/#InsecureCipherSuites)
in the [crypto/tls](https://golang.org/pkg/crypto/tls) library. If that still
does not suit you, the current TLS settings enable you to build a secure tunnel
between the servers and reverse proxies with more special requirements.
HTTP Basic Authentication is also supported. Basic Authentication can be
used without TLS, but it will then expose usernames and passwords in cleartext
over the network.
On the server side, basic authentication passwords are stored as hashes with the
[bcrypt](https://en.wikipedia.org/wiki/Bcrypt) algorithm. It is your
responsibility to pick the number of rounds that matches your security
standards. More rounds make brute-force more complicated at the cost of more CPU
power and more time to authenticate the requests.
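As a sketch, the web configuration file (passed to Prometheus with `--web.config.file`) maps usernames to bcrypt hashes; the cost parameter is encoded in the hash itself. The hash below is a placeholder, not a working credential:

```yaml
# web.yml -- the hash is a placeholder; generate a real one with e.g.
#   htpasswd -nBC 12 "" | tr -d ':\n'
# where 12 is the bcrypt cost: each increment doubles the work factor.
basic_auth_users:
  alice: "$2y$12$<bcrypt-hash-goes-here>"
```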
Various Prometheus components support client-side authentication and
encryption. If TLS client support is offered, there is often also an option
called `insecure_skip_verify` which skips SSL verification.
## API Security
As administrative and mutating endpoints are intended to be accessed via simple
tools such as cURL, there is no built-in
[CSRF](https://en.wikipedia.org/wiki/Cross-site_request_forgery) protection, as
that would break such use cases. Accordingly, when using a reverse proxy, you
may wish to block such paths to prevent CSRF.
For non-mutating endpoints, you may wish to set [CORS
headers](https://fetch.spec.whatwg.org/#http-cors-protocol) such as
`Access-Control-Allow-Origin` in your reverse proxy to prevent
[XSS](https://en.wikipedia.org/wiki/Cross-site_scripting).
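As a sketch of both points, assuming an nginx reverse proxy in front of Prometheus on `127.0.0.1:9090` (the paths and the allowed origin are examples to adapt):

```nginx
# Block mutating endpoints to prevent CSRF via the proxy.
location /api/v1/admin/ { deny all; }
location /-/reload     { deny all; }
location /-/quit       { deny all; }

location / {
    # Restrict which origins may read API responses cross-site.
    add_header Access-Control-Allow-Origin "https://grafana.example.org";
    proxy_pass http://127.0.0.1:9090;
}
```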
If you are composing PromQL queries that include input from untrusted users
(e.g. URL parameters to console templates, or something you built yourself) who
are not meant to be able to run arbitrary PromQL queries, make sure any
untrusted input is appropriately escaped to prevent injection attacks. For
example, `up{job="<user_input>"}` would become `up{job=""} or
some_metric{zzz=""}` if the `<user_input>` was `"} or some_metric{zzz="`.
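A minimal sketch of such escaping in Python (the function name is ours, and it covers only interpolation inside a double-quoted label value, not other query positions):

```python
def escape_promql_label_value(value: str) -> str:
    """Escape untrusted input for interpolation inside a PromQL
    label matcher such as up{job="<value>"}.

    PromQL string literals use backslash escapes, so escaping the
    backslash first, then the double quote and newlines, keeps the
    input confined to the label value.
    """
    return (
        value.replace("\\", "\\\\")
             .replace('"', '\\"')
             .replace("\n", "\\n")
    )

# The injection payload from the example above is neutralized:
payload = '"} or some_metric{zzz="'
query = 'up{job="%s"}' % escape_promql_label_value(payload)
# query now matches a literal (non-existent) job name instead of
# turning into a second selector.
```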
For those using Grafana, note that [dashboard permissions are not data source
permissions](https://grafana.com/docs/grafana/latest/permissions/#data-source-permissions),
so they do not limit a user's ability to run arbitrary queries in proxy mode.
## Secrets
Non-secret information or fields may be available via the HTTP API and/or logs.
In Prometheus, metadata retrieved from service discovery is not considered
secret. Throughout the Prometheus system, metrics are not considered secret.
Fields containing secrets in configuration files (marked explicitly as such in
the documentation) will not be exposed in logs or via the HTTP API. Secrets
should not be placed in other configuration fields, as it is common for
components to expose their configuration over their HTTP endpoint. It is the
responsibility of the user to protect files on disk from unwanted reads and
writes.
Secrets from other sources used by dependencies (e.g. the `AWS_SECRET_KEY`
environment variable as used by EC2 service discovery) may end up exposed due to
code outside of our control or due to functionality that happens to expose
wherever it is stored.
## Denial of Service
There are some mitigations in place for excess load or expensive queries.
However, if too many or too expensive queries/metrics are provided components
will fall over. It is more likely that a component will be accidentally taken
out by a trusted user than by malicious action.
It is the responsibility of the user to ensure they provide components with
sufficient resources including CPU, RAM, disk space, IOPS, file descriptors,
and bandwidth.
It is recommended to monitor all components for failure, and to have them
automatically restart on failure.
## Libraries
This document considers vanilla binaries built from the stock source code.
Information presented here does not apply if you modify Prometheus source code,
or use Prometheus internals (beyond the official client library APIs) in your
own code.
## Build Process
The build pipeline for Prometheus runs on third-party providers to which many
members of the Prometheus development team and the staff of those providers
have access. If you are concerned about the exact provenance of your binaries,
it is recommended to build them yourself rather than relying on the
pre-built binaries provided by the project.
## Prometheus-Community
The repositories under the [Prometheus-Community](https://github.com/prometheus-community)
organization are supported by third-party maintainers.
If you find a _security bug_ in the [Prometheus-Community](https://github.com/prometheus-community) organization,
please report it privately to the maintainers listed in the MAINTAINERS of the
relevant repository and CC [email protected].
Some repositories under that organization might have a different security model
than the ones presented in this document. In such a case, please refer to the
documentation of those repositories.
## External audits
* In 2018, [CNCF](https://cncf.io) sponsored an external security audit by
[cure53](https://cure53.de) which ran from April 2018 to June 2018. For more
details, please read the [final report of the audit](/assets/downloads/2018-06-11--cure53_security_audit.pdf).
* In 2020, CNCF sponsored a
[second audit by cure53](/assets/downloads/2020-07-21--cure53_security_audit_node_exporter.pdf)
of Node Exporter.
* In 2023, CNCF sponsored a
[software supply chain security assessment of Prometheus](/assets/downloads/2023-04-19--chainguard_supply_chain_assessment.pdf)
  by Chainguard.
---
title: Redis replication
linkTitle: Replication
weight: 5
description: How Redis supports high availability and failover with replication
aliases: [
/topics/replication,
/topics/replication.md,
/docs/manual/replication,
/docs/manual/replication.md
]
---
At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a *leader-follower* (master-replica) replication that is simple to use and configure. It allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master.
This system works using three main mechanisms:
1. When master and replica instances are well-connected, the master keeps the replica updated by sending it a stream of commands, replicating the effects on the dataset happening on the master side due to: client writes, keys expired or evicted, and any other action changing the master dataset.
2. When the link between the master and the replica breaks, for network issues or because a timeout is sensed in the master or the replica, the replica reconnects and attempts to proceed with a partial resynchronization: it means that it will try to just obtain the part of the stream of commands it missed during the disconnection.
3. When a partial resynchronization is not possible, the replica will ask for a full resynchronization. This will involve a more complex process in which the master needs to create a snapshot of all its data, send it to the replica, and then continue sending the stream of commands as the dataset changes.
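The choice between mechanisms 2 and 3 can be sketched with a toy model (illustrative only, not Redis internals): the master keeps a bounded backlog of the most recent command stream, and a reconnecting replica gets a partial resynchronization only if its offset still falls inside that backlog.

```python
from collections import deque

class MasterBacklog:
    """Toy model of the replication backlog (not the real implementation)."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.backlog = deque()   # most recent commands
        self.offset = 0          # offset of the next command to append
        self.first_offset = 0    # offset of the oldest retained command

    def append(self, command: str) -> None:
        self.backlog.append(command)
        self.offset += 1
        if len(self.backlog) > self.capacity:
            self.backlog.popleft()
            self.first_offset += 1

    def resync(self, replica_offset: int):
        """Return ('partial', missed_commands) if the replica's offset is
        still covered by the backlog, else ('full', None): the master must
        snapshot its dataset and stream the changes from scratch."""
        if self.first_offset <= replica_offset <= self.offset:
            missed = list(self.backlog)[replica_offset - self.first_offset:]
            return "partial", missed
        return "full", None

master = MasterBacklog(capacity=3)
for cmd in ["SET a 1", "SET b 2", "SET c 3", "DEL a"]:
    master.append(cmd)

# A replica that saw the first two commands missed only the last two:
kind, missed = master.resync(2)   # partial resync
# A replica that fell behind the backlog window needs a full resync:
kind2, _ = master.resync(0)       # full resync
```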
By default, Redis uses asynchronous replication, which, being low-latency and
high-performance, is the natural replication mode for the vast majority of Redis
use cases. However, Redis replicas periodically and asynchronously acknowledge
to the master the amount of data they have received. So the master does not wait
for every command to be processed by the replicas; however, it knows, if needed,
which replica has already processed which commands. This allows having optional synchronous replication.
Synchronous replication of certain data can be requested by the clients using
the `WAIT` command. However `WAIT` is only able to ensure there are the
specified number of acknowledged copies in the other Redis instances, it does not
turn a set of Redis instances into a CP system with strong consistency: acknowledged
writes can still be lost during a failover, depending on the exact configuration
of the Redis persistence. However with `WAIT` the probability of losing a write
after a failure event is greatly reduced to certain hard to trigger failure
modes.
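The semantics of `WAIT` can be modeled as counting acknowledged copies. The sketch below is a toy model (not Redis internals, and it ignores the timeout argument): each replica reports the highest replication offset it has processed, and `WAIT` effectively asks how many of those offsets have reached the client's last write.

```python
# Toy model of WAIT semantics: count replicas whose acknowledged replication
# offset has reached the offset of the client's last write.

def acked_copies(replica_ack_offsets, write_offset):
    """Number of replicas that have acknowledged at least write_offset.
    WAIT blocks until this count reaches the requested number of replicas
    or the timeout elapses, then returns the count."""
    return sum(1 for off in replica_ack_offsets if off >= write_offset)

# Two of three replicas have processed the write at offset 1000:
print(acked_copies([1000, 950, 1200], 1000))  # 2
```

As the surrounding text stresses, reaching the requested count does not make the system strongly consistent; it only shrinks the window in which an acknowledged write can be lost.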
You can check the Redis Sentinel or Redis Cluster documentation for more information
about high availability and failover. The rest of this document mainly describes the basic characteristics of Redis replication.
### Important facts about Redis replication
* Redis uses asynchronous replication, with asynchronous replica-to-master acknowledges of the amount of data processed.
* A master can have multiple replicas.
* Replicas are able to accept connections from other replicas. Aside from connecting a number of replicas to the same master, replicas can also be connected to other replicas in a cascading-like structure. Since Redis 4.0, all the sub-replicas will receive exactly the same replication stream from the master.
* Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more replicas perform the initial synchronization or a partial resynchronization.
* Replication is also largely non-blocking on the replica side. While the replica is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis replicas to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The replica will block incoming connections during this brief window (that can be as long as many seconds for very large datasets). Since Redis 4.0 you can configure Redis so that the deletion of the old data set happens in a different thread, however loading the new initial dataset will still happen in the main thread and block the replica.
* Replication can be used both for scalability, to have multiple replicas for read-only queries (for example, slow O(N) operations can be offloaded to replicas), or simply for improving data safety and high availability.
* You can use replication to avoid the cost of having the master writing the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connect a replica configured to save from time to time, or with AOF enabled. However, this setup must be handled with care, since a restarting master will start with an empty dataset: if the replica tries to sync with it, the replica will be emptied as well.
## Safety of replication when master has persistence turned off
In setups where Redis replication is used, it is strongly advised to have
persistence turned on in the master and in the replicas. When this is not possible,
for example because of latency concerns due to very slow disks, instances should
be configured to **avoid restarting automatically** after a reboot.
To better understand why masters with persistence turned off configured to
auto restart are dangerous, check the following failure mode where data
is wiped from the master and all its replicas:
1. We have a setup with node A acting as master, with persistence turned off, and nodes B and C replicating from node A.
2. Node A crashes, however it has some auto-restart system, that restarts the process. However since persistence is turned off, the node restarts with an empty data set.
3. Nodes B and C will replicate from node A, which is empty, so they'll effectively destroy their copy of the data.
When Redis Sentinel is used for high availability, turning off persistence
on the master together with auto restart of the process is also dangerous. For example, the master can restart fast enough for Sentinel to not detect a failure, so that the failure mode described above happens.
Every time data safety is important, and replication is used with master configured without persistence, auto restart of instances should be disabled.
## How Redis replication works
Every Redis master has a replication ID: it is a large pseudo random string
that marks a given history of the dataset. Each master also takes an offset that
increments for every byte of replication stream that is produced and
sent to replicas, in order to update the state of the replicas with the new changes
modifying the dataset. The replication offset is incremented even if no replica
is actually connected, so basically every given pair of:
    Replication ID, offset
Identifies an exact version of the dataset of a master.
When replicas connect to masters, they use the `PSYNC` command to send
their old master replication ID and the offsets they processed so far. This way
the master can send just the incremental part needed. However if there is not
enough *backlog* in the master buffers, or if the replica is referring to a
history (replication ID) which is no longer known, then a full resynchronization
happens: in this case the replica will get a full copy of the dataset, from scratch.
This is how a full synchronization works in more detail:
The master starts a background saving process to produce an RDB file. At the same time it starts to buffer all new write commands received from the clients. When the background saving is complete, the master transfers the database file to the replica, which saves it on disk, and then loads it into memory. The master will then send all buffered commands to the replica. This is done as a stream of commands and is in the same format of the Redis protocol itself.
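The full synchronization flow above can be sketched with plain data structures. This is an illustrative toy model only (dataset as a dict, write commands as key/value pairs), not how Redis represents data internally:

```python
# Simplified sketch of a full synchronization (illustrative, not Redis code).

def full_sync(master_data, incoming_writes):
    # 1. The master forks and snapshots its dataset (the "RDB file").
    snapshot = dict(master_data)
    buffered = []
    # 2. Writes arriving while the save runs are applied on the master
    #    and also buffered for the replica.
    for key, value in incoming_writes:
        master_data[key] = value
        buffered.append((key, value))
    # 3. The replica loads the snapshot, then replays the buffered stream.
    replica_data = dict(snapshot)
    for key, value in buffered:
        replica_data[key] = value
    return replica_data

master = {"a": 1}
replica = full_sync(master, [("b", 2), ("a", 3)])
print(replica == master)  # True: replica converges to the master's dataset
```

After this point the replica stays in sync simply by applying the continuing stream of commands, exactly as in mechanism 1 above.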
You can try it yourself via telnet. Connect to the Redis port while the
server is doing some work and issue the `SYNC` command. You'll see a bulk
transfer and then every command received by the master will be re-issued
in the telnet session. Actually `SYNC` is an old protocol no longer used by
newer Redis instances, but is still there for backward compatibility: it does
not allow partial resynchronizations, so now `PSYNC` is used instead.
As already said, replicas are able to automatically reconnect when the master-replica link goes down for some reason. If the master receives multiple concurrent replica synchronization requests, it performs a single background save to serve all of them.
## Replication ID explained
In the previous section we said that if two instances have the same replication
ID and replication offset, they have exactly the same data. However it is useful
to understand what exactly is the replication ID, and why instances have actually
two replication IDs: the main ID and the secondary ID.
A replication ID basically marks a given *history* of the data set. Every time
an instance restarts from scratch as a master, or a replica is promoted to master,
a new replication ID is generated for this instance. The replicas connected to
a master will inherit its replication ID after the handshake. So two instances
with the same ID are related by the fact that they hold the same data, but
potentially at a different time. It is the offset that works as a logical time
to understand, for a given history (replication ID), who holds the most updated
data set.
For instance, if two instances A and B have the same replication ID, but one
with offset 1000 and one with offset 1023, it means that the first lacks certain
commands applied to the data set. It also means that A, by applying just a few
commands, may reach exactly the same state as B.
The reason why Redis instances have two replication IDs is because of replicas
that are promoted to masters. After a failover, the promoted replica still
needs to remember its past replication ID, because that ID was the one of
the former master. In this way, when the other replicas sync with the new
master, they will try to perform a partial resynchronization using the
old master replication ID. This will work as expected, because when the replica
is promoted to master it sets its secondary ID to its main ID, remembering what
was the offset when this ID switch happened. Later it will select a new random
replication ID, because a new history begins. When handling the new replicas
connecting, the master will match their IDs and offsets both with the current
ID and the secondary ID (up to a given offset, for safety). In short this means
that after a failover, replicas connecting to the newly promoted master don't have
to perform a full sync.
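The double-ID matching described above can be sketched like this. The field names are hypothetical (chosen only to mirror the prose; they are not necessarily the names Redis uses), and the backlog check from the earlier section is omitted for brevity:

```python
# Hypothetical sketch of how a promoted master matches a replica's
# replication ID against both its current and its secondary ID.

def can_partial_resync(master, replica_replid, replica_offset):
    # The replica followed us after the failover: current history matches.
    if replica_replid == master["replid"]:
        return True
    # The replica still refers to the former master's ID: accept it, but
    # only up to the offset at which we switched to the new ID.
    if (replica_replid == master["replid2"]
            and replica_offset <= master["second_replid_offset"]):
        return True
    return False

promoted = {"replid": "new-id", "replid2": "old-master-id",
            "second_replid_offset": 5000}
print(can_partial_resync(promoted, "old-master-id", 4800))  # True
print(can_partial_resync(promoted, "old-master-id", 5200))  # False
```

A replica asking for the old ID beyond the switch offset would be referencing writes the promoted master never saw, so it must fall back to a full sync.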
In case you wonder why a replica promoted to master needs to change its
replication ID after a failover: it is possible that the old master is still
working as a master because of some network partition: retaining the same
replication ID would violate the fact that the same ID and same offset of any
two random instances mean they have the same data set.
## Diskless replication
Normally a full resynchronization requires creating an RDB file on disk,
then reloading the same RDB from disk to feed the replicas with the data.
With slow disks this can be a very stressing operation for the master.
Redis version 2.8.18 is the first version to have support for diskless
replication. In this setup the child process directly sends the
RDB over the wire to replicas, without using the disk as intermediate storage.
## Configuration
Configuring basic Redis replication is trivial: just add the following line to the replica configuration file:
    replicaof 192.168.1.1 6379
Of course you need to replace 192.168.1.1 6379 with your master's IP address (or
hostname) and port. Alternatively, you can run the `REPLICAOF` command on the
replica, and it will start a sync with the master.
There are also a few parameters for tuning the replication backlog taken
in memory by the master to perform the partial resynchronization. See the example
`redis.conf` shipped with the Redis distribution for more information.
Diskless replication can be enabled using the `repl-diskless-sync` configuration
parameter. The delay to start the transfer to wait for more replicas to
arrive after the first one is controlled by the `repl-diskless-sync-delay`
parameter. Please refer to the example `redis.conf` file in the Redis distribution
for more details.
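Putting these directives together, a configuration might look like the fragment below. The values are only examples: `replicaof` goes in the replica's `redis.conf`, while the backlog and diskless options take effect on the master.

    # Replica side:
    replicaof 192.168.1.1 6379

    # Master side:
    repl-backlog-size 16mb        # backlog available for partial resyncs
    repl-diskless-sync yes        # stream the RDB directly over the socket
    repl-diskless-sync-delay 5    # wait 5 seconds for more replicas to arrive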
## Read-only replica
Since Redis 2.6, replicas support a read-only mode that is enabled by default.
This behavior is controlled by the `replica-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`.
Read-only replicas will reject all write commands, so that it is not possible to write to a replica because of a mistake. This does not mean that the feature is intended to expose a replica instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. The [Security](/topics/security) page describes how to secure a Redis instance.
You may wonder why it is possible to revert the read-only setting
and have replica instances that can be targeted by write operations.
The answer is that writable replicas exist only for historical reasons.
Using writable replicas can result in inconsistency between the master and the replica, so it is not recommended to use writable replicas.
To understand in which situations this can be a problem, we need to understand how replication works.
Changes on the master are replicated by propagating regular Redis commands to the replica.
When a key expires on the master, this is propagated as a DEL command.
If a key exists on the master but is deleted, expired or has a different type on the replica, commands like DEL, INCR or RPOP propagated from the master will behave differently on the replica than intended.
The propagated command may fail on the replica or result in a different outcome.
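A toy illustration of this divergence, using plain Python dicts rather than Redis itself: once a replica-local write changes a key's type, a command propagated verbatim from the master can no longer be applied.

```python
# Toy illustration (not Redis code) of why writes on a replica can break
# command propagation: the replica's copy of a key diverges in type.

master = {"counter": 10}
replica = dict(master)

# A client mistakenly writes to the replica, changing the key's type:
replica["counter"] = ["now", "a", "list"]

# The master executes INCR counter and propagates it verbatim:
master["counter"] += 1

try:
    replica["counter"] += 1   # the propagated "INCR" fails on the replica
except TypeError as e:
    print("replica diverged:", e)
```

In real Redis the failed command is simply an error on the replica, so the two instances silently hold different data from that point on.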
To minimize the risks (if you insist on using writable replicas) we suggest you follow these recommendations:
* Don't write to keys in a writable replica that are also used on the master.
(This can be hard to guarantee if you don't have control over all the clients that write to the master.)
* Don't configure an instance as a writable replica as an intermediary step when upgrading a set of instances in a running system.
In general, don't configure an instance as a writable replica if it can ever be promoted to a master if you want to guarantee data consistency.
Historically, there were some use cases that were considered legitimate for writable replicas.
As of version 7.0, these use cases are now all obsolete and the same can be achieved by other means.
For example:
* Computing slow Set or Sorted set operations and storing the result in temporary local keys using commands like `SUNIONSTORE` and `ZINTERSTORE`.
Instead, use commands that return the result without storing it, such as `SUNION` and `ZINTER`.
* Using the `SORT` command (which is not considered a read-only command because of the optional STORE option and therefore cannot be used on a read-only replica).
Instead, use `SORT_RO`, which is a read-only command.
* Using `EVAL` and `EVALSHA` are also not considered read-only commands, because the Lua script may call write commands.
Instead, use `EVAL_RO` and `EVALSHA_RO` where the Lua script can only call read-only commands.
While writes to a replica will be discarded if the replica and the master resync or if the replica is restarted, there is no guarantee that they will sync automatically.
Before version 4.0, writable replicas were incapable of expiring keys with a time to live set.
This means that if you use `EXPIRE` or other commands that set a maximum TTL for a key, the key will leak, and while you may no longer see it while accessing it with read commands, you will see it in the count of keys and it will still use memory.
Redis 4.0 RC3 and greater versions are able to evict keys with TTL as masters do, with the exceptions of keys written in DB numbers greater than 63 (but by default Redis instances only have 16 databases).
Note though that even in versions greater than 4.0, using `EXPIRE` on a key that could ever exist on the master can cause inconsistency between the replica and the master.
Also note that since Redis 4.0 replica writes are only local, and are not propagated to sub-replicas attached to the instance. Sub-replicas instead will always receive the replication stream identical to the one sent by the top-level master to the intermediate replicas. So for example in the following setup:
    A ---> B ---> C
Even if `B` is writable, `C` will not see `B`'s writes and will instead have a dataset identical to that of the master instance `A`.
## Setting a replica to authenticate to a master
If your master has a password set via `requirepass`, it's trivial to configure the
replica to use that password in all sync operations.
To do it on a running instance, use `redis-cli` and type:
    config set masterauth <password>
To set it permanently, add this to your config file:
    masterauth <password>
## Allow writes only with N attached replicas
Starting with Redis 2.8, you can configure a Redis master to
accept write queries only if at least N replicas are currently connected to the
master.
However, because Redis uses asynchronous replication it is not possible to ensure
the replica actually received a given write, so there is always a window for data
loss.
This is how the feature works:
* Redis replicas ping the master every second, acknowledging the amount of replication stream processed.
* Redis masters remember the last time they received a ping from every replica.
* The user can configure the minimum number of replicas that must have a lag not greater than a maximum number of seconds.
If there are at least N replicas with a lag less than M seconds, the write will be accepted.
You may think of it as a best effort data safety mechanism, where consistency is not ensured for a given write, but at least the time window for data loss is restricted to a given number of seconds. In general bound data loss is better than unbound one.
If the conditions are not met, the master will instead reply with an error and the write will not be accepted.
There are two configuration parameters for this feature:
* min-replicas-to-write `<number of replicas>`
* min-replicas-max-lag `<number of seconds>`
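The check these two parameters drive can be sketched as follows (a simplified, hypothetical model of the master-side logic, not Redis source): count the replicas whose last ping is recent enough, and accept the write only if that count reaches the configured minimum.

```python
import time

# Simplified sketch of the min-replicas-to-write / min-replicas-max-lag check.

def write_allowed(replica_last_ping, min_replicas_to_write,
                  min_replicas_max_lag, now=None):
    """replica_last_ping: timestamp of the last ping from each replica."""
    now = time.time() if now is None else now
    good = sum(1 for last in replica_last_ping
               if now - last <= min_replicas_max_lag)
    return good >= min_replicas_to_write

now = 1000.0
# Two replicas pinged within the last 10 seconds, one is lagging:
print(write_allowed([999.0, 995.0, 980.0], 2, 10, now=now))  # True
print(write_allowed([999.0, 995.0, 980.0], 3, 10, now=now))  # False
```

When the check fails, the master replies with an error instead of executing the write, as described above.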
For more information, please check the example `redis.conf` file shipped with the
Redis source distribution.
## How Redis replication deals with expires on keys
Redis expires allow keys to have a limited time to live (TTL). Such a feature depends
on the ability of an instance to count the time, however Redis replicas correctly
replicate keys with expires, even when such keys are altered using Lua
scripts.
To implement such a feature Redis cannot rely on the ability of the master and
replica to have synced clocks, since this is a problem that cannot be solved
and would result in race conditions and diverging data sets, so Redis
uses three main techniques to make the replication of expired keys
able to work:
1. Replicas don't expire keys, instead they wait for masters to expire the keys. When a master expires a key (or evicts it because of LRU), it synthesizes a `DEL` command which is transmitted to all the replicas.
2. However because of master-driven expire, sometimes replicas may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. To deal with that, the replica uses its logical clock to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way replicas avoid reporting logically expired keys that still exist. In practical terms, an HTML fragment cache that uses replicas to scale will avoid returning items that are already older than the desired time to live.
3. During Lua scripts executions no key expiries are performed. As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys expiring in the middle of a script, and is needed to send the same script to the replica in a way that is guaranteed to have the same effects in the data set.
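Technique 2 above can be sketched with a toy read path (plain Python, not Redis internals): the expired key stays in memory until the master's `DEL` arrives, but reads on the replica mask it using the replica's own clock.

```python
# Toy model of replica-side expiry masking: keys are hidden from reads once
# logically expired, but not deleted until the master sends DEL.

def replica_get(store, expiries, key, now):
    if key not in store:
        return None
    # Still in memory, but logically expired: hide it from reads only.
    if key in expiries and expiries[key] <= now:
        return None
    return store[key]

store = {"page:1": "<html>...</html>"}
expiries = {"page:1": 100}   # key logically expires at time 100

print(replica_get(store, expiries, "page:1", now=50))   # the cached value
print(replica_get(store, expiries, "page:1", now=150))  # None (masked)
print("page:1" in store)                                # True: not deleted yet
```

Only the master's synthesized `DEL` actually removes the key; until then the replica merely pretends it is gone for read commands.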
Once a replica is promoted to a master it will start to expire keys independently, and will not require any help from its old master.
## Configuring replication in Docker and NAT
When Docker, or other types of containers using port forwarding, or Network Address Translation is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master `INFO` or `ROLE` commands output is scanned to discover replicas' addresses.
The problem is that the `ROLE` command, and the replication section of
the `INFO` output, when issued into a master instance, will show replicas
as having the IP address they use to connect to the master, which, in
environments using NAT may be different compared to the logical address of the
replica instance (the one that clients should use to connect to replicas).
Similarly the replicas will be listed with the listening port configured
in `redis.conf`, which may be different from the forwarded port in case
the port is remapped.
To fix both issues, it is possible, since Redis 3.2.2, to force
a replica to announce an arbitrary pair of IP and port to the master.
The two configuration directives to use are:
    replica-announce-ip 5.5.5.5
    replica-announce-port 1234
And are documented in the example `redis.conf` of recent Redis distributions.
## The INFO and ROLE commands
There are two Redis commands that provide a lot of information on the current
replication parameters of master and replica instances. One is `INFO`. If the
command is called with the `replication` argument, as `INFO replication`, only
information relevant to replication is displayed. Another, more
computer-friendly command is `ROLE`, which provides the replication status of
masters and replicas together with their replication offsets, lists of connected
replicas, and so forth.
## Partial sync after restarts and failovers
Since Redis 4.0, when an instance is promoted to master after a failover,
it will still be able to perform a partial resynchronization with the replicas
of the old master. To do so, the replica remembers the old replication ID and
offset of its former master, so it can provide part of the backlog to the connecting
replicas even if they ask for the old replication ID.
However the new replication ID of the promoted replica will be different, since it
constitutes a different history of the data set. For example, the old master can
become available again and continue accepting writes for some time, so using the
same replication ID in the promoted replica would violate the rule that a
replication ID and offset pair identifies only a single data set.
Moreover, replicas - when powered off gently and restarted - are able to store
in the `RDB` file the information needed to resync with their
master. This is useful in case of upgrades. When this is needed, it is better to
use the `SHUTDOWN` command in order to perform a `save & quit` operation on the
replica.
It is not possible to partially resync a replica that restarted via the
AOF file. However, the instance can be switched to RDB persistence before shutting
it down, then restarted, and finally AOF can be enabled again.
## `Maxmemory` on replicas
By default, a replica will ignore `maxmemory` (unless it is promoted to master after a failover, or promoted manually).
This means that the eviction of keys will be handled by the master, which sends DEL commands to the replica as keys are evicted on the master side.
This behavior ensures that masters and replicas stay consistent, which is usually what you want.
However, if your replica is writable, or you want the replica to have a different memory setting, and you are sure all the writes performed to the replica are idempotent, then you may change this default (but be sure to understand what you are doing).
Note that since the replica by default does not evict, it may end up using more memory than what is set via `maxmemory` (since there are certain buffers that may be larger on the replica, or data structures may sometimes take more memory and so forth).
Make sure you monitor your replicas, and make sure they have enough memory to never hit a real out-of-memory condition before the master hits the configured `maxmemory` setting.
To change this behavior, you can configure a replica to stop ignoring `maxmemory`. The configuration directive to use is:
    replica-ignore-maxmemory no
the master does not wait every time for a command to be processed by the replicas however it knows if needed what replica already processed what command This allows having optional synchronous replication Synchronous replication of certain data can be requested by the clients using the WAIT command However WAIT is only able to ensure there are the specified number of acknowledged copies in the other Redis instances it does not turn a set of Redis instances into a CP system with strong consistency acknowledged writes can still be lost during a failover depending on the exact configuration of the Redis persistence However with WAIT the probability of losing a write after a failure event is greatly reduced to certain hard to trigger failure modes You can check the Redis Sentinel or Redis Cluster documentation for more information about high availability and failover The rest of this document mainly describes the basic characteristics of Redis basic replication Important facts about Redis replication Redis uses asynchronous replication with asynchronous replica to master acknowledges of the amount of data processed A master can have multiple replicas Replicas are able to accept connections from other replicas Aside from connecting a number of replicas to the same master replicas can also be connected to other replicas in a cascading like structure Since Redis 4 0 all the sub replicas will receive exactly the same replication stream from the master Redis replication is non blocking on the master side This means that the master will continue to handle queries when one or more replicas perform the initial synchronization or a partial resynchronization Replication is also largely non blocking on the replica side While the replica is performing the initial synchronization it can handle queries using the old version of the dataset assuming you configured Redis to do so in redis conf Otherwise you can configure Redis replicas to return an error to clients if the replication 
stream is down However after the initial sync the old dataset must be deleted and the new one must be loaded The replica will block incoming connections during this brief window that can be as long as many seconds for very large datasets Since Redis 4 0 you can configure Redis so that the deletion of the old data set happens in a different thread however loading the new initial dataset will still happen in the main thread and block the replica Replication can be used both for scalability to have multiple replicas for read only queries for example slow O N operations can be offloaded to replicas or simply for improving data safety and high availability You can use replication to avoid the cost of having the master writing the full dataset to disk a typical technique involves configuring your master redis conf to avoid persisting to disk at all then connect a replica configured to save from time to time or with AOF enabled However this setup must be handled with care since a restarting master will start with an empty dataset if the replica tries to sync with it the replica will be emptied as well Safety of replication when master has persistence turned off In setups where Redis replication is used it is strongly advised to have persistence turned on in the master and in the replicas When this is not possible for example because of latency concerns due to very slow disks instances should be configured to avoid restarting automatically after a reboot To better understand why masters with persistence turned off configured to auto restart are dangerous check the following failure mode where data is wiped from the master and all its replicas 1 We have a setup with node A acting as master with persistence turned down and nodes B and C replicating from node A 2 Node A crashes however it has some auto restart system that restarts the process However since persistence is turned off the node restarts with an empty data set 3 Nodes B and C will replicate from node A which is 
empty so they ll effectively destroy their copy of the data When Redis Sentinel is used for high availability also turning off persistence on the master together with auto restart of the process is dangerous For example the master can restart fast enough for Sentinel to not detect a failure so that the failure mode described above happens Every time data safety is important and replication is used with master configured without persistence auto restart of instances should be disabled How Redis replication works Every Redis master has a replication ID it is a large pseudo random string that marks a given story of the dataset Each master also takes an offset that increments for every byte of replication stream that it is produced to be sent to replicas to update the state of the replicas with the new changes modifying the dataset The replication offset is incremented even if no replica is actually connected so basically every given pair of Replication ID offset Identifies an exact version of the dataset of a master When replicas connect to masters they use the PSYNC command to send their old master replication ID and the offsets they processed so far This way the master can send just the incremental part needed However if there is not enough backlog in the master buffers or if the replica is referring to an history replication ID which is no longer known then a full resynchronization happens in this case the replica will get a full copy of the dataset from scratch This is how a full synchronization works in more details The master starts a background saving process to produce an RDB file At the same time it starts to buffer all new write commands received from the clients When the background saving is complete the master transfers the database file to the replica which saves it on disk and then loads it into memory The master will then send all buffered commands to the replica This is done as a stream of commands and is in the same format of the Redis protocol itself 
You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the `SYNC` command. You'll see a bulk transfer and then every command received by the master will be re-issued in the telnet session. Actually `SYNC` is an old protocol no longer used by newer Redis instances, but is still there for backward compatibility: it does not allow partial resynchronizations, so now `PSYNC` is used instead.

As already said, replicas are able to automatically reconnect when the master-replica link goes down for some reason. If the master receives multiple concurrent replica synchronization requests, it performs a single background save to serve all of them.

## Replication ID explained

In the previous section we said that if two instances have the same replication ID and replication offset, they have exactly the same data. However it is useful to understand what exactly the replication ID is, and why instances actually have two replication IDs: the main ID and the secondary ID.

A replication ID basically marks a given history of the data set. Every time an instance restarts from scratch as a master, or a replica is promoted to master, a new replication ID is generated for this instance. The replicas connected to a master will inherit its replication ID after the handshake. So two instances with the same ID are related by the fact that they hold the same data, but potentially at a different time. It is the offset that works as a logical time to understand, for a given history (replication ID), who holds the most updated data set.

For instance, if two instances A and B have the same replication ID, but one with offset 1000 and one with offset 1023, it means that the first lacks certain commands applied to the data set. It also means that A, by applying just a few commands, may reach exactly the same state of B.

The reason why Redis instances have two replication IDs is because of replicas that are promoted to masters. After a failover, the promoted replica still needs to remember its past replication ID, because such replication ID was the one of the former master. In this way, when other replicas will sync with the new master, they will try to perform a partial resynchronization using the old master replication ID. This will work as expected, because when the replica is promoted to master it sets its secondary ID to its main ID, remembering what was the offset when this ID switch happened. Later it will select a new random replication ID, because a new history begins. When handling the new replicas connecting, the master will match their IDs and offsets both with the current ID and the secondary ID (up to a given offset, for safety). In short this means that after a failover, replicas connecting to the newly promoted master don't have to perform a full sync.

In case you wonder why a replica promoted to master needs to change its replication ID after a failover: it is possible that the old master is still working as a master because of some network partition; retaining the same replication ID would violate the fact that the same ID and same offset of any two random instances mean they have the same data set.

## Diskless replication

Normally a full resynchronization requires creating an RDB file on disk, then reloading the same RDB from disk to feed the replicas with the data. With slow disks this can be a very stressing operation for the master. Redis version 2.8.18 is the first version to have support for diskless replication. In this setup the child process directly sends the RDB over the wire to replicas, without using the disk as intermediate storage.

## Configuration

To configure basic Redis replication is trivial: just add the following line to the replica configuration file:

    replicaof 192.168.1.1 6379

Of course you need to replace 192.168.1.1 6379 with your master IP address (or hostname) and port. Alternatively, you can call the `REPLICAOF` command and the master host will start a sync with the replica.

There are also a few parameters for tuning the
replication backlog taken in memory by the master to perform the partial resynchronization. See the example `redis.conf` shipped with the Redis distribution for more information.

Diskless replication can be enabled using the `repl-diskless-sync` configuration parameter. The delay to start the transfer, in order to wait for more replicas to arrive after the first one, is controlled by the `repl-diskless-sync-delay` parameter. Please refer to the example `redis.conf` file in the Redis distribution for more details.

## Read-only replica

Since Redis 2.6, replicas support a read-only mode that is enabled by default. This behavior is controlled by the `replica-read-only` option in the `redis.conf` file, and can be enabled and disabled at runtime using `CONFIG SET`.

Read-only replicas will reject all write commands, so that it is not possible to write to a replica because of a mistake. This does not mean that the feature is intended to expose a replica instance to the internet or, more generally, to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. The [Security](/topics/security) page describes how to secure a Redis instance.

You may wonder why it is possible to revert the read-only setting and have replica instances that can be targeted by write operations. The answer is that writable replicas exist only for historical reasons. Using writable replicas can result in inconsistency between the master and the replica, so it is not recommended to use writable replicas. To understand in which situations this can be a problem, we need to understand how replication works.

Changes on the master are replicated by propagating regular Redis commands to the replica. When a key expires on the master, this is propagated as a `DEL` command. If a key exists on the master but is deleted, expired, or has a different type on the replica compared to the master, the replica will react differently than intended to commands like `DEL`, `INCR` or `RPOP` propagated from the master. The propagated command may fail on the replica or result in a different outcome.

To minimize the risks (if you insist on using writable replicas) we suggest you follow these recommendations:

* Don't write to keys in a writable replica that are also used on the master. (This can be hard to guarantee if you don't have control over all the clients that write to the master.)
* Don't configure an instance as a writable replica as an intermediary step when upgrading a set of instances in a running system. In general, don't configure an instance as a writable replica if it can ever be promoted to a master, if you want to guarantee data consistency.

Historically, there were some use cases that were considered legitimate for writable replicas. As of version 7.0, these use cases are now all obsolete and the same can be achieved by other means. For example:

* Computing slow Set or Sorted set operations and storing the result in temporary local keys, using commands like `SUNIONSTORE` and `ZINTERSTORE`. Instead, use commands that return the result without storing it, such as `SUNION` and `ZINTER`.
* Using the `SORT` command, which is not considered a read-only command because of the optional `STORE` option, and therefore cannot be used on a read-only replica. Instead, use `SORT_RO`, which is a read-only command.
* Using `EVAL` and `EVALSHA`, which are also not considered read-only commands, because the Lua script may call write commands. Instead, use `EVAL_RO` and `EVALSHA_RO`, where the Lua script can only call read-only commands.

While writes to a replica will be discarded if the replica and the master resync, or if the replica is restarted, there is no guarantee that they will sync automatically.

Before version 4.0, writable replicas were incapable of expiring keys with a time to live set. This means that if you use `EXPIRE` or other commands that set a maximum TTL for a key, the key will leak, and while you may no longer see it while accessing it with read commands, you will see it in the count of keys and it will still use memory. Redis 4.0 RC3 and greater versions are able to
evict keys with TTL as masters do, with the exception of keys written in DB numbers greater than 63 (but by default Redis instances only have 16 databases). Note though that even in versions greater than 4.0, using `EXPIRE` on a key that could ever exist on the master can cause inconsistency between the replica and the master.

Also note that since Redis 4.0 replica writes are only local, and are not propagated to sub-replicas attached to the instance. Sub-replicas instead will always receive the replication stream identical to the one sent by the top-level master to the intermediate replicas. So for example in the following setup:

    A ---> B ---> C

Even if `B` is writable, C will not see `B` writes and will instead have an identical dataset as the master instance `A`.

## Setting a replica to authenticate to a master

If your master has a password via `requirepass`, it's trivial to configure the replica to use that password in all sync operations.

To do it on a running instance, use `redis-cli` and type:

    config set masterauth <password>

To set it permanently, add this to your config file:

    masterauth <password>

## Allow writes only with N attached replicas

Starting with Redis 2.8, you can configure a Redis master to accept write queries only if at least N replicas are currently connected to the master.

However, because Redis uses asynchronous replication, it is not possible to ensure the replica actually received a given write, so there is always a window for data loss.

This is how the feature works:

* Redis replicas ping the master every second, acknowledging the amount of replication stream processed.
* Redis masters will remember the last time they received a ping from every replica.
* The user can configure a minimum number of replicas that have a lag not greater than a maximum number of seconds.

If there are at least N replicas, with a lag less than M seconds, then the write will be accepted.

You may think of it as a best effort data safety mechanism, where consistency is not ensured for a given write, but at least the time window for data loss is restricted to a given number of seconds. In general, bound data loss is better than unbound one.

If the conditions are not met, the master will instead reply with an error and the write will not be accepted.

There are two configuration parameters for this feature:

* `min-replicas-to-write <number of replicas>`
* `min-replicas-max-lag <number of seconds>`

For more information, please check the example `redis.conf` file shipped with the Redis source distribution.

## How Redis replication deals with expires on keys

Redis expires allow keys to have a limited time to live (TTL). Such a feature depends on the ability of an instance to count the time, however Redis replicas correctly replicate keys with expires, even when such keys are altered using Lua scripts.

To implement such a feature, Redis cannot rely on the ability of the master and replica to have synced clocks, since this is a problem that cannot be solved and would result in race conditions and diverging data sets, so Redis uses three main techniques in order to make the replication of expired keys able to work:

1. Replicas don't expire keys, instead they wait for masters to expire the keys. When a master expires a key (or evicts it because of LRU), it synthesizes a `DEL` command which is transmitted to all the replicas.
2. However because of master-driven expire, sometimes replicas may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. To deal with that, the replica uses its logical clock to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way replicas avoid reporting logically expired keys that still exist. In practical terms, an HTML fragments cache that uses replicas to scale will avoid returning items that are already older than the desired time to live.
3. During Lua scripts executions no key expiries are performed. As a Lua script runs, conceptually the time in
the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys expiring in the middle of a script, and is needed to send the same script to the replica in a way that is guaranteed to have the same effects in the data set.

Once a replica is promoted to a master it will start to expire keys independently, and will not require any help from its old master.

## Configuring replication in Docker and NAT

When Docker, or other types of containers using port forwarding, or Network Address Translation is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master `INFO` or `ROLE` commands output is scanned in order to discover replicas' addresses.

The problem is that the `ROLE` command, and the replication section of the `INFO` output, when issued into a master instance, will show replicas as having the IP address they use to connect to the master, which, in environments using NAT, may be different compared to the logical address of the replica instance (the one that clients should use to connect to replicas).

Similarly the replicas will be listed with the listening port configured into `redis.conf`, that may be different from the forwarded port in case the port is remapped.

In order to fix both issues, it is possible, since Redis 3.2.2, to force a replica to announce an arbitrary pair of IP and port to the master. The two configuration directives to use are:

    replica-announce-ip 5.5.5.5
    replica-announce-port 1234

And are documented in the example `redis.conf` of recent Redis distributions.

## The INFO and ROLE command

There are two Redis commands that provide a lot of information on the current replication parameters of master and replica instances. One is `INFO`. If the command is called with the `replication` argument as `INFO replication`, only information relevant to the replication is displayed. Another more computer-friendly command is `ROLE`, that provides the replication status of masters and replicas together with their replication offsets, list of connected replicas, and so forth.

## Partial sync after restarts and failovers

Since Redis 4.0, when an instance is promoted to master after a failover, it will still be able to perform a partial resynchronization with the replicas of the old master. To do so, the replica remembers the old replication ID and offset of its former master, so it can provide part of the backlog to the connecting replicas even if they ask for the old replication ID.

However the new replication ID of the promoted replica will be different, since it constitutes a different history of the data set. For example, the master can return available and can continue accepting writes for some time, so using the same replication ID in the promoted replica would violate the rule that a replication ID and offset pair identifies only a single data set.

Moreover, replicas, when powered off gently and restarted, are able to store in the RDB file the information needed in order to resync with their master. This is useful in case of upgrades. When this is needed, it is better to use the `SHUTDOWN` command in order to perform a `save & quit` operation on the replica.

It is not possible to partially sync a replica that restarted via the AOF file. However the instance may be turned to RDB persistence before shutting down, then it can be restarted, and finally AOF can be enabled again.

## Maxmemory on replicas

By default, a replica will ignore `maxmemory` (unless it is promoted to master after a failover, or manually). It means that the eviction of keys will be handled by the master, sending the `DEL` commands to the replica as keys evict on the master side.

This behavior ensures that masters and replicas stay consistent, which is usually what you want. However, if your replica is writable, or you want the replica to have a different memory setting, and you are sure all the writes performed to the replica are idempotent, then you may change this default (but be sure to understand what you are doing).

Note that since the replica by default does not evict, it may end up using more memory than what is set via `maxmemory` (since there are certain buffers that may be larger on the replica, or data structures may sometimes take more memory, and so forth). Make sure you monitor your replicas, and make sure they have enough memory to never hit a real out-of-memory condition before the master hits the configured `maxmemory` setting.

To change this behavior, you can allow a replica to not ignore the `maxmemory`. The configuration directive to use is:

    replica-ignore-maxmemory no
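Technique (2) from the expires section above — a replica masking logically expired keys on reads while waiting for the master's `DEL` — can be sketched as follows. This is a toy model with invented names, not Redis source code:

```python
# Sketch of expire technique (2): a replica never deletes a logically expired
# key on its own; read commands mask it until the master's synthesized DEL
# arrives. Illustration only, not Redis source code.
import time


class ReplicaKeyspace:
    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiry deadline (seconds)

    def apply_from_master(self, cmd, key, *args):
        """Apply a command propagated from the master (including the DEL it
        synthesizes when a key expires or is evicted)."""
        if cmd == "SET":
            value, ttl = args
            self.data[key] = value
            if ttl is not None:
                self.expires[key] = time.monotonic() + ttl
        elif cmd == "DEL":
            self.data.pop(key, None)
            self.expires.pop(key, None)

    def get(self, key):
        """Read path: report expired keys as missing without deleting them."""
        deadline = self.expires.get(key)
        if deadline is not None and time.monotonic() >= deadline:
            return None  # logically expired: masked, but still in memory
        return self.data.get(key)


r = ReplicaKeyspace()
r.apply_from_master("SET", "page:1", "<html>...</html>", 0.01)
time.sleep(0.02)
print(r.get("page:1"))     # None: masked on reads...
print("page:1" in r.data)  # True: ...but kept until the master's DEL arrives
```

This is why, as noted above, the key still counts toward memory usage and the key count on the replica until the master propagates the deletion.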
title: "High availability with Redis Sentinel"
linkTitle: "High availability with Sentinel"
weight: 4
description: High availability for non-clustered Redis
aliases: [
/topics/sentinel,
/docs/manual/sentinel,
/docs/manual/sentinel.md
]
---
Redis Sentinel provides high availability for Redis when not using [Redis Cluster](/docs/manual/scaling).
Redis Sentinel also provides other collateral tasks such as monitoring and
notifications, and acts as a configuration provider for clients.
This is the full list of Sentinel capabilities at a macroscopic level (i.e. the *big picture*):
* **Monitoring**. Sentinel constantly checks if your master and replica instances are working as expected.
* **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances.
* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting.
* **Configuration provider**. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.
## Sentinel as a distributed system
Redis Sentinel is a distributed system:
Sentinel itself is designed to run in a configuration where there are multiple Sentinel processes cooperating together. The advantages of having multiple Sentinel processes cooperating are the following:
1. Failure detection is performed when multiple Sentinels agree about the fact a given master is no longer available. This lowers the probability of false positives.
2. Sentinel works even if not all the Sentinel processes are working, making the system robust against failures. There is no fun in having a failover system which is itself a single point of failure, after all.
The sum of Sentinels, Redis instances (masters and replicas) and clients
connecting to Sentinel and Redis is also a larger distributed system with
specific properties. In this document concepts will be introduced gradually,
starting from the basic information needed in order to understand the basic
properties of Sentinel, up to more complex (and optional) information needed
in order to understand how exactly Sentinel works.
## Sentinel quick start
### Obtaining Sentinel
The current version of Sentinel is called **Sentinel 2**. It is a rewrite of
the initial Sentinel implementation using stronger and simpler-to-predict
algorithms (that are explained in this documentation).
A stable release of Redis Sentinel is shipped since Redis 2.8.
New developments are performed in the *unstable* branch, and new features
sometimes are back ported into the latest stable branch as soon as they are
considered to be stable.
Redis Sentinel version 1, shipped with Redis 2.6, is deprecated and should not be used.
### Running Sentinel
If you are using the `redis-sentinel` executable (or if you have a symbolic
link with that name to the `redis-server` executable) you can run Sentinel
with the following command line:
redis-sentinel /path/to/sentinel.conf
Otherwise you can directly use the `redis-server` executable, starting it in
Sentinel mode:
redis-server /path/to/sentinel.conf --sentinel
Both ways work the same.
However **it is mandatory** to use a configuration file when running Sentinel, as this file will be used by the system in order to save the current state that will be reloaded in case of restarts. Sentinel will simply refuse to start if no configuration file is given or if the configuration file path is not writable.
Sentinels by default run **listening for connections to TCP port 26379**, so
for Sentinels to work, port 26379 of your servers **must be open** to receive
connections from the IP addresses of the other Sentinel instances.
Otherwise Sentinels can't talk and can't agree about what to do, so failover
will never be performed.
### Fundamental things to know about Sentinel before deploying
1. You need at least three Sentinel instances for a robust deployment.
2. The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way. So for example different physical servers or Virtual Machines executed on different availability zones.
3. Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication. However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments, while there are other less secure ways to deploy it.
4. You need Sentinel support in your clients. Popular client libraries have Sentinel support, but not all.
5. There is no HA setup which is safe if you don't test it from time to time in development environments, or even better, if you can, in production environments, to verify that it works. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working).
6. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master. Check the [section about _Sentinel and Docker_](#sentinel-docker-nat-and-possible-issues) later in this document for more information.
### Configuring Sentinel
The Redis source distribution contains a file called `sentinel.conf`
that is a self-documented example configuration file you can use to
configure Sentinel, however a typical minimal configuration file looks like the
following:
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
sentinel monitor resque 192.168.1.3 6380 4
sentinel down-after-milliseconds resque 10000
sentinel failover-timeout resque 180000
sentinel parallel-syncs resque 5
You only need to specify the masters to monitor, giving each master (which
may have any number of replicas) a different name. There is no
need to specify replicas, which are auto-discovered. Sentinel will update the
configuration automatically with additional information about replicas (in
order to retain the information in case of restart). The configuration is
also rewritten every time a replica is promoted to master during a failover
and every time a new Sentinel is discovered.
The example configuration above basically monitors two sets of Redis
instances, each composed of a master and an undefined number of replicas.
One set of instances is called `mymaster`, and the other `resque`.
The meaning of the arguments of `sentinel monitor` statements is the following:
sentinel monitor <master-name> <ip> <port> <quorum>
For the sake of clarity, let's check line by line what the configuration
options mean:
The first line is used to tell Redis to monitor a master called *mymaster*,
that is at address 127.0.0.1 and port 6379, with a quorum of 2. Everything
is pretty obvious but the **quorum** argument:
* The **quorum** is the number of Sentinels that need to agree about the fact the master is not reachable, in order to really mark the master as failing, and eventually start a failover procedure if possible.
* However **the quorum is only used to detect the failure**. In order to actually perform a failover, one of the Sentinels needs to be elected leader for the failover and be authorized to proceed. This only happens with the vote of the **majority of the Sentinel processes**.
So for example if you have 5 Sentinel processes, and the quorum for a given
master set to the value of 2, this is what happens:
* If two Sentinels agree at the same time about the master being unreachable, one of the two will try to start a failover.
* If there are at least a total of three Sentinels reachable, the failover will be authorized and will actually start.
In practical terms this means during failures **Sentinel never starts a failover if the majority of Sentinel processes are unable to talk** (aka no failover in the minority partition).
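The interplay between the quorum (failure detection) and the majority (failover authorization) described above can be sketched as a simple predicate. This is a simplified model for illustration, not Sentinel's actual implementation:

```python
# Simplified model of the two thresholds described above: the quorum is needed
# to flag the master as failing, but a strict majority of ALL Sentinel
# processes must be reachable to authorize the failover. Illustration only.

def can_start_failover(total_sentinels, reachable_sentinels,
                       sentinels_seeing_master_down, quorum):
    detected = sentinels_seeing_master_down >= quorum
    # strict majority of all configured Sentinels, reachable or not
    authorized = reachable_sentinels >= total_sentinels // 2 + 1
    return detected and authorized


# 5 Sentinels, quorum 2 (the example above):
print(can_start_failover(5, 3, 2, quorum=2))  # True: detected, majority reachable
print(can_start_failover(5, 2, 2, quorum=2))  # False: minority partition
```

Note how the second case matches the rule above: even though the quorum is reached, a Sentinel in a minority partition can never get the failover authorized.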
### Other Sentinel options
The other options are almost always in the form:
sentinel <option_name> <master_name> <option_value>
And are used for the following purposes:
* `down-after-milliseconds` is the time in milliseconds an instance should not
be reachable (either it does not reply to our PINGs or it is replying with an
error) for a Sentinel to start thinking it is down.
* `parallel-syncs` sets the number of replicas that can be reconfigured to use
the new master after a failover at the same time. The lower the number, the
more time it will take for the failover process to complete, however if the
replicas are configured to serve old data, you may not want all the replicas to
re-synchronize with the master at the same time. While the replication
process is mostly non-blocking for a replica, there is a moment when it stops
in order to load the bulk data from the master. You may want to make sure only
one replica at a time is not reachable, by setting this option to the value of 1.
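The `down-after-milliseconds` check above amounts to a timeout on the last valid reply. A minimal sketch of this per-Sentinel "subjectively down" condition (the function name is invented, for illustration only):

```python
# Sketch of the down-after-milliseconds check described above: one Sentinel
# considers an instance down if it has not received a valid PING reply for
# longer than the configured period. Illustration only.

def is_subjectively_down(now_ms, last_valid_ping_reply_ms, down_after_ms):
    return (now_ms - last_valid_ping_reply_ms) > down_after_ms


now = 100_000
print(is_subjectively_down(now, now - 61_000, down_after_ms=60_000))  # True
print(is_subjectively_down(now, now - 5_000, down_after_ms=60_000))   # False
```

Each Sentinel evaluates this on its own; as explained earlier, it then takes agreement from `quorum` Sentinels to treat the master as objectively failing.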
Additional options are described in the rest of this document and
documented in the example `sentinel.conf` file shipped with the Redis
distribution.
Configuration parameters can be modified at runtime:
* Master-specific configuration parameters are modified using `SENTINEL SET`.
* Global configuration parameters are modified using `SENTINEL CONFIG SET`.
See the [_Reconfiguring Sentinel at runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
### Example Sentinel deployments
Now that you know the basic information about Sentinel, you may wonder where
you should place your Sentinel processes, how many Sentinel processes you need
and so forth. This section shows a few example deployments.
We use ASCII art in order to show you configuration examples in a *graphical*
format; this is what the different symbols mean:
+--------------------+
| This is a computer |
| or VM that fails |
| independently. We |
| call it a "box" |
+--------------------+
We write inside the boxes what they are running:
+-------------------+
| Redis master M1 |
| Redis Sentinel S1 |
+-------------------+
Different boxes are connected by lines, to show that they are able to talk:
+-------------+ +-------------+
| Sentinel S1 |---------------| Sentinel S2 |
+-------------+ +-------------+
Network partitions are shown as interrupted lines using slashes:
+-------------+ +-------------+
| Sentinel S1 |------ // ------| Sentinel S2 |
+-------------+ +-------------+
Also note that:
* Masters are called M1, M2, M3, ..., Mn.
* Replicas are called R1, R2, R3, ..., Rn (R stands for *replica*).
* Sentinels are called S1, S2, S3, ..., Sn.
* Clients are called C1, C2, C3, ..., Cn.
* When an instance changes role because of Sentinel actions, we put it inside square brackets, so [M1] means an instance that is now a master because of Sentinel intervention.
Note that we will never show **setups where just two Sentinels are used**, since
Sentinels always need **to talk with the majority** in order to start a
failover.
#### Example 1: just two Sentinels, DON'T DO THIS
+----+ +----+
| M1 |---------| R1 |
| S1 | | S2 |
+----+ +----+
Configuration: quorum = 1
* In this setup, if the master M1 fails, R1 will be promoted since the two Sentinels can reach agreement about the failure (obviously with quorum set to 1) and can also authorize a failover because the majority is two. So it may superficially appear to work, however check the next points to see why this setup is broken.
* If the box where M1 is running stops working, S1 stops working as well. The Sentinel running in the other box, S2, will not be able to authorize a failover, so the system will become unavailable.
Note that a majority is needed in order to order different failovers, and later propagate the latest configuration to all the Sentinels. Also note that the ability to failover in a single side of the above setup, without any agreement, would be very dangerous:
+----+ +------+
| M1 |----//-----| [M1] |
| S1 | | S2 |
+----+ +------+
In the above configuration we created two masters (assuming S2 could failover
without authorization) in a perfectly symmetrical way. Clients may write
indefinitely to both sides, and there is no way to understand when the
partition heals what configuration is the right one, in order to prevent
a *permanent split brain condition*.
So please **deploy at least three Sentinels in three different boxes** always.
#### Example 2: basic setup with three boxes
This is a very simple setup, that has the advantage to be simple to tune
for additional safety. It is based on three boxes, each box running both
a Redis process and a Sentinel process.
+----+
| M1 |
| S1 |
+----+
|
+----+ | +----+
| R2 |----+----| R3 |
| S2 | | S3 |
+----+ +----+
Configuration: quorum = 2
If the master M1 fails, S2 and S3 will agree about the failure and will
be able to authorize a failover, making clients able to continue.
In every Sentinel setup, as Redis uses asynchronous replication, there is
always the risk of losing some writes because a given acknowledged write
may not be able to reach the replica which is promoted to master. However in
the above setup there is a higher risk due to clients being partitioned away
with an old master, like in the following picture:
+----+
| M1 |
| S1 | <- C1 (writes will be lost)
+----+
|
/
/
+------+ | +----+
| [M2] |----+----| R3 |
| S2 | | S3 |
+------+ +----+
In this case a network partition isolated the old master M1, so the
replica R2 is promoted to master. However clients, like C1, that are
in the same partition as the old master, may continue to write data
to the old master. This data will be lost forever, since when the partition
heals, the master will be reconfigured as a replica of the new master,
discarding its data set.
This problem can be mitigated using the following Redis replication
feature, which allows a master to stop accepting writes if it detects that
it is no longer able to transfer its writes to the specified number of replicas.
min-replicas-to-write 1
min-replicas-max-lag 10
With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.
Using this configuration, the old Redis master M1 in the above example, will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master.
However there is no free lunch. With this refinement, if the two replicas are
down, the master will stop accepting writes. It's a trade off.
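The write-acceptance rule that `min-replicas-to-write` and `min-replicas-max-lag` enforce can be sketched as a simple count over per-replica lag. This is a simplified model (Redis actually tracks per-replica acknowledgement times from the periodic pings described earlier), with invented names:

```python
# Sketch of the min-replicas-to-write / min-replicas-max-lag rule described
# above: a master accepts a write only if enough replicas have acknowledged
# the replication stream recently. Illustration only, not Redis source code.

def master_accepts_writes(replica_lags_seconds, min_replicas_to_write,
                          min_replicas_max_lag):
    good = sum(1 for lag in replica_lags_seconds
               if lag <= min_replicas_max_lag)
    return good >= min_replicas_to_write


# One replica 2s behind, one unreachable for 30s; require 1 replica within 10s:
print(master_accepts_writes([2, 30], 1, 10))   # True
# Both replicas lagging beyond 10s: writes are rejected with an error:
print(master_accepts_writes([15, 30], 1, 10))  # False
```

The second case is the trade-off mentioned above: with both replicas down or lagging, the old master M1 stops accepting writes, bounding the data that a partitioned client like C1 can lose.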
#### Example 3: Sentinel in the client boxes
Sometimes we have only two Redis boxes available, one for the master and
one for the replica. The configuration in the example 2 is not viable in
that case, so we can resort to the following, where Sentinels are placed
where clients are:
+----+ +----+
| M1 |----+----| R1 |
| | | | |
+----+ | +----+
|
+------------+------------+
| | |
| | |
+----+ +----+ +----+
| C1 | | C2 | | C3 |
| S1 | | S2 | | S3 |
+----+ +----+ +----+
Configuration: quorum = 2
In this setup, the point of view of Sentinels is the same as that of the
clients: if a master is reachable by the majority of the clients, it is fine.

C1, C2, C3 here are generic clients; it does not mean that C1 identifies
a single client connected to Redis. It is more likely something like
an application server, a Rails app, or something like that.
If the box where M1 and S1 are running fails, the failover will happen
without issues, however it is easy to see that different network partitions
will result in different behaviors. For example, Sentinel will not be able
to fail over if the network between the clients and the Redis servers is
disconnected, since the Redis master and replica will both be unavailable.
Note that if C3 gets partitioned with M1 (hardly possible with
the network described above, but more likely possible with different
layouts, or because of failures at the software layer), we have a similar
issue as described in Example 2, with the difference that here we have
no way to break the symmetry, since there is just a replica and master, so
the master can't stop accepting queries when it is disconnected from its replica,
otherwise the master would never be available during replica failures.
So this is a valid setup but the setup in the Example 2 has advantages
such as the HA system of Redis running in the same boxes as Redis itself
which may be simpler to manage, and the ability to put a bound on the amount
of time a master in the minority partition can receive writes.
#### Example 4: Sentinel client side with less than three clients
The setup described in Example 3 cannot be used if there are less than
three boxes on the client side (for example three web servers). In this
case we need to resort to a mixed setup like the following:
                +----+         +----+
                | M1 |----+----| R1 |
                | S1 |    |    | S2 |
                +----+    |    +----+
                          |
                   +------+-----+
                   |            |
                   |            |
                 +----+       +----+
                 | C1 |       | C2 |
                 | S3 |       | S4 |
                 +----+       +----+

                 Configuration: quorum = 3
This is similar to the setup in Example 3, but here we run four Sentinels
in the four boxes we have available. If the master M1 becomes unavailable
the other three Sentinels will perform the failover.
In theory this setup works when removing the box where C2 and S4 are running and
setting the quorum to 2. However it is unlikely that we want HA on the
Redis side without having high availability in our application layer.
### Sentinel, Docker, NAT, and possible issues
Docker uses a technique called port mapping: programs running inside Docker
containers may be exposed with a different port compared to the one the
program believes to be using. This is useful in order to run multiple
containers using the same ports, at the same time, on the same server.
Docker is not the only software system where this happens: there are other
Network Address Translation setups where ports may be remapped, and sometimes
not only ports but also IP addresses.
Remapping ports and addresses creates issues with Sentinel in two ways:
1. Sentinel auto-discovery of other Sentinels no longer works, since it is based on *hello* messages where each Sentinel announces the IP address and port at which it is listening for connections. However Sentinels have no way to understand that an address or port is remapped, so each one announces information that is not correct for other Sentinels to connect.
2. Replicas are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master by checking the remote peer of the TCP connection, while the port is advertised by the replica itself during the handshake; however the port may be wrong for the same reason as exposed in point 1.
Since Sentinels auto-detect replicas using the master's `INFO` output,
the detected replicas will not be reachable, and Sentinel will never be able to
failover the master, since there are no good replicas from the point of view of
the system. So there is currently no way to monitor with Sentinel a set of
master and replica instances deployed with Docker, **unless you instruct Docker
to map the ports 1:1**.
For the first problem, in case you want to run a set of Sentinel
instances using Docker with forwarded ports (or any other NAT setup where ports
are remapped), you can use the following two Sentinel configuration directives
in order to force Sentinel to announce a specific set of IP and port:
    sentinel announce-ip <ip>
    sentinel announce-port <port>
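For example, if a Sentinel running inside a container is reachable from the outside at a given address and port (the values below are placeholders for illustration, not defaults), the two directives could be set as:

```
sentinel announce-ip 203.0.113.10
sentinel announce-port 26379
```

Other Sentinels will then use the announced address and port, rather than the remapped ones, when connecting to this instance.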
Note that Docker has the ability to run in *host networking mode* (check the `--net=host` option for more information). This should create no issues since ports are not remapped in this setup.
### IP Addresses and DNS names
Older versions of Sentinel did not support host names and required IP addresses to be specified everywhere.
Starting with version 6.2, Sentinel has *optional* support for host names.
**This capability is disabled by default. If you're going to enable DNS/hostnames support, please note:**
1. The name resolution configuration on your Redis and Sentinel nodes must be reliable and be able to resolve addresses quickly. Unexpected delays in address resolution may have a negative impact on Sentinel.
2. You should use hostnames everywhere and avoid mixing hostnames and IP addresses. To do that, use `replica-announce-ip <hostname>` and `sentinel announce-ip <hostname>` for all Redis and Sentinel instances, respectively.
Enabling the `resolve-hostnames` global configuration allows Sentinel to accept host names:
* As part of a `sentinel monitor` command
* As a replica address, if the replica uses a host name value for `replica-announce-ip`
Sentinel will accept host names as valid inputs and resolve them, but will still refer to IP addresses when announcing an instance, updating configuration files, etc.
Enabling the `announce-hostnames` global configuration makes Sentinel use host names instead. This affects replies to clients, values written in configuration files, the `REPLICAOF` command issued to replicas, etc.
This behavior may not be compatible with all Sentinel clients, as some may explicitly expect an IP address.
Using host names may be useful when clients use TLS to connect to instances and require a name rather than an IP address in order to perform certificate ASN matching.
## A quick tutorial
In the next sections of this document, all the details about [_Sentinel API_](#sentinel-api),
configuration and semantics will be covered incrementally. However for people
that want to play with the system ASAP, this section is a tutorial that shows
how to configure and interact with 3 Sentinel instances.
Here we assume that the instances are executed at port 5000, 5001, 5002.
We also assume that you have a running Redis master at port 6379 with a
replica running at port 6380. We will use the IPv4 loopback address 127.0.0.1
everywhere during the tutorial, assuming you are running the simulation
on your personal computer.
The three Sentinel configuration files should look like the following:
    port 5000
    sentinel monitor mymaster 127.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1
The other two configuration files will be identical but using 5001 and 5002
as port numbers.
A few things to note about the above configuration:
* The master set is called `mymaster`. It identifies the master and its replicas. Since each *master set* has a different name, Sentinel can monitor different sets of masters and replicas at the same time.
* The quorum was set to the value of 2 (last argument of `sentinel monitor` configuration directive).
* The `down-after-milliseconds` value is 5000 milliseconds, that is 5 seconds, so masters will be detected as failing as soon as Sentinel doesn't receive any reply to its pings within this amount of time.
Once you start the three Sentinels, you'll see a few messages they log, like:
    +monitor master mymaster 127.0.0.1 6379 quorum 2
This is a Sentinel event, and you can receive this kind of events via Pub/Sub
if you `SUBSCRIBE` to the event name as specified later in [_Pub/Sub Messages_ section](#pubsub-messages).
Sentinel generates and logs different events during failure detection and
failover.
Asking Sentinel about the state of a master
---
The most obvious thing to do with Sentinel to get started is to check if the
master it is monitoring is doing well:
    $ redis-cli -p 5000
    127.0.0.1:5000> sentinel master mymaster
     1) "name"
     2) "mymaster"
     3) "ip"
     4) "127.0.0.1"
     5) "port"
     6) "6379"
     7) "runid"
     8) "953ae6a589449c13ddefaee3538d356d287f509b"
     9) "flags"
    10) "master"
    11) "link-pending-commands"
    12) "0"
    13) "link-refcount"
    14) "1"
    15) "last-ping-sent"
    16) "0"
    17) "last-ok-ping-reply"
    18) "735"
    19) "last-ping-reply"
    20) "735"
    21) "down-after-milliseconds"
    22) "5000"
    23) "info-refresh"
    24) "126"
    25) "role-reported"
    26) "master"
    27) "role-reported-time"
    28) "532439"
    29) "config-epoch"
    30) "1"
    31) "num-slaves"
    32) "1"
    33) "num-other-sentinels"
    34) "2"
    35) "quorum"
    36) "2"
    37) "failover-timeout"
    38) "60000"
    39) "parallel-syncs"
    40) "1"
As you can see, it prints a lot of information about the master. There are
a few items of particular interest to us:
1. `num-other-sentinels` is 2, so we know the Sentinel already detected two more Sentinels for this master. If you check the logs you'll see the `+sentinel` events generated.
2. `flags` is just `master`. If the master was down we could expect to see `s_down` or `o_down` flag as well here.
3. `num-slaves` is correctly set to 1, so Sentinel also detected that there is an attached replica to our master.
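The reply above is a flat array of alternating field names and values, which is easy to turn into a map on the client side. The following is a minimal sketch (a hypothetical helper, not part of any official client library), assuming the reply arrives as a flat list of strings:

```python
def reply_to_dict(flat_reply):
    """Convert a flat [field1, value1, field2, value2, ...] list,
    as returned by SENTINEL MASTER, into a dictionary."""
    it = iter(flat_reply)
    # zip(it, it) pairs up consecutive elements of the same iterator.
    return dict(zip(it, it))

# Example with a truncated SENTINEL MASTER reply:
reply = ["name", "mymaster", "ip", "127.0.0.1", "port", "6379", "flags", "master"]
master = reply_to_dict(reply)
print(master["ip"], master["port"])  # 127.0.0.1 6379
```

Note that all values arrive as strings; numeric fields such as `port` or `quorum` must be converted explicitly if needed.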
In order to explore more about this instance, you may want to try the following
two commands:
    SENTINEL replicas mymaster
    SENTINEL sentinels mymaster
The first will provide similar information about the replicas connected to the
master, and the second about the other Sentinels.
Obtaining the address of the current master
---
As we already specified, Sentinel also acts as a configuration provider for
clients that want to connect to a set of master and replicas. Because of
possible failovers or reconfigurations, clients have no idea about who is
the currently active master for a given set of instances, so Sentinel exports
an API to ask this question:
    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
    1) "127.0.0.1"
    2) "6379"
### Testing the failover
At this point our toy Sentinel deployment is ready to be tested. We can
just kill our master and check if the configuration changes. To do so
we can just do:
    redis-cli -p 6379 DEBUG sleep 30
This command will make our master no longer reachable, sleeping for 30 seconds.
It basically simulates a master hanging for some reason.
If you check the Sentinel logs, you should be able to see a lot of action:
1. Each Sentinel detects the master is down with an `+sdown` event.
2. This event is later escalated to `+odown`, which means that multiple Sentinels agree about the fact the master is not reachable.
3. Sentinels vote for a Sentinel that will start the first failover attempt.
4. The failover happens.
If you ask again for the current master address of `mymaster`, you should
eventually get a different reply this time:
    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
    1) "127.0.0.1"
    2) "6380"
So far so good... At this point you may jump to create your Sentinel deployment,
or you can read more to understand all the Sentinel commands and internals.
## Sentinel API
Sentinel provides an API in order to inspect its state, check the health
of monitored masters and replicas, subscribe in order to receive specific
notifications, and change the Sentinel configuration at run time.
By default Sentinel runs using TCP port 26379 (note that 6379 is the normal
Redis port). Sentinels accept commands using the Redis protocol, so you can
use `redis-cli` or any other unmodified Redis client in order to talk with
Sentinel.
It is possible to directly query a Sentinel to check what is the state of
the monitored Redis instances from its point of view, to see what other
Sentinels it knows, and so forth. Alternatively, using Pub/Sub, it is possible
to receive *push style* notifications from Sentinels, every time some event
happens, like a failover, or an instance entering an error condition, and
so forth.
### Sentinel commands
The `SENTINEL` command is the main API for Sentinel. The following is the list of its subcommands (the minimal version is noted where applicable):
* **SENTINEL CONFIG GET `<name>`** (`>= 6.2`) Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** (`>= 6.2`) Set the value of a global Sentinel configuration parameter.
* **SENTINEL CKQUORUM `<master name>`** Check if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok.
* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.
* **SENTINEL FAILOVER `<master name>`** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
* **SENTINEL GET-MASTER-ADDR-BY-NAME `<master name>`** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted replica.
* **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas.
* **SENTINEL IS-MASTER-DOWN-BY-ADDR `<ip>` `<port>` `<current-epoch>` `<runid>`** Check if the master specified by ip:port is down from the current Sentinel's point of view. This command is mostly for internal use.
* **SENTINEL MASTER `<master name>`** Show the state and info of the specified master.
* **SENTINEL MASTERS** Show a list of monitored masters and their state.
* **SENTINEL MONITOR** Start Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL MYID** (`>= 6.2`) Return the ID of the Sentinel instance.
* **SENTINEL PENDING-SCRIPTS** This command returns information about pending scripts.
* **SENTINEL REMOVE** Stop Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL REPLICAS `<master name>`** (`>= 5.0`) Show a list of replicas for this master, and their state.
* **SENTINEL SENTINELS `<master name>`** Show a list of sentinel instances for this master, and their state.
* **SENTINEL SET** Set Sentinel's monitoring configuration. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL SIMULATE-FAILURE (crash-after-election|crash-after-promotion|help)** (`>= 3.2`) This command simulates different Sentinel crash scenarios.
* **SENTINEL RESET `<pattern>`** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master.
For connection management and administration purposes, Sentinel supports the following subset of Redis' commands:
* **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](/topics/acl) documentation page and the [_Sentinel Access Control List authentication_](#sentinel-access-control-list-authentication).
* **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [_Configuring Sentinel instances with authentication_ section](#configuring-sentinel-instances-with-authentication).
* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages.
* **COMMAND** (`>= 6.2`) This command returns information about commands. For more information refer to the `COMMAND` command and its various subcommands.
* **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command.
* **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command.
* **PING** This command simply returns PONG.
* **ROLE** This command returns the string "sentinel" and a list of monitored masters. For more information refer to the `ROLE` command.
* **SHUTDOWN** Shut down the Sentinel instance.
Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands. Refer to the [_Pub/Sub Messages_ section](#pubsub-messages) for more details.
### Reconfiguring Sentinel at Runtime
Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple Sentinels you should apply the changes to all of your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network.
The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance.
* **SENTINEL MONITOR `<name>` `<ip>` `<port>` `<quorum>`** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as `ip`; you need to provide an IPv4 or IPv6 address.
* **SENTINEL REMOVE `<name>`** is used in order to remove the specified master: the master will no longer be monitored, and will be totally removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
* **SENTINEL SET `<name>` [`<option>` `<value>` ...]** The SET command is very similar to the `CONFIG SET` command of Redis, and is used in order to change configuration parameters of a specific master. Multiple option / value pairs can be specified (or none at all). All the configuration parameters that can be configured via `sentinel.conf` are also configurable using the SET command.
The following is an example of the `SENTINEL SET` command used to modify the `down-after-milliseconds` configuration of a master called `objects-cache-master`:
    SENTINEL SET objects-cache-master down-after-milliseconds 1000
As already stated, `SENTINEL SET` can be used to set all the configuration parameters that are settable in the startup configuration file. Moreover it is possible to change just the master quorum configuration without removing and re-adding the master with `SENTINEL REMOVE` followed by `SENTINEL MONITOR`, but simply using:
    SENTINEL SET objects-cache-master quorum 5
Note that there is no equivalent GET command since `SENTINEL MASTER` provides all the configuration parameters in a simple to parse format (as a field/value pairs array).
Starting with Redis version 6.2, Sentinel also allows getting and setting global configuration parameters which were only supported in the configuration file prior to that.
* **SENTINEL CONFIG GET `<name>`** Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** Set the value of a global Sentinel configuration parameter.
Global parameters that can be manipulated include:
* `resolve-hostnames`, `announce-hostnames`. See [_IP addresses and DNS names_](#ip-addresses-and-dns-names).
* `announce-ip`, `announce-port`. See [_Sentinel, Docker, NAT, and possible issues_](#sentinel-docker-nat-and-possible-issues).
* `sentinel-user`, `sentinel-pass`. See [_Configuring Sentinel instances with authentication_](#configuring-sentinel-instances-with-authentication).
### Adding or removing Sentinels
Adding a new Sentinel to your deployment is a simple process because of the
auto-discovery mechanism implemented by Sentinel. All you need to do is to
start the new Sentinel configured to monitor the currently active master.
Within 10 seconds the Sentinel will acquire the list of other Sentinels and
the set of replicas attached to the master.
If you need to add multiple Sentinels at once, it is suggested to add them
one after the other, waiting for all the other Sentinels to already know
about the first one before adding the next. This is useful in order to still
guarantee that majority can be achieved only on one side of a partition,
in case failures should happen in the process of adding new Sentinels.
This can be easily achieved by adding every new Sentinel with a 30 second delay, and during the absence of network partitions.
At the end of the process it is possible to use the command
`SENTINEL MASTER mastername` in order to check if all the Sentinels agree about
the total number of Sentinels monitoring the master.
Removing a Sentinel is a bit more complex: **Sentinels never forget already seen
Sentinels**, even if they are not reachable for a long time, since we don't
want to dynamically change the majority needed to authorize a failover and
the creation of a new configuration number. So in order to remove a Sentinel
the following steps should be performed in absence of network partitions:
1. Stop the Sentinel process of the Sentinel you want to remove.
2. Send a `SENTINEL RESET *` command to all the other Sentinel instances (instead of `*` you can use the exact master name if you want to reset just a single master). One after the other, waiting at least 30 seconds between instances.
3. Check that all the Sentinels agree about the number of Sentinels currently active, by inspecting the output of `SENTINEL MASTER mastername` of every Sentinel.
### Removing the old master or unreachable replicas
Sentinels never forget about replicas of a given master, even when they are
unreachable for a long time. This is useful, because Sentinels should be able
to correctly reconfigure a returning replica after a network partition or a
failure event.
Moreover, after a failover, the failed over master is virtually added as a
replica of the new master; this way it will be reconfigured to replicate from
the new master as soon as it becomes available again.
However sometimes you want to remove a replica (that may be the old master)
forever from the list of replicas monitored by Sentinels.
In order to do this, you need to send a `SENTINEL RESET mastername` command
to all the Sentinels: they'll refresh the list of replicas within the next
10 seconds, only adding the ones listed as correctly replicating from the
current master `INFO` output.
### Pub/Sub messages
A client can use a Sentinel as a Redis-compatible Pub/Sub server
(but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to
channels and get notified about specific events.
The channel name is the same as the name of the event. For instance the
channel named `+sdown` will receive all the notifications related to instances
entering an `SDOWN` (SDOWN means the instance is no longer reachable from
the point of view of the Sentinel you are querying) condition.
To get all the messages simply subscribe using `PSUBSCRIBE *`.
The following is a list of channels and message formats you can receive using
this API. The first word is the channel / event name, the rest is the format of the data.
Note: where *instance details* is specified it means that the following arguments are provided to identify the target instance:
    <instance-type> <name> <ip> <port> @ <master-name> <master-ip> <master-port>
The part identifying the master (from the @ argument to the end) is optional
and is only specified if the instance is not a master itself.
* **+reset-master** `<instance details>` -- The master was reset.
* **+slave** `<instance details>` -- A new replica was detected and attached.
* **+failover-state-reconf-slaves** `<instance details>` -- Failover state changed to `reconf-slaves` state.
* **+failover-detected** `<instance details>` -- A failover started by another Sentinel or any other external entity was detected (An attached replica turned into a master).
* **+slave-reconf-sent** `<instance details>` -- The leader sentinel sent the `REPLICAOF` command to this instance in order to reconfigure it for the new master.
* **+slave-reconf-inprog** `<instance details>` -- The replica being reconfigured showed to be a replica of the new master ip:port pair, but the synchronization process is not yet complete.
* **+slave-reconf-done** `<instance details>` -- The replica is now synchronized with the new master.
* **-dup-sentinel** `<instance details>` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted).
* **+sentinel** `<instance details>` -- A new sentinel for this master was detected and attached.
* **+sdown** `<instance details>` -- The specified instance is now in Subjectively Down state.
* **-sdown** `<instance details>` -- The specified instance is no longer in Subjectively Down state.
* **+odown** `<instance details>` -- The specified instance is now in Objectively Down state.
* **-odown** `<instance details>` -- The specified instance is no longer in Objectively Down state.
* **+new-epoch** `<instance details>` -- The current epoch was updated.
* **+try-failover** `<instance details>` -- New failover in progress, waiting to be elected by the majority.
* **+elected-leader** `<instance details>` -- Won the election for the specified epoch, can do the failover.
* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable replica for promotion.
* **no-good-slave** `<instance details>` -- There is no good replica to promote. Currently we'll try again after some time, but probably this will change and the state machine will abort the failover entirely in this case.
* **selected-slave** `<instance details>` -- We found the specified good replica to promote.
* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted replica as master, waiting for it to switch.
* **failover-end-for-timeout** `<instance details>` -- The failover terminated for timeout, replicas will eventually be configured to replicate with the new master anyway.
* **failover-end** `<instance details>` -- The failover terminated with success. All the replicas appear to be reconfigured to replicate with the new master.
* **switch-master** `<master name> <oldip> <oldport> <newip> <newport>` -- The master's new IP address and port are the specified ones after a configuration change. This is **the message most external users are interested in**.
* **+tilt** -- Tilt mode entered.
* **-tilt** -- Tilt mode exited.
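As an illustration of the *instance details* format described above, the following is a small, hypothetical parser sketch (not part of any client library) that splits a payload into its components:

```python
def parse_instance_details(payload):
    """Parse an instance-details Pub/Sub payload of the form
    '<type> <name> <ip> <port> [@ <master-name> <master-ip> <master-port>]'.
    The part after '@' is present only when the instance is not a master."""
    parts = payload.split()
    info = {
        "type": parts[0],
        "name": parts[1],
        "ip": parts[2],
        "port": int(parts[3]),
    }
    if len(parts) > 4 and parts[4] == "@":
        info["master"] = {
            "name": parts[5],
            "ip": parts[6],
            "port": int(parts[7]),
        }
    return info

# A hypothetical +sdown payload for a replica:
msg = "slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379"
print(parse_instance_details(msg)["master"]["name"])  # mymaster
```

This does not cover the `switch-master` message, which uses its own format as shown in the list above.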
### Handling of -BUSY state
The -BUSY error is returned by a Redis instance when a Lua script has been running
for more time than the configured Lua script time limit. When this happens, before
triggering a failover, Redis Sentinel will try to send a `SCRIPT KILL`
command, which will only succeed if the script was read-only.
If the instance is still in an error condition after this try, it will
eventually be failed over.
Replica priority
---
Redis instances have a configuration parameter called `replica-priority`.
This information is exposed by Redis replica instances in their `INFO` output,
and Sentinel uses it in order to pick a replica among the ones that can be
used in order to failover a master:
1. If the replica priority is set to 0, the replica is never promoted to master.
2. Replicas with a *lower* priority number are preferred by Sentinel.
For example if there is a replica S1 in the same data center as the current
master, and another replica S2 in another data center, it is possible to set
S1 with a priority of 10 and S2 with a priority of 100, so that if the master
fails and both S1 and S2 are available, S1 will be preferred.
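The two rules above can be sketched as follows. This is a simplified illustration only: real Sentinel also takes the replication offset and run ID into account when selecting a replica, as described later in this document.

```python
def pick_replica(replicas):
    """Pick the preferred replica for promotion based on replica-priority:
    priority 0 means 'never promote'; lower values are preferred.
    Simplified: ignores replication offset and run ID tie-breaking."""
    candidates = [r for r in replicas if r["priority"] > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda r: r["priority"])

replicas = [
    {"name": "S1", "priority": 10},   # same data center as the master
    {"name": "S2", "priority": 100},  # remote data center
    {"name": "S3", "priority": 0},    # never promote
]
print(pick_replica(replicas)["name"])  # S1
```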
For more information about the way replicas are selected, please check the [_Replica selection and priority_ section](#replica-selection-and-priority) of this documentation.
### Sentinel and Redis authentication
When the master is configured to require authentication from clients,
as a security measure, replicas need to also be aware of the credentials in
order to authenticate with the master and create the master-replica connection
used for the asynchronous replication protocol.
### Redis Access Control List authentication
Starting with Redis 6, user authentication and permissions are managed with the [Access Control List (ACL)](/topics/acl).
In order for Sentinels to connect to Redis server instances when they are
configured with ACL, the Sentinel configuration must include the
following directives:
    sentinel auth-user <master-name> <username>
    sentinel auth-pass <master-name> <password>
Where `<username>` and `<password>` are the username and password for accessing the group's instances. These credentials should be provisioned on all of the group's Redis instances with the minimal control permissions. For example:
    127.0.0.1:6379> ACL SETUSER sentinel-user ON >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill
### Redis password-only authentication
Until Redis 6, authentication is achieved using the following configuration directives:
* `requirepass` in the master, in order to set the authentication password, and to make sure the instance will not process requests for non authenticated clients.
* `masterauth` in the replicas in order for the replicas to authenticate with the master in order to correctly replicate data from it.
When Sentinel is used, there is not a single master: after a failover,
replicas may play the role of masters, and old masters can be reconfigured in
order to act as replicas, so what you want to do is set the above directives
in all your instances, both masters and replicas.
This is also usually a sane setup since you don't want to protect
data only in the master while having the same data accessible in the replicas.
However, in the uncommon case where you need a replica that is accessible
without authentication, you can still do it by setting up **a replica priority
of zero**, to prevent this replica from being promoted to master, and
configuring in this replica only the `masterauth` directive, without
using the `requirepass` directive, so that data will be readable by
unauthenticated clients.
In order for Sentinels to connect to Redis server instances when they are
configured with `requirepass`, the Sentinel configuration must include the
`sentinel auth-pass` directive, in the format:
    sentinel auth-pass <master-name> <password>
Configuring Sentinel instances with authentication
---
Sentinel instances themselves can be secured by requiring clients to authenticate via the `AUTH` command. Starting with Redis 6.2, the [Access Control List (ACL)](/topics/acl) is available, whereas previous versions (starting with Redis 5.0.1) support password-only authentication.
Note that Sentinel's authentication configuration should be **applied to each of the instances** in your deployment, and **all instances should use the same configuration**. Furthermore, ACL and password-only authentication should not be used together.
### Sentinel Access Control List authentication
The first step in securing a Sentinel instance with ACL is preventing any unauthorized access to it. To do that, you'll need to disable the default superuser (or at the very least set it up with a strong password) and create a new one and allow it access to Pub/Sub channels:
    127.0.0.1:5000> ACL SETUSER admin ON >admin-password allchannels +@all
    OK
    127.0.0.1:5000> ACL SETUSER default off
    OK
The default user is used by Sentinel to connect to other instances. You can provide the credentials of another superuser with the following configuration directives:
    sentinel sentinel-user <username>
    sentinel sentinel-pass <password>
Where `<username>` and `<password>` are the Sentinel's superuser and password, respectively (e.g. `admin` and `admin-password` in the example above).
Lastly, for authenticating incoming client connections, you can create a Sentinel restricted user profile such as the following:
    127.0.0.1:5000> ACL SETUSER sentinel-user ON >user-password -@all +auth +client|getname +client|id +client|setname +command +hello +ping +role +sentinel|get-master-addr-by-name +sentinel|master +sentinel|myid +sentinel|replicas +sentinel|sentinels
Refer to the documentation of your Sentinel client of choice for further information.
### Sentinel password-only authentication
To use Sentinel with password-only authentication, add the `requirepass` configuration directive to **all** your Sentinel instances as follows:
requirepass "your_password_here"
When configured this way, Sentinels will do two things:
1. Clients will be required to provide this password in order to send commands to the Sentinel, just as the `requirepass` directive works for Redis in general.
2. Moreover, the same password configured for accessing the local Sentinel will be used by this Sentinel instance to authenticate to all the other Sentinel instances it connects to.
This means that **you will have to configure the same `requirepass` password in all the Sentinel instances**. This way every Sentinel can talk with every other Sentinel without needing to configure, for each Sentinel, the password to access every other Sentinel, which would be very impractical.
Before using this configuration, make sure your client library can send the `AUTH` command to Sentinel instances.
### Sentinel clients implementation
Sentinel requires explicit client support, unless the system is configured to execute a script that performs a transparent redirection of all the requests to the new master instance (virtual IP or other similar systems). The topic of client libraries implementation is covered in the document [Sentinel clients guidelines](/topics/sentinel-clients).
## More advanced concepts
In the following sections we'll cover a few details about how Sentinel works,
without resorting to implementation details and algorithms that will be
covered in the final part of this document.
### SDOWN and ODOWN failure state
Redis Sentinel has two different concepts of *being down*. One is called
a *Subjectively Down* condition (SDOWN) and is a down condition that is
local to a given Sentinel instance. The other is called the *Objectively Down*
condition (ODOWN) and is reached when enough Sentinels (at least the
number configured as the `quorum` parameter of the monitored master) report
an SDOWN condition, as confirmed by feedback obtained from other Sentinels using
the `SENTINEL is-master-down-by-addr` command.
From the point of view of a Sentinel, an SDOWN condition is reached when it
does not receive a valid reply to PING requests for the number of milliseconds
specified by the `down-after-milliseconds` configuration parameter.
An acceptable reply to PING is one of the following:
* PING replied with +PONG.
* PING replied with -LOADING error.
* PING replied with -MASTERDOWN error.
Any other reply (or no reply at all) is considered not valid.
However note that **a logical master that advertises itself as a replica in
the INFO output is considered to be down**.
Note that SDOWN requires that no acceptable reply is received for the whole
interval configured, so for instance if the interval is 30000 milliseconds
(30 seconds) and we receive an acceptable ping reply every 29 seconds, the
instance is considered to be working.
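The timing rule above can be sketched as a small state tracker. This is an illustrative model, not Sentinel's actual implementation; the `PingTracker` class and the 30-second `DOWN_AFTER_MS` value are assumptions chosen for the example:

```python
DOWN_AFTER_MS = 30000  # down-after-milliseconds from sentinel.conf

# Replies that reset the SDOWN timer, as listed above.
ACCEPTABLE = {"+PONG", "-LOADING", "-MASTERDOWN"}

class PingTracker:
    """Remembers the timestamp of the last acceptable PING reply."""
    def __init__(self, now_ms: int):
        self.last_ok_ms = now_ms

    def on_reply(self, reply: str, now_ms: int) -> None:
        # Only an acceptable reply resets the timer; any other reply
        # (or no reply at all) does not count.
        if reply in ACCEPTABLE:
            self.last_ok_ms = now_ms

    def is_sdown(self, now_ms: int) -> bool:
        # SDOWN requires the whole interval to pass with no acceptable reply.
        return (now_ms - self.last_ok_ms) > DOWN_AFTER_MS

t = PingTracker(now_ms=0)
t.on_reply("+PONG", now_ms=29000)      # a reply every 29s keeps the instance up
print(t.is_sdown(now_ms=30000))        # False: an acceptable reply arrived in time
t.on_reply("-ERR busy", now_ms=45000)  # an error reply does not reset the timer
print(t.is_sdown(now_ms=60000))        # True: >30s since the last acceptable reply
```

Passing timestamps explicitly keeps the sketch deterministic; the real Sentinel reads the system clock on every timer tick.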
SDOWN is not enough to trigger a failover: it only means a single Sentinel
believes a Redis instance is not available. To trigger a failover, the
ODOWN state must be reached.
To switch from SDOWN to ODOWN no strong consensus algorithm is used, but
just a form of gossip: if a given Sentinel gets reports that a master
is not working from enough Sentinels **in a given time range**, the SDOWN is
promoted to ODOWN. If this acknowledgment later goes missing, the flag is cleared.
A more strict authorization that uses an actual majority is required in
order to really start the failover, but no failover can be triggered without
reaching the ODOWN state.
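The gossip-style promotion from SDOWN to ODOWN can be sketched as follows. This is a simplified model: the quorum value and the freshness window (`VALIDITY_MS`) are assumed example values, not the exact constants Sentinel uses internally:

```python
QUORUM = 2           # quorum configured for the monitored master
VALIDITY_MS = 10000  # assumed window: reports older than this are ignored

def is_odown(local_sdown: bool, reports: dict, now_ms: int) -> bool:
    """ODOWN: this Sentinel sees SDOWN itself AND, within a recent time
    window, enough other Sentinels reported the master down as well.
    `reports` maps a Sentinel runid to the time of its last down report."""
    if not local_sdown:
        return False
    fresh = sum(1 for t in reports.values() if now_ms - t <= VALIDITY_MS)
    return 1 + fresh >= QUORUM  # +1 counts our own SDOWN view

# Sentinel s2 reported the master down 3s ago; s3's report is 15s old (stale).
reports = {"s2-runid": 7000, "s3-runid": -5000}
print(is_odown(True, reports, now_ms=10000))  # True: own view + fresh s2 report
```

If fresh reports stop arriving, the count drops below the quorum and the ODOWN flag clears, mirroring the behavior described above.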
The ODOWN condition **only applies to masters**. For other kinds of instances
Sentinel is not required to act, so the ODOWN state is never reached for replicas
and other Sentinels: only SDOWN is.

However SDOWN also has semantic implications. For example, a replica in SDOWN
state is not selected for promotion by a Sentinel performing a failover.
### Sentinels and replicas auto discovery
Sentinels stay connected with other Sentinels in order to reciprocally
check the availability of each other, and to exchange messages. However you
don't need to configure a list of other Sentinel addresses in every Sentinel
instance you run, as Sentinel uses the Redis instances Pub/Sub capabilities
in order to discover the other Sentinels that are monitoring the same masters
and replicas.
This feature is implemented by sending *hello messages* into the channel named
`__sentinel__:hello`.
Similarly you don't need to configure the list of replicas attached
to a master, as Sentinel will auto discover this list by querying Redis.
* Every Sentinel publishes a message to every monitored master and replica Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its presence with ip, port, runid.
* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master and replica, looking for unknown sentinels. When new sentinels are detected, they are added as sentinels of this master.
* Hello messages also include the full current configuration of the master. If the receiving Sentinel has a configuration for a given master which is older than the one received, it updates to the new configuration immediately.
* Before adding a new sentinel to a master a Sentinel always checks if there is already a sentinel with the same runid or the same address (ip and port pair). In that case all the matching sentinels are removed, and the new one is added.
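The discovery steps above can be sketched in a few lines. The payload layout is illustrative (the real hello message carries the announcing Sentinel's address and runid plus the master's name, address, and configuration epoch); field order and names here are assumptions for the example:

```python
def parse_hello(payload: str) -> dict:
    # Illustrative comma-separated hello payload:
    # sentinel ip, port, runid, epoch, master name, master ip/port, master epoch
    ip, port, runid, epoch, mname, mip, mport, mepoch = payload.split(",")
    return {"ip": ip, "port": int(port), "runid": runid,
            "epoch": int(epoch), "master": (mname, mip, int(mport)),
            "master_epoch": int(mepoch)}

def add_sentinel(known: list, new: dict) -> list:
    # Before adding, drop any known sentinel with the same runid or the
    # same ip:port pair, then append the newcomer.
    kept = [s for s in known
            if s["runid"] != new["runid"]
            and (s["ip"], s["port"]) != (new["ip"], new["port"])]
    return kept + [new]

hello = parse_hello("10.0.0.5,26379,abc123,5,mymaster,10.0.0.1,6379,5")
known = [{"runid": "old-run", "ip": "10.0.0.5", "port": 26379}]
known = add_sentinel(known, hello)  # old entry at same address is replaced
```

The dedup-by-runid-or-address rule is what lets a restarted Sentinel (new runid, same address) cleanly replace its stale entry.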
### Sentinel reconfiguration of instances outside the failover procedure
Even when no failover is in progress, Sentinels will always try to set the
current configuration on monitored instances. Specifically:
* Replicas (according to the current configuration) that claim to be masters, will be configured as replicas to replicate with the current master.
* Replicas connected to a wrong master, will be reconfigured to replicate with the right master.
For Sentinels to reconfigure replicas, the wrong configuration must be observed for a period of time greater than the period used to broadcast new configurations.
This prevents Sentinels with a stale configuration (for example because they just rejoined from a partition) from trying to change the replica configuration before receiving an update.
Also note how the semantics of always trying to impose the current configuration makes the failover more resistant to partitions:
* Masters that were failed over are reconfigured as replicas when they become available again.
* Replicas that were partitioned away are reconfigured once reachable again.
The important lesson to remember about this section is: **Sentinel is a system where each process will always try to impose the last logical configuration to the set of monitored instances**.
### Replica selection and priority
When a Sentinel instance is ready to perform a failover, since the master
is in `ODOWN` state and the Sentinel received the authorization to failover
from the majority of the Sentinel instances known, a suitable replica needs
to be selected.
The replica selection process evaluates the following information about replicas:
1. Disconnection time from the master.
2. Replica priority.
3. Replication offset processed.
4. Run ID.
A replica that is found to be disconnected from the master for more than ten
times the configured master timeout (the `down-after-milliseconds` option), plus
the time the master has also been unavailable from the point of view of the
Sentinel performing the failover, is considered unsuitable for the failover
and is skipped.
In more rigorous terms, a replica whose `INFO` output suggests it has been
disconnected from the master for more than:
(down-after-milliseconds * 10) + milliseconds_since_master_is_in_SDOWN_state
is considered to be unreliable and is disregarded entirely.
The replica selection considers only the replicas that passed the above test,
and sorts them based on the criteria above, in the following order.
1. The replicas are sorted by `replica-priority` as configured in the `redis.conf` file of the Redis instance. A lower priority will be preferred.
2. If the priority is the same, the replication offset processed by the replica is checked, and the replica that received more data from the master is selected.
3. If multiple replicas have the same priority and processed the same data from the master, a further check is performed, selecting the replica with the lexicographically smaller run ID. Having a lower run ID is not a real advantage for a replica, but is useful in order to make the process of replica selection more deterministic, instead of resorting to select a random replica.
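The filter and three-way sort above can be sketched in a few lines of Python. This is an illustrative model of the selection logic, not Sentinel's actual code; the replica dictionaries and field names are assumptions for the example:

```python
DOWN_AFTER_MS = 30000  # down-after-milliseconds configured for the master

def select_replica(replicas, master_sdown_ms):
    """Drop replicas disconnected too long, then sort by
    (priority ascending, offset descending, runid ascending)."""
    max_disconnect = DOWN_AFTER_MS * 10 + master_sdown_ms
    eligible = [r for r in replicas
                if r["priority"] != 0               # priority 0: never promote
                and r["disconnect_ms"] <= max_disconnect]
    if not eligible:
        return None
    eligible.sort(key=lambda r: (r["priority"], -r["offset"], r["runid"]))
    return eligible[0]

replicas = [
    {"runid": "bbb", "priority": 100, "offset": 500, "disconnect_ms": 1000},
    {"runid": "aaa", "priority": 100, "offset": 500, "disconnect_ms": 1000},
    {"runid": "ccc", "priority": 10,  "offset": 100, "disconnect_ms": 999999},
]
# "ccc" has the best priority but is disqualified by its disconnection time;
# "aaa" and "bbb" tie on priority and offset, so the smaller runid wins.
print(select_replica(replicas, master_sdown_ms=5000)["runid"])  # "aaa"
```

Negating the offset in the sort key expresses "more replicated data is better" while keeping a single ascending sort.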
In most cases, `replica-priority` does not need to be set explicitly so all
instances will use the same default value. If there is a particular failover
preference, `replica-priority` must be set on all instances, including masters,
as a master may become a replica at some future point in time - and it will then
need the proper `replica-priority` settings.
A Redis instance can be configured with a special `replica-priority` of zero
in order to be **never selected** by Sentinels as the new master.
However a replica configured in this way will still be reconfigured by
Sentinels in order to replicate with the new master after a failover, the
only difference is that it will never become a master itself.
## Algorithms and internals
In the following sections we will explore the details of Sentinel behavior.
It is not strictly needed for users to be aware of all the details, but a
deep understanding of Sentinel may help to deploy and operate Sentinel in
a more effective way.
### Quorum
The previous sections showed that every master monitored by Sentinel is associated with a configured **quorum**. It specifies the number of Sentinel processes
that need to agree about the unreachability or error condition of the master in
order to trigger a failover.
However, after the failover is triggered, in order for the failover to actually be performed, **at least a majority of Sentinels must authorize the Sentinel to
failover**. Sentinel never performs a failover in the partition where a
minority of Sentinels exist.
Let's try to make things a bit more clear:
* Quorum: the number of Sentinel processes that need to detect an error condition in order for a master to be flagged as **ODOWN**.
* The failover is triggered by the **ODOWN** state.
* Once the failover is triggered, the Sentinel trying to failover is required to ask for authorization to a majority of Sentinels (or more than the majority if the quorum is set to a number greater than the majority).
The difference may seem subtle but is actually quite simple to understand and use. For example if you have 5 Sentinel instances, and the quorum is set to 2, a failover will be triggered as soon as 2 Sentinels believe that the master is not reachable, however one of the two Sentinels will be able to failover only if it gets authorization at least from 3 Sentinels.
If instead the quorum is configured to 5, all the Sentinels must agree about the master error condition, and the authorization from all Sentinels is required in order to failover.
This means that the quorum can be used to tune Sentinel in two ways:
1. If a quorum is set to a value smaller than the majority of Sentinels we deploy, we are basically making Sentinel more sensitive to master failures, triggering a failover as soon as even just a minority of Sentinels is no longer able to talk with the master.
2. If a quorum is set to a value greater than the majority of Sentinels, we are making Sentinel able to failover only when there are a very large number (larger than majority) of well connected Sentinels which agree about the master being down.
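The distinction between triggering and authorizing a failover can be made concrete with a short sketch. This is an illustrative model of the two thresholds described above, not Sentinel's implementation:

```python
def can_trigger_failover(sdown_reports: int, quorum: int) -> bool:
    # ODOWN (and hence a failover attempt) needs `quorum` Sentinels
    # agreeing that the master is down.
    return sdown_reports >= quorum

def can_authorize_failover(votes: int, total_sentinels: int,
                           quorum: int) -> bool:
    # Actually performing the failover needs the larger of the majority
    # and the quorum.
    majority = total_sentinels // 2 + 1
    return votes >= max(majority, quorum)

# 5 Sentinels, quorum 2: two reports flag ODOWN, but 3 votes are needed.
print(can_trigger_failover(2, quorum=2))       # True
print(can_authorize_failover(2, 5, quorum=2))  # False
print(can_authorize_failover(3, 5, quorum=2))  # True
```

With quorum 5 and 5 Sentinels, `max(majority, quorum)` is 5, matching the all-must-agree case described above.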
### Configuration epochs
Sentinels require authorization from a majority in order to start a
failover for a few important reasons:
When a Sentinel is authorized, it gets a unique **configuration epoch** for the master it is failing over. This is a number that will be used to version the new configuration after the failover is completed. Because a majority agreed that a given version was assigned to a given Sentinel, no other Sentinel will be able to use it. This means that every configuration of every failover is versioned with a unique version. We'll see why this is so important.
Moreover, Sentinels have a rule: if a Sentinel voted for another Sentinel to fail over a given master, it will wait some time before trying to fail over the same master again. This delay is the `2 * failover-timeout` you can configure in `sentinel.conf`. This means that Sentinels will not try to fail over the same master at the same time: the first to ask to be authorized will try; if it fails, another will try after some time, and so forth.
Redis Sentinel guarantees the *liveness* property that if a majority of Sentinels are able to talk, eventually one will be authorized to failover if the master is down.
Redis Sentinel also guarantees the *safety* property that every Sentinel will failover the same master using a different *configuration epoch*.
### Configuration propagation
Once a Sentinel is able to failover a master successfully, it will start to broadcast the new configuration so that the other Sentinels will update their information about a given master.
For a failover to be considered successful, it requires that the Sentinel was able to send the `REPLICAOF NO ONE` command to the selected replica, and that the switch to master was later observed in the `INFO` output of the master.
At this point, even if the reconfiguration of the replicas is in progress, the failover is considered to be successful, and all the Sentinels are required to start reporting the new configuration.
The way a new configuration is propagated is the reason why we need that every
Sentinel failover is authorized with a different version number (configuration epoch).
Every Sentinel continuously broadcasts its version of a master's configuration using Redis Pub/Sub messages, both on the master and on all the replicas. At the same time, all the Sentinels wait for messages to see what configuration is
advertised by the other Sentinels.
Configurations are broadcast in the `__sentinel__:hello` Pub/Sub channel.
Because every configuration has a different version number, the greater version
always wins over smaller versions.
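The "greater version wins" rule is easy to sketch. This is an illustrative model of the update logic, with a hypothetical `MasterConfig` holder, not Sentinel's actual data structures:

```python
class MasterConfig:
    """Tracks the currently believed (address, epoch) for one master."""
    def __init__(self, addr, epoch):
        self.addr, self.epoch = addr, epoch

    def maybe_update(self, addr, epoch) -> bool:
        # A broadcast configuration is applied only if its epoch is
        # strictly greater than the one we currently hold.
        if epoch > self.epoch:
            self.addr, self.epoch = addr, epoch
            return True
        return False

cfg = MasterConfig(("192.168.1.50", 6379), epoch=1)
cfg.maybe_update(("192.168.1.50", 9000), epoch=2)  # applied: newer epoch
cfg.maybe_update(("192.168.1.50", 6379), epoch=1)  # ignored: stale epoch
print(cfg.addr, cfg.epoch)  # ('192.168.1.50', 9000) 2
```

Because each failover is authorized with a unique epoch, two Sentinels can never broadcast conflicting configurations with the same version, so this comparison is unambiguous.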
So for example the configuration for the master `mymaster` starts with all the
Sentinels believing the master is at 192.168.1.50:6379. This configuration
has version 1. After some time a Sentinel is authorized to failover with version 2. If the failover is successful, it will start to broadcast a new configuration, let's say 192.168.1.50:9000, with version 2. All the other instances will see this configuration and will update their configuration accordingly, since the new configuration has a greater version.
This means that Sentinel guarantees a second liveness property: a set of
Sentinels that are able to communicate will all converge to the same configuration with the higher version number.
Basically, if the network is partitioned, every partition will converge to the highest
local configuration. In the special case of no partitions, there is a single
partition and every Sentinel will agree about the configuration.
### Consistency under partitions
Redis Sentinel configurations are eventually consistent, so every partition will
converge to the higher configuration available.
However in a real-world system using Sentinel there are three different players:
* Redis instances.
* Sentinel instances.
* Clients.
In order to define the behavior of the system we have to consider all three.
The following is a simple network where there are 3 nodes, each running
a Redis instance, and a Sentinel instance:
+-------------+
| Sentinel 1 |----- Client A
| Redis 1 (M) |
+-------------+
|
|
+-------------+ | +------------+
| Sentinel 2 |-----+-- // ----| Sentinel 3 |----- Client B
| Redis 2 (S) | | Redis 3 (M)|
+-------------+ +------------+
In this system the original state was that Redis 3 was the master, while
Redis 1 and 2 were replicas. A partition occurred, isolating the old master.
Sentinels 1 and 2 started a failover, promoting Redis 1 as the new master.
The Sentinel properties guarantee that Sentinel 1 and 2 now have the new
configuration for the master. However Sentinel 3 still has the old configuration
since it lives in a different partition.
We know that Sentinel 3 will get its configuration updated when the network
partition heals. However, what happens during the partition if there
are clients partitioned together with the old master?
Clients will still be able to write to Redis 3, the old master. When the
partition heals, Redis 3 will be turned into a replica of Redis 1, and
all the data written during the partition will be lost.
Depending on your configuration, you may or may not want this scenario to happen:
* If you are using Redis as a cache, it could be handy that Client B is still able to write to the old master, even if its data will be lost.
* If you are using Redis as a store, this is not good and you need to configure the system in order to partially prevent this problem.
Since Redis is asynchronously replicated, there is no way to totally prevent data loss in this scenario. However, you can bound the divergence between Redis 3 and Redis 1
using the following Redis configuration options:
min-replicas-to-write 1
min-replicas-max-lag 10
With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.
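The write-gating behavior described above can be sketched as follows. This is an illustrative model of the decision, not Redis code; a disconnected replica is simply absent from the lag list:

```python
MIN_REPLICAS_TO_WRITE = 1
MIN_REPLICAS_MAX_LAG = 10  # seconds

def accepts_writes(replica_lags: list) -> bool:
    """A master keeps accepting writes only while enough replicas have
    acknowledged recently. Each entry is the seconds since that replica's
    last asynchronous acknowledgment."""
    good = sum(1 for lag in replica_lags if lag <= MIN_REPLICAS_MAX_LAG)
    return good >= MIN_REPLICAS_TO_WRITE

print(accepts_writes([3]))   # True: one replica, 3s lag
print(accepts_writes([15]))  # False: the only replica is lagging >10s
print(accepts_writes([]))    # False: partitioned away from all replicas
```

In the partition example above, Redis 3 loses contact with both replicas, so after `min-replicas-max-lag` seconds this check fails and writes stop being accepted.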
Using this configuration the Redis 3 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel 3 configuration will converge to
the new one, and Client B will be able to fetch a valid configuration and continue.
In general Redis + Sentinel as a whole are an **eventually consistent system** where the merge function is **last failover wins**, and the data from old masters are discarded to replicate the data of the current master, so there is always a window for losing acknowledged writes. This is due to Redis asynchronous
replication and the discarding nature of the "virtual" merge function of the system. Note that this is not a limitation of Sentinel itself, and if you orchestrate the failover with a strongly consistent replicated state machine, the same properties will still apply. There are only two ways to avoid losing acknowledged writes:
1. Use synchronous replication (and a proper consensus algorithm to run a replicated state machine).
2. Use an eventually consistent system where different versions of the same object can be merged.
Redis currently is not able to use either of the above systems, and adding them is currently outside of the development goals. However there are proxies implementing solution "2" on top of Redis stores, such as SoundCloud [Roshi](https://github.com/soundcloud/roshi) or Netflix [Dynomite](https://github.com/Netflix/dynomite).
### Sentinel persistent state
Sentinel state is persisted in the Sentinel configuration file. For example,
every time a new configuration is received, or created (by leader Sentinels), for
a master, the configuration is persisted on disk together with the configuration
epoch. This means that it is safe to stop and restart Sentinel processes.
### TILT mode
Redis Sentinel is heavily dependent on the computer time: for instance in
order to understand if an instance is available it remembers the time of the
latest successful reply to the PING command, and compares it with the current
time to understand how old it is.
However if the computer time changes in an unexpected way, or if the computer
is very busy, or the process is blocked for some reason, Sentinel may start to
behave in an unexpected way.
The TILT mode is a special "protection" mode that a Sentinel can enter when
something odd is detected that can lower the reliability of the system.
The Sentinel timer interrupt is normally called 10 times per second, so we
expect that more or less 100 milliseconds will elapse between two calls
to the timer interrupt.
What a Sentinel does is to register the previous time the timer interrupt
was called, and compare it with the current call: if the time difference
is negative or unexpectedly big (2 seconds or more), the TILT mode is entered
(or, if TILT mode was already entered, the exit from it is postponed).
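The drift check described above can be sketched in a few lines. This is an illustrative model; the constants mirror the values stated in the text (2-second trigger, 30-second recovery) but the function itself is an assumption for the example:

```python
TILT_TRIGGER_MS = 2000   # enter TILT if the gap is negative or >= 2s

def check_tilt(prev_ms: int, now_ms: int) -> bool:
    # The timer handler runs ~10 times per second, so ~100ms between calls
    # is expected. A negative delta means the clock jumped backwards; a
    # very large one means the process stalled or the clock jumped forward.
    delta = now_ms - prev_ms
    return delta < 0 or delta >= TILT_TRIGGER_MS

print(check_tilt(1000, 1100))  # False: normal ~100ms tick
print(check_tilt(1000, 4000))  # True: process stalled for 3s
print(check_tilt(1000, 500))   # True: clock jumped backwards
```

If this check keeps returning `False` for 30 consecutive seconds, TILT mode is exited, as described below.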
When in TILT mode the Sentinel will continue to monitor everything, but:
* It stops taking any action.
* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted.
If everything appears to be normal for 30 seconds, the TILT mode is exited.
In the Sentinel TILT mode, if we send the INFO command, we could get the following response:
$ redis-cli -p 26379
127.0.0.1:26379> info
(Other information from Sentinel server skipped.)
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_tilt_since_seconds:-1
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=0,sentinels=1
The field `sentinel_tilt_since_seconds` indicates how many seconds the Sentinel has already been in TILT mode.
If it is not in TILT mode, the value will be -1.
Note that in some ways TILT mode could be replaced using the monotonic clock
API that many kernels offer. However it is still not clear whether this is a good
solution, since the current system avoids issues in cases where the process is just
suspended or not executed by the scheduler for a long time.
**A note about the word slave used on this page**: starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in the commands mentioned here the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API is naturally deprecated.
detection is performed when multiple Sentinels agree about the fact a given master is no longer available This lowers the probability of false positives 2 Sentinel works even if not all the Sentinel processes are working making the system robust against failures There is no fun in having a failover system which is itself a single point of failure after all The sum of Sentinels Redis instances masters and replicas and clients connecting to Sentinel and Redis are also a larger distributed system with specific properties In this document concepts will be introduced gradually starting from basic information needed in order to understand the basic properties of Sentinel to more complex information that are optional in order to understand how exactly Sentinel works Sentinel quick start Obtaining Sentinel The current version of Sentinel is called Sentinel 2 It is a rewrite of the initial Sentinel implementation using stronger and simpler to predict algorithms that are explained in this documentation A stable release of Redis Sentinel is shipped since Redis 2 8 New developments are performed in the unstable branch and new features sometimes are back ported into the latest stable branch as soon as they are considered to be stable Redis Sentinel version 1 shipped with Redis 2 6 is deprecated and should not be used Running Sentinel If you are using the redis sentinel executable or if you have a symbolic link with that name to the redis server executable you can run Sentinel with the following command line redis sentinel path to sentinel conf Otherwise you can use directly the redis server executable starting it in Sentinel mode redis server path to sentinel conf sentinel Both ways work the same However it is mandatory to use a configuration file when running Sentinel as this file will be used by the system in order to save the current state that will be reloaded in case of restarts Sentinel will simply refuse to start if no configuration file is given or if the configuration 
file path is not writable Sentinels by default run listening for connections to TCP port 26379 so for Sentinels to work port 26379 of your servers must be open to receive connections from the IP addresses of the other Sentinel instances Otherwise Sentinels can t talk and can t agree about what to do so failover will never be performed Fundamental things to know about Sentinel before deploying 1 You need at least three Sentinel instances for a robust deployment 2 The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way So for example different physical servers or Virtual Machines executed on different availability zones 3 Sentinel Redis distributed system does not guarantee that acknowledged writes are retained during failures since Redis uses asynchronous replication However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments while there are other less secure ways to deploy it 4 You need Sentinel support in your clients Popular client libraries have Sentinel support but not all 5 There is no HA setup which is safe if you don t test from time to time in development environments or even better if you can in production environments if they work You may have a misconfiguration that will become apparent only when it s too late at 3am when your master stops working 6 Sentinel Docker or other forms of Network Address Translation or Port Mapping should be mixed with care Docker performs port remapping breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master Check the section about Sentinel and Docker sentinel docker nat and possible issues later in this document for more information Configuring Sentinel The Redis source distribution contains a file called sentinel conf that is a self documented example configuration file you can use to configure Sentinel however a typical minimal configuration file looks like the 
following sentinel monitor mymaster 127 0 0 1 6379 2 sentinel down after milliseconds mymaster 60000 sentinel failover timeout mymaster 180000 sentinel parallel syncs mymaster 1 sentinel monitor resque 192 168 1 3 6380 4 sentinel down after milliseconds resque 10000 sentinel failover timeout resque 180000 sentinel parallel syncs resque 5 You only need to specify the masters to monitor giving to each separated master that may have any number of replicas a different name There is no need to specify replicas which are auto discovered Sentinel will update the configuration automatically with additional information about replicas in order to retain the information in case of restart The configuration is also rewritten every time a replica is promoted to master during a failover and every time a new Sentinel is discovered The example configuration above basically monitors two sets of Redis instances each composed of a master and an undefined number of replicas One set of instances is called mymaster and the other resque The meaning of the arguments of sentinel monitor statements is the following sentinel monitor master name ip port quorum For the sake of clarity let s check line by line what the configuration options mean The first line is used to tell Redis to monitor a master called mymaster that is at address 127 0 0 1 and port 6379 with a quorum of 2 Everything is pretty obvious but the quorum argument The quorum is the number of Sentinels that need to agree about the fact the master is not reachable in order to really mark the master as failing and eventually start a failover procedure if possible However the quorum is only used to detect the failure In order to actually perform a failover one of the Sentinels need to be elected leader for the failover and be authorized to proceed This only happens with the vote of the majority of the Sentinel processes So for example if you have 5 Sentinel processes and the quorum for a given master set to the value of 2 this is 
what happens If two Sentinels agree at the same time about the master being unreachable one of the two will try to start a failover If there are at least a total of three Sentinels reachable the failover will be authorized and will actually start In practical terms this means during failures Sentinel never starts a failover if the majority of Sentinel processes are unable to talk aka no failover in the minority partition Other Sentinel options The other options are almost always in the form sentinel option name master name option value And are used for the following purposes down after milliseconds is the time in milliseconds an instance should not be reachable either does not reply to our PINGs or it is replying with an error for a Sentinel starting to think it is down parallel syncs sets the number of replicas that can be reconfigured to use the new master after a failover at the same time The lower the number the more time it will take for the failover process to complete however if the replicas are configured to serve old data you may not want all the replicas to re synchronize with the master at the same time While the replication process is mostly non blocking for a replica there is a moment when it stops to load the bulk data from the master You may want to make sure only one replica at a time is not reachable by setting this option to the value of 1 Additional options are described in the rest of this document and documented in the example sentinel conf file shipped with the Redis distribution Configuration parameters can be modified at runtime Master specific configuration parameters are modified using SENTINEL SET Global configuration parameters are modified using SENTINEL CONFIG SET See the Reconfiguring Sentinel at runtime section reconfiguring sentinel at runtime for more information Example Sentinel deployments Now that you know the basic information about Sentinel you may wonder where you should place your Sentinel processes how many Sentinel 
processes you need and so forth This section shows a few example deployments We use ASCII art in order to show you configuration examples in a graphical format this is what the different symbols means This is a computer or VM that fails independently We call it a box We write inside the boxes what they are running Redis master M1 Redis Sentinel S1 Different boxes are connected by lines to show that they are able to talk Sentinel S1 Sentinel S2 Network partitions are shown as interrupted lines using slashes Sentinel S1 Sentinel S2 Also note that Masters are called M1 M2 M3 Mn Replicas are called R1 R2 R3 Rn R stands for replica Sentinels are called S1 S2 S3 Sn Clients are called C1 C2 C3 Cn When an instance changes role because of Sentinel actions we put it inside square brackets so M1 means an instance that is now a master because of Sentinel intervention Note that we will never show setups where just two Sentinels are used since Sentinels always need to talk with the majority in order to start a failover Example 1 just two Sentinels DON T DO THIS M1 R1 S1 S2 Configuration quorum 1 In this setup if the master M1 fails R1 will be promoted since the two Sentinels can reach agreement about the failure obviously with quorum set to 1 and can also authorize a failover because the majority is two So apparently it could superficially work however check the next points to see why this setup is broken If the box where M1 is running stops working also S1 stops working The Sentinel running in the other box S2 will not be able to authorize a failover so the system will become not available Note that a majority is needed in order to order different failovers and later propagate the latest configuration to all the Sentinels Also note that the ability to failover in a single side of the above setup without any agreement would be very dangerous M1 M1 S1 S2 In the above configuration we created two masters assuming S2 could failover without authorization in a perfectly symmetrical 
way. Clients may write indefinitely to both sides, and there is no way to understand when the partition heals what configuration is the right one, in order to prevent a *permanent split brain condition*.

So please **deploy at least three Sentinels in three different boxes** always.

### Example 2: basic setup with three boxes

This is a very simple setup, that has the advantage to be simple to tune for additional safety. It is based on three boxes, each box running both a Redis process and a Sentinel process.

           +----+
           | M1 |
           | S1 |
           +----+
              |
    +----+    |    +----+
    | R2 |----+----| R3 |
    | S2 |         | S3 |
    +----+         +----+

    Configuration: quorum = 2

If the master M1 fails, S2 and S3 will agree about the failure, and will be able to authorize a failover, making clients able to continue.

In every Sentinel setup, as Redis uses asynchronous replication, there is always the risk of losing some writes, because a given acknowledged write may not be able to reach the replica which is promoted to master. However in the above setup there is a higher risk due to clients being partitioned away with an old master, like in the following picture:

             +----+
             | M1 |
             | S1 | <- C1 (writes will be lost)
             +----+
                |
                /
                |
    +------+    |    +----+
    | [M2] |----+----| R3 |
    | S2   |         | S3 |
    +------+         +----+

In this case a network partition isolated the old master M1, so the replica R2 is promoted to master. However clients, like C1, that are in the same partition as the old master, may continue to write data to the old master. This data will be lost forever since when the partition will heal, the master will be reconfigured as a replica of the new master, discarding its data set.

This problem can be mitigated using the following Redis replication feature, that allows a master to stop accepting writes if it detects that it is no longer able to transfer its writes to the specified number of replicas:

    min-replicas-to-write 1
    min-replicas-max-lag 10

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous, *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.
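The gating logic this configuration produces can be sketched like this (a minimal illustration, not Redis's implementation; `replica_lags` is assumed to hold the acknowledge lag, in seconds, of each connected replica, with disconnected replicas simply absent):

```python
def master_accepts_writes(replica_lags, min_replicas=1, max_lag=10):
    """Mirror of: min-replicas-to-write 1 / min-replicas-max-lag 10.
    The master keeps accepting writes only while at least `min_replicas`
    replicas acknowledged within the last `max_lag` seconds."""
    healthy = [lag for lag in replica_lags if lag <= max_lag]
    return len(healthy) >= min_replicas

assert master_accepts_writes([0.5])     # one replica acking promptly
assert not master_accepts_writes([25])  # replica lagging beyond max-lag
assert not master_accepts_writes([])    # partitioned away from all replicas
```

The last case is exactly the Example 2 scenario: the isolated old master ends up with an empty set of healthy replicas and rejects writes after `max-lag` seconds.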
Using this configuration, the old Redis master M1 in the above example, will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master.

However there is no free lunch. With this refinement, if the two replicas are down, the master will stop accepting writes. It's a trade off.

### Example 3: Sentinel in the client boxes

Sometimes we have only two Redis boxes available, one for the master and one for the replica. The configuration in the Example 2 is not viable in that case, so we can resort to the following, where Sentinels are placed where clients are:

                +----+         +----+
                | M1 |----+----| R1 |
                |    |    |    |    |
                +----+    |    +----+
                          |
             +------------+------------+
             |            |            |
             |            |            |
          +----+        +----+      +----+
          | C1 |        | C2 |      | C3 |
          | S1 |        | S2 |      | S3 |
          +----+        +----+      +----+

          Configuration: quorum = 2

In this setup, the point of view of Sentinels is the same as the clients: if a master is reachable by the majority of the clients, it is fine. C1, C2, C3 here are generic clients, it does not mean that C1 identifies a single client connected to Redis. It is more likely something like an application server, a Rails app, or something like that.

If the box where M1 and S1 are running fails, the failover will happen without issues, however it is easy to see that different network partitions will result in different behaviors. For example Sentinel will not be able to failover if the network between the clients and the Redis servers is disconnected, since the Redis master and replica will both be unavailable.

Note that if C3 gets partitioned with M1 (hardly possible with the network described above, but more likely possible with different layouts, or because of failures at the software layer), we have a similar issue as described in Example 2, with the difference that here we have no way to break the symmetry, since there is just a replica and master, so the master can't stop accepting queries when it is disconnected from its replica,
otherwise the master would never be available during replica failures.

So this is a valid setup, but the setup in the Example 2 has advantages such as the HA system of Redis running in the same boxes as Redis itself, which may be simpler to manage, and the ability to put a bound on the amount of time a master in the minority partition can receive writes.

### Example 4: Sentinel client side with less than three clients

The setup described in the Example 3 cannot be used if there are less than three boxes in the client side (for example three web servers). In this case we need to resort to a mixed setup like the following:

                +----+         +----+
                | M1 |----+----| R1 |
                | S1 |    |    | S2 |
                +----+    |    +----+
                          |
                   +------+-----+
                   |            |
                   |            |
                +----+        +----+
                | C1 |        | C2 |
                | S3 |        | S4 |
                +----+        +----+

          Configuration: quorum = 3

This is similar to the setup in Example 3, but here we run four Sentinels in the four boxes we have available. If the master M1 becomes unavailable, the other three Sentinels will perform the failover.

In theory this setup works removing the box where C2 and S4 are running, and setting the quorum to 2. However it is unlikely that we want HA in the Redis side without having high availability in our application layer.

## Sentinel, Docker, NAT, and possible issues

Docker uses a technique called port mapping: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using. This is useful in order to run multiple containers using the same ports, at the same time, in the same server.

Docker is not the only software system where this happens, there are other Network Address Translation setups where ports may be remapped, and sometimes not ports, but also IP addresses.

Remapping ports and addresses creates issues with Sentinel in two ways:

1. Sentinel auto-discovery of other Sentinels no longer works, since it is based on hello messages where each Sentinel announces the port and IP address at which it is listening for connections. However Sentinels have no way to understand that an address or port is remapped, so they announce information that is not correct for other Sentinels to
connect.

2. Replicas are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master checking the remote peer of the TCP connection, while the port is advertised by the replica itself during the handshake, however the port may be wrong for the same reason as exposed in point 1.

Since Sentinels auto detect replicas using masters' `INFO` output information, the detected replicas will not be reachable, and Sentinel will never be able to failover the master, since there are no good replicas from the point of view of the system, so there is currently no way to monitor with Sentinel a set of master and replica instances deployed with Docker, **unless you instruct Docker to map the port 1:1**.

For the first problem, in case you want to run a set of Sentinel instances using Docker with forwarded ports (or any other NAT setup where ports are remapped), you can use the following two Sentinel configuration directives in order to force Sentinel to announce a specific set of IP and port:

    sentinel announce-ip <ip>
    sentinel announce-port <port>

Note that Docker has the ability to run in *host networking mode* (check the `--net=host` option for more information). This should create no issues since ports are not remapped in this setup.

## IP Addresses and DNS names

Older versions of Sentinel did not support host names and required IP addresses to be specified everywhere. Starting with version 6.2, Sentinel has *optional* support for host names.

**This capability is disabled by default. If you're going to enable DNS/hostnames support, please note:**

1. The name resolution configuration on your Redis and Sentinel nodes must be reliable and be able to resolve addresses quickly. Unexpected delays in address resolution may have a negative impact on Sentinel.
2. You should use hostnames everywhere and avoid mixing hostnames and IP addresses. To do that, use `replica-announce-ip <hostname>` and `sentinel announce-ip <hostname>` for all Redis and Sentinel instances, respectively.

Enabling the `resolve-hostnames` global
configuration allows Sentinel to accept host names:

* As part of a `sentinel monitor` command
* As a replica address, if the replica uses a host name value for `replica-announce-ip`

Sentinel will accept host names as valid inputs and resolve them, but will still refer to IP addresses when announcing an instance, updating configuration files, etc.

Enabling the `announce-hostnames` global configuration makes Sentinel use host names instead. This affects replies to clients, values written in configuration files, the `REPLICAOF` command issued to replicas, etc.

This behavior may not be compatible with all Sentinel clients, that may explicitly expect an IP address.

Using host names may be useful when clients use TLS to connect to instances and require a name rather than an IP address in order to perform certificate ASN matching.

## A quick tutorial

In the next sections of this document, all the details about [Sentinel API](#sentinel-api), configuration and semantics will be covered incrementally. However for people that want to play with the system ASAP, this section is a tutorial that shows how to configure and interact with 3 Sentinel instances.

Here we assume that the instances are executed at port 5000, 5001, 5002. We also assume that you have a running Redis master at port 6379 with a replica running at port 6380. We will use the IPv4 loopback address 127.0.0.1 everywhere during the tutorial, assuming you are running the simulation on your personal computer.

The three Sentinel configuration files should look like the following:

    port 5000
    sentinel monitor mymaster 127.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1

The other two configuration files will be identical but using 5001 and 5002 as port numbers.

A few things to note about the above configuration:

* The master set is called `mymaster`. It identifies the master and its replicas. Since each *master set* has a different name, Sentinel can monitor different sets of masters and
replicas at the same time.
* The quorum was set to the value of 2 (last argument of the `sentinel monitor` configuration directive).
* The `down-after-milliseconds` value is 5000 milliseconds, that is 5 seconds, so masters will be detected as failing as soon as we don't receive any reply from our pings within this amount of time.

Once you start the three Sentinels, you'll see a few messages they log, like:

    +monitor master mymaster 127.0.0.1 6379 quorum 2

This is a Sentinel event, and you can receive this kind of events via Pub/Sub if you `SUBSCRIBE` to the event name, as specified later in the [Pub/Sub Messages](#pubsub-messages) section.

Sentinel generates and logs different events during failure detection and failover.

## Asking Sentinel about the state of a master

The most obvious thing to do with Sentinel to get started, is check if the master it is monitoring is doing well:

    $ redis-cli -p 5000
    127.0.0.1:5000> SENTINEL master mymaster
     1) "name"
     2) "mymaster"
     3) "ip"
     4) "127.0.0.1"
     5) "port"
     6) "6379"
     7) "runid"
     8) "953ae6a589449c13ddefaee3538d356d287f509b"
     9) "flags"
    10) "master"
    11) "link-pending-commands"
    12) "0"
    13) "link-refcount"
    14) "1"
    15) "last-ping-sent"
    16) "0"
    17) "last-ok-ping-reply"
    18) "735"
    19) "last-ping-reply"
    20) "735"
    21) "down-after-milliseconds"
    22) "5000"
    23) "info-refresh"
    24) "126"
    25) "role-reported"
    26) "master"
    27) "role-reported-time"
    28) "532439"
    29) "config-epoch"
    30) "1"
    31) "num-slaves"
    32) "1"
    33) "num-other-sentinels"
    34) "2"
    35) "quorum"
    36) "2"
    37) "failover-timeout"
    38) "60000"
    39) "parallel-syncs"
    40) "1"

As you can see, it prints a number of information about the master. There are a few that are of particular interest for us:

1. `num-other-sentinels` is 2, so we know the Sentinel already detected two more Sentinels for this master. If you check the logs you'll see the `+sentinel` events generated.
2. `flags` is just `master`. If the master was down we could expect to see `s_down` or `o_down` flag as well here.
3. `num-slaves` is correctly set to 1, so Sentinel also detected that there is an attached replica to our master.

In order to explore more about this instance, you may want to try the following two commands:
    SENTINEL replicas mymaster
    SENTINEL sentinels mymaster

The first will provide similar information about the replicas connected to the master, and the second about the other Sentinels.

## Obtaining the address of the current master

As we already specified, Sentinel also acts as a configuration provider for clients that want to connect to a set of master and replicas. Because of possible failovers or reconfigurations, clients have no idea about who is the currently active master for a given set of instances, so Sentinel exports an API to ask this question:

    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
    1) "127.0.0.1"
    2) "6379"

## Testing the failover

At this point our toy Sentinel deployment is ready to be tested. We can just kill our master and check if the configuration changes. To do so we can just do:

    redis-cli -p 6379 DEBUG sleep 30

This command will make our master no longer reachable, sleeping for 30 seconds. It basically simulates a master hanging for some reason.

If you check the Sentinel logs, you should be able to see a lot of action:

1. Each Sentinel detects the master is down with an `+sdown` event.
2. This event is later escalated to `+odown`, which means that multiple Sentinels agree about the fact the master is not reachable.
3. Sentinels vote a Sentinel that will start the first failover attempt.
4. The failover happens.

If you ask again what is the current master address for `mymaster`, eventually we should get a different reply this time:

    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
    1) "127.0.0.1"
    2) "6380"

So far so good. At this point you may jump to create your Sentinel deployment, or can read more to understand all the Sentinel commands and internals.

## Sentinel API

Sentinel provides an API in order to inspect its state, check the health of monitored masters and replicas, subscribe in order to receive specific notifications, and change the Sentinel configuration at run time.

By default Sentinel runs using TCP port 26379 (note that 6379 is the normal Redis port). Sentinels accept commands using the Redis protocol.
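The discovery pattern shown in the tutorial (ask a Sentinel for the current master, re-ask after a failover) is what Sentinel-aware clients implement. A minimal sketch of that loop follows; `query` here is a stand-in for a real `SENTINEL get-master-addr-by-name` call over the Redis protocol, injected so the logic is testable without a running deployment:

```python
def discover_master(sentinels, master_name, query):
    """Ask each known Sentinel in turn for the master address and use
    the first successful answer; skip Sentinels that are unreachable."""
    for host, port in sentinels:
        try:
            addr = query(host, port, master_name)  # e.g. ("127.0.0.1", "6379")
            if addr is not None:
                return addr
        except ConnectionError:
            continue  # this Sentinel is down, try the next one
    raise RuntimeError("no Sentinel could be reached")

# Stubbed example: the Sentinel on port 5000 is down, the next one answers
# with the post-failover address.
def fake_query(host, port, name):
    if port == 5000:
        raise ConnectionError
    return ("127.0.0.1", "6380")

print(discover_master([("127.0.0.1", 5000), ("127.0.0.1", 5001)],
                      "mymaster", fake_query))  # ('127.0.0.1', '6380')
```

Real client libraries add details on top of this (verifying the node's `ROLE`, rotating the Sentinel list, subscribing to `+switch-master`), which are covered in the Sentinel clients guidelines.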
You can use `redis-cli` or any other unmodified Redis client in order to talk with Sentinel.

It is possible to directly query a Sentinel to check what is the state of the monitored Redis instances from its point of view, to see what other Sentinels it knows, and so forth. Alternatively, using Pub/Sub, it is possible to receive *push style* notifications from Sentinels, every time some event happens, like a failover, or an instance entering an error condition, and so forth.

## Sentinel commands

The `SENTINEL` command is the main API for Sentinel. The following is the list of its subcommands (minimal version is noted where applicable):

* **SENTINEL CONFIG GET `<name>`** (`>= 6.2`) Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** (`>= 6.2`) Set the value of a global Sentinel configuration parameter.
* **SENTINEL CKQUORUM `<master name>`** Check if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok.
* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.
* **SENTINEL FAILOVER `<master name>`** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so
that the other Sentinels will update their configurations).
* **SENTINEL GET-MASTER-ADDR-BY-NAME `<master name>`** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master, it returns the address and port of the promoted replica.
* **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas.
* **SENTINEL IS-MASTER-DOWN-BY-ADDR `<ip>` `<port>` `<current-epoch>` `<runid>`** Check if the master specified by ip:port is down from current Sentinel's point of view. This command is mostly for internal use.
* **SENTINEL MASTER `<master name>`** Show the state and info of the specified master.
* **SENTINEL MASTERS** Show a list of monitored masters and their state.
* **SENTINEL MONITOR** Start Sentinel's monitoring. Refer to the [Reconfiguring Sentinel at Runtime](#reconfiguring-sentinel-at-runtime) section for more information.
* **SENTINEL MYID** (`>= 6.2`) Return the ID of the Sentinel instance.
* **SENTINEL PENDING-SCRIPTS** This command returns information about pending scripts.
* **SENTINEL REMOVE** Stop Sentinel's monitoring. Refer to the [Reconfiguring Sentinel at Runtime](#reconfiguring-sentinel-at-runtime) section for more information.
* **SENTINEL REPLICAS `<master name>`** (`>= 5.0`) Show a list of replicas for this master, and their state.
* **SENTINEL SENTINELS `<master name>`** Show a list of sentinel instances for this master, and their state.
* **SENTINEL SET** Set Sentinel's monitoring configuration. Refer to the [Reconfiguring Sentinel at Runtime](#reconfiguring-sentinel-at-runtime) section for more information.
* **SENTINEL SIMULATE-FAILURE (crash-after-election|crash-after-promotion|help)** (`>= 3.2`) This command simulates different Sentinel crash scenarios.
* **SENTINEL RESET `<pattern>`** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master.

For connection management and administration purposes, Sentinel
supports the following subset of Redis' commands:

* **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](/topics/acl) documentation page and the [Sentinel Access Control List authentication](#sentinel-access-control-list-authentication) section.
* **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [Configuring Sentinel instances with authentication](#configuring-sentinel-instances-with-authentication) section.
* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages.
* **COMMAND** (`>= 6.2`) This command returns information about commands. For more information refer to the `COMMAND` command and its various subcommands.
* **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command.
* **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command.
* **PING** This command simply returns PONG.
* **ROLE** This command returns the string "sentinel" and a list of monitored masters. For more information refer to the `ROLE` command.
* **SHUTDOWN** Shut down the Sentinel instance.

Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands. Refer to the [Pub/Sub Messages](#pubsub-messages) section for more details.

## Reconfiguring Sentinel at Runtime

Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple sentinels, you should apply the changes to all of your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network.

The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance:

* **SENTINEL MONITOR `<name>` `<ip>` `<port>` `<quorum>`** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical
to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as `ip`, but you need to provide an IPv4 or IPv6 address.
* **SENTINEL REMOVE `<name>`** is used in order to remove the specified master: the master will no longer be monitored, and will totally be removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
* **SENTINEL SET `<name>` [`<option>` `<value>` ...]** The SET command is very similar to the `CONFIG SET` command of Redis, and is used in order to change configuration parameters of a specific master. Multiple option / value pairs can be specified (or none at all). All the configuration parameters that can be configured via `sentinel.conf` are also configurable using the SET command.

The following is an example of `SENTINEL SET` command in order to modify the `down-after-milliseconds` configuration of a master called `objects-cache`:

    SENTINEL SET objects-cache-master down-after-milliseconds 1000

As already stated, `SENTINEL SET` can be used to set all the configuration parameters that are settable in the startup configuration file. Moreover it is possible to change just the master quorum configuration without removing and re-adding the master with `SENTINEL REMOVE` followed by `SENTINEL MONITOR`, but simply using:

    SENTINEL SET objects-cache-master quorum 5

Note that there is no equivalent GET command, since `SENTINEL MASTER` provides all the configuration parameters in a simple to parse format (as a field/value pairs array).

Starting with Redis version 6.2, Sentinel also allows getting and setting global configuration parameters, which were only supported in the configuration file prior to that:

* **SENTINEL CONFIG GET `<name>`** Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** Set the value of a global Sentinel configuration parameter.

Global parameters that can be manipulated include:
* `resolve-hostnames`, `announce-hostnames`. See [IP addresses and DNS names](#ip-addresses-and-dns-names).
* `announce-ip`, `announce-port`. See [Sentinel, Docker, NAT, and possible issues](#sentinel-docker-nat-and-possible-issues).
* `sentinel-user`, `sentinel-pass`. See [Configuring Sentinel instances with authentication](#configuring-sentinel-instances-with-authentication).

## Adding or removing Sentinels

Adding a new Sentinel to your deployment is a simple process because of the auto-discover mechanism implemented by Sentinel. All you need to do is to start the new Sentinel configured to monitor the currently active master. Within 10 seconds the Sentinel will acquire the list of other Sentinels and the set of replicas attached to the master.

If you need to add multiple Sentinels at once, it is suggested to add them one after the other, *waiting for all the other Sentinels to already know about the first one before adding the next*. This is useful in order to still guarantee that majority can be achieved only in one side of a partition, in the chance failures should happen in the process of adding new Sentinels.

This can be easily achieved by adding every new Sentinel with a 30 seconds delay, and during absence of network partitions.

At the end of the process it is possible to use the command `SENTINEL MASTER mastername` in order to check if all the Sentinels agree about the total number of Sentinels monitoring the master.

Removing a Sentinel is a bit more complex: **Sentinels never forget already seen Sentinels**, even if they are not reachable for a long time, since we don't want to dynamically change the majority needed to authorize a failover and the creation of a new configuration number. So in order to remove a Sentinel the following steps should be performed in absence of network partitions:

1. Stop the Sentinel process of the Sentinel you want to remove.
2. Send a `SENTINEL RESET *` command to all the other Sentinel instances (instead of `*` you can use the exact master name if you want to reset just a single master). One after the
other, waiting at least 30 seconds between instances.
3. Check that all the Sentinels agree about the number of Sentinels currently active, by inspecting the output of `SENTINEL MASTER mastername` of every Sentinel.

## Removing the old master or unreachable replicas

Sentinels never forget about replicas of a given master, even when they are unreachable for a long time. This is useful because Sentinels should be able to correctly reconfigure a returning replica after a network partition or a failure event.

Moreover, after a failover, the failed over master is virtually added as a replica of the new master, this way it will be reconfigured to replicate with the new master as soon as it will be available again.

However sometimes you want to remove a replica (that may be the old master) forever from the list of replicas monitored by Sentinels.

In order to do this, you need to send a `SENTINEL RESET mastername` command to all the Sentinels: they'll refresh the list of replicas within the next 10 seconds, only adding the ones listed as correctly replicating from the current master `INFO` output.

## Pub/Sub messages

A client can use a Sentinel as a Redis-compatible Pub/Sub server (but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to channels and get notified about specific events.

The channel name is the same as the name of the event. For instance the channel named `+sdown` will receive all the notifications related to instances entering an `SDOWN` (SDOWN means the instance is no longer reachable from the point of view of the Sentinel you are querying) condition.

To get all the messages simply subscribe using `PSUBSCRIBE *`.

The following is a list of channels and message formats you can receive using this API. The first word is the channel / event name, the rest is the format of the data.

Note: where *instance details* is specified it means that the following arguments are provided to identify the target instance:

    <instance-type> <name> <ip> <port> @ <master-name> <master-ip> <master-port>

The part identifying the master (from the
argument to the end) is optional and is only specified if the instance is not a master itself.

* **+reset-master** `<instance details>` -- The master was reset.
* **+slave** `<instance details>` -- A new replica was detected and attached.
* **+failover-state-reconf-slaves** `<instance details>` -- Failover state changed to `reconf-slaves` state.
* **+failover-detected** `<instance details>` -- A failover started by another Sentinel or any other external entity was detected (an attached replica turned into a master).
* **+slave-reconf-sent** `<instance details>` -- The leader sentinel sent the `REPLICAOF` command to this instance in order to reconfigure it for the new replica.
* **+slave-reconf-inprog** `<instance details>` -- The replica being reconfigured showed to be a replica of the new master ip:port pair, but the synchronization process is not yet complete.
* **+slave-reconf-done** `<instance details>` -- The replica is now synchronized with the new master.
* **-dup-sentinel** `<instance details>` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted).
* **+sentinel** `<instance details>` -- A new sentinel for this master was detected and attached.
* **+sdown** `<instance details>` -- The specified instance is now in Subjectively Down state.
* **-sdown** `<instance details>` -- The specified instance is no longer in Subjectively Down state.
* **+odown** `<instance details>` -- The specified instance is now in Objectively Down state.
* **-odown** `<instance details>` -- The specified instance is no longer in Objectively Down state.
* **+new-epoch** `<instance details>` -- The current epoch was updated.
* **+try-failover** `<instance details>` -- New failover in progress, waiting to be elected by the majority.
* **+elected-leader** `<instance details>` -- Won the election for the specified epoch, can do the failover.
* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable replica for promotion.
* **no-good-slave** `<instance details>` -- There is no good replica to promote. Currently we'll try after some time, but probably this will change and the state machine will abort the
failover at all, in this case.
* **selected-slave** `<instance details>` -- We found the specified good replica to promote.
* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted replica as master, waiting for it to switch.
* **failover-end-for-timeout** `<instance details>` -- The failover terminated for timeout, replicas will eventually be configured to replicate with the new master anyway.
* **failover-end** `<instance details>` -- The failover terminated with success. All the replicas appear to be reconfigured to replicate with the new master.
* **switch-master** `<master name> <oldip> <oldport> <newip> <newport>` -- The master new IP and address is the specified one after a configuration change. This is **the message most external users are interested in**.
* **+tilt** -- Tilt mode entered.
* **-tilt** -- Tilt mode exited.

## Handling of -BUSY state

The -BUSY error is returned by a Redis instance when a Lua script is running for more time than the configured Lua script time limit. When this happens before triggering a failover, Redis Sentinel will try to send a `SCRIPT KILL` command, that will only succeed if the script was read-only.

If the instance is still in an error condition after this try, it will eventually be failed over.

## Replicas priority

Redis instances have a configuration parameter called `replica-priority`. This information is exposed by Redis replica instances in their `INFO` output, and Sentinel uses it in order to pick a replica among the ones that can be used in order to failover a master:

1. If the replica priority is set to 0, the replica is never promoted to master.
2. Replicas with a *lower* priority number are preferred by Sentinel.

For example if there is a replica S1 in the same data center of the current master, and another replica S2 in another data center, it is possible to set S1 with a priority of 10 and S2 with a priority of 100, so that if the master fails and both S1 and S2 are available, S1 will be preferred.

For more information about the way replicas are selected, please check the [Replica selection and priority](#replica-selection-and-priority) section of this documentation.
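The two priority rules stated above can be sketched as follows. This illustrates only the priority comparison; the real selection process also weighs replication offset and run ID, as covered in the replica selection section:

```python
def pick_replica(replicas):
    """replicas: list of (name, replica_priority) pairs.
    Returns the preferred replica name, or None if nothing is promotable."""
    candidates = [(prio, name) for name, prio in replicas if prio != 0]
    if not candidates:
        return None  # replicas with priority 0 are never promoted
    return min(candidates)[1]  # a lower priority number wins

# S1 in the master's data center (priority 10) beats the remote S2 (100);
# S3 with priority 0 is excluded from promotion entirely.
assert pick_replica([("S1", 10), ("S2", 100), ("S3", 0)]) == "S1"
assert pick_replica([("S3", 0)]) is None
```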
## Sentinel and Redis authentication

When the master is configured to require authentication from clients, as a security measure, replicas need to also be aware of the credentials, in order to authenticate with the master and create the master-replica connection used for the asynchronous replication protocol.

## Redis Access Control List authentication

Starting with Redis 6, user authentication and permission is managed with the [Access Control List (ACL)](/topics/acl).

In order for Sentinels to connect to Redis server instances when they are configured with ACL, the Sentinel configuration must include the following directives:

    sentinel auth-user <master-name> <username>
    sentinel auth-pass <master-name> <password>

Where `<username>` and `<password>` are the username and password for accessing the group's instances. These credentials should be provisioned on all of the group's Redis instances with the minimal control permissions. For example:

    127.0.0.1:6379> ACL SETUSER sentinel-user ON >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill

## Redis password-only authentication

Until Redis 6, authentication is achieved using the following configuration directives:

* `requirepass` in the master, in order to set the authentication password, and to make sure the instance will not process requests for non authenticated clients.
* `masterauth` in the replicas, in order for the replicas to authenticate with the master, so as to correctly replicate data from it.

When Sentinel is used, there is not a single master, since after a failover replicas may play the role of masters, and old masters can be reconfigured in order to act as replicas, so what you want to do is to set the above directives in all your instances, both masters and replicas.

This is also usually a sane setup since you don't want to protect data only in the master, having the same data accessible in the replicas.

However, in the uncommon
case where you need a replica that is accessible without authentication, you can still do it by setting up **a replica priority of zero**, to prevent this replica from being promoted to master, and configuring in this replica only the `masterauth` directive, without using the `requirepass` directive, so that data will be readable by unauthenticated clients.

In order for Sentinels to connect to Redis server instances when they are configured with `requirepass`, the Sentinel configuration must include the `sentinel auth-pass` directive, in the format:

    sentinel auth-pass <master-name> <password>

## Configuring Sentinel instances with authentication

Sentinel instances themselves can be secured by requiring clients to authenticate via the `AUTH` command. Starting with Redis 6.2, the [Access Control List (ACL)](/topics/acl) is available, whereas previous versions (starting with Redis 5.0.1) support password-only authentication.

Note that Sentinel's authentication configuration should be **applied to each of the instances** in your deployment, and **all instances should use the same configuration**. Furthermore, ACL and password-only authentication should not be used together.

## Sentinel Access Control List authentication

The first step in securing a Sentinel instance with ACL is preventing any unauthorized access to it. To do that, you'll need to disable the default superuser (or at the very least set it up with a strong password) and create a new one, allowing it access to Pub/Sub channels:

    127.0.0.1:5000> ACL SETUSER admin ON >admin-password allchannels +@all
    OK
    127.0.0.1:5000> ACL SETUSER default off
    OK

The default user is used by Sentinel to connect to other instances. You can provide the credentials of another superuser with the following configuration directives:

    sentinel sentinel-user <username>
    sentinel sentinel-pass <password>

Where `<username>` and `<password>` are the Sentinel's superuser and password, respectively (e.g. `admin` and `admin-password` in the example above).

Lastly, for authenticating incoming client connections, you can create a
Sentinel restricted user profile such as the following:

    127.0.0.1:5000> ACL SETUSER sentinel-user ON >user-password -@all +auth +client|getname +client|id +client|setname +command +hello +ping +role +sentinel|get-master-addr-by-name +sentinel|master +sentinel|myid +sentinel|replicas +sentinel|sentinels

Refer to the documentation of your Sentinel client of choice for further
information.

### Sentinel password-only authentication

To use Sentinel with password-only authentication, add the `requirepass`
configuration directive to **all** your Sentinel instances as follows:

    requirepass "your_password_here"

When configured this way, Sentinels will do two things:

1. A password will be required from clients in order to send commands to
   Sentinels. This is obvious since this is how such configuration directive
   works in Redis in general.
2. Moreover the same password configured to access the local Sentinel will be
   used by this Sentinel instance in order to authenticate to all the other
   Sentinel instances it connects to.

This means that **you will have to configure the same `requirepass` password
in all the Sentinel instances**. This way every Sentinel can talk with every
other Sentinel without any need to configure for each Sentinel the password
to access all the other Sentinels, that would be very impractical.

Before using this configuration, make sure your client library can send the
`AUTH` command to Sentinel instances.

## Sentinel clients implementation

Sentinel requires explicit client support, unless the system is configured to
execute a script that performs a transparent redirection of all the requests
to the new master instance (virtual IP or other similar systems). The topic
of client libraries implementation is covered in the document
[Sentinel clients guidelines](/topics/sentinel-clients).

## More advanced concepts

In the following sections we'll cover a few details about how Sentinel works,
without resorting to implementation details and algorithms that will be
covered in the final part of this document.

## SDOWN and ODOWN failure state

Redis Sentinel has two different concepts of *being down*, one is called a
*Subjectively Down* condition (SDOWN) and is a down condition that is local to
a given Sentinel instance. Another is called *Objectively Down* condition
(ODOWN), and is reached when enough Sentinels (at least the number configured
as the `quorum` parameter of the monitored master) have an SDOWN condition,
and get feedback from other Sentinels using the
`SENTINEL is-master-down-by-addr` command.

From the point of view of a Sentinel, an SDOWN condition is reached when it
does not receive a valid reply to PING requests for the number of seconds
specified in the configuration as the `is-master-down-after-milliseconds`
parameter.

An acceptable reply to PING is one of the following:

* PING replied with +PONG.
* PING replied with -LOADING error.
* PING replied with -MASTERDOWN error.

Any other reply (or no reply at all) is considered non valid. However note
that **a logical master that advertises itself as a replica in the INFO output
is considered to be down**.

Note that SDOWN requires that no acceptable reply is received for the whole
interval configured, so for instance if the interval is 30000 milliseconds
(30 seconds), and we receive an acceptable ping reply every 29 seconds, the
instance is considered to be working.

SDOWN is not enough to trigger a failover: it only means a single Sentinel
believes a Redis instance is not available. To trigger a failover, the ODOWN
state must be reached.

To switch from SDOWN to ODOWN, no strong consensus algorithm is used, but
just a form of gossip: if a given Sentinel gets reports that a master is not
working from enough Sentinels **in a given time range**, the SDOWN is promoted
to ODOWN. If this acknowledge is later missing, the flag is cleared.

A more strict authorization that uses an actual majority is required in order
to really start the failover, but no failover can be triggered without
reaching the ODOWN state.

The ODOWN condition **only applies to masters**. For other kind of instances
Sentinel doesn't require to act, so
the ODOWN state is never reached for replicas and other sentinels, but only
SDOWN is.

However SDOWN has also semantic implications. For example a replica in SDOWN
state is not selected to be promoted by a Sentinel performing a failover.

## Sentinels and replicas auto discovery

Sentinels stay connected with other Sentinels in order to reciprocally check
the availability of each other, and to exchange messages. However you don't
need to configure a list of other Sentinel addresses in every Sentinel
instance you run, as Sentinel uses the Redis instances' Pub/Sub capabilities
in order to discover the other Sentinels that are monitoring the same masters
and replicas.

This feature is implemented by sending *hello messages* into the channel
named `__sentinel__:hello`.

Similarly you don't need to configure what is the list of the replicas
attached to a master, as Sentinel will auto discover this list querying Redis.

* Every Sentinel publishes a message to every monitored master and replica
  Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its
  presence with ip, port, runid.
* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of
  every master and replica, looking for unknown sentinels. When new sentinels
  are detected, they are added as sentinels of this master.
* Hello messages also include the full current configuration of the master.
  If the receiving Sentinel has a configuration for a given master which is
  older than the one received, it updates to the new configuration
  immediately.
* Before adding a new sentinel to a master, a Sentinel always checks if there
  is already a sentinel with the same runid or the same address (ip and port
  pair). In that case all the matching sentinels are removed, and the new one
  added.

## Sentinel reconfiguration of instances outside the failover procedure

Even when no failover is in progress, Sentinels will always try to set the
current configuration on monitored instances. Specifically:

* Replicas (according to the current configuration) that claim to be masters,
  will be
configured as replicas to replicate with the current master.
* Replicas connected to a wrong master will be reconfigured to replicate with
  the right master.

For Sentinels to reconfigure replicas, the wrong configuration must be
observed for some time, that is greater than the period used to broadcast new
configurations.

This prevents Sentinels with a stale configuration (for example because they
just rejoined from a partition) from trying to change the replicas
configuration before receiving an update.

Also note how the semantics of always trying to impose the current
configuration makes the failover more resistant to partitions:

* Masters failed over are reconfigured as replicas when they return available.
* Replicas partitioned away during a partition are reconfigured once
  reachable.

The important lesson to remember about this section is: **Sentinel is a system
where each process will always try to impose the last logical configuration to
the set of monitored instances**.

## Replica selection and priority

When a Sentinel instance is ready to perform a failover, since the master is
in ODOWN state and the Sentinel received the authorization to failover from
the majority of the Sentinel instances known, a suitable replica needs to be
selected.

The replica selection process evaluates the following information about
replicas:

1. Disconnection time from the master.
2. Replica priority.
3. Replication offset processed.
4. Run ID.

A replica that is found to be disconnected from the master for more than ten
times the configured master timeout (`down-after-milliseconds` option), plus
the time the master is also not available from the point of view of the
Sentinel doing the failover, is considered to be not suitable for the failover
and is skipped.

In more rigorous terms, a replica whose INFO output suggests it has been
disconnected from the master for more than:

    (down-after-milliseconds * 10) + milliseconds_since_master_is_in_SDOWN_state

is considered to be unreliable and is disregarded entirely.

The replica selection only
considers the replicas that passed the above test, and sorts them based on
the above criteria, in the following order:

1. The replicas are sorted by `replica-priority` as configured in the
   `redis.conf` file of the Redis instance. A lower priority will be
   preferred.
2. If the priority is the same, the replication offset processed by the
   replica is checked, and the replica that received more data from the
   master is selected.
3. If multiple replicas have the same priority and processed the same data
   from the master, a further check is performed, selecting the replica with
   the lexicographically smaller run ID. Having a lower run ID is not a real
   advantage for a replica, but is useful in order to make the process of
   replica selection more deterministic, instead of resorting to select a
   random replica.

In most cases, `replica-priority` does not need to be set explicitly, so all
instances will use the same default value. If there is a particular fail-over
preference, `replica-priority` must be set on all instances, including
masters, as a master may become a replica at some future point in time, and
it will then need the proper `replica-priority` settings.

A Redis instance can be configured with a special `replica-priority` of zero
in order to be never selected by Sentinels as the new master. However a
replica configured in this way will still be reconfigured by Sentinels in
order to replicate with the new master after a failover, the only difference
is that it will never become a master itself.

## Algorithms and internals

In the following sections we will explore the details of Sentinel behavior.
It is not strictly needed for users to be aware of all the details, but a
deep understanding of Sentinel may help to deploy and operate Sentinel in a
more effective way.

## Quorum

The previous sections showed that every master monitored by Sentinel is
associated to a configured quorum. It specifies the number of Sentinel
processes that need to agree about the unreachability or error condition of
the master in order to trigger a
failover.

However, after the failover is triggered, in order for the failover to
actually be performed, **at least a majority of Sentinels must authorize the
Sentinel to failover**.

Sentinel never performs a failover in the partition where a minority of
Sentinels exist.

Let's try to make things a bit more clear:

* **Quorum**: the number of Sentinel processes that need to detect an error
  condition in order for a master to be flagged as **ODOWN**.
* The failover is triggered by the **ODOWN** state.
* Once the failover is triggered, the Sentinel trying to failover is required
  to ask for authorization to a majority of Sentinels (or more than the
  majority if the quorum is set to a number greater than the majority).

The difference may seem subtle but is actually quite simple to understand and
use. For example if you have 5 Sentinel instances, and the quorum is set to 2,
a failover will be triggered as soon as 2 Sentinels believe that the master is
not reachable, however one of the two Sentinels will be able to failover only
if it gets authorization at least from 3 Sentinels.

If instead the quorum is configured to 5, all the Sentinels must agree about
the master error condition, and the authorization from all Sentinels is
required in order to failover.

This means that the quorum can be used to tune Sentinel in two ways:

1. If a quorum is set to a value smaller than the majority of Sentinels we
   deploy, we are basically making Sentinel more sensitive to master failures,
   triggering a failover as soon as even just a minority of Sentinels is no
   longer able to talk with the master.
2. If a quorum is set to a value greater than the majority of Sentinels, we
   are making Sentinel able to failover only when there are a very large
   number (larger than majority) of well connected Sentinels which agree
   about the master being down.

## Configuration epochs

Sentinels require to get authorizations from a majority in order to start a
failover for a few important reasons:

When a Sentinel is authorized, it gets a unique **configuration epoch** for the
master it is failing over. This is a number that will be used to version the
new configuration after the failover is completed. Because a majority agreed
that a given version was assigned to a given Sentinel, no other Sentinel will
be able to use it. This means that every configuration of every failover is
versioned with a unique version. We'll see why this is so important.

Moreover Sentinels have a rule: if a Sentinel voted another Sentinel for the
failover of a given master, it will wait some time to try to failover the
same master again. This delay is the `2 * failover-timeout` you can configure
in `sentinel.conf`. This means that Sentinels will not try to failover the
same master at the same time, the first to ask to be authorized will try, if
it fails another will try after some time, and so forth.

Redis Sentinel guarantees the *liveness* property that if a majority of
Sentinels are able to talk, eventually one will be authorized to failover if
the master is down.

Redis Sentinel also guarantees the *safety* property that every Sentinel will
failover the same master using a different *configuration epoch*.

## Configuration propagation

Once a Sentinel is able to failover a master successfully, it will start to
broadcast the new configuration so that the other Sentinels will update their
information about a given master.

For a failover to be considered successful, it requires that the Sentinel was
able to send the `REPLICAOF NO ONE` command to the selected replica, and that
the switch to master was later observed in the `INFO` output of the master.

At this point, even if the reconfiguration of the replicas is in progress,
the failover is considered to be successful, and all the Sentinels are
required to start reporting the new configuration.

The way a new configuration is propagated is the reason why we need that
every Sentinel failover is authorized with a different version number
(configuration epoch).

Every Sentinel continuously broadcasts its version of the configuration of a
master using Redis Pub/Sub
messages, both in the master and all the replicas. At the same time all the
Sentinels wait for messages to see what is the configuration advertised by
the other Sentinels.

Configurations are broadcast in the `__sentinel__:hello` Pub/Sub channel.

Because every configuration has a different version number, the greater
version always wins over smaller versions.

So for example the configuration for the master `mymaster` starts with all the
Sentinels believing the master is at 192.168.1.50:6379. This configuration
has version 1. After some time a Sentinel is authorized to failover with
version 2. If the failover is successful, it will start to broadcast a new
configuration, let's say 192.168.1.50:9000, with version 2. All the other
instances will see this configuration and will update their configuration
accordingly, since the new configuration has a greater version.

This means that Sentinel guarantees a second liveness property: a set of
Sentinels that are able to communicate will all converge to the same
configuration with the higher version number.

Basically if the net is partitioned, every partition will converge to the
higher local configuration. In the special case of no partitions, there is a
single partition and every Sentinel will agree about the configuration.

## Consistency under partitions

Redis Sentinel configurations are eventually consistent, so every partition
will converge to the higher configuration available.
However in a real-world system using Sentinel there are three different
players:

* Redis instances.
* Sentinel instances.
* Clients.

In order to define the behavior of the system we have to consider all three.

The following is a simple network where there are 3 nodes, each running a
Redis instance and a Sentinel instance:

                +-------------+
                | Sentinel 1  |----- Client A
                | Redis 1 (M) |
                +-------------+
                        |
                        |
    +-------------+     |          +------------+
    | Sentinel 2  |-----+-- // ----| Sentinel 3 |----- Client B
    | Redis 2 (S) |                | Redis 3 (M)|
    +-------------+                +------------+

In this system the original state was that Redis 3 was the master, while
Redis 1 and 2 were replicas. A partition occurred isolating the old master.
Sentinels 1 and 2 started a failover promoting
Sentinel 1 as the new master.

The Sentinel properties guarantee that Sentinel 1 and 2 now have the new
configuration for the master. However Sentinel 3 has still the old
configuration since it lives in a different partition.

We know that Sentinel 3 will get its configuration updated when the network
partition will heal, however what happens during the partition if there are
clients partitioned with the old master?

Clients will be still able to write to Redis 3, the old master. When the
partition will rejoin, Redis 3 will be turned into a replica of Redis 1, and
all the data written during the partition will be lost.

Depending on your configuration you may want or not that this scenario
happens:

* If you are using Redis as a cache, it could be handy that Client B is still
  able to write to the old master, even if its data will be lost.
* If you are using Redis as a store, this is not good and you need to
  configure the system in order to partially prevent this problem.

Since Redis is asynchronously replicated, there is no way to totally prevent
data loss in this scenario, however you can bound the divergence between
Redis 3 and Redis 1 using the following Redis configuration option:

    min-replicas-to-write 1
    min-replicas-max-lag 10

With the above configuration (please see the self-commented `redis.conf`
example in the Redis distribution for more information) a Redis instance,
when acting as a master, will stop accepting writes if it can't write to at
least 1 replica. Since replication is asynchronous, *not being able to write*
actually means that the replica is either disconnected, or is not sending us
asynchronous acknowledges for more than the specified `max-lag` number of
seconds.

Using this configuration, the Redis 3 in the above example will become
unavailable after 10 seconds. When the partition heals, the Sentinel 3
configuration will converge to the new one, and Client B will be able to
fetch a valid configuration and continue.

In general Redis + Sentinel as a whole are an eventually consistent system
where the merge function is *last failover wins*, and the data from old
masters are discarded to replicate the data of the current master, so there
is always a window for losing acknowledged writes. This is due to Redis
asynchronous replication and the discarding nature of the "virtual" merge
function of the system. Note that this is not a limitation of Sentinel
itself, and if you orchestrate the failover with a strongly consistent
replicated state machine, the same properties will still apply. There are
only two ways to avoid losing acknowledged writes:

1. Use synchronous replication (and a proper consensus algorithm to run a
   replicated state machine).
2. Use an eventually consistent system where different versions of the same
   object can be merged.

Redis currently is not able to use any of the above systems, and is currently
outside the development goals. However there are proxies implementing
solution "2" on top of Redis stores, such as SoundCloud
[Roshi](https://github.com/soundcloud/roshi), or Netflix
[Dynomite](https://github.com/Netflix/dynomite).

## Sentinel persistent state

Sentinel state is persisted in the sentinel configuration file. For example
every time a new configuration is received, or created (leader Sentinels),
for a master, the configuration is persisted on disk together with the
configuration epoch. This means that it is safe to stop and restart Sentinel
processes.

## TILT mode

Redis Sentinel is heavily dependent on the computer time: for instance in
order to understand if an instance is available, it remembers the time of the
latest successful reply to the PING command, and compares it with the current
time to understand how old it is.

However if the computer time changes in an unexpected way, or if the computer
is very busy, or the process blocked for some reason, Sentinel may start to
behave in an unexpected way.

The TILT mode is a special "protection" mode that a Sentinel can enter when
something odd is detected that can lower the reliability of the system. The
Sentinel timer interrupt is normally
called 10 times per second, so we expect that more or less 100 milliseconds
will elapse between two calls to the timer interrupt.

What a Sentinel does is to register the previous time the timer interrupt was
called, and compare it with the current call: if the time difference is
negative or unexpectedly big (2 seconds or more), the TILT mode is entered
(or, if it was already entered, the exit from the TILT mode is postponed).

When in TILT mode the Sentinel will continue to monitor everything, but:

* It stops acting at all.
* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests
  as the ability to detect a failure is no longer trusted.

If everything appears to be normal for 30 seconds, the TILT mode is exited.

In the Sentinel TILT mode, if we send the INFO command, we could get the
following response:

    $ redis-cli -p 26379
    127.0.0.1:26379> info
    (Other information from Sentinel server skipped.)

    # Sentinel
    sentinel_masters:1
    sentinel_tilt:0
    sentinel_tilt_since_seconds:-1
    sentinel_running_scripts:0
    sentinel_scripts_queue_length:0
    sentinel_simulate_failure_flags:0
    master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=0,sentinels=1

The field "sentinel_tilt_since_seconds" indicates how many seconds the
Sentinel is already in the TILT mode. If it is not in TILT mode, the value
will be -1.

Note that in some ways TILT mode could be replaced using the monotonic clock
API that many kernels offer. However it is not still clear if this is a good
solution, since the current system avoids issues in case the process is just
suspended or not executed by the scheduler for a long time.

**A note about the word slave used in this man page**: starting with Redis 5,
if not for backward compatibility, the Redis project no longer uses the word
slave. Unfortunately in this command the word slave is part of the protocol,
so we'll be able to remove such occurrences only when this API will be
naturally deprecated.
---
title: "Redis configuration"
linkTitle: "Configuration"
weight: 2
description: >
Overview of redis.conf, the Redis configuration file
aliases: [
/docs/manual/config
]
---
Redis is able to start without a configuration file using a built-in default
configuration, however this setup is only recommended for testing and
development purposes.
The proper way to configure Redis is by providing a Redis configuration file,
usually called `redis.conf`.
The `redis.conf` file contains a number of directives that have a very simple
format:

    keyword argument1 argument2 ... argumentN

This is an example of a configuration directive:

    replicaof 127.0.0.1 6380

It is possible to provide strings containing spaces as arguments using
(double or single) quotes, as in the following example:

    requirepass "hello world"

Single-quoted strings can contain characters escaped by backslashes, and
double-quoted strings can additionally include any ASCII symbols encoded using
backslashed hexadecimal notation "\\xff".
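The quoting rules above can be illustrated with a small sketch (this is not Redis's actual parser, and it simplifies single-quote escaping) that splits one directive line into a keyword and its arguments:

```python
# Illustrative sketch, not Redis source code: tokenize one redis.conf-style
# directive line, honoring double/single quotes as described above.
import re

def parse_directive(line: str):
    """Split a config line into (keyword, [arguments])."""
    tokens = []
    # Match a double-quoted token, a single-quoted token, or a bare token.
    for match in re.finditer(r'"((?:\\.|[^"\\])*)"|\'([^\']*)\'|(\S+)', line):
        dq, sq, bare = match.groups()
        if dq is not None:
            # Double quotes: resolve backslash escapes, including \xNN.
            tokens.append(dq.encode().decode("unicode_escape"))
        elif sq is not None:
            tokens.append(sq)
        else:
            tokens.append(bare)
    return tokens[0], tokens[1:]
```

For example, `parse_directive('requirepass "hello world"')` yields the keyword `requirepass` with the single argument `hello world`.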
The list of configuration directives, with their meaning and intended usage,
is available in the self-documented example redis.conf shipped with the
Redis distribution.
* The self documented [redis.conf for Redis 7.2](https://raw.githubusercontent.com/redis/redis/7.2/redis.conf).
* The self documented [redis.conf for Redis 7.0](https://raw.githubusercontent.com/redis/redis/7.0/redis.conf).
* The self documented [redis.conf for Redis 6.2](https://raw.githubusercontent.com/redis/redis/6.2/redis.conf).
* The self documented [redis.conf for Redis 6.0](https://raw.githubusercontent.com/redis/redis/6.0/redis.conf).
* The self documented [redis.conf for Redis 5.0](https://raw.githubusercontent.com/redis/redis/5.0/redis.conf).
* The self documented [redis.conf for Redis 4.0](https://raw.githubusercontent.com/redis/redis/4.0/redis.conf).
* The self documented [redis.conf for Redis 3.2](https://raw.githubusercontent.com/redis/redis/3.2/redis.conf).
* The self documented [redis.conf for Redis 3.0](https://raw.githubusercontent.com/redis/redis/3.0/redis.conf).
* The self documented [redis.conf for Redis 2.8](https://raw.githubusercontent.com/redis/redis/2.8/redis.conf).
* The self documented [redis.conf for Redis 2.6](https://raw.githubusercontent.com/redis/redis/2.6/redis.conf).
* The self documented [redis.conf for Redis 2.4](https://raw.githubusercontent.com/redis/redis/2.4/redis.conf).
Passing arguments via the command line
---
You can also pass Redis configuration parameters
using the command line directly. This is very useful for testing purposes.
The following is an example that starts a new Redis instance using port 6380
as a replica of the instance running at 127.0.0.1 port 6379.

    ./redis-server --port 6380 --replicaof 127.0.0.1 6379

The format of the arguments passed via the command line is exactly the same
as the one used in the redis.conf file, with the exception that the keyword
is prefixed with `--`.
Note that internally this generates an in-memory temporary config file
(possibly concatenating the config file passed by the user, if any) where
arguments are translated into the format of redis.conf.
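The translation described above can be sketched as follows (a hypothetical helper, not the actual Redis implementation): each `--keyword` starts a new directive, and subsequent tokens become its arguments.

```python
# Hypothetical sketch of the command-line-to-config translation: arguments
# prefixed with "--" become redis.conf-style directive lines.
def argv_to_config(argv):
    """Turn ['--port', '6380', ...] into redis.conf-formatted lines."""
    lines, current = [], []
    for token in argv:
        if token.startswith("--"):
            if current:
                lines.append(" ".join(current))
            current = [token[2:]]  # strip the "--" prefix to get the keyword
        else:
            current.append(token)  # everything else is an argument
    if current:
        lines.append(" ".join(current))
    return lines
```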
Changing Redis configuration while the server is running
---
It is possible to reconfigure Redis on the fly without stopping and restarting
the service, and to query the current configuration programmatically, using
the special commands `CONFIG SET` and `CONFIG GET`.
Not all of the configuration directives are supported in this way, but most
are supported as expected.
Please refer to the `CONFIG SET` and `CONFIG GET` pages for more information.
Note that modifying the configuration on the fly **has no effect on the
redis.conf file**, so at the next restart of Redis the old configuration will
be used instead.
Make sure to also modify the `redis.conf` file according to the configuration
you set using `CONFIG SET`.
You can do it manually, or you can use `CONFIG REWRITE`, which will automatically scan your `redis.conf` file and update the fields which don't match the current configuration value.
Fields that do not exist in the file but are set to their default value are not added.
Comments inside your configuration file are retained.
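The interplay between `CONFIG SET`, restarts, and `CONFIG REWRITE` can be modeled with a toy sketch (this is only an illustration of the semantics described above, not how Redis is implemented): runtime changes live in memory until an explicit rewrite persists them.

```python
# Toy model of the behavior described above: CONFIG SET changes only the
# running configuration; a restart reloads the file unless a rewrite ran.
class ConfigModel:
    def __init__(self, file_config):
        self.file_config = dict(file_config)   # what redis.conf contains
        self.runtime = dict(file_config)       # what the server uses now

    def config_set(self, key, value):
        self.runtime[key] = value              # affects only the running server

    def config_get(self, key):
        return self.runtime.get(key)

    def restart(self):
        self.runtime = dict(self.file_config)  # on restart, the file wins

    def config_rewrite(self):
        self.file_config = dict(self.runtime)  # persist runtime values to disk
```

In this model, a `config_set` followed by a `restart` loses the change, while a `config_rewrite` before the restart preserves it.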
Configuring Redis as a cache
---
If you plan to use Redis as a cache where every key will have an
expire set, you may consider using the following configuration instead
(assuming a max memory limit of 2 megabytes as an example):

    maxmemory 2mb
    maxmemory-policy allkeys-lru

In this configuration there is no need for the application to set a
time to live for keys using the `EXPIRE` command (or equivalent) since
all the keys will be evicted using an approximated LRU algorithm as long
as we hit the 2 megabyte memory limit.
Basically, in this configuration Redis acts in a similar way to memcached.
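The eviction idea can be sketched in a few lines (a toy cache, not Redis's actual implementation; Redis uses a sampling-based, approximated LRU rather than an exact one):

```python
# Simplified sketch of the idea behind allkeys-lru with a capacity bound:
# when over capacity, sample a few keys and evict the least recently used
# key of the sample.
import random

class TinyLruCache:
    def __init__(self, capacity, sample_size=5):
        self.capacity = capacity
        self.sample_size = sample_size
        self.data = {}        # key -> value
        self.last_used = {}   # key -> logical access clock
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.last_used[key] = self.clock

    def set(self, key, value):
        self.data[key] = value
        self._touch(key)
        while len(self.data) > self.capacity:
            # Sample some keys and evict the stalest one of the sample.
            sample = random.sample(list(self.data),
                                   min(self.sample_size, len(self.data)))
            victim = min(sample, key=self.last_used.__getitem__)
            del self.data[victim], self.last_used[victim]

    def get(self, key):
        if key in self.data:
            self._touch(key)
            return self.data[key]
        return None
```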
We have more extensive documentation about using Redis as an LRU cache [here](/topics/lru-cache).
---
title: Redis persistence
linkTitle: Persistence
weight: 7
description: How Redis writes data to disk
aliases: [
/topics/persistence,
/topics/persistence.md,
/docs/manual/persistence,
/docs/manual/persistence.md
]
---
Persistence refers to the writing of data to durable storage, such as a solid-state disk (SSD). Redis provides a range of persistence options. These include:
* **RDB** (Redis Database): RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
* **AOF** (Append Only File): AOF persistence logs every write operation received by the server. These operations can then be replayed again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself.
* **No persistence**: You can disable persistence completely. This is sometimes used when caching.
* **RDB + AOF**: You can also combine both AOF and RDB in the same instance.
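A minimal `redis.conf` sketch for the combined RDB + AOF setup might look like this (the thresholds are illustrative, not recommendations):

```
save 3600 1
appendonly yes
```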
If you'd rather not think about the tradeoffs between these different persistence strategies, you may want to consider [Redis Enterprise's persistence options](https://docs.redis.com/latest/rs/databases/configure/database-persistence/), which can be pre-configured using a UI.
To learn more about how to evaluate your Redis persistence strategy, read on.
## RDB advantages
* RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups. For instance you may want to archive your RDB files every hour for the latest 24 hours, and to save an RDB snapshot every day for 30 days. This allows you to easily restore different versions of the data set in case of disasters.
* RDB is very good for disaster recovery, being a single compact file that can be transferred to far data centers, or onto Amazon S3 (possibly encrypted).
* RDB maximizes Redis performance, since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent process will never perform disk I/O or the like.
* RDB allows faster restarts with big datasets compared to AOF.
* On replicas, RDB supports [partial resynchronizations after restarts and failovers](https://redis.io/topics/replication#partial-resynchronizations-after-restarts-and-failovers).
## RDB disadvantages
* RDB is NOT good if you need to minimize the chance of data loss in case Redis stops working (for example after a power outage). You can configure different *save points* where an RDB is produced (for instance after at least five minutes and 100 writes against the data set, you can have multiple save points). However you'll usually create an RDB snapshot every five minutes or more, so in case of Redis stopping working without a correct shutdown for any reason you should be prepared to lose the latest minutes of data.
* RDB needs to fork() often in order to persist on disk using a child process. fork() can be time consuming if the dataset is big, and may result in Redis stopping serving clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great. AOF also needs to fork() but less frequently and you can tune how often you want to rewrite your logs without any trade-off on durability.
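As a sketch, multiple *save points* are expressed in `redis.conf` simply as several `save` lines, each pairing a number of seconds with a number of writes (these particular thresholds are illustrative):

```
save 900 1
save 300 10
save 60 10000
```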
## AOF advantages
* Using AOF Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync at every query. With the default policy of fsync every second, write performance is still great. fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress, so you can only lose one second worth of writes.
* The AOF log is an append-only log, so there are no seeks, nor corruption problems if there is a power outage. Even if the log ends with a half-written command for some reason (disk full or other reasons) the redis-check-aof tool is able to fix it easily.
* Redis is able to automatically rewrite the AOF in background when it gets too big. The rewrite is completely safe as while Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.
* AOF contains a log of all the operations one after the other in an easy to understand and parse format. You can even easily export an AOF file. For instance even if you've accidentally flushed everything using the `FLUSHALL` command, as long as no rewrite of the log was performed in the meantime, you can still save your data set just by stopping the server, removing the latest command, and restarting Redis again.
## AOF disadvantages
* AOF files are usually bigger than the equivalent RDB files for the same dataset.
* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
**Redis < 7.0**
* AOF can use a lot of memory if there are writes to the database during a rewrite (these are buffered in memory and written to the new AOF at the end).
* All write commands that arrive during rewrite are written to disk twice.
* Redis could freeze writing and fsyncing these write commands to the new AOF file at the end of the rewrite.
## Ok, so what should I use?
The general indication is that you should use both persistence methods if
you want a degree of data safety comparable to what PostgreSQL can provide you.
If you care a lot about your data, but still can live with a few minutes of
data loss in case of disasters, you can simply use RDB alone.
There are many users using AOF alone, but we discourage it since having an
RDB snapshot from time to time is a great idea for doing database backups,
for faster restarts, and in the event of bugs in the AOF engine.
The following sections will illustrate a few more details about the two persistence models.
## Snapshotting
By default Redis saves snapshots of the dataset on disk, in a binary
file called `dump.rdb`. You can configure Redis to have it save the
dataset every N seconds if there are at least M changes in the dataset,
or you can manually call the `SAVE` or `BGSAVE` commands.
For example, this configuration will make Redis automatically dump the
dataset to disk every 60 seconds if at least 1000 keys changed:
save 60 1000
This strategy is known as _snapshotting_.
### How it works
Whenever Redis needs to dump the dataset to disk, this is what happens:
* Redis [forks](http://linux.die.net/man/2/fork). We now have a child
and a parent process.
* The child starts to write the dataset to a temporary RDB file.
* When the child is done writing the new RDB file, it replaces the old
one.
This method allows Redis to benefit from copy-on-write semantics.
## Append-only file
Snapshotting is not very durable. If your computer running Redis stops,
your power line fails, or you accidentally `kill -9` your instance, the
latest data written to Redis will be lost. While this may not be a big
deal for some applications, there are use cases for full durability, and
in these cases Redis snapshotting alone is not a viable option.
The _append-only file_ is an alternative, fully-durable strategy for
Redis. It became available in version 1.1.
You can turn on the AOF in your configuration file:
appendonly yes
From now on, every time Redis receives a command that changes the
dataset (e.g. `SET`) it will append it to the AOF. When you restart
Redis it will re-play the AOF to rebuild the state.
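Because commands are logged using the Redis protocol format, the AOF entry for a command such as `SET mykey hello` looks roughly like this hand-written sketch of the wire framing:

```
*3
$3
SET
$5
mykey
$5
hello
```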
Since Redis 7.0.0, Redis uses a multi-part AOF mechanism.
That is, the original single AOF file is split into a base file (at most one) and incremental files (there may be more than one).
The base file represents an initial (RDB or AOF format) snapshot of the data present when the AOF is [rewritten](#log-rewriting).
The incremental files contain incremental changes since the last base AOF file was created. All these files are put in a separate directory and are tracked by a manifest file.
### Log rewriting
The AOF gets bigger and bigger as write operations are
performed. For example, if you are incrementing a counter 100 times,
you'll end up with a single key in your dataset containing the final
value, but 100 entries in your AOF. 99 of those entries are not needed
to rebuild the current state.
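The effect of a rewrite can be sketched in a few lines of Python (illustrative only, not Redis code): replaying the raw log and the compacted log yields the same dataset, but the compacted log holds one entry instead of 100.

```python
# Illustrative sketch of AOF compaction: a rewrite replaces the full
# command history with the minimal commands that rebuild the final state.
def replay(log):
    """Apply a list of (command, key, arg) entries to an empty dataset."""
    data = {}
    for cmd, key, arg in log:
        if cmd == "SET":
            data[key] = arg
        elif cmd == "INCR":
            data[key] = data.get(key, 0) + 1
    return data

def rewrite(log):
    """Return the minimal log that rebuilds the same final dataset."""
    return [("SET", key, value) for key, value in replay(log).items()]

raw_log = [("INCR", "counter", None)] * 100   # 100 entries in the AOF
compact = rewrite(raw_log)                    # a single SET after rewrite

assert replay(raw_log) == replay(compact) == {"counter": 100}
assert len(compact) == 1
```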
The rewrite is completely safe.
While Redis continues appending to the old file,
a completely new one is produced with the minimal set of operations needed to create the current data set,
and once this second file is ready Redis switches the two and starts appending to the new one.
So Redis supports an interesting feature: it is able to rebuild the AOF
in the background without interrupting service to clients. Whenever
you issue a `BGREWRITEAOF`, Redis will write the shortest sequence of
commands needed to rebuild the current dataset in memory. If you're
using the AOF with Redis 2.2 you'll need to run `BGREWRITEAOF` from time to
time. Since Redis 2.4, log rewriting can be triggered automatically (see the
example configuration file for more information).
Since Redis 7.0.0, when an AOF rewrite is scheduled, the Redis parent process opens a new incremental AOF file to continue writing.
The child process executes the rewrite logic and generates a new base AOF.
Redis will use a temporary manifest file to track the newly generated base file and incremental file.
When they are ready, Redis will perform an atomic replacement operation to make this temporary manifest file take effect.
In order to avoid the problem of creating many incremental files in case of repeated failures and retries of an AOF rewrite,
Redis introduces an AOF rewrite limiting mechanism to ensure that failed AOF rewrites are retried at a slower and slower rate.
### How durable is the append only file?
You can configure how many times Redis will
[`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are
three options:
* `appendfsync always`: `fsync` every time new commands are appended to the AOF. Very very slow, very safe. Note that the commands are appended to the AOF after a batch of commands from multiple clients or a pipeline are executed, so it means a single write and a single fsync (before sending the replies).
* `appendfsync everysec`: `fsync` every second. Fast enough (since version 2.4 likely to be as fast as snapshotting), and you may lose 1 second of data if there is a disaster.
* `appendfsync no`: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning.
The suggested (and default) policy is to `fsync` every second. It is
both fast and relatively safe. The `always` policy is very slow in
practice, but it supports group commit, so if there are multiple parallel
writes Redis will try to perform a single `fsync` operation.
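Assuming AOF is enabled, the suggested policy corresponds to this `redis.conf` fragment:

```
appendonly yes
appendfsync everysec
```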
### What should I do if my AOF gets truncated?
It is possible the server crashed while writing the AOF file, or the
volume where the AOF file is stored was full at the time of writing. When this happens the
AOF still contains consistent data representing a given point-in-time version
of the dataset (that may be old up to one second with the default AOF fsync
policy), but the last command in the AOF could be truncated.
The latest major versions of Redis will be able to load the AOF anyway, just
discarding the last non well formed command in the file. In this case the
server will emit a log like the following:
```
* Reading RDB preamble from AOF file...
* Reading the remaining AOF tail...
# !!! Warning: short read while loading the AOF file !!!
# !!! Truncating the AOF at offset 439 !!!
# AOF loaded anyway because aof-load-truncated is enabled
```
You can change the default configuration to force Redis to stop in such
cases if you want, but the default configuration is to continue regardless of
the fact the last command in the file is not well-formed, in order to guarantee
availability after a restart.
Older versions of Redis may not recover, and may require the following steps:
* Make a backup copy of your AOF file.
* Fix the original file using the `redis-check-aof` tool that ships with Redis:
$ redis-check-aof --fix <filename>
* Optionally use `diff -u` to check what is the difference between two files.
* Restart the server with the fixed file.
### What should I do if my AOF gets corrupted?
If the AOF file is not just truncated, but corrupted with invalid byte
sequences in the middle, things are more complex. Redis will complain
at startup and will abort:
```
* Reading the remaining AOF tail...
# Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix <filename>
```
The best thing to do is to run the `redis-check-aof` utility, initially without
the `--fix` option, then understand the problem, jump to the given
offset in the file, and see if it is possible to manually repair the file.
The AOF uses the same format as the Redis protocol and is quite simple to fix
manually. Otherwise it is possible to let the utility fix the file, but
in that case all the AOF portion from the invalid part to the end of the
file may be discarded, leading to a massive amount of data loss if the
corruption happened to be in the initial part of the file.
### How it works
Log rewriting uses the same copy-on-write trick already in use for
snapshotting. This is how it works:
**Redis >= 7.0**
* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child
and a parent process.
* The child starts writing the new base AOF in a temporary file.
* The parent opens a new increment AOF file to continue writing updates.
If the rewriting fails, the old base and increment files (if there are any) plus this newly opened increment file represent the complete updated dataset,
so we are safe.
* When the child is done rewriting the base file, the parent gets a signal,
uses the newly opened increment file and the child-generated base file to build a temporary manifest,
and persists it.
* Profit! Now Redis does an atomic exchange of the manifest files so that the result of this AOF rewrite takes effect. Redis also cleans up the old base file and any unused increment files.
**Redis < 7.0**
* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child
and a parent process.
* The child starts writing the new AOF in a temporary file.
* The parent accumulates all the new changes in an in-memory buffer (but
at the same time it writes the new changes in the old append-only file,
so if the rewriting fails, we are safe).
* When the child is done rewriting the file, the parent gets a signal,
and appends the in-memory buffer at the end of the file generated by the
child.
* Now Redis atomically renames the new file into the old one,
and starts appending new data into the new file.
### How can I switch to AOF, if I'm currently using dump.rdb snapshots?
If you want to enable AOF in a server that is currently using RDB snapshots, you need to convert the data by enabling AOF via the CONFIG command on the live server first.
**IMPORTANT:** not following this procedure (e.g. just changing the config and restarting the server) can result in data loss!
**Redis >= 2.2**
Preparations:
* Make a backup of your latest dump.rdb file.
* Transfer this backup to a safe place.
Switch to AOF on live database:
* Enable AOF: `redis-cli config set appendonly yes`
* Optionally disable RDB: `redis-cli config set save ""`
* Make sure writes are appended to the append only file correctly.
* **IMPORTANT:** Update your `redis.conf` (potentially through `CONFIG REWRITE`) and ensure that it matches the configuration above.
If you forget this step, when you restart the server, the configuration changes will be lost and the server will start again with the old configuration, resulting in a loss of your data.
Next time you restart the server:
* Before restarting the server, wait for AOF rewrite to finish persisting the data.
You can do that by watching `INFO persistence`, waiting for `aof_rewrite_in_progress` and `aof_rewrite_scheduled` to be `0`, and validating that `aof_last_bgrewrite_status` is `ok`.
* After restarting the server, check that your database contains the same number of keys it contained previously.
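A hypothetical way to wait for the rewrite from a script (the field name matches the `INFO persistence` output mentioned above; the polling loop itself is an assumption, not an official recipe):

```
until redis-cli info persistence | grep -q 'aof_rewrite_in_progress:0'
do
    sleep 1
done
```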
**Redis 2.0**
* Make a backup of your latest dump.rdb file.
* Transfer this backup into a safe place.
* Stop all the writes against the database!
* Issue a `redis-cli BGREWRITEAOF`. This will create the append only file.
* Stop the server when Redis finished generating the AOF dump.
* Edit redis.conf and enable append only file persistence.
* Restart the server.
* Make sure that your database contains the same number of keys it contained before the switch.
* Make sure that writes are appended to the append only file correctly.
## Interactions between AOF and RDB persistence
Redis >= 2.4 makes sure to avoid triggering an AOF rewrite when an RDB
snapshotting operation is already in progress, and to avoid allowing a
`BGSAVE` while an AOF rewrite is in progress. This prevents two Redis background processes
from doing heavy disk I/O at the same time.
When snapshotting is in progress and the user explicitly requests a log
rewrite operation using `BGREWRITEAOF` the server will reply with an OK
status code telling the user the operation is scheduled, and the rewrite
will start once the snapshotting is completed.
In case both AOF and RDB persistence are enabled and Redis restarts, the
AOF file will be used to reconstruct the original dataset, since it is
guaranteed to be the most complete.
## Backing up Redis data
Before starting this section, make sure to read the following sentence: **Make Sure to Backup Your Database**. Disks break, instances in the cloud disappear, and so forth: no backups means huge risk of data disappearing into /dev/null.
Redis is very data backup friendly since you can copy RDB files while the
database is running: the RDB is never modified once produced, and while it
gets produced it uses a temporary name and is renamed into its final destination
atomically using rename(2) only when the new snapshot is complete.
This means that copying the RDB file is completely safe while the server is
running. This is what we suggest:
* Create a cron job in your server creating hourly snapshots of the RDB file in one directory, and daily snapshots in a different directory.
* Every time the cron script runs, call the `find` command to delete snapshots that are too old: for instance you can take hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Make sure to name the snapshots with date and time information.
* At least one time every day make sure to transfer an RDB snapshot *outside your data center* or at least *outside the physical machine* running your Redis instance.
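As a hedged sketch, the cron jobs above might look like this (all paths and retention values are hypothetical, and `%` must be escaped in crontab entries):

```
0 * * * *  cp /var/lib/redis/dump.rdb /backups/hourly/dump-$(date +\%F-\%H).rdb
0 2 * * *  cp /var/lib/redis/dump.rdb /backups/daily/dump-$(date +\%F).rdb
30 2 * * * find /backups/hourly -mmin +2880 -delete
```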
### Backing up AOF persistence
If you run a Redis instance with only AOF persistence enabled, you can still perform backups.
Since Redis 7.0.0, AOF files are split into multiple files which reside in a single directory determined by the `appenddirname` configuration.
During normal operation all you need to do is copy/tar the files in this directory to achieve a backup. However, if this is done during a [rewrite](#log-rewriting), you might end up with an invalid backup.
To work around this you must disable AOF rewrites during the backup:
1. Turn off automatic rewrites with<br/>
`CONFIG SET` `auto-aof-rewrite-percentage 0`<br/>
Make sure you don't manually start a rewrite (using `BGREWRITEAOF`) during this time.
2. Check there's no current rewrite in progress using<br/>
`INFO` `persistence`<br/>
and verifying `aof_rewrite_in_progress` is 0. If it's 1, then you'll need to wait for the rewrite to complete.
3. Now you can safely copy the files in the `appenddirname` directory.
4. Re-enable rewrites when done:<br/>
`CONFIG SET` `auto-aof-rewrite-percentage <prev-value>`
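Put together, the procedure might look like the following sketch (assuming the default `appenddirname` of `appendonlydir`, a backup path of your choosing, and a previous `auto-aof-rewrite-percentage` of 100):

```
redis-cli config set auto-aof-rewrite-percentage 0
redis-cli info persistence | grep aof_rewrite_in_progress    # wait until this is 0
tar -czf /backups/aof-backup.tar.gz appendonlydir/
redis-cli config set auto-aof-rewrite-percentage 100
```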
**Note:** If you want to minimize the time AOF rewrites are disabled you may create hard links to the files in `appenddirname` (in step 3 above) and then re-enable rewrites (step 4) after the hard links are created.
Now you can copy/tar the hardlinks and delete them when done. This works because Redis guarantees that it
only appends to files in this directory, or completely replaces them if necessary, so the content should be
consistent at any given point in time.
**Note:** If you want to handle the case of the server being restarted during the backup and make sure no rewrite will automatically start after the restart you can change step 1 above to also persist the updated configuration via `CONFIG REWRITE`.
Just make sure to re-enable automatic rewrites when done (step 4) and persist it with another `CONFIG REWRITE`.
Prior to version 7.0.0 backing up the AOF file can be done simply by copying the aof file (like backing up the RDB snapshot). The file may lack the final part
but Redis will still be able to load it (see the previous sections about [truncated AOF files](#what-should-i-do-if-my-aof-gets-truncated)).
## Disaster recovery
Disaster recovery in the context of Redis is basically the same story as
backups, plus the ability to transfer those backups in many different external
data centers. This way data is secured even in the case of some catastrophic
event affecting the main data center where Redis is running and producing its
snapshots.
We'll review the most interesting disaster recovery techniques
that don't have too high costs.
* Amazon S3 and other similar services are a good way for implementing your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode). Make sure to store your password in many different safe places (for instance give a copy to the most important people of your organization). It is recommended to use multiple storage services for improved data safety.
* Transfer your snapshots using SCP (part of SSH) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and generate an ssh client key without passphrase, then add it to the `authorized_keys` file of your small VPS. You are ready to transfer backups in an automated fashion. Get at least two VPSes from two different providers for best results.
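A minimal sketch of the encrypt-and-ship step (the hostname and paths are hypothetical):

```
gpg -c dump.rdb      # prompts for a passphrase; produces dump.rdb.gpg
scp dump.rdb.gpg backup@remote-vps:/backups/redis/
```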
It is important to understand that this system can easily fail if not
implemented in the right way. At least, make absolutely sure that after the
transfer is completed you are able to verify the file size (that should match
the one of the file you copied) and possibly the SHA1 digest, if you are using
a VPS.
You also need some kind of independent alert system if the transfer of fresh
backups is not working for some reason. | redis | title Redis persistence linkTitle Persistence weight 7 description How Redis writes data to disk aliases topics persistence topics persistence md docs manual persistence docs manual persistence md Persistence refers to the writing of data to durable storage such as a solid state disk SSD Redis provides a range of persistence options These include RDB Redis Database RDB persistence performs point in time snapshots of your dataset at specified intervals AOF Append Only File AOF persistence logs every write operation received by the server These operations can then be replayed again at server startup reconstructing the original dataset Commands are logged using the same format as the Redis protocol itself No persistence You can disable persistence completely This is sometimes used when caching RDB AOF You can also combine both AOF and RDB in the same instance If you d rather not think about the tradeoffs between these different persistence strategies you may want to consider Redis Enterprise s persistence options https docs redis com latest rs databases configure database persistence which can be pre configured using a UI To learn more about how to evaluate your Redis persistence strategy read on RDB advantages RDB is a very compact single file point in time representation of your Redis data RDB files are perfect for backups For instance you may want to archive your RDB files every hour for the latest 24 hours and to save an RDB snapshot every day for 30 days This allows you to easily restore different versions of the data set in case of disasters RDB is very good for disaster recovery being a single compact file that can be transferred to far data centers or onto Amazon S3 possibly encrypted RDB maximizes Redis performances since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest The parent process will never perform disk I O or alike RDB allows faster 
forks http linux die net man 2 fork so now we have a child and a parent process The child starts writing the new AOF in a temporary file The parent accumulates all the new changes in an in memory buffer but at the same time it writes the new changes in the old append only file so if the rewriting fails we are safe When the child is done rewriting the file the parent gets a signal and appends the in memory buffer at the end of the file generated by the child Now Redis atomically renames the new file into the old one and starts appending new data into the new file How I can switch to AOF if I m currently using dump rdb snapshots If you want to enable AOF in a server that is currently using RDB snapshots you need to convert the data by enabling AOF via CONFIG command on the live server first IMPORTANT not following this procedure e g just changing the config and restarting the server can result in data loss Redis 2 2 Preparations Make a backup of your latest dump rdb file Transfer this backup to a safe place Switch to AOF on live database Enable AOF redis cli config set appendonly yes Optionally disable RDB redis cli config set save Make sure writes are appended to the append only file correctly IMPORTANT Update your redis conf potentially through CONFIG REWRITE and ensure that it matches the configuration above If you forget this step when you restart the server the configuration changes will be lost and the server will start again with the old configuration resulting in a loss of your data Next time you restart the server Before restarting the server wait for AOF rewrite to finish persisting the data You can do that by watching INFO persistence waiting for aof rewrite in progress and aof rewrite scheduled to be 0 and validating that aof last bgrewrite status is ok After restarting the server check that your database contains the same number of keys it contained previously Redis 2 0 Make a backup of your latest dump rdb file Transfer this backup into a safe place 
Stop all the writes against the database Issue a redis cli BGREWRITEAOF This will create the append only file Stop the server when Redis finished generating the AOF dump Edit redis conf end enable append only file persistence Restart the server Make sure that your database contains the same number of keys it contained before the switch Make sure that writes are appended to the append only file correctly Interactions between AOF and RDB persistence Redis 2 4 makes sure to avoid triggering an AOF rewrite when an RDB snapshotting operation is already in progress or allowing a BGSAVE while the AOF rewrite is in progress This prevents two Redis background processes from doing heavy disk I O at the same time When snapshotting is in progress and the user explicitly requests a log rewrite operation using BGREWRITEAOF the server will reply with an OK status code telling the user the operation is scheduled and the rewrite will start once the snapshotting is completed In the case both AOF and RDB persistence are enabled and Redis restarts the AOF file will be used to reconstruct the original dataset since it is guaranteed to be the most complete Backing up Redis data Before starting this section make sure to read the following sentence Make Sure to Backup Your Database Disks break instances in the cloud disappear and so forth no backups means huge risk of data disappearing into dev null Redis is very data backup friendly since you can copy RDB files while the database is running the RDB is never modified once produced and while it gets produced it uses a temporary name and is renamed into its final destination atomically using rename 2 only when the new snapshot is complete This means that copying the RDB file is completely safe while the server is running This is what we suggest Create a cron job in your server creating hourly snapshots of the RDB file in one directory and daily snapshots in a different directory Every time the cron script runs make sure to call the find 
command to make sure too old snapshots are deleted for instance you can take hourly snapshots for the latest 48 hours and daily snapshots for one or two months Make sure to name the snapshots with date and time information At least one time every day make sure to transfer an RDB snapshot outside your data center or at least outside the physical machine running your Redis instance Backing up AOF persistence If you run a Redis instance with only AOF persistence enabled you can still perform backups Since Redis 7 0 0 AOF files are split into multiple files which reside in a single directory determined by the appenddirname configuration During normal operation all you need to do is copy tar the files in this directory to achieve a backup However if this is done during a rewrite log rewriting you might end up with an invalid backup To work around this you must disable AOF rewrites during the backup 1 Turn off automatic rewrites with br CONFIG SET auto aof rewrite percentage 0 br Make sure you don t manually start a rewrite using BGREWRITEAOF during this time 2 Check there s no current rewrite in progress using br INFO persistence br and verifying aof rewrite in progress is 0 If it s 1 then you ll need to wait for the rewrite to complete 3 Now you can safely copy the files in the appenddirname directory 4 Re enable rewrites when done br CONFIG SET auto aof rewrite percentage prev value Note If you want to minimize the time AOF rewrites are disabled you may create hard links to the files in appenddirname in step 3 above and then re enable rewrites step 4 after the hard links are created Now you can copy tar the hardlinks and delete them when done This works because Redis guarantees that it only appends to files in this directory or completely replaces them if necessary so the content should be consistent at any given point in time Note If you want to handle the case of the server being restarted during the backup and make sure no rewrite will automatically start after the 
restart you can change step 1 above to also persist the updated configuration via CONFIG REWRITE Just make sure to re enable automatic rewrites when done step 4 and persist it with another CONFIG REWRITE Prior to version 7 0 0 backing up the AOF file can be done simply by copying the aof file like backing up the RDB snapshot The file may lack the final part but Redis will still be able to load it see the previous sections about truncated AOF files what should i do if my aof gets truncated Disaster recovery Disaster recovery in the context of Redis is basically the same story as backups plus the ability to transfer those backups in many different external data centers This way data is secured even in the case of some catastrophic event affecting the main data center where Redis is running and producing its snapshots We ll review the most interesting disaster recovery techniques that don t have too high costs Amazon S3 and other similar services are a good way for implementing your disaster recovery system Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form You can encrypt your data using gpg c in symmetric encryption mode Make sure to store your password in many different safe places for instance give a copy to the most important people of your organization It is recommended to use multiple storage services for improved data safety Transfer your snapshots using SCP part of SSH to far servers This is a fairly simple and safe route get a small VPS in a place that is very far from you install ssh there and generate a ssh client key without passphrase then add it in the authorized keys file of your small VPS You are ready to transfer backups in an automated fashion Get at least two VPS in two different providers for best results It is important to understand that this system can easily fail if not implemented in the right way At least make absolutely sure that after the transfer is completed you are able to verify the file size that should match 
the one of the file you copied and possibly the SHA1 digest if you are using a VPS You also need some kind of independent alert system if the transfer of fresh backups is not working for some reason |
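The hourly-snapshot-plus-pruning routine suggested in the backup section can be sketched in a few lines of Python. The paths, the timestamped naming scheme, and the 48-hour retention below are illustrative assumptions; a cron-driven shell script using `cp` and `find -mmin +2880 -delete` would do the same job.

```python
# Sketch: hourly RDB snapshot with 48-hour pruning (paths are illustrative).
import shutil
import time
from pathlib import Path

def backup_rdb(src: Path, dest_dir: Path, keep_hours: int = 48) -> Path:
    """Copy the RDB file into dest_dir under a timestamped name, then
    delete snapshots older than keep_hours. Copying is safe while Redis
    runs, because Redis replaces dump.rdb atomically via rename(2)."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = dest_dir / f"dump-{stamp}.rdb"
    shutil.copy2(src, target)
    # Prune snapshots whose modification time is older than the cutoff.
    cutoff = time.time() - keep_hours * 3600
    for old in dest_dir.glob("dump-*.rdb"):
        if old.stat().st_mtime < cutoff:
            old.unlink()
    return target
```

A cron entry invoking a small wrapper around this function once per hour (and a second job with a longer retention for the daily directory) covers the rotation scheme described above.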
---
title: Scale with Redis Cluster
linkTitle: Scale with Redis Cluster
weight: 6
description: Horizontal scaling with Redis Cluster
aliases: [
/topics/cluster-tutorial,
/topics/partitioning,
/docs/manual/scaling,
/docs/manual/scaling.md
]
---
Redis scales horizontally with a deployment topology called Redis Cluster.
This topic will teach you how to set up, test, and operate Redis Cluster in production.
You will learn about the availability and consistency characteristics of Redis Cluster from the end user's point of view.
If you plan to run a production Redis Cluster deployment or want to understand better how Redis Cluster works internally, consult the [Redis Cluster specification](/topics/cluster-spec). To learn how Redis Enterprise handles scaling, see [Linear Scaling with Redis Enterprise](https://redis.com/redis-enterprise/technology/linear-scaling-redis-enterprise/).
## Redis Cluster 101
Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes.
Redis Cluster also provides some degree of availability during partitions—in practical terms, the ability to continue operations when some nodes fail or are unable to communicate.
However, the cluster will become unavailable in the event of larger failures (for example, when the majority of masters are unavailable).
So, with Redis Cluster, you get the ability to:
* Automatically split your dataset among multiple nodes.
* Continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.
#### Redis Cluster TCP ports
Every Redis Cluster node requires two open TCP connections: a Redis TCP port used to serve clients, e.g., 6379, and a second port known as the _cluster bus port_.
By default, the cluster bus port is set by adding 10000 to the data port (e.g., 16379); however, you can override this in the `cluster-port` configuration.
Cluster bus is a node-to-node communication channel that uses a binary protocol, which is better suited than the client protocol for exchanging information between nodes, as it requires less bandwidth and processing time.
Nodes use the cluster bus for failure detection, configuration updates, failover authorization, and so forth.
Clients should never try to communicate with the cluster bus port, but rather use the Redis command port.
However, make sure you open both ports in your firewall, otherwise Redis cluster nodes won't be able to communicate.
For a Redis Cluster to work properly you need, for each node:
1. The client communication port (usually 6379) must be open to all the clients that need to reach the cluster, as well as to all the other cluster nodes, which use the client port for key migrations.
2. The cluster bus port must be reachable from all the other cluster nodes.
If you don't open both TCP ports, your cluster will not work as expected.
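As a concrete (illustrative) example, the two ports for a node using the default data port could be declared in `redis.conf` like this; `cluster-port` only needs to be set when overriding the default of data port + 10000:

```
port 6379
# Optional: the cluster bus port defaults to port + 10000 (16379 here).
cluster-port 16379
```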
#### Redis Cluster and Docker
Currently, Redis Cluster does not support NATted environments and in general
environments where IP addresses or TCP ports are remapped.
Docker uses a technique called _port mapping_: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using.
This is useful for running multiple containers using the same ports, at the same time, in the same server.
To make Docker compatible with Redis Cluster, you need to use Docker's _host networking mode_.
Please see the `--net=host` option in the [Docker documentation](https://docs.docker.com/engine/userguide/networking/dockernetworks/) for more information.
#### Redis Cluster data sharding
Redis Cluster does not use consistent hashing, but a different form of sharding
where every key is conceptually part of what we call a **hash slot**.
There are 16384 hash slots in Redis Cluster, and to compute the hash
slot for a given key, we simply take the CRC16 of the key modulo
16384.
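As an illustration, the slot computation can be reproduced in a few lines of Python. This is a sketch (Redis implements it in C), using the CRC16 variant known as XMODEM; the resulting slot for the key `foo` matches the redirect shown in the `redis-cli` session later in this tutorial.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, initial value 0),
    the CRC16 variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Hash slot of a key: CRC16 of the key, modulo 16384."""
    return crc16_xmodem(key) % 16384

print(hash_slot(b"foo"))  # 12182
```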
Every node in a Redis Cluster is responsible for a subset of the hash slots,
so, for example, you may have a cluster with 3 nodes, where:
* Node A contains hash slots from 0 to 5500.
* Node B contains hash slots from 5501 to 11000.
* Node C contains hash slots from 11001 to 16383.
This makes it easy to add and remove cluster nodes. For example, if
I want to add a new node D, I need to move some hash slots from nodes A, B, C
to D. Similarly, if I want to remove node A from the cluster, I can just
move the hash slots served by A to B and C. Once node A is empty,
I can remove it from the cluster completely.
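The example layout above can be modeled as an ordered list of slot-range upper bounds. This sketch (node names and ranges are just the illustration's) shows the lookup:

```python
import bisect

# Upper bound of each node's slot range in the 3-node example above.
SLOT_UPPER_BOUNDS = [5500, 11000, 16383]
NODES = ["A", "B", "C"]

def node_for_slot(slot: int) -> str:
    """Find the node owning a slot via binary search over range bounds."""
    return NODES[bisect.bisect_left(SLOT_UPPER_BOUNDS, slot)]

print(node_for_slot(12182))  # C
```

Resharding then amounts to updating this mapping (and migrating the keys in the moved slots) while the cluster keeps serving requests.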
Moving hash slots from a node to another does not require stopping
any operations; therefore, adding and removing nodes, or changing the percentage of hash slots held by a node, requires no downtime.
Redis Cluster supports multiple key operations as long as all of the keys involved in a single command execution (or whole transaction, or Lua script
execution) belong to the same hash slot. The user can force multiple keys
to be part of the same hash slot by using a feature called *hash tags*.
Hash tags are documented in the Redis Cluster specification, but the gist is
that if there is a substring between {} brackets in a key, only what is
inside the string is hashed. For example, the keys `user:{123}:profile` and `user:{123}:account` are guaranteed to be in the same hash slot because they share the same hash tag. As a result, you can operate on these two keys in the same multi-key operation.
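A Python sketch of the hash-tag extraction rule described above (the authoritative description, including edge cases such as an empty `{}`, is in the Redis Cluster specification):

```python
def hash_tag(key: str) -> str:
    """Return the portion of the key that is hashed: if the key contains
    a '{...}' with a non-empty body, only the body between the first '{'
    and the first following '}' is hashed; otherwise the whole key is."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:  # non-empty body between braces
            return key[start + 1:end]
    return key

print(hash_tag("user:{123}:profile"))  # 123
```

Because both keys below reduce to the same hashed string, they necessarily land in the same slot.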
#### Redis Cluster master-replica model
To remain available when a subset of master nodes are failing or are
not able to communicate with the majority of nodes, Redis Cluster uses a
master-replica model where every hash slot has from 1 (the master itself) to N
replicas (N-1 additional replica nodes).
In our example cluster with nodes A, B, C, if node B fails the cluster is not
able to continue, since we no longer have a way to serve hash slots in the
range 5501-11000.
However, when the cluster is created (or at a later time), we add a replica
node to every master, so that the final cluster is composed of A, B, C
that are master nodes, and A1, B1, C1 that are replica nodes.
This way, the system can continue if node B fails.
Node B1 replicates B, so if B fails, the cluster will promote node B1 as the new
master and will continue to operate correctly.
However, note that if nodes B and B1 fail at the same time, Redis Cluster will not be able to continue to operate.
#### Redis Cluster consistency guarantees
Redis Cluster does not guarantee **strong consistency**. In practical
terms this means that under certain conditions it is possible that Redis
Cluster will lose writes that were acknowledged by the system to the client.
The first reason why Redis Cluster can lose writes is because it uses
asynchronous replication. This means that during writes the following
happens:
* Your client writes to the master B.
* The master B replies OK to your client.
* The master B propagates the write to its replicas B1, B2 and B3.
As you can see, B does not wait for an acknowledgement from B1, B2, B3 before
replying to the client, since this would be a prohibitive latency penalty
for Redis. So, if your client writes something and B acknowledges the write but
crashes before being able to send the write to its replicas, one of the
replicas (that did not receive the write) can be promoted to master, losing
the write forever.
This is very similar to what happens with most databases that are
configured to flush data to disk every second, so it is a scenario you
are already able to reason about because of past experiences with traditional
database systems not involving distributed systems. Similarly you can
improve consistency by forcing the database to flush data to disk before
replying to the client, but this usually results in prohibitively low
performance. That would be the equivalent of synchronous replication in
the case of Redis Cluster.
Basically, there is a trade-off to be made between performance and consistency.
Redis Cluster has support for synchronous writes when absolutely needed,
implemented via the `WAIT` command. This makes losing writes a lot less
likely. However, note that Redis Cluster does not implement strong consistency
even when synchronous replication is used: it is always possible, under more
complex failure scenarios, that a replica that was not able to receive the write
will be elected as master.
There is another notable scenario where Redis Cluster will lose writes, that
happens during a network partition where a client is isolated with a minority
of instances including at least a master.
Take as an example our 6 nodes cluster composed of A, B, C, A1, B1, C1,
with 3 masters and 3 replicas. There is also a client, that we will call Z1.
After a partition occurs, it is possible that in one side of the
partition we have A, C, A1, B1, C1, and in the other side we have B and Z1.
Z1 is still able to write to B, which will accept its writes. If the
partition heals in a very short time, the cluster will continue normally.
However, if the partition lasts enough time for B1 to be promoted to master
on the majority side of the partition, the writes that Z1 has sent to B
in the meantime will be lost.
There is a **maximum window** to the amount of writes Z1 will be able
to send to B: if enough time has elapsed for the majority side of the
partition to elect a replica as master, every master node in the minority
side will have stopped accepting writes.
This amount of time is a very important configuration directive of Redis
Cluster, and is called the **node timeout**.
After node timeout has elapsed, a master node is considered to be failing,
and can be replaced by one of its replicas.
Similarly, if a master node has not been able to sense the majority of the
other master nodes for the duration of the node timeout, it enters an error
state and stops accepting writes.
## Redis Cluster configuration parameters
We are about to create an example cluster deployment.
Before we continue, let's introduce the configuration parameters that Redis Cluster introduces
in the `redis.conf` file.
* **cluster-enabled `<yes/no>`**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a standalone instance as usual.
* **cluster-config-file `<filename>`**: Note that despite the name of this option, this is not a user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception.
* **cluster-node-timeout `<milliseconds>`**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries.
* **cluster-slave-validity-factor `<factor>`**: If set to zero, a replica will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the replica remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a replica, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a replica disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no replica that is able to failover it. In that case the cluster will return to being available only when the original master rejoins the cluster.
* **cluster-migration-barrier `<count>`**: Minimum number of replicas a master will remain connected with, for another replica to migrate to a master which is no longer covered by any replica. See the appropriate section about replica migration in this tutorial for more information.
* **cluster-require-full-coverage `<yes/no>`**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.
* **cluster-allow-reads-when-down `<yes/no>`**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed, either when a node can't reach a quorum of masters or when full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used for when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible.
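For reference, a node's `redis.conf` using several of these directives might look like the following (the values are illustrative, not recommendations):

```
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-slave-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage yes
cluster-allow-reads-when-down no
```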
## Create and use a Redis Cluster
To create and use a Redis Cluster, follow these steps:
* [Create a Redis Cluster](#create-a-redis-cluster)
* [Interact with the cluster](#interact-with-the-cluster)
* [Write an example app with redis-rb-cluster](#write-an-example-app-with-redis-rb-cluster)
* [Reshard the cluster](#reshard-the-cluster)
* [A more interesting example application](#a-more-interesting-example-application)
* [Test the failover](#test-the-failover)
* [Manual failover](#manual-failover)
* [Add a new node](#add-a-new-node)
* [Remove a node](#remove-a-node)
* [Replica migration](#replica-migration)
* [Upgrade nodes in a Redis Cluster](#upgrade-nodes-in-a-redis-cluster)
* [Migrate to Redis Cluster](#migrate-to-redis-cluster)
But, first, familiarize yourself with the requirements for creating a cluster.
#### Requirements to create a Redis Cluster
To create a cluster, the first thing you need is to have a few empty Redis instances running in _cluster mode_.
At minimum, set the following directives in the `redis.conf` file:
```
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```
To enable cluster mode, set the `cluster-enabled` directive to `yes`.
Every instance also contains the path of a file where the
configuration for this node is stored, which by default is `nodes.conf`.
This file is never touched by humans; it is simply generated at startup
by the Redis Cluster instances, and updated every time it is needed.
Note that the **minimal cluster** that works as expected must contain
at least three master nodes. For deployment, we strongly recommend
a six-node cluster, with three masters and three replicas.
You can test this locally by creating the following directories named
after the port number of the instance you'll run inside any given directory.
For example:
```
mkdir cluster-test
cd cluster-test
mkdir 7000 7001 7002 7003 7004 7005
```
Create a `redis.conf` file inside each of the directories, from 7000 to 7005.
As a template for your configuration file just use the small example above,
but make sure to replace the port number `7000` with the right port number
according to the directory name.
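If you'd rather not edit six files by hand, a short script can generate them from the template above. This is a sketch in Python (a shell loop with a heredoc would work just as well); `mkdir -p` semantics make it harmless if the directories already exist:

```python
# Sketch: generate the six per-instance redis.conf files from the template.
from pathlib import Path

TEMPLATE = """port {port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
"""

for port in range(7000, 7006):
    directory = Path("cluster-test") / str(port)
    directory.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
    (directory / "redis.conf").write_text(TEMPLATE.format(port=port))
```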
You can start each instance as follows, each running in a separate terminal tab:
```
cd 7000
redis-server ./redis.conf
```
You'll see from the logs that every node assigns itself a new ID:
[82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1
This ID will be used forever by this specific instance in order for the instance
to have a unique name in the context of the cluster. Every node
remembers every other node using these IDs, and not by IP or port.
IP addresses and ports may change, but the unique node identifier will never
change for the entire life of the node. We call this identifier simply **Node ID**.
#### Create a Redis Cluster
Now that we have a number of instances running, you need to create your cluster by writing some meaningful configuration to the nodes.
You can configure and execute individual instances manually or use the create-cluster script.
Let's go over how you do it manually.
To create the cluster, run:
redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
The command used here is **create**, since we want to create a new cluster.
The option `--cluster-replicas 1` means that we want a replica for every master created.
The other arguments are the list of addresses of the instances I want to use
to create the new cluster.
`redis-cli` will propose a configuration. Accept the proposed configuration by typing **yes**.
The cluster will be configured and *joined*, which means that instances will be
bootstrapped into talking with each other. Finally, if everything has gone well, you'll see a message like this:
[OK] All 16384 slots covered
This means that there is at least one master instance serving each of the
16384 available slots.
If you don't want to create a Redis Cluster by configuring and executing
individual instances manually as explained above, there is a much simpler
system (but you'll not learn the same amount of operational details).
Find the `utils/create-cluster` directory in the Redis distribution.
There is a script called `create-cluster` inside (same name as the directory
it is contained in); it's a simple bash script. To start a 6-node cluster
with 3 masters and 3 replicas, just type the following commands:
1. `create-cluster start`
2. `create-cluster create`
Reply `yes` in step 2 when the `redis-cli` utility asks you to accept
the cluster layout.
You can now interact with the cluster; the first node will start at port 30001
by default. When you are done, stop the cluster with:
3. `create-cluster stop`
Please read the `README` inside this directory for more information on how
to run the script.
#### Interact with the cluster
To connect to Redis Cluster, you'll need a cluster-aware Redis client.
See the [documentation](/docs/clients) for your client of choice to determine its cluster support.
You can also test your Redis Cluster using the `redis-cli` command line utility:
```
$ redis-cli -c -p 7000
redis 127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
redis 127.0.0.1:7002> set hello world
-> Redirected to slot [866] located at 127.0.0.1:7000
OK
redis 127.0.0.1:7000> get foo
-> Redirected to slot [12182] located at 127.0.0.1:7002
"bar"
redis 127.0.0.1:7002> get hello
-> Redirected to slot [866] located at 127.0.0.1:7000
"world"
```
If you created the cluster using the script, your nodes may listen
on different ports, starting from 30001 by default.
The `redis-cli` cluster support is very basic, so it always relies on the fact
that Redis Cluster nodes are able to redirect a client to the right node.
A serious client is able to do better than that: it caches the map between
hash slots and node addresses, so it can use the right connection to the
right node directly. The map is refreshed only when something changes in the
cluster configuration, for example after a failover or after the system
administrator has changed the cluster layout by adding or removing nodes.
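The redirections shown above are driven by the key-to-slot mapping: a cluster-aware client computes `CRC16(key) mod 16384` (honoring hash tags) to find the node serving a key. As a minimal sketch, assuming the CRC16-CCITT (XMODEM) variant described in the Cluster specification (the helper names here are illustrative, not part of redis-rb-cluster):

```ruby
# Compute the Redis Cluster hash slot for a key: CRC16(key) mod 16384.
# The CRC16 variant is CCITT/XMODEM (polynomial 0x1021, initial value 0).
def crc16(str)
  crc = 0
  str.each_byte do |b|
    crc ^= b << 8
    8.times do
      crc = (crc & 0x8000) != 0 ? ((crc << 1) ^ 0x1021) : (crc << 1)
      crc &= 0xFFFF
    end
  end
  crc
end

def hash_slot(key)
  # Hash tag rule: if the key contains a non-empty {...} substring,
  # only the content between the first '{' and the next '}' is hashed.
  if (s = key.index('{')) && (e = key.index('}', s + 1)) && e > s + 1
    key = key[(s + 1)...e]
  end
  crc16(key) % 16384
end

puts hash_slot("foo")    # 12182, matching the redirection shown above
puts hash_slot("hello")  # 866
```

Note how this reproduces the slots printed by `redis-cli` in the previous session, and how two keys sharing the same `{...}` hash tag always land on the same slot.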
#### Write an example app with redis-rb-cluster
Before moving on to show how to operate the Redis Cluster, doing things
like a failover or a resharding, we need to create an example application,
or at least be able to understand the semantics of a simple Redis Cluster
client interaction.
This way we can run an example and at the same time try to make nodes
fail, or start a resharding, to see how Redis Cluster behaves under
real-world conditions. It is not very helpful to see what happens while nobody
is writing to the cluster.
This section explains some basic usage of
[redis-rb-cluster](https://github.com/antirez/redis-rb-cluster) showing two
examples.
The first is the following, and is the
[`example.rb`](https://github.com/antirez/redis-rb-cluster/blob/master/example.rb)
file inside the redis-rb-cluster distribution:
```
1 require './cluster'
2
3 if ARGV.length != 2
4 startup_nodes = [
5 {:host => "127.0.0.1", :port => 7000},
6 {:host => "127.0.0.1", :port => 7001}
7 ]
8 else
9 startup_nodes = [
10 {:host => ARGV[0], :port => ARGV[1].to_i}
11 ]
12 end
13
14 rc = RedisCluster.new(startup_nodes,32,:timeout => 0.1)
15
16 last = false
17
18 while not last
19 begin
20 last = rc.get("__last__")
21 last = 0 if !last
22 rescue => e
23 puts "error #{e.to_s}"
24 sleep 1
25 end
26 end
27
28 ((last.to_i+1)..1000000000).each{|x|
29 begin
30 rc.set("foo#{x}",x)
31 puts rc.get("foo#{x}")
32 rc.set("__last__",x)
33 rescue => e
34 puts "error #{e.to_s}"
35 end
36 sleep 0.1
37 }
```
The application does a very simple thing: it sets keys in the form `foo<number>` to `number`, one after the other. So if you run the program the result is the
following stream of commands:
* SET foo0 0
* SET foo1 1
* SET foo2 2
* And so forth...
The program looks more complex than it usually would, as it is designed to
show errors on the screen instead of exiting with an exception, so every
operation performed against the cluster is wrapped in `begin` `rescue` blocks.
**Line 14** is the first interesting line in the program. It creates the
Redis Cluster object, using as arguments a list of *startup nodes*, the maximum
number of connections this object is allowed to take against different nodes,
and finally the timeout after which a given operation is considered failed.
The startup nodes don't need to be all the nodes of the cluster. The important
thing is that at least one node is reachable. Also note that redis-rb-cluster
updates this list of startup nodes as soon as it is able to connect with the
first node. You should expect this behavior from any other serious client.
Now that we have the Redis Cluster object instance stored in the **rc** variable,
we are ready to use the object as if it were a normal Redis object instance.
This is exactly what happens in **line 18 to 26**: when we restart the example
we don't want to start again with `foo0`, so we store the counter inside
Redis itself. The code above is designed to read this counter, or if the
counter does not exist, to assign it the value of zero.
However note how it is a while loop, as we want to try again and again even
if the cluster is down and is returning errors. Normal applications don't need
to be so careful.
**Lines between 28 and 37** start the main loop where the keys are set or
an error is displayed.
Note the `sleep` call at the end of the loop. In your tests you can remove
the sleep if you want to write to the cluster as fast as possible (relative
to the fact that this is a busy loop without real parallelism, of course, so
you'll get around 10k ops/second under the best conditions).
Normally writes are slowed down so that the example application is easier
for humans to follow.
Starting the application produces the following output:
```
ruby ./example.rb
1
2
3
4
5
6
7
8
9
^C (I stopped the program here)
```
This is not a very interesting program, and we'll use a better one in a moment,
but we can already see what happens during a resharding while the program
is running.
#### Reshard the cluster
Now we are ready to try a cluster resharding. To do this, please
keep the example.rb program running, so that you can see whether there is some
impact on the running program. Also, you may want to comment out the `sleep`
call to generate some more serious write load during the resharding.
Resharding basically means to move hash slots from a set of nodes to another
set of nodes.
Like cluster creation, it is accomplished using the redis-cli utility.
To start a resharding, just type:
redis-cli --cluster reshard 127.0.0.1:7000
You only need to specify a single node; redis-cli will find the other nodes
automatically.
Currently redis-cli is only able to reshard with administrator support:
you can't just tell it to move 5% of the slots from one node to another
(though this would be pretty trivial to implement). So it starts by asking
questions. The first is how much of a resharding you want to do:
How many slots do you want to move (from 1 to 16384)?
We can try to reshard 1000 hash slots, which should already contain a
non-trivial number of keys if the example is still running without the sleep
call.
Then redis-cli needs to know the target of the resharding, that is,
the node that will receive the hash slots.
I'll use the first master node, that is, 127.0.0.1:7000, but I need
to specify the Node ID of the instance. This was already printed in a
list by redis-cli, but I can always find the ID of a node with the following
command if I need to:
```
$ redis-cli -p 7000 cluster nodes | grep myself
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460
```
Ok, so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.
Now you'll be asked which nodes you want to take those hash slots from.
I'll just type `all` in order to take a few hash slots from all the
other master nodes.
After the final confirmation you'll see a message for every slot that
redis-cli is going to move from one node to another, and a dot will be printed
for every actual key moved from one side to the other.
While the resharding is in progress you should be able to see your
example program running unaffected. You can stop and restart it multiple times
during the resharding if you want.
At the end of the resharding, you can test the health of the cluster with
the following command:
redis-cli --cluster check 127.0.0.1:7000
All the slots will be covered as usual, but this time the master at
127.0.0.1:7000 will have more hash slots, something around 6461.
Resharding can be performed automatically without the need to manually
enter the parameters in an interactive way. This is possible using a command
line like the following:
redis-cli --cluster reshard <host>:<port> --cluster-from <node-id> --cluster-to <node-id> --cluster-slots <number of slots> --cluster-yes
This allows resharding to be automated if you are likely to reshard often.
However, currently there is no way for `redis-cli` to automatically
rebalance the cluster by checking the distribution of keys across the cluster
nodes and intelligently moving slots as needed. This feature will be added
in the future.
The `--cluster-yes` option instructs the cluster manager to automatically answer
"yes" to the command's prompts, allowing it to run in a non-interactive mode.
Note that this option can also be activated by setting the
`REDISCLI_CLUSTER_YES` environment variable.
#### A more interesting example application
The example application we wrote earlier is not very good.
It writes to the cluster in a simple way, without even checking whether what
was written is the right thing.
From our point of view, the cluster receiving the writes could just always
set the key `foo` to `42` on every operation, and we would not notice at
all.
So in the `redis-rb-cluster` repository, there is a more interesting application
that is called `consistency-test.rb`. It uses a set of counters, by default 1000, and sends `INCR` commands in order to increment the counters.
However instead of just writing, the application does two additional things:
* When a counter is updated using `INCR`, the application remembers the write.
* It also reads a random counter before every write, and checks whether the value is what we expect it to be, comparing it with the value it has in memory.
What this means is that this application is a simple **consistency checker**,
and is able to tell you whether the cluster lost some writes, or whether it
accepted a write for which we did not receive an acknowledgment. In the first
case we'll see a counter with a value smaller than the one we remember, while
in the second case the value will be greater.
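The bookkeeping behind such a checker can be sketched in a few lines of Ruby. This is a simplified model, not the actual `consistency-test.rb`: a plain Hash stands in for the cluster, and the class and method names are hypothetical.

```ruby
# Minimal sketch of the consistency-checking idea: keep an in-memory
# copy of every counter we incremented, and compare it with what the
# "cluster" (stubbed here as a plain Hash) returns on later reads.
class ConsistencyChecker
  def initialize(store)
    @store = store          # stands in for the Redis Cluster connection
    @expected = Hash.new(0) # what we believe each counter should hold
    @lost = 0
  end

  def incr(key)
    @store[key] = @store.fetch(key, 0) + 1  # the acknowledged write...
    @expected[key] += 1                     # ...is also remembered locally
  end

  def check(key)
    actual = @store.fetch(key, 0)
    # A value smaller than expected means the cluster lost writes.
    @lost += @expected[key] - actual if actual < @expected[key]
    @expected[key] = actual
  end

  attr_reader :lost
end

store = {}
checker = ConsistencyChecker.new(store)
3.times { checker.incr("key_217") }
store["key_217"] = 0        # simulate resetting the counter manually
checker.check("key_217")
puts "#{checker.lost} lost" # prints "3 lost"
```

The "N lost" counter in the real application's output is accumulated in exactly this way; a value greater than expected would instead indicate a write the checker thought had failed but was actually applied.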
Running the consistency-test application produces a line of output every
second:
```
$ ruby consistency-test.rb
925 R (0 err) | 925 W (0 err) |
5030 R (0 err) | 5030 W (0 err) |
9261 R (0 err) | 9261 W (0 err) |
13517 R (0 err) | 13517 W (0 err) |
17780 R (0 err) | 17780 W (0 err) |
22025 R (0 err) | 22025 W (0 err) |
25818 R (0 err) | 25818 W (0 err) |
```
The line shows the number of **R**eads and **W**rites performed, and the
number of errors (queries not accepted because the system was not available).
If some inconsistency is found, new lines are added to the output.
This is what happens, for example, if I reset a counter manually while
the program is running:
```
$ redis-cli -h 127.0.0.1 -p 7000 set key_217 0
OK
(in the other tab I see...)
94774 R (0 err) | 94774 W (0 err) |
98821 R (0 err) | 98821 W (0 err) |
102886 R (0 err) | 102886 W (0 err) | 114 lost |
107046 R (0 err) | 107046 W (0 err) | 114 lost |
```
When I set the counter to 0 the real value was 114, so the program reports
114 lost writes (`INCR` commands that are not remembered by the cluster).
This program is much more interesting as a test case, so we'll use it
to test the Redis Cluster failover.
#### Test the failover
To trigger the failover, the simplest thing we can do (which is also
the semantically simplest failure that can occur in a distributed system)
is to crash a single process, in our case a single master.
During this test, you should keep a tab open with the consistency test
application running.
We can identify a master and crash it with the following command:
```
$ redis-cli -p 7000 cluster nodes | grep master
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
```
Ok, so 7000, 7001, and 7002 are masters. Let's crash node 7002 with the
**DEBUG SEGFAULT** command:
```
$ redis-cli -p 7002 debug segfault
Error: Server closed the connection
```
Now we can look at the output of the consistency test to see what it reported.
```
18849 R (0 err) | 18849 W (0 err) |
23151 R (0 err) | 23151 W (0 err) |
27302 R (0 err) | 27302 W (0 err) |
... many error warnings here ...
29659 R (578 err) | 29660 W (577 err) |
33749 R (578 err) | 33750 W (577 err) |
37918 R (578 err) | 37919 W (577 err) |
42077 R (578 err) | 42078 W (577 err) |
```
As you can see, during the failover the system was not able to accept 578 reads and 577 writes; however, no inconsistency was created in the database. This may
sound unexpected, as in the first part of this tutorial we stated that Redis
Cluster can lose writes during the failover because it uses asynchronous
replication. What we did not say is that this is not very likely to happen,
because Redis sends the reply to the client, and the commands to replicate
to the replicas, at about the same time, so there is a very small window in
which to lose data. However, the fact that it is hard to trigger does not mean
that it is impossible, so this does not change the consistency guarantees
provided by Redis Cluster.
We can now check what is the cluster setup after the failover (note that
in the meantime I restarted the crashed instance so that it rejoins the
cluster as a replica):
```
$ redis-cli -p 7000 cluster nodes
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected
```
Now the masters are running on ports 7000, 7001 and 7005. What was previously
a master, that is the Redis instance running on port 7002, is now a replica of
7005.
The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:
* Node ID
* ip:port
* flags: master, replica, myself, fail, ...
* if it is a replica, the Node ID of the master
* Time of the last pending PING still waiting for a reply.
* Time of the last PONG received.
* Configuration epoch for this node (see the Cluster specification).
* Status of the link to this node.
* Slots served...
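As an illustration, one line of that output can be split into labeled fields with a short Ruby snippet (the field names below are my own, chosen to match the list above; this is not an official API):

```ruby
# Parse one line of CLUSTER NODES output into labeled fields.
# Field order: node id, address, flags, master id ("-" for masters),
# ping-sent, pong-received, config epoch, link state, then served slots.
def parse_cluster_node(line)
  id, addr, flags, master, ping, pong, epoch, state, *slots = line.split
  {
    id: id,
    addr: addr,
    flags: flags.split(","),
    master_id: master == "-" ? nil : master,
    ping_sent: ping.to_i,
    pong_received: pong.to_i,
    epoch: epoch.to_i,
    link_state: state,
    slots: slots
  }
end

line = "3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 " \
       "slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected"
node = parse_cluster_node(line)
puts node[:flags].inspect  # ["slave"]
puts node[:master_id]      # 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0
```

Master lines carry `-` in the master-id field and list their served slot ranges (possibly several, like `0-5959 10922-11422`) at the end.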
#### Manual failover
Sometimes it is useful to force a failover without actually causing any problem
on a master. For example, to upgrade the Redis process of one of the
master nodes it is a good idea to fail it over, turning it into a replica
with minimal impact on availability.
Manual failovers are supported by Redis Cluster using the `CLUSTER FAILOVER`
command, which must be executed on one of the replicas of the master you want
to fail over.
Manual failovers are special and are safer compared to failovers resulting from
actual master failures. They occur in a way that avoids data loss in the
process, by switching clients from the original master to the new master only
when the system is sure that the new master processed all the replication stream
from the old one.
This is what you see in the replica log when you perform a manual failover:
```
# Manual failover user request accepted.
# Received replication offset for paused master manual failover: 347540
# All master replication stream processed, manual failover can start.
# Start of election delayed for 0 milliseconds (rank #0, offset 347540).
# Starting a failover election for epoch 7545.
# Failover election won: I'm the new master.
```
Basically, clients connected to the master we are failing over are stopped.
At the same time the master sends its replication offset to the replica, which
waits until it reaches that offset on its side. When the replication offset is
reached, the failover starts, and the old master is informed about the
configuration switch. When the clients are unblocked on the old master, they
are redirected to the new master.
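The sequence can be reduced to a toy model: promotion happens only once the replica's replication offset has caught up with the paused master's offset, which is why no acknowledged write can be left behind. A hypothetical Ruby sketch (illustrative names and numbers, not real cluster code):

```ruby
# Toy timeline of a manual failover: the master pauses writes and
# publishes its replication offset; the replica applies the remaining
# replication stream until it catches up, and only then is promoted.
def manual_failover(master_offset, replica_offset, pending_stream)
  until replica_offset >= master_offset
    replica_offset += pending_stream.shift  # apply a pending chunk
  end
  :replica_promoted  # only now do roles switch and clients get redirected
end

# Offsets chosen to mirror the log above: the replica is 40 bytes behind.
puts manual_failover(347540, 347500, [20, 20])
```

The key property is the `until` loop: the election cannot start before the replica has processed the whole replication stream from the paused master.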
To promote a replica to master, it must first be known as a replica by a majority of the masters in the cluster.
Otherwise, it cannot win the failover election.
If the replica has just been added to the cluster (see [Add a new node as a replica](#add-a-new-node-as-a-replica)), you may need to wait a while before sending the `CLUSTER FAILOVER` command, to make sure the masters in cluster are aware of the new replica.
#### Add a new node
Adding a new node is basically the process of adding an empty node and then
either moving some data into it, in case it is a new master, or telling it to
set up as a replica of a known node, in case it is a replica.
We'll show both, starting with the addition of a new master instance.
In both cases the first step to perform is **adding an empty node**.
This is as simple as starting a new node on port 7006 (we already used
ports 7000 to 7005 for our existing 6 nodes) with the same configuration
used for the other nodes, except for the port number. To conform with the
setup we used for the previous nodes:
* Create a new tab in your terminal application.
* Enter the `cluster-test` directory.
* Create a directory named `7006`.
* Create a redis.conf file inside, similar to the one used for the other nodes but using 7006 as port number.
* Finally start the server with `../redis-server ./redis.conf`
At this point the server should be running.
Now we can use **redis-cli** as usual in order to add the node to
the existing cluster.
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000
As you can see, I used the **add-node** command, specifying the address of the
new node as the first argument and the address of a random existing node in the
cluster as the second argument.
In practical terms redis-cli here did very little to help us: it just
sent a `CLUSTER MEET` message to the node, something that is also possible
to accomplish manually. However, redis-cli also checks the state of the
cluster before operating, so it is a good idea to always perform cluster
operations via redis-cli, even when you know how the internals work.
Now we can connect to the new node to see if it really joined the cluster:
```
redis 127.0.0.1:7006> cluster nodes
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383
```
Note that since this node is already connected to the cluster it is already
able to redirect client queries correctly and is generally speaking part of
the cluster. However it has two peculiarities compared to the other masters:
* It holds no data as it has no assigned hash slots.
* Because it is a master without assigned slots, it does not participate in the election process when a replica wants to become a master.
Now it is possible to assign hash slots to this node using the resharding
feature of `redis-cli`.
There is no need to show this here, as we already did it in a previous
section; there is no difference, it is simply a resharding with the empty
node as its target.
##### Add a new node as a replica
Adding a new replica can be performed in two ways. The obvious one is to
use redis-cli again, but with the --cluster-slave option, like this:
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave
Note that the command line here is exactly like the one we used to add
a new master, so we are not specifying to which master we want to add
the replica. In this case, what happens is that redis-cli will add the new
node as a replica of a random master among the masters with the fewest replicas.
However you can specify exactly what master you want to target with your
new replica with the following command line:
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave --cluster-master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
This way we assign the new replica to a specific master.
A more manual way to add a replica to a specific master is to add the new
node as an empty master, and then turn it into a replica using the
`CLUSTER REPLICATE` command. This also works if the node was added as a replica
but you want to make it a replica of a different master.
For example, in order to add a replica for the node 127.0.0.1:7005, which is
currently serving hash slots in the range 11423-16383 and has Node ID
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e, all I need to do is connect
to the new node (already added as an empty master) and send the command:
redis 127.0.0.1:7006> cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
That's it. Now we have a new replica for this set of hash slots, and all
the other nodes in the cluster already know (after a few seconds needed to
update their config). We can verify with the following command:
```
$ redis-cli -p 7000 cluster nodes | grep slave | grep 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
f093c80dde814da99c5cf72a7dd01590792b783b 127.0.0.1:7006 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617702 3 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected
```
The node 3c3a0c... now has two replicas, running on ports 7002 (the existing one) and 7006 (the new one).
#### Remove a node
To remove a replica node just use the `del-node` command of redis-cli:
redis-cli --cluster del-node 127.0.0.1:7000 `<node-id>`
The first argument is just a random node in the cluster, the second argument
is the ID of the node you want to remove.
You can remove a master node in the same way as well, **however in order to
remove a master node it must be empty**. If the master is not empty, you need
to reshard its data away to all the other master nodes beforehand.
An alternative way to remove a master node is to perform a manual failover
onto one of its replicas and remove the node after it has turned into a replica
of the new master. Obviously, this does not help when you want to reduce the
actual number of masters in your cluster; in that case, a resharding is needed.
There is a special scenario where you want to remove a failed node.
You should not use the `del-node` command because it tries to connect to all nodes and you will encounter a "connection refused" error.
Instead, you can use the `call` command:
redis-cli --cluster call 127.0.0.1:7000 cluster forget `<node-id>`
This command will execute the `CLUSTER FORGET` command on every node.
#### Replica migration
In Redis Cluster, you can reconfigure a replica to replicate with a
different master at any time just using this command:
CLUSTER REPLICATE <master-node-id>
However, there is a special scenario where you want replicas to move from one
master to another automatically, without the help of the system administrator.
This automatic reconfiguration of replicas is called *replica migration*, and
it is able to improve the reliability of a Redis Cluster.
You can read the details of replica migration in the [Redis Cluster Specification](/topics/cluster-spec); here we'll only provide some information about the
general idea and what you should do in order to benefit from it.
The reason why you may want to let your cluster replicas move from one master
to another under certain conditions is that, usually, the Redis Cluster is as
resistant to failures as the number of replicas attached to a given master.
For example, a cluster where every master has a single replica can't continue
operations if the master and its replica fail at the same time, simply because
there is no other instance that has a copy of the hash slots the master was
serving. However, while net-splits are likely to isolate a number of nodes
at the same time, many other kinds of failures, like hardware or software
failures local to a single node, are a very notable class of failures that are
unlikely to happen at the same time. So it is possible that in a cluster where
every master has a replica, a replica is killed at 4am and its master is killed
at 6am. This would still result in a cluster that can no longer operate.
To improve the reliability of the system we have the option to add additional
replicas to every master, but this is expensive. Replica migration allows you
to add more replicas to just a few masters. Say you have 10 masters with 1
replica each, for a total of 20 instances. You then add, for example, 3 more
instances as replicas of some of your masters, so certain masters will have
more than a single replica.
With replica migration, what happens is that if a master is left without
replicas, a replica from a master that has multiple replicas will migrate to
the *orphaned* master. So after your replica goes down at 4am as in the example
above, another replica will take its place, and when the master fails as well
at 6am, there is still a replica that can be elected so that the cluster can
continue to operate.
In short, this is what you should know about replica migration:
* The cluster will try to migrate a replica from the master that has the greatest number of replicas at a given moment.
* To benefit from replica migration you just have to add a few more replicas to a single master in your cluster; it does not matter which master.
* There is a configuration parameter that controls the replica migration feature that is called `cluster-migration-barrier`: you can read more about it in the example `redis.conf` file provided with Redis Cluster.
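The first rule can be illustrated with a toy Ruby model. This is only a sketch of the selection idea, with hypothetical names; the real logic runs inside the cluster itself and honors `cluster-migration-barrier`:

```ruby
# Toy model of the replica migration rule: when a master becomes
# orphaned (no working replicas), a replica is taken from the master
# that currently has the greatest number of replicas. The donor must
# keep at least one replica, mimicking a migration barrier of 1.
def migrate_replica(replicas_by_master, orphan)
  donor, donor_replicas = replicas_by_master.max_by { |_, r| r.length }
  return nil if donor.nil? || donor_replicas.length < 2  # nothing to spare
  moved = donor_replicas.pop
  (replicas_by_master[orphan] ||= []) << moved
  moved
end

cluster = {
  "A" => ["A1"],
  "B" => ["B1", "B2", "B3"],  # extra replicas added for reliability
  "C" => [],                  # C just lost its only replica
}
puts migrate_replica(cluster, "C")  # B3 migrates to the orphaned master
puts cluster["C"].inspect           # ["B3"]
```

Note that "A", with a single replica, is never chosen as the donor: draining it would just create a new orphan, which is exactly what the barrier prevents.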
#### Upgrade nodes in a Redis Cluster
Upgrading replica nodes is easy since you just need to stop the node and restart
it with an updated version of Redis. If there are clients scaling reads using
replica nodes, they should be able to reconnect to a different replica if a given
one is not available.
Upgrading masters is a bit more complex, and the suggested procedure is:
1. Use `CLUSTER FAILOVER` to trigger a manual failover of the master to one of its replicas.
(See the [Manual failover](#manual-failover) in this topic.)
2. Wait for the master to turn into a replica.
3. Finally upgrade the node as you do for replicas.
4. If you want the master to be the node you just upgraded, trigger a new manual failover in order to turn back the upgraded node into a master.
Following this procedure you should upgrade one node after the other until
all the nodes are upgraded.
#### Migrate to Redis Cluster
Users willing to migrate to Redis Cluster may have just a single master, or
may already be using a preexisting sharding setup, where keys
are split among N nodes, using some in-house algorithm or a sharding algorithm
implemented by their client library or Redis proxy.
In both cases it is possible to migrate to Redis Cluster easily; however,
the most important detail is whether multiple-key operations are used
by the application, and how. There are three different cases:
1. Multiple keys operations, or transactions, or Lua scripts involving multiple keys, are not used. Keys are accessed independently (even if accessed via transactions or Lua scripts grouping multiple commands, about the same key, together).
2. Multiple keys operations, or transactions, or Lua scripts involving multiple keys are used but only with keys having the same **hash tag**, which means that the keys used together all have a `{...}` sub-string that happens to be identical. For example the following multiple keys operation is defined in the context of the same hash tag: `SUNION {user:1000}.foo {user:1000}.bar`.
3. Multiple keys operations, or transactions, or Lua scripts involving multiple keys are used with key names not having an explicit, or the same, hash tag.
The third case is not handled by Redis Cluster: the application needs to
be modified so as not to use multi-key operations, or to use them only in
the context of the same hash tag.
Cases 1 and 2 are covered, and since they are handled in the same way,
no distinction will be made in the documentation.
Assuming you have your preexisting data set split into N masters, where
N=1 if you have no preexisting sharding, the following steps are needed
in order to migrate your data set to Redis Cluster:
1. Stop your clients. No automatic live-migration to Redis Cluster is currently possible. You may be able to do it orchestrating a live migration in the context of your application / environment.
2. Generate an append only file for all of your N masters using the `BGREWRITEAOF` command, and waiting for the AOF file to be completely generated.
3. Save your AOF files from aof-1 to aof-N somewhere. At this point you can stop your old instances if you wish (this is useful since in non-virtualized deployments you often need to reuse the same computers).
4. Create a Redis Cluster composed of N masters and zero replicas. You'll add replicas later. Make sure all your nodes are using the append only file for persistence.
5. Stop all the cluster nodes, substitute their append only file with your pre-existing append only files, aof-1 for the first node, aof-2 for the second node, up to aof-N.
6. Restart your Redis Cluster nodes with the new AOF files. They'll complain that there are keys that should not be there according to their configuration.
7. Use the `redis-cli --cluster fix` command in order to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative for.
8. Use `redis-cli --cluster check` at the end to make sure your cluster is ok.
9. Restart your clients modified to use a Redis Cluster aware client library.
There is an alternative way to import data from external instances to a Redis
Cluster, which is to use the `redis-cli --cluster import` command.
The command moves all the keys of a running instance (deleting the keys from
the source instance) to the specified pre-existing Redis Cluster. However,
note that if you use a Redis 2.8 instance as the source, the operation may be
slow since 2.8 does not implement migrate connection caching, so you may want
to restart your source instance with a Redis 3.x version before performing
the operation.
Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.
## Learn more
* [Redis Cluster specification](/topics/cluster-spec)
* [Linear Scaling with Redis Enterprise](https://redis.com/redis-enterprise/technology/linear-scaling-redis-enterprise/)
* [Docker documentation](https://docs.docker.com/engine/userguide/networking/dockernetworks/)
| redis | title Scale with Redis Cluster linkTitle Scale with Redis Cluster weight 6 description Horizontal scaling with Redis Cluster aliases topics cluster tutorial topics partitioning docs manual scaling docs manual scaling md Redis scales horizontally with a deployment topology called Redis Cluster This topic will teach you how to set up test and operate Redis Cluster in production You will learn about the availability and consistency characteristics of Redis Cluster from the end user s point of view If you plan to run a production Redis Cluster deployment or want to understand better how Redis Cluster works internally consult the Redis Cluster specification topics cluster spec To learn how Redis Enterprise handles scaling see Linear Scaling with Redis Enterprise https redis com redis enterprise technology linear scaling redis enterprise Redis Cluster 101 Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes Redis Cluster also provides some degree of availability during partitions mdash in practical terms the ability to continue operations when some nodes fail or are unable to communicate However the cluster will become unavailable in the event of larger failures for example when the majority of masters are unavailable So with Redis Cluster you get the ability to Automatically split your dataset among multiple nodes Continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster Redis Cluster TCP ports Every Redis Cluster node requires two open TCP connections a Redis TCP port used to serve clients e g 6379 and second port known as the cluster bus port By default the cluster bus port is set by adding 10000 to the data port e g 16379 however you can override this in the cluster port configuration Cluster bus is a node to node communication channel that uses a binary protocol which is more suited to exchanging information between 
nodes due to the little bandwidth and processing time used. Nodes use the cluster bus for failure detection, configuration updates, failover authorization, and so forth.

Clients should never try to communicate with the cluster bus port, but rather use the Redis command port. However, make sure you open both ports in your firewall, otherwise Redis cluster nodes won't be able to communicate.

For a Redis Cluster to work properly you need, for each node:

1. The client communication port (usually 6379) used to communicate with clients and be open to all the clients that need to reach the cluster, plus all the other cluster nodes that use the client port for key migrations.
2. The cluster bus port must be reachable from all the other cluster nodes.

If you don't open both TCP ports, your cluster will not work as expected.

### Redis Cluster and Docker

Currently, Redis Cluster does not support NATted environments and in general environments where IP addresses or TCP ports are remapped.

Docker uses a technique called _port mapping_: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using. This is useful for running multiple containers using the same ports, at the same time, in the same server.

To make Docker compatible with Redis Cluster, you need to use Docker's _host networking mode_. Please see the `--net=host` option in the [Docker documentation](https://docs.docker.com/engine/userguide/networking/dockernetworks/) for more information.

### Redis Cluster data sharding

Redis Cluster does not use consistent hashing, but a different form of sharding where every key is conceptually part of what we call a _hash slot_.

There are 16384 hash slots in Redis Cluster, and to compute the hash slot for a given key, we simply take the CRC16 of the key modulo 16384.

Every node in a Redis Cluster is responsible for a subset of the hash slots, so, for example, you may have a cluster with 3 nodes, where:

* Node A contains hash slots from 0 to 5500.
* Node B contains hash slots from 5501 to
11000.
* Node C contains hash slots from 11001 to 16383.

This makes it easy to add and remove cluster nodes. For example, if I want to add a new node D, I need to move some hash slots from nodes A, B, C to D. Similarly, if I want to remove node A from the cluster, I can just move the hash slots served by A to B and C. Once node A is empty, I can remove it from the cluster completely.

Moving hash slots from a node to another does not require stopping any operations; therefore, adding and removing nodes, or changing the percentage of hash slots held by a node, requires no downtime.

Redis Cluster supports multiple key operations as long as all of the keys involved in a single command execution (or whole transaction, or Lua script execution) belong to the same hash slot. The user can force multiple keys to be part of the same hash slot by using a feature called _hash tags_.

Hash tags are documented in the Redis Cluster specification, but the gist is that if there is a substring between {} brackets in a key, only what is inside the string is hashed. For example, the keys `user:{123}:profile` and `user:{123}:account` are guaranteed to be in the same hash slot because they share the same hash tag. As a result, you can operate on these two keys in the same multi-key operation.

### Redis Cluster master-replica model

To remain available when a subset of master nodes are failing or are not able to communicate with the majority of nodes, Redis Cluster uses a master-replica model where every hash slot has from 1 (the master itself) to N replicas (N-1 additional replica nodes).

In our example cluster with nodes A, B, C, if node B fails the cluster is not able to continue, since we no longer have a way to serve hash slots in the range 5501-11000.

However, when the cluster is created (or at a later time), we add a replica node to every master, so that the final cluster is composed of A, B, C that are master nodes, and A1, B1, C1 that are replica nodes. This way, the system can continue if node B fails.

Node B1 replicates B, and if B fails, the cluster will
promote node B1 as the new master and will continue to operate correctly.

However, note that if nodes B and B1 fail at the same time, Redis Cluster will not be able to continue to operate.

### Redis Cluster consistency guarantees

Redis Cluster does not guarantee _strong consistency_. In practical terms, this means that under certain conditions it is possible that Redis Cluster will lose writes that were acknowledged by the system to the client.

The first reason why Redis Cluster can lose writes is because it uses asynchronous replication. This means that during writes the following happens:

* Your client writes to the master B.
* The master B replies OK to your client.
* The master B propagates the write to its replicas B1, B2 and B3.

As you can see, B does not wait for an acknowledgement from B1, B2, B3 before replying to the client, since this would be a prohibitive latency penalty for Redis. So, if your client writes something, B acknowledges the write, but crashes before being able to send the write to its replicas, one of the replicas (that did not receive the write) can be promoted to master, losing the write forever.

This is very similar to what happens with most databases that are configured to flush data to disk every second, so it is a scenario you are already able to reason about because of past experiences with traditional database systems not involving distributed systems. Similarly, you can improve consistency by forcing the database to flush data to disk before replying to the client, but this usually results in prohibitively low performance. That would be the equivalent of synchronous replication in the case of Redis Cluster.

Basically, there is a trade-off to be made between performance and consistency.

Redis Cluster has support for synchronous writes when absolutely needed, implemented via the `WAIT` command. This makes losing writes a lot less likely. However, note that Redis Cluster does not implement strong consistency even when synchronous replication is used: it is always possible, under more
complex failure scenarios, that a replica that was not able to receive the write will be elected as master.

There is another notable scenario where Redis Cluster will lose writes, that happens during a network partition where a client is isolated with a minority of instances including at least a master.

Take as an example our 6 nodes cluster composed of A, B, C, A1, B1, C1, with 3 masters and 3 replicas. There is also a client, that we will call Z1.

After a partition occurs, it is possible that in one side of the partition we have A, C, A1, B1, C1, and in the other side we have B and Z1.

Z1 is still able to write to B, which will accept its writes. If the partition heals in a very short time, the cluster will continue normally. However, if the partition lasts enough time for B1 to be promoted to master on the majority side of the partition, the writes that Z1 has sent to B in the meantime will be lost.

There is a maximum window to the amount of writes Z1 will be able to send to B: if enough time has elapsed for the majority side of the partition to elect a replica as master, every master node in the minority side will have stopped accepting writes.

This amount of time is a very important configuration directive of Redis Cluster, and is called the _node timeout_.

After node timeout has elapsed, a master node is considered to be failing, and can be replaced by one of its replicas. Similarly, after node timeout has elapsed without a master node to be able to sense the majority of the other master nodes, it enters an error state and stops accepting writes.

## Redis Cluster configuration parameters

We are about to create an example cluster deployment. Before we continue, let's introduce the configuration parameters that Redis Cluster introduces in the `redis.conf` file.

* **cluster-enabled `<yes/no>`**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a standalone instance as usual.
* **cluster-config-file `<filename>`**: Note that despite the name of this option, this is not a user editable
configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception.
* **cluster-node-timeout `<milliseconds>`**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries.
* **cluster-slave-validity-factor `<factor>`**: If set to zero, a replica will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the replica remained disconnected. If the value is positive, a maximum disconnection time is calculated as the _node timeout_ value multiplied by the factor provided with this option, and if the node is a replica, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a replica disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no replica that is able to failover it. In that case, the cluster will return to being available only when the original master rejoins the cluster.
* **cluster-migration-barrier `<count>`**: Minimum number of replicas a master will remain connected with, for another replica to migrate to a master which is no longer covered by
any replica. See the appropriate section about replica migration in this tutorial for more information.
* **cluster-require-full-coverage `<yes/no>`**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.
* **cluster-allow-reads-when-down `<yes/no>`**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed, either when a node can't reach a quorum of masters or when full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used for when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible.

## Create and use a Redis Cluster

To create and use a Redis Cluster, follow these steps:

1. [Create a Redis Cluster](#create-a-redis-cluster)
2. [Interact with the cluster](#interact-with-the-cluster)
3. [Write an example app with redis-rb-cluster](#write-an-example-app-with-redis-rb-cluster)
4. [Reshard the cluster](#reshard-the-cluster)
5. [A more interesting example application](#a-more-interesting-example-application)
6. [Test the failover](#test-the-failover)
7. [Manual failover](#manual-failover)
8. [Add a new node](#add-a-new-node)
9. [Remove a node](#remove-a-node)
10. [Replica migration](#replica-migration)
11. [Upgrade nodes in a Redis Cluster](#upgrade-nodes-in-a-redis-cluster)
12. [Migrate to Redis Cluster](#migrate-to-redis-cluster)

But first, familiarize yourself with the requirements for creating a cluster.

### Requirements to create a Redis Cluster

To create a cluster, the first thing you need is to have a few empty
Redis instances running in _cluster mode_.

At minimum, set the following directives in the `redis.conf` file:

```
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```

To enable cluster mode, set the `cluster-enabled` directive to `yes`. Every instance also contains the path of a file where the configuration for this node is stored, which by default is `nodes.conf`. This file is never touched by humans; it is simply generated at startup by the Redis Cluster instances, and updated every time it is needed.

Note that the _minimal cluster_ that works as expected must contain at least three master nodes. For deployment, we strongly recommend a six-node cluster, with three masters and three replicas.

You can test this locally by creating the following directories named after the port number of the instance you'll run inside any given directory.

For example:

```
mkdir cluster-test
cd cluster-test
mkdir 7000 7001 7002 7003 7004 7005
```

Create a `redis.conf` file inside each of the directories, from 7000 to 7005. As a template for your configuration file, just use the small example above, but make sure to replace the port number `7000` with the right port number according to the directory name.

You can start each instance as follows, each running in a separate terminal tab:

```
cd 7000
redis-server ./redis.conf
```

You'll see from the logs that every node assigns itself a new ID:

```
[82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1
```

This ID will be used forever by this specific instance in order for the instance to have a unique name in the context of the cluster. Every node remembers every other node using these IDs, and not by IP or port. IP addresses and ports may change, but the unique node identifier will never change for all the life of the node. We call this identifier simply **Node ID**.

### Create a Redis Cluster

Now that we have a number of instances running, you need to create your cluster by writing some meaningful configuration to the nodes.
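The per-directory setup described above can also be scripted. The following is a minimal sketch, not part of the official tooling: it only generates the six directories and their `redis.conf` files from the template shown earlier, assuming you run it inside an empty working directory (e.g. `cluster-test/`); you still start each `redis-server` yourself.

```shell
# Generate one directory and one redis.conf per node, mirroring the
# manual steps above (ports 7000-7005, directives from this tutorial).
for port in 7000 7001 7002 7003 7004 7005; do
  mkdir -p "$port"
  cat > "$port/redis.conf" <<EOF
port $port
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
done
```

Each instance is then started as before, e.g. `cd 7000 && redis-server ./redis.conf`, one terminal tab per node.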
You can configure and execute individual instances manually or use the `create-cluster` script. Let's go over how you do it manually.

To create the cluster, run:

```
redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
```

The command used here is **create**, since we want to create a new cluster. The option `--cluster-replicas 1` means that we want a replica for every master created.

The other arguments are the list of addresses of the instances I want to use to create the new cluster.

`redis-cli` will propose a configuration. Accept the proposed configuration by typing **yes**. The cluster will be configured and _joined_, which means that instances will be bootstrapped into talking with each other. Finally, if everything has gone well, you'll see a message like this:

```
[OK] All 16384 slots covered
```

This means that there is at least one master instance serving each of the 16384 available slots.

If you don't want to create a Redis Cluster by configuring and executing individual instances manually as explained above, there is a much simpler system (but you'll not learn the same amount of operational details).

Find the `utils/create-cluster` directory in the Redis distribution. There is a script called `create-cluster` inside (same name as the directory it is contained into), it's a simple bash script. In order to start a 6 nodes cluster with 3 masters and 3 replicas, just type the following commands:

1. `create-cluster start`
2. `create-cluster create`

Reply to `yes` in step 2 when the `redis-cli` utility wants you to accept the cluster layout.

You can now interact with the cluster, the first node will start at port 30001 by default. When you are done, stop the cluster with:

3. `create-cluster stop`

Please read the `README` inside this directory for more information on how to run the script.

### Interact with the cluster

To connect to Redis Cluster, you'll need a cluster-aware Redis client. See the [documentation](/docs/clients) for your client of choice to determine its
cluster support.

You can also test your Redis Cluster using the `redis-cli` command line utility:

```
$ redis-cli -c -p 7000
redis 127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
redis 127.0.0.1:7002> set hello world
-> Redirected to slot [866] located at 127.0.0.1:7000
OK
redis 127.0.0.1:7000> get foo
-> Redirected to slot [12182] located at 127.0.0.1:7002
"bar"
redis 127.0.0.1:7002> get hello
-> Redirected to slot [866] located at 127.0.0.1:7000
"world"
```

If you created the cluster using the script, your nodes may listen on different ports, starting from 30001 by default.

The `redis-cli` cluster support is very basic, so it always uses the fact that Redis Cluster nodes are able to redirect a client to the right node. A serious client is able to do better than that, and cache the map between hash slots and nodes addresses, to directly use the right connection to the right node. The map is refreshed only when something changed in the cluster configuration, for example after a failover or after the system administrator changed the cluster layout by adding or removing nodes.

### Write an example app with redis-rb-cluster

Before going forward showing how to operate the Redis Cluster, doing things like a failover, or a resharding, we need to create some example application, or at least to be able to understand the semantics of a simple Redis Cluster client interaction.

In this way we can run an example and at the same time try to make nodes failing, or start a resharding, to see how Redis Cluster behaves under real world conditions. It is not very helpful to see what happens while nobody is writing to the cluster.

This section explains some basic usage of [redis-rb-cluster](https://github.com/antirez/redis-rb-cluster) showing two examples. The first is the following, and is the [`example.rb`](https://github.com/antirez/redis-rb-cluster/blob/master/example.rb) file inside the redis-rb-cluster distribution:

```
   1  require './cluster'
   2
   3  if ARGV.length != 2
   4      startup_nodes = [
   5          {:host => "127.0.0.1", :port => 7000},
   6          {:host => "127.0.0.1", :port => 7001}
   7      ]
   8  else
   9      startup_nodes = [
  10          {:host => ARGV[0], :port => ARGV[1].to_i}
  11      ]
  12  end
  13
  14  rc = RedisCluster.new(startup_nodes,32,:timeout => 0.1)
  15
  16  last = false
  17
  18  while not last
  19      begin
  20          last = rc.get("__last__")
  21          last = 0 if !last
  22      rescue => e
  23          puts "error #{e.to_s}"
  24          sleep 1
  25      end
  26  end
  27
  28  ((last.to_i+1)..1000000000).each{|x|
  29      begin
  30          rc.set("foo#{x}",x)
  31          puts rc.get("foo#{x}")
  32          rc.set("__last__",x)
  33      rescue => e
  34          puts "error #{e.to_s}"
  35      end
  36      sleep 0.1
  37  }
```

The application does a very simple thing, it sets keys in the form `foo<number>` to `number`, one after the other. So if you run the program the result is the following stream of commands:

* SET foo0 0
* SET foo1 1
* SET foo2 2
* And so forth...

The program looks more complex than it should usually, as it is designed to show errors on the screen instead of exiting with an exception, so every operation performed with the cluster is wrapped by `begin` `rescue` blocks.

The **line 14** is the first interesting line in the program. It creates the Redis Cluster object, using as argument a list of _startup nodes_, the maximum number of connections this object is allowed to take against different nodes, and finally the timeout after a given operation is considered to be failed.

The startup nodes don't need to be all the nodes of the cluster. The important thing is that at least one node is reachable. Also note that redis-rb-cluster updates this list of startup nodes as soon as it is able to connect with the first node. You should expect such a behavior with any other serious client.

Now that we have the Redis Cluster object instance stored in the **rc** variable, we are ready to use the object like if it was a normal Redis object instance.

This is exactly what happens in **line 18 to 26**: when we restart the example we don't want to start again with `foo0`, so we store the counter inside Redis itself. The code above is designed to read this counter, or if the counter does not exist, to assign it the value of zero.

However note how it is a while loop, as we want to try again and again even if the cluster is down
and is returning errors. Normal applications don't need to be so careful.

**Lines between 28 and 37** start the main loop where the keys are set or an error is displayed.

Note the `sleep` call at the end of the loop. In your tests you can remove the sleep if you want to write to the cluster as fast as possible (relatively to the fact that this is a busy loop without real parallelism of course, so you'll get the usual 10k ops/second in the best of the conditions).

Normally writes are slowed down in order for the example application to be easier to follow by humans.

Starting the application produces the following output:

```
ruby ./example.rb
1
2
3
4
5
6
7
8
9
^C (I stopped the program here)
```

This is not a very interesting program, and we'll use a better one in a moment, but we can already see what happens during a resharding when the program is running.

### Reshard the cluster

Now we are ready to try a cluster resharding. To do this, please keep the example.rb program running, so that you can see if there is some impact on the program running. Also, you may want to comment the `sleep` call to have some more serious write load during resharding.

Resharding basically means to move hash slots from a set of nodes to another set of nodes. Like cluster creation, it is accomplished using the redis-cli utility.

To start a resharding, just type:

```
redis-cli --cluster reshard 127.0.0.1:7000
```

You only need to specify a single node; redis-cli will find the other nodes automatically.

Currently redis-cli is only able to reshard with the administrator support; you can't just say move 5% of slots from this node to the other one (but this is pretty trivial to implement). So it starts with questions. The first is how much of a resharding do you want to do:

```
How many slots do you want to move (from 1 to 16384)?
```

We can try to reshard 1000 hash slots, that should already contain a non trivial amount of keys if the example is still running without the sleep call.

Then redis-cli needs to know what is the target of the resharding, that is, the node that
will receive the hash slots. I'll use the first master node, that is, 127.0.0.1:7000, but I need to specify the Node ID of the instance. This was already printed in a list by redis-cli, but I can always find the ID of a node with the following command if I need:

```
$ redis-cli -p 7000 cluster nodes | grep myself
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460
```

Ok, so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.

Now you'll get asked from what nodes you want to take those keys. I'll just type `all` in order to take a bit of hash slots from all the other master nodes.

After the final confirmation you'll see a message for every slot that redis-cli is going to move from a node to another, and a dot will be printed for every actual key moved from one side to the other.

While the resharding is in progress you should be able to see your example program running unaffected. You can stop and restart it multiple times during the resharding if you want.

At the end of the resharding, you can test the health of the cluster with the following command:

```
redis-cli --cluster check 127.0.0.1:7000
```

All the slots will be covered as usual, but this time the master at 127.0.0.1:7000 will have more hash slots, something around 6461.

Resharding can be performed automatically without the need to manually enter the parameters in an interactive way. This is possible using a command line like the following:

```
redis-cli --cluster reshard <host>:<port> --cluster-from <node-id> --cluster-to <node-id> --cluster-slots <number of slots> --cluster-yes
```

This allows you to build some automation if you are likely to reshard often; however, currently there is no way for redis-cli to automatically rebalance the cluster checking the distribution of keys across the cluster nodes and intelligently moving slots as needed. This feature will be added in the future.

The `--cluster-yes` option instructs the cluster manager to automatically answer "yes" to the command's prompts, allowing it to run in a non-interactive mode. Note that this
option can also be activated by setting the `REDISCLI_CLUSTER_YES` environment variable.

### A more interesting example application

The example application we wrote earlier is not very good. It writes to the cluster in a simple way without even checking if what was written is the right thing.

From our point of view the cluster receiving the writes could just always write the key `foo` to `42` to every operation, and we would not notice at all.

So in the redis-rb-cluster repository, there is a more interesting application that is called `consistency-test.rb`. It uses a set of counters, by default 1000, and sends `INCR` commands in order to increment the counters.

However, instead of just writing, the application does two additional things:

* When a counter is updated using `INCR`, the application remembers the write.
* It also reads a random counter before every write, and checks if the value is what we expected it to be, comparing it with the value it has in memory.

What this means is that this application is a simple **consistency checker**, and is able to tell you if the cluster lost some write, or if it accepted a write that we did not receive acknowledgment for. In the first case we'll see a counter having a value that is smaller than the one we remember, while in the second case the value will be greater.

Running the consistency-test application produces a line of output every second:

```
$ ruby consistency-test.rb
925 R (0 err) | 925 W (0 err) |
5030 R (0 err) | 5030 W (0 err) |
9261 R (0 err) | 9261 W (0 err) |
13517 R (0 err) | 13517 W (0 err) |
17780 R (0 err) | 17780 W (0 err) |
22025 R (0 err) | 22025 W (0 err) |
25818 R (0 err) | 25818 W (0 err) |
```

The line shows the number of **R**eads and **W**rites performed, and the number of errors (query not accepted because of errors since the system was not available).

If some inconsistency is found, new lines are added to the output. This is what happens, for example, if I reset a counter manually while the program is running:

```
$ redis-cli -h 127.0.0.1 -p 7000 set key_217 0
OK

(in the other tab I see...)

94774 R (0 err) | 94774 W (0 err) |
98821 R (0 err) | 98821 W (0 err) |
102886 R (0 err) | 102886 W (0 err) | 114 lost |
107046 R (0 err) | 107046 W (0 err) | 114 lost |
```

When I set the counter to 0 the real value was 114, so the program reports 114 lost writes (`INCR` commands that are not remembered by the cluster).

This program is much more interesting as a test case, so we'll use it to test the Redis Cluster failover.

### Test the failover

To trigger the failover, the simplest thing we can do (that is also the semantically simplest failure that can occur in a distributed system) is to crash a single process, in our case a single master.

During this test, you should take a tab open with the consistency test application running.

We can identify a master and crash it with the following command:

```
$ redis-cli -p 7000 cluster nodes | grep master
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
```

Ok, so 7000, 7001 and 7002 are masters. Let's crash node 7002 with the **DEBUG SEGFAULT** command:

```
$ redis-cli -p 7002 debug segfault
Error: Server closed the connection
```

Now we can look at the output of the consistency test to see what it reported:

```
18849 R (0 err) | 18849 W (0 err) |
23151 R (0 err) | 23151 W (0 err) |
27302 R (0 err) | 27302 W (0 err) |

... many error warnings here ...

29659 R (578 err) | 29660 W (577 err) |
33749 R (578 err) | 33750 W (577 err) |
37918 R (578 err) | 37919 W (577 err) |
42077 R (578 err) | 42078 W (577 err) |
```

As you can see, during the failover the system was not able to accept 578 reads and 577 writes, however no inconsistency was created in the database. This may sound unexpected, as in the first part of this tutorial we stated that Redis Cluster can lose writes during the failover because it uses asynchronous replication. What we did not say is that this is not very likely to happen, because Redis sends the reply to the client, and the commands to replicate to the replicas, about at
the same time, so there is a very small window to lose data. However the fact that it is hard to trigger does not mean that it is impossible, so this does not change the consistency guarantees provided by Redis cluster.

We can now check what is the cluster setup after the failover (note that in the meantime I restarted the crashed instance so that it rejoins the cluster as a replica):

```
$ redis-cli -p 7000 cluster nodes
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected
```

Now the masters are running on ports 7000, 7001 and 7005. What was previously a master, that is the Redis instance running on port 7002, is now a replica of 7005.

The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:

* Node ID
* ip:port
* flags: master, replica, myself, fail, ...
* if it is a replica, the Node ID of the master
* Time of the last pending PING still waiting for a reply
* Time of the last PONG received
* Configuration epoch for this node (see the Cluster specification)
* Status of the link to this node
* Slots served...

### Manual failover

Sometimes it is useful to force a failover without actually causing any problem on a master. For example, to upgrade the Redis process of one of the master nodes it is a good idea to failover it to turn it into a replica with minimal impact on availability.

Manual failovers are supported by Redis
Cluster using the `CLUSTER FAILOVER` command, that must be executed in one of the replicas of the master you want to failover.

Manual failovers are special and are safer compared to failovers resulting from actual master failures. They occur in a way that avoids data loss in the process, by switching clients from the original master to the new master only when the system is sure that the new master processed all the replication stream from the old one.

This is what you see in the replica log when you perform a manual failover:

```
# Manual failover user request accepted.
# Received replication offset for paused master manual failover: 347540
# All master replication stream processed, manual failover can start.
# Start of election delayed for 0 milliseconds (rank #0, offset 347540).
# Starting a failover election for epoch 7545.
# Failover election won: I'm the new master.
```

Basically, clients connected to the master we are failing over are stopped. At the same time, the master sends its replication offset to the replica, that waits to reach the offset on its side. When the replication offset is reached, the failover starts, and the old master is informed about the configuration switch. When the clients are unblocked on the old master, they are redirected to the new master.

To promote a replica to master, it must first be known as a replica by a majority of the masters in the cluster. Otherwise, it cannot win the failover election. If the replica has just been added to the cluster (see [Add a new node as a replica](#add-a-new-node-as-a-replica)), you may need to wait a while before sending the `CLUSTER FAILOVER` command, to make sure the masters in the cluster are aware of the new replica.

### Add a new node

Adding a new node is basically the process of adding an empty node and then moving some data into it, in case it is a new master, or telling it to setup as a replica of a known node, in case it is a replica.

We'll show both, starting with the addition of a new master instance.

In both cases, the first step to perform is adding an empty
node.

This is as simple as to start a new node in port 7006 (we already used from 7000 to 7005 for our existing 6 nodes) with the same configuration used for the other nodes, except for the port number, so what you should do in order to conform with the setup we used for the previous nodes:

* Create a new tab in your terminal application.
* Enter the `cluster-test` directory.
* Create a directory named `7006`.
* Create a `redis.conf` file inside, similar to the one used for the other nodes but using 7006 as port number.
* Finally start the server with `redis-server ./redis.conf`

At this point the server should be running.

Now we can use **redis-cli** as usual in order to add the node to the existing cluster:

```
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000
```

As you can see, I used the **add-node** command, specifying the address of the new node as first argument, and the address of a random existing node in the cluster as second argument.

In practical terms redis-cli here did very little to help us, it just sent a `CLUSTER MEET` message to the node, something that is also possible to accomplish manually. However redis-cli also checks the state of the cluster before operating, so it is a good idea to perform cluster operations always via redis-cli even when you know how the internals work.

Now we can connect to the new node to see if it really joined the cluster:

```
redis 127.0.0.1:7006> cluster nodes
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383
```

Note that since this node is already connected to the cluster, it is already able to redirect client queries correctly and is generally speaking part of the cluster. However it has two peculiarities compared to the other masters:

* It holds no data as it has no assigned hash slots.
* Because it is a master without assigned slots, it does not participate in the election process when a replica wants to become a master.

Now it is possible to assign hash slots to this node using the resharding feature of `redis-cli`. It is basically useless to show this as we already did in a previous section, there is no difference, it is just a resharding having as a target the empty node.

#### Add a new node as a replica

Adding a new replica can be performed in two ways. The obvious one is to use redis-cli again, but with the `--cluster-slave` option, like this:

```
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave
```

Note that the command line here is exactly like the one we used to add a new master, so we are not specifying to which master we want to add the replica. In this case, what happens is that redis-cli will add the new node as replica of a random master among the masters with fewer replicas.

However, you can specify exactly what master you want to target with your new replica with the following command line:

```
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave --cluster-master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
```

This way we assign the new replica to a specific master.

A more manual way to add a replica to a specific master is to add the new node as an empty master, and then turn it into a replica using the `CLUSTER REPLICATE` command. This also works if the node was added as a replica but you want to move it as a replica of a different master.

For example, in order to add a replica for the node 127.0.0.1:7005, that is
currently serving hash slots in the range 11423 16383 that has a Node ID 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e all I need to do is to connect with the new node already added as empty master and send the command redis 127 0 0 1 7006 cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e That s it Now we have a new replica for this set of hash slots and all the other nodes in the cluster already know after a few seconds needed to update their config We can verify with the following command redis cli p 7000 cluster nodes grep slave grep 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e f093c80dde814da99c5cf72a7dd01590792b783b 127 0 0 1 7006 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617702 3 connected 2938205e12de373867bf38f1ca29d31d0ddb3e46 127 0 0 1 7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected The node 3c3a0c now has two replicas running on ports 7002 the existing one and 7006 the new one Remove a node To remove a replica node just use the del node command of redis cli redis cli cluster del node 127 0 0 1 7000 node id The first argument is just a random node in the cluster the second argument is the ID of the node you want to remove You can remove a master node in the same way as well however in order to remove a master node it must be empty If the master is not empty you need to reshard data away from it to all the other master nodes before An alternative to remove a master node is to perform a manual failover of it over one of its replicas and remove the node after it turned into a replica of the new master Obviously this does not help when you want to reduce the actual number of masters in your cluster in that case a resharding is needed There is a special scenario where you want to remove a failed node You should not use the del node command because it tries to connect to all nodes and you will encounter a connection refused error Instead you can use the call command redis cli cluster call 127 0 0 1 7000 cluster 
forget node id This command will execute CLUSTER FORGET command on every node Replica migration In Redis Cluster you can reconfigure a replica to replicate with a different master at any time just using this command CLUSTER REPLICATE master node id However there is a special scenario where you want replicas to move from one master to another one automatically without the help of the system administrator The automatic reconfiguration of replicas is called replicas migration and is able to improve the reliability of a Redis Cluster You can read the details of replicas migration in the Redis Cluster Specification topics cluster spec here we ll only provide some information about the general idea and what you should do in order to benefit from it The reason why you may want to let your cluster replicas to move from one master to another under certain condition is that usually the Redis Cluster is as resistant to failures as the number of replicas attached to a given master For example a cluster where every master has a single replica can t continue operations if the master and its replica fail at the same time simply because there is no other instance to have a copy of the hash slots the master was serving However while net splits are likely to isolate a number of nodes at the same time many other kind of failures like hardware or software failures local to a single node are a very notable class of failures that are unlikely to happen at the same time so it is possible that in your cluster where every master has a replica the replica is killed at 4am and the master is killed at 6am This still will result in a cluster that can no longer operate To improve reliability of the system we have the option to add additional replicas to every master but this is expensive Replica migration allows to add more replicas to just a few masters So you have 10 masters with 1 replica each for a total of 20 instances However you add for example 3 instances more as replicas of some of 
your masters so certain masters will have more than a single replica With replicas migration what happens is that if a master is left without replicas a replica from a master that has multiple replicas will migrate to the orphaned master So after your replica goes down at 4am as in the example we made above another replica will take its place and when the master will fail as well at 5am there is still a replica that can be elected so that the cluster can continue to operate So what you should know about replicas migration in short The cluster will try to migrate a replica from the master that has the greatest number of replicas in a given moment To benefit from replica migration you have just to add a few more replicas to a single master in your cluster it does not matter what master There is a configuration parameter that controls the replica migration feature that is called cluster migration barrier you can read more about it in the example redis conf file provided with Redis Cluster Upgrade nodes in a Redis Cluster Upgrading replica nodes is easy since you just need to stop the node and restart it with an updated version of Redis If there are clients scaling reads using replica nodes they should be able to reconnect to a different replica if a given one is not available Upgrading masters is a bit more complex and the suggested procedure is 1 Use CLUSTER FAILOVER to trigger a manual failover of the master to one of its replicas See the Manual failover manual failover in this topic 2 Wait for the master to turn into a replica 3 Finally upgrade the node as you do for replicas 4 If you want the master to be the node you just upgraded trigger a new manual failover in order to turn back the upgraded node into a master Following this procedure you should upgrade one node after the other until all the nodes are upgraded Migrate to Redis Cluster Users willing to migrate to Redis Cluster may have just a single master or may already using a preexisting sharding setup where 
keys are split among N nodes using some in house algorithm or a sharding algorithm implemented by their client library or Redis proxy In both cases it is possible to migrate to Redis Cluster easily however what is the most important detail is if multiple keys operations are used by the application and how There are three different cases 1 Multiple keys operations or transactions or Lua scripts involving multiple keys are not used Keys are accessed independently even if accessed via transactions or Lua scripts grouping multiple commands about the same key together 2 Multiple keys operations or transactions or Lua scripts involving multiple keys are used but only with keys having the same hash tag which means that the keys used together all have a sub string that happens to be identical For example the following multiple keys operation is defined in the context of the same hash tag SUNION user 1000 foo user 1000 bar 3 Multiple keys operations or transactions or Lua scripts involving multiple keys are used with key names not having an explicit or the same hash tag The third case is not handled by Redis Cluster the application requires to be modified in order to not use multi keys operations or only use them in the context of the same hash tag Case 1 and 2 are covered so we ll focus on those two cases that are handled in the same way so no distinction will be made in the documentation Assuming you have your preexisting data set split into N masters where N 1 if you have no preexisting sharding the following steps are needed in order to migrate your data set to Redis Cluster 1 Stop your clients No automatic live migration to Redis Cluster is currently possible You may be able to do it orchestrating a live migration in the context of your application environment 2 Generate an append only file for all of your N masters using the BGREWRITEAOF command and waiting for the AOF file to be completely generated 3 Save your AOF files from aof 1 to aof N somewhere At this point 
you can stop your old instances if you wish this is useful since in non virtualized deployments you often need to reuse the same computers 4 Create a Redis Cluster composed of N masters and zero replicas You ll add replicas later Make sure all your nodes are using the append only file for persistence 5 Stop all the cluster nodes substitute their append only file with your pre existing append only files aof 1 for the first node aof 2 for the second node up to aof N 6 Restart your Redis Cluster nodes with the new AOF files They ll complain that there are keys that should not be there according to their configuration 7 Use redis cli cluster fix command in order to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative or not 8 Use redis cli cluster check at the end to make sure your cluster is ok 9 Restart your clients modified to use a Redis Cluster aware client library There is an alternative way to import data from external instances to a Redis Cluster which is to use the redis cli cluster import command The command moves all the keys of a running instance deleting the keys from the source instance to the specified pre existing Redis Cluster However note that if you use a Redis 2 8 instance as source instance the operation may be slow since 2 8 does not implement migrate connection caching so you may want to restart your source instance with a Redis 3 x version before to perform such operation Starting with Redis 5 if not for backward compatibility the Redis project no longer uses the word slave Unfortunately in this command the word slave is part of the protocol so we ll be able to remove such occurrences only when this API will be naturally deprecated Learn more Redis Cluster specification topics cluster spec Linear Scaling with Redis Enterprise https redis com redis enterprise technology linear scaling redis enterprise Docker documentation https docs docker com engine userguide networking dockernetworks |
redis topics debugging Debugging aliases title Debugging weight 10 docs reference debugging md docs reference debugging A guide to debugging Redis server processes | ---
title: "Debugging"
linkTitle: "Debugging"
weight: 10
description: >
A guide to debugging Redis server processes
aliases: [
/topics/debugging,
/docs/reference/debugging,
/docs/reference/debugging.md
]
---
Redis is developed with an emphasis on stability. We do our best with
every release to make sure you'll experience a stable product with no
crashes. However, if you ever need to debug the Redis process itself, read on.
When Redis crashes, it produces a detailed report of what happened. However,
sometimes looking at the crash report is not enough, nor is it possible for
the Redis core team to reproduce the issue independently. In this scenario, we
need help from the user who can reproduce the issue.
This guide shows how to use GDB to provide the information the
Redis developers will need to track the bug more easily.
## What is GDB?
GDB is the GNU Debugger: a program that is able to inspect the internal state
of another program. Usually tracking and fixing a bug is an exercise in
gathering more information about the state of the program at the moment the
bug happens, so GDB is an extremely useful tool.
GDB can be used in two ways:
* It can attach to a running program and inspect the state of it at runtime.
* It can inspect the state of a program that already terminated using what is called a *core file*, that is, the image of the memory at the time the program was running.
From the point of view of investigating Redis bugs we need to use both of these
GDB modes. The user able to reproduce the bug attaches GDB to their running Redis
instance, and when the crash happens, they create the `core` file that in turn
the developer will use to inspect the Redis internals at the time of the crash.
This way the developer can perform all the inspections on their own computer
without the help of the user, and the user is free to restart Redis in their
production environment.
## Compiling Redis without optimizations
By default Redis is compiled with the `-O2` switch, which means that compiler
optimizations are enabled. This makes the Redis executable faster, but at the
same time it makes Redis (like any other program) harder to inspect using GDB.
It is better to attach GDB to Redis compiled without optimizations using the
`make noopt` command (instead of just using the plain `make` command). However,
if you have an already running Redis in production there is no need to recompile
and restart it if this is going to create problems on your side. GDB still works
against executables compiled with optimizations.
You should not be overly concerned about the loss of performance from compiling Redis
without optimizations. It is unlikely that this will cause problems in your
environment as Redis is not very CPU-bound.
## Attaching GDB to a running process
If you have an already running Redis server, you can attach GDB to it, so that
if Redis crashes it will be possible to both inspect the internals and generate
a `core dump` file.
After you attach GDB to the Redis process it will continue running as usual without
any loss of performance, so this is not a dangerous procedure.
In order to attach GDB the first thing you need is the *process ID* of the running
Redis instance (the *pid* of the process). You can easily obtain it using
`redis-cli`:
$ redis-cli info | grep process_id
process_id:58414
In the above example the process ID is **58414**.
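If you script this step often, the pid can also be pulled out of the raw `INFO` payload programmatically. A minimal sketch (the sample payload below is abbreviated; real `INFO` output has many more fields, but the `process_id` line has exactly this shape):

```python
def extract_pid(info_text: str) -> int:
    """Return the process_id reported in a Redis INFO payload."""
    for line in info_text.splitlines():
        if line.startswith("process_id:"):
            return int(line.split(":", 1)[1])
    raise ValueError("process_id not found in INFO output")

# Abbreviated sample of what `redis-cli info` returns:
sample = "# Server\r\nredis_version:7.0.0\r\nprocess_id:58414\r\n"
print(extract_pid(sample))  # 58414
```

You could feed it the output of `redis-cli info` captured via a subprocess instead of the hardcoded sample.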
Log in to your Redis server.
(Optional but recommended) Start **screen** or **tmux** or any other program that will make sure that your GDB session will not be closed if your ssh connection times out. You can learn more about screen in [this article](http://www.linuxjournal.com/article/6340).
Attach GDB to the running Redis server by typing:
$ gdb <path-to-redis-executable> <pid>
For example:
$ gdb /usr/local/bin/redis-server 58414
GDB will start and will attach to the running server printing something like the following:
Reading symbols for shared libraries + done
0x00007fff8d4797e6 in epoll_wait ()
(gdb)
At this point GDB is attached but **your Redis instance is blocked by GDB**. In
order to let the Redis instance continue the execution just type **continue** at
the GDB prompt, and press enter.
(gdb) continue
Continuing.
Done! Now your Redis instance has GDB attached. Now you can wait for the next crash. :)
Now it's time to detach from your screen/tmux session, if you are running GDB inside
one, by pressing the **Ctrl-a d** key combination.
## After the crash
Redis has a command to simulate a segmentation fault (in other words, a bad crash):
the `DEBUG SEGFAULT` command (don't use it against a real production instance, of course!).
I'll use this command to crash my instance to show what happens on the GDB side:
(gdb) continue
Continuing.
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0xffffffffffffffff
debugCommand (c=0x7ffc32005000) at debug.c:220
220 *((char*)-1) = 'x';
As you can see GDB detected that Redis crashed, and was even able to show me
the file name and line number causing the crash. This is already much better
than the Redis crash report backtrace (containing just function names and
binary offsets).
## Obtaining the stack trace
The first thing to do is to obtain a full stack trace with GDB. This is as
simple as using the **bt** command:
(gdb) bt
#0 debugCommand (c=0x7ffc32005000) at debug.c:220
#1 0x000000010d246d63 in call (c=0x7ffc32005000) at redis.c:1163
#2 0x000000010d247290 in processCommand (c=0x7ffc32005000) at redis.c:1305
#3 0x000000010d251660 in processInputBuffer (c=0x7ffc32005000) at networking.c:959
#4 0x000000010d251872 in readQueryFromClient (el=0x0, fd=5, privdata=0x7fff76f1c0b0, mask=220924512) at networking.c:1021
#5 0x000000010d243523 in aeProcessEvents (eventLoop=0x7fff6ce408d0, flags=220829559) at ae.c:352
#6 0x000000010d24373b in aeMain (eventLoop=0x10d429ef0) at ae.c:397
#7 0x000000010d2494ff in main (argc=1, argv=0x10d2b2900) at redis.c:2046
This shows the backtrace, but we also want to dump the processor registers using the **info registers** command:
(gdb) info registers
rax 0x0 0
rbx 0x7ffc32005000 140721147367424
rcx 0x10d2b0a60 4515891808
rdx 0x7fff76f1c0b0 140735188943024
rsi 0x10d299777 4515796855
rdi 0x0 0
rbp 0x7fff6ce40730 0x7fff6ce40730
rsp 0x7fff6ce40650 0x7fff6ce40650
r8 0x4f26b3f7 1327936503
r9 0x7fff6ce40718 140735020271384
r10 0x81 129
r11 0x10d430398 4517462936
r12 0x4b7c04f8babc0 1327936503000000
r13 0x10d3350a0 4516434080
r14 0x10d42d9f0 4517452272
r15 0x10d430398 4517462936
rip 0x10d26cfd4 0x10d26cfd4 <debugCommand+68>
eflags 0x10246 66118
cs 0x2b 43
ss 0x0 0
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
Please **make sure to include** both of these outputs in your bug report.
## Obtaining the core file
The next step is to generate the core dump, that is, the image of the memory of the running Redis process. This is done using the `gcore` command:
(gdb) gcore
Saved corefile core.58414
Now you have the core dump to send to the Redis developers. However, **it is important
to understand** that the core file contains all the data that was inside the
Redis instance at the time of the crash. The Redis developers will make sure not to
share the content with anyone else, and will delete the file as soon as it is no
longer needed for debugging purposes, but be aware that by sending the core
file you are sending your data.
## What to send to developers
Finally you can send everything to the Redis core team:
* The Redis executable you are using.
* The stack trace produced by the **bt** command, and the registers dump.
* The core file you generated with gdb.
* Information about the operating system and GCC version, and Redis version you are using.
## Thank you
Your help is extremely important! Many issues can only be tracked this way. So
thanks! | redis | title Debugging linkTitle Debugging weight 10 description A guide to debugging Redis server processes aliases topics debugging docs reference debugging docs reference debugging md Redis is developed with an emphasis on stability We do our best with every release to make sure you ll experience a stable product with no crashes However if you ever need to debug the Redis process itself read on When Redis crashes it produces a detailed report of what happened However sometimes looking at the crash report is not enough nor is it possible for the Redis core team to reproduce the issue independently In this scenario we need help from the user who can reproduce the issue This guide shows how to use GDB to provide the information the Redis developers will need to track the bug more easily What is GDB GDB is the Gnu Debugger a program that is able to inspect the internal state of another program Usually tracking and fixing a bug is an exercise in gathering more information about the state of the program at the moment the bug happens so GDB is an extremely useful tool GDB can be used in two ways It can attach to a running program and inspect the state of it at runtime It can inspect the state of a program that already terminated using what is called a core file that is the image of the memory at the time the program was running From the point of view of investigating Redis bugs we need to use both of these GDB modes The user able to reproduce the bug attaches GDB to their running Redis instance and when the crash happens they create the core file that in turn the developer will use to inspect the Redis internals at the time of the crash This way the developer can perform all the inspections in his or her computer without the help of the user and the user is free to restart Redis in their production environment Compiling Redis without optimizations By default Redis is compiled with the O2 switch this means that compiler optimizations are enabled This makes 
---
title: "ACL"
linkTitle: "ACL"
weight: 1
description: Redis Access Control List
aliases: [
/topics/acl,
/docs/manual/security/acl,
/docs/manual/security/acl.md
]
---
The Redis ACL, short for Access Control List, is the feature that allows certain
connections to be limited in terms of the commands that can be executed and the
keys that can be accessed. The way it works is that, after connecting, a client
is required to provide a username and a valid password to authenticate. If authentication succeeds, the connection is associated with a given
user and the limits the user has. Redis can be configured so that new
connections are already authenticated with a "default" user (this is the
default configuration). As a side effect, configuring the default user makes it
possible to provide only a specific subset of functionalities to connections
that are not explicitly authenticated.
In the default configuration, Redis 6 (the first version to have ACLs) works
exactly like older versions of Redis. Every new connection is
capable of calling every possible command and accessing every key, so the
ACL feature is backward compatible with old clients and applications. Also
the old way to configure a password, using the **requirepass** configuration
directive, still works as expected. However, it now
sets a password for the default user.
The Redis `AUTH` command was extended in Redis 6, so now it is possible to
use it in the two-arguments form:
AUTH <username> <password>
Here's an example of the old form:
AUTH <password>
What happens is that the username used to authenticate is "default", so
just specifying the password implies that we want to authenticate against
the default user. This provides backward compatibility.
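For the curious, both `AUTH` forms are ordinary commands on the wire. The sketch below (with made-up credentials) encodes them in RESP, the Redis serialization protocol, showing that the two-arguments form simply carries one extra bulk string:

```python
def resp_encode(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# Old form: implicitly authenticates against the "default" user.
print(resp_encode("AUTH", "hunter2"))
# New two-arguments form introduced in Redis 6.
print(resp_encode("AUTH", "alice", "hunter2"))
```

The credentials here are illustrative only; never hardcode real passwords.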
## When ACLs are useful
Before using ACLs, you may want to ask yourself what's the goal you want to
accomplish by implementing this layer of protection. Normally there are
two main goals that are well served by ACLs:
1. You want to improve security by restricting the access to commands and keys, so that untrusted clients have no access and trusted clients have just the minimum access level to the database in order to perform the work needed. For instance, certain clients may just be able to execute read only commands.
2. You want to improve operational safety, so that processes or humans accessing Redis are not allowed to damage the data or the configuration due to software errors or manual mistakes. For instance, there is no reason for a worker that fetches delayed jobs from Redis to be able to call the `FLUSHALL` command.
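As an illustration of the second goal, a delayed-jobs worker could be confined with `ACL SETUSER`. The helper below only assembles the command arguments; the username, password, and key pattern are hypothetical, and the rule tokens follow the ACL DSL described later in this document:

```python
def setuser_args(user: str, password: str, key_pattern: str, rules: list) -> list:
    """Build the argument list for an ACL SETUSER call.

    `>password` sets a password, `~pattern` grants a key pattern; extra
    rule tokens (e.g. "+@read") are appended as-is.
    """
    return ["ACL", "SETUSER", user, "on", f">{password}", f"~{key_pattern}", *rules]

# Hypothetical read-only worker limited to its own keys:
args = setuser_args("job-worker", "s3cret", "jobs:*", ["+@read"])
print(" ".join(args))
# ACL SETUSER job-worker on >s3cret ~jobs:* +@read
```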
Another typical usage of ACLs is related to managed Redis instances. Redis is
often provided as a managed service, either by internal company teams that handle
the Redis infrastructure for other internal customers, or in a
software-as-a-service setup by cloud providers. In both
setups, we want to be sure that configuration commands are excluded for the
customers.
## Configure ACLs with the ACL command
ACLs are defined using a DSL (domain specific language) that describes what
a given user is allowed to do. Such rules are always applied from the
first to the last, left-to-right, because sometimes the order of the rules is
important to understand what the user is really able to do.
By default there is a single user defined, called *default*. We
can use the `ACL LIST` command in order to check the currently active ACLs
and verify what the configuration of a freshly started, defaults-configured
Redis instance is:
> ACL LIST
1) "user default on nopass ~* &* +@all"
The command above reports the list of users in the same format that is
used in the Redis configuration files, by translating the current ACLs set
for the users back into their description.
The first two words in each line are "user" followed by the username. The
next words are ACL rules that describe different things. We'll show how the rules work in detail, but for now it is enough to say that the default
user is configured to be active (on), to require no password (nopass), to
access every possible key (`~*`) and Pub/Sub channel (`&*`), and be able to
call every possible command (`+@all`).
Also, in the special case of the default user, having the *nopass* rule means
that new connections are automatically authenticated with the default user
without any explicit `AUTH` call needed.
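If you need to inspect these lines programmatically, a minimal parser only has to split on whitespace, since the format is the literal word `user`, then the username, then the rule tokens:

```python
def parse_acl_line(line: str) -> dict:
    """Split an ACL LIST line into username and rule tokens."""
    tokens = line.split()
    if not tokens or tokens[0] != "user":
        raise ValueError("not an ACL LIST line")
    return {"user": tokens[1], "rules": tokens[2:]}

parsed = parse_acl_line("user default on nopass ~* &* +@all")
print(parsed["user"])   # default
print(parsed["rules"])  # ['on', 'nopass', '~*', '&*', '+@all']
```

This is a sketch: real rule tokens (e.g. quoted command lists) may need more careful handling than a plain `split()`.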
## ACL rules
The following is the list of valid ACL rules. Certain rules are just
single words that are used in order to activate or remove a flag, or to
perform a given change to the user ACL. Other rules are char prefixes that
are concatenated with command or category names, key patterns, and
so forth.
Enable and disallow users:
* `on`: Enable the user: it is possible to authenticate as this user.
* `off`: Disallow the user: it's no longer possible to authenticate with this user; however, previously authenticated connections will still work. Note that if the default user is flagged as *off*, new connections will start as not authenticated and will require the user to send `AUTH` or `HELLO` with the AUTH option in order to authenticate in some way, regardless of the default user configuration.
Allow and disallow commands:
* `+<command>`: Add the command to the list of commands the user can call. Can be used with `|` to allow subcommands (e.g., `+config|get`).
* `-<command>`: Remove the command from the list of commands the user can call. Starting with Redis 7.0, it can be used with `|` to block subcommands (e.g., `-config|set`).
* `+@<category>`: Add all the commands in the specified category to the list of commands the user can call, with valid categories being @admin, @set, @sortedset, and so forth; see the full list by calling the `ACL CAT` command. The special category @all means all the commands, both the ones currently present in the server and the ones that will be loaded in the future via modules.
* `-@<category>`: Like `+@<category>` but removes the commands from the list of commands the client can call.
* `+<command>|first-arg`: Allow a specific first argument of an otherwise disabled command. It is only supported on commands with no sub-commands, and is not allowed in the negative form like `-SELECT|1`; only the additive form starting with `+` is allowed. This feature is deprecated and may be removed in the future.
* `allcommands`: Alias for +@all. Note that it implies the ability to execute all the future commands loaded via the modules system.
* `nocommands`: Alias for -@all.
Allow and disallow certain keys and key permissions:
* `~<pattern>`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of `KEYS`. It is possible to specify multiple patterns.
* `%R~<pattern>`: (Available in Redis 7.0 and later) Add the specified read key pattern. This behaves similar to the regular key pattern but only grants permission to read from keys that match the given pattern. See [key permissions](#key-permissions) for more information.
* `%W~<pattern>`: (Available in Redis 7.0 and later) Add the specified write key pattern. This behaves similar to the regular key pattern but only grants permission to write to keys that match the given pattern. See [key permissions](#key-permissions) for more information.
* `%RW~<pattern>`: (Available in Redis 7.0 and later) Alias for `~<pattern>`.
* `allkeys`: Alias for `~*`.
* `resetkeys`: Flush the list of allowed key patterns. For instance the ACL `~foo:* ~bar:* resetkeys ~objects:*` will only allow the client to access keys that match the pattern `objects:*`.
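Key patterns use glob-style matching like the `KEYS` command. As an illustration, Python's `fnmatch` module implements comparable globbing; this is a sketch only, since Redis's glob implementation differs in small details such as escaping:

```python
from fnmatch import fnmatchcase

# Approximate Redis glob-style key matching with Python's fnmatch.
# Illustrative only: Redis's glob syntax differs in small details.
allowed_patterns = ["objects:*"]  # net effect of "~foo:* ~bar:* resetkeys ~objects:*"

def key_allowed(key, patterns):
    return any(fnmatchcase(key, p) for p in patterns)

print(key_allowed("objects:1234", allowed_patterns))  # True
print(key_allowed("foo:1", allowed_patterns))         # False (flushed by resetkeys)
```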
Allow and disallow Pub/Sub channels:
* `&<pattern>`: (Available in Redis 6.2 and later) Add a glob style pattern of Pub/Sub channels that can be accessed by the user. It is possible to specify multiple channel patterns. Note that pattern matching is done only for channels mentioned by `PUBLISH` and `SUBSCRIBE`, whereas `PSUBSCRIBE` requires a literal match between its channel patterns and those allowed for the user.
* `allchannels`: Alias for `&*` that allows the user to access all Pub/Sub channels.
* `resetchannels`: Flush the list of allowed channel patterns and disconnect the user's Pub/Sub clients if these are no longer able to access their respective channels and/or channel patterns.
Configure valid passwords for the user:
* `><password>`: Add this password to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). Every user can have any number of passwords.
* `<<password>`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set.
* `#<hash>`: Add this SHA-256 hash value to the list of valid passwords for the user. This hash value will be compared to the hash of a password entered for an ACL user. This allows users to store hashes in the `acl.conf` file rather than storing cleartext passwords. Only SHA-256 hash values are accepted as the password hash must be 64 characters and only contain lowercase hexadecimal characters.
* `!<hash>`: Remove this hash value from the list of valid passwords. This is useful when you do not know the password specified by the hash value but would like to remove the password from the user.
* `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition.
* `resetpass`: Flushes the list of allowed passwords and removes the *nopass* status. After *resetpass*, the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later).
*Note: if a user is not flagged with nopass and has no list of valid passwords, that user is effectively impossible to use because there will be no way to log in as that user.*
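As a side note on the `#<hash>` rule above: the expected value is the lowercase hexadecimal SHA-256 digest of the cleartext password, which can be computed with any standard library. A Python sketch:

```python
import hashlib

# The "#<hash>" ACL rule expects the lowercase hexadecimal SHA-256 digest
# of the cleartext password: 64 lowercase hex characters.
def acl_password_hash(password):
    return hashlib.sha256(password.encode()).hexdigest()

print(len(acl_password_hash("mypass")))  # 64
```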
Configure selectors for the user:
* `(<rule list>)`: (Available in Redis 7.0 and later) Create a new selector to match rules against. Selectors are evaluated after the user permissions, and are evaluated according to the order they are defined. If a command matches either the user permissions or any selector, it is allowed. See [selectors](#selectors) for more information.
* `clearselectors`: (Available in Redis 7.0 and later) Delete all of the selectors attached to the user.
Reset the user:
* `reset`: Performs the following actions: resetpass, resetkeys, resetchannels, allchannels (if acl-pubsub-default is set), off, clearselectors, -@all. The user returns to the same state it had immediately after its creation.
## Create and edit user ACLs with the ACL SETUSER command
Users can be created and modified in two main ways:
1. Using the ACL command and its `ACL SETUSER` subcommand.
2. Modifying the server configuration, where users can be defined, and restarting the server. With an *external ACL file*, just call `ACL LOAD`.
In this section we'll learn how to define users using the `ACL` command.
With such knowledge, it will be trivial to do the same things via the
configuration files. Defining users in the configuration deserves its own
section and will be discussed later separately.
To start, try the simplest `ACL SETUSER` command call:
    > ACL SETUSER alice
    OK
The `ACL SETUSER` command takes the username and a list of ACL rules to apply
to the user. However, the above example did not specify any rule at all.
This will just create the user if it did not exist, using the defaults for new
users. If the user already exists, the command above will do nothing at all.
Check the status of the newly created user:
    > ACL LIST
    1) "user alice off resetchannels -@all"
    2) "user default on nopass ~* &* +@all"
The new user "alice" is:
* Disabled (in the off status), so `AUTH` will not work for the user "alice".
* Without any password set.
* Unable to access any command. Note that the user is created by default without the ability to access any command, so the `-@all` in the output above could be omitted; however, `ACL LIST` attempts to be explicit rather than implicit.
* Without any key pattern that it can access.
* Without any Pub/Sub channel that it can access.
New users are created with restrictive permissions by default. Starting with Redis 6.2, ACL provides Pub/Sub channels access management as well. To ensure backward compatibility with version 6.0 when upgrading to Redis 6.2, new users are granted the 'allchannels' permission by default. The default can be set to `resetchannels` via the `acl-pubsub-default` configuration directive.
Starting with Redis 7.0, the `acl-pubsub-default` value is set to `resetchannels` to restrict channel access by default and provide better security.
The default can be set back to `allchannels` via the `acl-pubsub-default` configuration directive to be compatible with previous versions.
Such a user is completely useless. Let's try to define the user so that
it is active, has a password, and can access only the `GET` command
on key names starting with the string "cached:".
    > ACL SETUSER alice on >p1pp0 ~cached:* +get
    OK
Now the user can do something, but will refuse to do other things:
    > AUTH alice p1pp0
    OK
    > GET foo
    (error) NOPERM this user has no permissions to access one of the keys used as arguments
    > GET cached:1234
    (nil)
    > SET cached:1234 zap
    (error) NOPERM this user has no permissions to run the 'set' command
Things are working as expected. In order to inspect the configuration of the
user alice (remember that user names are case sensitive), it is possible to
use `ACL GETUSER`, an alternative to `ACL LIST` that produces output more
suitable for computers to parse, while remaining quite human readable.
    > ACL GETUSER alice
    1) "flags"
    2) 1) "on"
    3) "passwords"
    4) 1) "2d9c75..."
    5) "commands"
    6) "-@all +get"
    7) "keys"
    8) "~cached:*"
    9) "channels"
    10) ""
    11) "selectors"
    12) (empty array)
The `ACL GETUSER` returns a field-value array that describes the user in more parsable terms. The output includes the set of flags, a list of key patterns, passwords, and so forth. The output is probably more readable if we use RESP3, so that it is returned as a map reply:
    > ACL GETUSER alice
    1# "flags" => 1~ "on"
    2# "passwords" => 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
    3# "commands" => "-@all +get"
    4# "keys" => "~cached:*"
    5# "channels" => ""
    6# "selectors" => (empty array)
*Note: from now on, we'll continue using the Redis default protocol, version 2*
Using another `ACL SETUSER` command (from a different user, because alice cannot run the `ACL` command), we can add multiple patterns to the user:
    > ACL SETUSER alice ~objects:* ~items:* ~public:*
    OK
    > ACL LIST
    1) "user alice on #2d9c75... ~cached:* ~objects:* ~items:* ~public:* resetchannels -@all +get"
    2) "user default on nopass ~* &* +@all"
The user representation in memory is now as we expect it to be.
## Multiple calls to ACL SETUSER
It is very important to understand what happens when `ACL SETUSER` is called
multiple times. What is critical to know is that every `ACL SETUSER` call will
NOT reset the user, but will just apply the ACL rules to the existing user.
A brand new user with zeroed ACLs is created only if the user was not known
before. The newly created user cannot do anything, is disabled, has no
passwords, and so forth. This is the best default for safety.
However later calls will just modify the user incrementally. For instance,
the following sequence:
    > ACL SETUSER myuser +set
    OK
    > ACL SETUSER myuser +get
    OK
Will result in myuser being able to call both `GET` and `SET`:
    > ACL LIST
    1) "user default on nopass ~* &* +@all"
    2) "user myuser off resetchannels -@all +get +set"
## Command categories
Setting user ACLs by specifying all the commands one after the other is
really annoying, so instead we do things like this:
    > ACL SETUSER antirez on +@all -@dangerous >42a979... ~*
By saying +@all and -@dangerous, we included all the commands and later removed
all the commands that are tagged as dangerous inside the Redis command table.
Note that command categories **never include modules commands** with
the exception of +@all. If you say +@all, all the commands can be executed by
the user, even future commands loaded via the modules system. However if you
use the ACL rule +@read or any other, the modules commands are always
excluded. This is very important because you should just trust the Redis
internal command table. Modules may expose dangerous things, and in
the case of an ACL that is just additive, that is, in the form of `+@all -...`,
you should be absolutely sure that you'll never include what you did not mean
to.
The following is a list of command categories and their meanings:
* **admin** - Administrative commands. Normal applications will never need to use
these. Includes `REPLICAOF`, `CONFIG`, `DEBUG`, `SAVE`, `MONITOR`, `ACL`, `SHUTDOWN`, etc.
* **bitmap** - Data type: bitmaps related.
* **blocking** - Potentially blocking the connection until released by another
command.
* **connection** - Commands affecting the connection or other connections.
This includes `AUTH`, `SELECT`, `COMMAND`, `CLIENT`, `ECHO`, `PING`, etc.
* **dangerous** - Potentially dangerous commands (each should be considered with care for
various reasons). This includes `FLUSHALL`, `MIGRATE`, `RESTORE`, `SORT`, `KEYS`,
`CLIENT`, `DEBUG`, `INFO`, `CONFIG`, `SAVE`, `REPLICAOF`, etc.
* **geo** - Data type: geospatial indexes related.
* **hash** - Data type: hashes related.
* **hyperloglog** - Data type: hyperloglog related.
* **fast** - Fast O(1) commands. May loop on the number of arguments, but not the
number of elements in the key.
* **keyspace** - Writing or reading from keys, databases, or their metadata
in a type agnostic way. Includes `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`,
`KEYS`, `EXPIRE`, `TTL`, `FLUSHALL`, etc. Commands that may modify the keyspace,
key, or metadata will also have the `write` category. Commands that only read
the keyspace, key, or metadata will have the `read` category.
* **list** - Data type: lists related.
* **pubsub** - PubSub-related commands.
* **read** - Reading from keys (values or metadata). Note that commands that don't
interact with keys, will not have either `read` or `write`.
* **scripting** - Scripting related.
* **set** - Data type: sets related.
* **sortedset** - Data type: sorted sets related.
* **slow** - All commands that are not `fast`.
* **stream** - Data type: streams related.
* **string** - Data type: strings related.
* **transaction** - `WATCH` / `MULTI` / `EXEC` related commands.
* **write** - Writing to keys (values or metadata).
Redis can also show you a list of all categories and the exact commands each category includes using the Redis `ACL CAT` command. It can be used in two forms:
    ACL CAT                  -- Will just list all the categories available
    ACL CAT <category-name>  -- Will list all the commands inside the category
Examples:
    > ACL CAT
    1) "keyspace"
    2) "read"
    3) "write"
    4) "set"
    5) "sortedset"
    6) "list"
    7) "hash"
    8) "string"
    9) "bitmap"
    10) "hyperloglog"
    11) "geo"
    12) "stream"
    13) "pubsub"
    14) "admin"
    15) "fast"
    16) "slow"
    17) "blocking"
    18) "dangerous"
    19) "connection"
    20) "transaction"
    21) "scripting"
As you can see, so far there are 21 distinct categories. Now let's check which
commands are part of the *geo* category:
    > ACL CAT geo
    1) "geohash"
    2) "georadius_ro"
    3) "georadiusbymember"
    4) "geopos"
    5) "geoadd"
    6) "georadiusbymember_ro"
    7) "geodist"
    8) "georadius"
    9) "geosearch"
    10) "geosearchstore"
Note that commands may be part of multiple categories. For example, an
ACL rule like `+@geo -@read` will result in certain geo commands being
excluded because they are read-only commands.
## Allow/block subcommands
Starting from Redis 7.0, subcommands can be allowed/blocked just like other
commands (by using the separator `|` between the command and subcommand, for
example: `+config|get` or `-config|set`).
That is true for all commands except DEBUG. In order to allow/block specific
DEBUG subcommands, see the next section.
## Allow the first-arg of a blocked command
**Note: This feature is deprecated since Redis 7.0 and may be removed in the future.**
Sometimes the ability to exclude or include a command or a subcommand as a whole is not enough.
Many deployments may not be happy providing the ability to execute a `SELECT` for any DB, but may
still want to be able to run `SELECT 0`.
In such a case we could alter the ACL of a user in the following way:
    ACL SETUSER myuser -select +select|0
First, remove the `SELECT` command and then add the allowed
first-arg. Note that **it is not possible to do the reverse** since first-args
can be only added, not excluded. It is safer to specify all the first-args
that are valid for some user since it is possible that
new first-args may be added in the future.
Another example:
    ACL SETUSER myuser -debug +debug|digest
Note that first-arg matching may add some performance penalty; however, it is hard to measure even with synthetic benchmarks. The
additional CPU cost is only paid when such commands are called, and not when
other commands are called.
It is possible to use this mechanism in order to allow subcommands in Redis
versions prior to 7.0 (see above section).
## +@all VS -@all
In the previous section, it was observed how it is possible to define command
ACLs based on adding/removing single commands.
## Selectors
Starting with Redis 7.0, Redis supports adding multiple sets of rules that are evaluated independently of each other.
These secondary sets of permissions are called selectors and added by wrapping a set of rules within parentheses.
In order to execute a command, either the root permissions (rules defined outside of parentheses) or any of the selectors (rules defined inside parentheses) must match the given command.
Internally, the root permissions are checked first, followed by selectors in the order they were added.
For example, consider a user with the ACL rules `+GET ~key1 (+SET ~key2)`.
This user is able to execute `GET key1` and `SET key2 hello`, but not `GET key2` or `SET key1 world`.
Unlike the user's root permissions, selectors cannot be modified after they are added.
Instead, selectors can be removed with the `clearselectors` keyword, which removes all of the added selectors.
Note that `clearselectors` does not remove the root permissions.
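The evaluation order described above can be sketched in a few lines of Python (illustrative only, not Redis source; the data layout is made up):

```python
from fnmatch import fnmatchcase

# Sketch of the evaluation order for a user with "+GET ~key1 (+SET ~key2)":
# root permissions first, then each selector in definition order; a command
# runs if ANY permission set fully matches. Illustrative only.
permission_sets = [
    {"commands": {"get"}, "keys": ["key1"]},  # root permissions
    {"commands": {"set"}, "keys": ["key2"]},  # first selector
]

def can_run(command, key):
    return any(
        command in p["commands"] and any(fnmatchcase(key, pat) for pat in p["keys"])
        for p in permission_sets
    )

print(can_run("get", "key1"))  # True
print(can_run("set", "key1"))  # False
```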
## Key permissions
Starting with Redis 7.0, key patterns can also be used to define how a command is able to touch a key.
This is achieved through rules that define key permissions.
The key permission rules take the form of `%(<permission>)~<pattern>`.
Permissions are defined as individual characters that map to the following key permissions:
* W (Write): The data stored within the key may be updated or deleted.
* R (Read): User supplied data from the key is processed, copied or returned. Note that this does not include metadata such as size information (e.g., `STRLEN`), type information (e.g., `TYPE`) or information about whether a value exists within a collection (e.g., `SISMEMBER`).
Permissions can be composed together by specifying multiple characters.
Specifying the permission as 'RW' is considered full access and is analogous to just passing in `~<pattern>`.
For a concrete example, consider a user with ACL rules `+@all ~app1:* (+@read ~app2:*)`.
This user has full access on `app1:*` and readonly access on `app2:*`.
However, some commands support reading data from one key, doing some transformation, and storing it into another key.
One such command is the `COPY` command, which copies the data from the source key into the destination key.
The example set of ACL rules is unable to handle a request copying data from `app2:user` into `app1:user`, since neither the root permission nor the selector fully matches the command.
However, using key permissions you can define a set of ACL rules that can handle this request: `+@all ~app1:* %R~app2:*`.
The first pattern is able to match `app1:user` and the second pattern is able to match `app2:user`.
Which type of permission is required for a command is documented through [key specifications](/topics/key-specs#logical-operation-flags).
The type of permission is based on the key's logical operation flags.
The insert, update, and delete flags map to the write key permission.
The access flag maps to the read key permission.
If a command's key has no logical operation flags, as is the case with `EXISTS`, the user still needs either key read or key write permissions to execute the command.
Note: Side channels to accessing user data are ignored when it comes to evaluating whether read permissions are required to execute a command.
This means that some write commands that return metadata about the modified key only require write permission on the key to execute.
For example, consider the following two commands:
* `LPUSH key1 data`: modifies "key1" but only returns metadata about it (the size of the list after the push), so the command only requires write permission on "key1" to execute.
* `LPOP key2`: modifies "key2" but also returns data from it (the left-most item in the list), so the command requires both read and write permission on "key2" to execute.
If an application needs to make sure no data is accessed from a key, including side channels, it's recommended to not provide any access to the key.
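The flag-to-permission mapping described above can be sketched as follows. This is illustrative Python, not Redis source; the exact flags assumed for `LPUSH` and `LPOP` are taken from the two examples above and are assumptions, the authoritative source being the Redis command table and key specifications:

```python
# Illustrative sketch of how key-spec logical operation flags map to the
# R/W key permissions above. Not Redis source.
FLAG_TO_PERMISSION = {
    "insert": "W", "update": "W", "delete": "W",  # write-like operations
    "access": "R",                                # read-like operations
}

def required_permissions(flags):
    return {FLAG_TO_PERMISSION[f] for f in flags}

# LPUSH modifies the key but returns only metadata: assumed flags ["insert"].
print(sorted(required_permissions(["insert"])))            # ['W']
# LPOP modifies the key and returns its data: assumed flags ["delete", "access"].
print(sorted(required_permissions(["delete", "access"])))  # ['R', 'W']
```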
## How passwords are stored internally
Redis internally stores passwords hashed with SHA256. If you set a password
and check the output of `ACL LIST` or `ACL GETUSER`, you'll see a long hex
string that looks pseudo random. Here is an example, because in the previous
examples, for the sake of brevity, the long hex string was trimmed:
    > ACL GETUSER default
    1) "flags"
    2) 1) "on"
    3) "passwords"
    4) 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
    5) "commands"
    6) "+@all"
    7) "keys"
    8) "~*"
    9) "channels"
    10) "&*"
    11) "selectors"
    12) (empty array)
Using SHA256 provides the ability to avoid storing the password in clear text
while still allowing for a very fast `AUTH` command, which is a very important
feature of Redis and is coherent with what clients expect from Redis.
However ACL *passwords* are not really passwords. They are shared secrets
between the server and the client, because the password is
not an authentication token used by a human being. For instance:
* There are no length limits; the password will just be memorized in some client software. There is no human that needs to recall a password in this context.
* The ACL password does not protect any other thing. For example, it will never be the password for some email account.
* Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.
For this reason, slowing down password authentication, by using an
algorithm that consumes time and space to make password cracking hard,
is a very poor choice. What we suggest instead is to generate strong
passwords, so that nobody will be able to crack them using a
dictionary or a brute force attack even if they have the hash. To do so, there is a special ACL
command `ACL GENPASS` that generates passwords using the system cryptographic pseudorandom
generator:
    > ACL GENPASS
    "dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"
The command outputs a 32-byte (256-bit) pseudorandom string converted to a
64-character hexadecimal string. This is long enough to avoid attacks and short
enough to be easy to manage, cut & paste, store, and so forth. This is what
you should use in order to generate Redis passwords.
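A secret with the same shape (32 bytes from the OS cryptographic random number generator, rendered as 64 hex characters) can also be generated client-side, for example with Python's `secrets` module. A sketch, not a replacement for `ACL GENPASS`:

```python
import secrets

# Generate a secret with the same shape as ACL GENPASS output:
# 32 bytes from the OS CSPRNG, rendered as 64 lowercase hex characters.
password = secrets.token_hex(32)
print(len(password))  # 64
```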
## Use an external ACL file
There are two ways to store users inside the Redis configuration:
1. Users can be specified directly inside the `redis.conf` file.
2. It is possible to specify an external ACL file.
The two methods are *mutually incompatible*, so Redis will ask you to use one
or the other. Specifying users inside `redis.conf` is
good for simple use cases. When there are multiple users to define, in a
complex environment, we recommend you use the ACL file instead.
The format used inside `redis.conf` and in the external ACL file is exactly
the same, so it is trivial to switch from one to the other, and is
the following:
    user <username> ... acl rules ...
For instance:
    user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
When you want to use an external ACL file, you are required to specify
the configuration directive called `aclfile`, like this:
    aclfile /etc/redis/users.acl
When you are just specifying a few users directly inside the `redis.conf`
file, you can use `CONFIG REWRITE` in order to store the new user configuration
inside the file by rewriting it.
The external ACL file however is more powerful. You can do the following:
* Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*. Otherwise, an error is reported to the user, and the old configuration will remain valid.
* Use `ACL SAVE` to save the current ACL configuration to the ACL file.
Note that `CONFIG REWRITE` does not trigger `ACL SAVE`: when you use
an ACL file, the configuration and the ACLs are handled separately.
## ACL rules for Sentinel and Replicas
In case you don't want to provide Redis replicas and Redis Sentinel instances
full access to your Redis instances, the following is the set of commands
that must be allowed in order for everything to work correctly.
For Sentinel, allow the user to access the following commands both in the master and replica instances:
* AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF, CONFIG, EXEC.
Sentinel does not need to access any key in the database but does use Pub/Sub, so the ACL rule would be the following (note: `AUTH` is not needed since it is always allowed):
    ACL SETUSER sentinel-user on >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill
Redis replicas require the following commands to be allowed on the master instance:
* PSYNC, REPLCONF, PING
No keys need to be accessed, so this translates to the following rules:
    ACL SETUSER replica-user on >somepassword +psync +replconf +ping
Note that you don't need to configure the replicas to allow the master to be able to execute any set of commands. The master is always authenticated as the root user from the point of view of replicas. | redis | title ACL linkTitle ACL weight 1 description Redis Access Control List aliases topics acl docs manual security acl docs manual security acl md The Redis ACL short for Access Control List is the feature that allows certain connections to be limited in terms of the commands that can be executed and the keys that can be accessed The way it works is that after connecting a client is required to provide a username and a valid password to authenticate If authentication succeeded the connection is associated with a given user and the limits the user has Redis can be configured so that new connections are already authenticated with a default user this is the default configuration Configuring the default user has as a side effect the ability to provide only a specific subset of functionalities to connections that are not explicitly authenticated In the default configuration Redis 6 the first version to have ACLs works exactly like older versions of Redis Every new connection is capable of calling every possible command and accessing every key so the ACL feature is backward compatible with old clients and applications Also the old way to configure a password using the requirepass configuration directive still works as expected However it now sets a password for the default user The Redis AUTH command was extended in Redis 6 so now it is possible to use it in the two arguments form AUTH username password Here s an example of the old form AUTH password What happens is that the username used to authenticate is default so just specifying the password implies that we want to authenticate against the default user This provides backward compatibility When ACLs are useful Before using ACLs you may want to ask yourself what s the goal you want to accomplish by implementing 
this layer of protection Normally there are two main goals that are well served by ACLs 1 You want to improve security by restricting the access to commands and keys so that untrusted clients have no access and trusted clients have just the minimum access level to the database in order to perform the work needed For instance certain clients may just be able to execute read only commands 2 You want to improve operational safety so that processes or humans accessing Redis are not allowed to damage the data or the configuration due to software errors or manual mistakes For instance there is no reason for a worker that fetches delayed jobs from Redis to be able to call the FLUSHALL command Another typical usage of ACLs is related to managed Redis instances Redis is often provided as a managed service both by internal company teams that handle the Redis infrastructure for the other internal customers they have or is provided in a software as a service setup by cloud providers In both setups we want to be sure that configuration commands are excluded for the customers Configure ACLs with the ACL command ACLs are defined using a DSL domain specific language that describes what a given user is allowed to do Such rules are always implemented from the first to the last left to right because sometimes the order of the rules is important to understand what the user is really able to do By default there is a single user defined called default We can use the ACL LIST command in order to check the currently active ACLs and verify what the configuration of a freshly started defaults configured Redis instance is ACL LIST 1 user default on nopass all The command above reports the list of users in the same format that is used in the Redis configuration files by translating the current ACLs set for the users back into their description The first two words in each line are user followed by the username The next words are ACL rules that describe different things We ll show how the rules 
work in detail but for now it is enough to say that the default user is configured to be active on to require no password nopass to access every possible key and Pub Sub channel and be able to call every possible command all Also in the special case of the default user having the nopass rule means that new connections are automatically authenticated with the default user without any explicit AUTH call needed ACL rules The following is the list of valid ACL rules Certain rules are just single words that are used in order to activate or remove a flag or to perform a given change to the user ACL Other rules are char prefixes that are concatenated with command or category names key patterns and so forth Enable and disallow users on Enable the user it is possible to authenticate as this user off Disallow the user it s no longer possible to authenticate with this user however previously authenticated connections will still work Note that if the default user is flagged as off new connections will start as not authenticated and will require the user to send AUTH or HELLO with the AUTH option in order to authenticate in some way regardless of the default user configuration Allow and disallow commands command Add the command to the list of commands the user can call Can be used with for allowing subcommands e g config get command Remove the command to the list of commands the user can call Starting Redis 7 0 it can be used with for blocking subcommands e g config set category Add all the commands in such category to be called by the user with valid categories being like admin set sortedset and so forth see the full list by calling the ACL CAT command The special category all means all the commands both the ones currently present in the server and the ones that will be loaded in the future via modules category Like category but removes the commands from the list of commands the client can call command first arg Allow a specific first argument of an otherwise disabled command 
It is only supported on commands with no sub-commands, and is not allowed as negative form like `-SELECT|1`, only additive starting with `+`. This feature is deprecated and may be removed in the future.
* `allcommands`: Alias for `+@all`. Note that it implies the ability to execute all the future commands loaded via the modules system.
* `nocommands`: Alias for `-@all`.

Allow and disallow certain keys and key permissions:

* `~<pattern>`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of `KEYS`. It is possible to specify multiple patterns.
* `%R~<pattern>`: (Available in Redis 7.0 and later) Add the specified read key pattern. This behaves similar to the regular key pattern but only grants permission to read from keys that match the given pattern. See [key permissions](#key-permissions) for more information.
* `%W~<pattern>`: (Available in Redis 7.0 and later) Add the specified write key pattern. This behaves similar to the regular key pattern but only grants permission to write to keys that match the given pattern. See [key permissions](#key-permissions) for more information.
* `%RW~<pattern>`: (Available in Redis 7.0 and later) Alias for `~<pattern>`.
* `allkeys`: Alias for `~*`.
* `resetkeys`: Flush the list of allowed keys patterns. For instance the ACL `~foo:* ~bar:* resetkeys ~objects:*` will only allow the client to access keys that match the pattern `objects:*`.

Allow and disallow Pub/Sub channels:

* `&<pattern>`: (Available in Redis 6.2 and later) Add a glob-style pattern of Pub/Sub channels that can be accessed by the user. It is possible to specify multiple channel patterns. Note that pattern matching is done only for channels mentioned by `PUBLISH` and `SUBSCRIBE`, whereas `PSUBSCRIBE` requires a literal match between its channel patterns and those allowed for user.
* `allchannels`: Alias for `&*` that allows the user to access all Pub/Sub channels.
* `resetchannels`: Flush the list of allowed channel patterns and disconnect the user's Pub/Sub clients if these are no longer able to access their respective channels and/or channel patterns.

Configure valid passwords for the user:

* `><password>`: Add this password to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the `nopass` flag (see later). Every user can have any number of passwords.
* `<<password>`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set.
* `#<hash>`: Add this SHA-256 hash value to the list of valid passwords for the user. This hash value will be compared to the hash of a password entered for an ACL user. This allows users to store hashes in the `acl.conf` file rather than storing cleartext passwords. Only SHA-256 hash values are accepted, as the password hash must be 64 characters and only contain lowercase hexadecimal characters.
* `!<hash>`: Remove this hash value from the list of valid passwords. This is useful when you do not know the password specified by the hash value but would like to remove the password from the user.
* `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the `resetpass` directive will clear this condition.
* `resetpass`: Flushes the list of allowed passwords and removes the `nopass` status. After `resetpass`, the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as `nopass` later).

Note: if a user is not flagged with `nopass` and has no list of valid passwords, that user is effectively impossible to use, because there will be no way to log in as that user.

Configure selectors for the user:

* `(<rule list>)`: (Available in Redis 7.0 and later) Create a new selector to match rules against. Selectors are evaluated after the user permissions, and are evaluated according to the
order they are defined. If a command matches either the user permissions or any selector, it is allowed. See [selectors](#selectors) for more information.
* `clearselectors`: (Available in Redis 7.0 and later) Delete all of the selectors attached to the user.

Reset the user:

* `reset`: Performs the following actions: `resetpass`, `resetkeys`, `resetchannels`, `allchannels` (if `acl-pubsub-default` is set), `off`, `clearselectors`, `-@all`. The user returns to the same state it had immediately after its creation.

## Create and edit user ACLs with the ACL SETUSER command

Users can be created and modified in two main ways:

1. Using the ACL command and its `ACL SETUSER` subcommand.
2. Modifying the server configuration, where users can be defined, and restarting the server. With an external ACL file, just call `ACL LOAD`.

In this section we'll learn how to define users using the ACL command. With such knowledge, it will be trivial to do the same things via the configuration files. Defining users in the configuration deserves its own section and will be discussed later separately.

To start, try the simplest `ACL SETUSER` command call:

```
> ACL SETUSER alice
OK
```

The `ACL SETUSER` command takes the username and a list of ACL rules to apply to the user. However the above example did not specify any rule at all. This will just create the user if it did not exist, using the defaults for new users. If the user already exists, the command above will do nothing at all.

Check the default user status:

```
> ACL LIST
1) "user alice off resetchannels -@all"
2) "user default on nopass ~* &* +@all"
```

The new user "alice" is:

* In the off status, so `AUTH` will not work for the user "alice".
* The user also has no passwords set.
* Cannot access any command. Note that the user is created by default without the ability to access any command, so the `-@all` in the output above could be omitted; however, `ACL LIST` attempts to be explicit rather than implicit.
* There are no key patterns that the user can access.
* There are no Pub/Sub channels that the user can access.

New users are created with restrictive permissions by default.
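The default-deny behavior can be sketched as a small toy model. This is purely illustrative (the class and method names below are invented for the example, and Redis does not implement ACLs this way), but it shows why a freshly created user can do nothing until rules are added, and that key patterns are glob-style:

```python
from fnmatch import fnmatch  # ACL key patterns are glob-style, like KEYS patterns


class ToyAclUser:
    """Toy model of a freshly created ACL user: off, no commands, no keys."""

    def __init__(self):
        self.enabled = False       # "off"
        self.commands = set()      # "-@all": no commands allowed
        self.key_patterns = []     # no ~<pattern> rules

    def can_run(self, command, key=None):
        # Disabled users and unlisted commands are always denied.
        if not self.enabled or command not in self.commands:
            return False
        # If the command touches a key, some key pattern must match it.
        if key is not None and not any(fnmatch(key, p) for p in self.key_patterns):
            return False
        return True


user = ToyAclUser()
assert not user.can_run("get", "cached:1234")  # everything denied by default

# Rules are applied incrementally, roughly like "on ~cached:* +get":
user.enabled = True
user.commands.add("get")
user.key_patterns.append("cached:*")
assert user.can_run("get", "cached:1234")
assert not user.can_run("set", "cached:1234")  # SET was never granted
```

The real evaluation in Redis is richer (categories, selectors, channel patterns), but the additive, default-deny shape is the same.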
Starting with Redis 6.2, ACL provides Pub/Sub channels access management as well. To ensure backward compatibility with version 6.0 when upgrading to Redis 6.2, new users are granted the `allchannels` permission by default. The default can be set to `resetchannels` via the `acl-pubsub-default` configuration directive.

From Redis 7.0, the `acl-pubsub-default` value is set to `resetchannels` to restrict the channels access by default, to provide better security. The default can be set to `allchannels` via the `acl-pubsub-default` configuration directive to be compatible with previous versions.

Such user is completely useless. Let's try to define the user so that it is active, has a password, and can access, with only the `GET` command, key names starting with the string "cached:":

```
> ACL SETUSER alice on >p1pp0 ~cached:* +get
OK
```

Now the user can do something, but will refuse to do other things:

```
> AUTH alice p1pp0
OK
> GET foo
(error) NOPERM this user has no permissions to access one of the keys used as arguments
> GET cached:1234
(nil)
> SET cached:1234 zap
(error) NOPERM this user has no permissions to run the 'set' command
```

Things are working as expected. In order to inspect the configuration of the user alice (remember that user names are case sensitive), it is possible to use an alternative to `ACL LIST` which is designed to be more suitable for computers to read, while `ACL GETUSER` is more human readable.

```
> ACL GETUSER alice
 1) "flags"
 2) 1) "on"
 3) "passwords"
 4) 1) "2d9c75..."
 5) "commands"
 6) "-@all +get"
 7) "keys"
 8) "~cached:*"
 9) "channels"
10) ""
11) "selectors"
12) (empty array)
```

The `ACL GETUSER` returns a field-value array that describes the user in more parsable terms. The output includes the set of flags, a list of key patterns, passwords, and so forth. The output is probably more readable if we use RESP3, so that it is returned as a map reply:

```
> ACL GETUSER alice
1# "flags" => 1~ "on"
2# "passwords" => 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
3# "commands" => "-@all +get"
4# "keys" => "~cached:*"
5# "channels" => ""
6# "selectors" => (empty array)
```

Note: from now on, we'll continue using the Redis default
protocol, version 2.

Using another `ACL SETUSER` command (from a different user, because alice cannot run the ACL command), we can add multiple patterns to the user:

```
> ACL SETUSER alice ~objects:* ~items:* ~public:*
OK
> ACL LIST
1) "user alice on #2d9c75... ~cached:* ~objects:* ~items:* ~public:* resetchannels -@all +get"
2) "user default on nopass ~* &* +@all"
```

The user representation in memory is now as we expect it to be.

## Multiple calls to ACL SETUSER

It is very important to understand what happens when `ACL SETUSER` is called multiple times. What is critical to know is that every `ACL SETUSER` call will NOT reset the user, but will just apply the ACL rules to the existing user.

The user is reset only if it was not known before. In that case, a brand new user is created with zeroed ACLs. The user cannot do anything, is disallowed, has no passwords, and so forth. This is the best default for safety.

However later calls will just modify the user incrementally. For instance, the following sequence:

```
> ACL SETUSER myuser +set
OK
> ACL SETUSER myuser +get
OK
```

Will result in myuser being able to call both `GET` and `SET`:

```
> ACL LIST
1) "user default on nopass ~* &* +@all"
2) "user myuser off resetchannels -@all +get +set"
```

## Command categories

Setting user ACLs by specifying all the commands one after the other is really annoying, so instead we do things like this:

```
> ACL SETUSER antirez on +@all -@dangerous >42a979... ~*
```

By saying `+@all` and `-@dangerous`, we included all the commands and later removed all the commands that are tagged as dangerous inside the Redis command table. Note that command categories never include modules commands, with the exception of `+@all`. If you say `+@all`, all the commands can be executed by the user, even future commands loaded via the modules system.

However, if you use the ACL rule `+@read` or any other, the modules commands are always excluded. This is very important because you should just trust the Redis internal command table. Modules may expose dangerous things, and in the case of an ACL that is just additive, that is, in the form of `+@all -...`, you should be absolutely sure that you'll
never include what you did not mean to.

The following is a list of command categories and their meanings:

* **admin** - Administrative commands. Normal applications will never need to use these. Includes `REPLICAOF`, `CONFIG`, `DEBUG`, `SAVE`, `MONITOR`, `ACL`, `SHUTDOWN`, etc.
* **bitmap** - Data type: bitmaps related.
* **blocking** - Potentially blocking the connection until released by another command.
* **connection** - Commands affecting the connection or other connections. This includes `AUTH`, `SELECT`, `COMMAND`, `CLIENT`, `ECHO`, `PING`, etc.
* **dangerous** - Potentially dangerous commands (each should be considered with care for various reasons). This includes `FLUSHALL`, `MIGRATE`, `RESTORE`, `SORT`, `KEYS`, `CLIENT`, `DEBUG`, `INFO`, `CONFIG`, `SAVE`, `REPLICAOF`, etc.
* **geo** - Data type: geospatial indexes related.
* **hash** - Data type: hashes related.
* **hyperloglog** - Data type: hyperloglog related.
* **fast** - Fast O(1) commands. May loop on the number of arguments, but not the number of elements in the key.
* **keyspace** - Writing or reading from keys, databases, or their metadata in a type-agnostic way. Includes `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`, `KEYS`, `EXPIRE`, `TTL`, `FLUSHALL`, etc. Commands that may modify the keyspace, key, or metadata will also have the `write` category. Commands that only read the keyspace, key, or metadata will have the `read` category.
* **list** - Data type: lists related.
* **pubsub** - Pub/Sub related commands.
* **read** - Reading from keys (values or metadata). Note that commands that don't interact with keys will not have either `read` or `write`.
* **scripting** - Scripting related.
* **set** - Data type: sets related.
* **sortedset** - Data type: sorted sets related.
* **slow** - All commands that are not `fast`.
* **stream** - Data type: streams related.
* **string** - Data type: strings related.
* **transaction** - `WATCH` / `MULTI` / `EXEC` related commands.
* **write** - Writing to keys (values or metadata).

Redis can also show you a list of all categories and the exact commands each category includes using the Redis `ACL CAT` command. It can be used in two forms:

```
ACL CAT                  -- Will just list all the categories available
ACL CAT <category-name>  -- Will list all the commands inside the category
```

Examples:

```
> ACL CAT
 1) "keyspace"
 2) "read"
 3) "write"
 4) "set"
 5) "sortedset"
 6) "list"
 7) "hash"
 8) "string"
 9) "bitmap"
10) "hyperloglog"
11) "geo"
12) "stream"
13) "pubsub"
14) "admin"
15) "fast"
16) "slow"
17) "blocking"
18) "dangerous"
19) "connection"
20) "transaction"
21) "scripting"
```

As you can see, so far there are 21 distinct categories. Now let's check what command is part of the `geo` category:

```
> ACL CAT geo
 1) "geohash"
 2) "georadius_ro"
 3) "georadiusbymember"
 4) "geopos"
 5) "geoadd"
 6) "georadiusbymember_ro"
 7) "geodist"
 8) "georadius"
 9) "geosearch"
10) "geosearchstore"
```

Note that commands may be part of multiple categories. For example, an ACL rule like `+@geo -@read` will result in certain geo commands to be excluded because they are read-only commands.

## Allow/block subcommands

Starting from Redis 7.0, subcommands can be allowed/blocked just like other commands (by using the separator `|` between the command and subcommand, for example: `+config|get` or `-config|set`).

That is true for all commands except `DEBUG`. In order to allow/block specific `DEBUG` subcommands, see the next section.

## Allow the first-arg of a blocked command

Note: This feature is deprecated since Redis 7.0 and may be removed in the future.

Sometimes the ability to exclude or include a command or a subcommand as a whole is not enough. Many deployments may not be happy providing the ability to execute a `SELECT` for any DB, but may still want to be able to run `SELECT 0`.

In such case we could alter the ACL of a user in the following way:

```
ACL SETUSER myuser -select +select|0
```

First, remove the `SELECT` command and then add the allowed first-arg. Note that it is not possible to do the reverse, since first-args can be only added, not excluded. It is safer to specify all the first-args that are valid for some user, since it is possible that new first-args may be added in the future.

Another example:

```
ACL SETUSER myuser -debug +debug|digest
```

Note that first-arg matching may add some performance penalty; however, it is hard to measure even with synthetic benchmarks. The additional CPU cost is only paid when such commands are called, and not when other commands are called.

It is possible
to use this mechanism in order to allow subcommands in Redis versions prior to 7.0 (see above section).

## +@all VS -@all

In the previous section, it was observed how it is possible to define command ACLs based on adding/removing single commands.

## Selectors

Starting with Redis 7.0, Redis supports adding multiple sets of rules that are evaluated independently of each other. These secondary sets of permissions are called selectors, and added by wrapping a set of rules within parentheses. In order to execute a command, either the root permissions (rules defined outside of parentheses) or any of the selectors (rules defined inside parentheses) must match the given command. Internally, the root permissions are checked first, followed by selectors in the order they were added.

For example, consider a user with the ACL rules `+GET ~key1 (+SET ~key2)`. This user is able to execute `GET key1` and `SET key2 hello`, but not `GET key2` or `SET key1 world`.

Unlike the user's root permissions, selectors cannot be modified after they are added. Instead, selectors can be removed with the `clearselectors` keyword, which removes all of the added selectors. Note that `clearselectors` does not remove the root permissions.

## Key permissions

Starting with Redis 7.0, key patterns can also be used to define how a command is able to touch a key. This is achieved through rules that define key permissions. The key permission rules take the form of `%(<permission>)~<pattern>`. Permissions are defined as individual characters that map to the following key permissions:

* W (Write): The data stored within the key may be updated or deleted.
* R (Read): User supplied data from the key is processed, copied or returned. Note that this does not include metadata such as size information (example `STRLEN`), type information (example `TYPE`), or information about whether a value exists within a collection (example `SISMEMBER`).

Permissions can be composed together by specifying multiple characters. Specifying the permission as `RW` is considered full access and is analogous to just passing in `~<pattern>`. For
a concrete example, consider a user with the ACL rules `+@all ~app1:* (+@read ~app2:*)`. This user has full access on `app1:*` and readonly access on `app2:*`. However, some commands support reading data from one key, doing some transformation, and storing it into another key. One such command is the `COPY` command, which copies the data from the source key into the destination key. The example set of ACL rules is unable to handle a request copying data from `app2:user` into `app1:user`, since neither the root permission nor the selector fully matches the command. However, using key selectors you can define a set of ACL rules that can handle this request: `+@all ~app1:* %R~app2:*`. The first pattern is able to match `app1:user` and the second pattern is able to match `app2:user`.

Which type of permission is required for a command is documented through [key specifications](/topics/key-specs#logical-operation-flags). The type of permission is based off the keys' logical operation flags. The insert, update, and delete flags map to the write key permission. The access flag maps to the read key permission. If the key has no logical operation flags, such as `EXISTS`, the user still needs either key read or key write permissions to execute the command.

Note: Side channels to accessing user data are ignored when it comes to evaluating whether read permissions are required to execute a command. This means that some write commands that return metadata about the modified key only require write permission on the key to execute. For example, consider the following two commands:

* `LPUSH key1 data`: modifies "key1" but only returns metadata about it (the size of the list after the push), so the command only requires write permission on "key1" to execute.
* `LPOP key2`: modifies "key2" but also returns data from it (the left-most item in the list), so the command requires both read and write permission on "key2" to execute.

If an application needs to make sure no data is accessed from a key, including side channels, it's recommended to not provide any access to the key.

## How passwords are stored internally

Redis internally stores passwords hashed with SHA256. If you set a password and check the output of `ACL LIST` or `ACL GETUSER`, you'll see a long hex string that looks pseudo random. Here is an example, because in the previous examples, for the sake of brevity, the long hex string was trimmed:

```
> ACL GETUSER default
 1) "flags"
 2) 1) "on"
 3) "passwords"
 4) 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
 5) "commands"
 6) "+@all"
 7) "keys"
 8) "~*"
 9) "channels"
10) "&*"
11) "selectors"
12) (empty array)
```

Using SHA256 provides the ability to avoid storing the password in clear text while still allowing for a very fast `AUTH` command, which is a very important feature of Redis and is coherent with what clients expect from Redis.

However ACL *passwords* are not really passwords. They are shared secrets between the server and the client, because the password is not an authentication token used by a human being. For instance:

* There are no length limits; the password will just be memorized in some client software. There is no human that needs to recall a password in this context.
* The ACL password does not protect any other thing. For example, it will never be the password for some email account.
* Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.

For this reason, slowing down the password authentication, in order to use an algorithm that uses time and space to make password cracking hard, is a very poor choice. What we suggest instead is to generate strong passwords, so that nobody will be able to crack it using a dictionary or a brute force attack even if they have the hash. To do so, there is a special ACL command `ACL GENPASS` that generates passwords using the system cryptographic pseudorandom generator:

```
> ACL GENPASS
"dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"
```

The command outputs a
32-byte (256-bit) pseudorandom string converted to a 64-byte alphanumerical string. This is long enough to avoid attacks and short enough to be easy to manage, cut & paste, store, and so forth. This is what you should use in order to generate Redis passwords.

## Use an external ACL file

There are two ways to store users inside the Redis configuration:

1. Users can be specified directly inside the `redis.conf` file.
2. It is possible to specify an external ACL file.

The two methods are *mutually incompatible*, so Redis will ask you to use one or the other. Specifying users inside `redis.conf` is good for simple use cases. When there are multiple users to define, in a complex environment, we recommend you use the ACL file instead.

The format used inside `redis.conf` and in the external ACL file is exactly the same, so it is trivial to switch from one to the other, and is the following:

```
user <username> ... acl rules ...
```

For instance:

```
user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
```

When you want to use an external ACL file, you are required to specify the configuration directive called `aclfile`, like this:

```
aclfile /etc/redis/users.acl
```

When you are just specifying a few users directly inside the `redis.conf` file, you can use `CONFIG REWRITE` in order to store the new user configuration inside the file by rewriting it.

The external ACL file however is more powerful. You can do the following:

* Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*. Otherwise, an error is reported to the user, and the old configuration will remain valid.
* Use `ACL SAVE` to save the current ACL configuration to the ACL file.

Note that `CONFIG REWRITE` does not also trigger `ACL SAVE`: when you use an ACL file, the configuration and the ACLs are handled separately.

## ACL rules for Sentinel and Replicas

In case you don't want to provide Redis replicas and Redis Sentinel instances full access to your Redis instances, the
following is the set of commands that must be allowed in order for everything to work correctly.

For Sentinel, allow the user to access the following commands, both in the master and replica instances:

* AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF, CONFIG, CLIENT, EXEC

Sentinel does not need to access any key in the database but does use Pub/Sub, so the ACL rule would be the following (note: AUTH is not needed since it is always allowed):

```
ACL SETUSER sentinel-user on >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill
```

Redis replicas require the following commands to be allowed on the master instance:

* PSYNC, REPLCONF, PING

No keys need to be accessed, so this translates to the following rules:

```
ACL SETUSER replica-user on >somepassword +psync +replconf +ping
```

Note that you don't need to configure the replicas to allow the master to be able to execute any set of commands. The master is always authenticated as the root user from the point of view of replicas.
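If you manage users through an external ACL file rather than live `ACL SETUSER` calls, the same two users can be expressed as file entries. This is a sketch that assumes the usernames and the placeholder password from the commands above; substitute real, generated passwords (for example from `ACL GENPASS`):

```
user sentinel-user on >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill
user replica-user on >somepassword +psync +replconf +ping
```

After editing the file, run `ACL LOAD` so the running server picks up the new definitions.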
---
title: "TLS"
linkTitle: "TLS"
weight: 1
description: Redis TLS support
aliases: [
/topics/encryption,
/docs/manual/security/encryption,
/docs/manual/security/encryption.md
]
---
SSL/TLS is supported by Redis starting with version 6 as an optional feature
that needs to be enabled at compile time.
## Getting Started
### Building
To build with TLS support you'll need OpenSSL development libraries (e.g.
`libssl-dev` on Debian/Ubuntu).
Build Redis with the following command:
```sh
make BUILD_TLS=yes
```
### Tests
To run the Redis test suite with TLS, you'll need TLS support for TCL (i.e. the
`tcl-tls` package on Debian/Ubuntu).
1. Run `./utils/gen-test-certs.sh` to generate a root CA and a server
certificate.
2. Run `./runtest --tls` or `./runtest-cluster --tls` to run Redis and Redis
Cluster tests in TLS mode.
### Running manually
To manually run a Redis server with TLS mode (assuming `gen-test-certs.sh` was
invoked so sample certificates/keys are available):
```sh
./src/redis-server --tls-port 6379 --port 0 \
    --tls-cert-file ./tests/tls/redis.crt \
    --tls-key-file ./tests/tls/redis.key \
    --tls-ca-cert-file ./tests/tls/ca.crt
```
To connect to this Redis server with `redis-cli`:
```sh
./src/redis-cli --tls \
    --cert ./tests/tls/redis.crt \
    --key ./tests/tls/redis.key \
    --cacert ./tests/tls/ca.crt
```
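From application code, the same mutual-TLS setup amounts to building a verifying client-side TLS context before speaking RESP over the socket. The sketch below uses only Python's standard `ssl` module; the helper name and the file-path arguments are ours, not part of Redis, and the paths would point at the same files passed to `redis-cli` above:

```python
import ssl


def redis_client_context(cacert=None, certfile=None, keyfile=None):
    """Build a client TLS context mirroring redis-cli's
    --tls --cacert/--cert/--key options (illustrative helper)."""
    # Validate the server certificate against the given CA bundle.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=cacert)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if certfile:
        # Present a client certificate, needed while tls-auth-clients is "yes".
        ctx.load_cert_chain(certfile, keyfile)
    return ctx


# e.g. redis_client_context("./tests/tls/ca.crt",
#                           "./tests/tls/redis.crt",
#                           "./tests/tls/redis.key")
ctx = redis_client_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # server cert is always verified
```

A client would then call `ctx.wrap_socket(sock, server_hostname=...)` on its TCP socket before sending any Redis commands; Redis client libraries expose equivalent TLS options so you rarely need to do this by hand.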
### Certificate configuration
In order to support TLS, Redis must be configured with an X.509 certificate and a
private key. In addition, it is necessary to specify a CA certificate bundle
file or path to be used as a trusted root when validating certificates. To
support DH based ciphers, a DH params file can also be configured. For example:
```
tls-cert-file /path/to/redis.crt
tls-key-file /path/to/redis.key
tls-ca-cert-file /path/to/ca.crt
tls-dh-params-file /path/to/redis.dh
```
### TLS listening port
The `tls-port` configuration directive enables accepting SSL/TLS connections on
the specified port. This is **in addition** to listening on `port` for TCP
connections, so it is possible to access Redis on different ports using TLS and
non-TLS connections simultaneously.
You may specify `port 0` to disable the non-TLS port completely. To enable only
TLS on the default Redis port, use:
```
port 0
tls-port 6379
```
### Client certificate authentication
By default, Redis uses mutual TLS and requires clients to authenticate with a
valid certificate (authenticated against trusted root CAs specified by
`tls-ca-cert-file` or `tls-ca-cert-dir`).
You may use `tls-auth-clients no` to disable client authentication.
### Replication
A Redis master server handles connecting clients and replica servers in the same
way, so the above `tls-port` and `tls-auth-clients` directives apply to
replication links as well.
On the replica server side, it is necessary to specify `tls-replication yes` to
use TLS for outgoing connections to the master.
### Cluster
When Redis Cluster is used, use `tls-cluster yes` in order to enable TLS for the
cluster bus and cross-node connections.
### Sentinel
Sentinel inherits its networking configuration from the common Redis
configuration, so all of the above applies to Sentinel as well.
When connecting to master servers, Sentinel will use the `tls-replication`
directive to determine if a TLS or non-TLS connection is required.
In addition, the very same `tls-replication` directive will determine whether Sentinel's
port, that accepts connections from other Sentinels, will support TLS as well. That is,
Sentinel will be configured with `tls-port` if and only if `tls-replication` is enabled.
### Additional configuration
Additional TLS configuration is available to control the choice of TLS protocol
versions, ciphers and cipher suites, etc. Please consult the self documented
`redis.conf` for more information.
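For example, a hardened deployment might pin protocol versions and cipher choices. The directive names below are real `redis.conf` options; the values are only an illustration (they match the commented examples shipped in `redis.conf`) and should be chosen to fit your own security policy:

```
tls-protocols "TLSv1.2 TLSv1.3"
tls-ciphers DEFAULT:!MEDIUM
tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256
tls-prefer-server-ciphers yes
```

Note that `tls-ciphers` configures the TLSv1.2-and-below cipher list, while `tls-ciphersuites` applies to TLSv1.3.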
### Performance considerations
TLS adds a layer to the communication stack with overheads due to writing/reading to/from an SSL connection, encryption/decryption and integrity checks. Consequently, using TLS results in a decrease of the achievable throughput per Redis instance (for more information refer to this [discussion](https://github.com/redis/redis/issues/7595)).
### Limitations
I/O threading is currently not supported with TLS. | redis | title TLS linkTitle TLS weight 1 description Redis TLS support aliases topics encryption docs manual security encryption docs manual security encryption md SSL TLS is supported by Redis starting with version 6 as an optional feature that needs to be enabled at compile time Getting Started Building To build with TLS support you ll need OpenSSL development libraries e g libssl dev on Debian Ubuntu Build Redis with the following command sh make BUILD TLS yes Tests To run Redis test suite with TLS you ll need TLS support for TCL i e tcl tls package on Debian Ubuntu 1 Run utils gen test certs sh to generate a root CA and a server certificate 2 Run runtest tls or runtest cluster tls to run Redis and Redis Cluster tests in TLS mode Running manually To manually run a Redis server with TLS mode assuming gen test certs sh was invoked so sample certificates keys are available src redis server tls port 6379 port 0 tls cert file tests tls redis crt tls key file tests tls redis key tls ca cert file tests tls ca crt To connect to this Redis server with redis cli src redis cli tls cert tests tls redis crt key tests tls redis key cacert tests tls ca crt Certificate configuration In order to support TLS Redis must be configured with a X 509 certificate and a private key In addition it is necessary to specify a CA certificate bundle file or path to be used as a trusted root when validating certificates To support DH based ciphers a DH params file can also be configured For example tls cert file path to redis crt tls key file path to redis key tls ca cert file path to ca crt tls dh params file path to redis dh TLS listening port The tls port configuration directive enables accepting SSL TLS connections on the specified port This is in addition to listening on port for TCP connections so it is possible to access Redis on different ports using TLS and non TLS connections simultaneously You may specify port 0 to disable the non 
---
title: "Redis security"
linkTitle: "Security"
weight: 1
description: Security model and features in Redis
aliases: [
/topics/security,
/docs/manual/security,
/docs/manual/security.md
]
---
This document provides an introduction to the topic of security from the point of
view of Redis. It covers the access control provided by Redis, code security concerns,
attacks that can be triggered from the outside by selecting malicious inputs, and
other similar topics.
You can learn more about access control, data protection and encryption, secure Redis architectures, and secure deployment techniques by taking the [Redis University security course](https://university.redis.com/courses/ru330/).
For security-related contacts, open an issue on GitHub, or when you feel it
is really important to preserve the security of the communication, use the
GPG key at the end of this document.
## Security model
Redis is designed to be accessed by trusted clients inside trusted environments.
This means that usually it is not a good idea to expose the Redis instance
directly to the internet or, in general, to an environment where untrusted
clients can directly access the Redis TCP port or UNIX socket.
For instance, in the common context of a web application implemented using Redis
as a database, cache, or messaging system, the clients inside the front-end
(web side) of the application will query Redis to generate pages or
to perform operations requested or triggered by the web application user.
In this case, the web application mediates access between Redis and
untrusted clients (the user browsers accessing the web application).
In general, untrusted access to Redis should
always be mediated by a layer implementing ACLs, validating user input,
and deciding what operations to perform against the Redis instance.
## Network security
Access to the Redis port should be denied to everybody but trusted clients
in the network, so the servers running Redis should be directly accessible
only by the computers implementing the application using Redis.
In the common case of a single computer directly exposed to the internet, such
as a virtualized Linux instance (Linode, EC2, ...), the Redis port should be
firewalled to prevent access from the outside. Clients will still be able to
access Redis using the loopback interface.
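As a hedged sketch (not a complete firewall policy), host firewall rules along these lines admit only a trusted application subnet — `10.0.0.0/24` is a placeholder for your own network — while keeping loopback access working:

```
# Always allow loopback traffic first, so local clients keep working.
iptables -A INPUT -i lo -j ACCEPT
# Allow the trusted application subnet (placeholder) to reach the Redis port.
iptables -A INPUT -p tcp --dport 6379 -s 10.0.0.0/24 -j ACCEPT
# Drop everything else arriving at the Redis port.
iptables -A INPUT -p tcp --dport 6379 -j DROP
```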
Note that it is possible to bind Redis to a single interface by adding a line
like the following to the **redis.conf** file:
bind 127.0.0.1
Failing to protect the Redis port from the outside can have a big security
impact because of the nature of Redis. For instance, a single `FLUSHALL` command can be used by an external attacker to delete the whole data set.
## Protected mode
Unfortunately, many users fail to protect Redis instances from being accessed
from external networks. Many instances are simply left exposed on the
internet with public IPs. Since version 3.2.0, Redis enters a special mode called **protected mode** when it is
executed with the default configuration (binding all the interfaces) and
without any password in order to access it. In this mode, Redis only replies to queries from the
loopback interfaces, and replies to clients connecting from other
addresses with an error that explains the problem and how to configure
Redis properly.
We expect protected mode to seriously decrease the security issues caused
by unprotected Redis instances executed without proper administration. However,
the system administrator can still ignore the error given by Redis and
disable protected mode or manually bind all the interfaces.
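As a sketch of the safer alternatives to disabling protected mode, either keep Redis bound to the loopback interface or configure authentication; for example, in redis.conf:

```
# Keep the instance reachable only via the loopback interface...
bind 127.0.0.1
protected-mode yes
# ...or, if remote access is genuinely required, configure a password
# or ACLs (see the Authentication section) instead of just disabling
# protected mode.
```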
## Authentication
Redis provides two ways to authenticate clients.
The recommended authentication method, introduced in Redis 6, is via Access Control Lists, allowing named users to be created and assigned fine-grained permissions.
Read more about Access Control Lists [here](/docs/management/security/acl/).
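As a minimal, hypothetical sketch of an ACL setup — the user name `webapp`, the key pattern `app:*`, and the password are all placeholders, not recommendations:

```
# redis.conf (the same rules can be applied at runtime with ACL SETUSER):
user default off
user webapp on >use-a-long-random-password ~app:* +get +set +del
```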
The legacy authentication method is enabled by editing the **redis.conf** file, and providing a database password using the `requirepass` setting.
This password is then used by all clients.
When the `requirepass` setting is enabled, Redis will refuse any query by
unauthenticated clients. A client can authenticate itself by sending the
**AUTH** command followed by the password.
The password is set by the system administrator in clear text inside the
redis.conf file. It should be long enough to prevent brute force attacks
for two reasons:
* Redis is very fast at serving queries. Many passwords per second can be tested by an external client.
* The Redis password is stored in the **redis.conf** file and inside the client configuration. Since the system administrator does not need to remember it, the password can be very long.
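A sketch of the legacy setup described above; the password value is a placeholder — generate a long random string instead:

```
# redis.conf
requirepass use-a-long-random-string-of-at-least-32-characters

# An unauthenticated client is then refused until it sends AUTH:
#   127.0.0.1:6379> PING
#   (error) NOAUTH Authentication required.
#   127.0.0.1:6379> AUTH use-a-long-random-string-of-at-least-32-characters
#   OK
```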
The goal of the authentication layer is to optionally provide a layer of
redundancy. If firewalling or any other system implemented to protect Redis
from external attackers fails, an external client will still not be able to
access the Redis instance without knowledge of the authentication password.
Since the `AUTH` command, like every other Redis command, is sent unencrypted, it
does not protect against an attacker that has enough access to the network to
perform eavesdropping.
## TLS support
Redis has optional support for TLS on all communication channels, including
client connections, replication links, and the Redis Cluster bus protocol.
## Disallowing specific commands
It is possible to disallow commands in Redis or to rename them to an
unguessable name, so that normal clients are limited to a specified set of commands.
For instance, a virtualized server provider may offer a managed Redis instance
service. In this context, normal users should probably not be able to
call the Redis **CONFIG** command to alter the configuration of the instance,
but the systems that provide and remove instances should be able to do so.
In this case, it is possible to either rename or completely shadow commands from
the command table. This feature is available as a statement that can be used
inside the redis.conf configuration file. For example:
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
In the above example, the **CONFIG** command was renamed into an unguessable name. It is also possible to completely disallow it (or any other command) by renaming it to the empty string, like in the following example:
rename-command CONFIG ""
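One way (a sketch, assuming OpenSSL is available) to generate an unguessable replacement name like the one in the first example is to draw 20 random bytes and hex-encode them:

```shell
# Prints 40 hexadecimal characters, suitable as an unguessable command name.
openssl rand -hex 20
```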
## Attacks triggered by malicious inputs from external clients
There is a class of attacks that an attacker can trigger from the outside even
without external access to the instance. For example, an attacker might insert data into Redis that triggers pathological (worst case)
algorithm complexity on data structures implemented inside Redis internals.
An attacker could supply, via a web form, a set of strings that
are known to hash to the same bucket in a hash table in order to turn the
O(1) expected time (the average time) to the O(N) worst case. This can consume more
CPU than expected and ultimately cause a Denial of Service.
To prevent this specific attack, Redis uses a per-execution, pseudo-random
seed to the hash function.
Redis implements the SORT command using the qsort algorithm. Currently,
the algorithm is not randomized, so it is possible to trigger a quadratic
worst-case behavior by carefully selecting the right set of inputs.
## String escaping and NoSQL injection
The Redis protocol has no concept of string escaping, so injection
is impossible under normal circumstances using a normal client library.
The protocol uses prefixed-length strings and is completely binary safe.
Since Lua scripts executed by the `EVAL` and `EVALSHA` commands follow the
same rules, those commands are also safe.
While it would be a strange use case, the application should avoid composing the body of the Lua script from strings obtained from untrusted sources.
## Code security
In a classical Redis setup, clients are allowed full access to the command set,
but accessing the instance should never result in the ability to control the
system where Redis is running.
Internally, Redis uses all the well-known practices for writing secure code to
prevent buffer overflows, format bugs, and other memory corruption issues.
However, the ability to control the server configuration using the **CONFIG**
command allows the client to change the working directory of the program and
the name of the dump file. This allows clients to write RDB Redis files
to random paths. This is [a security issue](http://antirez.com/news/96) that may lead to the ability to compromise the system and/or run untrusted code as the same user as Redis is running.
Redis does not require root privileges to run. It is recommended to
run it as an unprivileged *redis* user that is only used for this purpose.
## GPG key
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF9FWioBEADfBiOE/iKpj2EF/cJ/KzFX+jSBKa8SKrE/9RE0faVF6OYnqstL
S5ox/o+yT45FdfFiRNDflKenjFbOmCbAdIys9Ta0iq6I9hs4sKfkNfNVlKZWtSVG
W4lI6zO2Zyc2wLZonI+Q32dDiXWNcCEsmajFcddukPevj9vKMTJZtF79P2SylEPq
mUuhMy/jOt7q1ibJCj5srtaureBH9662t4IJMFjsEe+hiZ5v071UiQA6Tp7rxLqZ
O6ZRzuamFP3xfy2Lz5NQ7QwnBH1ROabhJPoBOKCATCbfgFcM1Rj+9AOGfoDCOJKH
7yiEezMqr9VbDrEmYSmCO4KheqwC0T06lOLIQC4nnwKopNO/PN21mirCLHvfo01O
H/NUG1LZifOwAURbiFNF8Z3+L0csdhD8JnO+1nphjDHr0Xn9Vff2Vej030pRI/9C
SJ2s5fZUq8jK4n06sKCbqA4pekpbKyhRy3iuITKv7Nxesl4T/uhkc9ccpAvbuD1E
NczN1IH05jiMUMM3lC1A9TSvxSqflqI46TZU3qWLa9yg45kDC8Ryr39TY37LscQk
9x3WwLLkuHeUurnwAk46fSj7+FCKTGTdPVw8v7XbvNOTDf8vJ3o2PxX1uh2P2BHs
9L+E1P96oMkiEy1ug7gu8V+mKu5PAuD3QFzU3XCB93DpDakgtznRRXCkAQARAQAB
tBtSZWRpcyBMYWJzIDxyZWRpc0ByZWRpcy5pbz6JAk4EEwEKADgWIQR5sNCo1OBf
WO913l22qvOUq0evbgUCX0VaKgIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAK
CRC2qvOUq0evbpZaD/4rN7xesDcAG4ec895Fqzk3w74W1/K9lzRKZDwRsAqI+sAz
ZXvQMtWSxLfF2BITxLnHJXK5P+2Y6XlNgrn1GYwC1MsARyM9e1AzwDJHcXFkHU82
2aALIMXGtiZs/ejFh9ZSs5cgRlxBSqot/uxXm9AvKEByhmIeHPZse/Rc6e3qa57v
OhCkVZB4ETx5iZrgA+gdmS8N7MXG0cEu5gJLacG57MHi+2WMOCU9Xfj6+Pqhw3qc
E6lBinKcA/LdgUJ1onK0JCnOG1YVHjuFtaisfPXvEmUBGaSGE6lM4J7lass/OWps
Dd+oHCGI+VOGNx6AiBDZG8mZacu0/7goRnOTdljJ93rKkj31I+6+j4xzkAC0IXW8
LAP9Mmo9TGx0L5CaljykhW6z/RK3qd7dAYE+i7e8J9PuQaGG5pjFzuW4vY45j0V/
9JUMKDaGbU5choGqsCpAVtAMFfIBj3UQ5LCt5zKyescKCUb9uifOLeeQ1vay3R9o
eRSD52YpRBpor0AyYxcLur/pkHB0sSvXEfRZENQTohpY71rHSaFd3q1Hkk7lZl95
m24NRlrJnjFmeSPKP22vqUYIwoGNUF/D38UzvqHD8ltTPgkZc+Y+RRbVNqkQYiwW
GH/DigNB8r2sdkt+1EUu+YkYosxtzxpxxpYGKXYXx0uf+EZmRqRt/OSHKnf2GLkC
DQRfRVoqARAApffsrDNo4JWjX3r6wHJJ8IpwnGEJ2IzGkg8f1Ofk2uKrjkII/oIx
sXC3EeauC1Plhs+m9GP/SPY0LXmZ0OzGD/S1yMpmBeBuXJ0gONDo+xCg1pKGshPs
75XzpbggSOtEYR5S8Z46yCu7TGJRXBMGBhDgCfPVFBBNsnG5B0EeHXM4trqqlN6d
PAcwtLnKPz/Z+lloKR6bFXvYGuN5vjRXjcVYZLLCEwdV9iY5/Opqk9sCluasb3t/
c2gcsLWWFnNz2desvb/Y4ADJzxY+Um848DSR8IcdoArSsqmcCTiYvYC/UU7XPVNk
Jrx/HwgTVYiLGbtMB3u3fUpHW8SabdHc4xG3sx0LeIvl+JwHgx7yVhNYJEyOQfnE
mfS97x6surXgTVLbWVjXKIJhoWnWbLP4NkBc27H4qo8wM/IWH4SSXYNzFLlCDPnw
vQZSel21qxdqAWaSxkKcymfMS4nVDhVj0jhlcTY3aZcHMjqoUB07p5+laJr9CCGv
0Y0j0qT2aUO22A3kbv6H9c1Yjv8EI7eNz07aoH1oYU6ShsiaLfIqPfGYb7LwOFWi
PSl0dCY7WJg2H6UHsV/y2DwRr/3oH0a9hv/cvcMneMi3tpIkRwYFBPXEsIcoD9xr
RI5dp8BBdO/Nt+puoQq9oyialWnQK5+AY7ErW1yxjgie4PQ+XtN+85UAEQEAAYkC
NgQYAQoAIBYhBHmw0KjU4F9Y73XeXbaq85SrR69uBQJfRVoqAhsMAAoJELaq85Sr
R69uoV0QAIvlxAHYTjvH1lt5KbpVGs5gwIAnCMPxmaOXcaZ8V0Z1GEU+/IztwV+N
MYCBv1tYa7OppNs1pn75DhzoNAi+XQOVvU0OZgVJutthZe0fNDFGG9B4i/cxRscI
Ld8TPQQNiZPBZ4ubcxbZyBinE9HsYUM49otHjsyFZ0GqTpyne+zBf1GAQoekxlKo
tWSkkmW0x4qW6eiAmyo5lPS1bBjvaSc67i+6Bv5QkZa0UIkRqAzKN4zVvc2FyILz
+7wVLCzWcXrJt8dOeS6Y/Fjbhb6m7dtapUSETAKu6wJvSd9ndDUjFHD33NQIZ/nL
WaPbn01+e/PHtUDmyZ2W2KbcdlIT9nb2uHrruqdCN04sXkID8E2m2gYMA+TjhC0Q
JBJ9WPmdBeKH91R6wWDq6+HwOpgc/9na+BHZXMG+qyEcvNHB5RJdiu2r1Haf6gHi
Fd6rJ6VzaVwnmKmUSKA2wHUuUJ6oxVJ1nFb7Aaschq8F79TAfee0iaGe9cP+xUHL
zBDKwZ9PtyGfdBp1qNOb94sfEasWPftT26rLgKPFcroCSR2QCK5qHsMNCZL+u71w
NnTtq9YZDRaQ2JAc6VDZCcgu+dLiFxVIi1PFcJQ31rVe16+AQ9zsafiNsxkPdZcY
U9XKndQE028dGZv1E3S5BwpnikrUkWdxcYrVZ4fiNIy5I3My2yCe
=J9BD
-----END PGP PUBLIC KEY BLOCK-----
``` | redis | title Redis security linkTitle Security weight 1 description Security model and features in Redis aliases topics security docs manual security docs manual security md This document provides an introduction to the topic of security from the point of view of Redis It covers the access control provided by Redis code security concerns attacks that can be triggered from the outside by selecting malicious inputs and other similar topics You can learn more about access control data protection and encryption secure Redis architectures and secure deployment techniques by taking the Redis University security course https university redis com courses ru330 For security related contacts open an issue on GitHub or when you feel it is really important to preserve the security of the communication use the GPG key at the end of this document Security model Redis is designed to be accessed by trusted clients inside trusted environments This means that usually it is not a good idea to expose the Redis instance directly to the internet or in general to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket For instance in the common context of a web application implemented using Redis as a database cache or messaging system the clients inside the front end web side of the application will query Redis to generate pages or to perform operations requested or triggered by the web application user In this case the web application mediates access between Redis and untrusted clients the user browsers accessing the web application In general untrusted access to Redis should always be mediated by a layer implementing ACLs validating user input and deciding what operations to perform against the Redis instance Network security Access to the Redis port should be denied to everybody but trusted clients in the network so the servers running Redis should be directly accessible only by the computers implementing the application using Redis In the 
---
title: "Redis CPU profiling"
linkTitle: "CPU profiling"
weight: 1
description: >
Performance engineering guide for on-CPU profiling and tracing
aliases: [
/topics/performance-on-cpu,
/docs/reference/optimization/cpu-profiling
]
---
## Filling the performance checklist
Redis is developed with a great emphasis on performance. We do our best with
every release to make sure you'll experience a very stable and fast product.
Nevertheless, if you're finding room to improve the efficiency of Redis or
are pursuing a performance regression investigation, you will need a concise,
methodical way of monitoring and analyzing Redis performance.
To do so, you can rely on different methodologies (some more suited than
others, depending on the class of issues/analysis we intend to make). A curated
list of methodologies and their steps is enumerated by Brendan Gregg at the
[following link](http://www.brendangregg.com/methodology.html).
We recommend the Utilization Saturation and Errors (USE) Method for answering
the question of what is your bottleneck. Check the following mapping between
system resource, metric, and tools for a practical deep dive:
[USE method](http://www.brendangregg.com/USEmethod/use-rosetta.html).
### Ensuring the CPU is your bottleneck
This guide assumes you've followed one of the above methodologies to perform a
complete check of system health, and identified the bottleneck being the CPU.
**If you have identified that most of the time is spent blocked on I/O, locks,
timers, paging/swapping, etc., this guide is not for you**.
### Build Prerequisites
For a proper On-CPU analysis, Redis (and any dynamically loaded library like
Redis Modules) requires stack traces to be available to tracers, which you may
need to fix first.
By default, Redis is compiled with the `-O2` switch (which we intend to keep
during profiling). This means that compiler optimizations are enabled. Many
compilers omit the frame pointer as a runtime optimization (saving a register),
thus breaking frame pointer-based stack walking. This makes the Redis
executable faster, but at the same time it makes Redis (like any other program)
harder to trace, potentially wrongly attributing on-CPU time to the last
available frame pointer of a call stack that is actually much deeper (but
impossible to walk).
It's important that you ensure that:
- debug information is present: compile option `-g`
- frame pointer register is present: `-fno-omit-frame-pointer`
- we still run with optimizations to get an accurate representation of production run times, meaning we will keep: `-O2`
You can do it as follows within redis main repo:
$ make REDIS_CFLAGS="-g -fno-omit-frame-pointer"
## A set of instruments to identify performance regressions and/or potential **on-CPU performance** improvements
This document focuses specifically on **on-CPU** resource bottlenecks analysis,
meaning we're interested in understanding where threads are spending CPU cycles
while running on-CPU and, as importantly, whether those cycles are effectively
being used for computation or stalled waiting (not blocked!) for memory I/O,
and cache misses, etc.
For that we will rely on toolkits (perf, bcc tools), and hardware specific PMCs
(Performance Monitoring Counters), to proceed with:
- Hotspot analysis (perf or bcc tools): to profile code execution and determine which functions are consuming the most time and thus are targets for optimization. We'll present two options to collect, report, and visualize hotspots either with perf or bcc/BPF tracing tools.
- Call counts analysis: to count events including function calls, enabling us to correlate several calls/components at once, relying on bcc/BPF tracing tools.
- Hardware event sampling: crucial for understanding CPU behavior, including memory I/O, stall cycles, and cache misses.
### Tool prerequisites
The following steps rely on Linux perf_events (aka ["perf"](https://man7.org/linux/man-pages/man1/perf.1.html)), [bcc/BPF tracing tools](https://github.com/iovisor/bcc), and Brendan Gregg’s [FlameGraph repo](https://github.com/brendangregg/FlameGraph).
We assume beforehand you have:
- Installed the perf tool on your system. Most Linux distributions ship it in a package related to the kernel tools. More information about the perf tool can be found at the perf [wiki](https://perf.wiki.kernel.org/).
- Followed the install [bcc/BPF](https://github.com/iovisor/bcc/blob/master/INSTALL.md#installing-bcc) instructions to install bcc toolkit on your machine.
- Cloned Brendan Gregg’s [FlameGraph repo](https://github.com/brendangregg/FlameGraph) and made the `difffolded.pl` and `flamegraph.pl` files accessible, to generate the collapsed stack traces and Flame Graphs.
## Hotspot analysis with perf or eBPF (stack traces sampling)
Profiling CPU usage by sampling stack traces at a timed interval is a fast and
easy way to identify performance-critical code sections (hotspots).
### Sampling stack traces using perf
To profile both user- and kernel-level stacks of redis-server for a specific
length of time, for example 60 seconds, at a sampling frequency of 999 samples
per second:
$ perf record -g --pid $(pgrep redis-server) -F 999 -- sleep 60
#### Displaying the recorded profile information using perf report
By default perf record will generate a perf.data file in the current working
directory.
You can then report with a call-graph output (call chain, stack backtrace),
with a minimum call graph inclusion threshold of 0.5%, with:
$ perf report -g "graph,0.5,caller"
See the [perf report](https://man7.org/linux/man-pages/man1/perf-report.1.html)
documentation for advanced filtering, sorting and aggregation capabilities.
#### Visualizing the recorded profile information using Flame Graphs
[Flame graphs](http://www.brendangregg.com/flamegraphs.html) allow for a quick
and accurate visualization of frequent code-paths. They can be generated using
Brendan Gregg's open source programs on [github](https://github.com/brendangregg/FlameGraph),
which create interactive SVGs from folded stack files.
Specifically, for perf we need to convert the generated perf.data into the
captured stacks, and fold each of them into single lines. You can then render
the on-CPU flame graph with:
$ perf script > redis.perf.stacks
$ stackcollapse-perf.pl redis.perf.stacks > redis.folded.stacks
$ flamegraph.pl redis.folded.stacks > redis.svg
By default, `perf script` reads the perf.data file from the current working
directory. See the [perf script](https://linux.die.net/man/1/perf-script)
documentation for advanced usage.
See [FlameGraph usage options](https://github.com/brendangregg/FlameGraph#options)
for more advanced stack trace visualizations (like the differential one).
#### Archiving and sharing recorded profile information
To make analysis of the perf.data contents possible on a machine other
than the one on which collection happened, you need to export, along with the
perf.data file, all object files with build-ids found in the record data file.
This can be easily done with the help of
[perf-archive.sh](https://github.com/torvalds/linux/blob/master/tools/perf/perf-archive.sh)
script:
$ perf-archive.sh perf.data
Now please run:
$ tar xvf perf.data.tar.bz2 -C ~/.debug
on the machine where you need to run `perf report`.
### Sampling stack traces using bcc/BPF's profile
Similarly to perf, as of Linux kernel 4.9, BPF-optimized profiling is now fully
available with the promise of lower overhead on CPU (as stack traces are
frequency counted in kernel context) and disk I/O resources during profiling.
Apart from that, relying solely on bcc/BPF's profile tool also removes the
perf.data file and the intermediate steps when stack trace analysis is our
main goal. You can use bcc's profile tool to output folded format directly, for
flame graph generation:
$ /usr/share/bcc/tools/profile -F 999 -f --pid $(pgrep redis-server) --duration 60 > redis.folded.stacks
In that manner, we've removed any preprocessing and can render the on-CPU flame
graph with a single command:
$ flamegraph.pl redis.folded.stacks > redis.svg
## Call counts analysis with bcc/BPF
A function may consume significant CPU cycles either because its code is slow
or because it's frequently called. To answer at what rate functions are being
called, you can rely upon call counts analysis using BCC's `funccount` tool:
$ /usr/share/bcc/tools/funccount 'redis-server:(call*|*Read*|*Write*)' --pid $(pgrep redis-server) --duration 60
Tracing 64 functions for "redis-server:(call*|*Read*|*Write*)"... Hit Ctrl-C to end.
FUNC COUNT
call 334
handleClientsWithPendingWrites 388
clientInstallWriteHandler 388
postponeClientRead 514
handleClientsWithPendingReadsUsingThreads 735
handleClientsWithPendingWritesUsingThreads 735
prepareClientToWrite 1442
Detaching...
The above output shows that, while tracing, Redis's call() function was
called 334 times, handleClientsWithPendingWrites() 388 times, etc.
## Hardware event counting with Performance Monitoring Counters (PMCs)
Many modern processors contain a performance monitoring unit (PMU) exposing
Performance Monitoring Counters (PMCs). PMCs are crucial for understanding CPU
behavior, including memory I/O, stall cycles, and cache misses, and provide
low-level CPU performance statistics that aren't available anywhere else.
The design and functionality of a PMU is CPU-specific; check which counters
and features your CPU supports by using `perf list`.
To calculate the number of instructions per cycle, the number of micro-ops
executed, the number of cycles during which no micro-ops were dispatched, and
the number of cycles stalled on memory (including a per-memory-level breakdown
of stalls), over a duration of 60 seconds, specifically for the Redis process:
$ perf stat -e "cpu-clock,cpu-cycles,instructions,uops_executed.core,uops_executed.stall_cycles,cache-references,cache-misses,cycle_activity.stalls_total,cycle_activity.stalls_mem_any,cycle_activity.stalls_l3_miss,cycle_activity.stalls_l2_miss,cycle_activity.stalls_l1d_miss" --pid $(pgrep redis-server) -- sleep 60
Performance counter stats for process id '3038':
60046.411437 cpu-clock (msec) # 1.001 CPUs utilized
168991975443 cpu-cycles # 2.814 GHz (36.40%)
388248178431 instructions # 2.30 insn per cycle (45.50%)
443134227322 uops_executed.core # 7379.862 M/sec (45.51%)
30317116399 uops_executed.stall_cycles # 504.895 M/sec (45.51%)
670821512 cache-references # 11.172 M/sec (45.52%)
23727619 cache-misses # 3.537 % of all cache refs (45.43%)
30278479141 cycle_activity.stalls_total # 504.251 M/sec (36.33%)
19981138777 cycle_activity.stalls_mem_any # 332.762 M/sec (36.33%)
725708324 cycle_activity.stalls_l3_miss # 12.086 M/sec (36.33%)
8487905659 cycle_activity.stalls_l2_miss # 141.356 M/sec (36.32%)
10011909368 cycle_activity.stalls_l1d_miss # 166.736 M/sec (36.31%)
60.002765665 seconds time elapsed
It's important to know that there are two very different ways in which PMCs can
be used (counting and sampling), and we've focused solely on PMC counting for
the sake of this analysis. Brendan Gregg explains the difference clearly at the
following [link](http://www.brendangregg.com/blog/2017-05-04/the-pmcs-of-ec2.html).
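As a sanity check, the headline ratios perf prints can be re-derived from the raw counters in the output above:

```python
# Re-derive perf stat's headline ratios from the raw PMC counts above.
cpu_cycles     = 168_991_975_443
instructions   = 388_248_178_431
cache_refs     = 670_821_512
cache_misses   = 23_727_619
stalls_total   = 30_278_479_141
stalls_mem_any = 19_981_138_777

ipc        = instructions / cpu_cycles        # perf reported 2.30 insn per cycle
miss_pct   = 100 * cache_misses / cache_refs  # perf reported 3.537 % of all cache refs
stall_frac = stalls_total / cpu_cycles        # fraction of all cycles stalled
mem_share  = stalls_mem_any / stalls_total    # share of stalls waiting on memory

print(f"IPC:              {ipc:.2f}")
print(f"cache miss rate:  {miss_pct:.3f} %")
print(f"stalled cycles:   {stall_frac:.1%}")
print(f"stalls on memory: {mem_share:.1%}")
```

Roughly 18% of cycles were stalled, with about two thirds of those stalls waiting on memory: this is the kind of signal that tells you whether a workload is compute-bound or memory-bound.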
---
title: "Redis latency monitoring"
linkTitle: "Latency monitoring"
weight: 1
description: Discovering slow server events in Redis
aliases: [
/topics/latency-monitor,
/docs/reference/optimization/latency-monitor
]
---
Redis is often used for demanding use cases, where it
serves a large number of queries per second per instance, but also has strict latency requirements for the average response
time and the worst-case latency.
While Redis is an in-memory system, it deals with the operating system in
different ways, for example, in the context of persisting to disk.
Moreover, Redis implements a rich set of commands. Certain commands
are fast and run in constant or logarithmic time. Other commands are slower
O(N) commands that can cause latency spikes.
Finally, Redis is single threaded. This is usually an advantage
from the point of view of the amount of work it can perform per core, and in
the latency figures it is able to provide. However, it poses
a challenge for latency, since the single
thread must be able to perform certain tasks incrementally, for
example key expiration, in a way that does not impact the other clients
that are served.
For all these reasons, Redis 2.8.13 introduced a new feature called
**Latency Monitoring**, that helps the user to check and troubleshoot possible
latency problems. Latency monitoring is composed of the following conceptual
parts:
* Latency hooks that sample different latency-sensitive code paths.
* Time series recording of latency spikes, split by different events.
* Reporting engine to fetch raw data from the time series.
* Analysis engine to provide human-readable reports and hints according to the measurements.
The rest of this document covers the latency monitoring subsystem
details. For more information about the general topic of Redis
and latency, see [Redis latency problems troubleshooting](/topics/latency).
## Events and time series
Different monitored code paths have different names and are called *events*.
For example, `command` is an event that measures latency spikes of possibly slow
command executions, while `fast-command` is the event name for the monitoring
of the O(1) and O(log N) commands. Other events are less generic and monitor
specific operations performed by Redis. For example, the `fork` event
only monitors the time taken by Redis to execute the `fork(2)` system call.
A latency spike is an event that takes more time to run than the configured latency
threshold. There is a separate time series associated with every monitored
event. This is how the time series work:
* Every time a latency spike happens, it is logged in the appropriate time series.
* Every time series is composed of 160 elements.
* Each element is a pair made of a Unix timestamp of the time the latency spike was measured and the number of milliseconds the event took to execute.
* Latency spikes for the same event that occur in the same second are merged by taking the maximum latency. Even if continuous latency spikes are measured for a given event, which could happen with a low threshold, at least 160 seconds of history are available.
* The all-time maximum latency is also recorded for every event.
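The time series behavior described above can be modeled with a short sketch (a simplified illustration, not Redis's actual C implementation):

```python
import collections

class LatencyTimeSeries:
    """Simplified model of one latency-monitor event series: 160 elements,
    same-second spikes merged by max, all-time maximum tracked."""
    MAX_SAMPLES = 160

    def __init__(self):
        # Each element: (unix_timestamp, latency_ms); oldest entries drop off.
        self.samples = collections.deque(maxlen=self.MAX_SAMPLES)
        self.all_time_max = 0

    def record(self, unix_ts, latency_ms):
        self.all_time_max = max(self.all_time_max, latency_ms)
        if self.samples and self.samples[-1][0] == unix_ts:
            # Spikes within the same second are merged, keeping the maximum.
            self.samples[-1] = (unix_ts, max(self.samples[-1][1], latency_ms))
        else:
            self.samples.append((unix_ts, latency_ms))

series = LatencyTimeSeries()
series.record(1700000000, 120)
series.record(1700000000, 250)   # same second: merged, maximum kept
series.record(1700000001, 180)
print(list(series.samples))      # [(1700000000, 250), (1700000001, 180)]
print(series.all_time_max)       # 250
```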
The framework monitors and logs latency spikes in the execution time of these events:
* `command`: regular commands.
* `fast-command`: O(1) and O(log N) commands.
* `fork`: the `fork(2)` system call.
* `rdb-unlink-temp-file`: the `unlink(2)` system call.
* `aof-fsync-always`: the `fsync(2)` system call when invoked by the `appendfsync always` policy.
* `aof-write`: writing to the AOF - a catchall event for `write(2)` system calls.
* `aof-write-pending-fsync`: the `write(2)` system call when there is a pending fsync.
* `aof-write-active-child`: the `write(2)` system call when there are active child processes.
* `aof-write-alone`: the `write(2)` system call when there is no pending fsync and no active child process.
* `aof-fstat`: the `fstat(2)` system call.
* `aof-rename`: the `rename(2)` system call for renaming the temporary file after completing `BGREWRITEAOF`.
* `aof-rewrite-diff-write`: writing the differences accumulated while performing `BGREWRITEAOF`.
* `active-defrag-cycle`: the active defragmentation cycle.
* `expire-cycle`: the expiration cycle.
* `eviction-cycle`: the eviction cycle.
* `eviction-del`: deletes during the eviction cycle.
## How to enable latency monitoring
What is high latency for one use case may not be considered high latency for another. Some applications may require that all queries be served in less than 1 millisecond. For other applications, it may be acceptable for a small number of clients to experience a 2 second latency on occasion.
The first step to enable the latency monitor is to set a **latency threshold** in milliseconds. Only events that take longer than the specified threshold will be logged as latency spikes. The user should set the threshold according to their needs. For example, if the application requires a maximum acceptable latency of 100 milliseconds, the threshold should be set to log all the events blocking the server for a time equal to or greater than 100 milliseconds.
Enable the latency monitor at runtime in a production server
with the following command:
CONFIG SET latency-monitor-threshold 100
Monitoring is turned off by default (threshold set to 0), even though the actual cost of latency monitoring is near zero. While the memory requirements of latency monitoring are very small, there is no good reason to raise the baseline memory usage of a Redis instance that is working well.
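The gate itself is simple: with a threshold of 0 nothing is logged; otherwise, events whose execution time reaches the threshold are recorded. As a sketch of the logic (an illustration, not Redis source):

```python
def should_log(event_ms: int, threshold_ms: int) -> bool:
    """Latency-monitor gate: a threshold of 0 disables logging entirely;
    otherwise, log any event at least as slow as the threshold."""
    return threshold_ms > 0 and event_ms >= threshold_ms

assert should_log(150, 100)      # slow event: logged
assert not should_log(50, 100)   # fast event: ignored
assert not should_log(150, 0)    # threshold 0: monitoring disabled
```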
## Report information with the LATENCY command
The user interface to the latency monitoring subsystem is the `LATENCY` command.
Like many other Redis commands, `LATENCY` accepts subcommands that modify its behavior. These subcommands are:
* `LATENCY LATEST` - returns the latest latency samples for all events.
* `LATENCY HISTORY` - returns latency time series for a given event.
* `LATENCY RESET` - resets latency time series data for one or more events.
* `LATENCY GRAPH` - renders an ASCII-art graph of an event's latency samples.
* `LATENCY DOCTOR` - replies with a human-readable latency analysis report.
Refer to each subcommand's documentation page for further information.
---
title: "Diagnosing latency issues"
linkTitle: "Latency diagnosis"
weight: 1
description: Finding the causes of slow responses
aliases: [
/topics/latency,
/docs/reference/optimization/latency
]
---
This document will help you understand what the problem could be if you
are experiencing latency problems with Redis.
In this context *latency* is the maximum delay between the time a client
issues a command and the time the reply to the command is received by the
client. Usually Redis processing time is extremely low, in the sub microsecond
range, but there are certain conditions leading to higher latency figures.
I've little time, give me the checklist
---
The following documentation is very important in order to run Redis in
a low latency fashion. However, I understand that we are busy people, so
let's start with a quick checklist. If these steps don't solve your problem,
please return here to read the full documentation.
1. Make sure you are not running slow commands that are blocking the server. Use the Redis [Slow Log feature](/commands/slowlog) to check this.
2. For EC2 users, make sure you use HVM based modern EC2 instances, like m3.medium. Otherwise fork() is too slow.
3. Transparent huge pages must be disabled from your kernel. Use `echo never > /sys/kernel/mm/transparent_hugepage/enabled` to disable them, and restart your Redis process.
4. If you are using a virtual machine, it is possible that you have an intrinsic latency that has nothing to do with Redis. Check the minimum latency you can expect from your runtime environment using `./redis-cli --intrinsic-latency 100`. Note: you need to run this command in *the server* not in the client.
5. Enable and use the [Latency monitor](/topics/latency-monitor) feature of Redis in order to get a human readable description of the latency events and causes in your Redis instance.
In general, use the following table for durability VS latency/performance tradeoffs, ordered from stronger safety to better latency.
1. AOF + fsync always: this is very slow, you should use it only if you know what you are doing.
2. AOF + fsync every second: this is a good compromise.
3. AOF + fsync every second + no-appendfsync-on-rewrite option set to yes: this is as the above, but avoids to fsync during rewrites to lower the disk pressure.
4. AOF + fsync never. Fsyncing is up to the kernel in this setup, even less disk pressure and risk of latency spikes.
5. RDB. Here you have a vast spectrum of tradeoffs depending on the save triggers you configure.
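As an illustration, option 3 above corresponds to the following `redis.conf` directives:

```
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite yes
```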
And now for people with 15 minutes to spend, the details...
Measuring latency
-----------------
If you are experiencing latency problems, you probably know how to measure
it in the context of your application, or maybe your latency problem is very
evident even macroscopically. However, redis-cli can be used to measure the
latency of a Redis server in milliseconds; just try:
redis-cli --latency -h `host` -p `port`
Using the internal Redis latency monitoring subsystem
---
Since Redis 2.8.13, Redis provides latency monitoring capabilities that
are able to sample different execution paths to understand where the
server is blocking. This makes debugging of the problems illustrated in
this documentation much simpler, so we suggest enabling latency monitoring
ASAP. Please refer to the [Latency monitor documentation](/topics/latency-monitor).
While the latency monitoring sampling and reporting capabilities will make
it simpler to understand the source of latency in your Redis system, it is still
advised that you read this documentation extensively to better understand
the topic of Redis and latency spikes.
Latency baseline
----------------
There is a kind of latency that is inherently part of the environment where
you run Redis, that is the latency provided by your operating system kernel
and, if you are using virtualization, by the hypervisor you are using.
While this latency can't be removed it is important to study it because
it is the baseline, or in other words, you won't be able to achieve a Redis
latency that is better than the latency that every process running in your
environment will experience because of the kernel or hypervisor implementation
or setup.
We call this kind of latency **intrinsic latency**, and `redis-cli` starting
from Redis version 2.8.7 is able to measure it. This is an example run
under Linux 3.11.0 running on an entry level server.
Note: the argument `100` is the number of seconds the test will be executed.
The more time we run the test, the more likely we'll be able to spot
latency spikes. 100 seconds is usually appropriate; however, you may want
to perform a few runs at different times. Please note that the test is CPU
intensive and will likely saturate a single core in your system.
$ ./redis-cli --intrinsic-latency 100
Max latency so far: 1 microseconds.
Max latency so far: 16 microseconds.
Max latency so far: 50 microseconds.
Max latency so far: 53 microseconds.
Max latency so far: 83 microseconds.
Max latency so far: 115 microseconds.
Note: redis-cli in this special case needs to **run in the server** where you run or plan to run Redis, not in the client. In this special mode redis-cli does not connect to a Redis server at all: it will just try to measure the largest time the kernel does not provide CPU time to run to the redis-cli process itself.
In the above example, the intrinsic latency of the system is just 0.115
milliseconds (or 115 microseconds), which is good news. However, keep in mind
that the intrinsic latency may change over time depending on the load of the
system.
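Conceptually, the measurement is a busy loop that repeatedly reads a high-resolution clock and records the largest gap between consecutive reads; a large gap means the kernel scheduled something else on the core. A rough Python approximation of the idea (redis-cli's actual loop is written in C with far less per-iteration overhead, so the numbers are not directly comparable):

```python
import time

def intrinsic_latency(duration_s: float) -> float:
    """Busy-loop for duration_s seconds and return the largest gap
    (in microseconds) observed between two consecutive clock reads."""
    deadline = time.perf_counter() + duration_s
    prev = time.perf_counter()
    max_gap = 0.0
    while prev < deadline:
        now = time.perf_counter()
        max_gap = max(max_gap, now - prev)
        prev = now
    return max_gap * 1e6

print(f"Max latency so far: {intrinsic_latency(1.0):.0f} microseconds.")
```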
Virtualized environments will show worse numbers, especially with high
load or if there are noisy neighbors. The following is a run on a Linode 4096
instance running Redis and Apache:
$ ./redis-cli --intrinsic-latency 100
Max latency so far: 573 microseconds.
Max latency so far: 695 microseconds.
Max latency so far: 919 microseconds.
Max latency so far: 1606 microseconds.
Max latency so far: 3191 microseconds.
Max latency so far: 9243 microseconds.
Max latency so far: 9671 microseconds.
Here we have an intrinsic latency of 9.7 milliseconds: this means that we can't
expect Redis to do better than that. However, other runs at different times, in
different virtualization environments, with higher load or noisy neighbors, can
easily show even worse values. We were able to measure up to 40 milliseconds in
systems otherwise apparently running normally.
Latency induced by network and communication
--------------------------------------------
Clients connect to Redis using a TCP/IP connection or a Unix domain connection.
The typical latency of a 1 Gbit/s network is about 200 us, while the latency
with a Unix domain socket can be as low as 30 us. It actually depends on your
network and system hardware. On top of the communication itself, the system
adds some more latency (due to thread scheduling, CPU caches, NUMA placement,
etc ...). System induced latencies are significantly higher on a virtualized
environment than on a physical machine.
The consequence is that even if Redis processes most commands in the
sub-microsecond range, a client performing many roundtrips to the server will
have to pay for these network and system related latencies.
An efficient client will therefore try to limit the number of roundtrips by
pipelining several commands together. This is fully supported by the servers
and most clients. Aggregated commands like MSET/MGET can be also used for
that purpose. Starting with Redis 2.4, a number of commands also support
variadic parameters for all data types.
Here are some guidelines:
+ If you can afford it, prefer a physical machine over a VM to host the server.
+ Do not systematically connect/disconnect to the server (especially true
for web based applications). Keep your connections as long lived as possible.
+ If your client is on the same host as the server, use Unix domain sockets.
+ Prefer to use aggregated commands (MSET/MGET), or commands with variadic
parameters (if possible) over pipelining.
+ Prefer to use pipelining (if possible) over sequence of roundtrips.
+ Redis supports Lua server-side scripting to cover cases that are not suitable
for raw pipelining (for instance when the result of a command is an input for
the following commands).
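To see why the pipelining guideline matters, here is a back-of-the-envelope calculation using the 200 us TCP roundtrip figure mentioned above (`total_latency_us` is a hypothetical helper for illustration, not a Redis API):

```python
def total_latency_us(n_commands, rtt_us, batch_size=1):
    """Network time to issue n_commands when batch_size commands share
    one roundtrip (batch_size=1 means no pipelining at all)."""
    roundtrips = -(-n_commands // batch_size)  # ceiling division
    return roundtrips * rtt_us

RTT_TCP_US = 200  # typical 1 Gbit/s network roundtrip, from the text above

# 1000 SET commands, one roundtrip each, vs pipelined in batches of 100:
print(total_latency_us(1000, RTT_TCP_US))       # 200000 us of network time
print(total_latency_us(1000, RTT_TCP_US, 100))  # 2000 us of network time
```

Batching 100 commands per roundtrip cuts the network cost by two orders of magnitude, which is exactly why pipelining and aggregated commands like MSET/MGET pay off.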
On Linux, some people can achieve better latencies by playing with process
placement (taskset), cgroups, real-time priorities (chrt), NUMA
configuration (numactl), or by using a low-latency kernel. Please note
vanilla Redis is not really suitable to be bound on a **single** CPU core.
Redis can fork background tasks that can be extremely CPU consuming
like `BGSAVE` or `BGREWRITEAOF`. These tasks must **never** run on the same core
as the main event loop.
In most situations, these kind of system level optimizations are not needed.
Only do them if you require them, and if you are familiar with them.
Single threaded nature of Redis
-------------------------------
Redis uses a *mostly* single threaded design. This means that a single process
serves all the client requests, using a technique called **multiplexing**.
This means that Redis can serve a single request at any given moment, so
all the requests are served sequentially. This is very similar to how Node.js
works. Yet neither product is often perceived as being slow.
This is caused in part by the small amount of time to complete a single request,
but primarily because these products are designed to not block on system calls,
such as reading data from or writing data to a socket.
I said that Redis is *mostly* single threaded because, starting with Redis 2.4,
we use threads in Redis to perform some slow I/O operations in the
background, mainly related to disk I/O. This does not change the fact
that Redis serves all the requests using a single thread.
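The multiplexing design described above can be sketched with a minimal single-threaded event loop. This is an illustration using Python's `selectors` module and socket pairs as stand-in clients, not Redis's actual `ae` event loop:

```python
import selectors
import socket

# One process watches many sockets and handles whichever is ready,
# never blocking on a single client: the essence of multiplexing.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]  # 3 stand-in "clients"

for i, (client, server_side) in enumerate(pairs):
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ, data=i)
    client.sendall(b"PING")  # each client sends one request

replies = {}
while len(replies) < len(pairs):
    for key, _ in sel.select(timeout=1):
        request = key.fileobj.recv(64)  # never blocks: socket is ready
        replies[key.data] = b"PONG" if request == b"PING" else b"ERR"

for client, server_side in pairs:
    sel.unregister(server_side)
    client.close()
    server_side.close()

print(sorted(replies.items()))  # [(0, b'PONG'), (1, b'PONG'), (2, b'PONG')]
```

Requests are handled one at a time, but since each handler only touches data that is already readable, no client ever stalls the loop on a blocking system call.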
Latency generated by slow commands
----------------------------------
A consequence of being single threaded is that when a request is slow to serve,
all the other clients will wait for this request to be served. When executing
normal commands, like `GET` or `SET` or `LPUSH` this is not a problem
at all since these commands are executed in constant (and very small) time.
However there are commands operating on many elements, like `SORT`, `LREM`,
`SUNION` and others. For instance taking the intersection of two big sets
can take a considerable amount of time.
The algorithmic complexity of all commands is documented. A good practice
is to systematically check it when using commands you are not familiar with.
If you have latency concerns you should either not use slow commands against
values composed of many elements, or you should run a replica using Redis
replication where you run all your slow queries.
It is possible to monitor slow commands using the Redis
[Slow Log feature](/commands/slowlog).
Additionally, you can use your favorite per-process monitoring program
(top, htop, prstat, etc ...) to quickly check the CPU consumption of the
main Redis process. If it is high while the traffic is not, it is usually
a sign that slow commands are used.
**IMPORTANT NOTE**: a VERY common source of latency generated by the execution
of slow commands is the use of the `KEYS` command in production environments.
`KEYS`, as documented in the Redis documentation, should only be used for
debugging purposes. Since Redis 2.8, new commands were introduced to
iterate the key space and other large collections incrementally; please check
the `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` commands for more information.
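The difference between `KEYS` and the `SCAN` family can be illustrated with a toy cursor-based iterator. Note this is a deliberate simplification: real SCAN cursors are reverse-binary hash-table cursors, not plain offsets, but the latency property is the same, as the full walk is split across many cheap calls instead of one blocking scan:

```python
def scan_like(keys, cursor=0, count=20):
    """Return (next_cursor, page): one small slice of the key space per
    call, so no single call blocks the server for long. A returned
    cursor of 0 means the iteration is complete.
    (Illustration only: real SCAN cursors are not plain offsets.)"""
    page = keys[cursor:cursor + count]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), page

keys = [f"user:{i}" for i in range(95)]
cursor, seen = 0, []
while True:
    cursor, page = scan_like(keys, cursor)
    seen.extend(page)  # each call costs O(count), not O(len(keys))
    if cursor == 0:
        break
print(len(seen))  # 95: the whole key space, visited incrementally
```

A `KEYS`-style call would instead walk all 95 keys (or millions, in production) inside a single command, blocking every other client for its full duration.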
Latency generated by fork
-------------------------
In order to generate the RDB file in background, or to rewrite the Append Only File if AOF persistence is enabled, Redis has to fork background processes.
The fork operation (running in the main thread) can induce latency by itself.
Forking is an expensive operation on most Unix-like systems, since it involves
copying a good number of objects linked to the process. This is especially
true for the page table associated to the virtual memory mechanism.
For instance on a Linux/AMD64 system, the memory is divided into 4 kB pages.
To convert virtual addresses to physical addresses, each process stores
a page table (actually represented as a tree) containing at least a pointer
per page of the address space of the process. So a large 24 GB Redis instance
requires a page table of 24 GB / 4 kB * 8 = 48 MB.
When a background save is performed, this instance will have to be forked,
which will involve allocating and copying 48 MB of memory. It takes time
and CPU, especially on virtual machines where allocation and initialization
of a large memory chunk can be expensive.
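The page-table arithmetic above can be double-checked in a couple of lines (a sketch that counts only the leaf entries, ignoring the smaller upper levels of the tree):

```python
GiB = 2 ** 30
KiB = 2 ** 10

def page_table_bytes(rss_bytes, page_size=4 * KiB, entry_size=8):
    """Leaf page-table size: one 8-byte pointer per 4 kB page, as in
    the 24 GB / 4 kB * 8 = 48 MB figure above."""
    return rss_bytes // page_size * entry_size

mb = page_table_bytes(24 * GiB) / 2 ** 20
print(f"{mb:.0f} MB")  # 48 MB, matching the estimate in the text
```

This is the memory that must be allocated and copied at fork time, before the child process can even start.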
Fork time in different systems
------------------------------
Modern hardware is pretty fast at copying the page table, but Xen is not.
The problem with Xen is not virtualization-specific, but Xen-specific: using
VMware or VirtualBox, for instance, does not result in slow fork times.
The following is a table that compares fork times for different Redis instance
sizes. Data is obtained by performing a BGSAVE and looking at the
`latest_fork_usec` field in the `INFO` command output.
However the good news is that **new types of EC2 HVM based instances are much
better with fork times**, almost on par with physical servers, so for example
using m3.medium (or better) instances will provide good results.
* **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB).
* **Linux running on physical machine (Unknown HW)** 6.1GB RSS forked in 80 milliseconds (13.1 milliseconds per GB).
* **Linux running on physical machine (Xeon @ 2.27Ghz)** 6.9GB RSS forked in 62 milliseconds (9 milliseconds per GB).
* **Linux VM on 6sync (KVM)** 360 MB RSS forked in 8.2 milliseconds (23.3 milliseconds per GB).
* **Linux VM on EC2, old instance types (Xen)** 6.1GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB).
* **Linux VM on EC2, new instance types (Xen)** 1GB RSS forked in 10 milliseconds (10 milliseconds per GB).
* **Linux VM on Linode (Xen)** 0.9GB RSS forked in 382 milliseconds (424 milliseconds per GB).
As you can see, certain VMs running on Xen have a performance hit of between one and two orders of magnitude. For EC2 users the suggestion is simple: use modern HVM based instances.
Latency induced by transparent huge pages
-----------------------------------------
Unfortunately when a Linux kernel has transparent huge pages enabled, Redis
incurs a big latency penalty after the `fork` call is used in order to
persist on disk. Huge pages are the cause of the following issue:
1. Fork is called, two processes with shared huge pages are created.
2. In a busy instance, a few event loop runs will cause commands to target a few thousand pages, triggering copy-on-write of almost the whole process memory.
3. This will result in big latency and big memory usage.
Make sure to **disable transparent huge pages** using the following command:

    echo never > /sys/kernel/mm/transparent_hugepage/enabled

Latency induced by swapping (operating system paging)
-----------------------------------------------------
Linux (and many other modern operating systems) is able to relocate memory
pages from the memory to the disk, and vice versa, in order to use the
system memory efficiently.
If a Redis page is moved by the kernel from the memory to the swap file, when
the data stored in this memory page is used by Redis (for example accessing
a key stored into this memory page) the kernel will stop the Redis process
in order to move the page back into the main memory. This is a slow operation
involving random I/Os (compared to accessing a page that is already in memory)
and will result in anomalous latency experienced by Redis clients.
The kernel relocates Redis memory pages on disk for three main reasons:
* The system is under memory pressure since the running processes are demanding
more physical memory than the amount that is available. The simplest instance of
this problem is Redis using more memory than is available.
* The Redis instance data set, or part of the data set, is mostly completely idle
(never accessed by clients), so the kernel could swap idle memory pages on disk.
This problem is very rare since even a moderately slow instance will touch all
the memory pages often, forcing the kernel to retain all the pages in memory.
* Some processes are generating massive read or write I/Os on the system. Because
files are generally cached, it tends to put pressure on the kernel to increase
the filesystem cache, and therefore generate swapping activity. Please note it
includes Redis RDB and/or AOF background threads which can produce large files.
Fortunately Linux offers good tools to investigate the problem, so when
latency due to swapping is suspected, the simplest thing to do is to check
whether this is the case.

The first thing to do is to check the amount of Redis memory that is swapped
to disk. In order to do so you need to obtain the Redis instance pid:

    $ redis-cli info | grep process_id
    process_id:5454

Now enter the /proc file system directory for this process:

    $ cd /proc/5454

Here you'll find a file called **smaps** that describes the memory layout of
the Redis process (assuming you are using Linux 2.6.16 or newer).
This file contains very detailed information about our process memory maps,
and one field called **Swap** is exactly what we are looking for. However
there is not just a single Swap field, since the smaps file contains a
separate entry for each memory map of our Redis process (the memory layout of
a process is more complex than a simple linear array of pages).

Since we are interested in all the memory swapped by our process, the first
thing to do is to grep for the Swap field across the whole file:

    $ cat smaps | grep 'Swap:'
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 12 kB
    Swap: 156 kB
    Swap: 8 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 4 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 4 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 4 kB
    Swap: 4 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB
    Swap: 0 kB

If everything is 0 kB, or if there are sporadic 4k entries, everything is
perfectly normal. Actually in our example instance (that of a real web
site running Redis and serving hundreds of users every second) there are a
few entries that show more swapped pages. To investigate whether this is a
serious problem or not, we change our command to also print the size of each
memory map:

    $ cat smaps | egrep '^(Swap|Size)'
    Size: 316 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 8 kB
    Swap: 0 kB
    Size: 40 kB
    Swap: 0 kB
    Size: 132 kB
    Swap: 0 kB
    Size: 720896 kB
    Swap: 12 kB
    Size: 4096 kB
    Swap: 156 kB
    Size: 4096 kB
    Swap: 8 kB
    Size: 4096 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 1272 kB
    Swap: 0 kB
    Size: 8 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 16 kB
    Swap: 0 kB
    Size: 84 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 8 kB
    Swap: 4 kB
    Size: 8 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 4 kB
    Size: 144 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 4 kB
    Size: 12 kB
    Swap: 4 kB
    Size: 108 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB
    Size: 272 kB
    Swap: 0 kB
    Size: 4 kB
    Swap: 0 kB

As you can see from the output, there is a map of 720896 kB
(with just 12 kB swapped) and 156 kB more swapped in another map:
basically a very small amount of our memory is swapped so this is not
going to create any problem at all.
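Summing the `Swap:` fields by hand gets tedious; a small helper can total them (a sketch that parses smaps-style text, shown here against an inline sample instead of a live `/proc` file):

```python
def swapped_kb(smaps_text):
    """Sum every 'Swap:' field (in kB) from /proc/<pid>/smaps output."""
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("Swap:"):
            total += int(line.split()[1])
    return total

# In practice: swapped_kb(open("/proc/5454/smaps").read())
sample = "Size: 720896 kB\nSwap: 12 kB\nSize: 4096 kB\nSwap: 156 kB\n"
print(swapped_kb(sample))  # 168
```

The same one-liner can be done with `awk '/^Swap:/ {s+=$2} END {print s}' smaps` if you prefer to stay in the shell.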
If instead a non-trivial amount of the process memory is swapped to disk, your
latency problems are likely related to swapping. If this is the case with your
Redis instance, you can further verify it using the **vmstat** command:

    $ vmstat 1
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
     0  0   3980 697932 147180 1406456    0    0     2     2    2    0  4  4 91  0
     0  0   3980 697428 147180 1406580    0    0     0     0 19088 16104  9  6 84  0
     0  0   3980 697296 147180 1406616    0    0     0    28 18936 16193  7  6 87  0
     0  0   3980 697048 147180 1406640    0    0     0     0 18613 15987  6  6 88  0
     2  0   3980 696924 147180 1406656    0    0     0     0 18744 16299  6  5 88  0
     0  0   3980 697048 147180 1406688    0    0     0     4 18520 15974  6  6 88  0
    ^C

The interesting parts of the output for our needs are the two columns **si**
and **so**, which count the amount of memory swapped in from and out to the
swap file. If you see non-zero counts in those two columns then there is
swapping activity in your system.
Finally, the **iostat** command can be used to check the global I/O activity of
the system.

    $ iostat -xk 1
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
              13.55    0.04    2.92    0.53    0.00   82.95
    Device:  rrqm/s wrqm/s   r/s   w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await svctm %util
    sda        0.77   0.00  0.01  0.00   0.40   0.00    73.65     0.00  3.62  2.58  0.00
    sdb        1.27   4.75  0.82  3.54  38.00  32.32    32.19     0.11 24.80  4.24  1.85

If your latency problem is due to Redis memory being swapped to disk, you need
to lower the memory pressure in your system, either by adding more RAM if Redis
is using more memory than is available, or by avoiding running other
memory-hungry processes on the same system.
Latency due to AOF and disk I/O
-------------------------------
Another source of latency is the Append Only File support in Redis.
The AOF basically uses two system calls to accomplish its work. One is
write(2), used to write data to the append-only file, and the other is
fdatasync(2), used to flush the kernel file buffers to disk in order to
ensure the durability level specified by the user.
Both the write(2) and fdatasync(2) calls can be sources of latency.
For instance write(2) can block both when there is a system-wide sync
in progress, and when the output buffers are full and the kernel requires
a flush to disk in order to accept new writes.
The fdatasync(2) call is a worse source of latency: with many combinations
of kernels and file systems it can take from a few milliseconds to
a few seconds to complete, especially when some other process is doing I/O.
For this reason, since Redis 2.4, Redis performs the fdatasync(2) call
in a different thread when possible.
We'll see how configuration can affect the amount and source of latency
when using the AOF file.
The AOF can be configured to perform a fsync on disk in three different
ways using the **appendfsync** configuration option (this setting can be
modified at runtime using the **CONFIG SET** command).
* When appendfsync is set to the value of **no** Redis performs no fsync.
In this configuration the only source of latency can be write(2).
When this happens there is usually no solution, since the disk simply cannot
cope with the speed at which Redis is receiving data. However this is
uncommon if the disk is not seriously slowed down by other processes doing
I/O.
* When appendfsync is set to the value of **everysec** Redis performs a
fsync every second. It uses a different thread, and if the fsync is still
in progress Redis uses a buffer to delay the write(2) call up to two seconds
(since write would block on Linux if a fsync is in progress against the
same file). However if the fsync is taking too long Redis will eventually
perform the write(2) call even if the fsync is still in progress, and this
can be a source of latency.
* When appendfsync is set to the value of **always** a fsync is performed
at every write operation before replying back to the client with an OK code
(actually Redis will try to cluster many commands executed at the same time
into a single fsync). In this mode performance is very low in general and
it is strongly recommended to use a fast disk and a file system implementation
that can perform the fsync in a short time.
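The cost gap between these settings can be felt with a tiny experiment that appends records with and without forcing a sync after every write. This is a sketch, not Redis code: it uses the portable `os.fsync` where Redis uses fdatasync(2) on Linux, and the second loop models the per-write cost of **always**:

```python
import os
import tempfile
import time

def append_record(fd, payload, force_sync):
    """One AOF-style append: write(2), then optionally a sync to disk,
    as appendfsync=always does before replying to the client."""
    os.write(fd, payload)
    if force_sync:
        os.fsync(fd)  # stands in for the fdatasync(2) used on Linux

fd, path = tempfile.mkstemp()
record = b"*3\r\n$3\r\nset\r\n$1\r\nk\r\n$1\r\nv\r\n"  # AOF-like protocol blob
for force_sync in (False, True):
    start = time.monotonic()
    for _ in range(200):
        append_record(fd, record, force_sync)
    elapsed_ms = (time.monotonic() - start) * 1e3
    print(f"force_sync={force_sync}: {elapsed_ms:.1f} ms for 200 appends")
os.close(fd)
os.unlink(path)
```

On a typical disk the synced loop is slower by orders of magnitude, and on a busy disk each individual sync can stall for milliseconds or more, which is exactly the latency the **everysec** background thread is designed to hide.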
Most Redis users will use either the **no** or **everysec** setting for the
appendfsync configuration directive. The suggestion for minimum latency is
to avoid other processes doing I/O in the same system.
Using an SSD disk can help as well, but usually even non-SSD disks perform
well with the append-only file if the disk is not shared with other I/O-heavy
workloads, since Redis writes to the append-only file without performing any
seeks.
If you want to investigate your latency issues related to the append only
file you can use the strace command under Linux:

    sudo strace -p $(pidof redis-server) -T -e trace=fdatasync

The above command will show all the fdatasync(2) system calls performed by
Redis in the main thread. With the above command you'll not see the
fdatasync system calls performed by the background thread when the
appendfsync config option is set to **everysec**. In order to do so
just add the -f switch to strace.
If you wish you can also see both fdatasync and write system calls with the
following command:

    sudo strace -p $(pidof redis-server) -T -e trace=fdatasync,write

However since write(2) is also used in order to write data to the client
sockets this will likely show too many things unrelated to disk I/O.
Apparently there is no way to tell strace to just show slow system calls so
I use the following command:

    sudo strace -f -p $(pidof redis-server) -T -e trace=fdatasync,write 2>&1 | grep -v '0.0' | grep -v unfinished

Latency generated by expires
----------------------------
Redis evicts expired keys in two ways:

+ One *lazy* way expires a key when it is requested by a command and it is found to be already expired.
+ One *active* way expires a few keys every 100 milliseconds.
The active expiring is designed to be adaptive. An expire cycle is started every 100 milliseconds (10 times per second), and will do the following:
+ Sample `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` keys, evicting all the keys already expired.
+ If more than 25% of the keys were found expired, repeat.
Given that `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` is set to 20 by default, and the process is performed ten times per second, usually just 200 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time expiring just 200 keys per second has no effect on the latency of a Redis instance.
However the algorithm is adaptive and will loop if it finds more than 25% of keys already expired in the set of sampled keys. Given that we run the algorithm ten times per second, the unlucky event is more than 25% of the keys in our random sample expiring *within the same second*.
Basically this means that **if the database has many, many keys expiring in the same second, and these make up at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of keys already expired below 25%.
This approach is needed in order to avoid using too much memory for keys that are already expired, and it is usually harmless since it is unusual for a large number of keys to expire in the exact same second; still, it is not impossible if the user used `EXPIREAT` extensively with the same Unix time.
In short: be aware that many keys expiring at the same moment can be a source of latency.
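The adaptive active-expire loop described above can be simulated in a few lines (a simplified model: real Redis also bounds the time spent per cycle, which this sketch omits):

```python
import random

LOOKUPS_PER_LOOP = 20  # ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP default

def active_expire_cycle(volatile_keys, now):
    """One 100 ms cycle: sample 20 keys, evict the expired ones, and
    repeat while more than 25% of the sample was found expired."""
    evicted = 0
    while volatile_keys:
        n = min(LOOKUPS_PER_LOOP, len(volatile_keys))
        sample = random.sample(list(volatile_keys), n)
        expired = [k for k in sample if volatile_keys[k] <= now]
        for k in expired:
            del volatile_keys[k]
        evicted += len(expired)
        if len(expired) <= n // 4:  # 25% or fewer expired: stop looping
            break
    return evicted

# 10000 volatile keys, 40% of them already expired: the cycle keeps
# looping (and blocking) until the expired fraction drops toward 25%.
keys = {f"k{i}": (0 if i % 5 < 2 else 10 ** 9) for i in range(10000)}
print(active_expire_cycle(keys, now=1))
```

With only a few expired keys the loop exits after a single sample, but when a large fraction of the key space expires in the same second it keeps iterating, which is the blocking behavior the paragraph above warns about.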
Redis software watchdog
-----------------------
Redis 2.6 introduces the *Redis Software Watchdog*, a debugging tool
designed to track latency problems that, for one reason or another,
escaped analysis with normal tools.
The software watchdog is an experimental feature. While it is designed to
be used in production environments care should be taken to backup the database
before proceeding as it could possibly have unexpected interactions with the
normal execution of the Redis server.
It is important to use it only as *last resort* when there is no way to track the issue by other means.
This is how this feature works:
* The user enables the software watchdog using the `CONFIG SET` command.
* Redis starts monitoring itself constantly.
* If Redis detects that the server is blocked in some operation that is not returning fast enough, and that may be the source of the latency issue, a low level report about where the server is blocked is dumped on the log file.
* The user contacts the developers writing a message in the Redis Google Group, including the watchdog report in the message.
Note that this feature cannot be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes.
To enable the feature just use the following:

    CONFIG SET watchdog-period 500

The period is specified in milliseconds. In the above example I told Redis to log latency issues only if the server detects a delay of 500 milliseconds or greater. The minimum configurable period is 200 milliseconds.
When you are done with the software watchdog you can turn it off setting the `watchdog-period` parameter to 0. **Important:** remember to do this because keeping the instance with the watchdog turned on for a longer time than needed is generally not a good idea.
The following is an example of what you'll see printed in the log file once the software watchdog detects a delay longer than the configured one:

    [8547 | signal handler] (1333114359)
    --- WATCHDOG TIMER EXPIRED ---
    /lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
    /lib/libpthread.so.0(+0xf8f0) [0x7f16b5f158f0]
    /lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
    /lib/libc.so.6(usleep+0x34) [0x7f16b5c62844]
    ./redis-server(debugCommand+0x3e1) [0x43ab41]
    ./redis-server(call+0x5d) [0x415a9d]
    ./redis-server(processCommand+0x375) [0x415fc5]
    ./redis-server(processInputBuffer+0x4f) [0x4203cf]
    ./redis-server(readQueryFromClient+0xa0) [0x4204e0]
    ./redis-server(aeProcessEvents+0x128) [0x411b48]
    ./redis-server(aeMain+0x2b) [0x411dbb]
    ./redis-server(main+0x2b6) [0x418556]
    /lib/libc.so.6(__libc_start_main+0xfd) [0x7f16b5ba1c4d]
    ./redis-server() [0x411099]
    ------

Note: in the example the **DEBUG SLEEP** command was used in order to block the server. The stack trace is different if the server blocks in a different context.
If you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is. | redis | title Diagnosing latency issues linkTitle Latency diagnosis weight 1 description Finding the causes of slow responses aliases topics latency docs reference optimization latency This document will help you understand what the problem could be if you are experiencing latency problems with Redis In this context latency is the maximum delay between the time a client issues a command and the time the reply to the command is received by the client Usually Redis processing time is extremely low in the sub microsecond range but there are certain conditions leading to higher latency figures I ve little time give me the checklist The following documentation is very important in order to run Redis in a low latency fashion However I understand that we are busy people so let s start with a quick checklist If you fail following these steps please return here to read the full documentation 1 Make sure you are not running slow commands that are blocking the server Use the Redis Slow Log feature commands slowlog to check this 2 For EC2 users make sure you use HVM based modern EC2 instances like m3 medium Otherwise fork is too slow 3 Transparent huge pages must be disabled from your kernel Use echo never sys kernel mm transparent hugepage enabled to disable them and restart your Redis process 4 If you are using a virtual machine it is possible that you have an intrinsic latency that has nothing to do with Redis Check the minimum latency you can expect from your runtime environment using redis cli intrinsic latency 100 Note you need to run this command in the server not in the client 5 Enable and use the Latency monitor topics latency monitor feature of Redis in order to get a human readable description of the latency events and causes in your Redis 
instance In general use the following table for durability VS latency performance tradeoffs ordered from stronger safety to better latency 1 AOF fsync always this is very slow you should use it only if you know what you are doing 2 AOF fsync every second this is a good compromise 3 AOF fsync every second no appendfsync on rewrite option set to yes this is as the above but avoids to fsync during rewrites to lower the disk pressure 4 AOF fsync never Fsyncing is up to the kernel in this setup even less disk pressure and risk of latency spikes 5 RDB Here you have a vast spectrum of tradeoffs depending on the save triggers you configure And now for people with 15 minutes to spend the details Measuring latency If you are experiencing latency problems you probably know how to measure it in the context of your application or maybe your latency problem is very evident even macroscopically However redis cli can be used to measure the latency of a Redis server in milliseconds just try redis cli latency h host p port Using the internal Redis latency monitoring subsystem Since Redis 2 8 13 Redis provides latency monitoring capabilities that are able to sample different execution paths to understand where the server is blocking This makes debugging of the problems illustrated in this documentation much simpler so we suggest enabling latency monitoring ASAP Please refer to the Latency monitor documentation topics latency monitor While the latency monitoring sampling and reporting capabilities will make it simpler to understand the source of latency in your Redis system it is still advised that you read this documentation extensively to better understand the topic of Redis and latency spikes Latency baseline There is a kind of latency that is inherently part of the environment where you run Redis that is the latency provided by your operating system kernel and if you are using virtualization by the hypervisor you are using While this latency can t be removed it is important to 
study it because it is the baseline or in other words you won t be able to achieve a Redis latency that is better than the latency that every process running in your environment will experience because of the kernel or hypervisor implementation or setup We call this kind of latency intrinsic latency and redis cli starting from Redis version 2 8 7 is able to measure it This is an example run under Linux 3 11 0 running on an entry level server Note the argument 100 is the number of seconds the test will be executed The more time we run the test the more likely we ll be able to spot latency spikes 100 seconds is usually appropriate however you may want to perform a few runs at different times Please note that the test is CPU intensive and will likely saturate a single core in your system redis cli intrinsic latency 100 Max latency so far 1 microseconds Max latency so far 16 microseconds Max latency so far 50 microseconds Max latency so far 53 microseconds Max latency so far 83 microseconds Max latency so far 115 microseconds Note redis cli in this special case needs to run in the server where you run or plan to run Redis not in the client In this special mode redis cli does not connect to a Redis server at all it will just try to measure the largest time the kernel does not provide CPU time to run to the redis cli process itself In the above example the intrinsic latency of the system is just 0 115 milliseconds or 115 microseconds which is a good news however keep in mind that the intrinsic latency may change over time depending on the load of the system Virtualized environments will not show so good numbers especially with high load or if there are noisy neighbors The following is a run on a Linode 4096 instance running Redis and Apache redis cli intrinsic latency 100 Max latency so far 573 microseconds Max latency so far 695 microseconds Max latency so far 919 microseconds Max latency so far 1606 microseconds Max latency so far 3191 microseconds Max latency so far 
9243 microseconds.
Max latency so far: 9671 microseconds.

Here we have an intrinsic latency of 9.7 milliseconds: this means that we can't ask better than that to Redis. However other runs at different times, in different virtualization environments with higher load, or with noisy neighbors, can easily show even worse values. We were able to measure up to 40 milliseconds in systems otherwise apparently running normally.

## Latency induced by network and communication

Clients connect to Redis using a TCP/IP connection or a Unix domain connection. The typical latency of a 1 Gbit/s network is about 200 us, while the latency with a Unix domain socket can be as low as 30 us. It actually depends on your network and system hardware. On top of the communication itself, the system adds some more latency (due to thread scheduling, CPU caches, NUMA placement, etc.). System induced latencies are significantly higher on a virtualized environment than on a physical machine.

The consequence is that even if Redis processes most commands in the sub-microsecond range, a client performing many roundtrips to the server will have to pay for these network and system related latencies.

An efficient client will therefore try to limit the number of roundtrips by pipelining several commands together. This is fully supported by the server and most clients. Aggregated commands like `MSET`/`MGET` can be also used for that purpose. Starting with Redis 2.4, a number of commands also support variadic parameters for all data types.

Here are some guidelines:

* If you can afford it, prefer a physical machine over a VM to host the server.
* Do not systematically connect/disconnect to the server (especially true for web based applications). Keep your connections as long lived as possible.
* If your client is on the same host as the server, use Unix domain sockets.
* Prefer to use aggregated commands (`MSET`/`MGET`), or commands with variadic parameters (if possible) over pipelining.
* Prefer to use pipelining (if possible) over sequences of roundtrips.
* Redis supports Lua server-side scripting to cover cases that are not suitable for raw pipelining (for instance when the result of a command is an input for the following commands).

On Linux, some people can achieve better latencies by playing with process placement (taskset), cgroups, real-time priorities (chrt), NUMA configuration (numactl), or by using a low latency kernel. Please note vanilla Redis is not really suitable to be bound on a **single** CPU core. Redis can fork background tasks that can be extremely CPU consuming, like `BGSAVE` or `BGREWRITEAOF`. These tasks must **never** run on the same core as the main event loop.

In most situations these kinds of system level optimizations are not needed. Only do them if you require them, and if you are familiar with them.

## Single threaded nature of Redis

Redis uses a *mostly* single threaded design. This means that a single process serves all the client requests, using a technique called **multiplexing**. This means that Redis can serve a single request in every given moment, so all the requests are served sequentially. This is very similar to how Node.js works as well. However, both products are not often perceived as being slow. This is caused in part by the small amount of time to complete a single request, but primarily because these products are designed to not block on system calls, such as reading data from or writing data to a socket.

I said that Redis is *mostly* single threaded since actually from Redis 2.4 we use threads in Redis in order to perform some slow I/O operations in the background, mainly related to disk I/O, but this does not change the fact that Redis serves all the requests using a single thread.

## Latency generated by slow commands

A consequence of being single threaded is that when a request is slow to serve, all the other clients will wait for this request to be served. When executing normal commands, like `GET` or `SET` or `LPUSH` this is not a problem at all since these commands are executed in constant (and very small) time. However there are commands operating on many elements, like `SORT`, `LREM`, `SUNION` and others. For instance taking the intersection of two big sets can take a considerable amount of time.

The algorithmic complexity of all commands is documented. A good practice is to systematically check it when using commands you are not familiar with.

If you have latency concerns you should either not use slow commands against values composed of many elements, or you should run a replica using Redis replication where you run all your slow queries.

It is possible to monitor slow commands using the Redis Slow Log feature (see the `SLOWLOG` command).

Additionally, you can use your favorite per-process monitoring program (top, htop, prstat, etc.) to quickly check the CPU consumption of the main Redis process. If it is high while the traffic is not, it is usually a sign that slow commands are used.

**IMPORTANT NOTE**: a VERY common source of latency generated by the execution of slow commands is the use of the `KEYS` command in production environments. `KEYS`, as documented in the Redis documentation, should only be used for debugging purposes. Since Redis 2.8 new commands were introduced in order to iterate the key space and other large collections incrementally, please check the `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` commands for more information.

## Latency generated by fork

In order to generate the RDB file in background, or to rewrite the Append Only File if AOF persistence is enabled, Redis has to fork background processes. The fork operation (running in the main thread) can induce latency by itself.

Forking is an expensive operation on most Unix-like systems, since it involves copying a good number of objects linked to the process. This is especially true for the page table associated to the virtual memory mechanism.

For instance on a Linux/AMD64 system, the memory is divided in 4 kB pages. To convert virtual addresses to physical addresses, each process stores a page table (actually represented as a tree) containing at least a pointer per page of the address space of the process. So a large 24 GB Redis instance requires a page table of 24 GB / 4 kB * 8 = 48 MB.

When a background save is performed, this instance will have to be forked, which will involve allocating and copying 48 MB of memory. It takes time and CPU, especially on virtual machines where allocation and initialization of a large memory chunk can be expensive.

## Fork time in different systems

Modern hardware is pretty fast at copying the page table, but Xen is not. The problem with Xen is not virtualization-specific, but Xen-specific. For instance using VMware or Virtual Box does not result into slow fork time. The following is a table that compares fork time for different Redis instance sizes. Data is obtained performing a `BGSAVE` and looking at the `latest_fork_usec` field in the `INFO` command output.

However the good news is that **new types of EC2 HVM based instances are much better with fork times**, almost on par with physical servers, so for example using m3.medium (or better) instances will provide good results.

* **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB).
* **Linux running on physical machine (Unknown HW)** 6.1GB RSS forked in 80 milliseconds (13.1 milliseconds per GB).
* **Linux running on physical machine (Xeon @ 2.27Ghz)** 6.9GB RSS forked into 62 milliseconds (9 milliseconds per GB).
* **Linux VM on 6sync (KVM)** 360 MB RSS forked in 8.2 milliseconds (23.3 milliseconds per GB).
* **Linux VM on EC2, old instance types (Xen)** 6.1GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB).
* **Linux VM on EC2, new instance types (Xen)** 1GB RSS forked in 10 milliseconds (10 milliseconds per GB).
* **Linux VM on Linode (Xen)** 0.9GB RSS forked into 382 milliseconds (424 milliseconds per GB).

As you can see certain VMs running on Xen have a performance hit that is between one and two orders of magnitude. For EC2 users the suggestion is simple: use modern HVM based instances.

## Latency induced by transparent huge pages

Unfortunately when a Linux kernel has transparent huge pages enabled, Redis incurs a big latency penalty after the `fork` call is used in order to persist on disk. Huge pages are the cause of the following issue:

1. Fork is called, two processes with shared huge pages are created.
2. In a busy instance, a few event loop runs will cause commands to target a few thousand pages, causing the copy on write of almost the whole process memory.
3. This will result in big latency and big memory usage.

Make sure to **disable transparent huge pages** using the following command:

```
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

## Latency induced by swapping (operating system paging)

Linux (and many other modern operating systems) is able to relocate memory pages from the memory to the disk, and vice versa, in order to use the system memory efficiently.

If a Redis page is moved by the kernel from the memory to the swap file, when the data stored in this memory page is used by Redis (for example accessing a key stored into this memory page) the kernel will stop the Redis process in order to move the page back into the main memory. This is a slow operation involving random I/Os (compared to accessing a page that is already in memory) and will result into anomalous latency experienced by Redis clients.

The kernel relocates Redis memory pages on disk mainly because of three reasons:

* The system is under memory pressure since the running processes are demanding more physical memory than the amount that is available. The simplest instance of this problem is simply Redis using more memory than is available.
* The Redis instance data set, or part of the data set, is mostly completely idle (never accessed by clients), so the kernel could swap idle memory pages on disk. This problem is very rare since even a moderately slow instance will touch all the memory pages often, forcing the kernel to retain all the pages in memory.
* Some processes are generating massive read or write I/Os on the system. Because files are generally cached, it tends to put pressure on the kernel to increase the filesystem cache, and therefore generate swapping activity. Please note it includes Redis RDB and/or AOF background threads which can produce large files.

Fortunately Linux offers good tools to investigate the problem, so when latency due to swapping is suspected, the simplest thing to do is just to check whether this is the case.

The first thing to do is to check the amount of Redis memory that is swapped on disk. In order to do so you need to obtain the Redis instance pid:

```
$ redis-cli info | grep process_id
process_id:5454
```

Now enter the /proc file system directory for this process:

```
$ cd /proc/5454
```

Here you'll find a file called **smaps** that describes the memory layout of the Redis process (assuming you are using Linux 2.6.16 or newer). This file contains very detailed information about our process memory maps, and one field called **Swap** is exactly what we are looking for. However there is not just a single swap field since the smaps file contains the different memory maps of our Redis process (the memory layout of a process is more complex than a simple linear array of pages).

Since we are interested in all the memory swapped by our process, the first thing to do is to grep for the Swap field across all the file:

```
$ cat smaps | grep 'Swap:'
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                 12 kB
Swap:                156 kB
Swap:                  8 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  4 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  4 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  4 kB
Swap:                  4 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
Swap:                  0 kB
```

If everything is 0 kB, or if there are sporadic 4k entries, everything is perfectly normal. Actually in our example instance (the one of a real web site running Redis and serving hundreds of users every second) there are a few entries that show more swapped pages. To investigate if this is a serious problem or not, we change our command in order to also print the size of the memory map:

```
$ cat smaps | egrep '^(Swap|Size)'
Size:                316 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                  8 kB
Swap:                  0 kB
Size:                 40 kB
Swap:                  0 kB
Size:                132 kB
Swap:                  0 kB
Size:             720896 kB
Swap:                 12 kB
Size:               4096 kB
Swap:                156 kB
Size:               4096 kB
Swap:                  8 kB
Size:               4096 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:               1272 kB
Swap:                  0 kB
Size:                  8 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                 16 kB
Swap:                  0 kB
Size:                 84 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                  8 kB
Swap:                  4 kB
Size:                  8 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  4 kB
Size:                144 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  4 kB
Size:                 12 kB
Swap:                  4 kB
Size:                108 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
Size:                272 kB
Swap:                  0 kB
Size:                  4 kB
Swap:                  0 kB
```

As you can see from the output, there is a map of 720896 kB (with just 12 kB swapped) and 156 kB more swapped in another map: basically a very small amount of our memory is swapped, so this is not going to create any problem at all.

If instead a non trivial amount of the process memory is swapped on disk, your latency problems are likely related to swapping. If this is the case with your Redis instance, you can further verify it using the **vmstat** command:

```
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0   3980 697932 147180 1406456    0    0     2     2    2    0  4  4 91  0
 0  0   3980 697428 147180 1406580    0    0     0     0 19088 16104  9  6 84  0
 0  0   3980 697296 147180 1406616    0    0     0    28 18936 16193  7  6 87  0
 0  0   3980 697048 147180 1406640    0    0     0     0 18613 15987  6  6 88  0
 2  0   3980 696924 147180 1406656    0    0     0     0 18744 16299  6  5 88  0
 0  0   3980 697048 147180 1406688    0    0     0     4 18520 15974  6  6 88  0
^C
```

The interesting part of the output for our needs are the two columns **si** and **so**, that count the amount of memory swapped from/to the swap file. If you see non zero counts in those two columns then there is swapping activity in your system.

Finally, the **iostat** command can be used to check the global I/O activity of the system:

```
$ iostat -xk 1
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.55    0.04    2.92    0.53    0.00   82.95

Device: rrqm/s wrqm/s   r/s   w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await svctm %util
sda       0.77   0.00  0.01  0.00   0.40   0.00    73.65     0.00  3.62  2.58  0.00
sdb       1.27   4.75  0.82  3.54  38.00  32.32    32.19     0.11 24.80  4.24  1.85
```

If your latency problem is due to Redis memory being swapped on disk, you need to lower the memory pressure in your system, either adding more RAM if Redis is using more memory than the available, or avoiding running other memory hungry processes in the same system.

## Latency due to AOF and disk I/O

Another source of latency is due to the Append Only File support on Redis. The AOF basically uses two system calls to accomplish its work. One is write(2) that is used in order to write data to the append only file, and the other one is fdatasync(2) that is used in order to flush the kernel file buffer on disk, in order to ensure the durability level specified by the user.

Both the write(2) and fdatasync(2) calls can be a source of latency. For instance write(2) can block both when there is a system wide sync in progress, or when the output buffers are full and the kernel requires a flush on disk in order to accept new writes.

The fdatasync(2) call is a worse source of latency, as with many combinations of kernels and file systems it can take from a few milliseconds to a few seconds to complete, especially in the case of some other process doing I/O. For this reason, when possible, Redis does the fdatasync(2) call in a different thread since Redis 2.4.

We'll see how configuration can affect the amount and source of latency when using the AOF file.

The AOF can be configured to perform a fsync on disk in three different ways using the **appendfsync** configuration option (this setting can be modified at runtime using the `CONFIG SET` command):

* When appendfsync is set to the value of **no** Redis performs no fsync. In this configuration the only source of latency can be write(2). When this happens usually there is no solution since simply the disk can't cope with the speed at which Redis is receiving data, however this is uncommon if the disk is not seriously slowed down by other processes doing I/O.
* When appendfsync is set to the value of **everysec** Redis performs a fsync every second. It uses a different thread, and if the fsync is still in progress Redis uses a buffer to delay the write(2) call up to two seconds (since write would block on Linux if a fsync is in progress against the same file). However if the fsync is taking too long, Redis will eventually perform the write(2) call even if the fsync is still in progress, and this can be a source of latency.
* When appendfsync is set to the value of **always** a fsync is performed at every write operation before replying back to the client with an OK code (actually Redis will try to cluster many commands executed at the same time into a single fsync). In this mode performance is very low in general and it is strongly recommended to use a fast disk and a file system implementation that can perform the fsync in short time.

Most Redis users will use either the **no** or **everysec** setting for the appendfsync configuration directive. The suggestion for minimum latency is to avoid other processes doing I/O in the same system. Using an SSD disk can help as well, but usually even non SSD disks perform well with the append only file if the disk is spare, as Redis writes to the append only file without performing any seek.

If you want to investigate your latency issues related to the append only file you can use the strace command under Linux:

```
sudo strace -p $(pidof redis-server) -T -e trace=fdatasync
```

The above command will show all the fdatasync(2) system calls performed by Redis in the main thread. With the above command you'll not see the fdatasync system calls performed by the background thread when the appendfsync config option is set to **everysec**. In order to do so just add the -f switch to strace.

If you wish you can also see both fdatasync and write system calls with the following command:

```
sudo strace -p $(pidof redis-server) -T -e trace=fdatasync,write
```

However since write(2) is also used in order to write data to the client sockets, this will likely show too many things unrelated to disk I/O. Apparently there is no way to tell strace to just show slow system calls, so I use the following command:

```
sudo strace -f -p $(pidof redis-server) -T -e trace=fdatasync,write 2>&1 | grep -v '0.0' | grep -v unfinished
```

## Latency generated by expires

Redis evicts expired keys in two ways:

* One **lazy** way expires a key when it is requested by a command, but it is found to be already expired.
* One **active** way expires a few keys every 100 milliseconds.

The active expiring is designed to be adaptive. An expire cycle is started every 100 milliseconds (10 times per second), and will do the following:

* Sample ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP keys, evicting all the keys already expired.
* If more than 25% of the keys were found expired, repeat.

Given that ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP is set to 20 by default, and the process is performed ten times per second, usually just 200 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time expiring just 200 keys per second has no effect on the latency of a Redis instance.

However the algorithm is adaptive and will loop if it finds more than 25% of keys already expired in the set of sampled keys. But given that we run the algorithm ten times per second, this means that the unlucky event is more than 25% of the keys in our random sample expiring at least *in the same second*.

Basically this means that **if the database has many, many keys expiring in the same second, and these make up at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of keys already expired below 25%.

This approach is needed in order to avoid using too much memory for keys that are already expired, and usually is absolutely harmless since it's strange that a big number of keys are going to expire in the same exact second, but it is not impossible that the user used `EXPIREAT` extensively with the same Unix time.

In short: be aware that many keys expiring at the same moment can be a source of latency.

## Redis software watchdog

Redis 2.6 introduces the *Redis Software Watchdog*, a debugging tool designed to track those latency problems that for one reason or the other escaped an analysis using normal tools.

The software watchdog is an experimental feature. While it is designed to be used in production environments, care should be taken to backup the database before proceeding as it could possibly have unexpected interactions with the normal execution of the Redis server.

It is important to use it only as a *last resort* when there is no way to track the issue by other means.

This is how this feature works:

* The user enables the software watchdog using the `CONFIG SET` command.
* Redis starts monitoring itself constantly.
* If Redis detects that the server is blocked into some operation that is not returning fast enough, and that may be the source of the latency issue, a low level report about where the server is blocked is dumped on the log file.
* The user contacts the developers writing a message in the Redis Google Group, including the watchdog report in the message.

Note that this feature cannot be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes.

To enable the feature just use the following:

```
CONFIG SET watchdog-period 500
```

The period is specified in milliseconds. In the above example I specified to log latency issues only if the server detects a delay of 500 milliseconds or greater. The minimum configurable period is 200 milliseconds.

When you are done with the software watchdog you can turn it off setting the `watchdog-period` parameter to 0. **Important:** remember to do this because keeping the instance with the watchdog turned on for a longer time than needed is generally not a good idea.

The following is an example of what you'll see printed in the log file once the software watchdog detects a delay longer than the configured one:

```
[8547 | signal handler] (1333114359)
--- WATCHDOG TIMER EXPIRED ---
/lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
/lib/libpthread.so.0(+0xf8f0) [0x7f16b5f158f0]
/lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
/lib/libc.so.6(usleep+0x34) [0x7f16b5c62844]
redis-server(debugCommand+0x3e1) [0x43ab41]
redis-server(call+0x5d) [0x415a9d]
redis-server(processCommand+0x375) [0x415fc5]
redis-server(processInputBuffer+0x4f) [0x4203cf]
redis-server(readQueryFromClient+0xa0) [0x4204e0]
redis-server(aeProcessEvents+0x128) [0x411b48]
redis-server(aeMain+0x2b) [0x411dbb]
redis-server(main+0x2b6) [0x418556]
/lib/libc.so.6(__libc_start_main+0xfd) [0x7f16b5ba1c4d]
redis-server [0x411099]
```

Note: in the example the `DEBUG SLEEP` command was used in order to block the server. The stack trace is different if the server blocks in a different context.

If you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is.
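The adaptive behavior of the active expire cycle described in the "Latency generated by expires" section above can be sketched in a few lines of Ruby. This is only an illustrative simulation of the sampling loop, not the actual C implementation; key objects and names here are made up:

```ruby
LOOKUPS_PER_LOOP = 20 # default ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP

# Sample up to 20 keys, evict the expired ones, and repeat while more
# than 25% of the sample was expired. This repeat-while-expired loop is
# exactly what can block Redis when a large fraction of the keyspace
# expires in the same second.
def active_expire_cycle(keys)
  evicted = 0
  loops = 0
  loop do
    loops += 1
    sample = keys.sample([LOOKUPS_PER_LOOP, keys.size].min)
    expired = sample.select { |k| k[:expired] }
    expired.each { |k| keys.delete(k) }
    evicted += expired.size
    # stop when <= 25% of the sample was expired (or nothing is left)
    break if keys.empty? || expired.size <= LOOKUPS_PER_LOOP / 4
  end
  [evicted, loops]
end

# 100 keys all expiring "in the same second": the cycle keeps looping
# until the expired population is gone, instead of stopping after one pass.
keys = Array.new(100) { |i| { id: i, expired: true } }
evicted, loops = active_expire_cycle(keys)
puts "evicted #{evicted} keys in #{loops} loops" # evicted 100 keys in 5 loops
```

With no expired keys in the sample the function returns after a single loop, which is the common, harmless case.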
---
title: Memory optimization
linkTitle: Memory optimization
description: Strategies for optimizing memory usage in Redis
weight: 1
aliases: [
/topics/memory-optimization,
/docs/reference/optimization/memory-optimization
]
---
## Special encoding of small aggregate data types
Since Redis 2.2 many data types are optimized to use less space up to a certain size.
Hashes, Lists, Sets composed of just integers, and Sorted Sets, when smaller than a given number of elements, and up to a maximum element size, are encoded in a very memory-efficient way that uses *up to 10 times less memory* (with 5 times less memory used being the average saving).
This is completely transparent from the point of view of the user and API.
Since this is a CPU / memory tradeoff it is possible to tune the maximum
number of elements and maximum element size for special encoded types
using the following redis.conf directives (defaults are shown):
### Redis <= 6.2
```
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
set-max-intset-entries 512
```
### Redis >= 7.0
```
hash-max-listpack-entries 512
hash-max-listpack-value 64
zset-max-listpack-entries 128
zset-max-listpack-value 64
set-max-intset-entries 512
```
### Redis >= 7.2
The following directives are also available:
```
set-max-listpack-entries 128
set-max-listpack-value 64
```
If a specially encoded value overflows the configured max size,
Redis will automatically convert it into normal encoding.
This operation is very fast for small values,
but if you change the setting in order to use specially encoded values
for much larger aggregate types the suggestion is to run some
benchmarks and tests to check the conversion time.
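You can observe the conversion with the `OBJECT ENCODING` command. The session below is a sketch against a Redis 7.x server (key and field names are just examples; on Redis <= 6.2 the small-hash encoding is reported as `ziplist` instead of `listpack`):

```
127.0.0.1:6379> HSET user:1 name Alice
(integer) 1
127.0.0.1:6379> OBJECT ENCODING user:1
"listpack"
127.0.0.1:6379> HSET user:1 bio "a value longer than hash-max-listpack-value (64 bytes) converts the hash"
(integer) 1
127.0.0.1:6379> OBJECT ENCODING user:1
"hashtable"
```

Note that the conversion is one way: once a key is promoted to the normal encoding it is not converted back, even if the offending field is removed.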
## Using 32-bit instances
When Redis is compiled as a 32-bit target, it uses a lot less memory per key, since pointers are small,
but such an instance will be limited to 4 GB of maximum memory usage.
To compile Redis as 32-bit binary use *make 32bit*.
RDB and AOF files are compatible between 32-bit and 64-bit instances
(and between little and big endian of course) so you can switch from 32 to 64-bit, or the contrary, without problems.
## Bit and byte level operations
Redis 2.2 introduced new bit and byte level operations: `GETRANGE`, `SETRANGE`, `GETBIT` and `SETBIT`.
Using these commands you can treat the Redis string type as a random access array.
For instance, if you have an application where users are identified by a unique progressive integer number,
you can use a bitmap to save information about the subscription of users in a mailing list,
setting the bit for subscribed and clearing it for unsubscribed, or the other way around.
With 100 million users this data will take just 12 megabytes of RAM in a Redis instance.
You can do the same using `GETRANGE` and `SETRANGE` to store one byte of information for each user.
This is just an example but it is possible to model several problems in very little space with these new primitives.
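The arithmetic behind the 12 megabytes claim, and the byte/bit addressing that `SETBIT`/`GETBIT` perform, can be sketched in plain Ruby (no server needed; `set_bit`/`get_bit` are illustrative helpers mirroring the Redis bit layout, where bit 0 is the most significant bit of the first byte):

```ruby
# One bit per user: 100 million users fit in 100_000_000 / 8 bytes.
users = 100_000_000
bytes = users / 8
puts "#{bytes} bytes ~= #{(bytes / 1024.0 / 1024.0).round(1)} MB"
# 12500000 bytes ~= 11.9 MB

# How a bit offset maps onto the string value, as SETBIT/GETBIT do it.
def set_bit(buf, offset)
  byte = offset / 8
  buf[byte] = (buf.getbyte(byte) | (0x80 >> (offset % 8))).chr
end

def get_bit(buf, offset)
  (buf.getbyte(offset / 8) >> (7 - offset % 8)) & 1
end

bitmap = "\x00".b * 16      # room for 128 users
set_bit(bitmap, 100)        # user 100 subscribes
puts get_bit(bitmap, 100)   # 1
puts get_bit(bitmap, 101)   # 0
```

Against a real server the equivalent operations are simply `SETBIT subscribers 100 1` and `GETBIT subscribers 100`; Redis grows the string automatically up to the highest offset touched.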
## Use hashes when possible
Small hashes are encoded in a very small space, so you should try representing your data using hashes whenever possible.
For instance, if you have objects representing users in a web application,
instead of using different keys for name, surname, email, password, use a single hash with all the required fields.
If you want to know more about this, read the next section.
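For instance, the user object could be stored as a single hash with one `HSET` call instead of four separate top-level keys (the key and field names below are illustrative):

```
# one hash instead of four plain string keys
HSET user:1000 name "John" surname "Smith" email "john@example.com" password "s3cret"
```

As long as the hash stays below the `hash-max-listpack-entries` / `hash-max-listpack-value` thresholds shown earlier, it keeps the compact encoding.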
## Using hashes to abstract a very memory-efficient plain key-value store on top of Redis
I understand the title of this section is a bit scary, but I'm going to explain in detail what this is about.
Basically it is possible to model a plain key-value store using Redis
where values can just be strings, which is not just more memory efficient
than Redis plain keys but also much more memory efficient than memcached.
Let's start with some facts: a few keys use a lot more memory than a single key
containing a hash with a few fields. How is this possible? We use a trick.
In theory to guarantee that we perform lookups in constant time
(also known as O(1) in big O notation) there is the need to use a data structure
with a constant time complexity in the average case, like a hash table.
But many times hashes contain just a few fields. When hashes are small we can
instead just encode them in an O(N) data structure, like a linear
array with length-prefixed key-value pairs. Since we do this only when N
is small, the amortized time for `HGET` and `HSET` commands is still O(1): the
hash will be converted into a real hash table as soon as the number of elements
it contains grows too large (you can configure the limit in redis.conf).
This does not only work well from the point of view of time complexity, but
also from the point of view of constant times since a linear array of key-value pairs happens to play very well with the CPU cache (it has a better
cache locality than a hash table).
However since hash fields and values are not (always) represented as full-featured Redis objects, hash fields can't have an associated time to live
(expire) like a real key, and can only contain a string. But we are okay with
this; it was the intention anyway when the hash data type API was
designed (we trust simplicity more than features, so nested data structures
are not allowed, and expires of single fields are not allowed).
So hashes are memory efficient. This is useful when using hashes
to represent objects or to model other problems when there are groups of
related fields. But what about if we have a plain key-value business?
Imagine we want to use Redis as a cache for many small objects, which can be JSON encoded objects, small HTML fragments, simple key -> boolean values
and so forth. Basically, everything is a string -> string map with small keys
and values.
Now let's assume the objects we want to cache are numbered, like:
* object:102393
* object:1234
* object:5
This is what we can do. Every time we perform a
SET operation to set a new value, we actually split the key into two parts,
one part used as a key, and the other part used as the field name for the hash. For instance, the
object named "object:1234" is actually split into:
* a Key named object:12
* a Field named 34
So we use all the characters but the last two for the key, and the final
two characters for the hash field name. To set our key we use the following
command:
```
HSET object:12 34 somevalue
```
As you can see every hash will end up containing 100 fields, which is an optimal compromise between CPU and memory saved.
There is another important thing to note, with this schema
every hash will have more or
less 100 fields regardless of the number of objects we cached. This is because our objects will always end with a number and not a random string. In some way, the final number can be considered as a form of implicit pre-sharding.
What about small numbers? Like object:2? We handle this case using just
"object:" as a key name, and the whole number as the hash field name.
So object:2 and object:10 will both end inside the key "object:", but one
as field name "2" and one as "10".
How much memory do we save this way?
I used the following Ruby program to test how this works:
```ruby
require 'rubygems'
require 'redis'
USE_OPTIMIZATION = true
def hash_get_key_field(key)
s = key.split(':')
if s[1].length > 2
{ key: s[0] + ':' + s[1][0..-3], field: s[1][-2..-1] }
else
{ key: s[0] + ':', field: s[1] }
end
end
def hash_set(r, key, value)
kf = hash_get_key_field(key)
r.hset(kf[:key], kf[:field], value)
end
def hash_get(r, key)
  kf = hash_get_key_field(key)
  r.hget(kf[:key], kf[:field])
end
r = Redis.new
(0..100_000).each do |id|
key = "object:#{id}"
if USE_OPTIMIZATION
hash_set(r, key, 'val')
else
r.set(key, 'val')
end
end
```
This is the result against a 64 bit instance of Redis 2.2:
* USE_OPTIMIZATION set to true: 1.7 MB of used memory
* USE_OPTIMIZATION set to false: 11 MB of used memory
This is an order of magnitude difference; I think this makes Redis more or less the most
memory efficient plain key-value store out there.
*WARNING*: for this to work, make sure that in your redis.conf you have
something like this:
```
hash-max-zipmap-entries 256
```
Also remember to set the following field accordingly to the maximum size
of your keys and values:
```
hash-max-zipmap-value 1024
```
Every time a hash exceeds the number of elements or element size specified
it will be converted into a real hash table, and the memory saving will be lost.
You may ask, why don't you do this implicitly in the normal key space so that
I don't have to care? There are two reasons: one is that we tend to make
tradeoffs explicit, and this is a clear tradeoff between many things: CPU,
memory, and max element size. The second is that the top-level key space must
support a lot of interesting things like expires, LRU data, and so
forth so it is not practical to do this in a general way.
But the Redis Way is that the user must understand how things work so that they can pick the best compromise, and understand exactly how the system will
behave.
## Memory allocation
To store user keys, Redis allocates at most as much memory as the `maxmemory`
setting enables (however there are small extra allocations possible).
The exact value can be set in the configuration file or set later via
`CONFIG SET` (for more info, see [Using memory as an LRU cache](/docs/reference/eviction)).
There are a few things that should be noted about how Redis manages memory:
* Redis will not always free up (return) memory to the OS when keys are removed.
This is not something special about Redis, but it is how most malloc() implementations work.
For example, if you fill an instance with 5GB worth of data, and then
remove the equivalent of 2GB of data, the Resident Set Size (also known as
the RSS, which is the number of memory pages consumed by the process)
will probably still be around 5GB, even if Redis will claim that the user
memory is around 3GB. This happens because the underlying allocator can't easily release the memory.
For example, often most of the removed keys were allocated on the same pages as the other keys that still exist.
* The previous point means that you need to provision memory based on your
**peak memory usage**. If your workload from time to time requires 10GB, even if
most of the time 5GB could do, you need to provision for 10GB.
* However allocators are smart and are able to reuse free chunks of memory,
so after you free 2GB of your 5GB data set, when you start adding more keys
again, you'll see the RSS (Resident Set Size) stay steady and not grow
more, as you add up to 2GB of additional keys. The allocator is basically
trying to reuse the 2GB of memory previously (logically) freed.
* Because of all this, the fragmentation ratio is not reliable when you
had a memory usage that at the peak is much larger than the currently used memory.
The fragmentation is calculated as the physical memory actually used (the RSS
value) divided by the amount of memory currently in use (as the sum of all
the allocations performed by Redis). Because the RSS reflects the peak memory,
when the (virtually) used memory is low since a lot of keys/values were freed, but the RSS is high, the ratio `RSS / mem_used` will be very high.
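As a quick illustration of that ratio, using made-up numbers matching the 5GB-peak / 3GB-used scenario described above (on a live server the two inputs come from the `used_memory_rss` and `used_memory` fields of `INFO memory`):

```ruby
# Hypothetical figures: RSS stays near the 5GB peak after 2GB of keys
# were deleted, while Redis reports only 3GB of live allocations.
rss      = 5 * 1024**3  # physical pages still held by the process
mem_used = 3 * 1024**3  # sum of allocations Redis currently tracks

ratio = (rss.to_f / mem_used).round(2)
puts "mem_fragmentation_ratio: #{ratio}"  # mem_fragmentation_ratio: 1.67
```

A ratio well above 1 right after a big deletion is therefore expected allocator behavior, not necessarily real fragmentation.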
If `maxmemory` is not set Redis will keep allocating memory as it sees
fit and thus it can (gradually) eat up all your free memory.
Therefore it is generally advisable to configure some limits. You may also
want to set `maxmemory-policy` to `noeviction` (which is *not* the default
value in some older versions of Redis).
It makes Redis return an out-of-memory error for write commands if and when it reaches the
limit - which in turn may result in errors in the application but will not render the
whole machine dead because of memory starvation.
structure with a constant time complexity in the average case like a hash table But many times hashes contain just a few fields When hashes are small we can instead just encode them in an O N data structure like a linear array with length prefixed key value pairs Since we do this only when N is small the amortized time for HGET and HSET commands is still O 1 the hash will be converted into a real hash table as soon as the number of elements it contains grows too large you can configure the limit in redis conf This does not only work well from the point of view of time complexity but also from the point of view of constant times since a linear array of key value pairs happens to play very well with the CPU cache it has a better cache locality than a hash table However since hash fields and values are not always represented as full featured Redis objects hash fields can t have an associated time to live expire like a real key and can only contain a string But we are okay with this this was the intention anyway when the hash data type API was designed we trust simplicity more than features so nested data structures are not allowed as expires of single fields are not allowed So hashes are memory efficient This is useful when using hashes to represent objects or to model other problems when there are group of related fields But what about if we have a plain key value business Imagine we want to use Redis as a cache for many small objects which can be JSON encoded objects small HTML fragments simple key boolean values and so forth Basically anything is a string string map with small keys and values Now let s assume the objects we want to cache are numbered like object 102393 object 1234 object 5 This is what we can do Every time we perform a SET operation to set a new value we actually split the key into two parts one part used as a key and the other part used as the field name for the hash For instance the object named object 1234 is actually split into a Key named 
object 12 a Field named 34 So we use all the characters but the last two for the key and the final two characters for the hash field name To set our key we use the following command HSET object 12 34 somevalue As you can see every hash will end up containing 100 fields which is an optimal compromise between CPU and memory saved There is another important thing to note with this schema every hash will have more or less 100 fields regardless of the number of objects we cached This is because our objects will always end with a number and not a random string In some way the final number can be considered as a form of implicit pre sharding What about small numbers Like object 2 We handle this case using just object as a key name and the whole number as the hash field name So object 2 and object 10 will both end inside the key object but one as field name 2 and one as 10 How much memory do we save this way I used the following Ruby program to test how this works ruby require rubygems require redis USE OPTIMIZATION true def hash get key field key s key split if s 1 length 2 key s 0 s 1 0 3 field s 1 2 1 else key s 0 field s 1 end end def hash set r key value kf hash get key field key r hset kf key kf field value end def hash get r key value kf hash get key field key r hget kf key kf field value end r Redis new 0 100 000 each do id key object id if USE OPTIMIZATION hash set r key val else r set key val end end This is the result against a 64 bit instance of Redis 2 2 USE OPTIMIZATION set to true 1 7 MB of used memory USE OPTIMIZATION set to false 11 MB of used memory This is an order of magnitude I think this makes Redis more or less the most memory efficient plain key value store out there WARNING for this to work make sure that in your redis conf you have something like this hash max zipmap entries 256 Also remember to set the following field accordingly to the maximum size of your keys and values hash max zipmap value 1024 Every time a hash exceeds the number of 
elements or element size specified it will be converted into a real hash table and the memory saving will be lost You may ask why don t you do this implicitly in the normal key space so that I don t have to care There are two reasons one is that we tend to make tradeoffs explicit and this is a clear tradeoff between many things CPU memory and max element size The second is that the top level key space must support a lot of interesting things like expires LRU data and so forth so it is not practical to do this in a general way But the Redis Way is that the user must understand how things work so that he can pick the best compromise and to understand how the system will behave exactly Memory allocation To store user keys Redis allocates at most as much memory as the maxmemory setting enables however there are small extra allocations possible The exact value can be set in the configuration file or set later via CONFIG SET for more info see Using memory as an LRU cache docs reference eviction There are a few things that should be noted about how Redis manages memory Redis will not always free up return memory to the OS when keys are removed This is not something special about Redis but it is how most malloc implementations work For example if you fill an instance with 5GB worth of data and then remove the equivalent of 2GB of data the Resident Set Size also known as the RSS which is the number of memory pages consumed by the process will probably still be around 5GB even if Redis will claim that the user memory is around 3GB This happens because the underlying allocator can t easily release the memory For example often most of the removed keys were allocated on the same pages as the other keys that still exist The previous point means that you need to provision memory based on your peak memory usage If your workload from time to time requires 10GB even if most of the time 5GB could do you need to provision for 10GB However allocators are smart and are able to reuse 
free chunks of memory so after you free 2GB of your 5GB data set when you start adding more keys again you ll see the RSS Resident Set Size stay steady and not grow more as you add up to 2GB of additional keys The allocator is basically trying to reuse the 2GB of memory previously logically freed Because of all this the fragmentation ratio is not reliable when you had a memory usage that at the peak is much larger than the currently used memory The fragmentation is calculated as the physical memory actually used the RSS value divided by the amount of memory currently in use as the sum of all the allocations performed by Redis Because the RSS reflects the peak memory when the virtually used memory is low since a lot of keys values were freed but the RSS is high the ratio RSS mem used will be very high If maxmemory is not set Redis will keep allocating memory as it sees fit and thus it can gradually eat up all your free memory Therefore it is generally advisable to configure some limits You may also want to set maxmemory policy to noeviction which is not the default value in some older versions of Redis It makes Redis return an out of memory error for write commands if and when it reaches the limit which in turn may result in errors in the application but will not render the whole machine dead because of memory starvation |
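A minimal sketch of the limits discussed above, as a `redis.conf` fragment (the 4gb value is purely illustrative, not a recommendation; tune it to your peak usage):

    # Cap user-key memory and refuse writes instead of evicting when full
    maxmemory 4gb
    maxmemory-policy noeviction

The same settings can also be changed at runtime, e.g. `redis-cli config set maxmemory 4gb`.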
---
title: "Redis benchmark"
linkTitle: "Benchmarking"
weight: 1
description: >
Using the redis-benchmark utility on a Redis server
aliases: [
/topics/benchmarks,
/docs/reference/optimization/benchmarks,
/docs/reference/optimization/benchmarks.md
]
---
Redis includes the `redis-benchmark` utility that simulates running commands done
by N clients while at the same time sending M total queries. The utility provides
a default set of tests, or you can supply a custom set of tests.
The following options are supported:
Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]
-h <hostname> Server hostname (default 127.0.0.1)
-p <port> Server port (default 6379)
-s <socket> Server socket (overrides host and port)
-a <password> Password for Redis Auth
-c <clients> Number of parallel connections (default 50)
-n <requests> Total number of requests (default 100000)
-d <size> Data size of SET/GET value in bytes (default 3)
--dbnum <db> SELECT the specified db number (default 0)
-k <boolean> 1=keep alive 0=reconnect (default 1)
-r <keyspacelen> Use random keys for SET/GET/INCR, random values for SADD
Using this option the benchmark will expand the string __rand_int__
inside an argument with a 12 digits number in the specified range
from 0 to keyspacelen-1. The substitution changes every time a command
is executed. Default tests use this to hit random keys in the
specified range.
-P <numreq> Pipeline <numreq> requests. Default 1 (no pipeline).
-q Quiet. Just show query/sec values
--csv Output in CSV format
-l Loop. Run the tests forever
-t <tests> Only run the comma separated list of tests. The test
names are the same as the ones produced as output.
-I Idle mode. Just open N idle connections and wait.
You need to have a running Redis instance before launching the benchmark.
You can run the benchmarking utility like so:
redis-benchmark -q -n 100000
### Running only a subset of the tests
You don't need to run all the default tests every time you execute `redis-benchmark`.
For example, to select only a subset of tests, use the `-t` option
as in the following example:
$ redis-benchmark -t set,lpush -n 100000 -q
SET: 74239.05 requests per second
LPUSH: 79239.30 requests per second
This example runs the tests for the `SET` and `LPUSH` commands and uses quiet mode (see the `-q` switch).
You can even benchmark a specific command:
$ redis-benchmark -n 100000 -q script load "redis.call('set','foo','bar')"
script load redis.call('set','foo','bar'): 69881.20 requests per second
### Selecting the size of the key space
By default, the benchmark runs against a single key. In Redis the difference
between such a synthetic benchmark and a real one is not huge since it is an
in-memory system; however, it is possible to stress cache misses and in general
to simulate a more real-world work load by using a large key space.
This is obtained by using the `-r` switch. For instance if I want to run
one million SET operations, using a random key for every operation out of
100k possible keys, I'll use the following command line:
$ redis-cli flushall
OK
$ redis-benchmark -t set -r 100000 -n 1000000
====== SET ======
1000000 requests completed in 13.86 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.76% `<=` 1 milliseconds
99.98% `<=` 2 milliseconds
100.00% `<=` 3 milliseconds
100.00% `<=` 3 milliseconds
72144.87 requests per second
$ redis-cli dbsize
(integer) 99993
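The `__rand_int__` expansion described under the `-r` option also works when you supply your own command, so you can benchmark an arbitrary command over a random key space. A sketch (the key name pattern is illustrative):

```shell
# SETs spread over 100k random keys by embedding __rand_int__ in the key name
redis-benchmark -q -r 100000 -n 1000000 set key:__rand_int__ somevalue
```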
### Using pipelining
By default every client (the benchmark simulates 50 clients if not otherwise
specified with `-c`) sends the next command only when the reply of the previous
command is received, this means that the server will likely need a read call
in order to read each command from every client, and the network round-trip time (RTT) is paid for every command as well.
Redis supports [pipelining](/topics/pipelining), so it is possible to send
multiple commands at once, a feature often exploited by real world applications.
Redis pipelining is able to dramatically improve the number of operations per
second a server is able to deliver.
Consider this example of running the benchmark using a
pipelining of 16 commands:
$ redis-benchmark -n 1000000 -t set,get -P 16 -q
SET: 403063.28 requests per second
GET: 508388.41 requests per second
Using pipelining results in a significant increase in performance.
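To choose a pipeline depth that matches your application, it can help to sweep a few values of `-P` and compare the results. A quick sketch (the depths and request count are arbitrary; absolute numbers depend entirely on your setup):

```shell
# Compare GET throughput at several pipeline depths
for p in 1 4 16 64; do
  echo "pipeline depth: $p"
  redis-benchmark -t get -n 200000 -P "$p" -q
done
```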
### Pitfalls and misconceptions
The first point is obvious: the golden rule of a useful benchmark is to
only compare apples and apples. You can compare different versions of Redis on the same workload or the same version of Redis, but with
different options. If you plan to compare Redis to something else, then it is
important to evaluate the functional and technical differences, and take them
into account.
+ Redis is a server: all commands involve network or IPC round trips. It is meaningless to compare it to embedded data stores, because the cost of most operations is primarily in network/protocol management.
+ Redis commands return an acknowledgment for all usual commands. Some other data stores do not. Comparing Redis to stores involving one-way queries is only mildly useful.
+ Naively iterating on synchronous Redis commands does not benchmark Redis itself, but rather measures your network (or IPC) latency and the client library's intrinsic latency. To really test Redis, you need multiple connections (like redis-benchmark) and/or to use pipelining to aggregate several commands and/or multiple threads or processes.
+ Redis is an in-memory data store with some optional persistence options. If you plan to compare it to transactional servers (MySQL, PostgreSQL, etc ...), then you should consider activating AOF and decide on a suitable fsync policy.
+ Redis is, mostly, a single-threaded server from the POV of commands execution (actually modern versions of Redis use threads for different things). It is not designed to benefit from multiple CPU cores. People are supposed to launch several Redis instances to scale out on several cores if needed. It is not really fair to compare one single Redis instance to a multi-threaded data store.
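The third point above can be seen directly from the shell: looping over synchronous `redis-cli` invocations mostly measures process startup and round-trip latency, while feeding the same commands through a single connection with `redis-cli --pipe` amortizes both. A rough illustration (timings depend entirely on your machine; this is not a substitute for redis-benchmark):

```shell
# Naive: one process and one round trip per command — measures everything but Redis
time for i in $(seq 1 1000); do redis-cli set "key:$i" val > /dev/null; done

# Pipelined: the same 1000 commands over one connection via --pipe
time (for i in $(seq 1 1000); do echo "set key:$i val"; done | redis-cli --pipe)
```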
The `redis-benchmark` program is a quick and useful way to get some figures and
evaluate the performance of a Redis instance on a given hardware. However,
by default, it does not represent the maximum throughput a Redis instance can
sustain. Actually, by using pipelining and a fast client (hiredis), it is fairly
easy to write a program generating more throughput than redis-benchmark. The
default behavior of redis-benchmark is to achieve throughput by exploiting
concurrency only (i.e. it creates several connections to the server).
It does not use pipelining or any parallelism at all (one pending query per
connection at most, and no multi-threading), if not explicitly enabled via
the `-P` parameter. So in some way, using `redis-benchmark` while triggering, for
example, a `BGSAVE` operation in the background at the same time, will provide
the user with numbers closer to the *worst case* than to the best case.
To run a benchmark using pipelining mode (and achieve higher throughput),
you need to explicitly use the -P option. Please note that it is still a
realistic behavior since a lot of Redis based applications actively use
pipelining to improve performance. However you should use a pipeline size that
is more or less the average pipeline length you'll be able to use in your
application in order to get realistic numbers.
The benchmark should apply the same operations, and work in the same way
with the multiple data stores you want to compare. It is absolutely pointless to
compare the result of redis-benchmark to the result of another benchmark
program and extrapolate.
For instance, Redis and memcached in single-threaded mode can be compared on
GET/SET operations. Both are in-memory data stores, working mostly in the same
way at the protocol level. Provided their respective benchmark application is
aggregating queries in the same way (pipelining) and use a similar number of
connections, the comparison is actually meaningful.
When you're benchmarking a high-performance, in-memory database like Redis,
it may be difficult to saturate
the server. Sometimes, the performance bottleneck is on the client side,
and not the server-side. In that case, the client (i.e., the benchmarking program itself)
must be fixed, or perhaps scaled out, to reach the maximum throughput.
### Factors impacting Redis performance
There are multiple factors having direct consequences on Redis performance.
We mention them here, since they can alter the result of any benchmarks.
Please note however, that a typical Redis instance running on a low end,
untuned box usually provides good enough performance for most applications.
+ Network bandwidth and latency usually have a direct impact on the performance.
It is a good practice to use the ping program to quickly check the latency
between the client and server hosts is normal before launching the benchmark.
Regarding the bandwidth, it is generally useful to estimate
the throughput in Gbit/s and compare it to the theoretical bandwidth
of the network. For instance a benchmark setting 4 KB strings
in Redis at 100000 q/s, would actually consume 3.2 Gbit/s of bandwidth
and probably fit within a 10 Gbit/s link, but not a 1 Gbit/s one. In many real
world scenarios, Redis throughput is limited by the network well before being
limited by the CPU. To consolidate several high-throughput Redis instances
on a single server, it is worth considering putting a 10 Gbit/s NIC
or multiple 1 Gbit/s NICs with TCP/IP bonding.
+ CPU is another very important factor. Being single-threaded, Redis favors
fast CPUs with large caches and not many cores. At this game, Intel CPUs are
currently the winners. It is not uncommon to get only half the performance on
an AMD Opteron CPU compared to similar Nehalem EP/Westmere EP/Sandy Bridge
Intel CPUs with Redis. When client and server run on the same box, the CPU is
the limiting factor with redis-benchmark.
+ Speed of RAM and memory bandwidth seem less critical for global performance
especially for small objects. For large objects (>10 KB), it may become
noticeable though. Usually, it is not really cost-effective to buy expensive
fast memory modules to optimize Redis.
+ Redis runs slower on a VM compared to running without virtualization using
the same hardware. If you have the chance to run Redis on a physical machine
this is preferred. However, this does not mean that Redis is slow in
virtualized environments; the delivered performance is still very good
and most of the serious performance issues you may incur in virtualized
environments are due to over-provisioning, non-local disks with high latency,
or old hypervisor software that has a slow `fork` syscall implementation.
+ When the server and client benchmark programs run on the same box, both
the TCP/IP loopback and unix domain sockets can be used. Depending on the
platform, unix domain sockets can achieve around 50% more throughput than
the TCP/IP loopback (on Linux for instance). The default behavior of
redis-benchmark is to use the TCP/IP loopback.
+ The performance benefit of unix domain sockets compared to TCP/IP loopback
tends to decrease when pipelining is heavily used (i.e. long pipelines).
+ When an ethernet network is used to access Redis, aggregating commands using
pipelining is especially efficient when the size of the data is kept under
the ethernet packet size (about 1500 bytes). Actually, processing 10 bytes,
100 bytes, or 1000 bytes queries results in almost the same throughput.
See the graph below.

+ On multi CPU sockets servers, Redis performance becomes dependent on the
NUMA configuration and process location. The most visible effect is that
redis-benchmark results seem non-deterministic because client and server
processes are distributed randomly on the cores. To get deterministic results,
it is required to use process placement tools (on Linux: taskset or numactl).
The most efficient combination is always to put the client and server on two
different cores of the same CPU to benefit from the L3 cache.
Here are some results of 4 KB SET benchmark for 3 server CPUs (AMD Istanbul,
Intel Nehalem EX, and Intel Westmere) with different relative placements.
Please note this benchmark is not meant to compare CPU models between themselves
(CPUs exact model and frequency are therefore not disclosed).

+ With high-end configurations, the number of client connections is also an
important factor. Being based on epoll/kqueue, the Redis event loop is quite
scalable. Redis has already been benchmarked at more than 60000 connections,
and was still able to sustain 50000 q/s in these conditions. As a rule of thumb,
an instance with 30000 connections can only process half the throughput
achievable with 100 connections. Here is an example showing the throughput of
a Redis instance per number of connections:

+ With high-end configurations, it is possible to achieve higher throughput by
tuning the NIC(s) configuration and associated interruptions. Best throughput
is achieved by setting an affinity between Rx/Tx NIC queues and CPU cores,
and activating RPS (Receive Packet Steering) support. More information in this
[thread](https://groups.google.com/forum/#!msg/redis-db/gUhc19gnYgc/BruTPCOroiMJ).
Jumbo frames may also provide a performance boost when large objects are used.
+ Depending on the platform, Redis can be compiled against different memory
allocators (libc malloc, jemalloc, tcmalloc), which may have different behaviors
in terms of raw speed, internal and external fragmentation.
If you did not compile Redis yourself, you can use the INFO command to check
the `mem_allocator` field. Please note most benchmarks do not run long enough to
generate significant external fragmentation (contrary to production Redis
instances).
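Checking the allocator of a running instance, as mentioned above, is a one-liner:

```shell
# mem_allocator is reported in the INFO memory section
redis-cli info memory | grep mem_allocator
```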
### Other things to consider
One important goal of any benchmark is to get reproducible results, so they
can be compared to the results of other tests.
+ A good practice is to try to run tests on isolated hardware as much as possible.
If it is not possible, then the system must be monitored to check the benchmark
is not impacted by some external activity.
+ Some configurations (desktops and laptops for sure, some servers as well)
have a variable CPU core frequency mechanism. The policy controlling this
mechanism can be set at the OS level. Some CPU models are more aggressive than
others at adapting the frequency of the CPU cores to the workload. To get
reproducible results, it is better to set the highest possible fixed frequency
for all the CPU cores involved in the benchmark.
+ An important point is to size the system according to the benchmark.
The system must have enough RAM and must not swap. On Linux, do not forget
to set the `overcommit_memory` parameter correctly. Please note 32 and 64 bit
Redis instances do not have the same memory footprint.
+ If you plan to use RDB or AOF for your benchmark, please check there is no other
I/O activity in the system. Avoid putting RDB or AOF files on NAS or NFS shares,
or on any other devices impacting your network bandwidth and/or latency
(for instance, EBS on Amazon EC2).
+ Set Redis logging level (loglevel parameter) to warning or notice. Avoid putting
the generated log file on a remote filesystem.
+ Avoid using monitoring tools which can alter the result of the benchmark. For
instance using INFO at regular interval to gather statistics is probably fine,
but MONITOR will impact the measured performance significantly.
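For instance, polling `INFO` at a coarse interval is a low-impact way to watch throughput during a run (the field below comes from the `stats` section of `INFO`):

```shell
# Sample ops/sec every 5 seconds without the overhead of MONITOR
while sleep 5; do
  redis-cli info stats | grep instantaneous_ops_per_sec
done
```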
### Other Redis benchmarking tools
There are several third-party tools that can be used for benchmarking Redis. Refer to each tool's
documentation for more information about its goals and capabilities.
* [memtier_benchmark](https://github.com/redislabs/memtier_benchmark) from [Redis Ltd.](https://twitter.com/RedisInc) is a NoSQL Redis and Memcache traffic generation and benchmarking tool.
* [rpc-perf](https://github.com/twitter/rpc-perf) from [Twitter](https://twitter.com/twitter) is a tool for benchmarking RPC services that supports Redis and Memcache.
* [YCSB](https://github.com/brianfrankcooper/YCSB) from [Yahoo @Yahoo](https://twitter.com/Yahoo) is a benchmarking framework with clients to many databases, including Redis.
CPU models between themselves CPUs exact model and frequency are therefore not disclosed NUMA chart NUMA chart gif With high end configurations the number of client connections is also an important factor Being based on epoll kqueue the Redis event loop is quite scalable Redis has already been benchmarked at more than 60000 connections and was still able to sustain 50000 q s in these conditions As a rule of thumb an instance with 30000 connections can only process half the throughput achievable with 100 connections Here is an example showing the throughput of a Redis instance per number of connections connections chart Connections chart png With high end configurations it is possible to achieve higher throughput by tuning the NIC s configuration and associated interruptions Best throughput is achieved by setting an affinity between Rx Tx NIC queues and CPU cores and activating RPS Receive Packet Steering support More information in this thread https groups google com forum msg redis db gUhc19gnYgc BruTPCOroiMJ Jumbo frames may also provide a performance boost when large objects are used Depending on the platform Redis can be compiled against different memory allocators libc malloc jemalloc tcmalloc which may have different behaviors in term of raw speed internal and external fragmentation If you did not compile Redis yourself you can use the INFO command to check the mem allocator field Please note most benchmarks do not run long enough to generate significant external fragmentation contrary to production Redis instances Other things to consider One important goal of any benchmark is to get reproducible results so they can be compared to the results of other tests A good practice is to try to run tests on isolated hardware as much as possible If it is not possible then the system must be monitored to check the benchmark is not impacted by some external activity Some configurations desktops and laptops for sure some servers as well have a variable CPU core frequency 
mechanism The policy controlling this mechanism can be set at the OS level Some CPU models are more aggressive than others at adapting the frequency of the CPU cores to the workload To get reproducible results it is better to set the highest possible fixed frequency for all the CPU cores involved in the benchmark An important point is to size the system accordingly to the benchmark The system must have enough RAM and must not swap On Linux do not forget to set the overcommit memory parameter correctly Please note 32 and 64 bit Redis instances do not have the same memory footprint If you plan to use RDB or AOF for your benchmark please check there is no other I O activity in the system Avoid putting RDB or AOF files on NAS or NFS shares or on any other devices impacting your network bandwidth and or latency for instance EBS on Amazon EC2 Set Redis logging level loglevel parameter to warning or notice Avoid putting the generated log file on a remote filesystem Avoid using monitoring tools which can alter the result of the benchmark For instance using INFO at regular interval to gather statistics is probably fine but MONITOR will impact the measured performance significantly Other Redis benchmarking tools There are several third party tools that can be used for benchmarking Redis Refer to each tool s documentation for more information about its goals and capabilities memtier benchmark https github com redislabs memtier benchmark from Redis Ltd https twitter com RedisInc is a NoSQL Redis and Memcache traffic generation and benchmarking tool rpc perf https github com twitter rpc perf from Twitter https twitter com twitter is a tool for benchmarking RPC services that supports Redis and Memcache YCSB https github com brianfrankcooper YCSB from Yahoo Yahoo https twitter com Yahoo is a benchmarking framework with clients to many databases including Redis |
.. _installation-instructions:
=======================
Installing scikit-learn
=======================
There are different ways to install scikit-learn:
* :ref:`Install the latest official release <install_official_release>`. This
is the best approach for most users. It will provide a stable version
and pre-built packages are available for most platforms.
* Install the version of scikit-learn provided by your
:ref:`operating system or Python distribution <install_by_distribution>`.
This is a quick option for those who have operating systems or Python
distributions that distribute scikit-learn.
It might not provide the latest release version.
* :ref:`Building the package from source
<install_bleeding_edge>`. This is best for users who want the
latest-and-greatest features and aren't afraid of running
brand-new code. This is also needed for users who wish to contribute to the
project.
.. _install_official_release:
Installing the latest release
=============================
.. raw:: html
<style>
/* Show caption on large screens */
@media screen and (min-width: 960px) {
.install-instructions .sd-tab-set {
--tab-caption-width: 20%;
}
.install-instructions .sd-tab-set.tabs-os::before {
content: "Operating System";
}
.install-instructions .sd-tab-set.tabs-package-manager::before {
content: "Package Manager";
}
}
</style>
.. div:: install-instructions
.. tab-set::
:class: tabs-os
.. tab-item:: Windows
:class-label: tab-4
.. tab-set::
:class: tabs-package-manager
.. tab-item:: pip
:class-label: tab-6
:sync: package-manager-pip
Install the 64-bit version of Python 3, for instance from the
`official website <https://www.python.org/downloads/windows/>`__.
Now create a `virtual environment (venv)
<https://docs.python.org/3/tutorial/venv.html>`_ and install scikit-learn.
Note that the virtual environment is optional but strongly recommended, in
order to avoid potential conflicts with other packages.
.. prompt:: powershell
python -m venv sklearn-env
sklearn-env\Scripts\activate # activate
pip install -U scikit-learn
In order to check your installation, you can use:
.. prompt:: powershell
python -m pip show scikit-learn # show scikit-learn version and location
python -m pip freeze # show all installed packages in the environment
python -c "import sklearn; sklearn.show_versions()"
.. tab-item:: conda
:class-label: tab-6
:sync: package-manager-conda
.. include:: ./install_instructions_conda.rst
.. tab-item:: MacOS
:class-label: tab-4
.. tab-set::
:class: tabs-package-manager
.. tab-item:: pip
:class-label: tab-6
:sync: package-manager-pip
Install Python 3 using `homebrew <https://brew.sh/>`_ (`brew install python`)
or by manually installing the package from the `official website
<https://www.python.org/downloads/macos/>`__.
Now create a `virtual environment (venv)
<https://docs.python.org/3/tutorial/venv.html>`_ and install scikit-learn.
Note that the virtual environment is optional but strongly recommended, in
order to avoid potential conflicts with other packages.
.. prompt:: bash
python -m venv sklearn-env
source sklearn-env/bin/activate # activate
pip install -U scikit-learn
In order to check your installation, you can use:
.. prompt:: bash
python -m pip show scikit-learn # show scikit-learn version and location
python -m pip freeze # show all installed packages in the environment
python -c "import sklearn; sklearn.show_versions()"
.. tab-item:: conda
:class-label: tab-6
:sync: package-manager-conda
.. include:: ./install_instructions_conda.rst
.. tab-item:: Linux
:class-label: tab-4
.. tab-set::
:class: tabs-package-manager
.. tab-item:: pip
:class-label: tab-6
:sync: package-manager-pip
Python 3 is usually installed by default on most Linux distributions. To
check if you have it installed, try:
.. prompt:: bash
python3 --version
pip3 --version
If you don't have Python 3 installed, please install `python3` and
`python3-pip` from your distribution's package manager.
Now create a `virtual environment (venv)
<https://docs.python.org/3/tutorial/venv.html>`_ and install scikit-learn.
Note that the virtual environment is optional but strongly recommended, in
order to avoid potential conflicts with other packages.
.. prompt:: bash
python3 -m venv sklearn-env
source sklearn-env/bin/activate # activate
pip3 install -U scikit-learn
In order to check your installation, you can use:
.. prompt:: bash
python3 -m pip show scikit-learn # show scikit-learn version and location
python3 -m pip freeze # show all installed packages in the environment
python3 -c "import sklearn; sklearn.show_versions()"
.. tab-item:: conda
:class-label: tab-6
:sync: package-manager-conda
.. include:: ./install_instructions_conda.rst
Using an isolated environment such as a pip virtual environment (venv) or a
conda environment makes it possible to install a specific version of
scikit-learn and its dependencies independently of any previously installed
Python packages. In particular under Linux
it is discouraged to install pip packages alongside the packages managed by the
package manager of the distribution (apt, dnf, pacman...).
Note that you should always remember to activate the environment of your choice
prior to running any Python command whenever you start a new terminal session.
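The same environment can also be created from Python itself; a minimal sketch using only the standard-library `venv` module (the `sklearn-env` name is just the example used above):

```python
import sys
import venv
from pathlib import Path

# Create an isolated environment in the current directory.
# with_pip=False keeps creation fast; pass with_pip=True to have pip
# bootstrapped into the environment automatically.
env_dir = Path("sklearn-env")
venv.create(env_dir, with_pip=False)

# Path of the interpreter to invoke for this environment (layout is
# platform dependent: Scripts\ on Windows, bin/ elsewhere).
python_bin = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "python"
print(python_bin)
```

Running commands through this interpreter path is equivalent to activating the environment first, as shown in the shell snippets above.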
If you have not installed NumPy or SciPy yet, you can also install these using
conda or pip. When using pip, please ensure that *binary wheels* are used,
and NumPy and SciPy are not recompiled from source, which can happen when using
particular configurations of operating system and hardware (such as Linux on
a Raspberry Pi).
Scikit-learn plotting capabilities (i.e., functions starting with `plot\_`
and classes ending with `Display`) require Matplotlib. The examples require
Matplotlib and some examples require scikit-image, pandas, or seaborn. The
minimum versions of scikit-learn's dependencies are listed below along with their
purpose.
.. include:: min_dependency_table.rst
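To check which versions of these dependencies are installed in the active environment, one option is the standard-library `importlib.metadata` module; a minimal sketch (the distribution names are the ones used on PyPI):

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

for dist in ("numpy", "scipy", "matplotlib", "pandas"):
    print(f"{dist}: {installed_version(dist) or 'not installed'}")
```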
.. warning::
Scikit-learn 0.20 was the last version to support Python 2.7 and Python 3.4.
Scikit-learn 0.21 supported Python 3.5-3.7.
Scikit-learn 0.22 supported Python 3.5-3.8.
Scikit-learn 0.23-0.24 required Python 3.6 or newer.
Scikit-learn 1.0 supported Python 3.7-3.10.
Scikit-learn 1.1, 1.2 and 1.3 supported Python 3.8-3.12.
Scikit-learn 1.4 requires Python 3.9 or newer.
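The support table above can be restated as a quick pre-install check; a hedged sketch in plain Python (the version pairs simply transcribe the warning above):

```python
import sys

# Minimum Python version per recent scikit-learn release, transcribed
# from the support table above.
MIN_PYTHON = {
    "1.4": (3, 9),
    "1.3": (3, 8),
    "1.2": (3, 8),
    "1.1": (3, 8),
    "1.0": (3, 7),
}

def python_ok(sklearn_version, python_version=sys.version_info[:2]):
    """True if the interpreter meets the minimum for that release."""
    return python_version >= MIN_PYTHON[sklearn_version]

print(python_ok("1.4"))
```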
.. _install_by_distribution:
Third party distributions of scikit-learn
=========================================
Some third-party distributions provide versions of
scikit-learn integrated with their package-management systems.
These can make installation and upgrading much easier for users since
the integration includes the ability to automatically install
dependencies (numpy, scipy) that scikit-learn requires.
The following is an incomplete list of OS and python distributions
that provide their own version of scikit-learn.
Alpine Linux
------------
Alpine Linux's package is provided through the `official repositories
<https://pkgs.alpinelinux.org/packages?name=py3-scikit-learn>`__ as
``py3-scikit-learn`` for Python.
It can be installed by typing the following command:
.. prompt:: bash
sudo apk add py3-scikit-learn
Arch Linux
----------
Arch Linux's package is provided through the `official repositories
<https://www.archlinux.org/packages/?q=scikit-learn>`_ as
``python-scikit-learn`` for Python.
It can be installed by typing the following command:
.. prompt:: bash
sudo pacman -S python-scikit-learn
Debian/Ubuntu
-------------
The Debian/Ubuntu package is split into three different packages:
``python3-sklearn`` (Python modules), ``python3-sklearn-lib`` (low-level
implementations and bindings), and ``python-sklearn-doc`` (documentation).
Note that scikit-learn requires Python 3, hence the `python3-` prefixed
package names.
Packages can be installed using ``apt-get``:
.. prompt:: bash
sudo apt-get install python3-sklearn python3-sklearn-lib python-sklearn-doc
Fedora
------
The Fedora package is called ``python3-scikit-learn`` for the python 3 version,
the only one available in Fedora.
It can be installed using ``dnf``:
.. prompt:: bash
sudo dnf install python3-scikit-learn
NetBSD
------
scikit-learn is available via `pkgsrc-wip <http://pkgsrc-wip.sourceforge.net/>`_:
https://pkgsrc.se/math/py-scikit-learn
MacPorts for Mac OSX
--------------------
The MacPorts package is named ``py<XY>-scikit-learn``,
where ``XY`` denotes the Python version.
It can be installed by typing the following
command:
.. prompt:: bash
sudo port install py39-scikit-learn
Anaconda and Enthought Deployment Manager for all supported platforms
---------------------------------------------------------------------
`Anaconda <https://www.anaconda.com/download>`_ and
`Enthought Deployment Manager <https://assets.enthought.com/downloads/>`_
both ship with scikit-learn in addition to a large set of scientific
Python libraries for Windows, Mac OSX and Linux.
Anaconda offers scikit-learn as part of its free distribution.
Intel Extension for Scikit-learn
--------------------------------
Intel maintains an optimized x86_64 package, available in PyPI (via `pip`),
and in the `main`, `conda-forge` and `intel` conda channels:
.. prompt:: bash
conda install scikit-learn-intelex
This package has an Intel optimized version of many estimators. Whenever
an alternative implementation doesn't exist, the scikit-learn implementation
is used as a fallback. Those optimized solvers come from the oneDAL
C++ library and are optimized for the x86_64 architecture and
multi-core Intel CPUs.
Note that those solvers are not enabled by default, please refer to the
`scikit-learn-intelex <https://intel.github.io/scikit-learn-intelex/latest/what-is-patching.html>`_
documentation for more details on usage scenarios. Direct export example:
.. prompt:: python >>>
from sklearnex.neighbors import NearestNeighbors
Compatibility with the standard scikit-learn solvers is checked by running the
full scikit-learn test suite via automated continuous integration as reported
on https://github.com/intel/scikit-learn-intelex. If you observe any issue
with `scikit-learn-intelex`, please report the issue on their
`issue tracker <https://github.com/intel/scikit-learn-intelex/issues>`__.
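Besides direct imports, scikit-learn-intelex also supports global patching through its `patch_sklearn` entry point; a hedged sketch that falls back to stock scikit-learn when the package is not installed:

```python
# Patch scikit-learn with the Intel-optimized implementations when
# scikit-learn-intelex is available; otherwise keep stock scikit-learn.
# patch_sklearn() should be called before importing estimators from sklearn.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
    patched = True
except ImportError:
    patched = False

print("scikit-learn-intelex active:", patched)
```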
WinPython for Windows
---------------------
The `WinPython <https://winpython.github.io/>`_ project distributes
scikit-learn as an additional plugin.
Troubleshooting
===============
If you encounter unexpected failures when installing scikit-learn, you may submit
an issue to the `issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_.
Before that, please also make sure to check the following common issues.
.. _windows_longpath:
Error caused by file path length limit on Windows
-------------------------------------------------
It can happen that pip fails to install packages when reaching the default path
length limit of Windows if Python is installed in a nested location such as the
`AppData` folder structure under the user home directory, for instance::
C:\Users\username>C:\Users\username\AppData\Local\Microsoft\WindowsApps\python.exe -m pip install scikit-learn
Collecting scikit-learn
...
Installing collected packages: scikit-learn
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\username\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\sklearn\\datasets\\tests\\data\\openml\\292\\api-v1-json-data-list-data_name-australian-limit-2-data_version-1-status-deactivated.json.gz'
In this case it is possible to lift that limit in the Windows registry by
using the ``regedit`` tool:
#. Type "regedit" in the Windows start menu to launch ``regedit``.
#. Go to the
``Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem``
key.
#. Edit the value of the ``LongPathsEnabled`` property of that key and set
it to 1.
#. Reinstall scikit-learn (ignoring the previous broken installation):
.. prompt:: powershell
pip install --exists-action=i scikit-learn
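To confirm that a failing install is indeed hitting this limit, path lengths can be checked up front; a minimal sketch using only the standard library (260 is the historical Windows `MAX_PATH` default):

```python
import os

MAX_PATH = 260  # legacy Windows default path-length limit

def exceeds_max_path(path):
    """True if the absolute form of the path reaches the legacy limit."""
    return len(os.path.abspath(path)) >= MAX_PATH

# The kind of deeply nested site-packages path shown in the error above.
deep = "C:\\Users\\username\\AppData\\Local\\Packages\\" + "x" * 240
print(exceeds_max_path(deep))
```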
.. |ss| raw:: html
<strike>
.. |se| raw:: html
</strike>
.. _roadmap:
Roadmap
=======
Purpose of this document
------------------------
This document lists general directions that core contributors are interested
in seeing developed in scikit-learn. The fact that an item is listed here is in
no way a promise that it will happen, as resources are limited. Rather, it
is an indication that help is welcomed on this topic.
Statement of purpose: Scikit-learn in 2018
------------------------------------------
Eleven years after the inception of Scikit-learn, much has changed in the
world of machine learning. Key changes include:
* Computational tools: The exploitation of GPUs, distributed programming
frameworks like Scala/Spark, etc.
* High-level Python libraries for experimentation, processing and data
management: Jupyter notebook, Cython, Pandas, Dask, Numba...
* Changes in the focus of machine learning research: artificial intelligence
applications (where input structure is key) with deep learning,
representation learning, reinforcement learning, domain transfer, etc.
A more subtle change over the last decade is that, due to changing interests
in ML, PhD students in machine learning are more likely to contribute to
PyTorch, Dask, etc. than to Scikit-learn, so our contributor pool is very
different to a decade ago.
Scikit-learn remains very popular in practice for trying out canonical
machine learning techniques, particularly for applications in experimental
science and in data science. A lot of what we provide is now very mature.
But it can be costly to maintain, and we cannot therefore include arbitrary
new implementations. Yet Scikit-learn is also essential in defining an API
framework for the development of interoperable machine learning components
external to the core library.
**Thus our main goals in this era are to**:
* continue maintaining a high-quality, well-documented collection of canonical
tools for data processing and machine learning within the current scope
(i.e. rectangular data largely invariant to column and row order;
predicting targets with simple structure)
* improve the ease for users to develop and publish external components
* improve interoperability with modern data science tools (e.g. Pandas, Dask)
and infrastructures (e.g. distributed processing)
Many of the more fine-grained goals can be found under the `API tag
<https://github.com/scikit-learn/scikit-learn/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3AAPI>`_
on the issue tracker.
Architectural / general goals
-----------------------------
The list is numbered not as an indication of the order of priority, but to
make referring to specific points easier. Please add new entries only at the
bottom. Note that the crossed out entries are already done, and we try to keep
the document up to date as we work on these issues.
#. Improved handling of Pandas DataFrames
* document current handling
#. Improved handling of categorical features
* Tree-based models should be able to handle both continuous and categorical
features :issue:`29437`.
* Handling mixtures of categorical and continuous variables
#. Improved handling of missing data
* Making sure meta-estimators are lenient towards missing data by implementing
a common test.
* An amputation sample generator to make parts of a dataset go missing
:issue:`6284`
#. More didactic documentation
* More and more options have been added to scikit-learn. As a result, the
documentation is crowded which makes it hard for beginners to get the big
picture. Some work could be done in prioritizing the information.
#. Passing around information that is not (X, y): Feature properties
* Per-feature handling (e.g. "is this a nominal / ordinal / English language
text?") should also not need to be provided to estimator constructors,
ideally, but should be available as metadata alongside X. :issue:`8480`
#. Passing around information that is not (X, y): Target information
* We have problems getting the full set of classes to all components when
the data is split/sampled. :issue:`6231` :issue:`8100`
* We have no way to handle a mixture of categorical and continuous targets.
#. Make it easier for external users to write Scikit-learn-compatible
components
* More self-sufficient running of scikit-learn-contrib or a similar resource
#. Support resampling and sample reduction
* Allow subsampling of majority classes (in a pipeline?) :issue:`3855`
#. Better interfaces for interactive development
* Improve the HTML visualisations of estimators via the `estimator_html_repr`.
* Include more plotting tools, not just as examples.
#. Improved tools for model diagnostics and basic inference
* work on a unified interface for "feature importance"
* better ways to handle validation sets when fitting
#. Better tools for selecting hyperparameters with transductive estimators
* Grid search and cross validation are not applicable to most clustering
tasks. Stability-based selection is more relevant.
#. Better support for manual and automatic pipeline building
* Easier way to construct complex pipelines and valid search spaces
:issue:`7608` :issue:`5082` :issue:`8243`
* provide search ranges for common estimators??
* cf. `searchgrid <https://searchgrid.readthedocs.io/en/latest/>`_
#. Improved tracking of fitting
* Verbose is not very friendly and should use a standard logging library
:issue:`6929`, :issue:`78`
* Callbacks or a similar system would facilitate logging and early stopping
#. Distributed parallelism
* Accept data which complies with ``__array_function__``
#. A way forward for more out of core
* Dask enables easy out-of-core computation. While the Dask model probably
cannot be adapted to all machine learning algorithms, most machine
learning is on smaller data than ETL, hence we can maybe adapt to very
large scale while supporting only a fraction of the patterns.
#. Backwards-compatible de/serialization of some estimators
* Currently serialization (with pickle) breaks across versions. While we may
not be able to get around other limitations of pickle re security etc, it
would be great to offer cross-version safety from version 1.0. Note: Gael
and Olivier think that this can cause heavy maintenance burden and we
should manage the trade-offs. A possible alternative is presented in the
following point.
#. Documentation and tooling for model lifecycle management
* Document good practices for model deployments and lifecycle: before
deploying a model: snapshot the code versions (numpy, scipy, scikit-learn,
custom code repo), the training script and an alias on how to retrieve
historical training data + snapshot a copy of a small validation set +
snapshot of the predictions (predicted probabilities for classifiers)
on that validation set.
* Document and tools to make it easy to manage upgrade of scikit-learn
versions:
* Try to load the old pickle; if it works, use the validation set
prediction snapshot to detect that the serialized model still behaves
the same;
* If ``joblib.load`` / ``pickle.load`` does not work, use the
version-controlled training script + historical training set to retrain
the model and use the validation set prediction snapshot to assert that
it is possible to recover the previous predictive performance: if this is
not the case there is probably a bug in scikit-learn that needs to be
reported.
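The upgrade check described above can be sketched with the standard library alone. This is an illustrative helper, not a scikit-learn API; `predictions_match` and the snapshot lists are made-up names:

```python
import math

def predictions_match(snapshot_preds, new_preds, rel_tol=1e-9):
    """Compare a stored prediction snapshot (saved at training time) against
    predictions from a reloaded or retrained model, as in the steps above."""
    if len(snapshot_preds) != len(new_preds):
        return False
    return all(
        math.isclose(a, b, rel_tol=rel_tol)
        for a, b in zip(snapshot_preds, new_preds)
    )

# If the reloaded model reproduces the snapshot, the upgrade is safe to roll
# out; otherwise retrain from the version-controlled script, re-check, and
# report a bug if the previous performance still cannot be recovered.
matches = predictions_match([0.12, 0.88], [0.12, 0.88])
```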
#. Everything in scikit-learn should probably conform to our API contract.
We are still in the process of making decisions on some of these related
issues.
* `Pipeline <pipeline.Pipeline>` and `FeatureUnion` modify their input
parameters in fit. Fixing this requires making sure we have a good
grasp of their use cases to make sure all current functionality is
maintained. :issue:`8157` :issue:`7382`
#. (Optional) Improve the scikit-learn common test suite to make sure that
models (at least frequently used ones) have stable predictions across
versions (to be discussed);
* Extend documentation to mention how to deploy models in Python-free
environments, for instance with `ONNX <https://github.com/onnx/sklearn-onnx>`_,
and use the above best practices to assess predictive consistency between
scikit-learn and ONNX prediction functions on the validation set.
* Document good practices to detect temporal distribution drift for deployed
models, and good practices for re-training on fresh data without causing
catastrophic predictive performance regressions.
simple structure improve the ease for users to develop and publish external components improve interoperability with modern data science tools e g Pandas Dask and infrastructures e g distributed processing Many of the more fine grained goals can be found under the API tag https github com scikit learn scikit learn issues q is 3Aissue is 3Aopen sort 3Aupdated desc label 3AAPI on the issue tracker Architectural general goals The list is numbered not as an indication of the order of priority but to make referring to specific points easier Please add new entries only at the bottom Note that the crossed out entries are already done and we try to keep the document up to date as we work on these issues Improved handling of Pandas DataFrames document current handling Improved handling of categorical features Tree based models should be able to handle both continuous and categorical features issue 29437 Handling mixtures of categorical and continuous variables Improved handling of missing data Making sure meta estimators are lenient towards missing data by implementing a common test An amputation sample generator to make parts of a dataset go missing issue 6284 More didactic documentation More and more options have been added to scikit learn As a result the documentation is crowded which makes it hard for beginners to get the big picture Some work could be done in prioritizing the information Passing around information that is not X y Feature properties Per feature handling e g is this a nominal ordinal English language text should also not need to be provided to estimator constructors ideally but should be available as metadata alongside X issue 8480 Passing around information that is not X y Target information We have problems getting the full set of classes to all components when the data is split sampled issue 6231 issue 8100 We have no way to handle a mixture of categorical and continuous targets Make it easier for external users to write Scikit learn compatible 
components More self sufficient running of scikit learn contrib or a similar resource Support resampling and sample reduction Allow subsampling of majority classes in a pipeline issue 3855 Better interfaces for interactive development Improve the HTML visualisations of estimators via the estimator html repr Include more plotting tools not just as examples Improved tools for model diagnostics and basic inference work on a unified interface for feature importance better ways to handle validation sets when fitting Better tools for selecting hyperparameters with transductive estimators Grid search and cross validation are not applicable to most clustering tasks Stability based selection is more relevant Better support for manual and automatic pipeline building Easier way to construct complex pipelines and valid search spaces issue 7608 issue 5082 issue 8243 provide search ranges for common estimators cf searchgrid https searchgrid readthedocs io en latest Improved tracking of fitting Verbose is not very friendly and should use a standard logging library issue 6929 issue 78 Callbacks or a similar system would facilitate logging and early stopping Distributed parallelism Accept data which complies with array function A way forward for more out of core Dask enables easy out of core computation While the Dask model probably cannot be adaptable to all machine learning algorithms most machine learning is on smaller data than ETL hence we can maybe adapt to very large scale while supporting only a fraction of the patterns Backwards compatible de serialization of some estimators Currently serialization with pickle breaks across versions While we may not be able to get around other limitations of pickle re security etc it would be great to offer cross version safety from version 1 0 Note Gael and Olivier think that this can cause heavy maintenance burden and we should manage the trade offs A possible alternative is presented in the following point Documentation and tooling for 
model lifecycle management Document good practices for model deployments and lifecycle before deploying a model snapshot the code versions numpy scipy scikit learn custom code repo the training script and an alias on how to retrieve historical training data snapshot a copy of a small validation set snapshot of the predictions predicted probabilities for classifiers on that validation set Document and tools to make it easy to manage upgrade of scikit learn versions Try to load the old pickle if it works use the validation set prediction snapshot to detect that the serialized model still behave the same If joblib load pickle load not work use the versioned control training script historical training set to retrain the model and use the validation set prediction snapshot to assert that it is possible to recover the previous predictive performance if this is not the case there is probably a bug in scikit learn that needs to be reported Everything in scikit learn should probably conform to our API contract We are still in the process of making decisions on some of these related issues Pipeline pipeline Pipeline and FeatureUnion modify their input parameters in fit Fixing this requires making sure we have a good grasp of their use cases to make sure all current functionality is maintained issue 8157 issue 7382 Optional Improve scikit learn common tests suite to make sure that at least for frequently used models have stable predictions across versions to be discussed Extend documentation to mention how to deploy models in Python free environments for instance ONNX https github com onnx sklearn onnx and use the above best practices to assess predictive consistency between scikit learn and ONNX prediction functions on validation set Document good practices to detect temporal distribution drift for deployed model and good practices for re training on fresh data without causing catastrophic predictive performance regressions |
=======
Support
=======
There are several channels to connect with scikit-learn developers for assistance, feedback, or contributions.
**Note**: Communications on all channels should respect our `Code of Conduct <https://github.com/scikit-learn/scikit-learn/blob/main/CODE_OF_CONDUCT.md>`_.
.. _announcements_and_notification:
Mailing Lists
=============
- **Main Mailing List**: Join the primary discussion
platform for scikit-learn at `scikit-learn Mailing List
<https://mail.python.org/mailman/listinfo/scikitlearn>`_.
- **Commit Updates**: Stay informed about repository
updates and test failures on the `scikit-learn-commits list
<https://lists.sourceforge.net/lists/listinfo/scikit-learn-commits>`_.
.. _user_questions:
User Questions
==============
If you have questions, this is our general workflow.
- **Stack Overflow**: Some scikit-learn developers support users using the
`[scikit-learn] <https://stackoverflow.com/questions/tagged/scikit-learn>`_
tag.
- **General Machine Learning Queries**: For broader machine learning
discussions, visit `Stack Exchange <https://stats.stackexchange.com/>`_.
When posting questions:
- Please use a descriptive question in the title field (e.g. no "Please
help with scikit-learn!" as this is not a question)
- Provide detailed context, expected results, and actual observations.
- Include code and data snippets (preferably minimalistic scripts,
up to ~20 lines).
- Describe your data and preprocessing steps, including sample size,
feature types (categorical or numerical), and the target for supervised
learning tasks (classification type or regression).
**Note**: Avoid asking user questions on the bug tracker to keep
the focus on development.
- `GitHub Discussions <https://github.com/scikit-learn/scikit-learn/discussions>`_
Usage questions, such as questions about methodology.
- `Stack Overflow <https://stackoverflow.com/questions/tagged/scikit-learn>`_
Programming/user questions with `[scikit-learn]` tag
- `GitHub Bug Tracker <https://github.com/scikit-learn/scikit-learn/issues>`_
Bug reports - Please do not ask usage questions on the issue tracker.
- `Discord Server <https://discord.gg/h9qyrK8Jc8>`_
Current pull requests - Post any specific PR-related questions on your PR,
and you can share a link to your PR on this server.
.. _bug_tracker:
Bug Tracker
===========
Encountered a bug? Report it on our `issue tracker
<https://github.com/scikit-learn/scikit-learn/issues>`_
Include in your report:
- Steps or scripts to reproduce the bug.
- Expected and observed outcomes.
- Python or gdb tracebacks, if applicable.
- The ideal bug report contains a :ref:`short reproducible code snippet
<minimal_reproducer>`, this way anyone can try to reproduce the bug easily.
- If your snippet is longer than around 50 lines, please link to a
`gist <https://gist.github.com>`_ or a github repo.
**Tip**: Gists are Git repositories; you can push data files to them using Git.
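As an illustration of the guidance above, a minimal reproducible snippet for a hypothetical bug report uses synthetic data, a fixed random seed, and states the observed version; none of the numbers below come from a real issue:

```python
import sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data with a fixed seed, so anyone can rerun the exact script
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

clf = LogisticRegression().fit(X, y)

print(sklearn.__version__)  # state the version the bug was observed on
print(clf.score(X, y))      # describe expected vs. observed result here
```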
.. _social_media:
Social Media
============
scikit-learn has a presence on various social media platforms to share
updates with the community. These platforms are not monitored for user
questions.
.. _gitter:
Gitter
======
**Note**: The scikit-learn Gitter room is no longer an active community.
For live discussions and support, please refer to the other channels
mentioned in this document.
.. _documentation_resources:
Documentation Resources
=======================
This documentation is for |release|. Find documentation for other versions
`here <https://scikit-learn.org/dev/versions.html>`__.
Older versions' printable PDF documentation is available `here
<https://sourceforge.net/projects/scikit-learn/files/documentation/>`_.
Building the PDF documentation is no longer supported in the website,
but you can still generate it locally by following the
:ref:`building documentation instructions <building_documentation>`. | scikit-learn | Support There are several channels to connect with scikit learn developers for assistance feedback or contributions Note Communications on all channels should respect our Code of Conduct https github com scikit learn scikit learn blob main CODE OF CONDUCT md announcements and notification Mailing Lists Main Mailing List Join the primary discussion platform for scikit learn at scikit learn Mailing List https mail python org mailman listinfo scikitlearn Commit Updates Stay informed about repository updates and test failures on the scikit learn commits list https lists sourceforge net lists listinfo scikit learn commits user questions User Questions If you have questions this is our general workflow Stack Overflow Some scikit learn developers support users using the scikit learn https stackoverflow com questions tagged scikit learn tag General Machine Learning Queries For broader machine learning discussions visit Stack Exchange https stats stackexchange com When posting questions Please use a descriptive question in the title field e g no Please help with scikit learn as this is not a question Provide detailed context expected results and actual observations Include code and data snippets preferably minimalistic scripts up to 20 lines Describe your data and preprocessing steps including sample size feature types categorical or numerical and the target for supervised learning tasks classification type or regression Note Avoid asking user questions on the bug tracker to keep the focus on development GitHub Discussions https github com scikit learn scikit learn discussions Usage questions such as methodological Stack Overflow https stackoverflow com questions tagged scikit learn Programming user questions with scikit learn tag GitHub Bug Tracker https github com scikit learn scikit learn issues Bug reports Please do not ask usage questions on the issue tracker Discord Server https 
discord gg h9qyrK8Jc8 Current pull requests Post any specific PR related questions on your PR and you can share a link to your PR on this server bug tracker Bug Tracker Encountered a bug Report it on our issue tracker https github com scikit learn scikit learn issues Include in your report Steps or scripts to reproduce the bug Expected and observed outcomes Python or gdb tracebacks if applicable The ideal bug report contains a ref short reproducible code snippet minimal reproducer this way anyone can try to reproduce the bug easily If your snippet is longer than around 50 lines please link to a gist https gist github com or a github repo Tip Gists are Git repositories you can push data files to them using Git social media Social Media scikit learn has presence on various social media platforms to share updates with the community The platforms are not monitored for user questions gitter Gitter Note The scikit learn Gitter room is no longer an active community For live discussions and support please refer to the other channels mentioned in this document documentation resources Documentation Resources This documentation is for release Find documentation for other versions here https scikit learn org dev versions html Older versions printable PDF documentation is available here https sourceforge net projects scikit learn files documentation Building the PDF documentation is no longer supported in the website but you can still generate it locally by following the ref building documentation instructions building documentation |
.. _model_persistence:
=================
Model persistence
=================
.. list-table:: Summary of model persistence methods
:widths: 25 50 50
:header-rows: 1
* - Persistence method
- Pros
- Risks / Cons
* - :ref:`ONNX <onnx_persistence>`
- * Serve models without a Python environment
* Serving and training environments independent of one another
* Most secure option
- * Not all scikit-learn models are supported
* Custom estimators require more work to support
* Original Python object is lost and cannot be reconstructed
* - :ref:`skops_persistence`
- * More secure than `pickle` based formats
* Contents can be partly validated without loading
- * Not as fast as `pickle` based formats
* Supports fewer types than `pickle` based formats
* Requires the same environment as the training environment
* - :mod:`pickle`
- * Native to Python
* Can serialize most Python objects
* Efficient memory usage with `protocol=5`
- * Loading can execute arbitrary code
* Requires the same environment as the training environment
* - :mod:`joblib`
- * Efficient memory usage
* Supports memory mapping
* Easy shortcuts for compression and decompression
- * Pickle based format
* Loading can execute arbitrary code
* Requires the same environment as the training environment
* - `cloudpickle`_
- * Can serialize non-packaged, custom Python code
* Comparable loading efficiency as :mod:`pickle` with `protocol=5`
- * Pickle based format
* Loading can execute arbitrary code
* No forward compatibility guarantees
* Requires the same environment as the training environment
After training a scikit-learn model, it is desirable to have a way to persist
the model for future use without having to retrain. Based on your use-case,
there are a few different ways to persist a scikit-learn model, and here we
help you decide which one suits you best. In order to make a decision, you need
to answer the following questions:
1. Do you need the Python object after persistence, or do you only need to
persist in order to serve the model and get predictions out of it?
If you only need to serve the model and no further investigation on the Python
object itself is required, then :ref:`ONNX <onnx_persistence>` might be the
best fit for you. Note that not all models are supported by ONNX.
In case ONNX is not suitable for your use-case, the next question is:
2. Do you absolutely trust the source of the model, or are there any security
concerns regarding where the persisted model comes from?
If you have security concerns, then you should consider using :ref:`skops.io
<skops_persistence>` which gives you back the Python object, but unlike
`pickle` based persistence solutions, loading the persisted model doesn't
automatically allow arbitrary code execution. Note that this requires manual
investigation of the persisted file, which :mod:`skops.io` allows you to do.
The other solutions assume you absolutely trust the source of the file to be
loaded, as they are all susceptible to arbitrary code execution upon loading
the persisted file since they all use the pickle protocol under the hood.
3. Do you care about the performance of loading the model, and sharing it
between processes where a memory mapped object on disk is beneficial?
If yes, then you can consider using :ref:`joblib <pickle_persistence>`. If this
is not a major concern for you, then you can use the built-in :mod:`pickle`
module.
4. Did you try :mod:`pickle` or :mod:`joblib` and found that the model cannot
be persisted? It can happen for instance when you have user defined
functions in your model.
If yes, then you can use `cloudpickle`_ which can serialize certain objects
which cannot be serialized by :mod:`pickle` or :mod:`joblib`.
Workflow Overview
-----------------
In a typical workflow, the first step is to train the model using scikit-learn
and scikit-learn compatible libraries. Note that support for scikit-learn and
third party estimators varies across the different persistence methods.
Train and Persist the Model
...........................
Creating an appropriate model depends on your use-case. As an example, here we
train a :class:`sklearn.ensemble.HistGradientBoostingClassifier` on the iris
dataset::
>>> from sklearn import ensemble
>>> from sklearn import datasets
>>> clf = ensemble.HistGradientBoostingClassifier()
>>> X, y = datasets.load_iris(return_X_y=True)
>>> clf.fit(X, y)
HistGradientBoostingClassifier()
Once the model is trained, you can persist it using your desired method, and
then you can load the model in a separate environment and get predictions from
it given input data. Here there are two major paths depending on how you
persist and plan to serve the model:
- :ref:`ONNX <onnx_persistence>`: You need an `ONNX` runtime and an environment
with appropriate dependencies installed to load the model and use the runtime
to get predictions. This environment can be minimal and does not necessarily
even require Python to be installed to load the model and compute
predictions. Also note that `onnxruntime` typically requires much less RAM
than Python to compute predictions from small models.
- :mod:`skops.io`, :mod:`pickle`, :mod:`joblib`, `cloudpickle`_: You need a
Python environment with the appropriate dependencies installed to load the
model and get predictions from it. This environment should have the same
**packages** and the same **versions** as the environment where the model was
trained. Note that none of these methods support loading a model trained with
a different version of scikit-learn, and possibly different versions of other
dependencies such as `numpy` and `scipy`. Another concern would be running
the persisted model on different hardware, though in most cases you should be
able to load your persisted model on different hardware.
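One way to enforce the "same packages, same versions" requirement is to record the training environment next to the artifact and compare it before loading. The sketch below uses only the standard library; `environment_snapshot` is an illustrative helper, not a scikit-learn API:

```python
from importlib import metadata

def environment_snapshot(packages=("scikit-learn", "numpy", "scipy")):
    """Record the installed version of each package the model depends on,
    so the serving environment can be compared against the training one
    before a pickle-based artifact is loaded."""
    versions = {}
    for package in packages:
        try:
            versions[package] = metadata.version(package)
        except metadata.PackageNotFoundError:
            versions[package] = None  # package missing in this environment
    return versions

# At training time: persist this dict (e.g. as JSON) next to the model.
# At load time: take a fresh snapshot, compare, and refuse to unpickle
# the model if the versions do not match.
```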
.. _onnx_persistence:
ONNX
----
`ONNX`, or `Open Neural Network Exchange <https://onnx.ai/>`__ format, is best
suited to use cases where one needs to persist the model and then use the
persisted artifact to get predictions without the need to load the Python
object itself. It is also useful in cases where the serving environment needs
to be lean and minimal, since the `ONNX` runtime does not require `python`.
`ONNX` is a binary serialization of the model. It has been developed to improve
the usability of the interoperable representation of data models. It aims to
facilitate the conversion of the data models between different machine learning
frameworks, and to improve their portability on different computing
architectures. More details are available from the `ONNX tutorial
<https://onnx.ai/get-started.html>`__. To convert a scikit-learn model to
`ONNX`, `sklearn-onnx <http://onnx.ai/sklearn-onnx/>`__ has been developed.
However, not all scikit-learn models are supported; the converter is limited
to core scikit-learn and does not support most third party estimators. One
can write a custom converter for third party or custom estimators, but the
documentation for doing so is sparse and it might be challenging.
.. dropdown:: Using ONNX
To convert the model to `ONNX` format, you need to give the converter some
information about the input as well, about which you can read more `here
<http://onnx.ai/sklearn-onnx/index.html>`__::
import numpy
from skl2onnx import to_onnx

onx = to_onnx(clf, X[:1].astype(numpy.float32), target_opset=12)
with open("filename.onnx", "wb") as f:
    f.write(onx.SerializeToString())
You can load the model in Python and use the `ONNX` runtime to get
predictions::
import numpy
from onnxruntime import InferenceSession

with open("filename.onnx", "rb") as f:
    onx = f.read()
sess = InferenceSession(onx, providers=["CPUExecutionProvider"])
pred_ort = sess.run(None, {"X": X_test.astype(numpy.float32)})[0]
.. _skops_persistence:
`skops.io`
----------
:mod:`skops.io` avoids using :mod:`pickle` and only loads files which have types
and references to functions which are trusted either by default or by the user.
Therefore it provides a more secure format than :mod:`pickle`, :mod:`joblib`,
and `cloudpickle`_.
.. dropdown:: Using skops
The API is very similar to :mod:`pickle`, and you can persist your models as
explained in the `documentation
<https://skops.readthedocs.io/en/stable/persistence.html>`__ using
:func:`skops.io.dump` and :func:`skops.io.dumps`::
import skops.io as sio
obj = sio.dump(clf, "filename.skops")
And you can load them back using :func:`skops.io.load` and
:func:`skops.io.loads`. However, you need to specify the types which are
trusted by you. You can get existing unknown types in a dumped object / file
using :func:`skops.io.get_untrusted_types`, and after checking its contents,
pass it to the load function::
unknown_types = sio.get_untrusted_types(file="filename.skops")
# investigate the contents of unknown_types, and only load if you trust
# everything you see.
clf = sio.load("filename.skops", trusted=unknown_types)
Please report issues and feature requests related to this format on the `skops
issue tracker <https://github.com/skops-dev/skops/issues>`__.
.. _pickle_persistence:
`pickle`, `joblib`, and `cloudpickle`
-------------------------------------
These three modules / packages use the `pickle` protocol under the hood, but
come with slight variations:
- :mod:`pickle` is a module from the Python Standard Library. It can serialize
and deserialize any Python object, including custom Python classes and
objects.
- :mod:`joblib` is more efficient than `pickle` when working with large machine
learning models or large numpy arrays.
- `cloudpickle`_ can serialize certain objects which cannot be serialized by
:mod:`pickle` or :mod:`joblib`, such as user defined functions and lambda
functions. This can happen, for instance, when using a
:class:`~sklearn.preprocessing.FunctionTransformer` with a custom function to
transform the data.
.. dropdown:: Using `pickle`, `joblib`, or `cloudpickle`
Depending on your use-case, you can choose one of these three methods to
persist and load your scikit-learn model, and they all follow the same API::
# Here you can replace pickle with joblib or cloudpickle
from pickle import dump
with open("filename.pkl", "wb") as f:
dump(clf, f, protocol=5)
Using `protocol=5` is recommended to reduce memory usage and make it faster to
store and load any large NumPy array stored as a fitted attribute in the model.
You can alternatively pass `protocol=pickle.HIGHEST_PROTOCOL` which is
equivalent to `protocol=5` in Python 3.8 and later (at the time of writing).
And later when needed, you can load the same object from the persisted file::
# Here you can replace pickle with joblib or cloudpickle
from pickle import load
with open("filename.pkl", "rb") as f:
clf = load(f)
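The recommendation above can be checked entirely in memory. The sketch below uses a plain dict as a stand-in for a fitted estimator (an assumption, to keep the example self-contained and free of third-party dependencies):

```python
import io
import pickle

# On Python 3.8+ the highest pickle protocol is at least 5
assert pickle.HIGHEST_PROTOCOL >= 5

# Stand-in for a fitted model; a real estimator would hold NumPy arrays,
# which is where protocol 5's out-of-band buffer support pays off.
model_like = {"coef_": [0.5, -1.2, 3.0], "n_features_in_": 3}

buffer = io.BytesIO()
pickle.dump(model_like, buffer, protocol=5)

buffer.seek(0)
restored = pickle.load(buffer)
print(restored == model_like)  # True
```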
.. _persistence_limitations:
Security & Maintainability Limitations
--------------------------------------
:mod:`pickle` (and :mod:`joblib` and `cloudpickle`_ by extension) has
many documented security vulnerabilities by design and should only be used if
the artifact, i.e. the pickle-file, is coming from a trusted and verified
source. You should never load a pickle file from an untrusted source, similarly
to how you should never execute code from an untrusted source.
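To see why merely loading is dangerous, note that the pickle format can embed a call to any importable callable, and that call runs during :func:`pickle.loads`. The sketch below uses a harmless `eval` expression; an attacker would substitute `os.system` or similar:

```python
import pickle

class NotAModel:
    """An object that controls its own serialization via __reduce__."""

    def __reduce__(self):
        # The payload stores "call eval('1 + 1') at load time"; nothing of
        # the original object survives, only the embedded call.
        return (eval, ("1 + 1",))

payload = pickle.dumps(NotAModel())

# Merely loading the bytes executes the embedded call:
result = pickle.loads(payload)
print(result)  # 2 -- evidence that code ran during loading
```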
Also note that arbitrary computations can be represented using the `ONNX`
format, and it is therefore recommended to serve models using `ONNX` in a
sandboxed environment to safeguard against computational and memory exploits.
Also note that there are no supported ways to load a model trained with a
different version of scikit-learn. While using :mod:`skops.io`, :mod:`joblib`,
:mod:`pickle`, or `cloudpickle`_, models saved using one version of
scikit-learn might load in other versions; however, this is entirely
unsupported and inadvisable. It should also be kept in mind that operations
performed on such data could give different and unexpected results, or even
crash your Python process.
In order to rebuild a similar model with future versions of scikit-learn,
additional metadata should be saved along the pickled model:
* The training data, e.g. a reference to an immutable snapshot
* The Python source code used to generate the model
* The versions of scikit-learn and its dependencies
* The cross validation score obtained on the training data
This should make it possible to check that the cross-validation score is in the
same range as before.
Aside from a few exceptions, persisted models should be portable across
operating systems and hardware architectures assuming the same versions of
dependencies and Python are used. If you encounter an estimator that is not
portable, please open an issue on GitHub. Persisted models are often deployed
in production using containers like Docker, in order to freeze the environment
and dependencies.
If you want to know more about these issues, please refer to these talks:
- `Adrin Jalali: Let's exploit pickle, and skops to the rescue! | PyData
Amsterdam 2023 <https://www.youtube.com/watch?v=9w_H5OSTO9A>`__.
- `Alex Gaynor: Pickles are for Delis, not Software - PyCon 2014
<https://pyvideo.org/video/2566/pickles-are-for-delis-not-software>`__.
.. _serving_environment:
Replicating the training environment in production
..................................................
If the versions of the dependencies used in production differ from those used
during training, it may result in unexpected behaviour and errors while using
the trained model. To prevent such situations, it is recommended to use the
same dependencies and versions in both the training and production
environments. The dependencies, including transitive ones, can be pinned with
the help of package management tools like `pip`, `mamba`, `conda`, `poetry`,
`conda-lock`, `pixi`, etc.
It is not always possible to load a model trained with older versions of the
scikit-learn library and its dependencies in an updated software environment.
Instead, you might need to retrain the model with the new versions of all the
libraries. So when training a model, it is important to record the training
recipe (e.g. a Python script) and training set information, and metadata about
all the dependencies to be able to automatically reconstruct the same training
environment for the updated software.
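Such metadata can be collected programmatically at training time and stored next to the persisted model. The sketch below uses only the standard library; the package names are an assumption, adapt them to your stack:

```python
import json
import platform
from importlib import metadata

def environment_record(packages=("numpy", "scipy", "scikit-learn")):
    """Snapshot of the environment, to store alongside a persisted model."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None  # not installed in this environment
    return {"python": platform.python_version(), "packages": versions}

print(json.dumps(environment_record(), indent=2))
```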
.. dropdown:: InconsistentVersionWarning
When an estimator is loaded with a scikit-learn version that is inconsistent
with the version the estimator was pickled with, a
:class:`~sklearn.exceptions.InconsistentVersionWarning` is raised. This warning
can be caught to obtain the original version the estimator was pickled with::
import pickle
import warnings

from sklearn.exceptions import InconsistentVersionWarning

warnings.simplefilter("error", InconsistentVersionWarning)
try:
    with open("model_from_previous_version.pickle", "rb") as f:
        est = pickle.load(f)
except InconsistentVersionWarning as w:
    print(w.original_sklearn_version)
Serving the model artifact
..........................
The last step after training a scikit-learn model is serving the model.
Once the trained model is successfully loaded, it can be served to manage
different prediction requests. This can involve deploying the model as a
web service using containerization, or other model deployment strategies,
according to the specifications.
Summarizing the key points
--------------------------
Based on the different approaches for model persistence, the key points for
each approach can be summarized as follows:
* `ONNX`: It provides a uniform format for persisting any machine learning or
deep learning model (other than scikit-learn) and is useful for model
inference (predictions). It can, however, result in compatibility issues with
different frameworks.
* :mod:`skops.io`: Trained scikit-learn models can be easily shared and put
into production using :mod:`skops.io`. It is more secure compared to
alternate approaches based on :mod:`pickle` because it does not load
arbitrary code unless explicitly asked for by the user. Such code needs to be
packaged and importable in the target Python environment.
* :mod:`joblib`: Efficient memory mapping techniques make it faster to use
the same persisted model in multiple Python processes when
`mmap_mode="r"` is used. It also gives easy shortcuts to compress and decompress the
persisted object without the need for extra code. However, it may trigger the
execution of malicious code when loading a model from an untrusted source as
any other pickle-based persistence mechanism.
* :mod:`pickle`: It is native to Python and most Python objects can be
serialized and deserialized using :mod:`pickle`, including custom Python
classes and functions as long as they are defined in a package that can be
imported in the target environment. While :mod:`pickle` can be used to easily
save and load scikit-learn models, it may trigger the execution of malicious
code while loading a model from an untrusted source. :mod:`pickle` can also
be very efficient memorywise if the model was persisted with `protocol=5` but
it does not support memory mapping.
* `cloudpickle`_: It has loading efficiency comparable to :mod:`pickle` and
:mod:`joblib` (without memory mapping), but offers additional flexibility to
serialize custom Python code such as lambda expressions and interactively
defined functions and classes. It might be a last resort to persist pipelines
with custom Python components such as a
:class:`sklearn.preprocessing.FunctionTransformer` that wraps a function
defined in the training script itself or more generally outside of any
importable Python package. Note that `cloudpickle`_ offers no forward
compatibility guarantees and you might need the same version of
`cloudpickle`_ to load the persisted model along with the same version of all
the libraries used to define the model. Like the other pickle-based persistence
mechanisms, it may trigger the execution of malicious code while loading
a model from an untrusted source.
.. _cloudpickle: https://github.com/cloudpipe/cloudpickle
.. _governance:
===========================================
Scikit-learn governance and decision-making
===========================================
The purpose of this document is to formalize the governance process used by the
scikit-learn project, to clarify how decisions are made and how the various
elements of our community interact.
This document establishes a decision-making structure that takes into account
feedback from all members of the community and strives to find consensus, while
avoiding any deadlocks.
This is a meritocratic, consensus-based community project. Anyone with an
interest in the project can join the community, contribute to the project
design and participate in the decision making process. This document describes
how that participation takes place and how to set about earning merit within
the project community.
Roles And Responsibilities
==========================
We distinguish between contributors, core contributors, and the technical
committee. A key distinction between them is their voting rights: contributors
have no voting rights, whereas the other two groups all have voting rights,
as well as permissions to the tools relevant to their roles.
Contributors
------------
Contributors are community members who contribute in concrete ways to the
project. Anyone can become a contributor, and contributions can take many forms
– not only code – as detailed in the :ref:`contributors guide <contributing>`.
There is no process to become a contributor: once somebody contributes to the
project in any way, they are a contributor.
Core Contributors
-----------------
All core contributor members have the same voting rights and right to propose
new members to any of the roles listed below. Their membership is represented
as being an organization member on the scikit-learn `GitHub organization
<https://github.com/orgs/scikit-learn/people>`_.
They are also welcome to join our `monthly core contributor meetings
<https://github.com/scikit-learn/administrative/tree/master/meeting_notes>`_.
New members can be nominated by any existing member. Once they have been
nominated, there will be a vote by the current core contributors. Voting on new
members is one of the few activities that takes place on the project's private
mailing list. While it is expected that most votes will be unanimous, a
two-thirds majority of the cast votes is enough. The vote needs to be open for
at least 1 week.
Core contributors who have not contributed to the project, corresponding to
their role, in the past 12 months will be asked if they want to become emeritus
members and relinquish their rights until they become active again. The list of
members, active and emeritus (with dates at which they became active), is
public on the scikit-learn website. It is the responsibility of the active core
contributors to send this yearly reminder email.
The following teams form the core contributors group:
* **Contributor Experience Team**
The contributor experience team improves the experience of contributors by
helping with the triage of issues and pull requests, noticing any repeating
patterns where people might struggle, and helping to improve those aspects
of the project.
To this end, they have the required permissions on GitHub to label and close
issues. :ref:`Their work <bug_triaging>` is crucial to improve the
communication in the project and limit the crowding of the issue tracker.
.. _communication_team:
* **Communication Team**
Members of the communication team help with outreach and communication
for scikit-learn. The goal of the team is to develop public awareness of
scikit-learn, of its features and usage, as well as branding.
For this, they can operate the scikit-learn accounts on various social networks
and produce materials. They also have the required rights to our blog
repository and other relevant accounts and platforms.
* **Documentation Team**
Members of the documentation team engage with the documentation of the project
among other things. They might also be involved in other aspects of the
project, but their reviews on documentation contributions are considered
authoritative, and can merge such contributions.
To this end, they have permissions to merge pull requests in scikit-learn's
repository.
* **Maintainers Team**
Maintainers are community members who have shown that they are dedicated to the
continued development of the project through ongoing engagement with the
community. They have shown they can be trusted to maintain scikit-learn with
care. Being a maintainer allows contributors to more easily carry on with their
project-related activities by giving them direct access to the project's
repository. Maintainers are expected to review code contributions, merge
approved pull requests, cast votes for and against merging a pull-request,
and to be involved in deciding major changes to the API.
Technical Committee
-------------------
The Technical Committee (TC) members are maintainers who have additional
responsibilities to ensure the smooth running of the project. TC members are
expected to participate in strategic planning, and approve changes to the
governance model. The purpose of the TC is to ensure a smooth progress from the
big-picture perspective. Indeed, changes that impact the full project require a
holistic analysis and a consensus that is both explicit and informed. In cases
that the core contributor community (which includes the TC members) fails to
reach such a consensus in the required time frame, the TC is the entity to
resolve the issue. Membership of the TC is by nomination by a core contributor.
A nomination will result in discussion which cannot take more than a month and
then a vote by the core contributors which will stay open for a week. TC
membership votes are subject to a two-thirds majority of all cast votes as well
as a simple majority approval of all the current TC members. TC members who do
not actively engage with the TC duties are expected to resign.
The Technical Committee of scikit-learn consists of :user:`Thomas Fan
<thomasjpfan>`, :user:`Alexandre Gramfort <agramfort>`, :user:`Olivier Grisel
<ogrisel>`, :user:`Adrin Jalali <adrinjalali>`, :user:`Andreas Müller
<amueller>`, :user:`Joel Nothman <jnothman>` and :user:`Gaël Varoquaux
<GaelVaroquaux>`.
Decision Making Process
=======================
Decisions about the future of the project are made through discussion with all
members of the community. All non-sensitive project management discussion takes
place on the project contributors' `mailing list <mailto:[email protected]>`_
and the `issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_.
Occasionally, sensitive discussion occurs on a private list.
Scikit-learn uses a "consensus seeking" process for making decisions. The group
tries to find a resolution that has no open objections among core contributors.
At any point during the discussion, any core contributor can call for a vote,
which will conclude one month from the call for the vote. Most votes have to be
backed by a :ref:`SLEP <slep>`. If no option can gather two thirds of the votes
cast, the decision is escalated to the TC, which in turn will use consensus
seeking with the fallback option of a simple majority vote if no consensus can
be found within a month. This is what we hereafter may refer to as "**the
decision making process**".
Decisions (in addition to adding core contributors and TC membership as above)
are made according to the following rules:
* **Minor Documentation changes**, such as typo fixes, or addition / correction
of a sentence, but no change of the ``scikit-learn.org`` landing page or the
“about” page: Requires +1 by a maintainer, no -1 by a maintainer (lazy
consensus), happens on the issue or pull request page. Maintainers are
expected to give “reasonable time” to others to give their opinion on the
pull request if they're not confident others would agree.
* **Code changes and major documentation changes**
  require +1 by two maintainers, no -1 by a maintainer (lazy
  consensus), happens on the issue or pull request page.
* **Changes to the API principles and changes to dependencies or supported
  versions** happen via :ref:`slep` and follow the decision-making process
  outlined above.
* **Changes to the governance model** follow the process outlined in `SLEP020
<https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep020/proposal.html>`__.
If a veto -1 vote is cast on a lazy consensus, the proposer can appeal to the
community and maintainers and the change can be approved or rejected using
the decision making procedure outlined above.
Governance Model Changes
------------------------
Governance model changes occur through an enhancement proposal or a GitHub Pull
Request. An enhancement proposal will go through "**the decision-making process**"
described in the previous section. Alternatively, an author may propose a change
directly to the governance model with a GitHub Pull Request. Logistically, an
author can open a Draft Pull Request for feedback and follow up with a new
revised Pull Request for voting. Once that author is happy with the state of the
Pull Request, they can call for a vote on the public mailing list. During the
one-month voting period, the Pull Request cannot change. A Pull Request
Approval will count as a positive vote, and a "Request Changes" review will
count as a negative vote. If two-thirds of the cast votes are positive, then
the governance model change is accepted.
.. _slep:
Enhancement proposals (SLEPs)
==============================
For all votes, a proposal must have been made public and discussed before the
vote. Such a proposal must be a consolidated document, in the form of a
"Scikit-Learn Enhancement Proposal" (SLEP), rather than a long discussion on an
issue. A SLEP must be submitted as a pull-request to `enhancement proposals
<https://scikit-learn-enhancement-proposals.readthedocs.io>`_ using the `SLEP
template
<https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep_template.html>`_.
`SLEP000
<https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep000/proposal.html>`__
describes the process in more detail.

.. _related_projects:

=====================================
Related Projects
=====================================
Projects implementing the scikit-learn estimator API are encouraged to use
the `scikit-learn-contrib template <https://github.com/scikit-learn-contrib/project-template>`_
which facilitates best practices for testing and documenting estimators.
The `scikit-learn-contrib GitHub organization <https://github.com/scikit-learn-contrib/scikit-learn-contrib>`_
also accepts high-quality contributions of repositories conforming to this
template.
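As a hedged illustration (not taken from the template itself, and using a hypothetical ``MeanRegressor`` class), an estimator following the scikit-learn API conventions stores its hyper-parameters unchanged in ``__init__``, learns state in ``fit`` under attributes ending in a trailing underscore, and returns ``self`` from ``fit``:

```python
# A minimal sketch of a scikit-learn-compatible estimator. The class and
# its "shift" parameter are invented for illustration only.
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted


class MeanRegressor(RegressorMixin, BaseEstimator):
    """Predicts the (optionally shifted) mean of the training targets."""

    def __init__(self, shift=0.0):
        self.shift = shift  # hyper-parameters are stored verbatim

    def fit(self, X, y):
        X, y = check_X_y(X, y)
        self.mean_ = np.mean(y) + self.shift  # learned state ends in "_"
        return self  # fit returns self so calls can be chained

    def predict(self, X):
        check_is_fitted(self)
        X = check_array(X)
        return np.full(X.shape[0], self.mean_)
```

Following these conventions is what lets such an estimator participate in pipelines, grid searches, and cloning; the contrib template additionally wires up testing and documentation around estimators of this shape.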
Below is a list of sister projects, extensions, and domain-specific packages.
Interoperability and framework enhancements
-------------------------------------------
These tools adapt scikit-learn for use with other technologies or otherwise
enhance the functionality of scikit-learn's estimators.
**Auto-ML**
- `auto-sklearn <https://github.com/automl/auto-sklearn/>`_
An automated machine learning toolkit and a drop-in replacement for a
scikit-learn estimator
- `autoviml <https://github.com/AutoViML/Auto_ViML/>`_
Automatically Build Multiple Machine Learning Models with a Single Line of Code.
Designed as a faster way to use scikit-learn models without having to preprocess data.
- `TPOT <https://github.com/rhiever/tpot>`_
An automated machine learning toolkit that optimizes a series of scikit-learn
operators to design a machine learning pipeline, including data and feature
preprocessors as well as the estimators. Works as a drop-in replacement for a
scikit-learn estimator.
- `Featuretools <https://github.com/alteryx/featuretools>`_
A framework to perform automated feature engineering. It can be used for
transforming temporal and relational datasets into feature matrices for
machine learning.
- `EvalML <https://github.com/alteryx/evalml>`_
EvalML is an AutoML library which builds, optimizes, and evaluates
machine learning pipelines using domain-specific objective functions.
It incorporates multiple modeling libraries under one API, and
the objects that EvalML creates use an sklearn-compatible API.
- `MLJAR AutoML <https://github.com/mljar/mljar-supervised>`_
Python package for AutoML on Tabular Data with Feature Engineering,
Hyper-Parameters Tuning, Explanations and Automatic Documentation.
**Experimentation and model registry frameworks**
- `MLflow <https://mlflow.org/>`_ MLflow is an open source platform to manage the ML
lifecycle, including experimentation, reproducibility, deployment, and a central
model registry.
- `Neptune <https://neptune.ai/>`_ Metadata store for MLOps,
built for teams that run a lot of experiments. It gives you a single
place to log, store, display, organize, compare, and query all your
model building metadata.
- `Sacred <https://github.com/IDSIA/Sacred>`_ Tool to help you configure,
organize, log and reproduce experiments
- `Scikit-Learn Laboratory
<https://skll.readthedocs.io/en/latest/index.html>`_ A command-line
wrapper around scikit-learn that makes it easy to run machine learning
experiments with multiple learners and large feature sets.
**Model inspection and visualization**
- `dtreeviz <https://github.com/parrt/dtreeviz/>`_ A python library for
decision tree visualization and model interpretation.
- `sklearn-evaluation <https://github.com/ploomber/sklearn-evaluation>`_
Machine learning model evaluation made easy: plots, tables, HTML reports,
experiment tracking and Jupyter notebook analysis. Visual analysis, model
selection, evaluation and diagnostics.
- `yellowbrick <https://github.com/DistrictDataLabs/yellowbrick>`_ A suite of
custom matplotlib visualizers for scikit-learn estimators to support visual feature
analysis, model selection, evaluation, and diagnostics.
**Model export for production**
- `sklearn-onnx <https://github.com/onnx/sklearn-onnx>`_ Serialization of many
Scikit-learn pipelines to `ONNX <https://onnx.ai/>`_ for interchange and
prediction.
- `skops.io <https://skops.readthedocs.io/en/stable/persistence.html>`__ A
persistence model more secure than pickle, which can be used instead of
pickle in most common cases.
- `sklearn2pmml <https://github.com/jpmml/sklearn2pmml>`_
Serialization of a wide variety of scikit-learn estimators and transformers
into PMML with the help of `JPMML-SkLearn <https://github.com/jpmml/jpmml-sklearn>`_
library.
- `treelite <https://treelite.readthedocs.io>`_
Compiles tree-based ensemble models into C code for minimizing prediction
latency.
- `emlearn <https://emlearn.org>`_
Implements scikit-learn estimators in C99 for embedded devices and microcontrollers.
Supports several classifier, regression and outlier detection models.
**Model throughput**
- `Intel(R) Extension for scikit-learn <https://github.com/intel/scikit-learn-intelex>`_
Mostly on high end Intel(R) hardware, accelerates some scikit-learn models
for both training and inference under certain circumstances. This project is
maintained by Intel(R) and scikit-learn's maintainers are not involved in the
development of this project. Also note that in some cases using the tools and
estimators under ``scikit-learn-intelex`` would give different results than
``scikit-learn`` itself. If you encounter issues while using this project,
make sure you report potential issues in their respective repositories.
**Interface to R with genomic applications**
- `BiocSklearn <https://bioconductor.org/packages/BiocSklearn>`_
Exposes a small number of dimension reduction facilities as an illustration
of the basilisk protocol for interfacing python with R. Intended as a
springboard for more complete interop.
Other estimators and tasks
--------------------------
Not everything belongs in, or is mature enough for, the central scikit-learn
project. The following are projects providing interfaces similar to
scikit-learn for additional learning algorithms, infrastructures
and tasks.
**Time series and forecasting**
- `Darts <https://unit8co.github.io/darts/>`_ Darts is a Python library for
user-friendly forecasting and anomaly detection on time series. It contains a variety
of models, from classics such as ARIMA to deep neural networks. The forecasting
  models can all be used in the same way, using ``fit()`` and ``predict()`` functions, similar
to scikit-learn.
- `sktime <https://github.com/alan-turing-institute/sktime>`_ A scikit-learn compatible
toolbox for machine learning with time series including time series
classification/regression and (supervised/panel) forecasting.
- `skforecast <https://github.com/JoaquinAmatRodrigo/skforecast>`_ A python library
that eases using scikit-learn regressors as multi-step forecasters. It also works
with any regressor compatible with the scikit-learn API.
- `tslearn <https://github.com/tslearn-team/tslearn>`_ A machine learning library for
time series that offers tools for pre-processing and feature extraction as well as
dedicated models for clustering, classification and regression.
**Gradient (tree) boosting**
Note that scikit-learn has its own modern gradient boosting estimators:
:class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
:class:`~sklearn.ensemble.HistGradientBoostingRegressor`.
- `XGBoost <https://github.com/dmlc/xgboost>`_ XGBoost is an optimized distributed
gradient boosting library designed to be highly efficient, flexible and portable.
- `LightGBM <https://lightgbm.readthedocs.io>`_ LightGBM is a gradient boosting
framework that uses tree based learning algorithms. It is designed to be distributed
and efficient.
**Structured learning**
- `HMMLearn <https://github.com/hmmlearn/hmmlearn>`_ Implementation of hidden
  Markov models that was previously part of scikit-learn.
- `pomegranate <https://github.com/jmschrei/pomegranate>`_ Probabilistic modelling
for Python, with an emphasis on hidden Markov models.
**Deep neural networks etc.**
- `skorch <https://github.com/dnouri/skorch>`_ A scikit-learn compatible
neural network library that wraps PyTorch.
- `scikeras <https://github.com/adriangb/scikeras>`_ provides a wrapper around
Keras to interface it with scikit-learn. SciKeras is the successor
of `tf.keras.wrappers.scikit_learn`.
**Federated Learning**
- `Flower <https://flower.dev/>`_ A friendly federated learning framework with a
unified approach that can federate any workload, any ML framework, and any programming language.
**Privacy Preserving Machine Learning**
- `Concrete ML <https://github.com/zama-ai/concrete-ml/>`_ A privacy preserving
ML framework built on top of `Concrete
<https://github.com/zama-ai/concrete>`_, with bindings to traditional ML
frameworks, thanks to fully homomorphic encryption. APIs of so-called
Concrete ML built-in models are very close to scikit-learn APIs.
**Broad scope**
- `mlxtend <https://github.com/rasbt/mlxtend>`_ Includes a number of additional
estimators as well as model visualization utilities.
- `scikit-lego <https://github.com/koaning/scikit-lego>`_ A number of scikit-learn compatible
custom transformers, models and metrics, focusing on solving practical industry tasks.
**Other regression and classification**
- `py-earth <https://github.com/scikit-learn-contrib/py-earth>`_ Multivariate
adaptive regression splines
- `gplearn <https://github.com/trevorstephens/gplearn>`_ Genetic Programming
for symbolic regression tasks.
- `scikit-multilearn <https://github.com/scikit-multilearn/scikit-multilearn>`_
Multi-label classification with focus on label space manipulation.
**Decomposition and clustering**
- `lda <https://github.com/lda-project/lda/>`_: Fast implementation of latent
Dirichlet allocation in Cython which uses `Gibbs sampling
<https://en.wikipedia.org/wiki/Gibbs_sampling>`_ to sample from the true
posterior distribution. (scikit-learn's
:class:`~sklearn.decomposition.LatentDirichletAllocation` implementation uses
`variational inference
<https://en.wikipedia.org/wiki/Variational_Bayesian_methods>`_ to sample from
a tractable approximation of a topic model's posterior distribution.)
- `kmodes <https://github.com/nicodv/kmodes>`_ k-modes clustering algorithm for
categorical data, and several of its variations.
- `hdbscan <https://github.com/scikit-learn-contrib/hdbscan>`_ HDBSCAN and Robust Single
Linkage clustering algorithms for robust variable density clustering.
As of scikit-learn version 1.3.0, there is :class:`~sklearn.cluster.HDBSCAN`.
**Pre-processing**
- `categorical-encoding
<https://github.com/scikit-learn-contrib/categorical-encoding>`_ A
library of sklearn compatible categorical variable encoders.
As of scikit-learn version 1.3.0, there is
:class:`~sklearn.preprocessing.TargetEncoder`.
- `imbalanced-learn
<https://github.com/scikit-learn-contrib/imbalanced-learn>`_ Various
methods to under- and over-sample datasets.
- `Feature-engine <https://github.com/solegalli/feature_engine>`_ A library
of sklearn compatible transformers for missing data imputation, categorical
encoding, variable transformation, discretization, outlier handling and more.
Feature-engine allows the application of preprocessing steps to selected groups
of variables and it is fully compatible with the Scikit-learn Pipeline.
**Topological Data Analysis**
- `giotto-tda <https://github.com/giotto-ai/giotto-tda>`_ A library for
`Topological Data Analysis
<https://en.wikipedia.org/wiki/Topological_data_analysis>`_ aiming to
provide a scikit-learn compatible API. It offers tools to transform data
inputs (point clouds, graphs, time series, images) into forms suitable for
computations of topological summaries, and components dedicated to
extracting sets of scalar features of topological origin, which can be used
alongside other feature extraction methods in scikit-learn.
Statistical learning with Python
--------------------------------
Other packages useful for data analysis and machine learning.
- `Pandas <https://pandas.pydata.org/>`_ Tools for working with heterogeneous and
columnar data, relational queries, time series and basic statistics.
- `statsmodels <https://www.statsmodels.org>`_ Estimating and analysing
statistical models. More focused on statistical tests and less on prediction
than scikit-learn.
- `PyMC <https://www.pymc.io/>`_ Bayesian statistical models and
fitting algorithms.
- `Seaborn <https://stanford.edu/~mwaskom/software/seaborn/>`_ Visualization library based on
matplotlib. It provides a high-level interface for drawing attractive statistical graphics.
- `scikit-survival <https://scikit-survival.readthedocs.io/>`_ A library implementing
models to learn from censored time-to-event data (also called survival analysis).
Models are fully compatible with scikit-learn.
Recommendation Engine packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- `implicit <https://github.com/benfred/implicit>`_, Library for implicit
feedback datasets.
- `lightfm <https://github.com/lyst/lightfm>`_ A Python/Cython
implementation of a hybrid recommender system.
- `Surprise Lib <https://surpriselib.com/>`_ Library for explicit feedback
datasets.
Domain specific packages
~~~~~~~~~~~~~~~~~~~~~~~~
- `scikit-network <https://scikit-network.readthedocs.io/>`_ Machine learning on graphs.
- `scikit-image <https://scikit-image.org/>`_ Image processing and computer
vision in python.
- `Natural language toolkit (nltk) <https://www.nltk.org/>`_ Natural language
processing and some machine learning.
- `gensim <https://radimrehurek.com/gensim/>`_ A library for topic modelling,
  document indexing and similarity retrieval.
- `NiLearn <https://nilearn.github.io/>`_ Machine learning for neuro-imaging.
- `AstroML <https://www.astroml.org/>`_ Machine learning for astronomy.
Translations of scikit-learn documentation
------------------------------------------
The purpose of translations is to ease reading and understanding in languages
other than English, and to help people who do not understand English or have
doubts about its interpretation. Additionally, some people prefer to read
documentation in their native language, but please bear in mind that the only
official documentation is the English one [#f1]_.
Those translation efforts are community initiatives, and we have no control
over them.
If you want to contribute or report an issue with the translation, please
contact the authors of the translation.
Some available translations are linked here to improve their dissemination
and promote community efforts.
- `Chinese translation <https://sklearn.apachecn.org/>`_
(`source <https://github.com/apachecn/sklearn-doc-zh>`__)
- `Persian translation <https://sklearn.ir/>`_
(`source <https://github.com/mehrdad-dev/scikit-learn>`__)
- `Spanish translation <https://qu4nt.github.io/sklearn-doc-es/>`_
(`source <https://github.com/qu4nt/sklearn-doc-es>`__)
- `Korean translation <https://panda5176.github.io/scikit-learn-korean/>`_
(`source <https://github.com/panda5176/scikit-learn-korean>`__)
.. rubric:: Footnotes
.. [#f1] Following the `Linux kernel documentation disclaimer
   <https://www.kernel.org/doc/html/latest/translations/index.html#disclaimer>`__
of so called Concrete ML built in models are very close to scikit learn APIs Broad scope mlxtend https github com rasbt mlxtend Includes a number of additional estimators as well as model visualization utilities scikit lego https github com koaning scikit lego A number of scikit learn compatible custom transformers models and metrics focusing on solving practical industry tasks Other regression and classification py earth https github com scikit learn contrib py earth Multivariate adaptive regression splines gplearn https github com trevorstephens gplearn Genetic Programming for symbolic regression tasks scikit multilearn https github com scikit multilearn scikit multilearn Multi label classification with focus on label space manipulation Decomposition and clustering lda https github com lda project lda Fast implementation of latent Dirichlet allocation in Cython which uses Gibbs sampling https en wikipedia org wiki Gibbs sampling to sample from the true posterior distribution scikit learn s class sklearn decomposition LatentDirichletAllocation implementation uses variational inference https en wikipedia org wiki Variational Bayesian methods to sample from a tractable approximation of a topic model s posterior distribution kmodes https github com nicodv kmodes k modes clustering algorithm for categorical data and several of its variations hdbscan https github com scikit learn contrib hdbscan HDBSCAN and Robust Single Linkage clustering algorithms for robust variable density clustering As of scikit learn version 1 3 0 there is class sklearn cluster HDBSCAN Pre processing categorical encoding https github com scikit learn contrib categorical encoding A library of sklearn compatible categorical variable encoders As of scikit learn version 1 3 0 there is class sklearn preprocessing TargetEncoder imbalanced learn https github com scikit learn contrib imbalanced learn Various methods to under and over sample datasets Feature engine https github com solegalli feature 
engine A library of sklearn compatible transformers for missing data imputation categorical encoding variable transformation discretization outlier handling and more Feature engine allows the application of preprocessing steps to selected groups of variables and it is fully compatible with the Scikit learn Pipeline Topological Data Analysis giotto tda https github com giotto ai giotto tda A library for Topological Data Analysis https en wikipedia org wiki Topological data analysis aiming to provide a scikit learn compatible API It offers tools to transform data inputs point clouds graphs time series images into forms suitable for computations of topological summaries and components dedicated to extracting sets of scalar features of topological origin which can be used alongside other feature extraction methods in scikit learn Statistical learning with Python Other packages useful for data analysis and machine learning Pandas https pandas pydata org Tools for working with heterogeneous and columnar data relational queries time series and basic statistics statsmodels https www statsmodels org Estimating and analysing statistical models More focused on statistical tests and less on prediction than scikit learn PyMC https www pymc io Bayesian statistical models and fitting algorithms Seaborn https stanford edu mwaskom software seaborn Visualization library based on matplotlib It provides a high level interface for drawing attractive statistical graphics scikit survival https scikit survival readthedocs io A library implementing models to learn from censored time to event data also called survival analysis Models are fully compatible with scikit learn Recommendation Engine packages implicit https github com benfred implicit Library for implicit feedback datasets lightfm https github com lyst lightfm A Python Cython implementation of a hybrid recommender system Surprise Lib https surpriselib com Library for explicit feedback datasets Domain specific packages scikit 
network https scikit network readthedocs io Machine learning on graphs scikit image https scikit image org Image processing and computer vision in python Natural language toolkit nltk https www nltk org Natural language processing and some machine learning gensim https radimrehurek com gensim A library for topic modelling document indexing and similarity retrieval NiLearn https nilearn github io Machine learning for neuro imaging AstroML https www astroml org Machine learning for astronomy Translations of scikit learn documentation Translation s purpose is to ease reading and understanding in languages other than English Its aim is to help people who do not understand English or have doubts about its interpretation Additionally some people prefer to read documentation in their native language but please bear in mind that the only official documentation is the English one f1 Those translation efforts are community initiatives and we have no control on them If you want to contribute or report an issue with the translation please contact the authors of the translation Some available translations are linked here to improve their dissemination and promote community efforts Chinese translation https sklearn apachecn org source https github com apachecn sklearn doc zh Persian translation https sklearn ir source https github com mehrdad dev scikit learn Spanish translation https qu4nt github io sklearn doc es source https github com qu4nt sklearn doc es Korean translation https panda5176 github io scikit learn korean source https github com panda5176 scikit learn korean rubric Footnotes f1 following linux documentation Disclaimer https www kernel org doc html latest translations index html disclaimer |
.. raw:: html

  <style>
    /* h3 headings on this page are the questions; make them rubric-like */
    h3 {
      font-size: 1rem;
      font-weight: bold;
      padding-bottom: 0.2rem;
      margin: 2rem 0 1.15rem 0;
      border-bottom: 1px solid var(--pst-color-border);
    }

    /* Increase top margin for first question in each section */
    h2 + section > h3 {
      margin-top: 2.5rem;
    }

    /* Make the headerlinks a bit more visible */
    h3 > a.headerlink {
      font-size: 0.9rem;
    }

    /* Remove the backlink decoration on the titles */
    h2 > a.toc-backref,
    h3 > a.toc-backref {
      text-decoration: none;
    }
  </style>
.. _faq:
==========================
Frequently Asked Questions
==========================
.. currentmodule:: sklearn
Here we try to give some answers to questions that regularly pop up on the mailing list.
.. contents:: Table of Contents
:local:
:depth: 2
About the project
-----------------
What is the project name (a lot of people get it wrong)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scikit-learn, but not scikit or SciKit nor sci-kit learn.
Also not scikits.learn or scikits-learn, which were previously used.
How do you pronounce the project name?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
sy-kit learn. sci stands for science!
Why scikit?
^^^^^^^^^^^
There are multiple scikits, which are scientific toolboxes built around SciPy.
Apart from scikit-learn, another popular one is `scikit-image <https://scikit-image.org/>`_.
Do you support PyPy?
^^^^^^^^^^^^^^^^^^^^
Due to limited maintainer resources and a small number of users, using
scikit-learn with `PyPy <https://pypy.org/>`_ (an alternative Python
implementation with a built-in just-in-time compiler) is not officially
supported.
How can I obtain permission to use the images in scikit-learn for my work?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The images contained in the `scikit-learn repository
<https://github.com/scikit-learn/scikit-learn>`_ and the images generated within
the `scikit-learn documentation <https://scikit-learn.org/stable/index.html>`_
can be used via the `BSD 3-Clause License
<https://github.com/scikit-learn/scikit-learn?tab=BSD-3-Clause-1-ov-file>`_ for
your work. Citations of scikit-learn are highly encouraged and appreciated. See
:ref:`citing scikit-learn <citing-scikit-learn>`.
Implementation decisions
------------------------
Why is there no support for deep or reinforcement learning? Will there be such support in the future?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deep learning and reinforcement learning both require a rich vocabulary to
define an architecture, with deep learning additionally requiring
GPUs for efficient computing. However, neither of these fit within
the design constraints of scikit-learn. As a result, deep learning
and reinforcement learning are currently out of scope for what
scikit-learn seeks to achieve.
You can find more information about the addition of GPU support at
`Will you add GPU support?`_.
Note that scikit-learn currently implements a simple multilayer perceptron
in :mod:`sklearn.neural_network`. We will only accept bug fixes for this module.
If you want to implement more complex deep learning models, please turn to
popular deep learning frameworks such as
`tensorflow <https://www.tensorflow.org/>`_,
`keras <https://keras.io/>`_,
and `pytorch <https://pytorch.org/>`_.
.. _adding_graphical_models:
Will you add graphical models or sequence prediction to scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Not in the foreseeable future.
scikit-learn tries to provide a unified API for the basic tasks in machine
learning, with pipelines and meta-algorithms like grid search to tie
everything together. The required concepts, APIs, algorithms and
expertise required for structured learning are different from what
scikit-learn has to offer. If we started doing arbitrary structured
learning, we'd need to redesign the whole package and the project
would likely collapse under its own weight.
There are two projects with API similar to scikit-learn that
do structured prediction:
* `pystruct <https://pystruct.github.io/>`_ handles general structured
learning (focuses on SSVMs on arbitrary graph structures with
approximate inference; defines the notion of sample as an instance of
the graph structure).
* `seqlearn <https://larsmans.github.io/seqlearn/>`_ handles sequences only
(focuses on exact inference; has HMMs, but mostly for the sake of
completeness; treats a feature vector as a sample and uses an offset encoding
for the dependencies between feature vectors).
Why did you remove HMMs from scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See :ref:`adding_graphical_models`.
Will you add GPU support?
^^^^^^^^^^^^^^^^^^^^^^^^^
Adding GPU support by default would introduce heavy hardware-specific software
dependencies and existing algorithms would need to be reimplemented. This would
make it both harder for the average user to install scikit-learn and harder for
the developers to maintain the code.
However, since 2023, a limited but growing :ref:`list of scikit-learn
estimators <array_api_supported>` can already run on GPUs if the input data is
provided as a PyTorch or CuPy array and if scikit-learn has been configured to
accept such inputs as explained in :ref:`array_api`. This Array API support
allows scikit-learn to run on GPUs without introducing heavy and
hardware-specific software dependencies to the main package.
Most estimators that rely on NumPy for their computationally intensive operations
can be considered for Array API support and therefore GPU support.
However, not all scikit-learn estimators are amenable to efficiently running
on GPUs via the Array API for fundamental algorithmic reasons. For instance,
tree-based models currently implemented with Cython in scikit-learn are
fundamentally not array-based algorithms. Other algorithms such as k-means or
k-nearest neighbors rely on array-based algorithms but are also implemented in
Cython. Cython is used to manually interleave consecutive array operations to
avoid introducing performance killing memory access to large intermediate
arrays: this low-level algorithmic rewrite is called "kernel fusion" and cannot
be expressed via the Array API for the foreseeable future.
Adding efficient GPU support to estimators that cannot be efficiently
implemented with the Array API would require designing and adopting a more
flexible extension system for scikit-learn. This possibility is being
considered in the following GitHub issue (under discussion):
- https://github.com/scikit-learn/scikit-learn/issues/22438
Why do categorical variables need preprocessing in scikit-learn, compared to other tools?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Most of scikit-learn assumes data is in NumPy arrays or SciPy sparse matrices
of a single numeric dtype. These do not explicitly represent categorical
variables at present. Thus, unlike R's ``data.frames`` or :class:`pandas.DataFrame`,
we require explicit conversion of categorical features to numeric values, as
discussed in :ref:`preprocessing_categorical_features`.
See also :ref:`sphx_glr_auto_examples_compose_plot_column_transformer_mixed_types.py` for an
example of working with heterogeneous (e.g. categorical and numeric) data.
Note that recently, :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
:class:`~sklearn.ensemble.HistGradientBoostingRegressor` gained native support for
categorical features through the option `categorical_features="from_dtype"`. This
option relies on inferring which columns of the data are categorical based on the
:class:`pandas.CategoricalDtype` and :class:`polars.datatypes.Categorical` dtypes.
Does scikit-learn work natively with various types of dataframes?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Scikit-learn has limited support for :class:`pandas.DataFrame` and
:class:`polars.DataFrame`. Scikit-learn estimators can accept both these dataframe types
as input, and scikit-learn transformers can output dataframes using the `set_output`
API. For more details, refer to
:ref:`sphx_glr_auto_examples_miscellaneous_plot_set_output.py`.
However, the internal computations in scikit-learn estimators rely on numerical
operations that are more efficiently performed on homogeneous data structures such as
NumPy arrays or SciPy sparse matrices. As a result, most scikit-learn estimators will
internally convert dataframe inputs into these homogeneous data structures. Similarly,
dataframe outputs are generated from these homogeneous data structures.
Also note that :class:`~sklearn.compose.ColumnTransformer` makes it convenient to handle
heterogeneous pandas dataframes by mapping homogeneous subsets of dataframe columns
selected by name or dtype to dedicated scikit-learn transformers. Therefore
:class:`~sklearn.compose.ColumnTransformer` is often used in the first step of
scikit-learn pipelines when dealing with heterogeneous dataframes (see :ref:`pipeline`
for more details).
See also :ref:`sphx_glr_auto_examples_compose_plot_column_transformer_mixed_types.py`
for an example of working with heterogeneous (e.g. categorical and numeric) data.
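As a quick sketch of the pattern described above (toy data, not from the
linked example)::

```python
# Route numeric and categorical columns of a heterogeneous DataFrame to
# dedicated transformers with ColumnTransformer.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

X = pd.DataFrame({"age": [20.0, 30.0, 40.0], "city": ["NY", "SF", "NY"]})

preprocessor = ColumnTransformer(
    [
        ("num", StandardScaler(), ["age"]),   # scale the numeric column
        ("cat", OneHotEncoder(), ["city"]),   # one-hot encode the categorical one
    ]
)
Xt = preprocessor.fit_transform(X)
print(Xt.shape)  # one scaled column + two one-hot columns
```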
Do you plan to implement transform for target ``y`` in a pipeline?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Currently transform only works for features ``X`` in a pipeline. There's a
long-standing discussion about not being able to transform ``y`` in a pipeline.
Follow on GitHub issue :issue:`4143`. Meanwhile, you can check out
:class:`~compose.TransformedTargetRegressor`,
`pipegraph <https://github.com/mcasl/PipeGraph>`_,
and `imbalanced-learn <https://github.com/scikit-learn-contrib/imbalanced-learn>`_.
Note that scikit-learn solved for the case where ``y``
has an invertible transformation applied before training
and inverted after prediction. scikit-learn intends to solve for
use cases where ``y`` should be transformed at training time
and not at test time, for resampling and similar uses, like at
`imbalanced-learn <https://github.com/scikit-learn-contrib/imbalanced-learn>`_.
In general, these use cases can be solved
with a custom meta estimator rather than a :class:`~pipeline.Pipeline`.
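The invertible-transformation case that scikit-learn already covers can be
sketched as follows (toy data with an exponential target)::

```python
# TransformedTargetRegressor transforms y at fit time (here: log) and applies
# the inverse transformation (exp) to the predictions, so the user always sees
# y on its original scale.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = np.exp(0.3 * X.ravel())  # exponential target: linear after log

model = TransformedTargetRegressor(
    regressor=LinearRegression(), func=np.log, inverse_func=np.exp
)
model.fit(X, y)
print(model.predict(X[:2]))  # predictions are back on the original scale
```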
Why are there so many different estimators for linear models?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Usually, there is one classifier and one regressor per model type, e.g.
:class:`~ensemble.GradientBoostingClassifier` and
:class:`~ensemble.GradientBoostingRegressor`. Both have similar options and
both have the parameter `loss`, which is especially useful in the regression
case as it enables the estimation of conditional mean as well as conditional
quantiles.
For linear models, there are many estimator classes which are very close to
each other. Let us have a look at
- :class:`~linear_model.LinearRegression`, no penalty
- :class:`~linear_model.Ridge`, L2 penalty
- :class:`~linear_model.Lasso`, L1 penalty (sparse models)
- :class:`~linear_model.ElasticNet`, L1 + L2 penalty (less sparse models)
- :class:`~linear_model.SGDRegressor` with ``loss="squared_error"``
**Maintainer perspective:**
In principle, they all do the same thing and differ only in the penalty they
impose. This, however, has a large impact on the way the underlying
optimization problem is solved. In the end, this amounts to the usage of
different methods and tricks from linear algebra. A special case is
:class:`~linear_model.SGDRegressor`, which
comprises all 4 previous models and differs only in the optimization procedure.
A further side effect is that the different estimators favor different data
layouts (`X` C-contiguous or F-contiguous, sparse csr or csc). This complexity
of the seemingly simple linear models is the reason for having different
estimator classes for different penalties.
**User perspective:**
First, the current design is inspired by the scientific literature where linear
regression models with different regularization/penalty were given different
names, e.g. *ridge regression*. Having different model classes with according
names makes it easier for users to find those regression models.
Secondly, if all the 5 above mentioned linear models were unified into a single
class, there would be parameters with a lot of options like the ``solver``
parameter. On top of that, there would be a lot of exclusive interactions
between different parameters. For example, the possible options of the
parameters ``solver``, ``precompute`` and ``selection`` would depend on the
chosen values of the penalty parameters ``alpha`` and ``l1_ratio``.
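The effect of the different penalties can be seen side by side on toy data (a
sketch; the data and parameter values are made up for illustration)::

```python
# Fit the four penalty variants on the same data. The third feature is
# uninformative; the L1-penalized models typically shrink its coefficient
# all the way to zero, while the unpenalized and L2-penalized ones do not.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, LinearRegression, Ridge

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = X @ np.array([1.0, 0.5, 0.0]) + 0.01 * rng.randn(100)

coefs = {}
for est in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1),
            ElasticNet(alpha=0.1, l1_ratio=0.5)):
    est.fit(X, y)
    coefs[type(est).__name__] = est.coef_
    print(type(est).__name__, np.round(est.coef_, 3))
```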
Contributing
------------
How can I contribute to scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See :ref:`contributing`. Before wanting to add a new algorithm, which is
usually a major and lengthy undertaking, it is recommended to start with
:ref:`known issues <new_contributors>`. Please do not contact the contributors
of scikit-learn directly regarding contributing to scikit-learn.
Why is my pull request not getting any attention?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The scikit-learn review process takes a significant amount of time, and
contributors should not be discouraged by a lack of activity or review on
their pull request. We care a lot about getting things right
the first time, as maintenance and later change comes at a high cost.
We rarely release any "experimental" code, so all of our contributions
will be subject to high use immediately and should be of the highest
quality possible initially.
Beyond that, scikit-learn is limited in its reviewing bandwidth; many of the
reviewers and core developers are working on scikit-learn on their own time.
If a review of your pull request comes slowly, it is likely because the
reviewers are busy. We ask for your understanding and request that you
not close your pull request or discontinue your work solely because of
this reason.
.. _new_algorithms_inclusion_criteria:
What are the inclusion criteria for new algorithms?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We only consider well-established algorithms for inclusion. A rule of thumb is
at least 3 years since publication, 200+ citations, and wide use and
usefulness. A technique that provides a clear-cut improvement (e.g. an
enhanced data structure or a more efficient approximation technique) on
a widely-used method will also be considered for inclusion.
From the algorithms or techniques that meet the above criteria, only those
which fit well within the current API of scikit-learn, that is a ``fit``,
``predict/transform`` interface and ordinarily having input/output that is a
numpy array or sparse matrix, are accepted.
The contributor should support the importance of the proposed addition with
research papers and/or implementations in other similar packages, demonstrate
its usefulness via common use-cases/applications and corroborate performance
improvements, if any, with benchmarks and/or plots. It is expected that the
proposed algorithm should outperform the methods that are already implemented
in scikit-learn at least in some areas.
Inclusion of a new algorithm speeding up an existing model is easier if:
- it does not introduce new hyper-parameters (as it makes the library
more future-proof),
- it is easy to document clearly when the contribution improves the speed
and when it does not, for instance, "when ``n_features >>
n_samples``",
- benchmarks clearly show a speed up.
Also, note that your implementation need not be in scikit-learn to be used
together with scikit-learn tools. You can implement your favorite algorithm
in a scikit-learn compatible way, upload it to GitHub and let us know. We
will be happy to list it under :ref:`related_projects`. If you already have
a package on GitHub following the scikit-learn API, you may also be
interested to look at `scikit-learn-contrib
<https://scikit-learn-contrib.github.io>`_.
.. _selectiveness:
Why are you so selective on what algorithms you include in scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Code comes with maintenance cost, and we need to balance the amount of
code we have with the size of the team (and add to this the fact that
complexity scales non-linearly with the number of features).
The package relies on core developers using their free time to
fix bugs, maintain code and review contributions.
Any algorithm that is added needs future attention by the developers,
at which point the original author might long have lost interest.
See also :ref:`new_algorithms_inclusion_criteria`. For a great read about
long-term maintenance issues in open-source software, look at
`the Executive Summary of Roads and Bridges
<https://www.fordfoundation.org/media/2976/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure.pdf#page=8>`_.
Using scikit-learn
------------------
What's the best way to get help on scikit-learn usage?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* General machine learning questions: use `Cross Validated
<https://stats.stackexchange.com/>`_ with the ``[machine-learning]`` tag.
* scikit-learn usage questions: use `Stack Overflow
<https://stackoverflow.com/questions/tagged/scikit-learn>`_ with the
``[scikit-learn]`` and ``[python]`` tags. You can alternatively use the `mailing list
<https://mail.python.org/mailman/listinfo/scikit-learn>`_.
Please make sure to include a minimal reproduction code snippet (ideally shorter
than 10 lines) that highlights your problem on a toy dataset (for instance from
:mod:`sklearn.datasets` or randomly generated with functions of ``numpy.random`` with
a fixed random seed). Please remove any line of code that is not necessary to
reproduce your problem.
The problem should be reproducible by simply copy-pasting your code snippet in a Python
shell with scikit-learn installed. Do not forget to include the import statements.
More guidance to write good reproduction code snippets can be found at:
https://stackoverflow.com/help/mcve.
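As an illustration, a self-contained reproduction snippet can be as short as
the following (toy data, fixed seed, all imports included)::

```python
# Minimal, copy-pasteable reproduction: random toy data with a fixed seed so
# that anyone running this sees exactly the same inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(42)
X = rng.randn(20, 3)
y = rng.randint(0, 2, size=20)

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))
```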
If your problem raises an exception that you do not understand (even after googling it),
please make sure to include the full traceback that you obtain when running the
reproduction script.
For bug reports or feature requests, please make use of the
`issue tracker on GitHub <https://github.com/scikit-learn/scikit-learn/issues>`_.
.. warning::
Please do not email any authors directly to ask for assistance, report bugs,
or for any other issue related to scikit-learn.
How should I save, export or deploy estimators for production?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See :ref:`model_persistence`.
How can I create a bunch object?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bunch objects are sometimes used as an output for functions and methods. They
extend dictionaries by enabling values to be accessed by key,
`bunch["value_key"]`, or by an attribute, `bunch.value_key`.
They should not be used as an input. Therefore you almost never need to create
a :class:`~utils.Bunch` object, unless you are extending scikit-learn's API.
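A minimal sketch of the dual access described above::

```python
# A Bunch behaves like a dict whose values are also reachable as attributes.
from sklearn.utils import Bunch

b = Bunch(data=[1, 2, 3], target_names=["a", "b"])
print(b["data"])       # access by key
print(b.target_names)  # access by attribute
```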
How can I load my own datasets into a format usable by scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Generally, scikit-learn works on any numeric data stored as numpy arrays
or scipy sparse matrices. Other types that are convertible to numeric
arrays such as :class:`pandas.DataFrame` are also acceptable.
For more information on loading your data files into these usable data
structures, please refer to :ref:`loading external datasets <external_datasets>`.
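As a quick sketch, an external CSV file can be turned into the NumPy arrays
scikit-learn expects (the column names here are hypothetical)::

```python
# Load tabular data with pandas and split it into a feature matrix X and a
# target vector y. io.StringIO stands in for a real file path.
import io

import numpy as np
import pandas as pd

csv = io.StringIO("f1,f2,label\n1.0,2.0,0\n3.0,4.0,1\n")
df = pd.read_csv(csv)

X = df[["f1", "f2"]].to_numpy()  # shape (n_samples, n_features)
y = df["label"].to_numpy()       # shape (n_samples,)
print(X.shape, y.shape)
```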
How do I deal with string data (or trees, graphs...)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scikit-learn estimators assume you'll feed them real-valued feature vectors.
This assumption is hard-coded in pretty much all of the library.
However, you can feed non-numerical inputs to estimators in several ways.
If you have text documents, you can use term frequency features; see
:ref:`text_feature_extraction` for the built-in *text vectorizers*.
For more general feature extraction from any kind of data, see
:ref:`dict_feature_extraction` and :ref:`feature_hashing`.
Another common case is when you have non-numerical data and a custom distance
(or similarity) metric on these data. Examples include strings with edit
distance (aka. Levenshtein distance), for instance, DNA or RNA sequences. These can be
encoded as numbers, but doing so is painful and error-prone. Working with
distance metrics on arbitrary data can be done in two ways.
Firstly, many estimators take precomputed distance/similarity matrices, so if
the dataset is not too large, you can compute distances for all pairs of inputs.
If the dataset is large, you can use feature vectors with only one "feature",
which is an index into a separate data structure, and supply a custom metric
function that looks up the actual data in this data structure. For instance, to use
:func:`~cluster.dbscan` with Levenshtein distances::
>>> import numpy as np
>>> from leven import levenshtein # doctest: +SKIP
>>> from sklearn.cluster import dbscan
>>> data = ["ACCTCCTAGAAG", "ACCTACTAGAAGTT", "GAATATTAGGCCGA"]
>>> def lev_metric(x, y):
... i, j = int(x[0]), int(y[0]) # extract indices
... return levenshtein(data[i], data[j])
...
>>> X = np.arange(len(data)).reshape(-1, 1)
>>> X
array([[0],
[1],
[2]])
>>> # We need to specify algorithm='brute' as the default assumes
>>> # a continuous feature space.
>>> dbscan(X, metric=lev_metric, eps=5, min_samples=2, algorithm='brute') # doctest: +SKIP
(array([0, 1]), array([ 0, 0, -1]))
Note that the example above uses the third-party edit distance package
`leven <https://pypi.org/project/leven/>`_. Similar tricks can be used,
with some care, for tree kernels, graph kernels, etc.
Why do I sometimes get a crash/freeze with ``n_jobs > 1`` under OSX or Linux?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Several scikit-learn tools such as :class:`~model_selection.GridSearchCV` and
:func:`~model_selection.cross_val_score` rely internally on Python's
:mod:`multiprocessing` module to parallelize execution
onto several Python processes by passing ``n_jobs > 1`` as an argument.
The problem is that Python :mod:`multiprocessing` does a ``fork`` system call
without following it with an ``exec`` system call for performance reasons. Many
libraries like (some versions of) Accelerate or vecLib under OSX, (some versions
of) MKL, the OpenMP runtime of GCC, NVIDIA's CUDA (and probably many others),
manage their own internal thread pool. Upon a call to ``fork``, the thread pool
state in the child process is corrupted: the thread pool believes it has many
threads while only the main thread state has been forked. It is possible to
change the libraries to make them detect when a fork happens and reinitialize
the thread pool in that case: we did that for OpenBLAS (merged upstream in
main since 0.2.10) and we contributed a `patch
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60035>`_ to GCC's OpenMP runtime
(not yet reviewed).
But in the end the real culprit is Python's :mod:`multiprocessing` that does
``fork`` without ``exec`` to reduce the overhead of starting and using new
Python processes for parallel computing. Unfortunately this is a violation of
the POSIX standard and therefore some software vendors like Apple refuse to
consider the lack of fork-safety in Accelerate and vecLib as a bug.
In Python 3.4+ it is now possible to configure :mod:`multiprocessing` to
use the ``"forkserver"`` or ``"spawn"`` start methods (instead of the default
``"fork"``) to manage the process pools. To work around this issue when
using scikit-learn, you can set the ``JOBLIB_START_METHOD`` environment
variable to ``"forkserver"``. However, be aware that using the
``"forkserver"`` method prevents :class:`joblib.Parallel` from calling
functions interactively defined in a shell session.
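For example, the environment-variable workaround can be applied when launching
a script from the shell (``my_training_script.py`` is a hypothetical script
name):

```shell
# Ask joblib to use the "forkserver" start method instead of the default
# "fork" for the process pools it creates.
export JOBLIB_START_METHOD=forkserver

# Then run the scikit-learn script (hypothetical name) as usual:
# python my_training_script.py
```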
If you have custom code that uses :mod:`multiprocessing` directly instead of using
it via :mod:`joblib` you can enable the ``"forkserver"`` mode globally for your
program. Insert the following instructions in your main script::
import multiprocessing

# other imports, custom code, load data, define model...

if __name__ == "__main__":
    multiprocessing.set_start_method("forkserver")

    # call scikit-learn utils with n_jobs > 1 here
You can find more details on the start methods in the `multiprocessing
documentation <https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods>`_.
.. _faq_mkl_threading:
Why does my job use more cores than specified with ``n_jobs``?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is because ``n_jobs`` only controls the number of jobs for
routines that are parallelized with :mod:`joblib`, but parallel code can come
from other sources:
- some routines may be parallelized with OpenMP (for code written in C or
Cython),
- scikit-learn relies a lot on numpy, which in turn may rely on numerical
libraries like MKL, OpenBLAS or BLIS which can provide parallel
implementations.
For more details, please refer to our :ref:`notes on parallelism <parallelism>`.
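As an illustration of capping these extra sources of parallelism, here is a
minimal sketch. It assumes the environment variables are set *before* NumPy
(and the BLAS/OpenMP runtimes it loads) is first imported; the variable names
are read by OpenMP, Intel MKL and OpenBLAS respectively:

```python
import os

# Cap the thread pools that ``n_jobs`` does not control. These must be set
# before NumPy and its numerical backends are first imported.
os.environ["OMP_NUM_THREADS"] = "1"       # OpenMP routines (C/Cython code)
os.environ["MKL_NUM_THREADS"] = "1"       # Intel MKL
os.environ["OPENBLAS_NUM_THREADS"] = "1"  # OpenBLAS

import numpy as np

# This matrix product now runs on a single BLAS thread.
result = np.ones((100, 100)) @ np.ones((100, 100))
print(result[0, 0])  # each entry is the sum of 100 ones, i.e. 100.0
```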
How do I set a ``random_state`` for an entire execution?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Please refer to :ref:`randomness`. | scikit-learn | raw html style h3 headings on this page are the questions make them rubric like h3 font size 1rem font weight bold padding bottom 0 2rem margin 2rem 0 1 15rem 0 border bottom 1px solid var pst color border Increase top margin for first question in each section h2 section h3 margin top 2 5rem Make the headerlinks a bit more visible h3 a headerlink font size 0 9rem Remove the backlink decoration on the titles h2 a toc backref h3 a toc backref text decoration none style faq Frequently Asked Questions currentmodule sklearn Here we try to give some answers to questions that regularly pop up on the mailing list contents Table of Contents local depth 2 About the project What is the project name a lot of people get it wrong scikit learn but not scikit or SciKit nor sci kit learn Also not scikits learn or scikits learn which were previously used How do you pronounce the project name sy kit learn sci stands for science Why scikit There are multiple scikits which are scientific toolboxes built around SciPy Apart from scikit learn another popular one is scikit image https scikit image org Do you support PyPy Due to limited maintainer resources and small number of users using scikit learn with PyPy https pypy org an alternative Python implementation with a built in just in time compiler is not officially supported How can I obtain permission to use the images in scikit learn for my work The images contained in the scikit learn repository https github com scikit learn scikit learn and the images generated within the scikit learn documentation https scikit learn org stable index html can be used via the BSD 3 Clause License https github com scikit learn scikit learn tab BSD 3 Clause 1 ov file for your work Citations of scikit learn are highly encouraged and appreciated See ref citing scikit learn citing scikit learn Implementation decisions Why is there no support for deep or reinforcement learning Will there be such support in 
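The core idea behind :ref:`randomness` is that passing the same integer seed
everywhere yields reproducible runs; a NumPy-only sketch of the mechanism:

```python
import numpy as np

# Two RNGs seeded with the same integer produce identical streams; passing
# a fixed integer as ``random_state`` to estimators relies on the same idea.
rng_a = np.random.RandomState(42)
rng_b = np.random.RandomState(42)

print(np.array_equal(rng_a.rand(5), rng_b.rand(5)))  # True
```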
.. currentmodule:: sklearn
.. _glossary:
=========================================
Glossary of Common Terms and API Elements
=========================================
This glossary hopes to definitively represent the tacit and explicit
conventions applied in Scikit-learn and its API, while providing a reference
for users and contributors. It aims to describe the concepts and either detail
their corresponding API or link to other relevant parts of the documentation
which do so. By linking to glossary entries from the API Reference and User
Guide, we may minimize redundancy and inconsistency.
We begin by listing general concepts (and any that didn't fit elsewhere), but
more specific sets of related terms are listed below:
:ref:`glossary_estimator_types`, :ref:`glossary_target_types`,
:ref:`glossary_methods`, :ref:`glossary_parameters`,
:ref:`glossary_attributes`, :ref:`glossary_sample_props`.
General Concepts
================
.. glossary::
1d
1d array
One-dimensional array. A NumPy array whose ``.shape`` has length 1.
A vector.
2d
2d array
Two-dimensional array. A NumPy array whose ``.shape`` has length 2.
Often represents a matrix.
API
Refers to both the *specific* interfaces for estimators implemented in
Scikit-learn and the *generalized* conventions across types of
estimators as described in this glossary and :ref:`overviewed in the
contributor documentation <api_overview>`.
The specific interfaces that constitute Scikit-learn's public API are
largely documented in :ref:`api_ref`. However, we less formally consider
anything as public API if none of the identifiers required to access it
begins with ``_``. We generally try to maintain :term:`backwards
compatibility` for all objects in the public API.
Private API, including functions, modules and methods beginning ``_``
are not assured to be stable.
array-like
The most common data format for *input* to Scikit-learn estimators and
functions, array-like is any type of object for which
:func:`numpy.asarray` will produce an array of appropriate shape
(usually 1 or 2-dimensional) and of appropriate dtype (usually numeric).
This includes:
* a numpy array
* a list of numbers
* a list of length-k lists of numbers for some fixed length k
* a :class:`pandas.DataFrame` with all columns numeric
* a numeric :class:`pandas.Series`
It excludes:
* a :term:`sparse matrix`
* a sparse array
* an iterator
* a generator
Note that *output* from scikit-learn estimators and functions (e.g.
predictions) should generally be arrays or sparse matrices, or lists
thereof (as in multi-output :class:`tree.DecisionTreeClassifier`'s
``predict_proba``). An estimator where ``predict()`` returns a list or
a `pandas.Series` is not valid.
attribute
attributes
We mostly use attribute to refer to how model information is stored on
an estimator during fitting. Any public attribute stored on an
estimator instance is required to begin with an alphabetic character
and end in a single underscore if it is set in :term:`fit` or
:term:`partial_fit`. These are what is documented under an estimator's
*Attributes* documentation. The information stored in attributes is
usually either: sufficient statistics used for prediction or
transformation; :term:`transductive` outputs such as :term:`labels_` or
:term:`embedding_`; or diagnostic data, such as
:term:`feature_importances_`.
Common attributes are listed :ref:`below <glossary_attributes>`.
A public attribute may have the same name as a constructor
:term:`parameter`, with a ``_`` appended. This is used to store a
validated or estimated version of the user's input. For example,
:class:`decomposition.PCA` is constructed with an ``n_components``
parameter. From this, together with other parameters and the data,
PCA estimates the attribute ``n_components_``.
Further private attributes used in prediction/transformation/etc. may
also be set when fitting. These begin with a single underscore and are
not assured to be stable for public access.
A public attribute on an estimator instance that does not end in an
underscore should be the stored, unmodified value of an ``__init__``
:term:`parameter` of the same name. Because of this equivalence, these
are documented under an estimator's *Parameters* documentation.
backwards compatibility
We generally try to maintain backward compatibility (i.e. interfaces
and behaviors may be extended but not changed or removed) from release
to release but this comes with some exceptions:
Public API only
The behavior of objects accessed through private identifiers
(those beginning ``_``) may be changed arbitrarily between
versions.
As documented
We will generally assume that the users have adhered to the
documented parameter types and ranges. If the documentation asks
for a list and the user gives a tuple, we do not assure consistent
behavior from version to version.
Deprecation
Behaviors may change following a :term:`deprecation` period
(usually two releases long). Warnings are issued using Python's
:mod:`warnings` module.
Keyword arguments
We may sometimes assume that all optional parameters (other than X
and y to :term:`fit` and similar methods) are passed as keyword
arguments only and may be positionally reordered.
Bug fixes and enhancements
Bug fixes and -- less often -- enhancements may change the behavior
of estimators, including the predictions of an estimator trained on
the same data and :term:`random_state`. When this happens, we
attempt to note it clearly in the changelog.
Serialization
We make no assurances that pickling an estimator in one version
will allow it to be unpickled to an equivalent model in the
subsequent version. (For estimators in the sklearn package, we
issue a warning when this unpickling is attempted, even if it may
happen to work.) See :ref:`persistence_limitations`.
:func:`utils.estimator_checks.check_estimator`
We provide limited backwards compatibility assurances for the
estimator checks: we may add extra requirements on estimators
tested with this function, usually when these were informally
assumed but not formally tested.
Despite this informal contract with our users, the software is provided
as is, as stated in the license. When a release inadvertently
introduces changes that are not backward compatible, these are known
as software regressions.
callable
A function, class or an object which implements the ``__call__``
method; anything for which `callable()
<https://docs.python.org/3/library/functions.html#callable>`_ returns True.
categorical feature
A categorical or nominal :term:`feature` is one that has a
finite set of discrete values across the population of data.
These are commonly represented as columns of integers or
strings. Strings will be rejected by most scikit-learn
estimators, and integers will be treated as ordinal or
count-valued. For the use with most estimators, categorical
variables should be one-hot encoded. Notable exceptions include
tree-based models such as random forests and gradient boosting
models that often work better and faster with integer-coded
categorical variables.
:class:`~sklearn.preprocessing.OrdinalEncoder` helps encoding
string-valued categorical features as ordinal integers, and
:class:`~sklearn.preprocessing.OneHotEncoder` can be used to
one-hot encode categorical features.
See also :ref:`preprocessing_categorical_features` and the
`categorical-encoding
<https://github.com/scikit-learn-contrib/category_encoders>`_
package for tools related to encoding categorical features.
clone
cloned
To copy an :term:`estimator instance` and create a new one with
identical :term:`parameters`, but without any fitted
:term:`attributes`, using :func:`~sklearn.base.clone`.
When ``fit`` is called, a :term:`meta-estimator` usually clones
a wrapped estimator instance before fitting the cloned instance.
(Exceptions, for legacy reasons, include
:class:`~pipeline.Pipeline` and
:class:`~pipeline.FeatureUnion`.)
If the estimator's `random_state` parameter is an integer (or if the
estimator doesn't have a `random_state` parameter), an *exact clone*
is returned: the clone and the original estimator will give the exact
same results. Otherwise, a *statistical clone* is returned: the clone
might yield different results from the original estimator. More
details can be found in :ref:`randomness`.
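As a minimal sketch (any simple estimator would do; ``LogisticRegression`` is used here only for illustration), cloning copies constructor parameters but discards fitted attributes:

```python
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

# Fit an estimator, then clone it: the clone keeps the parameters
# but none of the fitted (trailing-underscore) attributes.
est = LogisticRegression(C=0.5).fit([[0.0], [1.0]], [0, 1])
new = clone(est)
```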
common tests
This refers to the tests run on almost every estimator class in
Scikit-learn to check they comply with basic API conventions. They are
available for external use through
:func:`utils.estimator_checks.check_estimator` or
:func:`utils.estimator_checks.parametrize_with_checks`, with most of the
implementation in ``sklearn/utils/estimator_checks.py``.
Note: Some exceptions to the common testing regime are currently
hard-coded into the library, but we hope to replace this by marking
exceptional behaviours on the estimator using semantic :term:`estimator
tags`.
cross-fitting
cross fitting
A resampling method that iteratively partitions data into mutually
exclusive subsets to fit two stages. During the first stage, the
mutually exclusive subsets enable predictions or transformations to be
computed on data not seen during training. The computed data is then
used in the second stage. The objective is to avoid having any
overfitting in the first stage introduce bias into the input data
distribution of the second stage.
For examples of its use, see: :class:`~preprocessing.TargetEncoder`,
:class:`~ensemble.StackingClassifier`,
:class:`~ensemble.StackingRegressor` and
:class:`~calibration.CalibratedClassifierCV`.
cross-validation
cross validation
A resampling method that iteratively partitions data into mutually
exclusive 'train' and 'test' subsets so model performance can be
evaluated on unseen data. This conserves data as it avoids the need to hold
out a 'validation' dataset and accounts for variability as multiple
rounds of cross validation are generally performed.
See :ref:`User Guide <cross_validation>` for more details.
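A minimal sketch of cross validation using ``cross_val_score`` (dataset and estimator chosen only for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Five mutually exclusive train/test splits, one score per round.
X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
```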
deprecation
We use deprecation to slowly violate our :term:`backwards
compatibility` assurances, usually to:
* change the default value of a parameter; or
* remove a parameter, attribute, method, class, etc.
We will ordinarily issue a warning when a deprecated element is used,
although there may be limitations to this. For instance, we will raise
a warning when someone sets a parameter that has been deprecated, but
may not when they access that parameter's attribute on the estimator
instance.
See the :ref:`Contributors' Guide <contributing_deprecation>`.
dimensionality
May be used to refer to the number of :term:`features` (i.e.
:term:`n_features`), or columns in a 2d feature matrix.
Dimensions are, however, also used to refer to the length of a NumPy
array's shape, distinguishing a 1d array from a 2d matrix.
docstring
The embedded documentation for a module, class, function, etc., usually
in code as a string at the beginning of the object's definition, and
accessible as the object's ``__doc__`` attribute.
We try to adhere to `PEP257
<https://www.python.org/dev/peps/pep-0257/>`_, and follow `NumpyDoc
conventions <https://numpydoc.readthedocs.io/en/latest/format.html>`_.
double underscore
double underscore notation
When specifying parameter names for nested estimators, ``__`` may be
used to separate between parent and child in some contexts. The most
common use is when setting parameters through a meta-estimator with
:term:`set_params` and hence in specifying a search grid in
:ref:`parameter search <grid_search>`. See :term:`parameter`.
It is also used in :meth:`pipeline.Pipeline.fit` for passing
:term:`sample properties` to the ``fit`` methods of estimators in
the pipeline.
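For instance (the step names here are arbitrary), the ``__`` notation lets ``set_params`` reach a parameter of a step nested inside a :class:`~pipeline.Pipeline`:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
# <step name>__<parameter name> addresses the nested estimator.
pipe.set_params(clf__C=10)
```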
dtype
data type
NumPy arrays assume a homogeneous data type throughout, available in
the ``.dtype`` attribute of an array (or sparse matrix). We generally
assume simple data types for scikit-learn data: float or integer.
We may support object or string data types for arrays before encoding
or vectorizing. Our estimators do not work with struct arrays, for
instance.
Our documentation can sometimes give information about the dtype
precision, e.g. `np.int32`, `np.int64`, etc. When the precision is
provided, it refers to the NumPy dtype. If an arbitrary precision is
used, the documentation will refer to dtype `integer` or `floating`.
Note that in this case, the precision can be platform dependent.
The `numeric` dtype refers to accepting both `integer` and `floating`.
When it comes to choosing between 64-bit dtype (i.e. `np.float64` and
`np.int64`) and 32-bit dtype (i.e. `np.float32` and `np.int32`), it
boils down to a trade-off between efficiency and precision. The 64-bit
types offer more accurate results due to their lower floating-point
error, but demand more computational resources, resulting in slower
operations and increased memory usage. In contrast, 32-bit types
promise enhanced operation speed and reduced memory consumption, but
introduce a larger floating-point error. The efficiency improvements
depend on lower-level optimizations such as vectorization, single
instruction multiple data (SIMD), or cache optimization, but crucially
also on the compatibility of the algorithm in use.
Specifically, the choice of precision should account for whether the
employed algorithm can effectively leverage `np.float32`. Some
algorithms, especially certain minimization methods, are exclusively
coded for `np.float64`, meaning that even if `np.float32` is passed, it
triggers an automatic conversion back to `np.float64`. This not only
negates the intended computational savings but also introduces
additional overhead, making operations with `np.float32` unexpectedly
slower and more memory-intensive due to this extra conversion step.
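As an illustration of dtype preservation (behavior shown for ``StandardScaler``, which supports 32-bit input; not every estimator does):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X32 = np.array([[0.0], [1.0], [2.0]], dtype=np.float32)
# Estimators that support np.float32 keep the input precision rather
# than silently upcasting to np.float64.
Xt = StandardScaler().fit_transform(X32)
```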
duck typing
We try to apply `duck typing
<https://en.wikipedia.org/wiki/Duck_typing>`_ to determine how to
handle some input values (e.g. checking whether a given estimator is
a classifier). That is, we avoid using ``isinstance`` where possible,
and rely on the presence or absence of attributes to determine an
object's behaviour. Some nuance is required when following this
approach:
* For some estimators, an attribute may only be available once it is
:term:`fitted`. For instance, we cannot a priori determine if
:term:`predict_proba` is available in a grid search where the grid
includes alternating between a probabilistic and a non-probabilistic
predictor in the final step of the pipeline. In the following, we
can only determine if ``clf`` is probabilistic after fitting it on
some data::
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.linear_model import SGDClassifier
>>> clf = GridSearchCV(SGDClassifier(),
... param_grid={'loss': ['log_loss', 'hinge']})
This means that we can only check for duck-typed attributes after
fitting, and that we must be careful to make :term:`meta-estimators`
only present attributes according to the state of the underlying
estimator after fitting.
* Checking if an attribute is present (using ``hasattr``) is in general
just as expensive as getting the attribute (``getattr`` or dot
notation). In some cases, getting the attribute may indeed be
expensive (e.g. for some implementations of
:term:`feature_importances_`, which may suggest this is an API design
flaw). So code which does ``hasattr`` followed by ``getattr`` should
be avoided; ``getattr`` within a try-except block is preferred.
* For determining some aspects of an estimator's expectations or
support for some feature, we use :term:`estimator tags` instead of
duck typing.
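To illustrate (using ``SVC``, whose ``predict_proba`` is only exposed when ``probability=True`` in recent scikit-learn versions), attribute presence depends on configuration, which is why ``hasattr``/``getattr`` checks are the idiom:

```python
from sklearn.svm import SVC

X, y = [[0.0], [1.0]], [0, 1]
clf = SVC().fit(X, y)                 # probability=False by default
# Accessing the missing method raises AttributeError, so hasattr is False.
has_proba = hasattr(clf, "predict_proba")
```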
early stopping
This consists in stopping an iterative optimization method before the
convergence of the training loss, to avoid over-fitting. This is
generally done by monitoring the generalization score on a validation
set. When available, it is activated through the parameter
``early_stopping`` or by setting a positive :term:`n_iter_no_change`.
estimator instance
We sometimes use this terminology to distinguish an :term:`estimator`
class from a constructed instance. For example, in the following,
``cls`` is an estimator class, while ``est1`` and ``est2`` are
instances::
cls = RandomForestClassifier
est1 = cls()
est2 = RandomForestClassifier()
examples
We try to give examples of basic usage for most functions and
classes in the API:
* as doctests in their docstrings (i.e. within the ``sklearn/`` library
code itself).
* as examples in the :ref:`example gallery <general_examples>`
rendered (using `sphinx-gallery
<https://sphinx-gallery.readthedocs.io/>`_) from scripts in the
``examples/`` directory, exemplifying key features or parameters
of the estimator/function. These should also be referenced from the
User Guide.
* sometimes in the :ref:`User Guide <user_guide>` (built from ``doc/``)
alongside a technical description of the estimator.
experimental
An experimental tool is already usable but its public API, such as
default parameter values or fitted attributes, is still subject to
change in future versions without the usual :term:`deprecation`
warning policy.
evaluation metric
evaluation metrics
Evaluation metrics give a measure of how well a model performs. We may
use this term specifically to refer to the functions in :mod:`~sklearn.metrics`
(disregarding :mod:`~sklearn.metrics.pairwise`), as distinct from the
:term:`score` method and the :term:`scoring` API used in cross
validation. See :ref:`model_evaluation`.
These functions usually accept a ground truth (or the raw data
where the metric evaluates clustering without a ground truth) and a
prediction, be it the output of :term:`predict` (``y_pred``),
of :term:`predict_proba` (``y_proba``), or of an arbitrary score
function including :term:`decision_function` (``y_score``).
Functions are usually named to end with ``_score`` if a greater
score indicates a better model, and ``_loss`` if a lesser score
indicates a better model. This diversity of interface motivates
the scoring API.
Note that some estimators can calculate metrics that are not included
in :mod:`~sklearn.metrics` and are estimator-specific, notably model
likelihoods.
estimator tags
Estimator tags describe certain capabilities of an estimator. This would
enable some runtime behaviors based on estimator inspection, but it
also allows each estimator to be tested for appropriate invariances
while being excepted from other :term:`common tests`.
Some aspects of estimator tags are currently determined through
the :term:`duck typing` of methods like ``predict_proba`` and through
some special attributes on estimator objects.
For more detailed info, see :ref:`estimator_tags`.
feature
features
feature vector
In the abstract, a feature is a function (in its mathematical sense)
mapping a sampled object to a numeric or categorical quantity.
"Feature" is also commonly used to refer to these quantities, being the
individual elements of a vector representing a sample. In a data
matrix, features are represented as columns: each column contains the
result of applying a feature function to a set of samples.
Elsewhere features are known as attributes, predictors, regressors, or
independent variables.
Nearly all estimators in scikit-learn assume that features are numeric,
finite and not missing, even when they have semantically distinct
domains and distributions (categorical, ordinal, count-valued,
real-valued, interval). See also :term:`categorical feature` and
:term:`missing values`.
``n_features`` indicates the number of features in a dataset.
fitting
Calling :term:`fit` (or :term:`fit_transform`, :term:`fit_predict`,
etc.) on an estimator.
fitted
The state of an estimator after :term:`fitting`.
There is no conventional procedure for checking if an estimator
is fitted. However, an estimator that is not fitted:
* should raise :class:`exceptions.NotFittedError` when a prediction
method (:term:`predict`, :term:`transform`, etc.) is called.
(:func:`utils.validation.check_is_fitted` is used internally
for this purpose.)
* should not have any :term:`attributes` beginning with an alphabetic
character and ending with an underscore. (Note that a descriptor for
the attribute may still be present on the class, but ``hasattr`` should
return False.)
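A sketch of the expected behavior for an unfitted estimator (``LogisticRegression`` chosen only as an example):

```python
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
try:
    clf.predict([[0.0]])
    raised = False
except NotFittedError:
    raised = True     # prediction before fit raises NotFittedError
```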
function
We provide ad hoc function interfaces for many algorithms, while
:term:`estimator` classes provide a more consistent interface.
In particular, Scikit-learn may provide a function interface that fits
a model to some data and returns the learnt model parameters, as in
:func:`linear_model.enet_path`. For transductive models, this also
returns the embedding or cluster labels, as in
:func:`manifold.spectral_embedding` or :func:`cluster.dbscan`. Many
preprocessing transformers also provide a function interface, akin to
calling :term:`fit_transform`, as in
:func:`preprocessing.maxabs_scale`. Users should be careful to avoid
:term:`data leakage` when making use of these
``fit_transform``-equivalent functions.
We do not have a strict policy about when to or when not to provide
function forms of estimators, but maintainers should consider
consistency with existing interfaces, and whether providing a function
would lead users astray from best practices (as regards data leakage,
etc.)
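For example, the function form and the estimator form produce the same result here (shown for ``maxabs_scale`` and ``MaxAbsScaler``):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler, maxabs_scale

X = np.array([[1.0], [-2.0], [4.0]])
# The ad hoc function is equivalent to fit_transform on the estimator.
same = np.allclose(maxabs_scale(X), MaxAbsScaler().fit_transform(X))
```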
gallery
See :term:`examples`.
hyperparameter
hyper-parameter
See :term:`parameter`.
impute
imputation
Most machine learning algorithms require that their inputs have no
:term:`missing values`, and will not work if this requirement is
violated. Algorithms that attempt to fill in (or impute) missing values
are referred to as imputation algorithms.
indexable
An :term:`array-like`, :term:`sparse matrix`, pandas DataFrame or
sequence (usually a list).
induction
inductive
Inductive (contrasted with :term:`transductive`) machine learning
builds a model of some data that can then be applied to new instances.
Most estimators in Scikit-learn are inductive, having :term:`predict`
and/or :term:`transform` methods.
joblib
A Python library (https://joblib.readthedocs.io) used in Scikit-learn to
facilitate simple parallelism and caching. Joblib is oriented towards
efficiently working with numpy arrays, such as through use of
:term:`memory mapping`. See :ref:`parallelism` for more
information.
label indicator matrix
multilabel indicator matrix
multilabel indicator matrices
The format used to represent multilabel data, where each row of a 2d
array or sparse matrix corresponds to a sample, each column
corresponds to a class, and each element is 1 if the sample is labeled
with the class and 0 if not.
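A small illustration using :class:`~preprocessing.MultiLabelBinarizer` to build such a matrix from label sets:

```python
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
# Three samples with label sets over the classes {'a', 'b'}.
Y = mlb.fit_transform([{"a"}, {"a", "b"}, set()])
```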
leakage
data leakage
A problem in cross validation where generalization performance can be
over-estimated since knowledge of the test data was inadvertently
included in training a model. This is a risk, for instance, when
applying a :term:`transformer` to the entirety of a dataset rather
than each training portion in a cross validation split.
We aim to provide interfaces (such as :mod:`~sklearn.pipeline` and
:mod:`~sklearn.model_selection`) that shield the user from data leakage.
memmapping
memory map
memory mapping
A memory efficiency strategy that keeps data on disk rather than
copying it into main memory. Memory maps can be created for arrays
that can be read, written, or both, using :obj:`numpy.memmap`. When
using :term:`joblib` to parallelize operations in Scikit-learn, it
may automatically memmap large arrays to reduce memory duplication
overhead in multiprocessing.
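A minimal sketch of memory mapping with :obj:`numpy.memmap` (the file path here is illustrative):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "data.dat")
# Write through a memory map; the data lives on disk, not in a copied array.
m = np.memmap(path, dtype=np.float64, mode="w+", shape=(3,))
m[:] = [1.0, 2.0, 3.0]
m.flush()
# Re-open read-only, as a worker process would.
r = np.memmap(path, dtype=np.float64, mode="r", shape=(3,))
```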
missing values
Most Scikit-learn estimators do not work with missing values. When they
do (e.g. in :class:`impute.SimpleImputer`), NaN is the preferred
representation of missing values in float arrays. If the array has
integer dtype, NaN cannot be represented. For this reason, we support
specifying another ``missing_values`` value when :term:`imputation` or
learning can be performed in integer space.
:term:`Unlabeled data <unlabeled data>` is a special case of missing
values in the :term:`target`.
``n_features``
The number of :term:`features`.
``n_outputs``
The number of :term:`outputs` in the :term:`target`.
``n_samples``
The number of :term:`samples`.
``n_targets``
Synonym for :term:`n_outputs`.
narrative docs
narrative documentation
An alias for :ref:`User Guide <user_guide>`, i.e. documentation written
in ``doc/modules/``. Unlike the :ref:`API reference <api_ref>` provided
through docstrings, the User Guide aims to:
* group tools provided by Scikit-learn together thematically or in
terms of usage;
* motivate why someone would use each particular tool, often through
comparison;
* provide both intuitive and technical descriptions of tools;
* provide or link to :term:`examples` of using key features of a
tool.
np
A shorthand for Numpy due to the conventional import statement::
import numpy as np
online learning
Where a model is iteratively updated by receiving each batch of ground
truth :term:`targets` soon after making predictions on corresponding
batch of data. Intrinsically, the model must be usable for prediction
after each batch. See :term:`partial_fit`.
out-of-core
An efficiency strategy where not all the data is stored in main memory
at once, usually by performing learning on batches of data. See
:term:`partial_fit`.
outputs
Individual scalar/categorical variables per sample in the
:term:`target`. For example, in multilabel classification each
possible label corresponds to a binary output. Also called *responses*,
*tasks* or *targets*.
See :term:`multiclass multioutput` and :term:`continuous multioutput`.
pair
A tuple of length two.
parameter
parameters
param
params
We mostly use *parameter* to refer to the aspects of an estimator that
can be specified in its construction. For example, ``max_depth`` and
``random_state`` are parameters of :class:`~ensemble.RandomForestClassifier`.
Parameters to an estimator's constructor are stored unmodified as
attributes on the estimator instance, and conventionally start with an
alphabetic character and end with an alphanumeric character. Each
estimator's constructor parameters are described in the estimator's
docstring.
We do not use parameters in the statistical sense, where parameters are
values that specify a model and can be estimated from data. What we
call parameters might be what statisticians call hyperparameters to the
model: aspects for configuring model structure that are often not
directly learnt from data. However, our parameters are also used to
prescribe modeling operations that do not affect the learnt model, such
as :term:`n_jobs` for controlling parallelism.
When talking about the parameters of a :term:`meta-estimator`, we may
also be including the parameters of the estimators wrapped by the
meta-estimator. Ordinarily, these nested parameters are denoted by
using a :term:`double underscore` (``__``) to separate between the
estimator-as-parameter and its parameter. Thus ``clf =
BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=3))``
has a deep parameter ``estimator__max_depth`` with value ``3``,
which is accessible with ``clf.estimator.max_depth`` or
``clf.get_params()['estimator__max_depth']``.
The list of parameters and their current values can be retrieved from
an :term:`estimator instance` using its :term:`get_params` method.
Between construction and fitting, parameters may be modified using
:term:`set_params`. To enable this, parameters are not ordinarily
validated or altered when the estimator is constructed, or when each
parameter is set. Parameter validation is performed when :term:`fit` is
called.
Common parameters are listed :ref:`below <glossary_parameters>`.
pairwise metric
pairwise metrics
In its broad sense, a pairwise metric defines a function for measuring
similarity or dissimilarity between two samples (with each ordinarily
represented as a :term:`feature vector`). We particularly provide
implementations of distance metrics (as well as improper metrics like
Cosine Distance) through :func:`metrics.pairwise_distances`, and of
kernel functions (a constrained class of similarity functions) in
:func:`metrics.pairwise.pairwise_kernels`. These can compute pairwise distance
matrices that are symmetric and hence store data redundantly.
See also :term:`precomputed` and :term:`metric`.
Note that for most distance metrics, we rely on implementations from
:mod:`scipy.spatial.distance`, but may reimplement for efficiency in
our context. The :class:`metrics.DistanceMetric` interface is used to implement
distance metrics for integration with efficient neighbors search.
pd
A shorthand for `Pandas <https://pandas.pydata.org>`_ due to the
conventional import statement::
import pandas as pd
precomputed
Where algorithms rely on :term:`pairwise metrics`, and can be computed
from pairwise metrics alone, we often allow the user to specify that
the :term:`X` provided is already in the pairwise (dis)similarity
space, rather than in a feature space. That is, when passed to
:term:`fit`, it is a square, symmetric matrix, with each vector
indicating (dis)similarity to every sample, and when passed to
prediction/transformation methods, each row corresponds to a testing
sample and each column to a training sample.
Use of precomputed X is usually indicated by setting a ``metric``,
``affinity`` or ``kernel`` parameter to the string 'precomputed'. If
this is the case, then the estimator should set the `pairwise`
estimator tag as True.
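A sketch of passing a precomputed distance matrix (nearest neighbors chosen only for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import pairwise_distances
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
D = pairwise_distances(X)          # square, symmetric distance matrix
knn = KNeighborsClassifier(metric="precomputed").fit(D, y)
# At prediction time, rows are test samples and columns training samples.
pred = knn.predict(D)
```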
rectangular
Data that can be represented as a matrix with :term:`samples` on the
first axis and a fixed, finite set of :term:`features` on the second
is called rectangular.
This term excludes samples with non-vectorial structures, such as text,
an image of arbitrary size, a time series of arbitrary length, a set of
vectors, etc. The purpose of a :term:`vectorizer` is to produce
rectangular forms of such data.
sample
samples
We usually use this term as a noun to indicate a single feature vector.
Elsewhere a sample is called an instance, data point, or observation.
``n_samples`` indicates the number of samples in a dataset, being the
number of rows in a data array :term:`X`.
Note that this definition is standard in machine learning and deviates from
statistics where it means *a set of individuals or objects collected or
selected*.
sample property
sample properties
A sample property is data for each sample (e.g. an array of length
n_samples) passed to an estimator method or a similar function,
alongside but distinct from the :term:`features` (``X``) and
:term:`target` (``y``). The most prominent example is
:term:`sample_weight`; see others at :ref:`glossary_sample_props`.
As of version 0.19 we do not have a consistent approach to handling
sample properties and their routing in :term:`meta-estimators`, though
a ``fit_params`` parameter is often used.
scikit-learn-contrib
A venue for publishing Scikit-learn-compatible libraries that are
broadly authorized by the core developers and the contrib community,
but not maintained by the core developer team.
See https://scikit-learn-contrib.github.io.
scikit-learn enhancement proposals
SLEP
SLEPs
Changes to the API principles and changes to dependencies or supported
versions happen via a :ref:`SLEP <slep>` and follow the
decision-making process outlined in :ref:`governance`.
For all votes, a proposal must have been made public and discussed before the
vote. Such a proposal must be a consolidated document, in the form of a
"Scikit-Learn Enhancement Proposal" (SLEP), rather than a long discussion on an
issue. A SLEP must be submitted as a pull-request to
`enhancement proposals <https://scikit-learn-enhancement-proposals.readthedocs.io>`_ using the
`SLEP template <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep_template.html>`_.
semi-supervised
semi-supervised learning
semisupervised
Learning where the expected prediction (label or ground truth) is only
available for some samples provided as training data when
:term:`fitting` the model. We conventionally apply the label ``-1``
to :term:`unlabeled` samples in semi-supervised classification.
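For instance, with :class:`~semi_supervised.LabelPropagation` unlabeled samples are marked with ``-1`` (tiny toy data chosen for illustration):

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, -1, 1, -1])          # -1 marks unlabeled samples
model = LabelPropagation().fit(X, y)
# transduction_ holds the inferred label for every sample, labeled or not.
labels = model.transduction_
```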
sparse matrix
sparse graph
A representation of two-dimensional numeric data that is more memory
efficient than the corresponding dense numpy array where almost all elements
are zero. We use the :mod:`scipy.sparse` framework, which provides
several underlying sparse data representations, or *formats*.
Some formats are more efficient than others for particular tasks, and
when a particular format provides especial benefit, we try to document
this fact in Scikit-learn parameter descriptions.
Some sparse matrix formats (notably CSR, CSC, COO and LIL) distinguish
between *implicit* and *explicit* zeros. Explicit zeros are stored
(i.e. they consume memory in a ``data`` array) in the data structure,
while implicit zeros correspond to every element not otherwise defined
in explicit storage.
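The implicit/explicit distinction can be seen directly with :mod:`scipy.sparse` (a minimal sketch):

```python
from scipy.sparse import csr_matrix

# Store one nonzero entry and one *explicit* zero in a 2x2 matrix.
M = csr_matrix(([1, 0], ([0, 0], [0, 1])), shape=(2, 2))
nnz_before = M.nnz        # counts stored entries, including explicit zeros
M.eliminate_zeros()       # drop explicit zeros from storage
nnz_after = M.nnz
```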
Two semantics for sparse matrices are used in Scikit-learn:
matrix semantics
The sparse matrix is interpreted as an array with implicit and
explicit zeros being interpreted as the number 0. This is the
interpretation most often adopted, e.g. when sparse matrices
are used for feature matrices or :term:`multilabel indicator
matrices`.
graph semantics
As with :mod:`scipy.sparse.csgraph`, explicit zeros are
interpreted as the number 0, but implicit zeros indicate a masked
or absent value, such as the absence of an edge between two
vertices of a graph, where an explicit value indicates an edge's
weight. This interpretation is adopted to represent connectivity
in clustering, in representations of nearest neighborhoods
(e.g. :func:`neighbors.kneighbors_graph`), and for precomputed
distance representation where only distances in the neighborhood
of each point are required.
When working with sparse matrices, we assume that it is sparse for a
good reason, and avoid writing code that densifies a user-provided
sparse matrix, instead maintaining sparsity or raising an error if not
possible (i.e. if an estimator does not / cannot support sparse
matrices).
stateless
An estimator is stateless if it does not store any information that is
obtained during :term:`fit`. This information can be either parameters
learned during :term:`fit` or statistics computed from the
training data. An estimator is stateless if it has no :term:`attributes`
apart from ones set in `__init__`. Calling :term:`fit` for these
estimators will only validate the public :term:`attributes` passed
in `__init__`.
supervised
supervised learning
Learning where the expected prediction (label or ground truth) is
available for each sample when :term:`fitting` the model, provided as
:term:`y`. This is the approach taken in a :term:`classifier` or
:term:`regressor` among other estimators.
target
targets
The *dependent variable* in :term:`supervised` (and
:term:`semisupervised`) learning, passed as :term:`y` to an estimator's
:term:`fit` method. Also known as *dependent variable*, *outcome
variable*, *response variable*, *ground truth* or *label*. Scikit-learn
works with targets that have minimal structure: a class from a finite
set, a finite real-valued number, multiple classes, or multiple
numbers. See :ref:`glossary_target_types`.
transduction
transductive
A transductive (contrasted with :term:`inductive`) machine learning
method is designed to model a specific dataset, but not to apply that
model to unseen data. Examples include :class:`manifold.TSNE`,
:class:`cluster.AgglomerativeClustering` and
:class:`neighbors.LocalOutlierFactor`.
unlabeled
unlabeled data
Samples with an unknown ground truth when fitting; equivalently,
:term:`missing values` in the :term:`target`. See also
:term:`semisupervised` and :term:`unsupervised` learning.
unsupervised
unsupervised learning
Learning where the expected prediction (label or ground truth) is not
available for each sample when :term:`fitting` the model, as in
:term:`clusterers` and :term:`outlier detectors`. Unsupervised
estimators ignore any :term:`y` passed to :term:`fit`.
.. _glossary_estimator_types:
Class APIs and Estimator Types
==============================
.. glossary::
classifier
classifiers
A :term:`supervised` (or :term:`semi-supervised`) :term:`predictor`
with a finite set of discrete possible output values.
A classifier supports modeling some of :term:`binary`,
:term:`multiclass`, :term:`multilabel`, or :term:`multiclass
multioutput` targets. Within scikit-learn, all classifiers support
multi-class classification, defaulting to using a one-vs-rest
strategy over the binary classification problem.
Classifiers must store a :term:`classes_` attribute after fitting,
and inherit from :class:`base.ClassifierMixin`, which sets
their corresponding :term:`estimator tags` correctly.
A classifier can be distinguished from other estimators with
:func:`~base.is_classifier`.
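For example:

```python
from sklearn.base import is_classifier
from sklearn.linear_model import LinearRegression, LogisticRegression

clf_flag = is_classifier(LogisticRegression())   # a classifier
reg_flag = is_classifier(LinearRegression())     # a regressor
```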
A classifier must implement:
* :term:`fit`
* :term:`predict`
* :term:`score`
It may also be appropriate to implement :term:`decision_function`,
:term:`predict_proba` and :term:`predict_log_proba`.
clusterer
clusterers
An :term:`unsupervised` :term:`predictor` with a finite set of discrete
output values.
A clusterer usually stores :term:`labels_` after fitting, and must do
so if it is :term:`transductive`.
A clusterer must implement:
* :term:`fit`
* :term:`fit_predict` if :term:`transductive`
* :term:`predict` if :term:`inductive`
density estimator
An :term:`unsupervised` estimation of input probability density
function. Commonly used techniques are:
* :ref:`kernel_density` - uses a kernel function, controlled by the
bandwidth parameter to represent density;
* :ref:`Gaussian mixture <mixture>` - uses mixture of Gaussian models
to represent density.
estimator
estimators
An object which manages the estimation and decoding of a model. The
model is estimated as a deterministic function of:
* :term:`parameters` provided in object construction or with
:term:`set_params`;
* the global :mod:`numpy.random` random state if the estimator's
:term:`random_state` parameter is set to None; and
* any data or :term:`sample properties` passed to the most recent
call to :term:`fit`, :term:`fit_transform` or :term:`fit_predict`,
or data similarly passed in a sequence of calls to
:term:`partial_fit`.
The estimated model is stored in public and private :term:`attributes`
on the estimator instance, facilitating decoding through prediction
and transformation methods.
Estimators must provide a :term:`fit` method, and should provide
:term:`set_params` and :term:`get_params`, although these are usually
provided by inheritance from :class:`base.BaseEstimator`.
The core functionality of some estimators may also be available as a
:term:`function`.
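A toy estimator, sketched under the conventions above (the class name and its logic are hypothetical):

```python
from sklearn.base import BaseEstimator

class MeanEstimator(BaseEstimator):
    """Hypothetical estimator: parameters at construction, model state in fit."""

    def __init__(self, shift=0.0):
        self.shift = shift              # stored verbatim, not validated here

    def fit(self, X, y=None):
        # estimated attributes conventionally end in a trailing underscore
        self.mean_ = sum(X) / len(X) + self.shift
        return self                     # fit returns the estimator itself

est = MeanEstimator(shift=1.0)
print(est.get_params())                 # provided by BaseEstimator
est.set_params(shift=0.0)
print(est.fit([1.0, 2.0, 3.0]).mean_)   # method chaining via the fit return
```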
feature extractor
feature extractors
A :term:`transformer` which takes input where each sample is not
represented as an :term:`array-like` object of fixed length, and
produces an :term:`array-like` object of :term:`features` for each
sample (and thus a 2-dimensional array-like for a set of samples). In
other words, it (lossily) maps a non-rectangular data representation
into :term:`rectangular` data.
Feature extractors must implement at least:
* :term:`fit`
* :term:`transform`
* :term:`get_feature_names_out`
meta-estimator
meta-estimators
metaestimator
metaestimators
An :term:`estimator` which takes another estimator as a parameter.
Examples include :class:`pipeline.Pipeline`,
:class:`model_selection.GridSearchCV`,
:class:`feature_selection.SelectFromModel` and
:class:`ensemble.BaggingClassifier`.
In a meta-estimator's :term:`fit` method, any contained estimators
should be :term:`cloned` before they are fit (although FIXME: Pipeline
and FeatureUnion do not do this currently). An exception to this is
that an estimator may explicitly document that it accepts a pre-fitted
estimator (e.g. using ``prefit=True`` in
:class:`feature_selection.SelectFromModel`). One known issue with this
is that the pre-fitted estimator will lose its model if the
meta-estimator is cloned. A meta-estimator should have ``fit`` called
before prediction, even if all contained estimators are pre-fitted.
In cases where a meta-estimator's primary behaviors (e.g.
:term:`predict` or :term:`transform` implementation) are functions of
prediction/transformation methods of the provided *base estimator* (or
multiple base estimators), a meta-estimator should provide at least the
standard methods provided by the base estimator. It may not be
possible to identify which methods are provided by the underlying
estimator until the meta-estimator has been :term:`fitted` (see also
:term:`duck typing`), for which
:func:`utils.metaestimators.available_if` may help. It
should also provide (or modify) the :term:`estimator tags` and
:term:`classes_` attribute provided by the base estimator.
Meta-estimators should be careful to validate data as minimally as
possible before passing it to an underlying estimator. This saves
computation time, and may, for instance, allow the underlying
estimator to easily work with data that is not :term:`rectangular`.
outlier detector
outlier detectors
An :term:`unsupervised` binary :term:`predictor` which models the
distinction between core and outlying samples.
Outlier detectors must implement:
* :term:`fit`
* :term:`fit_predict` if :term:`transductive`
* :term:`predict` if :term:`inductive`
Inductive outlier detectors may also implement
:term:`decision_function` to give a normalized inlier score where
outliers have score below 0. :term:`score_samples` may provide an
unnormalized score per sample.
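A sketch with :class:`ensemble.IsolationForest`, one inductive outlier detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

X = np.array([[0.0], [0.1], [-0.1], [0.05], [10.0]])  # last point looks outlying
det = IsolationForest(random_state=0).fit(X)

pred = det.predict(X)                 # 1 for inliers, -1 for outliers
scores = det.decision_function(X)     # normalized: outliers score below 0
print(np.array_equal(scores < 0, pred == -1))
```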
predictor
predictors
An :term:`estimator` supporting :term:`predict` and/or
:term:`fit_predict`. This encompasses :term:`classifier`,
:term:`regressor`, :term:`outlier detector` and :term:`clusterer`.
In statistics, "predictors" refers to :term:`features`.
regressor
regressors
A :term:`supervised` (or :term:`semi-supervised`) :term:`predictor`
with :term:`continuous` output values.
Regressors inherit from :class:`base.RegressorMixin`, which sets their
:term:`estimator tags` correctly.
A regressor can be distinguished from other estimators with
:func:`~base.is_regressor`.
A regressor must implement:
* :term:`fit`
* :term:`predict`
* :term:`score`
transformer
transformers
An estimator supporting :term:`transform` and/or :term:`fit_transform`.
A purely :term:`transductive` transformer, such as
:class:`manifold.TSNE`, may not implement ``transform``.
vectorizer
vectorizers
See :term:`feature extractor`.
There are further APIs specifically related to a small family of estimators,
such as:
.. glossary::
cross-validation splitter
CV splitter
cross-validation generator
A non-estimator family of classes used to split a dataset into a
sequence of train and test portions (see :ref:`cross_validation`),
by providing :term:`split` and :term:`get_n_splits` methods.
Note that unlike estimators, these do not have :term:`fit` methods
and do not provide :term:`set_params` or :term:`get_params`.
Parameter validation may be performed in ``__init__``.
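For instance, with :class:`model_selection.KFold`:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(5, 2)
cv = KFold(n_splits=5)
print(cv.get_n_splits(X))             # number of (train, test) pairs below

for train_idx, test_idx in cv.split(X):
    # each split partitions the sample indices; no index is in both arrays
    assert not set(train_idx) & set(test_idx)
```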
cross-validation estimator
An estimator that has built-in cross-validation capabilities to
automatically select the best hyper-parameters (see the :ref:`User
Guide <grid_search>`). Some examples of cross-validation estimators
are :class:`ElasticNetCV <linear_model.ElasticNetCV>` and
:class:`LogisticRegressionCV <linear_model.LogisticRegressionCV>`.
Cross-validation estimators are named `EstimatorCV` and tend to be
roughly equivalent to `GridSearchCV(Estimator(), ...)`. The
advantage of using a cross-validation estimator over the canonical
:term:`estimator` class along with :ref:`grid search <grid_search>` is
that they can take advantage of warm-starting by reusing precomputed
results in the previous steps of the cross-validation process. This
generally leads to speed improvements. An exception is the
:class:`RidgeCV <linear_model.RidgeCV>` class, which can instead
perform efficient Leave-One-Out (LOO) CV. By default, all these
estimators, apart from :class:`RidgeCV <linear_model.RidgeCV>` with an
LOO-CV, will be refitted on the full training dataset after finding the
best combination of hyper-parameters.
scorer
A non-estimator callable object which evaluates an estimator on given
test data, returning a number. Unlike :term:`evaluation metrics`,
a greater returned number must correspond with a *better* score.
See :ref:`scoring_parameter`.
Further examples:
* :class:`metrics.DistanceMetric`
* :class:`gaussian_process.kernels.Kernel`
* ``tree.Criterion``
.. _glossary_metadata_routing:
Metadata Routing
================
.. glossary::
consumer
An object which consumes :term:`metadata`. This object is usually an
:term:`estimator`, a :term:`scorer`, or a :term:`CV splitter`. Consuming
metadata means using it in calculations, e.g. using
:term:`sample_weight` to calculate a certain type of score. Being a
consumer doesn't mean that the object always receives a certain
metadata, rather it means it can use it if it is provided.
metadata
Data which is related to the given :term:`X` and :term:`y` data, but
is not directly a part of the data, e.g. :term:`sample_weight` or
:term:`groups`, and is passed along to different objects and methods,
e.g. to a :term:`scorer` or a :term:`CV splitter`.
router
An object which routes metadata to :term:`consumers <consumer>`. This
object is usually a :term:`meta-estimator`, e.g.
:class:`~pipeline.Pipeline` or :class:`~model_selection.GridSearchCV`.
Some routers can also be a consumer. This happens for example when a
meta-estimator uses the given :term:`groups`, and it also passes it
along to some of its sub-objects, such as a :term:`CV splitter`.
Please refer to :ref:`Metadata Routing User Guide <metadata_routing>` for more
information.
.. _glossary_target_types:
Target Types
============
.. glossary::
binary
A classification problem consisting of two classes. A binary target
may be represented as for a :term:`multiclass` problem but with only two
labels. A binary decision function is represented as a 1d array.
Semantically, one class is often considered the "positive" class.
Unless otherwise specified (e.g. using :term:`pos_label` in
:term:`evaluation metrics`), we consider the class label with the
greater value (numerically or lexicographically) as the positive class:
of labels [0, 1], 1 is the positive class; of [1, 2], 2 is the positive
class; of ['no', 'yes'], 'yes' is the positive class; of ['no', 'YES'],
'no' is the positive class. This affects the output of
:term:`decision_function`, for instance.
Note that a dataset sampled from a multiclass ``y`` or a continuous
``y`` may appear to be binary.
:func:`~utils.multiclass.type_of_target` will return 'binary' for
binary input, or for a similar array with only a single class present.
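For example:

```python
from sklearn.utils.multiclass import type_of_target

print(type_of_target([0, 1, 1, 0]))     # 'binary'
print(type_of_target(["no", "yes"]))    # 'binary'
print(type_of_target([1, 1, 1]))        # single class present: still 'binary'
print(type_of_target([0.5, 1.7]))       # 'continuous', not binary
```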
continuous
A regression problem where each sample's target is a finite floating
point number represented as a 1-dimensional array of floats (or
sometimes ints).
:func:`~utils.multiclass.type_of_target` will return 'continuous' for
continuous input, but if the data is all integers, it will be
identified as 'multiclass'.
continuous multioutput
continuous multi-output
multioutput continuous
multi-output continuous
A regression problem where each sample's target consists of ``n_outputs``
:term:`outputs`, each one a finite floating point number, for a
fixed int ``n_outputs > 1`` in a particular dataset.
Continuous multioutput targets are represented as multiple
:term:`continuous` targets, horizontally stacked into an array
of shape ``(n_samples, n_outputs)``.
:func:`~utils.multiclass.type_of_target` will return
'continuous-multioutput' for continuous multioutput input, but if the
data is all integers, it will be identified as
'multiclass-multioutput'.
multiclass
multi-class
A classification problem consisting of more than two classes. A
multiclass target may be represented as a 1-dimensional array of
strings or integers. A 2d column vector of integers (i.e. a
single output in :term:`multioutput` terms) is also accepted.
We do not officially support other orderable, hashable objects as class
labels, even if estimators may happen to work when given classification
targets of such type.
For semi-supervised classification, :term:`unlabeled` samples should
have the special label -1 in ``y``.
Within scikit-learn, all estimators supporting binary classification
also support multiclass classification, using One-vs-Rest by default.
A :class:`preprocessing.LabelEncoder` helps to canonicalize multiclass
targets as integers.
:func:`~utils.multiclass.type_of_target` will return 'multiclass' for
multiclass input. The user may also want to handle 'binary' input
identically to 'multiclass'.
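For example, canonicalizing string labels with :class:`preprocessing.LabelEncoder`:

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.utils.multiclass import type_of_target

y = ["amber", "green", "red", "green"]
print(type_of_target(y))               # 'multiclass'

le = LabelEncoder().fit(y)
print(list(le.classes_))               # sorted class labels
print(list(le.transform(y)))           # canonical integer encoding
```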
multiclass multioutput
multi-class multi-output
multioutput multiclass
multi-output multi-class
A classification problem where each sample's target consists of
``n_outputs`` :term:`outputs`, each a class label, for a fixed int
``n_outputs > 1`` in a particular dataset. Each output has a
fixed set of available classes, and each sample is labeled with a
class for each output. An output may be binary or multiclass, and in
the case where all outputs are binary, the target is
:term:`multilabel`.
Multiclass multioutput targets are represented as multiple
:term:`multiclass` targets, horizontally stacked into an array
of shape ``(n_samples, n_outputs)``.
XXX: For simplicity, we may not always support string class labels
for multiclass multioutput, and integer class labels should be used.
:mod:`~sklearn.multioutput` provides estimators which estimate multi-output
problems using multiple single-output estimators. This may not fully
account for dependencies among the different outputs, which methods
natively handling the multioutput case (e.g. decision trees, nearest
neighbors, neural networks) may do better.
:func:`~utils.multiclass.type_of_target` will return
'multiclass-multioutput' for multiclass multioutput input.
multilabel
multi-label
A :term:`multiclass multioutput` target where each output is
:term:`binary`. This may be represented as a 2d (dense) array or
sparse matrix of integers, such that each column is a separate binary
target, where positive labels are indicated with 1 and negative labels
are usually -1 or 0. Sparse multilabel targets are not supported
everywhere that dense multilabel targets are supported.
Semantically, a multilabel target can be thought of as a set of labels
for each sample. While not used internally,
:class:`preprocessing.MultiLabelBinarizer` is provided as a utility to
convert from a list of sets representation to a 2d array or sparse
matrix. One-hot encoding a multiclass target with
:class:`preprocessing.LabelBinarizer` turns it into a multilabel
problem.
:func:`~utils.multiclass.type_of_target` will return
'multilabel-indicator' for multilabel input, whether sparse or dense.
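For example, converting from a list-of-sets representation:

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.utils.multiclass import type_of_target

y_sets = [{"news", "sport"}, {"sport"}, set()]   # a set of labels per sample
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_sets)          # dense 2d indicator array
print(list(mlb.classes_))              # column order of the indicator matrix
print(Y)
print(type_of_target(Y))               # 'multilabel-indicator'
```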
multioutput
multi-output
A target where each sample has multiple classification/regression
labels. See :term:`multiclass multioutput` and :term:`continuous
multioutput`. We do not currently support modelling mixed
classification and regression targets.
.. _glossary_methods:
Methods
=======
.. glossary::
``decision_function``
In a fitted :term:`classifier` or :term:`outlier detector`, predicts a
"soft" score for each sample in relation to each class, rather than the
"hard" categorical prediction produced by :term:`predict`. Its input
is usually only some observed data, :term:`X`.
If the estimator was not already :term:`fitted`, calling this method
should raise a :class:`exceptions.NotFittedError`.
Output conventions:
binary classification
A 1-dimensional array, where values strictly greater than zero
indicate the positive class (i.e. the last class in
:term:`classes_`).
multiclass classification
A 2-dimensional array, where the row-wise arg-maximum is the
predicted class. Columns are ordered according to
:term:`classes_`.
multilabel classification
Scikit-learn is inconsistent in its representation of :term:`multilabel`
decision functions. It may be represented one of two ways:
- List of 2d arrays, each array of shape: (`n_samples`, 2), like in
multiclass multioutput. List is of length `n_labels`.
- Single 2d array of shape (`n_samples`, `n_labels`), with each
'column' in the array corresponding to the individual binary
classification decisions. This is identical to the
multiclass classification format, though its semantics differ: it
should be interpreted, like in the binary case, by thresholding at
0.
multioutput classification
A list of 2d arrays, corresponding to each multiclass decision
function.
outlier detection
A 1-dimensional array, where a value greater than or equal to zero
indicates an inlier.
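In the binary case, for instance, thresholding the soft scores at zero recovers the hard predictions (sketched with :class:`linear_model.LogisticRegression`):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = [[-2.0], [-1.0], [1.0], [2.0]]
y = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y)

scores = clf.decision_function(X)      # 1d array for binary classification
print(scores.shape)
# scores > 0 corresponds to the last class in classes_
print(np.array_equal(scores > 0, clf.predict(X) == clf.classes_[1]))
```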
``fit``
The ``fit`` method is provided on every estimator. It usually takes some
:term:`samples` ``X``, :term:`targets` ``y`` if the model is supervised,
and potentially other :term:`sample properties` such as
:term:`sample_weight`. It should:
* clear any prior :term:`attributes` stored on the estimator, unless
:term:`warm_start` is used;
* validate and interpret any :term:`parameters`, ideally raising an
error if invalid;
* validate the input data;
* estimate and store model attributes from the estimated parameters and
provided data; and
* return the now :term:`fitted` estimator to facilitate method
chaining.
:ref:`glossary_target_types` describes possible formats for ``y``.
``fit_predict``
Used especially for :term:`unsupervised`, :term:`transductive`
estimators, this fits the model and returns the predictions (similar to
:term:`predict`) on the training data. In clusterers, these predictions
are also stored in the :term:`labels_` attribute, and the output of
``.fit_predict(X)`` is usually equivalent to ``.fit(X).predict(X)``.
The parameters to ``fit_predict`` are the same as those to ``fit``.
``fit_transform``
A method on :term:`transformers` which fits the estimator and returns
the transformed training data. It takes parameters as in :term:`fit`
and its output should have the same shape as calling ``.fit(X,
...).transform(X)``. There are nonetheless rare cases where
``.fit_transform(X, ...)`` and ``.fit(X, ...).transform(X)`` do not
return the same value, wherein training data needs to be handled
differently (due to model blending in stacked ensembles, for instance;
such cases should be clearly documented).
:term:`Transductive <transductive>` transformers may also provide
``fit_transform`` but not :term:`transform`.
One reason to implement ``fit_transform`` is that performing ``fit``
and ``transform`` separately would be less efficient than together.
:class:`base.TransformerMixin` provides a default implementation,
providing a consistent interface across transformers where
``fit_transform`` is or is not specialized.
In :term:`inductive` learning -- where the goal is to learn a
generalized model that can be applied to new data -- users should be
careful not to apply ``fit_transform`` to the entirety of a dataset
(i.e. training and test data together) before further modelling, as
this results in :term:`data leakage`.
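For example, with :class:`preprocessing.StandardScaler`:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[0.0], [2.0], [4.0]])
X_test = np.array([[1.0]])

scaler = StandardScaler()
Xt = scaler.fit_transform(X_train)     # fit statistics on training data only
print(np.allclose(Xt, StandardScaler().fit(X_train).transform(X_train)))

# to avoid leakage, reuse the fitted statistics to transform the test data
print(scaler.transform(X_test))
```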
``get_feature_names_out``
Primarily for :term:`feature extractors`, but also used for other
transformers to provide string names for each column in the output of
the estimator's :term:`transform` method. It outputs an array of
strings and may take an array-like of strings as input, corresponding
to the names of input columns from which output column names can
be generated. If `input_features` is not passed in, then the
`feature_names_in_` attribute will be used. If the
`feature_names_in_` attribute is not defined, then the
input names are named `[x0, x1, ..., x(n_features_in_ - 1)]`.
``get_n_splits``
On a :term:`CV splitter` (not an estimator), returns the number of
elements one would get if iterating through the return value of
:term:`split` given the same parameters. Takes the same parameters as
split.
``get_params``
Gets all :term:`parameters`, and their values, that can be set using
:term:`set_params`. When its ``deep`` parameter is set to False, only
those parameters not containing ``__`` are returned, i.e. those not
due to indirection via contained estimators.
Most estimators adopt the definition from :class:`base.BaseEstimator`,
which simply adopts the parameters defined for ``__init__``.
:class:`pipeline.Pipeline`, among others, reimplements ``get_params``
to declare the estimators named in its ``steps`` parameters as
themselves being parameters.
``partial_fit``
Facilitates fitting an estimator in an online fashion. Unlike ``fit``,
repeatedly calling ``partial_fit`` does not clear the model, but
updates it with the data provided. The portion of data
provided to ``partial_fit`` may be called a mini-batch.
Each mini-batch must be of consistent shape, etc. In iterative
estimators, ``partial_fit`` often only performs a single iteration.
``partial_fit`` may also be used for :term:`out-of-core` learning,
although usually limited to the case where learning can be performed
online, i.e. the model is usable after each ``partial_fit`` and there
is no separate processing needed to finalize the model.
:class:`cluster.Birch` introduces the convention that calling
``partial_fit(X)`` will produce a model that is not finalized, but the
model can be finalized by calling ``partial_fit()`` i.e. without
passing a further mini-batch.
Generally, estimator parameters should not be modified between calls
to ``partial_fit``, although ``partial_fit`` should validate them
as well as the new mini-batch of data. In contrast, ``warm_start``
is used to repeatedly fit the same estimator with the same data
but varying parameters.
Like ``fit``, ``partial_fit`` should return the estimator object.
To clear the model, a new estimator should be constructed, for instance
with :func:`base.clone`.
NOTE: Using ``partial_fit`` after ``fit`` results in undefined behavior.
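A sketch of online fitting with :class:`linear_model.SGDClassifier`:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
clf = SGDClassifier(random_state=0)

# all classes must be declared on the first call, since a given
# mini-batch is not guaranteed to contain every class
classes = np.array([0, 1])
for _ in range(5):
    X_batch = rng.uniform(-1, 1, size=(20, 2))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)  # updates, not resets

print(clf.predict([[0.8, 0.0], [-0.8, 0.0]]))  # model is usable after each call
```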
``predict``
Makes a prediction for each sample, usually only taking :term:`X` as
input (but see under regressor output conventions below). In a
:term:`classifier` or :term:`regressor`, this prediction is in the same
target space used in fitting (e.g. one of {'red', 'amber', 'green'} if
the ``y`` in fitting consisted of these strings). Despite this, even
when ``y`` passed to :term:`fit` is a list or other array-like, the
output of ``predict`` should always be an array or sparse matrix. In a
:term:`clusterer` or :term:`outlier detector` the prediction is an
integer.
If the estimator was not already :term:`fitted`, calling this method
should raise a :class:`exceptions.NotFittedError`.
Output conventions:
classifier
An array of shape ``(n_samples,)`` or ``(n_samples, n_outputs)``.
:term:`Multilabel <multilabel>` data may be represented as a sparse
matrix if a sparse matrix was used in fitting. Each element should
be one of the values in the classifier's :term:`classes_`
attribute.
clusterer
An array of shape ``(n_samples,)`` where each value is from 0 to
``n_clusters - 1`` if the corresponding sample is clustered,
and -1 if the sample is not clustered, as in
:func:`cluster.dbscan`.
outlier detector
An array of shape ``(n_samples,)`` where each value is -1 for an
outlier and 1 otherwise.
regressor
A numeric array of shape ``(n_samples,)``, usually float64.
Some regressors have extra options in their ``predict`` method,
allowing them to return standard deviation (``return_std=True``)
or covariance (``return_cov=True``) relative to the predicted
value. In this case, the return value is a tuple of arrays
corresponding to (prediction mean, std, cov) as required.
``predict_log_proba``
The natural logarithm of the output of :term:`predict_proba`, provided
to facilitate numerical stability.
``predict_proba``
A method in :term:`classifiers` and :term:`clusterers` that can
return probability estimates for each class/cluster. Its input is
usually only some observed data, :term:`X`.
If the estimator was not already :term:`fitted`, calling this method
should raise a :class:`exceptions.NotFittedError`.
Output conventions are like those for :term:`decision_function` except
in the :term:`binary` classification case, where one column is output
for each class (while ``decision_function`` outputs a 1d array). For
binary and multiclass predictions, each row should add to 1.
Like other methods, ``predict_proba`` should only be present when the
estimator can make probabilistic predictions (see :term:`duck typing`).
This means that the presence of the method may depend on estimator
parameters (e.g. in :class:`linear_model.SGDClassifier`) or training
data (e.g. in :class:`model_selection.GridSearchCV`) and may only
appear after fitting.
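For example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0]]
y = ["a", "b", "c"]
clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X)           # one column per class, in classes_ order
print(proba.shape)                     # (n_samples, n_classes)
print(np.allclose(proba.sum(axis=1), 1.0))           # each row sums to 1
print(np.allclose(np.log(proba), clf.predict_log_proba(X)))
```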
``score``
A method on an estimator, usually a :term:`predictor`, which evaluates
its predictions on a given dataset, and returns a single numerical
score. A greater return value should indicate better predictions;
accuracy is used for classifiers and R^2 for regressors by default.
If the estimator was not already :term:`fitted`, calling this method
should raise a :class:`exceptions.NotFittedError`.
Some estimators implement a custom, estimator-specific score function,
often the likelihood of the data under the model.
``score_samples``
A method that returns a score for each given sample. The exact
definition of *score* varies from one class to another. In the case of
density estimation, it can be the log density model on the data, and in
the case of outlier detection, it can be the opposite of the outlier
factor of the data.
If the estimator was not already :term:`fitted`, calling this method
should raise a :class:`exceptions.NotFittedError`.
``set_params``
Available in any estimator, takes keyword arguments corresponding to
keys in :term:`get_params`. Each is provided a new value to assign
such that calling ``get_params`` after ``set_params`` will reflect the
changed :term:`parameters`. Most estimators use the implementation in
:class:`base.BaseEstimator`, which handles nested parameters and
otherwise sets the parameter as an attribute on the estimator.
The method is overridden in :class:`pipeline.Pipeline` and related
estimators.
``split``
On a :term:`CV splitter` (not an estimator), this method accepts
parameters (:term:`X`, :term:`y`, :term:`groups`), where all may be
optional, and returns an iterator over ``(train_idx, test_idx)``
pairs. Each of {train,test}_idx is a 1d integer array, with values
from 0 to ``X.shape[0] - 1`` of any length, such that no values
appear in both some ``train_idx`` and its corresponding ``test_idx``.
``transform``
In a :term:`transformer`, transforms the input, usually only :term:`X`,
into some transformed space (conventionally notated as :term:`Xt`).
Output is an array or sparse matrix of length :term:`n_samples` and
with the number of columns fixed after :term:`fitting`.
If the estimator was not already :term:`fitted`, calling this method
should raise a :class:`exceptions.NotFittedError`.
.. _glossary_parameters:
Parameters
==========
These common parameter names, specifically used in estimator construction
(see concept :term:`parameter`), sometimes also appear as parameters of
functions or non-estimator constructors.
.. glossary::
``class_weight``
Used to specify sample weights when fitting classifiers as a function
of the :term:`target` class. Where :term:`sample_weight` is also
supported and given, it is multiplied by the ``class_weight``
contribution. Similarly, where ``class_weight`` is used in
:term:`multioutput` (including :term:`multilabel`) tasks, the weights
are multiplied across outputs (i.e. columns of ``y``).
By default, all samples have equal weight such that classes are
effectively weighted by their prevalence in the training data.
This could be achieved explicitly with ``class_weight={label1: 1,
label2: 1, ...}`` for all class labels.
More generally, ``class_weight`` is specified as a dict mapping class
labels to weights (``{class_label: weight}``), such that each sample
of the named class is given that weight.
``class_weight='balanced'`` can be used to give all classes
equal weight by giving each sample a weight inversely related
to its class's prevalence in the training data:
``n_samples / (n_classes * np.bincount(y))``. Class weights will be
used differently depending on the algorithm: for linear models (such
as linear SVM or logistic regression), the class weights will alter the
loss function by weighting the loss of each sample by its class weight.
For tree-based algorithms, the class weights will be used for
reweighting the splitting criterion.
**Note** however that this rebalancing does not take the weight of
samples in each class into account.
For multioutput classification, a list of dicts is used to specify
weights for each output. For example, for four-class multilabel
classification weights should be ``[{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1,
1: 1}, {0: 1, 1: 1}]`` instead of ``[{1:1}, {2:5}, {3:1}, {4:1}]``.
The ``class_weight`` parameter is validated and interpreted with
:func:`utils.class_weight.compute_class_weight`.
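For example, the 'balanced' heuristic:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])             # class 0 is three times as frequent
w = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(w)   # n_samples / (n_classes * np.bincount(y)) per class
```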
``cv``
Determines a cross validation splitting strategy, as used in
cross-validation based routines. ``cv`` is also available in estimators
such as :class:`multioutput.ClassifierChain` or
:class:`calibration.CalibratedClassifierCV` which use the predictions
of one estimator as training data for another, to not overfit the
training supervision.
Possible inputs for ``cv`` are usually:
- An integer, specifying the number of folds in K-fold cross
validation. K-fold will be stratified over classes if the estimator
is a classifier (determined by :func:`base.is_classifier`) and the
:term:`targets` may represent a binary or multiclass (but not
multioutput) classification problem (determined by
:func:`utils.multiclass.type_of_target`).
- A :term:`cross-validation splitter` instance. Refer to the
:ref:`User Guide <cross_validation>` for splitters available
within Scikit-learn.
- An iterable yielding train/test splits.
With some exceptions (especially where not using cross validation at
all is an option), the default is 5-fold.
``cv`` values are validated and interpreted with
:func:`model_selection.check_cv`.
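For instance, an integer ``cv`` expands to a splitter, stratified only for (non-multioutput) classification targets:

```python
from sklearn.model_selection import KFold, StratifiedKFold, check_cv

cv_reg = check_cv(cv=5)                                  # no classifier involved
cv_clf = check_cv(cv=5, y=[0, 1] * 10, classifier=True)  # binary target
print(type(cv_reg).__name__)    # KFold
print(type(cv_clf).__name__)    # StratifiedKFold
```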
``kernel``
Specifies the kernel function to be used by Kernel Method algorithms.
For example, the estimators :class:`svm.SVC` and
:class:`gaussian_process.GaussianProcessClassifier` both have a
``kernel`` parameter that takes the name of the kernel to use as string
or a callable kernel function used to compute the kernel matrix. For
more reference, see the :ref:`kernel_approximation` and the
:ref:`gaussian_process` user guides.
``max_iter``
For estimators involving iterative optimization, this determines the
maximum number of iterations to be performed in :term:`fit`. If
``max_iter`` iterations are run without convergence, a
:class:`exceptions.ConvergenceWarning` should be raised. Note that the
interpretation of "a single iteration" is inconsistent across
estimators: some, but not all, use it to mean a single epoch (i.e. a
pass over every sample in the data).
FIXME perhaps we should have some common tests about the relationship
between ConvergenceWarning and max_iter.
``memory``
Some estimators make use of :class:`joblib.Memory` to
store partial solutions during fitting. Thus when ``fit`` is called
again, those partial solutions have been memoized and can be reused.
A ``memory`` parameter can be specified as a string with a path to a
directory, or a :class:`joblib.Memory` instance (or an object with a
similar interface, i.e. a ``cache`` method) can be used.
``memory`` values are validated and interpreted with
:func:`utils.validation.check_memory`.
``metric``
As a parameter, this is the scheme for determining the distance between
two data points. See :func:`metrics.pairwise_distances`. In practice,
for some algorithms, an improper distance metric (one that does not
obey the triangle inequality, such as Cosine Distance) may be used.
XXX: hierarchical clustering uses ``affinity`` with this meaning.
We also use *metric* to refer to :term:`evaluation metrics`, but avoid
using this sense as a parameter name.
``n_components``
The number of features which a :term:`transformer` should transform the
input into. See :term:`components_` for the special case of affine
projection.
``n_iter_no_change``
Number of iterations with no improvement to wait before stopping the
iterative procedure. This is also known as a *patience* parameter. It
is typically used with :term:`early stopping` to avoid stopping too
early.
``n_jobs``
This parameter is used to specify how many concurrent processes or
threads should be used for routines that are parallelized with
:term:`joblib`.
``n_jobs`` is an integer, specifying the maximum number of concurrently
running workers. If 1 is given, no joblib parallelism is used at all,
which is useful for debugging. If set to -1, all CPUs are used. For
``n_jobs`` below -1, (n_cpus + 1 + n_jobs) are used. For example with
``n_jobs=-2``, all CPUs but one are used.
``n_jobs`` is ``None`` by default, which means *unset*; it will
generally be interpreted as ``n_jobs=1``, unless the current
:class:`joblib.Parallel` backend context specifies otherwise.
Note that even if ``n_jobs=1``, low-level parallelism (via Numpy and OpenMP)
might be used in some configurations.
For more details on the use of ``joblib`` and its interactions with
scikit-learn, please refer to our :ref:`parallelism notes
<parallelism>`.
``pos_label``
Value with which positive labels must be encoded in binary
classification problems in which the positive class is not assumed.
This value is typically required to compute asymmetric evaluation
metrics such as precision and recall.
``random_state``
Whenever randomization is part of a Scikit-learn algorithm, a
``random_state`` parameter may be provided to control the random number
generator used. Note that the mere presence of ``random_state`` doesn't
mean that randomization is always used, as it may be dependent on
another parameter, e.g. ``shuffle``, being set.
The passed value will have an effect on the reproducibility of the
results returned by the function (:term:`fit`, :term:`split`, or any
other function like :func:`~sklearn.cluster.k_means`). `random_state`'s
value may be:
None (default)
Use the global random state instance from :mod:`numpy.random`.
Calling the function multiple times will reuse
the same instance, and will produce different results.
An integer
Use a new random number generator seeded by the given integer.
Using an int will produce the same results across different calls.
However, it may be
worthwhile checking that your results are stable across a
number of distinct random seeds. Popular integer
random seeds are 0 and `42
<https://en.wikipedia.org/wiki/Answer_to_the_Ultimate_Question_of_Life%2C_the_Universe%2C_and_Everything>`_.
Integer values must be in the range `[0, 2**32 - 1]`.
A :class:`numpy.random.RandomState` instance
Use the provided random state, only affecting other users
of that same random state instance. Calling the function
multiple times will reuse the same instance, and
will produce different results.
:func:`utils.check_random_state` is used internally to validate the
input ``random_state`` and return a :class:`~numpy.random.RandomState`
instance.
For more details on how to control the randomness of scikit-learn
objects and avoid common pitfalls, you may refer to :ref:`randomness`.
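The three accepted forms can be demonstrated directly with
:func:`utils.check_random_state` (all names below are real scikit-learn/NumPy
API):

```python
import numpy as np
from sklearn.utils import check_random_state

# An integer seed builds a fresh RandomState; equal seeds reproduce
# the same stream across calls.
rng_a = check_random_state(0)
rng_b = check_random_state(0)
assert rng_a is not rng_b
assert rng_a.rand() == rng_b.rand()

# None returns the global numpy RandomState singleton, so repeated
# calls share (and advance) one stream.
assert check_random_state(None) is np.random.mtrand._rand

# A RandomState instance is passed through unchanged.
rng = np.random.RandomState(42)
assert check_random_state(rng) is rng
```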
``scoring``
Specifies the score function to be maximized (usually by :ref:`cross
validation <cross_validation>`), or -- in some cases -- multiple score
functions to be reported. The score function can be a string accepted
by :func:`metrics.get_scorer` or a callable :term:`scorer`, not to be
confused with an :term:`evaluation metric`, as the latter have a more
diverse API. ``scoring`` may also be set to None, in which case the
estimator's :term:`score` method is used. See :ref:`scoring_parameter`
in the User Guide.
Where multiple metrics can be evaluated, ``scoring`` may be given
either as a list of unique strings, a dictionary with names as keys and
callables as values or a callable that returns a dictionary. Note that
this does *not* specify which score function is to be maximized, and
another parameter such as ``refit`` may be used for this purpose.
The ``scoring`` parameter is validated and interpreted using
:func:`metrics.check_scoring`.
``verbose``
Logging is not handled very consistently in Scikit-learn at present,
but when it is provided as an option, the ``verbose`` parameter is
usually available to choose no logging (set to False). Any True value
should enable some logging, but larger integers (e.g. above 10) may be
needed for full verbosity. Verbose logs are usually printed to
Standard Output.
Estimators should not produce any output on Standard Output with the
default ``verbose`` setting.
``warm_start``
When fitting an estimator repeatedly on the same dataset, but for
multiple parameter values (such as to find the value maximizing
performance as in :ref:`grid search <grid_search>`), it may be possible
to reuse aspects of the model learned from the previous parameter value,
saving time. When ``warm_start`` is true, the existing :term:`fitted`
model :term:`attributes` are used to initialize the new model
in a subsequent call to :term:`fit`.
Note that this is only applicable for some models and some
parameters, and even some orders of parameter values. In general, there
is an interaction between ``warm_start`` and the parameter controlling
the number of iterations of the estimator.
For estimators imported from :mod:`~sklearn.ensemble`,
``warm_start`` will interact with ``n_estimators`` or ``max_iter``.
For these models, the number of iterations, reported via
``len(estimators_)`` or ``n_iter_``, corresponds to the total number of
estimators/iterations learnt since the initialization of the model.
Thus, if a model was already initialized with `N` estimators, and `fit`
is called with ``n_estimators`` or ``max_iter`` set to `M`, the model
will train `M - N` new estimators.
Other models, usually using gradient-based solvers, have a different
behavior. They all expose a ``max_iter`` parameter. The reported
``n_iter_`` corresponds to the number of iterations performed during the last
call to ``fit`` and will be at most ``max_iter``. Thus, we do not
consider the state of the estimator since the initialization.
:term:`partial_fit` also retains the model between calls, but differs:
with ``warm_start`` the parameters change and the data is
(more-or-less) constant across calls to ``fit``; with ``partial_fit``,
the mini-batch of data changes and model parameters stay fixed.
There are cases where you want to use ``warm_start`` to fit on
different, but closely related data. For example, one may initially fit
to a subset of the data, then fine-tune the parameter search on the
full dataset. For classification, all data in a sequence of
``warm_start`` calls to ``fit`` must include samples from each class.
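The ensemble behavior described above can be verified directly: starting from
`N` fitted estimators and refitting with ``n_estimators`` raised to `M` trains
only `M - N` new ones (the estimator is real scikit-learn API; the toy data is
made up):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, random_state=0)

clf = RandomForestClassifier(n_estimators=5, warm_start=True, random_state=0)
clf.fit(X, y)
assert len(clf.estimators_) == 5

# Raising n_estimators and refitting trains only 10 - 5 = 5 new trees;
# the first 5 are kept.
clf.set_params(n_estimators=10)
clf.fit(X, y)
assert len(clf.estimators_) == 10
```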
.. _glossary_attributes:
Attributes
==========
See concept :term:`attribute`.
.. glossary::
``classes_``
A list of class labels known to the :term:`classifier`, mapping each
label to a numerical index used in the model representation or output.
For instance, the array output from :term:`predict_proba` has columns
aligned with ``classes_``. For :term:`multi-output` classifiers,
``classes_`` should be a list of lists, with one class listing for
each output. For each output, the classes should be sorted
(numerically, or lexicographically for strings).
``classes_`` and the mapping to indices is often managed with
:class:`preprocessing.LabelEncoder`.
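A small sketch of the alignment between ``classes_`` and
:term:`predict_proba` columns (standard scikit-learn API; toy data made up):

```python
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]
y = ["b", "a", "b", "a"]

clf = LogisticRegression().fit(X, y)
# String classes are stored sorted lexicographically: ['a' 'b'].
print(clf.classes_)

# predict_proba columns align with classes_: column i holds the
# probability of clf.classes_[i].
proba = clf.predict_proba([[0.5]])
assert proba.shape == (1, len(clf.classes_))
```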
``components_``
An affine transformation matrix of shape ``(n_components, n_features)``
used in many linear :term:`transformers` where :term:`n_components` is
the number of output features and :term:`n_features` is the number of
input features.
See also :term:`coef_` which is a similar attribute for linear
predictors.
``coef_``
The weight/coefficient matrix of a generalized linear model
:term:`predictor`, of shape ``(n_features,)`` for binary classification
and single-output regression, ``(n_classes, n_features)`` for
multiclass classification and ``(n_targets, n_features)`` for
multi-output regression. Note this does not include the intercept
(or bias) term, which is stored in ``intercept_``.
When ``coef_`` is available, ``feature_importances_`` is usually not
provided as well, but it can be calculated as the norm of each feature's
entry in ``coef_``.
See also :term:`components_` which is a similar attribute for linear
transformers.
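The shapes listed above can be checked on the iris dataset (150 samples,
4 features, 3 classes; all estimators below are standard scikit-learn API):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression, LogisticRegression

X, y = load_iris(return_X_y=True)

# Single-output regression: coef_ has shape (n_features,).
reg = LinearRegression().fit(X, y)
assert reg.coef_.shape == (4,)

# Multiclass classification: coef_ has shape (n_classes, n_features),
# with the intercepts stored separately in intercept_.
clf = LogisticRegression(max_iter=1000).fit(X, y)
assert clf.coef_.shape == (3, 4)
assert clf.intercept_.shape == (3,)
```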
``embedding_``
An embedding of the training data in :ref:`manifold learning
<manifold>` estimators, with shape ``(n_samples, n_components)``,
identical to the output of :term:`fit_transform`. See also
:term:`labels_`.
``n_iter_``
The number of iterations actually performed when fitting an iterative
estimator that may stop upon convergence. See also :term:`max_iter`.
``feature_importances_``
A vector of shape ``(n_features,)`` available in some
:term:`predictors` to provide a relative measure of the importance of
each feature in the predictions of the model.
``labels_``
A vector containing a cluster label for each sample of the training
data in :term:`clusterers`, identical to the output of
:term:`fit_predict`. See also :term:`embedding_`.
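A minimal sketch with :class:`cluster.KMeans` on well-separated toy data
(the data values are made up; the attribute and methods are standard
scikit-learn API):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0], [0.1], [10.0], [10.1]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# labels_ holds the cluster assignment of each *training* sample and
# matches the output of fit_predict on the same data.
assert km.labels_.shape == (4,)
assert km.labels_[0] == km.labels_[1]
assert km.labels_[2] == km.labels_[3]
```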
.. _glossary_sample_props:
Data and sample properties
==========================
See concept :term:`sample property`.
.. glossary::
``groups``
Used in cross-validation routines to identify samples that are correlated.
Each value is an identifier such that, in a supporting
:term:`CV splitter`, samples from some ``groups`` value may not
appear in both a training set and its corresponding test set.
See :ref:`group_cv`.
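For example, :class:`model_selection.GroupKFold` keeps each group entirely on
one side of every split (the splitter is real scikit-learn API; the toy data
is made up):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(8).reshape(-1, 1)
y = np.zeros(8)
groups = [1, 1, 2, 2, 3, 3, 4, 4]

# No group value ever appears in both the training and the test set
# of the same split.
for train, test in GroupKFold(n_splits=4).split(X, y, groups):
    train_groups = {groups[i] for i in train}
    test_groups = {groups[i] for i in test}
    assert not (train_groups & test_groups)
```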
``sample_weight``
A relative weight for each sample. Intuitively, if all weights are
integers, a weighted model or score should be equivalent to that
calculated when repeating the sample the number of times specified in
the weight. Weights may be specified as floats, so that sample weights
are usually equivalent up to a constant positive scaling factor.
FIXME Is this interpretation always the case in practice? We have no
common tests.
Some estimators, such as decision trees, support negative weights.
FIXME: This feature or its absence may not be tested or documented in
many estimators.
This is not entirely the case where other parameters of the model
consider the number of samples in a region, as with ``min_samples`` in
:class:`cluster.DBSCAN`. In this case, a count of samples becomes
a sum of their weights.
In classification, sample weights can also be specified as a function
of class with the :term:`class_weight` estimator :term:`parameter`.
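The "integer weight equals repetition" intuition holds exactly for ordinary
least squares, and can be checked directly (standard scikit-learn API; toy
data made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])

# Weighting the middle sample by 2 ...
weighted = LinearRegression().fit(X, y, sample_weight=[1.0, 2.0, 1.0])

# ... is equivalent to repeating that sample once more in the data.
X_rep = np.array([[0.0], [1.0], [1.0], [2.0]])
y_rep = np.array([0.0, 1.0, 1.0, 4.0])
repeated = LinearRegression().fit(X_rep, y_rep)

assert np.allclose(weighted.coef_, repeated.coef_)
assert np.allclose(weighted.intercept_, repeated.intercept_)
```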
``X``
Denotes data that is observed at training and prediction time, used as
independent variables in learning. The notation is uppercase to denote
that it is ordinarily a matrix (see :term:`rectangular`).
When a matrix, each sample may be represented by a :term:`feature`
vector, or a vector of :term:`precomputed` (dis)similarity with each
training sample. ``X`` may also not be a matrix, and may require a
:term:`feature extractor` or a :term:`pairwise metric` to turn it into
one before learning a model.
``Xt``
Shorthand for "transformed :term:`X`".
``y``
``Y``
Denotes data that may be observed at training time as the dependent
variable in learning, but which is unavailable at prediction time, and
is usually the :term:`target` of prediction. The notation may be
uppercase to denote that it is a matrix, representing
:term:`multi-output` targets, for instance; but usually we use ``y``
and sometimes do so even when multiple outputs are assumed. | scikit-learn | currentmodule sklearn glossary Glossary of Common Terms and API Elements This glossary hopes to definitively represent the tacit and explicit conventions applied in Scikit learn and its API while providing a reference for users and contributors It aims to describe the concepts and either detail their corresponding API or link to other relevant parts of the documentation which do so By linking to glossary entries from the API Reference and User Guide we may minimize redundancy and inconsistency We begin by listing general concepts and any that didn t fit elsewhere but more specific sets of related terms are listed below ref glossary estimator types ref glossary target types ref glossary methods ref glossary parameters ref glossary attributes ref glossary sample props General Concepts glossary 1d 1d array One dimensional array A NumPy array whose shape has length 1 A vector 2d 2d array Two dimensional array A NumPy array whose shape has length 2 Often represents a matrix API Refers to both the specific interfaces for estimators implemented in Scikit learn and the generalized conventions across types of estimators as described in this glossary and ref overviewed in the contributor documentation api overview The specific interfaces that constitute Scikit learn s public API are largely documented in ref api ref However we less formally consider anything as public API if none of the identifiers required to access it begins with We generally try to maintain term backwards compatibility for all objects in the public API Private API including functions modules and methods beginning are not assured to be stable array like The most common data format for input to Scikit learn estimators and functions array like is any type object for which func numpy asarray will produce an array of appropriate shape usually 1 or 2 dimensional of appropriate dtype usually numeric This includes a numpy array a list of 
numbers a list of length k lists of numbers for some fixed length k a class pandas DataFrame with all columns numeric a numeric class pandas Series It excludes a term sparse matrix a sparse array an iterator a generator Note that output from scikit learn estimators and functions e g predictions should generally be arrays or sparse matrices or lists thereof as in multi output class tree DecisionTreeClassifier s predict proba An estimator where predict returns a list or a pandas Series is not valid attribute attributes We mostly use attribute to refer to how model information is stored on an estimator during fitting Any public attribute stored on an estimator instance is required to begin with an alphabetic character and end in a single underscore if it is set in term fit or term partial fit These are what is documented under an estimator s Attributes documentation The information stored in attributes is usually either sufficient statistics used for prediction or transformation term transductive outputs such as term labels or term embedding or diagnostic data such as term feature importances Common attributes are listed ref below glossary attributes A public attribute may have the same name as a constructor term parameter with a appended This is used to store a validated or estimated version of the user s input For example class decomposition PCA is constructed with an n components parameter From this together with other parameters and the data PCA estimates the attribute n components Further private attributes used in prediction transformation etc may also be set when fitting These begin with a single underscore and are not assured to be stable for public access A public attribute on an estimator instance that does not end in an underscore should be the stored unmodified value of an init term parameter of the same name Because of this equivalence these are documented under an estimator s Parameters documentation backwards compatibility We generally try to maintain 
backward compatibility i e interfaces and behaviors may be extended but not changed or removed from release to release but this comes with some exceptions Public API only The behavior of objects accessed through private identifiers those beginning may be changed arbitrarily between versions As documented We will generally assume that the users have adhered to the documented parameter types and ranges If the documentation asks for a list and the user gives a tuple we do not assure consistent behavior from version to version Deprecation Behaviors may change following a term deprecation period usually two releases long Warnings are issued using Python s mod warnings module Keyword arguments We may sometimes assume that all optional parameters other than X and y to term fit and similar methods are passed as keyword arguments only and may be positionally reordered Bug fixes and enhancements Bug fixes and less often enhancements may change the behavior of estimators including the predictions of an estimator trained on the same data and term random state When this happens we attempt to note it clearly in the changelog Serialization We make no assurances that pickling an estimator in one version will allow it to be unpickled to an equivalent model in the subsequent version For estimators in the sklearn package we issue a warning when this unpickling is attempted even if it may happen to work See ref persistence limitations func utils estimator checks check estimator We provide limited backwards compatibility assurances for the estimator checks we may add extra requirements on estimators tested with this function usually when these were informally assumed but not formally tested Despite this informal contract with our users the software is provided as is as stated in the license When a release inadvertently introduces changes that are not backward compatible these are known as software regressions callable A function class or an object which implements the call method 
anything that returns True when the argument of callable https docs python org 3 library functions html callable categorical feature A categorical or nominal term feature is one that has a finite set of discrete values across the population of data These are commonly represented as columns of integers or strings Strings will be rejected by most scikit learn estimators and integers will be treated as ordinal or count valued For the use with most estimators categorical variables should be one hot encoded Notable exceptions include tree based models such as random forests and gradient boosting models that often work better and faster with integer coded categorical variables class sklearn preprocessing OrdinalEncoder helps encoding string valued categorical features as ordinal integers and class sklearn preprocessing OneHotEncoder can be used to one hot encode categorical features See also ref preprocessing categorical features and the categorical encoding https github com scikit learn contrib category encoders package for tools related to encoding categorical features clone cloned To copy an term estimator instance and create a new one with identical term parameters but without any fitted term attributes using func sklearn base clone When fit is called a term meta estimator usually clones a wrapped estimator instance before fitting the cloned instance Exceptions for legacy reasons include class pipeline Pipeline and class pipeline FeatureUnion If the estimator s random state parameter is an integer or if the estimator doesn t have a random state parameter an exact clone is returned the clone and the original estimator will give the exact same results Otherwise statistical clone is returned the clone might yield different results from the original estimator More details can be found in ref randomness common tests This refers to the tests run on almost every estimator class in Scikit learn to check they comply with basic API conventions They are available for external 
use through func utils estimator checks check estimator or func utils estimator checks parametrize with checks with most of the implementation in sklearn utils estimator checks py Note Some exceptions to the common testing regime are currently hard coded into the library but we hope to replace this by marking exceptional behaviours on the estimator using semantic term estimator tags cross fitting cross fitting A resampling method that iteratively partitions data into mutually exclusive subsets to fit two stages During the first stage the mutually exclusive subsets enable predictions or transformations to be computed on data not seen during training The computed data is then used in the second stage The objective is to avoid having any overfitting in the first stage introduce bias into the input data distribution of the second stage For examples of its use see class preprocessing TargetEncoder class ensemble StackingClassifier class ensemble StackingRegressor and class calibration CalibratedClassifierCV cross validation cross validation A resampling method that iteratively partitions data into mutually exclusive train and test subsets so model performance can be evaluated on unseen data This conserves data as avoids the need to hold out a validation dataset and accounts for variability as multiple rounds of cross validation are generally performed See ref User Guide cross validation for more details deprecation We use deprecation to slowly violate our term backwards compatibility assurances usually to change the default value of a parameter or remove a parameter attribute method class etc We will ordinarily issue a warning when a deprecated element is used although there may be limitations to this For instance we will raise a warning when someone sets a parameter that has been deprecated but may not when they access that parameter s attribute on the estimator instance See the ref Contributors Guide contributing deprecation dimensionality May be used to refer to the 
number of term features i e term n features or columns in a 2d feature matrix Dimensions are however also used to refer to the length of a NumPy array s shape distinguishing a 1d array from a 2d matrix docstring The embedded documentation for a module class function etc usually in code as a string at the beginning of the object s definition and accessible as the object s doc attribute We try to adhere to PEP257 https www python org dev peps pep 0257 and follow NumpyDoc conventions https numpydoc readthedocs io en latest format html double underscore double underscore notation When specifying parameter names for nested estimators may be used to separate between parent and child in some contexts The most common use is when setting parameters through a meta estimator with term set params and hence in specifying a search grid in ref parameter search grid search See term parameter It is also used in meth pipeline Pipeline fit for passing term sample properties to the fit methods of estimators in the pipeline dtype data type NumPy arrays assume a homogeneous data type throughout available in the dtype attribute of an array or sparse matrix We generally assume simple data types for scikit learn data float or integer We may support object or string data types for arrays before encoding or vectorizing Our estimators do not work with struct arrays for instance Our documentation can sometimes give information about the dtype precision e g np int32 np int64 etc When the precision is provided it refers to the NumPy dtype If an arbitrary precision is used the documentation will refer to dtype integer or floating Note that in this case the precision can be platform dependent The numeric dtype refers to accepting both integer and floating When it comes to choosing between 64 bit dtype i e np float64 and np int64 and 32 bit dtype i e np float32 and np int32 it boils down to a trade off between efficiency and precision The 64 bit types offer more accurate results due to their lower 
floating point error but demand more computational resources resulting in slower operations and increased memory usage In contrast 32 bit types promise enhanced operation speed and reduced memory consumption but introduce a larger floating point error The efficiency improvement are dependent on lower level optimization such as like vectorization single instruction multiple dispatch SIMD or cache optimization but crucially on the compatibility of the algorithm in use Specifically the choice of precision should account for whether the employed algorithm can effectively leverage np float32 Some algorithms especially certain minimization methods are exclusively coded for np float64 meaning that even if np float32 is passed it triggers an automatic conversion back to np float64 This not only negates the intended computational savings but also introduces additional overhead making operations with np float32 unexpectedly slower and more memory intensive due to this extra conversion step duck typing We try to apply duck typing https en wikipedia org wiki Duck typing to determine how to handle some input values e g checking whether a given estimator is a classifier That is we avoid using isinstance where possible and rely on the presence or absence of attributes to determine an object s behaviour Some nuance is required when following this approach For some estimators an attribute may only be available once it is term fitted For instance we cannot a priori determine if term predict proba is available in a grid search where the grid includes alternating between a probabilistic and a non probabilistic predictor in the final step of the pipeline In the following we can only determine if clf is probabilistic after fitting it on some data from sklearn model selection import GridSearchCV from sklearn linear model import SGDClassifier clf GridSearchCV SGDClassifier param grid loss log loss hinge This means that we can only check for duck typed attributes after fitting and that we 
must be careful to make term meta estimators only present attributes according to the state of the underlying estimator after fitting Checking if an attribute is present using hasattr is in general just as expensive as getting the attribute getattr or dot notation In some cases getting the attribute may indeed be expensive e g for some implementations of term feature importances which may suggest this is an API design flaw So code which does hasattr followed by getattr should be avoided getattr within a try except block is preferred For determining some aspects of an estimator s expectations or support for some feature we use term estimator tags instead of duck typing early stopping This consists in stopping an iterative optimization method before the convergence of the training loss to avoid over fitting This is generally done by monitoring the generalization score on a validation set When available it is activated through the parameter early stopping or by setting a positive term n iter no change estimator instance We sometimes use this terminology to distinguish an term estimator class from a constructed instance For example in the following cls is an estimator class while est1 and est2 are instances cls RandomForestClassifier est1 cls est2 RandomForestClassifier examples We try to give examples of basic usage for most functions and classes in the API as doctests in their docstrings i e within the sklearn library code itself as examples in the ref example gallery general examples rendered using sphinx gallery https sphinx gallery readthedocs io from scripts in the examples directory exemplifying key features or parameters of the estimator function These should also be referenced from the User Guide sometimes in the ref User Guide user guide built from doc alongside a technical description of the estimator experimental An experimental tool is already usable but its public API such as default parameter values or fitted attributes is still subject to change in 
future versions without the usual term deprecation warning policy evaluation metric evaluation metrics Evaluation metrics give a measure of how well a model performs We may use this term specifically to refer to the functions in mod sklearn metrics disregarding mod sklearn metrics pairwise as distinct from the term score method and the term scoring API used in cross validation See ref model evaluation These functions usually accept a ground truth or the raw data where the metric evaluates clustering without a ground truth and a prediction be it the output of term predict y pred of term predict proba y proba or of an arbitrary score function including term decision function y score Functions are usually named to end with score if a greater score indicates a better model and loss if a lesser score indicates a better model This diversity of interface motivates the scoring API Note that some estimators can calculate metrics that are not included in mod sklearn metrics and are estimator specific notably model likelihoods estimator tags Estimator tags describe certain capabilities of an estimator This would enable some runtime behaviors based on estimator inspection but it also allows each estimator to be tested for appropriate invariances while being excepted from other term common tests Some aspects of estimator tags are currently determined through the term duck typing of methods like predict proba and through some special attributes on estimator objects For more detailed info see ref estimator tags feature features feature vector In the abstract a feature is a function in its mathematical sense mapping a sampled object to a numeric or categorical quantity Feature is also commonly used to refer to these quantities being the individual elements of a vector representing a sample In a data matrix features are represented as columns each column contains the result of applying a feature function to a set of samples Elsewhere features are known as attributes predictors 
regressors or independent variables Nearly all estimators in scikit learn assume that features are numeric finite and not missing even when they have semantically distinct domains and distributions categorical ordinal count valued real valued interval See also term categorical feature and term missing values n features indicates the number of features in a dataset fitting Calling term fit or term fit transform term fit predict etc on an estimator fitted The state of an estimator after term fitting There is no conventional procedure for checking if an estimator is fitted However an estimator that is not fitted should raise class exceptions NotFittedError when a prediction method term predict term transform etc is called func utils validation check is fitted is used internally for this purpose should not have any term attributes beginning with an alphabetic character and ending with an underscore Note that a descriptor for the attribute may still be present on the class but hasattr should return False function We provide ad hoc function interfaces for many algorithms while term estimator classes provide a more consistent interface In particular Scikit learn may provide a function interface that fits a model to some data and returns the learnt model parameters as in func linear model enet path For transductive models this also returns the embedding or cluster labels as in func manifold spectral embedding or func cluster dbscan Many preprocessing transformers also provide a function interface akin to calling term fit transform as in func preprocessing maxabs scale Users should be careful to avoid term data leakage when making use of these fit transform equivalent functions We do not have a strict policy about when to or when not to provide function forms of estimators but maintainers should consider consistency with existing interfaces and whether providing a function would lead users astray from best practices as regards data leakage etc gallery See term examples 
hyperparameter hyper parameter See term parameter impute imputation Most machine learning algorithms require that their inputs have no term missing values and will not work if this requirement is violated Algorithms that attempt to fill in or impute missing values are referred to as imputation algorithms indexable An term array like term sparse matrix pandas DataFrame or sequence usually a list induction inductive Inductive contrasted with term transductive machine learning builds a model of some data that can then be applied to new instances Most estimators in Scikit learn are inductive having term predict and or term transform methods joblib A Python library https joblib readthedocs io used in Scikit learn to facilite simple parallelism and caching Joblib is oriented towards efficiently working with numpy arrays such as through use of term memory mapping See ref parallelism for more information label indicator matrix multilabel indicator matrix multilabel indicator matrices The format used to represent multilabel data where each row of a 2d array or sparse matrix corresponds to a sample each column corresponds to a class and each element is 1 if the sample is labeled with the class and 0 if not leakage data leakage A problem in cross validation where generalization performance can be over estimated since knowledge of the test data was inadvertently included in training a model This is a risk for instance when applying a term transformer to the entirety of a dataset rather than each training portion in a cross validation split We aim to provide interfaces such as mod sklearn pipeline and mod sklearn model selection that shield the user from data leakage memmapping memory map memory mapping A memory efficiency strategy that keeps data on disk rather than copying it into main memory Memory maps can be created for arrays that can be read written or both using obj numpy memmap When using term joblib to parallelize operations in Scikit learn it may automatically memmap 
large arrays to reduce memory duplication overhead in multiprocessing missing values Most Scikit learn estimators do not work with missing values When they do e g in class impute SimpleImputer NaN is the preferred representation of missing values in float arrays If the array has integer dtype NaN cannot be represented For this reason we support specifying another missing values value when term imputation or learning can be performed in integer space term Unlabeled data unlabeled data is a special case of missing values in the term target n features The number of term features n outputs The number of term outputs in the term target n samples The number of term samples n targets Synonym for term n outputs narrative docs narrative documentation An alias for ref User Guide user guide i e documentation written in doc modules Unlike the ref API reference api ref provided through docstrings the User Guide aims to group tools provided by Scikit learn together thematically or in terms of usage motivate why someone would use each particular tool often through comparison provide both intuitive and technical descriptions of tools provide or link to term examples of using key features of a tool np A shorthand for Numpy due to the conventional import statement import numpy as np online learning Where a model is iteratively updated by receiving each batch of ground truth term targets soon after making predictions on corresponding batch of data Intrinsically the model must be usable for prediction after each batch See term partial fit out of core An efficiency strategy where not all the data is stored in main memory at once usually by performing learning on batches of data See term partial fit outputs Individual scalar categorical variables per sample in the term target For example in multilabel classification each possible label corresponds to a binary output Also called responses tasks or targets See term multiclass multioutput and term continuous multioutput pair A tuple of 
length two.

parameter, parameters, param, params
    We mostly use "parameter" to refer to the aspects of an estimator that can be specified in its construction. For example, max_depth and random_state are parameters of ensemble.RandomForestClassifier. Parameters to an estimator's constructor are stored unmodified as attributes on the estimator instance, and conventionally start with an alphabetic character and end with an alphanumeric character. Each estimator's constructor parameters are described in the estimator's docstring.

    We do not use "parameters" in the statistical sense, where parameters are values that specify a model and can be estimated from data. What we call parameters might be what statisticians call hyperparameters to the model: aspects for configuring model structure that are often not directly learnt from data. However, our parameters are also used to prescribe modeling operations that do not affect the learnt model, such as n_jobs for controlling parallelism.

    When talking about the parameters of a meta-estimator, we may also be including the parameters of the estimators wrapped by the meta-estimator. Ordinarily, these nested parameters are denoted by using a double underscore to separate the estimator-as-parameter from its parameter. Thus clf = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=3)) has a deep parameter estimator__max_depth with value 3, which is accessible with clf.estimator.max_depth or clf.get_params()['estimator__max_depth'].

    The list of parameters and their current values can be retrieved from an estimator instance using its get_params method. Between construction and fitting, parameters may be modified using set_params. To enable this, parameters are not ordinarily validated or altered when the estimator is constructed, or when each parameter is set. Parameter validation is performed when fit is called.

    Common parameters are listed below under Parameters.

pairwise metric, pairwise metrics
    In its broad sense, a pairwise metric defines a function for measuring similarity or dissimilarity between two samples (with each ordinarily represented as a feature vector). We particularly provide implementations of distance metrics (as well as improper metrics like Cosine Distance) through metrics.pairwise_distances, and of kernel functions (a constrained class of similarity functions) in metrics.pairwise.pairwise_kernels. These can compute pairwise distance matrices that are symmetric and hence store data redundantly.

    See also precomputed and metric.

    Note that for most distance metrics, we rely on implementations from scipy.spatial.distance, but may reimplement for efficiency in our context. The metrics.DistanceMetric interface is used to implement distance metrics for integration with efficient neighbors search.

pd
    A shorthand for Pandas (https://pandas.pydata.org), due to the conventional import statement: import pandas as pd

precomputed
    Where algorithms rely on pairwise metrics, and can be computed from pairwise metrics alone, we often allow the user to specify that the X provided is already in the pairwise (dis)similarity space, rather than in a feature space. That is, when passed to fit, it is a square, symmetric matrix, with each vector indicating (dis)similarity to every sample; and when passed to prediction/transformation methods, each row corresponds to a testing sample and each column to a training sample.

    Use of precomputed X is usually indicated by setting a metric, affinity or kernel parameter to the string 'precomputed'. If this is the case, then the estimator should set the pairwise estimator tag as True.

rectangular
    Data that can be represented as a matrix, with samples on the first axis and a fixed, finite set of features on the second, is called rectangular. This term excludes samples with non-vectorial structures, such as text, an image of arbitrary size, a time series of arbitrary length, a set of vectors, etc. The purpose of a vectorizer is to
produce rectangular forms of such data.

sample, samples
    We usually use this term as a noun to indicate a single feature vector. Elsewhere a sample is called an instance, data point, or observation. n_samples indicates the number of samples in a dataset, being the number of rows in a data array X. Note that this definition is standard in machine learning and deviates from statistics, where it means a set of individuals or objects collected or selected.

sample property, sample properties
    A sample property is data for each sample (e.g. an array of length n_samples), passed to an estimator method or a similar function, alongside but distinct from the features (X) and target (y). The most prominent example is sample_weight; see others in the section on sample properties.

    As of version 0.19 we do not have a consistent approach to handling sample properties and their routing in meta-estimators, though a fit_params parameter is often used.

scikit-learn-contrib
    A venue for publishing Scikit-learn-compatible libraries that are broadly authorized by the core developers and the contrib community, but not maintained by the core developer team. See https://scikit-learn-contrib.github.io.

scikit-learn enhancement proposals, SLEP, SLEPs
    Changes to the API principles, and changes to dependencies or supported versions, happen via a SLEP and follow the decision-making process outlined in the governance document. For all votes, a proposal must have been made public and discussed before the vote. Such a proposal must be a consolidated document, in the form of a "Scikit-Learn Enhancement Proposal" (SLEP), rather than a long discussion on an issue. A SLEP must be submitted as a pull request to enhancement proposals (https://scikit-learn-enhancement-proposals.readthedocs.io) using the SLEP template (https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep_template.html).

semi-supervised, semi-supervised learning, semisupervised
    Learning where the expected prediction (label or ground truth) is only available for some samples provided as training data when fitting the model. We conventionally apply the label -1 to unlabeled samples in semi-supervised classification.

sparse matrix, sparse graph
    A representation of two-dimensional numeric data that is more memory efficient than the corresponding dense numpy array where almost all elements are zero. We use the scipy.sparse framework, which provides several underlying sparse data representations, or formats. Some formats are more efficient than others for particular tasks, and when a particular format provides particular benefit, we try to document this fact in Scikit-learn parameter descriptions.

    Some sparse matrix formats (notably CSR, CSC, COO and LIL) distinguish between implicit and explicit zeros. Explicit zeros are stored (i.e. they consume memory in a data array) in the data structure, while implicit zeros correspond to every element not otherwise defined in explicit storage.

    Two semantics for sparse matrices are used in Scikit-learn:

    matrix semantics
        The sparse matrix is interpreted as an array with implicit and explicit zeros being interpreted as the number 0. This is the interpretation most often adopted, e.g. when sparse matrices are used for feature matrices or multilabel indicator matrices.

    graph semantics
        As with scipy.sparse.csgraph, explicit zeros are interpreted as the number 0, but implicit zeros indicate a masked or absent value, such as the absence of an edge between two vertices of a graph, where an explicit value indicates an edge's weight. This interpretation is adopted to represent connectivity in clustering, in representations of nearest neighborhoods (e.g. neighbors.kneighbors_graph), and for precomputed distance representation where only distances in the neighborhood of each point are required.

    When working with sparse matrices, we assume that they are sparse for a good reason, and avoid writing code that densifies a user-provided sparse matrix, instead maintaining sparsity or raising an error if not possible (i.e. if
an estimator does not or cannot support sparse matrices).

stateless
    An estimator is stateless if it does not store any information that is obtained during fit. This information can be either parameters learned during fit or statistics computed from the training data. An estimator is stateless if it has no attributes apart from ones set in __init__. Calling fit for these estimators will only validate the public attributes passed in __init__.

supervised, supervised learning
    Learning where the expected prediction (label or ground truth) is available for each sample when fitting the model, provided as y. This is the approach taken in a classifier or regressor, among other estimators.

target, targets
    The dependent variable in supervised (and semisupervised) learning, passed as y to an estimator's fit method. Also known as dependent variable, outcome variable, response variable, ground truth or label. Scikit-learn works with targets that have minimal structure: a class from a finite set, a finite real-valued number, multiple classes, or multiple numbers. See the Target Types section.

transduction, transductive
    A transductive (contrasted with inductive) machine learning method is designed to model a specific dataset, but not to apply that model to unseen data. Examples include manifold.TSNE, cluster.AgglomerativeClustering and neighbors.LocalOutlierFactor.

unlabeled, unlabeled data
    Samples with an unknown ground truth when fitting; equivalently, missing values in the target. See also semisupervised and unsupervised learning.

unsupervised, unsupervised learning
    Learning where the expected prediction (label or ground truth) is not available for each sample when fitting the model, as in clusterers and outlier detectors. Unsupervised estimators ignore any y passed to fit.

Class APIs and Estimator Types

classifier, classifiers
    A supervised or semi-supervised predictor with a finite set of discrete possible output values. A classifier supports modeling some of binary, multiclass, multilabel, or multiclass multioutput targets. Within scikit-learn, all classifiers support multi-class classification, defaulting to using a one-vs-rest strategy over the binary classification problem.

    Classifiers must store a classes_ attribute after fitting, and inherit from base.ClassifierMixin, which sets their corresponding estimator tags correctly. A classifier can be distinguished from other estimators with base.is_classifier.

    A classifier must implement: fit, predict, score. It may also be appropriate to implement decision_function, predict_proba and predict_log_proba.

clusterer, clusterers
    An unsupervised predictor with a finite set of discrete output values. A clusterer usually stores labels_ after fitting, and must do so if it is transductive. A clusterer must implement: fit; fit_predict if transductive; predict if inductive.

density estimator
    An unsupervised estimation of an input probability density function. Commonly used techniques are: kernel density estimation, which uses a kernel function, controlled by the bandwidth parameter, to represent density; and Gaussian mixtures, which use mixtures of Gaussian models to represent density.

estimator, estimators
    An object which manages the estimation and decoding of a model. The model is estimated as a deterministic function of: parameters provided in object construction or with set_params; the global numpy.random random state, if the estimator's random_state parameter is set to None; and any data or sample properties passed to the most recent call to fit, fit_transform or fit_predict, or data similarly passed in a sequence of calls to partial_fit.

    The estimated model is stored in public and private attributes on the estimator instance, facilitating decoding through prediction and transformation methods.

    Estimators must provide a fit method, and should provide set_params and get_params, although these are usually provided by inheritance from base.BaseEstimator.

    The core functionality of some estimators may also be available as a function.

feature extractor, feature extractors
    A transformer which takes input where each sample is not represented as an array-like object of fixed length, and produces an array-like object of features for each sample (and thus a 2-dimensional array-like for a set of samples). In other words, it (lossily) maps a non-rectangular data representation into rectangular data. Feature extractors must implement at least: fit, transform, get_feature_names_out.

meta-estimator, meta-estimators, metaestimator, metaestimators
    An estimator which takes another estimator as a parameter. Examples include pipeline.Pipeline, model_selection.GridSearchCV, feature_selection.SelectFromModel and ensemble.BaggingClassifier.

    In a meta-estimator's fit method, any contained estimators should be cloned before they are fit (although FIXME: Pipeline and FeatureUnion do not do this currently). An exception to this is that an estimator may explicitly document that it accepts a pre-fitted estimator (e.g. using prefit=True in feature_selection.SelectFromModel). One known issue with this is that the pre-fitted estimator will lose its model if the meta-estimator is cloned. A meta-estimator should have fit called before prediction, even if all contained estimators are pre-fitted.

    In cases where a meta-estimator's primary behaviors (e.g. its predict or transform implementation) are functions of prediction/transformation methods of the provided base estimator (or multiple base estimators), a meta-estimator should provide at least the standard methods provided by the base estimator. It may not be possible to identify which methods are provided by the underlying
estimator until the meta-estimator has been fitted (see also duck typing), for which utils.metaestimators.available_if may help. It should also provide (or modify) the estimator tags and classes_ attribute provided by the base estimator.

    Meta-estimators should be careful to validate data as minimally as possible before passing it to an underlying estimator. This saves computation time, and may, for instance, allow the underlying estimator to easily work with data that is not rectangular.

outlier detector, outlier detectors
    An unsupervised binary predictor which models the distinction between core and outlying samples.

    Outlier detectors must implement: fit; fit_predict if transductive; predict if inductive.

    Inductive outlier detectors may also implement decision_function to give a normalized inlier score where outliers have a score below 0. score_samples may provide an unnormalized score per sample.

predictor, predictors
    An estimator supporting predict and/or fit_predict. This encompasses classifier, regressor, outlier detector and clusterer. In statistics, "predictors" refers to features.

regressor, regressors
    A supervised (or semi-supervised) predictor with continuous output values. Regressors inherit from base.RegressorMixin, which sets their estimator tags correctly. A regressor can be distinguished from other estimators with base.is_regressor. A regressor must implement: fit, predict, score.

transformer, transformers
    An estimator supporting transform and/or fit_transform. A purely transductive transformer, such as manifold.TSNE, may not implement transform.

vectorizer, vectorizers
    See feature extractor.

There are further APIs specifically related to a small family of estimators, such as:

cross-validation splitter, CV splitter, cross-validation generator
    A non-estimator family of classes used to split a dataset into a sequence of train and test portions (see the cross-validation section), by providing split and get_n_splits methods. Note that unlike estimators, these do not have fit methods and do not provide set_params or get_params. Parameter validation may be performed in __init__.

cross-validation estimator
    An estimator that has built-in cross-validation capabilities to automatically select the best hyper-parameters (see the User Guide). Some examples of cross-validation estimators are linear_model.ElasticNetCV and linear_model.LogisticRegressionCV. Cross-validation estimators are named EstimatorCV and tend to be roughly equivalent to GridSearchCV(Estimator(), ...). The advantage of using a cross-validation estimator over the canonical estimator class along with grid search is that they can take advantage of warm starting, by reusing precomputed results from the previous steps of the cross-validation process. This generally leads to speed improvements. An exception is the linear_model.RidgeCV class, which can instead perform efficient Leave-One-Out (LOO) CV. By default, all these estimators, apart from linear_model.RidgeCV with an LOO-CV, will be refitted on the full training dataset after finding the best combination of hyper-parameters.

scorer
    A non-estimator callable object which evaluates an estimator on given test data, returning a number. Unlike evaluation metrics, a greater returned number must correspond with a better score. See the scoring parameter documentation.

Further examples: metrics.DistanceMetric, gaussian_process.kernels.Kernel, tree.Criterion.

Metadata Routing

consumer
    An object which consumes metadata. This object is usually an estimator, a scorer, or a CV splitter. Consuming metadata means using it in calculations, e.g. using sample_weight to calculate a certain type of score. Being a consumer doesn't mean that the
object always receives certain metadata; rather, it means it can use it if it is provided.

metadata
    Data which is related to the given X and y data, but is not directly a part of the data, e.g. sample_weight or groups, and is passed along to different objects and methods, e.g. to a scorer or a CV splitter.

router
    An object which routes metadata to consumers. This object is usually a meta-estimator, e.g. pipeline.Pipeline or model_selection.GridSearchCV. Some routers can also be a consumer. This happens, for example, when a meta-estimator uses the given groups, and also passes it along to some of its sub-objects, such as a CV splitter.

Please refer to the Metadata Routing User Guide for more information.

Target Types

binary
    A classification problem consisting of two classes. A binary target may be represented as for a multiclass problem, but with only two labels. A binary decision function is represented as a 1d array.

    Semantically, one class is often considered the "positive" class. Unless otherwise specified (e.g. using pos_label in evaluation metrics), we consider the class label with the greater value (numerically or lexicographically) as the positive class: of labels [0, 1], 1 is the positive class; of [1, 2], 2 is the positive class; of ['no', 'yes'], 'yes' is the positive class; of ['no', 'YES'], 'no' is the positive class. This affects the output of decision_function, for instance.

    Note that a dataset sampled from a multiclass y or a continuous y may appear to be binary.

    utils.multiclass.type_of_target will return 'binary' for binary input, or for a similar array with only a single class present.

continuous
    A regression problem where each sample's target is a finite floating point number, represented as a 1-dimensional array of floats (or sometimes ints).

    utils.multiclass.type_of_target will return 'continuous' for continuous input, but if the data is all integers, it will be identified as 'multiclass'.

continuous multioutput, continuous multi-output, multioutput continuous, multi-output continuous
    A regression problem where each sample's target consists of n_outputs outputs, each one a finite floating point number, for a fixed int n_outputs > 1 in a particular dataset.

    Continuous multioutput targets are represented as multiple continuous targets, horizontally stacked into an array of shape (n_samples, n_outputs).

    utils.multiclass.type_of_target will return 'continuous-multioutput' for continuous multioutput input, but if the data is all integers, it will be identified as 'multiclass-multioutput'.

multiclass, multi-class
    A classification problem consisting of more than two classes. A multiclass target may be represented as a 1-dimensional array of strings or integers. A 2d column vector of integers (i.e. a single output in multioutput terms) is also accepted.

    We do not officially support other orderable, hashable objects as class labels, even if estimators may happen to work when given classification targets of such type.

    For semi-supervised classification, unlabeled samples should have the special label -1 in y.

    Within scikit-learn, all estimators supporting binary classification also support multiclass classification, using One-vs-Rest by default.

    preprocessing.LabelEncoder helps to canonicalize multiclass targets as integers.

    utils.multiclass.type_of_target will return 'multiclass' for multiclass input. The user may also want to handle 'binary' input identically to 'multiclass'.

multiclass multioutput, multi-class multi-output, multioutput multiclass, multi-output multi-class
    A classification problem where each sample's target consists of n_outputs outputs, each a class label, for a fixed int n_outputs > 1 in a particular dataset. Each output has a fixed set of available classes, and each sample is labeled with a class for each output. An output may be binary or multiclass, and in the case where all outputs are binary, the target is multilabel. Multiclass
multioutput targets are represented as multiple multiclass targets, horizontally stacked into an array of shape (n_samples, n_outputs).

    XXX: For simplicity, we may not always support string class labels for multiclass multioutput, and integer class labels should be used.

    sklearn.multioutput provides estimators which estimate multi-output problems using multiple single-output estimators. This may not fully account for dependencies among the different outputs, which methods natively handling the multioutput case (e.g. decision trees, nearest neighbors, neural networks) may do better.

    utils.multiclass.type_of_target will return 'multiclass-multioutput' for multiclass multioutput input.

multilabel, multi-label
    A multiclass multioutput target where each output is binary. This may be represented as a 2d (dense) array or sparse matrix of integers, such that each column is a separate binary target, where positive labels are indicated with 1 and negative labels are usually -1 or 0. Sparse multilabel targets are not supported everywhere that dense multilabel targets are supported.

    Semantically, a multilabel target can be thought of as a set of labels for each sample. While not used internally, preprocessing.MultiLabelBinarizer is provided as a utility to convert from a list-of-sets representation to a 2d array or sparse matrix. One-hot encoding a multiclass target with preprocessing.LabelBinarizer turns it into a multilabel problem.

    utils.multiclass.type_of_target will return 'multilabel-indicator' for multilabel input, whether sparse or dense.

multioutput, multi-output
    A target where each sample has multiple classification/regression labels. See multiclass multioutput and continuous multioutput. We do not currently support modelling mixed classification and regression targets.

Methods

decision_function
    In a fitted classifier or outlier detector, predicts a "soft" score for each sample in relation to each class, rather than the "hard" categorical prediction produced by predict. Its input is usually only some observed data, X.

    If the estimator was not already fitted, calling this method should raise an exceptions.NotFittedError.

    Output conventions:

    binary classification
        A 1-dimensional array, where values strictly greater than zero indicate the positive class (i.e. the last class in classes_).

    multiclass classification
        A 2-dimensional array, where the row-wise arg-maximum is the predicted class. Columns are ordered according to classes_.

    multilabel classification
        Scikit-learn is inconsistent in its representation of multilabel decision functions. It may be represented one of two ways: a list of 2d arrays, each array of shape (n_samples, 2), like in multiclass multioutput (the list is of length n_labels); or a single 2d array of shape (n_samples, n_labels), with each column in the array corresponding to the individual binary classification decisions. The latter is identical to the multiclass classification format, though its semantics differ: it should be interpreted, like in the binary case, by thresholding at 0.

    multioutput classification
        A list of 2d arrays, corresponding to each multiclass decision function.

    outlier detection
        A 1-dimensional array, where a value greater than or equal to zero indicates an inlier.

fit
    The fit method is provided on every estimator. It usually takes some samples X, targets y if the model is supervised, and potentially other sample properties such as sample_weight. It should: clear any prior attributes stored on the estimator, unless warm_start is used; validate and interpret any parameters, ideally raising an error if invalid; validate the input data; estimate and store model attributes from the estimated parameters and provided data; and return the now-fitted estimator to facilitate method chaining.

    The Target Types section describes possible formats for y.

fit_predict
    Used especially for unsupervised, transductive estimators, this fits the model
and returns the predictions (similar to predict) on the training data. In clusterers, these predictions are also stored in the labels_ attribute, and the output of .fit_predict(X) is usually equivalent to .fit(X).predict(X). The parameters to fit_predict are the same as those to fit.

fit_transform
    A method on transformers which fits the estimator and returns the transformed training data. It takes parameters as in fit, and its output should have the same shape as calling .fit(X, ...).transform(X). There are nonetheless rare cases where .fit_transform(X, ...) and .fit(X, ...).transform(X) do not return the same value, wherein training data needs to be handled differently (due to model blending in stacked ensembles, for instance; such cases should be clearly documented). Transductive transformers may also provide fit_transform but not transform.

    One reason to implement fit_transform is that performing fit and transform separately would be less efficient than together. base.TransformerMixin provides a default implementation, providing a consistent interface across transformers where fit_transform is or is not specialized.

    In inductive learning, where the goal is to learn a generalized model that can be applied to new data, users should be careful not to apply fit_transform to the entirety of a dataset (i.e. training and test data together) before further modelling, as this results in data leakage.

get_feature_names_out
    Primarily for feature extractors, but also used for other transformers, to provide string names for each column in the output of the estimator's transform method. It outputs an array of strings and may take an array-like of strings as input, corresponding to the names of input columns from which output column names can be generated. If input_features is not passed in, then the feature_names_in_ attribute will be used. If the feature_names_in_ attribute is not defined, then the input names are named [x0, x1, ..., x(n_features_in_ - 1)].

get_n_splits
    On a CV splitter (not an estimator), returns the number of elements one would get if iterating through the return value of split, given the same parameters. Takes the same parameters as split.

get_params
    Gets all parameters, and their values, that can be set using set_params. A parameter deep can be used, when set to False, to only return those parameters not including __, i.e. not due to indirection via contained estimators.

    Most estimators adopt the definition from base.BaseEstimator, which simply adopts the parameters defined for __init__. pipeline.Pipeline, among others, reimplements get_params to declare the estimators named in its steps parameters as themselves being parameters.

partial_fit
    Facilitates fitting an estimator in an online fashion. Unlike fit, repeatedly calling partial_fit does not clear the model, but updates it with the data provided. The portion of data provided to partial_fit may be called a mini-batch. Each mini-batch must be of consistent shape, etc. In iterative estimators, partial_fit often only performs a single iteration.

    partial_fit may also be used for out-of-core learning, although it is usually limited to the case where learning can be performed online, i.e. the model is usable after each partial_fit and there is no separate processing needed to finalize the model. cluster.Birch introduces the convention that calling partial_fit(X) will produce a model that is not finalized, but the model can be finalized by calling partial_fit(), i.e. without passing a further mini-batch.

    Generally, estimator parameters should not be modified between calls to partial_fit, although partial_fit should validate them as well as the new mini-batch of data. In contrast, warm_start is used to repeatedly fit the same estimator with the same data, but varying parameters.

    Like fit, partial_fit should return the estimator object.

    To clear the model, a new estimator should be constructed, for instance with base.clone.

    NOTE: Using partial_fit after fit results in undefined behavior.
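The fit/partial_fit contract described above can be illustrated with a small, hypothetical estimator-like class (a toy sketch, not part of scikit-learn): fit clears any prior state before learning, while repeated partial_fit calls only update it, and both return the estimator to allow method chaining.

```python
class RunningMean:
    """Toy object illustrating the partial_fit contract:
    fit clears prior state, while repeated partial_fit calls update it."""

    def fit(self, X):
        # fit clears any prior learned state before (re)learning
        self.n_, self.sum_ = 0, 0.0
        return self.partial_fit(X)

    def partial_fit(self, X):
        # the first call must initialise state; later calls only update it
        if not hasattr(self, "n_"):
            self.n_, self.sum_ = 0, 0.0
        self.n_ += len(X)
        self.sum_ += float(sum(X))
        return self  # like fit, partial_fit returns the estimator

    @property
    def mean_(self):
        return self.sum_ / self.n_


est = RunningMean()
est.partial_fit([1.0, 2.0]).partial_fit([3.0, 4.0])
print(est.mean_)              # state accumulated over both mini-batches: 2.5
print(est.fit([10.0]).mean_)  # fit cleared the model: 10.0
```

A real online estimator such as linear_model.SGDClassifier follows the same pattern, additionally requiring the full set of classes on the first partial_fit call.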
predict
    Makes a prediction for each sample, usually only taking X as input (but see the regressor output conventions below). In a classifier or regressor, this prediction is in the same target space used in fitting (e.g. one of {'red', 'amber', 'green'} if the y in fitting consisted of these strings). Despite this, even when the y passed to fit is a list or other array-like, the output of predict should always be an array or sparse matrix. In a clusterer or outlier detector, the prediction is an integer.

    If the estimator was not already fitted, calling this method should raise an exceptions.NotFittedError.

    Output conventions:

    classifier
        An array of shape (n_samples,) or (n_samples, n_outputs). Multilabel data may be represented as a sparse matrix if a sparse matrix was used in fitting. Each element should be one of the values in the classifier's classes_ attribute.

    clusterer
        An array of shape (n_samples,) where each value is from 0 to n_clusters - 1 if the corresponding sample is clustered, and -1 if the sample is not clustered, as in cluster.dbscan.

    outlier detector
        An array of shape (n_samples,) where each value is -1 for an outlier and 1 otherwise.

    regressor
        A numeric array of shape (n_samples,), usually float64. Some regressors have extra options in their predict method, allowing them to return the standard deviation (return_std=True) or covariance (return_cov=True) relative to the predicted value. In this case, the return value is a tuple of arrays corresponding to (prediction mean, std, cov) as required.

predict_log_proba
    The natural logarithm of the output of predict_proba, provided to facilitate numerical stability.

predict_proba
    A method in classifiers and clusterers that can return probability estimates for each class/cluster. Its input is usually only some observed data, X.

    If the estimator was not already fitted, calling this method should raise an exceptions.NotFittedError.

    Output conventions are like those for decision_function, except in the binary classification case, where one column is output for each class (while decision_function outputs a 1d array). For binary and multiclass predictions, each row should add to 1.

    Like other methods, predict_proba should only be present when the estimator can make probabilistic predictions (see duck typing). This means that the presence of the method may depend on estimator parameters (e.g. in linear_model.SGDClassifier) or training data (e.g. in model_selection.GridSearchCV), and may only appear after fitting.

score
    A method on an estimator, usually a predictor, which evaluates its predictions on a given dataset and returns a single numerical score. A greater return value should indicate better predictions; accuracy is used for classifiers and R^2 for regressors by default.

    If the estimator was not already fitted, calling this method should raise an exceptions.NotFittedError.

    Some estimators implement a custom, estimator-specific score function, often the likelihood of the data under the model.

score_samples
    A method that returns a score for each given sample. The exact definition of "score" varies from one class to another. In the case of density estimation, it can be the log density model on the data, and in the case of outlier detection, it can be the opposite of the outlier factor of the data.

    If the estimator was not already fitted, calling this method should raise an exceptions.NotFittedError.

set_params
    Available in any estimator, this takes keyword arguments corresponding to keys in get_params. Each is provided a new value to assign, such that calling get_params after set_params will reflect the changed parameters. Most estimators use the implementation in base.BaseEstimator, which handles nested parameters and otherwise sets the parameter as an attribute on the estimator. The method is overridden in pipeline.Pipeline and related estimators.

split
    On a CV splitter (not an estimator), this method accepts parameters X, y and groups, where all may be optional, and returns an iterator over (train_idx, test_idx) pairs. Each of train_idx and test_idx is a 1d integer array, with values from 0 to X.shape[0] - 1, of any length, such that no values appear in both some train_idx and its corresponding test_idx.

transform
    In a transformer, transforms the input, usually only X, into some transformed space (conventionally notated as Xt). Output is an array or sparse matrix of length n_samples, with the number of columns fixed after fitting.

    If the estimator was not already fitted, calling this method should raise an exceptions.NotFittedError.

Parameters

These common parameter names, specifically used in estimator construction (see the concept parameter), sometimes also appear as parameters of functions or non-estimator constructors.

class_weight
    Used to specify sample weights when fitting classifiers, as a function of the target class. Where sample_weight is also supported and given, it is multiplied by the class_weight contribution. Similarly, where class_weight is used in a multioutput (including multilabel) task, the weights are multiplied across outputs (i.e. columns of y).

    By default, all samples have equal weight, such that classes are effectively weighted by their prevalence in the training data. This could be achieved explicitly with class_weight={label1: 1, label2: 1, ...} for all class labels.

    More generally, class_weight is specified as a dict mapping class labels to weights ({class_label: weight}), such that each sample of the named class is given that weight.

    class_weight='balanced' can be used to give all classes equal weight by giving each sample a weight inversely related to its class's prevalence in the training data: n_samples / (n_classes * np.bincount(y)).

    Class weights will be used differently depending on the algorithm: for linear models (such as linear SVM or logistic regression), the class weights will alter the loss function by weighting the loss of each sample.
by its class weight For tree based algorithms the class weights will be used for reweighting the splitting criterion Note however that this rebalancing does not take the weight of samples in each class into account For multioutput classification a list of dicts is used to specify weights for each output For example for four class multilabel classification weights should be 0 1 1 1 0 1 1 5 0 1 1 1 0 1 1 1 instead of 1 1 2 5 3 1 4 1 The class weight parameter is validated and interpreted with func utils class weight compute class weight cv Determines a cross validation splitting strategy as used in cross validation based routines cv is also available in estimators such as class multioutput ClassifierChain or class calibration CalibratedClassifierCV which use the predictions of one estimator as training data for another to not overfit the training supervision Possible inputs for cv are usually An integer specifying the number of folds in K fold cross validation K fold will be stratified over classes if the estimator is a classifier determined by func base is classifier and the term targets may represent a binary or multiclass but not multioutput classification problem determined by func utils multiclass type of target A term cross validation splitter instance Refer to the ref User Guide cross validation for splitters available within Scikit learn An iterable yielding train test splits With some exceptions especially where not using cross validation at all is an option the default is 5 fold cv values are validated and interpreted with func model selection check cv kernel Specifies the kernel function to be used by Kernel Method algorithms For example the estimators class svm SVC and class gaussian process GaussianProcessClassifier both have a kernel parameter that takes the name of the kernel to use as string or a callable kernel function used to compute the kernel matrix For more reference see the ref kernel approximation and the ref gaussian process user guides max 
iter For estimators involving iterative optimization this determines the maximum number of iterations to be performed in term fit If max iter iterations are run without convergence a class exceptions ConvergenceWarning should be raised Note that the interpretation of a single iteration is inconsistent across estimators some but not all use it to mean a single epoch i e a pass over every sample in the data FIXME perhaps we should have some common tests about the relationship between ConvergenceWarning and max iter memory Some estimators make use of class joblib Memory to store partial solutions during fitting Thus when fit is called again those partial solutions have been memoized and can be reused A memory parameter can be specified as a string with a path to a directory or a class joblib Memory instance or an object with a similar interface i e a cache method can be used memory values are validated and interpreted with func utils validation check memory metric As a parameter this is the scheme for determining the distance between two data points See func metrics pairwise distances In practice for some algorithms an improper distance metric one that does not obey the triangle inequality such as Cosine Distance may be used XXX hierarchical clustering uses affinity with this meaning We also use metric to refer to term evaluation metrics but avoid using this sense as a parameter name n components The number of features which a term transformer should transform the input into See term components for the special case of affine projection n iter no change Number of iterations with no improvement to wait before stopping the iterative procedure This is also known as a patience parameter It is typically used with term early stopping to avoid stopping too early n jobs This parameter is used to specify how many concurrent processes or threads should be used for routines that are parallelized with term joblib n jobs is an integer specifying the maximum number of concurrently 
running workers If 1 is given no joblib parallelism is used at all which is useful for debugging If set to 1 all CPUs are used For n jobs below 1 n cpus 1 n jobs are used For example with n jobs 2 all CPUs but one are used n jobs is None by default which means unset it will generally be interpreted as n jobs 1 unless the current class joblib Parallel backend context specifies otherwise Note that even if n jobs 1 low level parallelism via Numpy and OpenMP might be used in some configuration For more details on the use of joblib and its interactions with scikit learn please refer to our ref parallelism notes parallelism pos label Value with which positive labels must be encoded in binary classification problems in which the positive class is not assumed This value is typically required to compute asymmetric evaluation metrics such as precision and recall random state Whenever randomization is part of a Scikit learn algorithm a random state parameter may be provided to control the random number generator used Note that the mere presence of random state doesn t mean that randomization is always used as it may be dependent on another parameter e g shuffle being set The passed value will have an effect on the reproducibility of the results returned by the function term fit term split or any other function like func sklearn cluster k means random state s value may be None default Use the global random state instance from mod numpy random Calling the function multiple times will reuse the same instance and will produce different results An integer Use a new random number generator seeded by the given integer Using an int will produce the same results across different calls However it may be worthwhile checking that your results are stable across a number of different distinct random seeds Popular integer random seeds are 0 and 42 https en wikipedia org wiki Answer to the Ultimate Question of Life 2C the Universe 2C and Everything Integer values must be in the range 0 2 32 
1 A class numpy random RandomState instance Use the provided random state only affecting other users of that same random state instance Calling the function multiple times will reuse the same instance and will produce different results func utils check random state is used internally to validate the input random state and return a class numpy random RandomState instance For more details on how to control the randomness of scikit learn objects and avoid common pitfalls you may refer to ref randomness scoring Specifies the score function to be maximized usually by ref cross validation cross validation or in some cases multiple score functions to be reported The score function can be a string accepted by func metrics get scorer or a callable term scorer not to be confused with an term evaluation metric as the latter have a more diverse API scoring may also be set to None in which case the estimator s term score method is used See ref scoring parameter in the User Guide Where multiple metrics can be evaluated scoring may be given either as a list of unique strings a dictionary with names as keys and callables as values or a callable that returns a dictionary Note that this does not specify which score function is to be maximized and another parameter such as refit maybe used for this purpose The scoring parameter is validated and interpreted using func metrics check scoring verbose Logging is not handled very consistently in Scikit learn at present but when it is provided as an option the verbose parameter is usually available to choose no logging set to False Any True value should enable some logging but larger integers e g above 10 may be needed for full verbosity Verbose logs are usually printed to Standard Output Estimators should not produce any output on Standard Output with the default verbose setting warm start When fitting an estimator repeatedly on the same dataset but for multiple parameter values such as to find the value maximizing performance as in ref 
grid search grid search it may be possible to reuse aspects of the model learned from the previous parameter value saving time When warm start is true the existing term fitted model term attributes are used to initialize the new model in a subsequent call to term fit Note that this is only applicable for some models and some parameters and even some orders of parameter values In general there is an interaction between warm start and the parameter controlling the number of iterations of the estimator For estimators imported from mod sklearn ensemble warm start will interact with n estimators or max iter For these models the number of iterations reported via len estimators or n iter corresponds the total number of estimators iterations learnt since the initialization of the model Thus if a model was already initialized with N estimators and fit is called with n estimators or max iter set to M the model will train M N new estimators Other models usually using gradient based solvers have a different behavior They all expose a max iter parameter The reported n iter corresponds to the number of iteration done during the last call to fit and will be at most max iter Thus we do not consider the state of the estimator since the initialization term partial fit also retains the model between calls but differs with warm start the parameters change and the data is more or less constant across calls to fit with partial fit the mini batch of data changes and model parameters stay fixed There are cases where you want to use warm start to fit on different but closely related data For example one may initially fit to a subset of the data then fine tune the parameter search on the full dataset For classification all data in a sequence of warm start calls to fit must include samples from each class glossary attributes Attributes See concept term attribute glossary classes A list of class labels known to the term classifier mapping each label to a numerical index used in the model 
representation our output For instance the array output from term predict proba has columns aligned with classes For term multi output classifiers classes should be a list of lists with one class listing for each output For each output the classes should be sorted numerically or lexicographically for strings classes and the mapping to indices is often managed with class preprocessing LabelEncoder components An affine transformation matrix of shape n components n features used in many linear term transformers where term n components is the number of output features and term n features is the number of input features See also term components which is a similar attribute for linear predictors coef The weight coefficient matrix of a generalized linear model term predictor of shape n features for binary classification and single output regression n classes n features for multiclass classification and n targets n features for multi output regression Note this does not include the intercept or bias term which is stored in intercept When available feature importances is not usually provided as well but can be calculated as the norm of each feature s entry in coef See also term components which is a similar attribute for linear transformers embedding An embedding of the training data in ref manifold learning manifold estimators with shape n samples n components identical to the output of term fit transform See also term labels n iter The number of iterations actually performed when fitting an iterative estimator that may stop upon convergence See also term max iter feature importances A vector of shape n features available in some term predictors to provide a relative measure of the importance of each feature in the predictions of the model labels A vector containing a cluster label for each sample of the training data in term clusterers identical to the output of term fit predict See also term embedding glossary sample props Data and sample properties See concept term 
sample property glossary groups Used in cross validation routines to identify samples that are correlated Each value is an identifier such that in a supporting term CV splitter samples from some groups value may not appear in both a training set and its corresponding test set See ref group cv sample weight A relative weight for each sample Intuitively if all weights are integers a weighted model or score should be equivalent to that calculated when repeating the sample the number of times specified in the weight Weights may be specified as floats so that sample weights are usually equivalent up to a constant positive scaling factor FIXME Is this interpretation always the case in practice We have no common tests Some estimators such as decision trees support negative weights FIXME This feature or its absence may not be tested or documented in many estimators This is not entirely the case where other parameters of the model consider the number of samples in a region as with min samples in class cluster DBSCAN In this case a count of samples becomes to a sum of their weights In classification sample weights can also be specified as a function of class with the term class weight estimator term parameter X Denotes data that is observed at training and prediction time used as independent variables in learning The notation is uppercase to denote that it is ordinarily a matrix see term rectangular When a matrix each sample may be represented by a term feature vector or a vector of term precomputed dis similarity with each training sample X may also not be a matrix and may require a term feature extractor or a term pairwise metric to turn it into one before learning a model Xt Shorthand for transformed term X y Y Denotes data that may be observed at training time as the dependent variable in learning but which is unavailable at prediction time and is usually the term target of prediction The notation may be uppercase to denote that it is a matrix representing term multi 
output targets for instance but usually we use y and sometimes do so even when multiple outputs are assumed |
scikit-learn About us This project was started in 2007 as a Google Summer of Code project by about History | .. _about:
========
About us
========
History
=======
This project was started in 2007 as a Google Summer of Code project by
David Cournapeau. Later that year, Matthieu Brucher started working on this project
as part of his thesis.
In 2010 Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort and Vincent
Michel of INRIA took leadership of the project and made the first public
release on February 1st, 2010. Since then, several releases have appeared
following an approximately 3-month cycle, and a thriving international
community has been leading the development. As a result, INRIA holds the
copyright over the work done by people who were employed by INRIA at the
time of the contribution.
Governance
==========
The decision-making process and governance structure of scikit-learn, including roles and responsibilities, are laid out in the :ref:`governance document <governance>`.
.. The "author" anchors below is there to ensure that old html links (in
the form of "about.html#author" still work)
.. _authors:
The people behind scikit-learn
==============================
Scikit-learn is a community project, developed by a large group of
people from all across the world. A few core contributor teams, listed below, have
central roles; however, a more complete list of contributors can be found `on
github
<https://github.com/scikit-learn/scikit-learn/graphs/contributors>`__.
Active Core Contributors
------------------------
Maintainers Team
................
The following people are currently maintainers, in charge of
consolidating scikit-learn's development and maintenance:
.. include:: maintainers.rst
.. note::
Please do not email the authors directly to ask for assistance or report issues.
Instead, please see `What's the best way to ask questions about scikit-learn
<https://scikit-learn.org/stable/faq.html#what-s-the-best-way-to-get-help-on-scikit-learn-usage>`_
in the FAQ.
.. seealso::
How you can :ref:`contribute to the project <contributing>`.
Documentation Team
..................
The following people help with documenting the project:
.. include:: documentation_team.rst
Contributor Experience Team
...........................
The following people are active contributors who also help with
:ref:`triaging issues <bug_triaging>`, PRs, and general
maintenance:
.. include:: contributor_experience_team.rst
Communication Team
..................
The following people help with :ref:`communication around scikit-learn
<communication_team>`.
.. include:: communication_team.rst
Emeritus Core Contributors
--------------------------
Emeritus Maintainers Team
.........................
The following people have been active contributors in the past, but are no
longer active in the project:
.. include:: maintainers_emeritus.rst
Emeritus Communication Team
...........................
The following people have been active in the communication team in the
past, but no longer have communication responsibilities:
.. include:: communication_team_emeritus.rst
Emeritus Contributor Experience Team
....................................
The following people have been active in the contributor experience team in the
past:
.. include:: contributor_experience_team_emeritus.rst
.. _citing-scikit-learn:
Citing scikit-learn
===================
If you use scikit-learn in a scientific publication, we would appreciate
citations to the following paper:
`Scikit-learn: Machine Learning in Python
<https://jmlr.csail.mit.edu/papers/v12/pedregosa11a.html>`_, Pedregosa
*et al.*, JMLR 12, pp. 2825-2830, 2011.
Bibtex entry::
@article{scikit-learn,
title={Scikit-learn: Machine Learning in {P}ython},
author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
journal={Journal of Machine Learning Research},
volume={12},
pages={2825--2830},
year={2011}
}
If you want to cite scikit-learn for its API or design, you may also want to consider the
following paper:
:arxiv:`API design for machine learning software: experiences from the scikit-learn
project <1309.0238>`, Buitinck *et al.*, 2013.
Bibtex entry::
@inproceedings{sklearn_api,
author = {Lars Buitinck and Gilles Louppe and Mathieu Blondel and
Fabian Pedregosa and Andreas Mueller and Olivier Grisel and
Vlad Niculae and Peter Prettenhofer and Alexandre Gramfort
and Jaques Grobler and Robert Layton and Jake VanderPlas and
Arnaud Joly and Brian Holt and Ga{\"{e}}l Varoquaux},
title = {{API} design for machine learning software: experiences from the scikit-learn
project},
booktitle = {ECML PKDD Workshop: Languages for Data Mining and Machine Learning},
year = {2013},
pages = {108--122},
}
Artwork
=======
High quality PNG and SVG logos are available in the `doc/logos/
<https://github.com/scikit-learn/scikit-learn/tree/main/doc/logos>`_
source directory.
.. image:: images/scikit-learn-logo-notext.png
:align: center
Funding
=======
Scikit-learn is a community-driven project; however, institutional and private
grants help to ensure its sustainability.
The project would like to thank the following funders.
...................................
.. div:: sk-text-image-grid-small
.. div:: text-box
`:probabl. <https://probabl.ai>`_ employs Adrin Jalali, Arturo Amor,
François Goupil, Guillaume Lemaitre, Jérémie du Boisberranger, Loïc Estève,
Olivier Grisel, and Stefanie Senger.
.. div:: image-box
.. image:: images/probabl.png
:target: https://probabl.ai
..........
.. |chanel| image:: images/chanel.png
:target: https://www.chanel.com
.. |axa| image:: images/axa.png
:target: https://www.axa.fr/
.. |bnp| image:: images/bnp.png
:target: https://www.bnpparibascardif.com/
.. |dataiku| image:: images/dataiku.png
:target: https://www.dataiku.com/
.. |nvidia| image:: images/nvidia.png
:target: https://www.nvidia.com
.. |inria| image:: images/inria-logo.jpg
:target: https://www.inria.fr
.. raw:: html
<style>
table.image-subtable tr {
border-color: transparent;
}
table.image-subtable td {
width: 50%;
vertical-align: middle;
text-align: center;
}
table.image-subtable td img {
max-height: 40px !important;
max-width: 90% !important;
}
</style>
.. div:: sk-text-image-grid-small
.. div:: text-box
The `Members <https://scikit-learn.fondation-inria.fr/en/home/#sponsors>`_ of
the `Scikit-learn Consortium at Inria Foundation
<https://scikit-learn.fondation-inria.fr/en/home/>`_ help maintain and
improve the project through their financial support.
.. div:: image-box
.. table::
:class: image-subtable
+----------+-----------+
|       |chanel|       |
+----------+-----------+
|  |axa|   |   |bnp|   |
+----------+-----------+
|       |nvidia|       |
+----------+-----------+
|      |dataiku|       |
+----------+-----------+
|       |inria|        |
+----------+-----------+
..........
.. div:: sk-text-image-grid-small
.. div:: text-box
`NVIDIA <https://nvidia.com>`_ has funded Tim Head since 2022
and is part of the scikit-learn consortium at Inria.
.. div:: image-box
.. image:: images/nvidia.png
:target: https://nvidia.com
..........
.. div:: sk-text-image-grid-small
.. div:: text-box
`Microsoft <https://microsoft.com/>`_ has funded Andreas Müller since 2020.
.. div:: image-box
.. image:: images/microsoft.png
:target: https://microsoft.com
...........
.. div:: sk-text-image-grid-small
.. div:: text-box
`Quansight Labs <https://labs.quansight.org>`_ has funded Lucy Liu since 2022.
.. div:: image-box
.. image:: images/quansight-labs.png
:target: https://labs.quansight.org
...........
.. |czi| image:: images/czi.png
:target: https://chanzuckerberg.com
.. |wellcome| image:: images/wellcome-trust.png
:target: https://wellcome.org/
.. div:: sk-text-image-grid-small
.. div:: text-box
`The Chan-Zuckerberg Initiative <https://chanzuckerberg.com/>`_ and
`Wellcome Trust <https://wellcome.org/>`_ fund scikit-learn through the
`Essential Open Source Software for Science (EOSS) <https://chanzuckerberg.com/eoss/>`_
cycle 6.
This funding supports Lucy Liu and diversity & inclusion initiatives that will
be announced in the future.
.. div:: image-box
.. table::
:class: image-subtable
+----------+----------------+
|  |czi|   |  |wellcome|    |
+----------+----------------+
...........
.. div:: sk-text-image-grid-small
.. div:: text-box
`Tidelift <https://tidelift.com/>`_ supports the project via their service
agreement.
.. div:: image-box
.. image:: images/Tidelift-logo-on-light.svg
:target: https://tidelift.com/
...........
Past Sponsors
-------------
.. div:: sk-text-image-grid-small
.. div:: text-box
`Quansight Labs <https://labs.quansight.org>`_ funded Meekail Zain in 2022 and 2023,
and funded Thomas J. Fan from 2021 to 2023.
.. div:: image-box
.. image:: images/quansight-labs.png
:target: https://labs.quansight.org
...........
.. div:: sk-text-image-grid-small
.. div:: text-box
`Columbia University <https://columbia.edu/>`_ funded Andreas Müller
(2016-2020).
.. div:: image-box
.. image:: images/columbia.png
:target: https://columbia.edu
........
.. div:: sk-text-image-grid-small
.. div:: text-box
`The University of Sydney <https://sydney.edu.au/>`_ funded Joel Nothman
(2017-2021).
.. div:: image-box
.. image:: images/sydney-primary.jpeg
:target: https://sydney.edu.au/
...........
.. div:: sk-text-image-grid-small
.. div:: text-box
Andreas Müller received a grant to improve scikit-learn from the
`Alfred P. Sloan Foundation <https://sloan.org>`_.
This grant supported the positions of Nicolas Hug and Thomas J. Fan.
.. div:: image-box
.. image:: images/sloan_banner.png
:target: https://sloan.org/
.............
.. div:: sk-text-image-grid-small
.. div:: text-box
`INRIA <https://www.inria.fr>`_ actively supports this project. It has
provided funding for Fabian Pedregosa (2010-2012), Jaques Grobler
(2012-2013) and Olivier Grisel (2013-2017) to work on this project
full-time. It also hosts coding sprints and other events.
.. div:: image-box
.. image:: images/inria-logo.jpg
:target: https://www.inria.fr
.....................
.. div:: sk-text-image-grid-small
.. div:: text-box
`Paris-Saclay Center for Data Science <http://www.datascience-paris-saclay.fr/>`_
funded one year of developer time to work on the project full-time (2014-2015), 50%
of the time of Guillaume Lemaitre (2016-2017), and 50% of the time of Joris van den
Bossche (2017-2018).
.. div:: image-box
.. image:: images/cds-logo.png
:target: http://www.datascience-paris-saclay.fr/
..........................
.. div:: sk-text-image-grid-small
.. div:: text-box
`NYU Moore-Sloan Data Science Environment <https://cds.nyu.edu/mooresloan/>`_
funded Andreas Mueller (2014-2016) to work on this project. The Moore-Sloan
Data Science Environment also funded several students to work on the project
part-time.
.. div:: image-box
.. image:: images/nyu_short_color.png
:target: https://cds.nyu.edu/mooresloan/
........................
.. div:: sk-text-image-grid-small
.. div:: text-box
`Télécom Paristech <https://www.telecom-paristech.fr/>`_ funded Manoj Kumar
(2014), Tom Dupré la Tour (2015), Raghav RV (2015-2017), Thierry Guillemot
(2016-2017) and Albert Thomas (2017) to work on scikit-learn.
.. div:: image-box
.. image:: images/telecom.png
:target: https://www.telecom-paristech.fr/
.....................
.. div:: sk-text-image-grid-small
.. div:: text-box
`The Labex DigiCosme <https://digicosme.lri.fr>`_ funded Nicolas Goix
(2015-2016), Tom Dupré la Tour (2015-2016 and 2017-2018), Mathurin Massias
(2018-2019) to work part time on scikit-learn during their PhDs. It also
funded a scikit-learn coding sprint in 2015.
.. div:: image-box
.. image:: images/digicosme.png
:target: https://digicosme.lri.fr
.....................
.. div:: sk-text-image-grid-small
.. div:: text-box
`The Chan-Zuckerberg Initiative <https://chanzuckerberg.com/>`_ funded Nicolas
Hug to work full-time on scikit-learn in 2020.
.. div:: image-box
.. image:: images/czi.png
:target: https://chanzuckerberg.com
......................
The following students were sponsored by `Google
<https://opensource.google/>`_ to work on scikit-learn through
the `Google Summer of Code <https://en.wikipedia.org/wiki/Google_Summer_of_Code>`_
program.
- 2007 - David Cournapeau
- 2011 - `Vlad Niculae`_
- 2012 - `Vlad Niculae`_, Immanuel Bayer
- 2013 - Kemal Eren, Nicolas Trésegnie
- 2014 - Hamzeh Alsalhi, Issam Laradji, Maheshakya Wijewardena, Manoj Kumar
- 2015 - `Raghav RV <https://github.com/raghavrv>`_, Wei Xue
- 2016 - `Nelson Liu <http://nelsonliu.me>`_, `YenChen Lin <https://yenchenlin.me/>`_
.. _Vlad Niculae: https://vene.ro/
...................
The `NeuroDebian <http://neuro.debian.net>`_ project, which provides `Debian
<https://www.debian.org/>`_ packaging and contributions, is supported by
`Dr. James V. Haxby <http://haxbylab.dartmouth.edu/>`_ (`Dartmouth
College <https://pbs.dartmouth.edu/>`_).
...................
The following organizations funded the scikit-learn consortium at Inria in
the past:
.. |msn| image:: images/microsoft.png
:target: https://www.microsoft.com/
.. |bcg| image:: images/bcg.png
:target: https://www.bcg.com/beyond-consulting/bcg-gamma/default.aspx
.. |fujitsu| image:: images/fujitsu.png
:target: https://www.fujitsu.com/global/
.. |aphp| image:: images/logo_APHP_text.png
:target: https://aphp.fr/
.. |hf| image:: images/huggingface_logo-noborder.png
:target: https://huggingface.co
.. raw:: html
<style>
div.image-subgrid img {
max-height: 50px;
max-width: 90%;
}
</style>
.. grid:: 2 2 4 4
:class-row: image-subgrid
:gutter: 1
.. grid-item::
:class: sd-text-center
:child-align: center
|msn|
.. grid-item::
:class: sd-text-center
:child-align: center
|bcg|
.. grid-item::
:class: sd-text-center
:child-align: center
|fujitsu|
.. grid-item::
:class: sd-text-center
:child-align: center
|aphp|
.. grid-item::
:class: sd-text-center
:child-align: center
|hf|
Coding Sprints
==============
The scikit-learn project has a long history of `open source coding sprints
<https://blog.scikit-learn.org/events/sprints-value/>`_, with over 50 sprint
events from 2010 to the present day. Scores of sponsors have contributed to
costs, including venue, food, travel, and developer time. See
`scikit-learn sprints <https://blog.scikit-learn.org/sprints/>`_ for a full
list of events.
Donating to the project
=======================
If you are interested in donating to the project or to one of our code-sprints,
please donate via the `NumFOCUS Donations Page
<https://numfocus.org/donate-to-scikit-learn>`_.
.. raw:: html
<p class="text-center">
<a class="btn sk-btn-orange mb-1" href="https://numfocus.org/donate-to-scikit-learn">
Help us, <strong>donate!</strong>
</a>
</p>
All donations will be handled by `NumFOCUS <https://numfocus.org/>`_, a non-profit
organization managed by a board of `SciPy community members
<https://numfocus.org/board.html>`_. NumFOCUS's mission is to foster scientific
computing software, in particular in Python. As the fiscal home of scikit-learn, it
ensures that money is available when needed to keep the project funded and available
while in compliance with tax regulations.
Donations received for the scikit-learn project will mostly go towards covering
travel expenses for code sprints, as well as towards the organization budget of the
project [#f1]_.
.. rubric:: Notes
.. [#f1] Regarding the organization budget, in particular, we might use some of
the donated funds to pay for other project expenses such as DNS,
hosting or continuous integration services.
Infrastructure support
======================
We would also like to thank `Microsoft Azure <https://azure.microsoft.com/en-us/>`_,
`Cirrus CI <https://cirrus-ci.org>`_, and `CircleCI <https://circleci.com/>`_ for free CPU
time on their Continuous Integration servers, and `Anaconda Inc. <https://www.anaconda.com>`_
for the storage they provide for our staging and nightly builds.
are active contributors who also help with ref triaging issues bug triaging PRs and general maintenance include contributor experience team rst Communication Team The following people help with ref communication around scikit learn communication team include communication team rst Emeritus Core Contributors Emeritus Maintainers Team The following people have been active contributors in the past but are no longer active in the project include maintainers emeritus rst Emeritus Communication Team The following people have been active in the communication team in the past but no longer have communication responsibilities include communication team emeritus rst Emeritus Contributor Experience Team The following people have been active in the contributor experience team in the past include contributor experience team emeritus rst citing scikit learn Citing scikit learn If you use scikit learn in a scientific publication we would appreciate citations to the following paper Scikit learn Machine Learning in Python https jmlr csail mit edu papers v12 pedregosa11a html Pedregosa et al JMLR 12 pp 2825 2830 2011 Bibtex entry article scikit learn title Scikit learn Machine Learning in P ython author Pedregosa F and Varoquaux G and Gramfort A and Michel V and Thirion B and Grisel O and Blondel M and Prettenhofer P and Weiss R and Dubourg V and Vanderplas J and Passos A and Cournapeau D and Brucher M and Perrot M and Duchesnay E journal Journal of Machine Learning Research volume 12 pages 2825 2830 year 2011 If you want to cite scikit learn for its API or design you may also want to consider the following paper arxiv API design for machine learning software experiences from the scikit learn project 1309 0238 Buitinck et al 2013 Bibtex entry inproceedings sklearn api author Lars Buitinck and Gilles Louppe and Mathieu Blondel and Fabian Pedregosa and Andreas Mueller and Olivier Grisel and Vlad Niculae and Peter Prettenhofer and Alexandre Gramfort and Jaques Grobler and Robert Layton 
and Jake VanderPlas and Arnaud Joly and Brian Holt and Ga e l Varoquaux title API design for machine learning software experiences from the scikit learn project booktitle ECML PKDD Workshop Languages for Data Mining and Machine Learning year 2013 pages 108 122 Artwork High quality PNG and SVG logos are available in the doc logos https github com scikit learn scikit learn tree main doc logos source directory image images scikit learn logo notext png align center Funding Scikit learn is a community driven project however institutional and private grants help to assure its sustainability The project would like to thank the following funders div sk text image grid small div text box probabl https probabl ai employs Adrin Jalali Arturo Amor Fran ois Goupil Guillaume Lemaitre J r mie du Boisberranger Lo c Est ve Olivier Grisel and Stefanie Senger div image box image images probabl png target https probabl ai chanel image images chanel png target https www chanel com axa image images axa png target https www axa fr bnp image images bnp png target https www bnpparibascardif com dataiku image images dataiku png target https www dataiku com nvidia image images nvidia png target https www nvidia com inria image images inria logo jpg target https www inria fr raw html style table image subtable tr border color transparent table image subtable td width 50 vertical align middle text align center table image subtable td img max height 40px important max width 90 important style div sk text image grid small div text box The Members https scikit learn fondation inria fr en home sponsors of the Scikit learn Consortium at Inria Foundation https scikit learn fondation inria fr en home help at maintaining and improving the project through their financial support div image box table class image subtable chanel axa bnp nvidia dataiku inria div sk text image grid small div text box NVidia https nvidia com funds Tim Head since 2022 and is part of the scikit learn consortium at Inria div 
image box image images nvidia png target https nvidia com div sk text image grid small div text box Microsoft https microsoft com funds Andreas M ller since 2020 div image box image images microsoft png target https microsoft com div sk text image grid small div text box Quansight Labs https labs quansight org funds Lucy Liu since 2022 div image box image images quansight labs png target https labs quansight org czi image images czi png target https chanzuckerberg com wellcome image images wellcome trust png target https wellcome org div sk text image grid small div text box The Chan Zuckerberg Initiative https chanzuckerberg com and Wellcome Trust https wellcome org fund scikit learn through the Essential Open Source Software for Science EOSS https chanzuckerberg com eoss cycle 6 It supports Lucy Liu and diversity inclusion initiatives that will be announced in the future div image box table class image subtable czi wellcome div sk text image grid small div text box Tidelift https tidelift com supports the project via their service agreement div image box image images Tidelift logo on light svg target https tidelift com Past Sponsors div sk text image grid small div text box Quansight Labs https labs quansight org funded Meekail Zain in 2022 and 2023 and funded Thomas J Fan from 2021 to 2023 div image box image images quansight labs png target https labs quansight org div sk text image grid small div text box Columbia University https columbia edu funded Andreas M ller 2016 2020 div image box image images columbia png target https columbia edu div sk text image grid small div text box The University of Sydney https sydney edu au funded Joel Nothman 2017 2021 div image box image images sydney primary jpeg target https sydney edu au div sk text image grid small div text box Andreas M ller received a grant to improve scikit learn from the Alfred P Sloan Foundation https sloan org This grant supported the position of Nicolas Hug and Thomas J Fan div image box image 
images sloan banner png target https sloan org div sk text image grid small div text box INRIA https www inria fr actively supports this project It has provided funding for Fabian Pedregosa 2010 2012 Jaques Grobler 2012 2013 and Olivier Grisel 2013 2017 to work on this project full time It also hosts coding sprints and other events div image box image images inria logo jpg target https www inria fr div sk text image grid small div text box Paris Saclay Center for Data Science http www datascience paris saclay fr funded one year for a developer to work on the project full time 2014 2015 50 of the time of Guillaume Lemaitre 2016 2017 and 50 of the time of Joris van den Bossche 2017 2018 div image box image images cds logo png target http www datascience paris saclay fr div sk text image grid small div text box NYU Moore Sloan Data Science Environment https cds nyu edu mooresloan funded Andreas Mueller 2014 2016 to work on this project The Moore Sloan Data Science Environment also funds several students to work on the project part time div image box image images nyu short color png target https cds nyu edu mooresloan div sk text image grid small div text box T l com Paristech https www telecom paristech fr funded Manoj Kumar 2014 Tom Dupr la Tour 2015 Raghav RV 2015 2017 Thierry Guillemot 2016 2017 and Albert Thomas 2017 to work on scikit learn div image box image images telecom png target https www telecom paristech fr div sk text image grid small div text box The Labex DigiCosme https digicosme lri fr funded Nicolas Goix 2015 2016 Tom Dupr la Tour 2015 2016 and 2017 2018 Mathurin Massias 2018 2019 to work part time on scikit learn during their PhDs It also funded a scikit learn coding sprint in 2015 div image box image images digicosme png target https digicosme lri fr div sk text image grid small div text box The Chan Zuckerberg Initiative https chanzuckerberg com funded Nicolas Hug to work full time on scikit learn in 2020 div image box image images czi png target 
https chanzuckerberg com The following students were sponsored by Google https opensource google to work on scikit learn through the Google Summer of Code https en wikipedia org wiki Google Summer of Code program 2007 David Cournapeau 2011 Vlad Niculae 2012 Vlad Niculae Immanuel Bayer 2013 Kemal Eren Nicolas Tr segnie 2014 Hamzeh Alsalhi Issam Laradji Maheshakya Wijewardena Manoj Kumar 2015 Raghav RV https github com raghavrv Wei Xue 2016 Nelson Liu http nelsonliu me YenChen Lin https yenchenlin me Vlad Niculae https vene ro The NeuroDebian http neuro debian net project providing Debian https www debian org packaging and contributions is supported by Dr James V Haxby http haxbylab dartmouth edu Dartmouth College https pbs dartmouth edu The following organizations funded the scikit learn consortium at Inria in the past msn image images microsoft png target https www microsoft com bcg image images bcg png target https www bcg com beyond consulting bcg gamma default aspx fujitsu image images fujitsu png target https www fujitsu com global aphp image images logo APHP text png target https aphp fr hf image images huggingface logo noborder png target https huggingface co raw html style div image subgrid img max height 50px max width 90 style grid 2 2 4 4 class row image subgrid gutter 1 grid item class sd text center child align center msn grid item class sd text center child align center bcg grid item class sd text center child align center fujitsu grid item class sd text center child align center aphp grid item class sd text center child align center hf Coding Sprints The scikit learn project has a long history of open source coding sprints https blog scikit learn org events sprints value with over 50 sprint events from 2010 to present day There are scores of sponsors who contributed to costs which include venue food travel developer time and more See scikit learn sprints https blog scikit learn org sprints for a full list of events Donating to the project If you are 
interested in donating to the project or to one of our code sprints please donate via the NumFOCUS Donations Page https numfocus org donate to scikit learn raw html p class text center a class btn sk btn orange mb 1 href https numfocus org donate to scikit learn Help us strong donate strong a p All donations will be handled by NumFOCUS https numfocus org a non profit organization which is managed by a board of Scipy community members https numfocus org board html NumFOCUS s mission is to foster scientific computing software in particular in Python As a fiscal home of scikit learn it ensures that money is available when needed to keep the project funded and available while in compliance with tax regulations The received donations for the scikit learn project mostly will go towards covering travel expenses for code sprints as well as towards the organization budget of the project f1 rubric Notes f1 Regarding the organization budget in particular we might use some of the donated funds to pay for other project expenses such as DNS hosting or continuous integration services Infrastructure support We would also like to thank Microsoft Azure https azure microsoft com en us Cirrus Cl https cirrus ci org CircleCl https circleci com for free CPU time on their Continuous Integration servers and Anaconda Inc https www anaconda com for the storage they provide for our staging and nightly builds |
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_1_5:
===========
Version 1.5
===========
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_5_0.py`.
.. include:: changelog_legend.inc
.. _changes_1_5_2:
Version 1.5.2
=============
**September 2024**
Changes impacting many modules
------------------------------
- |Fix| Fixed performance regression in a few Cython modules in
`sklearn._loss`, `sklearn.manifold`, `sklearn.metrics` and `sklearn.utils`,
which were built without OpenMP support.
  :pr:`29694` by :user:`Loïc Estève <lesteve>`.
Changelog
---------
:mod:`sklearn.calibration`
..........................
- |Fix| Raise an error when :class:`~sklearn.model_selection.LeaveOneOut` is used in
  `cv`, matching what would happen if `KFold(n_splits=n_samples)` was used.
  :pr:`29545` by :user:`Lucy Liu <lucyleeow>`.
:mod:`sklearn.compose`
......................
- |Fix| Fixed :class:`compose.TransformedTargetRegressor` so that it no longer raises
  `UserWarning` if transform output is set to `pandas` or `polars`, since it isn't a
  transformer.
:pr:`29401` by :user:`Stefanie Senger <StefanieSenger>`.
:mod:`sklearn.decomposition`
............................
- |Fix| Increase the rank deficiency threshold in the whitening step of
:class:`decomposition.FastICA` with `whiten_solver="eigh"` to improve the
platform-agnosticity of the estimator.
:pr:`29612` by :user:`Olivier Grisel <ogrisel>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fix a regression in :func:`metrics.accuracy_score` and in
:func:`metrics.zero_one_loss` causing an error for Array API dispatch with multilabel
inputs.
:pr:`29336` by :user:`Edoardo Abati <EdAbati>`.
:mod:`sklearn.svm`
..................
- |Fix| Fixed a regression in :class:`svm.SVC` and :class:`svm.SVR` such that we accept
`C=float("inf")`.
:pr:`29780` by :user:`Guillaume Lemaitre <glemaitre>`.
.. _changes_1_5_1:
Version 1.5.1
=============
**July 2024**
Changes impacting many modules
------------------------------
- |Fix| Fixed a regression in the validation of the input data of all estimators where
an unexpected error was raised when passing a DataFrame backed by a read-only buffer.
:pr:`29018` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a regression causing a dead-lock at import time in some settings.
:pr:`29235` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
Changelog
---------
:mod:`sklearn.compose`
......................
- |Efficiency| Fix a performance regression in :class:`compose.ColumnTransformer`
where the full input data was copied for each transformer when `n_jobs > 1`.
:pr:`29330` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fix a regression in :func:`metrics.r2_score`. Passing torch CPU tensors
  with array API dispatch disabled would complain about non-CPU devices
  instead of implicitly converting those inputs as regular NumPy arrays.
  :pr:`29119` by :user:`Olivier Grisel <ogrisel>`.
- |Fix| Fix a regression in
:func:`metrics.zero_one_loss` causing an error for Array API dispatch with multilabel
inputs.
:pr:`29269` by :user:`Yaroslav Korobko <Tialo>`.
:mod:`sklearn.model_selection`
..............................
- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for parameter
grids that have heterogeneous parameter values.
:pr:`29078` by :user:`Loïc Estève <lesteve>`.
- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for parameter
grids that have estimators as parameter values.
  :pr:`29179` by :user:`Marco Gorelli <MarcoGorelli>`.
- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for parameter
grids that have arrays of different sizes as parameter values.
  :pr:`29314` by :user:`Marco Gorelli <MarcoGorelli>`.
:mod:`sklearn.tree`
...................
- |Fix| Fix an issue in :func:`tree.export_graphviz` and :func:`tree.plot_tree`
that could potentially result in exception or wrong results on 32bit OSes.
  :pr:`29327` by :user:`Loïc Estève <lesteve>`.
:mod:`sklearn.utils`
....................
- |API| :func:`utils.validation.check_array` has a new parameter, `force_writeable`, to
control the writeability of the output array. If set to `True`, the output array will
be guaranteed to be writeable and a copy will be made if the input array is read-only.
If set to `False`, no guarantee is made about the writeability of the output array.
:pr:`29018` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
.. _changes_1_5:
Version 1.5.0
=============
**May 2024**
Security
--------
- |Fix| :class:`feature_extraction.text.CountVectorizer` and
:class:`feature_extraction.text.TfidfVectorizer` no longer store discarded
tokens from the training set in their `stop_words_` attribute. This attribute
would hold too frequent (above `max_df`) but also too rare tokens (below
`min_df`). This fixes a potential security issue (data leak) if the discarded
rare tokens hold sensitive information from the training set without the
model developer's knowledge.
Note: users of those classes are encouraged to either retrain their pipelines
with the new scikit-learn version or to manually clear the `stop_words_`
attribute from previously trained instances of those transformers. This
attribute was designed only for model inspection purposes and has no impact
on the behavior of the transformers.
:pr:`28823` by :user:`Olivier Grisel <ogrisel>`.
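A minimal sketch of the manual clearing suggested above for previously trained instances; the `hasattr` guard is an assumption to cover both old pickles (where the attribute is populated) and new fits (where it may be absent):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog sat", "the cat ran"]
vec = CountVectorizer(max_df=2).fit(docs)  # "the" (df=3) exceeds max_df

# For instances trained with an older scikit-learn, clear the attribute
# manually; it is for inspection only and does not affect transform.
if hasattr(vec, "stop_words_"):
    vec.stop_words_ = None
out = vec.transform(docs)
```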
Changed models
--------------
- |Efficiency| The subsampling in :class:`preprocessing.QuantileTransformer` is now
more efficient for dense arrays but the fitted quantiles and the results of
`transform` may be slightly different than before (keeping the same statistical
properties).
:pr:`27344` by :user:`Xuefeng Xu <xuefeng-xu>`.
- |Enhancement| :class:`decomposition.PCA`, :class:`decomposition.SparsePCA`
and :class:`decomposition.TruncatedSVD` now set the sign of the `components_`
attribute based on the component values instead of using the transformed data
as reference. This change is needed to be able to offer consistent component
signs across all `PCA` solvers, including the new
`svd_solver="covariance_eigh"` option introduced in this release.
Changes impacting many modules
------------------------------
- |Fix| Raise `ValueError` with an informative error message when passing 1D
sparse arrays to methods that expect 2D sparse inputs.
:pr:`28988` by :user:`Olivier Grisel <ogrisel>`.
- |API| The name of the input of the `inverse_transform` method of estimators has been
standardized to `X`. As a consequence, `Xt` is deprecated and will be removed in
version 1.7 in the following estimators: :class:`cluster.FeatureAgglomeration`,
:class:`decomposition.MiniBatchNMF`, :class:`decomposition.NMF`,
:class:`model_selection.GridSearchCV`, :class:`model_selection.RandomizedSearchCV`,
:class:`pipeline.Pipeline` and :class:`preprocessing.KBinsDiscretizer`.
:pr:`28756` by :user:`Will Dean <wd60622>`.
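A minimal sketch of the standardized argument name, using :class:`preprocessing.KBinsDiscretizer` as one of the affected estimators (assuming scikit-learn ≥ 1.5):

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

X = np.array([[0.0], [1.0], [2.0], [3.0]])
est = KBinsDiscretizer(n_bins=2, encode="ordinal", strategy="uniform").fit(X)
X_binned = est.transform(X)

# Pass the input positionally or as `X=`; the old `Xt=` keyword is
# deprecated and scheduled for removal in 1.7.
X_back = est.inverse_transform(X_binned)
```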
Support for Array API
---------------------
Additional estimators and functions have been updated to include support for all
`Array API <https://data-apis.org/array-api/latest/>`_ compliant inputs.
See :ref:`array_api` for more details.
**Functions:**
- :func:`sklearn.metrics.r2_score` now supports Array API compliant inputs.
:pr:`27904` by :user:`Eric Lindgren <elindgren>`, :user:`Franck Charras <fcharras>`,
:user:`Olivier Grisel <ogrisel>` and :user:`Tim Head <betatim>`.
**Classes:**
- :class:`linear_model.Ridge` now supports the Array API for the `svd` solver.
See :ref:`array_api` for more details.
:pr:`27800` by :user:`Franck Charras <fcharras>`, :user:`Olivier Grisel <ogrisel>`
and :user:`Tim Head <betatim>`.
Support for building with Meson
-------------------------------
From scikit-learn 1.5 onwards, Meson is the main supported way to build
scikit-learn, see :ref:`Building from source <install_bleeding_edge>` for more
details.
Unless we discover a major blocker, setuptools support will be dropped in
scikit-learn 1.6. The 1.5.x releases will support building scikit-learn with
setuptools.
Meson support for building scikit-learn was added in :pr:`28040` by
:user:`Loïc Estève <lesteve>`.
Metadata Routing
----------------
The following models now support metadata routing in one or more of their
methods. Refer to the :ref:`Metadata Routing User Guide <metadata_routing>` for
more details.
- |Feature| :class:`impute.IterativeImputer` now supports metadata routing in
its `fit` method. :pr:`28187` by :user:`Stefanie Senger <StefanieSenger>`.
- |Feature| :class:`ensemble.BaggingClassifier` and :class:`ensemble.BaggingRegressor`
now support metadata routing. The fit methods now
accept ``**fit_params`` which are passed to the underlying estimators
via their `fit` methods.
:pr:`28432` by :user:`Adam Li <adam2392>` and
:user:`Benjamin Bossan <BenjaminBossan>`.
- |Feature| :class:`linear_model.RidgeCV` and
:class:`linear_model.RidgeClassifierCV` now support metadata routing in
their `fit` method and route metadata to the underlying
:class:`model_selection.GridSearchCV` object or the underlying scorer.
:pr:`27560` by :user:`Omar Salman <OmarManzoor>`.
- |Feature| :class:`covariance.GraphicalLassoCV` now supports metadata routing in its
  `fit` method and routes metadata to the CV splitter.
  :pr:`27566` by :user:`Omar Salman <OmarManzoor>`.
- |Feature| :class:`linear_model.RANSACRegressor` now supports metadata routing
  in its ``fit``, ``score`` and ``predict`` methods and routes metadata to its
  underlying estimator's ``fit``, ``score`` and ``predict`` methods.
:pr:`28261` by :user:`Stefanie Senger <StefanieSenger>`.
- |Feature| :class:`ensemble.VotingClassifier` and
:class:`ensemble.VotingRegressor` now support metadata routing and pass
``**fit_params`` to the underlying estimators via their `fit` methods.
:pr:`27584` by :user:`Stefanie Senger <StefanieSenger>`.
- |Feature| :class:`pipeline.FeatureUnion` now supports metadata routing in its
``fit`` and ``fit_transform`` methods and route metadata to the underlying
transformers' ``fit`` and ``fit_transform``.
:pr:`28205` by :user:`Stefanie Senger <StefanieSenger>`.
- |Fix| Fix an issue when resolving default routing requests set via class
attributes.
:pr:`28435` by `Adrin Jalali`_.
- |Fix| Fix an issue when `set_{method}_request` methods are used as unbound
methods, which can happen if one tries to decorate them.
:pr:`28651` by `Adrin Jalali`_.
- |Fix| Prevent a `RecursionError` when estimators with the default `scoring`
param (`None`) route metadata.
:pr:`28712` by :user:`Stefanie Senger <StefanieSenger>`.
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.
:mod:`sklearn.calibration`
..........................
- |Fix| Fixed a regression in :class:`calibration.CalibratedClassifierCV` where
an error was wrongly raised with string targets.
:pr:`28843` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.cluster`
......................
- |Fix| The :class:`cluster.MeanShift` class now properly converges for constant data.
:pr:`28951` by :user:`Akihiro Kuno <akikuno>`.
- |Fix| Create a copy of the precomputed sparse matrix within the `fit` method of
  :class:`~cluster.OPTICS` to avoid in-place modification of the sparse matrix.
:pr:`28491` by :user:`Thanh Lam Dang <lamdang2k>`.
- |Fix| :class:`cluster.HDBSCAN` now supports all metrics supported by
:func:`sklearn.metrics.pairwise_distances` when `algorithm="brute"` or `"auto"`.
:pr:`28664` by :user:`Manideep Yenugula <myenugula>`.
:mod:`sklearn.compose`
......................
- |Feature| A fitted :class:`compose.ColumnTransformer` now implements `__getitem__`
which returns the fitted transformers by name. :pr:`27990` by `Thomas Fan`_.
- |Enhancement| :class:`compose.TransformedTargetRegressor` now raises an error in `fit`
  if only `inverse_func` is provided without `func` (which would default to the
  identity) being explicitly set as well.
:pr:`28483` by :user:`Stefanie Senger <StefanieSenger>`.
- |Enhancement| :class:`compose.ColumnTransformer` can now expose the "remainder"
columns in the fitted `transformers_` attribute as column names or boolean
masks, rather than column indices.
:pr:`27657` by :user:`Jérôme Dockès <jeromedockes>`.
- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` with `n_jobs > 1`, where the
intermediate selected columns were passed to the transformers as read-only arrays.
:pr:`28822` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.cross_decomposition`
..................................
- |Fix| The `coef_` fitted attribute of :class:`cross_decomposition.PLSRegression`
now takes into account both the scale of `X` and `Y` when `scale=True`. Note that
the previous predicted values were not affected by this bug.
:pr:`28612` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| Deprecates `Y` in favor of `y` in the methods `fit`, `transform` and
  `inverse_transform` of :class:`cross_decomposition.PLSRegression`,
  :class:`cross_decomposition.PLSCanonical`, :class:`cross_decomposition.CCA`,
  and :class:`cross_decomposition.PLSSVD`.
`Y` will be removed in version 1.7.
:pr:`28604` by :user:`David Leon <davidleon123>`.
:mod:`sklearn.datasets`
.......................
- |Enhancement| Adds optional arguments `n_retries` and `delay` to functions
:func:`datasets.fetch_20newsgroups`,
:func:`datasets.fetch_20newsgroups_vectorized`,
:func:`datasets.fetch_california_housing`,
:func:`datasets.fetch_covtype`,
:func:`datasets.fetch_kddcup99`,
:func:`datasets.fetch_lfw_pairs`,
:func:`datasets.fetch_lfw_people`,
:func:`datasets.fetch_olivetti_faces`,
:func:`datasets.fetch_rcv1`,
and :func:`datasets.fetch_species_distributions`.
By default, the functions will retry up to 3 times in case of network failures.
:pr:`28160` by :user:`Zhehao Liu <MaxwellLZH>` and
:user:`Filip Karlo Došilović <fkdosilovic>`.
:mod:`sklearn.decomposition`
............................
- |Efficiency| :class:`decomposition.PCA` with `svd_solver="full"` now assigns
  a contiguous `components_` attribute instead of a non-contiguous slice of
the singular vectors. When `n_components << n_features`, this can save some
memory and, more importantly, help speed-up subsequent calls to the `transform`
method by more than an order of magnitude by leveraging cache locality of
BLAS GEMM on contiguous arrays.
:pr:`27491` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| :class:`~decomposition.PCA` now automatically selects the ARPACK solver
for sparse inputs when `svd_solver="auto"` instead of raising an error.
:pr:`28498` by :user:`Thanh Lam Dang <lamdang2k>`.
- |Enhancement| :class:`decomposition.PCA` now supports a new solver option
named `svd_solver="covariance_eigh"` which offers an order of magnitude
speed-up and reduced memory usage for datasets with a large number of data
points and a small number of features (say, `n_samples >> 1000 >
n_features`). The `svd_solver="auto"` option has been updated to use the new
solver automatically for such datasets. This solver also accepts sparse input
data.
:pr:`27491` by :user:`Olivier Grisel <ogrisel>`.
- |Fix| :class:`decomposition.PCA` fit with `svd_solver="arpack"`,
`whiten=True` and a value for `n_components` that is larger than the rank of
the training set, no longer returns infinite values when transforming
hold-out data.
:pr:`27491` by :user:`Olivier Grisel <ogrisel>`.
:mod:`sklearn.dummy`
....................
- |Enhancement| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor` now
have the `n_features_in_` and `feature_names_in_` attributes after `fit`.
:pr:`27937` by :user:`Marco vd Boom <tvdboom>`.
:mod:`sklearn.ensemble`
.......................
- |Efficiency| Improves the runtime of `predict` in
  :class:`ensemble.HistGradientBoostingClassifier` by avoiding a call to `predict_proba`.
:pr:`27844` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` are now a tiny bit faster by
pre-sorting the data before finding the thresholds for binning.
:pr:`28102` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Fix| Fixes a bug in :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` when `monotonic_cst` is specified
for non-categorical features.
:pr:`28925` by :user:`Xiao Yuan <yuanx749>`.
:mod:`sklearn.feature_extraction`
.................................
- |Efficiency| :class:`feature_extraction.text.TfidfTransformer` is now faster
and more memory-efficient by using a NumPy vector instead of a sparse matrix
for storing the inverse document frequency.
:pr:`18843` by :user:`Paolo Montesel <thebabush>`.
- |Enhancement| :class:`feature_extraction.text.TfidfTransformer` now preserves
the data type of the input matrix if it is `np.float64` or `np.float32`.
:pr:`28136` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.feature_selection`
................................
- |Enhancement| :func:`feature_selection.mutual_info_regression` and
:func:`feature_selection.mutual_info_classif` now support `n_jobs` parameter.
:pr:`28085` by :user:`Neto Menoci <netomenoci>` and
:user:`Florin Andrei <FlorinAndrei>`.
- |Enhancement| The `cv_results_` attribute of :class:`feature_selection.RFECV` has
a new key, `n_features`, containing an array with the number of features selected
at each step.
:pr:`28670` by :user:`Miguel Silva <miguelcsilva>`.
:mod:`sklearn.impute`
.....................
- |Enhancement| :class:`impute.SimpleImputer` now supports custom strategies
by passing a function in place of a strategy name.
:pr:`28053` by :user:`Mark Elliot <mark-thm>`.
:mod:`sklearn.inspection`
.........................
- |Fix| :meth:`inspection.DecisionBoundaryDisplay.from_estimator` no longer
warns about missing feature names when provided a `polars.DataFrame`.
:pr:`28718` by :user:`Patrick Wang <patrickkwang>`.
:mod:`sklearn.linear_model`
...........................
- |Enhancement| Solver `"newton-cg"` in :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` now emits information when `verbose` is
set to positive values.
:pr:`27526` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Fix| :class:`linear_model.ElasticNet`, :class:`linear_model.ElasticNetCV`,
:class:`linear_model.Lasso` and :class:`linear_model.LassoCV` now explicitly don't
accept large sparse data formats.
:pr:`27576` by :user:`Stefanie Senger <StefanieSenger>`.
- |Fix| :class:`linear_model.RidgeCV` and :class:`linear_model.RidgeClassifierCV`
  correctly pass `sample_weight` to the underlying scorer when `cv` is None.
:pr:`27560` by :user:`Omar Salman <OmarManzoor>`.
- |Fix| `n_nonzero_coefs_` attribute in :class:`linear_model.OrthogonalMatchingPursuit`
will now always be `None` when `tol` is set, as `n_nonzero_coefs` is ignored in
this case. :pr:`28557` by :user:`Lucy Liu <lucyleeow>`.
- |API| :class:`linear_model.RidgeCV` and :class:`linear_model.RidgeClassifierCV`
will now allow `alpha=0` when `cv != None`, which is consistent with
:class:`linear_model.Ridge` and :class:`linear_model.RidgeClassifier`.
:pr:`28425` by :user:`Lucy Liu <lucyleeow>`.
- |API| Passing `average=0` to disable averaging is deprecated in
:class:`linear_model.PassiveAggressiveClassifier`,
:class:`linear_model.PassiveAggressiveRegressor`,
:class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor` and
:class:`linear_model.SGDOneClassSVM`. Pass `average=False` instead.
:pr:`28582` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| Parameter `multi_class` was deprecated in
:class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV`. `multi_class` will be removed in 1.7,
and internally, for 3 and more classes, it will always use multinomial.
If you still want to use the one-vs-rest scheme, you can use
`OneVsRestClassifier(LogisticRegression(..))`.
:pr:`28703` by :user:`Christian Lorentzen <lorentzenchr>`.
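The replacement suggested above can be sketched as follows (assuming scikit-learn with `sklearn.multiclass.OneVsRestClassifier` available):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)

# For three or more classes the fit is now always multinomial; to keep the
# one-vs-rest scheme, wrap the estimator instead of passing multi_class="ovr".
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
```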
- |API| `store_cv_values` and `cv_values_` are deprecated in favor of
  `store_cv_results` and `cv_results_` in :class:`~linear_model.RidgeCV` and
  :class:`~linear_model.RidgeClassifierCV`.
:pr:`28915` by :user:`Lucy Liu <lucyleeow>`.
:mod:`sklearn.manifold`
.......................
- |API| Deprecates `n_iter` in favor of `max_iter` in :class:`manifold.TSNE`.
`n_iter` will be removed in version 1.7. This makes :class:`manifold.TSNE`
consistent with the rest of the estimators. :pr:`28471` by
  :user:`Lucy Liu <lucyleeow>`.
:mod:`sklearn.metrics`
......................
- |Feature| :func:`metrics.pairwise_distances` now accepts non-numeric arrays when
  computing pairwise distances. This is supported through custom metrics only.
:pr:`27456` by :user:`Venkatachalam N <venkyyuvy>`, :user:`Kshitij Mathur <Kshitij68>`
and :user:`Julian Libiseller-Egger <julibeg>`.
- |Feature| :func:`sklearn.metrics.check_scoring` now returns a multi-metric scorer
  when `scoring` is a `dict`, `set`, `tuple`, or `list`. :pr:`28360` by `Thomas Fan`_.
- |Feature| :func:`metrics.d2_log_loss_score` has been added which
calculates the D^2 score for the log loss.
:pr:`28351` by :user:`Omar Salman <OmarManzoor>`.
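  The :math:`D^2` score generalizes :math:`R^2` from squared error to deviances.
  A stdlib-only sketch of the quantity :func:`metrics.d2_log_loss_score` computes
  (binary case, hand-rolled for illustration; not the scikit-learn implementation):

  ```python
  import math

  def log_loss(y_true, y_proba):
      # Mean negative log-likelihood of the positive-class probabilities.
      return -sum(
          math.log(p) if t == 1 else math.log(1.0 - p)
          for t, p in zip(y_true, y_proba)
      ) / len(y_true)

  def d2_log_loss(y_true, y_proba):
      # 1 - deviance(model) / deviance(null model predicting the base rate).
      base_rate = sum(y_true) / len(y_true)
      null_proba = [base_rate] * len(y_true)
      return 1.0 - log_loss(y_true, y_proba) / log_loss(y_true, null_proba)

  y_true = [1, 0, 1, 1]
  y_proba = [0.9, 0.1, 0.8, 0.7]
  print(round(d2_log_loss(y_true, y_proba), 3))  # 0.649
  ```

  A score of 1.0 means a perfect model; 0.0 means no better than always
  predicting the class base rate.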
- |Efficiency| Improve efficiency of functions :func:`~metrics.brier_score_loss`,
:func:`~calibration.calibration_curve`, :func:`~metrics.det_curve`,
:func:`~metrics.precision_recall_curve`,
:func:`~metrics.roc_curve` when `pos_label` argument is specified.
Also improve efficiency of methods `from_estimator`
and `from_predictions` in :class:`~metrics.RocCurveDisplay`,
:class:`~metrics.PrecisionRecallDisplay`, :class:`~metrics.DetCurveDisplay`,
:class:`~calibration.CalibrationDisplay`.
:pr:`28051` by :user:`Pierre de Fréminville <pidefrem>`.
- |Fix| :func:`metrics.classification_report` now shows only accuracy and not
micro-average when input is a subset of labels.
:pr:`28399` by :user:`Vineet Joshi <vjoshi253>`.
- |Fix| Fix OpenBLAS 0.3.26 dead-lock on Windows in pairwise distances
computation. This is likely to affect neighbor-based algorithms.
:pr:`28692` by :user:`Loïc Estève <lesteve>`.
- |API| :func:`metrics.precision_recall_curve` deprecated the keyword argument
`probas_pred` in favor of `y_score`. `probas_pred` will be removed in version 1.7.
:pr:`28092` by :user:`Adam Li <adam2392>`.
- |API| :func:`metrics.brier_score_loss` deprecated the keyword argument `y_prob`
in favor of `y_proba`. `y_prob` will be removed in version 1.7.
:pr:`28092` by :user:`Adam Li <adam2392>`.
- |API| For classifiers and classification metrics, labels encoded as bytes
  are deprecated and will raise an error in v1.7.
:pr:`18555` by :user:`Kaushik Amar Das <cozek>`.
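  A hypothetical migration sketch: decode byte labels to `str` once, up front,
  instead of relying on the deprecated bytes support.

  ```python
  y_bytes = [b"spam", b"ham", b"spam"]
  # Decode before passing labels to classifiers or metrics.
  y_str = [label.decode("utf-8") for label in y_bytes]
  print(y_str)  # ['spam', 'ham', 'spam']
  ```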
:mod:`sklearn.mixture`
......................
- |Fix| The `converged_` attribute of :class:`mixture.GaussianMixture` and
:class:`mixture.BayesianGaussianMixture` now reflects the convergence status of
the best fit whereas it was previously `True` if any of the fits converged.
:pr:`26837` by :user:`Krsto Proroković <krstopro>`.
:mod:`sklearn.model_selection`
..............................
- |MajorFeature| :class:`model_selection.TunedThresholdClassifierCV` finds
the decision threshold of a binary classifier that maximizes a
classification metric through cross-validation.
:class:`model_selection.FixedThresholdClassifier` is an alternative when one wants
to use a fixed decision threshold without any tuning scheme.
:pr:`26120` by :user:`Guillaume Lemaitre <glemaitre>`.
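  The core idea behind :class:`model_selection.TunedThresholdClassifierCV` can be
  sketched in plain Python: sweep candidate decision thresholds over predicted
  probabilities and keep the one that maximizes the metric. This is a stdlib-only
  illustration with a hand-rolled F1 score and no cross-validation, not the
  scikit-learn implementation:

  ```python
  def f1(y_true, y_pred):
      # Hand-rolled binary F1 = 2*TP / (2*TP + FP + FN).
      tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
      fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
      fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
      return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

  def tune_threshold(y_true, y_proba, candidates):
      # Pick the threshold whose induced hard predictions maximize F1.
      return max(
          candidates,
          key=lambda thr: f1(y_true, [int(p >= thr) for p in y_proba]),
      )

  y_true = [0, 0, 0, 1, 1, 1, 1]
  y_proba = [0.1, 0.3, 0.45, 0.4, 0.6, 0.8, 0.9]
  thr = tune_threshold(y_true, y_proba, [i / 10 for i in range(1, 10)])
  print(thr)  # 0.4
  ```

  :class:`model_selection.FixedThresholdClassifier` corresponds to skipping the
  sweep and applying one chosen threshold directly.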
- |Enhancement| :term:`CV splitters <CV splitter>` that ignore the `groups` parameter
  now raise a warning when groups are passed in to :term:`split`. :pr:`28210` by
`Thomas Fan`_.
- |Enhancement| The HTML diagram representation of
:class:`~model_selection.GridSearchCV`,
:class:`~model_selection.RandomizedSearchCV`,
:class:`~model_selection.HalvingGridSearchCV`, and
:class:`~model_selection.HalvingRandomSearchCV` will show the best estimator when
`refit=True`. :pr:`28722` by :user:`Yao Xiao <Charlie-XIAO>` and `Thomas Fan`_.
- |Fix| The ``cv_results_`` attribute of :class:`model_selection.GridSearchCV` now
  returns masked arrays of the appropriate NumPy dtype, as opposed to always returning
  dtype ``object``. :pr:`28352` by :user:`Marco Gorelli <MarcoGorelli>`.
- |Fix| :func:`model_selection.train_test_split` works with Array API inputs.
  Previously indexing was not handled correctly, leading to exceptions when using strict
  implementations of the Array API like CuPy.
:pr:`28407` by :user:`Tim Head <betatim>`.
:mod:`sklearn.multioutput`
..........................
- |Enhancement| `chain_method` parameter added to :class:`multioutput.ClassifierChain`.
:pr:`27700` by :user:`Lucy Liu <lucyleeow>`.
:mod:`sklearn.neighbors`
........................
- |Fix| Fixes :class:`neighbors.NeighborhoodComponentsAnalysis` such that
`get_feature_names_out` returns the correct number of feature names.
:pr:`28306` by :user:`Brendan Lu <brendanlu>`.
:mod:`sklearn.pipeline`
.......................
- |Feature| :class:`pipeline.FeatureUnion` can now use the
  `verbose_feature_names_out` parameter. If `True`, `get_feature_names_out`
will prefix all feature names with the name of the transformer
that generated that feature. If `False`, `get_feature_names_out` will not
prefix any feature names and will error if feature names are not unique.
:pr:`25991` by :user:`Jiawei Zhang <jiawei-zhang-a>`.
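  The naming rule can be sketched in plain Python: with
  `verbose_feature_names_out=True` every output name is prefixed with its
  transformer's name; with `False` the raw names are kept but must be unique.
  An illustrative stdlib-only sketch (`union_feature_names` is hypothetical,
  not the scikit-learn implementation):

  ```python
  def union_feature_names(named_outputs, verbose_feature_names_out=True):
      # named_outputs: list of (transformer_name, output_feature_names) pairs.
      if verbose_feature_names_out:
          return [
              f"{name}__{feat}"
              for name, feats in named_outputs
              for feat in feats
          ]
      flat = [feat for _, feats in named_outputs for feat in feats]
      if len(set(flat)) != len(flat):
          raise ValueError("Output feature names are not unique.")
      return flat

  outputs = [("pca", ["pc1", "pc2"]), ("select", ["age"])]
  print(union_feature_names(outputs))         # ['pca__pc1', 'pca__pc2', 'select__age']
  print(union_feature_names(outputs, False))  # ['pc1', 'pc2', 'age']
  ```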
:mod:`sklearn.preprocessing`
............................
- |Enhancement| :class:`preprocessing.QuantileTransformer` and
  :func:`preprocessing.quantile_transform` now support disabling
  subsampling explicitly.
:pr:`27636` by :user:`Ralph Urlus <rurlus>`.
:mod:`sklearn.tree`
...................
- |Enhancement| Plotting trees in matplotlib via :func:`tree.plot_tree` now
  shows a "True/False" label to indicate the direction samples traverse
  given the split condition.
:pr:`28552` by :user:`Adam Li <adam2392>`.
:mod:`sklearn.utils`
....................
- |Fix| :func:`~utils._safe_indexing` now works correctly for polars DataFrame when
`axis=0` and supports indexing polars Series.
:pr:`28521` by :user:`Yao Xiao <Charlie-XIAO>`.
- |API| :data:`utils.IS_PYPY` is deprecated and will be removed in version 1.7.
:pr:`28768` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| :func:`utils.tosequence` is deprecated and will be removed in version 1.7.
:pr:`28763` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| :class:`utils.parallel_backend` and :func:`utils.register_parallel_backend` are
deprecated and will be removed in version 1.7. Use `joblib.parallel_backend` and
`joblib.register_parallel_backend` instead.
:pr:`28847` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
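  A hypothetical migration snippet: import the context manager from joblib
  directly (joblib is a hard dependency of scikit-learn, so it is assumed to be
  installed; the workload here is illustrative).

  ```python
  from joblib import Parallel, delayed, parallel_backend

  # Instead of sklearn.utils.parallel_backend, use joblib.parallel_backend.
  with parallel_backend("threading", n_jobs=2):
      squares = Parallel()(delayed(lambda i: i * i)(i) for i in range(5))
  print(squares)  # [0, 1, 4, 9, 16]
  ```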
- |API| Raise an informative warning message in :func:`~utils.multiclass.type_of_target`
  when labels are represented as bytes. For classifiers and classification metrics,
  labels encoded as bytes are deprecated and will raise an error in v1.7.
:pr:`18555` by :user:`Kaushik Amar Das <cozek>`.
- |API| :func:`utils.estimator_checks.check_estimator_sparse_data` was split into two
functions: :func:`utils.estimator_checks.check_estimator_sparse_matrix` and
:func:`utils.estimator_checks.check_estimator_sparse_array`.
:pr:`27576` by :user:`Stefanie Senger <StefanieSenger>`.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 1.4, including:
101AlexMartin, Abdulaziz Aloqeely, Adam J. Stewart, Adam Li, Adarsh Wase,
Adeyemi Biola, Aditi Juneja, Adrin Jalali, Advik Sinha, Aisha, Akash
Srivastava, Akihiro Kuno, Alan Guedes, Alberto Torres, Alexis IMBERT, alexqiao,
Ana Paula Gomes, Anderson Nelson, Andrei Dzis, Arif Qodari, Arnaud Capitaine,
Arturo Amor, Aswathavicky, Audrey Flanders, awwwyan, baggiponte, Bharat
Raghunathan, bme-git, brdav, Brendan Lu, Brigitta Sipőcz, Bruno, Cailean
Carter, Cemlyn, Christian Lorentzen, Christian Veenhuis, Cindy Liang, Claudio
Salvatore Arcidiacono, Connor Boyle, Conrad Stevens, crispinlogan, David
Matthew Cherney, Davide Chicco, davidleon123, dependabot[bot], DerWeh, dinga92,
Dipan Banik, Drew Craeton, Duarte São José, DUONG, Eddie Bergman, Edoardo
Abati, Egehan Gunduz, Emad Izadifar, EmilyXinyi, Erich Schubert, Evelyn, Filip
Karlo Došilović, Franck Charras, Gael Varoquaux, Gönül Aycı, Guillaume
Lemaitre, Gyeongjae Choi, Harmanan Kohli, Hong Xiang Yue, Ian Faust, Ilya
Komarov, itsaphel, Ivan Wiryadi, Jack Bowyer, Javier Marin Tur, Jérémie du
Boisberranger, Jérôme Dockès, Jiawei Zhang, João Morais, Joe Cainey, Joel
Nothman, Johanna Bayer, John Cant, John Enblom, John Hopfensperger, jpcars,
jpienaar-tuks, Julian Chan, Julian Libiseller-Egger, Julien Jerphanion,
KanchiMoe, Kaushik Amar Das, keyber, Koustav Ghosh, kraktus, Krsto Proroković,
Lars, ldwy4, LeoGrin, lihaitao, Linus Sommer, Loic Esteve, Lucy Liu, Lukas
Geiger, m-maggi, manasimj, Manuel Labbé, Manuel Morales, Marco Edward Gorelli,
Marco Wolsza, Maren Westermann, Marija Vlajic, Mark Elliot, Martin Helm,
Mateusz Sokół, mathurinm, Mavs, Michael Dawson, Michael Higgins, Michael Mayer,
miguelcsilva, Miki Watanabe, Mohammed Hamdy, myenugula, Nathan Goldbaum, Naziya
Mahimkar, nbrown-ScottLogic, Neto, Nithish Bolleddula, notPlancha, Olivier
Grisel, Omar Salman, ParsifalXu, Patrick Wang, Pierre de Fréminville, Piotr,
Priyank Shroff, Priyansh Gupta, Priyash Shah, Puneeth K, Rahil Parikh, raisadz,
Raj Pulapakura, Ralf Gommers, Ralph Urlus, Randolf Scholz, renaissance0ne,
Reshama Shaikh, Richard Barnes, Robert Pollak, Roberto Rosati, Rodrigo Romero,
rwelsch427, Saad Mahmood, Salim Dohri, Sandip Dutta, SarahRemus,
scikit-learn-bot, Shaharyar Choudhry, Shubham, sperret6, Stefanie Senger,
Steffen Schneider, Suha Siddiqui, Thanh Lam DANG, thebabush, Thomas, Thomas J.
Fan, Thomas Lazarus, Tialo, Tim Head, Tuhin Sharma, Tushar Parimi,
VarunChaduvula, Vineet Joshi, virchan, Waël Boukhobza, Weyb, Will Dean, Xavier
Beltran, Xiao Yuan, Xuefeng Xu, Yao Xiao, yareyaredesuyo, Ziad Amerr, Štěpán
Sršeň
Rosati Rodrigo Romero rwelsch427 Saad Mahmood Salim Dohri Sandip Dutta SarahRemus scikit learn bot Shaharyar Choudhry Shubham sperret6 Stefanie Senger Steffen Schneider Suha Siddiqui Thanh Lam DANG thebabush Thomas Thomas J Fan Thomas Lazarus Tialo Tim Head Tuhin Sharma Tushar Parimi VarunChaduvula Vineet Joshi virchan Wa l Boukhobza Weyb Will Dean Xavier Beltran Xiao Yuan Xuefeng Xu Yao Xiao yareyaredesuyo Ziad Amerr t p n Sr e |
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_1_4:
===========
Version 1.4
===========
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_4_0.py`.
.. include:: changelog_legend.inc
.. _changes_1_4_2:
Version 1.4.2
=============
**April 2024**
This release only includes support for numpy 2.
.. _changes_1_4_1:
Version 1.4.1
=============
**February 2024**
Changed models
--------------
- |API| The `tree_.value` attribute in :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and
:class:`tree.ExtraTreeRegressor` changed from an weighted absolute count
of number of samples to a weighted fraction of the total number of samples.
:pr:`27639` by :user:`Samuel Ronsin <samronsin>`.
Metadata Routing
----------------
- |FIX| Fix routing issue with :class:`~compose.ColumnTransformer` when used
inside another meta-estimator.
:pr:`28188` by `Adrin Jalali`_.
- |Fix| No error is raised when no metadata is passed to a meta-estimator that
  includes a sub-estimator which doesn't support metadata routing.
:pr:`28256` by `Adrin Jalali`_.
- |Fix| Fix :class:`multioutput.MultiOutputRegressor` and
:class:`multioutput.MultiOutputClassifier` to work with estimators that don't
consume any metadata when metadata routing is enabled.
:pr:`28240` by `Adrin Jalali`_.
DataFrame Support
-----------------
- |Enhancement| |Fix| Pandas and Polars dataframes are validated directly,
  without duck-typing checks.
:pr:`28195` by `Thomas Fan`_.
Changes impacting many modules
------------------------------
- |Efficiency| |Fix| Partial revert of :pr:`28191` to avoid a performance regression for
estimators relying on euclidean pairwise computation with
sparse matrices. The impacted estimators are:
- :func:`sklearn.metrics.pairwise_distances_argmin`
- :func:`sklearn.metrics.pairwise_distances_argmin_min`
- :class:`sklearn.cluster.AffinityPropagation`
- :class:`sklearn.cluster.Birch`
- :class:`sklearn.cluster.SpectralClustering`
- :class:`sklearn.neighbors.KNeighborsClassifier`
- :class:`sklearn.neighbors.KNeighborsRegressor`
- :class:`sklearn.neighbors.RadiusNeighborsClassifier`
- :class:`sklearn.neighbors.RadiusNeighborsRegressor`
- :class:`sklearn.neighbors.LocalOutlierFactor`
- :class:`sklearn.neighbors.NearestNeighbors`
- :class:`sklearn.manifold.Isomap`
- :class:`sklearn.manifold.TSNE`
- :func:`sklearn.manifold.trustworthiness`
:pr:`28235` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| Fixes a bug for all scikit-learn transformers when using `set_output` with
`transform` set to `pandas` or `polars`. The bug could lead to wrong naming of the
columns of the returned dataframe.
:pr:`28262` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| When users try to use a method in :class:`~ensemble.StackingClassifier`,
  :class:`~ensemble.StackingRegressor`,
:class:`~feature_selection.SelectFromModel`, :class:`~feature_selection.RFE`,
:class:`~semi_supervised.SelfTrainingClassifier`,
:class:`~multiclass.OneVsOneClassifier`, :class:`~multiclass.OutputCodeClassifier` or
:class:`~multiclass.OneVsRestClassifier` that their sub-estimators don't implement,
  the `AttributeError` is now reraised in the traceback.
:pr:`28167` by :user:`Stefanie Senger <StefanieSenger>`.
Changelog
---------
:mod:`sklearn.calibration`
..........................
- |Fix| :class:`calibration.CalibratedClassifierCV` supports :term:`predict_proba` with
float32 output from the inner estimator. :pr:`28247` by `Thomas Fan`_.
:mod:`sklearn.cluster`
......................
- |Fix| :class:`cluster.AffinityPropagation` now avoids assigning multiple different
clusters for equal points.
:pr:`28121` by :user:`Pietro Peterlongo <pietroppeter>` and
:user:`Yao Xiao <Charlie-XIAO>`.
- |Fix| Avoid infinite loop in :class:`cluster.KMeans` when the number of clusters is
larger than the number of non-duplicate samples.
:pr:`28165` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.compose`
......................
- |Fix| :class:`compose.ColumnTransformer` now transforms into a polars
  dataframe when `verbose_feature_names_out=True` and the transformers
  internally use the same columns several times. Previously, it would raise an
  error due to duplicated column names.
:pr:`28262` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| Fixes a bug in :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` when fitted on `pandas`
  `DataFrame` with extension dtypes, for example `pd.Int64Dtype`.
  :pr:`28385` by :user:`Loïc Estève <lesteve>`.
- |Fix| Fixes error message raised by :class:`ensemble.VotingClassifier` when the
target is multilabel or multiclass-multioutput in a DataFrame format.
:pr:`27702` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.impute`
.....................
- |Fix| :class:`impute.SimpleImputer` now raises an error in `.fit` and
  `.transform` if `fill_value` cannot be cast to the input dtype with
`casting='same_kind'`.
:pr:`28365` by :user:`Leo Grinsztajn <LeoGrin>`.
:mod:`sklearn.inspection`
.........................
- |Fix| :func:`inspection.permutation_importance` now handles properly `sample_weight`
together with subsampling (i.e. `max_features` < 1.0).
:pr:`28184` by :user:`Michael Mayer <mayer79>`.
:mod:`sklearn.linear_model`
...........................
- |Fix| :class:`linear_model.ARDRegression` now handles pandas input types
for `predict(X, return_std=True)`.
:pr:`28377` by :user:`Eddie Bergman <eddiebergman>`.
:mod:`sklearn.preprocessing`
............................
- |Fix| make :class:`preprocessing.FunctionTransformer` more lenient and
  overwrite output column names with `get_feature_names_out` in the following
  cases: (i) the input and output column names remain the same (which can
  happen when using a NumPy `ufunc`); (ii) the input column names are numbers;
  (iii) the output is set to a Pandas or Polars dataframe.
:pr:`28241` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :class:`preprocessing.FunctionTransformer` now also warns when `set_output`
is called with `transform="polars"` and `func` does not return a Polars dataframe or
`feature_names_out` is not specified.
:pr:`28263` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :class:`preprocessing.TargetEncoder` no longer fails when
`target_type="continuous"` and the input is read-only. In particular, it now
works with pandas copy-on-write mode enabled.
:pr:`28233` by :user:`John Hopfensperger <s-banach>`.
:mod:`sklearn.tree`
...................
- |Fix| :class:`tree.DecisionTreeClassifier` and
  :class:`tree.DecisionTreeRegressor` now handle missing values properly. The internal
criterion was not initialized when no missing values were present in the data, leading
to potentially wrong criterion values.
:pr:`28295` by :user:`Guillaume Lemaitre <glemaitre>` and
:pr:`28327` by :user:`Adam Li <adam2392>`.
:mod:`sklearn.utils`
....................
- |Enhancement| |Fix| :func:`utils.metaestimators.available_if` now reraises the error
from the `check` function as the cause of the `AttributeError`.
:pr:`28198` by `Thomas Fan`_.
- |Fix| :func:`utils._safe_indexing` now raises a `ValueError` when `X` is a Python list
and `axis=1`, as documented in the docstring.
:pr:`28222` by :user:`Guillaume Lemaitre <glemaitre>`.
.. _changes_1_4:
Version 1.4.0
=============
**January 2024**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Efficiency| :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` now have much better convergence for
solvers `"lbfgs"` and `"newton-cg"`. Both solvers can now reach much higher precision
for the coefficients depending on the specified `tol`. Additionally, lbfgs can
make better use of `tol`, i.e., stop sooner or reach higher precision.
  Note: lbfgs is the default solver, so this change might affect many models.
This change also means that with this new version of scikit-learn, the resulting
coefficients `coef_` and `intercept_` of your models will change for these two
solvers (when fit on the same data again). The amount of change depends on the
  specified `tol`; for small values you will get more precise results.
:pr:`26721` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Fix| fixes a memory leak seen in PyPy for estimators using the Cython loss functions.
:pr:`27670` by :user:`Guillaume Lemaitre <glemaitre>`.
Changes impacting all modules
-----------------------------
- |MajorFeature| Transformers now support polars output with
`set_output(transform="polars")`.
:pr:`27315` by `Thomas Fan`_.
- |Enhancement| All estimators now recognize the column names from any dataframe
  that adopts the
  `DataFrame Interchange Protocol <https://data-apis.org/dataframe-protocol/latest/purpose_and_scope.html>`__.
  Dataframes that return a correct representation through `np.asarray(df)` are
  expected to work with our estimators and functions.
:pr:`26464` by `Thomas Fan`_.
- |Enhancement| The HTML representation of estimators now includes a link to the
documentation and is color-coded to denote whether the estimator is fitted or
not (unfitted estimators are orange, fitted estimators are blue).
:pr:`26616` by :user:`Riccardo Cappuzzo <rcap107>`,
:user:`Ines Ibnukhsein <Ines1999>`, :user:`Gael Varoquaux <GaelVaroquaux>`,
`Joel Nothman`_ and :user:`Lilian Boulard <LilianBoulard>`.
- |Fix| Fixed a bug in most estimators and functions where setting a parameter to
a large integer would cause a `TypeError`.
:pr:`26648` by :user:`Naoise Holohan <naoise-h>`.
Metadata Routing
----------------
The following models now support metadata routing in one or more of their
methods. Refer to the :ref:`Metadata Routing User Guide <metadata_routing>` for
more details.
- |Feature| :class:`linear_model.LarsCV` and :class:`linear_model.LassoLarsCV` now support metadata
routing in their `fit` method and route metadata to the CV splitter.
:pr:`27538` by :user:`Omar Salman <OmarManzoor>`.
- |Feature| :class:`multiclass.OneVsRestClassifier`,
:class:`multiclass.OneVsOneClassifier` and
:class:`multiclass.OutputCodeClassifier` now support metadata routing in
their ``fit`` and ``partial_fit``, and route metadata to the underlying
estimator's ``fit`` and ``partial_fit``.
:pr:`27308` by :user:`Stefanie Senger <StefanieSenger>`.
- |Feature| :class:`pipeline.Pipeline` now supports metadata routing according
to :ref:`metadata routing user guide <metadata_routing>`.
:pr:`26789` by `Adrin Jalali`_.
- |Feature| :func:`~model_selection.cross_validate`,
:func:`~model_selection.cross_val_score`, and
  :func:`~model_selection.cross_val_predict` now support metadata routing.
  Metadata is routed to the estimator's `fit`, the scorer, and the CV
  splitter's `split`, and is accepted via the new `params` parameter.
  `fit_params` is deprecated and will be removed in version 1.6. The `groups`
  parameter is no longer accepted as a separate argument when metadata routing
  is enabled; it should be passed via the `params` parameter.
:pr:`26896` by `Adrin Jalali`_.
- |Feature| :class:`~model_selection.GridSearchCV`,
:class:`~model_selection.RandomizedSearchCV`,
:class:`~model_selection.HalvingGridSearchCV`, and
:class:`~model_selection.HalvingRandomSearchCV` now support metadata routing
in their ``fit`` and ``score``, and route metadata to the underlying
estimator's ``fit``, the CV splitter, and the scorer.
:pr:`27058` by `Adrin Jalali`_.
- |Feature| :class:`~compose.ColumnTransformer` now supports metadata routing
according to :ref:`metadata routing user guide <metadata_routing>`.
:pr:`27005` by `Adrin Jalali`_.
- |Feature| :class:`linear_model.LogisticRegressionCV` now supports
metadata routing. :meth:`linear_model.LogisticRegressionCV.fit` now
accepts ``**params`` which are passed to the underlying splitter and
scorer. :meth:`linear_model.LogisticRegressionCV.score` now accepts
``**score_params`` which are passed to the underlying scorer.
:pr:`26525` by :user:`Omar Salman <OmarManzoor>`.
- |Feature| :class:`feature_selection.SelectFromModel` now supports metadata
routing in `fit` and `partial_fit`.
:pr:`27490` by :user:`Stefanie Senger <StefanieSenger>`.
- |Feature| :class:`linear_model.OrthogonalMatchingPursuitCV` now supports
metadata routing. Its `fit` now accepts ``**fit_params``, which are passed to
the underlying splitter.
:pr:`27500` by :user:`Stefanie Senger <StefanieSenger>`.
- |Feature| :class:`linear_model.ElasticNetCV`, :class:`linear_model.LassoCV`,
  :class:`linear_model.MultiTaskElasticNetCV` and
  :class:`linear_model.MultiTaskLassoCV`
now support metadata routing and route metadata to the CV splitter.
:pr:`27478` by :user:`Omar Salman <OmarManzoor>`.
- |Fix| All meta-estimators for which metadata routing is not yet implemented
now raise a `NotImplementedError` on `get_metadata_routing` and on `fit` if
metadata routing is enabled and any metadata is passed to them.
:pr:`27389` by `Adrin Jalali`_.
Support for SciPy sparse arrays
-------------------------------
Several estimators now support SciPy sparse arrays. The following functions
and classes are impacted:
**Functions:**
- :func:`cluster.compute_optics_graph` in :pr:`27104` by
:user:`Maren Westermann <marenwestermann>` and in :pr:`27250` by
:user:`Yao Xiao <Charlie-XIAO>`;
- :func:`cluster.kmeans_plusplus` in :pr:`27179` by :user:`Nurseit Kamchyev <Bncer>`;
- :func:`decomposition.non_negative_factorization` in :pr:`27100` by
:user:`Isaac Virshup <ivirshup>`;
- :func:`feature_selection.f_regression` in :pr:`27239` by
:user:`Yaroslav Korobko <Tialo>`;
- :func:`feature_selection.r_regression` in :pr:`27239` by
:user:`Yaroslav Korobko <Tialo>`;
- :func:`manifold.trustworthiness` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`manifold.spectral_embedding` in :pr:`27240` by :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`metrics.pairwise_distances` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`metrics.pairwise_distances_chunked` in :pr:`27250` by
:user:`Yao Xiao <Charlie-XIAO>`;
- :func:`metrics.pairwise.pairwise_kernels` in :pr:`27250` by
:user:`Yao Xiao <Charlie-XIAO>`;
- :func:`utils.multiclass.type_of_target` in :pr:`27274` by
:user:`Yao Xiao <Charlie-XIAO>`.
**Classes:**
- :class:`cluster.HDBSCAN` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;
- :class:`cluster.KMeans` in :pr:`27179` by :user:`Nurseit Kamchyev <Bncer>`;
- :class:`cluster.MiniBatchKMeans` in :pr:`27179` by :user:`Nurseit Kamchyev <Bncer>`;
- :class:`cluster.OPTICS` in :pr:`27104` by
:user:`Maren Westermann <marenwestermann>` and in :pr:`27250` by
:user:`Yao Xiao <Charlie-XIAO>`;
- :class:`cluster.SpectralClustering` in :pr:`27161` by
:user:`Bharat Raghunathan <bharatr21>`;
- :class:`decomposition.MiniBatchNMF` in :pr:`27100` by
:user:`Isaac Virshup <ivirshup>`;
- :class:`decomposition.NMF` in :pr:`27100` by :user:`Isaac Virshup <ivirshup>`;
- :class:`feature_extraction.text.TfidfTransformer` in :pr:`27219` by
:user:`Yao Xiao <Charlie-XIAO>`;
- :class:`manifold.Isomap` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;
- :class:`manifold.SpectralEmbedding` in :pr:`27240` by :user:`Yao Xiao <Charlie-XIAO>`;
- :class:`manifold.TSNE` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;
- :class:`impute.SimpleImputer` in :pr:`27277` by :user:`Yao Xiao <Charlie-XIAO>`;
- :class:`impute.IterativeImputer` in :pr:`27277` by :user:`Yao Xiao <Charlie-XIAO>`;
- :class:`impute.KNNImputer` in :pr:`27277` by :user:`Yao Xiao <Charlie-XIAO>`;
- :class:`kernel_approximation.PolynomialCountSketch` in :pr:`27301` by
:user:`Lohit SundaramahaLingam <lohitslohit>`;
- :class:`neural_network.BernoulliRBM` in :pr:`27252` by
:user:`Yao Xiao <Charlie-XIAO>`;
- :class:`preprocessing.PolynomialFeatures` in :pr:`27166` by
:user:`Mohit Joshi <work-mohit>`;
- :class:`random_projection.GaussianRandomProjection` in :pr:`27314` by
:user:`Stefanie Senger <StefanieSenger>`;
- :class:`random_projection.SparseRandomProjection` in :pr:`27314` by
:user:`Stefanie Senger <StefanieSenger>`.
Support for Array API
---------------------
Several estimators and functions support the
`Array API <https://data-apis.org/array-api/latest/>`_. Such changes allow using
the estimators and functions with other libraries such as JAX, CuPy, and PyTorch,
which enables some GPU-accelerated computations.
See :ref:`array_api` for more details.
**Functions:**
- :func:`sklearn.metrics.accuracy_score` and :func:`sklearn.metrics.zero_one_loss` in
:pr:`27137` by :user:`Edoardo Abati <EdAbati>`;
- :func:`sklearn.model_selection.train_test_split` in :pr:`26855` by `Tim Head`_;
- :func:`~utils.multiclass.is_multilabel` in :pr:`27601` by
:user:`Yaroslav Korobko <Tialo>`.
**Classes:**
- :class:`decomposition.PCA` for the `full` and `randomized` solvers (with QR power
iterations) in :pr:`26315`, :pr:`27098` and :pr:`27431` by
:user:`Mateusz Sokół <mtsokol>`, :user:`Olivier Grisel <ogrisel>` and
:user:`Edoardo Abati <EdAbati>`;
- :class:`preprocessing.KernelCenterer` in :pr:`27556` by
:user:`Edoardo Abati <EdAbati>`;
- :class:`preprocessing.MaxAbsScaler` in :pr:`27110` by :user:`Edoardo Abati <EdAbati>`;
- :class:`preprocessing.MinMaxScaler` in :pr:`26243` by `Tim Head`_;
- :class:`preprocessing.Normalizer` in :pr:`27558` by :user:`Edoardo Abati <EdAbati>`.
Private Loss Function Module
----------------------------
- |FIX| The gradient computation of the binomial log loss is now numerically
  more stable for inputs (raw predictions) that are very large in absolute
  value. Before, it could result in `np.nan`. Among the models that profit
  from this change are
:class:`ensemble.GradientBoostingClassifier`,
:class:`ensemble.HistGradientBoostingClassifier` and
:class:`linear_model.LogisticRegression`.
:pr:`28048` by :user:`Christian Lorentzen <lorentzenchr>`.
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123455 is the *pull request* number, not the issue number.
:mod:`sklearn.base`
...................
- |Enhancement| :meth:`base.ClusterMixin.fit_predict` and
:meth:`base.OutlierMixin.fit_predict` now accept ``**kwargs`` which are
passed to the ``fit`` method of the estimator.
:pr:`26506` by `Adrin Jalali`_.
- |Enhancement| :meth:`base.TransformerMixin.fit_transform` and
:meth:`base.OutlierMixin.fit_predict` now raise a warning if ``transform`` /
``predict`` consume metadata, but no custom ``fit_transform`` / ``fit_predict``
is defined in the class inheriting from them correspondingly.
:pr:`26831` by `Adrin Jalali`_.
- |Enhancement| :func:`base.clone` now supports `dict` as input and creates a
copy.
:pr:`26786` by `Adrin Jalali`_.
- |API| :func:`~utils.metadata_routing.process_routing` now has a different
  signature. The first two arguments (the object and the method) are
  positional-only, and all metadata are passed as keyword arguments.
:pr:`26909` by `Adrin Jalali`_.
:mod:`sklearn.calibration`
..........................
- |Enhancement| The internal objective and gradient of the `sigmoid` method
of :class:`calibration.CalibratedClassifierCV` have been replaced by the
private loss module.
:pr:`27185` by :user:`Omar Salman <OmarManzoor>`.
:mod:`sklearn.cluster`
......................
- |Fix| The `degree` parameter in the :class:`cluster.SpectralClustering`
constructor now accepts real values instead of only integral values in
accordance with the `degree` parameter of the
:class:`sklearn.metrics.pairwise.polynomial_kernel`.
:pr:`27668` by :user:`Nolan McMahon <NolantheNerd>`.
- |Fix| Fixes a bug in :class:`cluster.OPTICS` where the cluster correction based
  on predecessor was not using the right indexing. It would lead to inconsistent
  results dependent on the order of the data.
:pr:`26459` by :user:`Haoying Zhang <stevezhang1999>` and
:user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Improve error message when checking the number of connected components
in the `fit` method of :class:`cluster.HDBSCAN`.
:pr:`27678` by :user:`Ganesh Tata <tataganesh>`.
- |Fix| Create copy of precomputed sparse matrix within the
`fit` method of :class:`cluster.DBSCAN` to avoid in-place modification of
the sparse matrix.
:pr:`27651` by :user:`Ganesh Tata <tataganesh>`.
- |Fix| Raises a proper `ValueError` when `metric="precomputed"` and storing the
  centers is requested via the `store_centers` parameter.
:pr:`27898` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| `kdtree` and `balltree` values are now deprecated and are renamed as
`kd_tree` and `ball_tree` respectively for the `algorithm` parameter of
:class:`cluster.HDBSCAN` ensuring consistency in naming convention.
`kdtree` and `balltree` values will be removed in 1.6.
:pr:`26744` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.
- |API| The option `metric=None` in
:class:`cluster.AgglomerativeClustering` and :class:`cluster.FeatureAgglomeration`
is deprecated in version 1.4 and will be removed in version 1.6. Use the default
value instead.
:pr:`27828` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.compose`
......................
- |MajorFeature| Adds `polars <https://www.pola.rs>`__ input support to
:class:`compose.ColumnTransformer` through the `DataFrame Interchange Protocol
<https://data-apis.org/dataframe-protocol/latest/purpose_and_scope.html>`__.
The minimum supported version for polars is `0.19.12`.
:pr:`26683` by `Thomas Fan`_.
- |Fix| :func:`cluster.spectral_clustering` and :class:`cluster.SpectralClustering`
now raise an explicit error message indicating that sparse matrices and arrays
with `np.int64` indices are not supported.
:pr:`27240` by :user:`Yao Xiao <Charlie-XIAO>`.
- |API| outputs that use pandas extension dtypes and contain `pd.NA` in
:class:`~compose.ColumnTransformer` now result in a `FutureWarning` and will
cause a `ValueError` in version 1.6, unless the output container has been
configured as "pandas" with `set_output(transform="pandas")`. Before, such
outputs resulted in numpy arrays of dtype `object` containing `pd.NA` which
could not be converted to numpy floats and caused errors when passed to other
scikit-learn estimators.
:pr:`27734` by :user:`Jérôme Dockès <jeromedockes>`.
:mod:`sklearn.covariance`
.........................
- |Enhancement| Allow :func:`covariance.shrunk_covariance` to process
multiple covariance matrices at once by handling nd-arrays.
:pr:`25275` by :user:`Quentin Barthélemy <qbarthelemy>`.
- |API| |FIX| :class:`~compose.ColumnTransformer` now replaces `"passthrough"`
with a corresponding :class:`~preprocessing.FunctionTransformer` in the
fitted ``transformers_`` attribute.
:pr:`27204` by `Adrin Jalali`_.
:mod:`sklearn.datasets`
.......................
- |Enhancement| :func:`datasets.make_sparse_spd_matrix` now uses a more
  memory-efficient sparse layout. It also accepts a new keyword `sparse_format` that allows
specifying the output format of the sparse matrix. By default `sparse_format=None`,
which returns a dense numpy ndarray as before.
:pr:`27438` by :user:`Yao Xiao <Charlie-XIAO>`.
- |Fix| :func:`datasets.dump_svmlight_file` now does not raise `ValueError` when `X`
is read-only, e.g., a `numpy.memmap` instance.
:pr:`28111` by :user:`Yao Xiao <Charlie-XIAO>`.
- |API| :func:`datasets.make_sparse_spd_matrix` deprecated the keyword argument ``dim``
in favor of ``n_dim``. ``dim`` will be removed in version 1.6.
:pr:`27718` by :user:`Adam Li <adam2392>`.
:mod:`sklearn.decomposition`
............................
- |Feature| :class:`decomposition.PCA` now supports :class:`scipy.sparse.sparray`
and :class:`scipy.sparse.spmatrix` inputs when using the `arpack` solver.
When used on sparse data like :func:`datasets.fetch_20newsgroups_vectorized` this
can lead to speed-ups of 100x (single threaded) and 70x lower memory usage.
Based on :user:`Alexander Tarashansky <atarashansky>`'s implementation in
`scanpy <https://github.com/scverse/scanpy>`_.
:pr:`18689` by :user:`Isaac Virshup <ivirshup>` and
:user:`Andrey Portnoy <andportnoy>`.
- |Enhancement| An "auto" option was added to the `n_components` parameter of
:func:`decomposition.non_negative_factorization`, :class:`decomposition.NMF` and
:class:`decomposition.MiniBatchNMF` to automatically infer the number of components
from W or H shapes when using a custom initialization. The default value of this
parameter will change from `None` to `auto` in version 1.6.
:pr:`26634` by :user:`Alexandre Landeau <AlexL>` and :user:`Alexandre Vigny <avigny>`.
- |Fix| :func:`decomposition.dict_learning_online` does not ignore anymore the parameter
`max_iter`.
:pr:`27834` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| The `degree` parameter in the :class:`decomposition.KernelPCA`
constructor now accepts real values instead of only integral values in
accordance with the `degree` parameter of the
:class:`sklearn.metrics.pairwise.polynomial_kernel`.
:pr:`27668` by :user:`Nolan McMahon <NolantheNerd>`.
- |API| The option `max_iter=None` in
:class:`decomposition.MiniBatchDictionaryLearning`,
:class:`decomposition.MiniBatchSparsePCA`, and
:func:`decomposition.dict_learning_online` is deprecated and will be removed in
version 1.6. Use the default value instead.
:pr:`27834` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.ensemble`
.......................
- |MajorFeature| :class:`ensemble.RandomForestClassifier` and
:class:`ensemble.RandomForestRegressor` support missing values when
the criterion is `gini`, `entropy`, or `log_loss`,
for classification or `squared_error`, `friedman_mse`, or `poisson`
for regression.
:pr:`26391` by `Thomas Fan`_.
- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` supports
`categorical_features="from_dtype"`, which treats columns with Pandas or
Polars Categorical dtype as categories in the algorithm.
`categorical_features="from_dtype"` will become the default in v1.6.
Categorical features no longer need to be encoded with numbers. When
categorical features are numbers, the maximum value no longer needs to be
smaller than `max_bins`; only the number of (unique) categories must be
smaller than `max_bins`.
:pr:`26411` by `Thomas Fan`_ and :pr:`27835` by :user:`Jérôme Dockès <jeromedockes>`.
- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` got the new parameter
`max_features` to specify the proportion of randomly chosen features considered
in each split.
:pr:`27139` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Feature| :class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier`
and :class:`ensemble.ExtraTreesRegressor` now support monotonic constraints,
useful when features are supposed to have a positive/negative effect on the target.
Missing values in the train data and multi-output targets are not supported.
:pr:`13649` by :user:`Samuel Ronsin <samronsin>`,
initiated by :user:`Patrick O'Reilly <pat-oreilly>`.
- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` are now a bit faster by reusing
  the parent node's histogram as a child node's histogram in the subtraction trick.
In effect, less memory has to be allocated and deallocated.
:pr:`27865` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Efficiency| :class:`ensemble.GradientBoostingClassifier` is faster,
for binary and in particular for multiclass problems thanks to the private loss
function module.
:pr:`26278` and :pr:`28095` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Efficiency| Improves runtime and memory usage for
:class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` when trained on sparse data.
:pr:`26957` by `Thomas Fan`_.
- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` are now faster when `scoring`
is a predefined metric listed in :func:`metrics.get_scorer_names` and
early stopping is enabled.
:pr:`26163` by `Thomas Fan`_.
- |Enhancement| A fitted property, ``estimators_samples_``, was added to all Forest
methods, including
:class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor`,
  which allows retrieving the training sample indices used for each tree estimator.
:pr:`26736` by :user:`Adam Li <adam2392>`.
- |Fix| Fixes :class:`ensemble.IsolationForest` when the input is a sparse matrix and
`contamination` is set to a float value.
:pr:`27645` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Raises a `ValueError` in :class:`ensemble.RandomForestRegressor` and
  :class:`ensemble.ExtraTreesRegressor` when requesting the OOB score for a
  multioutput model whose targets all round to integers, as such targets were
  previously misrecognized as a multiclass problem.
  :pr:`27817` by :user:`Daniele Ongari <danieleongari>`.
- |Fix| Changes estimator tags to acknowledge that
:class:`ensemble.VotingClassifier`, :class:`ensemble.VotingRegressor`,
:class:`ensemble.StackingClassifier`, and :class:`ensemble.StackingRegressor`
support missing values if all `estimators` support missing values.
:pr:`27710` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Support loading pickles of :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` when the pickle has
been generated on a platform with a different bitness. A typical example is
to train and pickle the model on a 64-bit machine and load it on a 32-bit
machine for prediction.
:pr:`28074` by :user:`Christian Lorentzen <lorentzenchr>` and
:user:`Loïc Estève <lesteve>`.
- |API| In :class:`ensemble.AdaBoostClassifier`, the `algorithm` argument `SAMME.R` was
deprecated and will be removed in 1.6.
:pr:`26830` by :user:`Stefanie Senger <StefanieSenger>`.
:mod:`sklearn.feature_extraction`
.................................
- |API| Changed error type from :class:`AttributeError` to
:class:`exceptions.NotFittedError` in unfitted instances of
:class:`feature_extraction.DictVectorizer` for the following methods:
:func:`feature_extraction.DictVectorizer.inverse_transform`,
:func:`feature_extraction.DictVectorizer.restrict`,
:func:`feature_extraction.DictVectorizer.transform`.
:pr:`24838` by :user:`Lorenz Hertel <LoHertel>`.
:mod:`sklearn.feature_selection`
................................
- |Enhancement| :class:`feature_selection.SelectKBest`,
:class:`feature_selection.SelectPercentile`, and
:class:`feature_selection.GenericUnivariateSelect` now support unsupervised
feature selection by providing a `score_func` taking `X` and `y=None`.
:pr:`27721` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :class:`feature_selection.SelectKBest` and
:class:`feature_selection.GenericUnivariateSelect` with `mode='k_best'`
now show a warning when `k` is greater than the number of features.
:pr:`27841` by `Thomas Fan`_.
- |Fix| :class:`feature_selection.RFE` and :class:`feature_selection.RFECV` no
longer check for NaNs during input validation.
:pr:`21807` by `Thomas Fan`_.
:mod:`sklearn.inspection`
.........................
- |Enhancement| :class:`inspection.DecisionBoundaryDisplay` now accepts a parameter
`class_of_interest` to select the class of interest when plotting the response
provided by `response_method="predict_proba"` or
`response_method="decision_function"`. This allows plotting the decision boundary for
both binary and multiclass classifiers.
:pr:`27291` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :meth:`inspection.DecisionBoundaryDisplay.from_estimator` and
:class:`inspection.PartialDependenceDisplay.from_estimator` now return the correct
type for subclasses.
:pr:`27675` by :user:`John Cant <johncant>`.
- |API| :class:`inspection.DecisionBoundaryDisplay` now raises an `AttributeError`
instead of a `ValueError` when an estimator does not implement the requested response method.
:pr:`27291` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.kernel_ridge`
...........................
- |Fix| The `degree` parameter in the :class:`kernel_ridge.KernelRidge`
constructor now accepts real values instead of only integral values in
accordance with the `degree` parameter of
:func:`sklearn.metrics.pairwise.polynomial_kernel`.
:pr:`27668` by :user:`Nolan McMahon <NolantheNerd>`.
:mod:`sklearn.linear_model`
...........................
- |Efficiency| :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` now have much better convergence for
solvers `"lbfgs"` and `"newton-cg"`. Both solvers can now reach much higher precision
for the coefficients depending on the specified `tol`. Additionally, lbfgs can
make better use of `tol`, i.e., stop sooner or reach higher precision. This is
accomplished by better scaling of the objective function, i.e., using average per
sample losses instead of sum of per sample losses.
:pr:`26721` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Efficiency| :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` with solver `"newton-cg"` can now be
considerably faster for some data and parameter settings. This is accomplished by a
better line search convergence check for negligible loss improvements that takes into
account gradient information.
:pr:`26721` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Efficiency| Solver `"newton-cg"` in :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` uses a little less memory. The effect is
proportional to the number of coefficients (`n_features * n_classes`).
:pr:`27417` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Fix| Ensure that the `sigma_` attribute of
:class:`linear_model.ARDRegression` and :class:`linear_model.BayesianRidge`
always has a `float32` dtype when fitted on `float32` data, even with the
type promotion rules of NumPy 2.
:pr:`27899` by :user:`Olivier Grisel <ogrisel>`.
- |API| The attribute `loss_function_` of :class:`linear_model.SGDClassifier` and
:class:`linear_model.SGDOneClassSVM` has been deprecated and will be removed in
version 1.6.
:pr:`27979` by :user:`Christian Lorentzen <lorentzenchr>`.
:mod:`sklearn.metrics`
......................
- |Efficiency| Computing pairwise distances via :class:`metrics.DistanceMetric`
for CSR x CSR, Dense x CSR, and CSR x Dense datasets is now 1.5x faster.
:pr:`26765` by :user:`Meekail Zain <micky774>`.
- |Efficiency| Computing distances via :class:`metrics.DistanceMetric`
for CSR x CSR, Dense x CSR, and CSR x Dense now uses ~50% less memory,
and outputs distances in the same dtype as the provided data.
:pr:`27006` by :user:`Meekail Zain <micky774>`.
- |Enhancement| Improve the rendering of the plot obtained with the
:class:`metrics.PrecisionRecallDisplay` and :class:`metrics.RocCurveDisplay`
classes. The x- and y-axis limits are set to [0, 1] and the aspect ratio between
both axes is set to 1 to get a square plot.
:pr:`26366` by :user:`Mojdeh Rastgoo <mrastgoo>`.
- |Enhancement| Added `neg_root_mean_squared_log_error_scorer` as a scorer.
:pr:`26734` by :user:`Alejandro Martin Gil <101AlexMartin>`.
- |Enhancement| :func:`metrics.confusion_matrix` now warns when only one label was
found in `y_true` and `y_pred`.
:pr:`27650` by :user:`Lucy Liu <lucyleeow>`.
- |Fix| Computing pairwise distances with :func:`metrics.pairwise.euclidean_distances`
no longer raises an exception when `X` is provided as a `float64` array and
`X_norm_squared` as a `float32` array.
:pr:`27624` by :user:`Jérôme Dockès <jeromedockes>`.
- |Fix| :func:`metrics.f1_score` now provides correct values when handling various
cases in which division by zero occurs by using a formulation that does not
depend on the precision and recall values.
:pr:`27577` by :user:`Omar Salman <OmarManzoor>` and
:user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :func:`metrics.make_scorer` now raises an error when using a regressor on a
scorer requesting a non-thresholded decision function (from `decision_function` or
`predict_proba`). Such scorers are specific to classification.
:pr:`26840` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :meth:`metrics.DetCurveDisplay.from_predictions`,
:class:`metrics.PrecisionRecallDisplay.from_predictions`,
:class:`metrics.PredictionErrorDisplay.from_predictions`, and
:class:`metrics.RocCurveDisplay.from_predictions` now return the correct type
for subclasses.
:pr:`27675` by :user:`John Cant <johncant>`.
- |API| Deprecated `needs_threshold` and `needs_proba` from :func:`metrics.make_scorer`.
These parameters will be removed in version 1.6. Instead, use `response_method` that
accepts `"predict"`, `"predict_proba"` or `"decision_function"` or a list of such
values. `needs_proba=True` is equivalent to `response_method="predict_proba"` and
`needs_threshold=True` is equivalent to
`response_method=("decision_function", "predict_proba")`.
:pr:`26840` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| The `squared` parameter of :func:`metrics.mean_squared_error` and
:func:`metrics.mean_squared_log_error` is deprecated and will be removed in 1.6.
Use the new functions :func:`metrics.root_mean_squared_error` and
:func:`metrics.root_mean_squared_log_error` instead.
:pr:`26734` by :user:`Alejandro Martin Gil <101AlexMartin>`.
:mod:`sklearn.model_selection`
..............................
- |Enhancement| :func:`model_selection.learning_curve` raises a warning when
every cross validation fold fails.
:pr:`26299` by :user:`Rahil Parikh <rprkh>`.
- |Fix| :class:`model_selection.GridSearchCV`,
:class:`model_selection.RandomizedSearchCV`, and
:class:`model_selection.HalvingGridSearchCV` no longer modify the given
object in the parameter grid if it is an estimator.
:pr:`26786` by `Adrin Jalali`_.
:mod:`sklearn.multioutput`
..........................
- |Enhancement| Add method `predict_log_proba` to :class:`multioutput.ClassifierChain`.
:pr:`27720` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.neighbors`
........................
- |Efficiency| :meth:`sklearn.neighbors.KNeighborsRegressor.predict` and
:meth:`sklearn.neighbors.KNeighborsClassifier.predict_proba` now efficiently support
pairs of dense and sparse datasets.
:pr:`27018` by :user:`Julien Jerphanion <jjerphan>`.
- |Efficiency| The performance of :meth:`neighbors.RadiusNeighborsClassifier.predict`
and of :meth:`neighbors.RadiusNeighborsClassifier.predict_proba` has been improved
when `radius` is large and `algorithm="brute"` with non-Euclidean metrics.
:pr:`26828` by :user:`Omar Salman <OmarManzoor>`.
- |Fix| Improve error message for :class:`neighbors.LocalOutlierFactor`
when it is invoked with `n_samples=n_neighbors`.
:pr:`23317` by :user:`Bharat Raghunathan <bharatr21>`.
- |Fix| :meth:`neighbors.KNeighborsClassifier.predict` and
:meth:`neighbors.KNeighborsClassifier.predict_proba` now raise an error when the
weights of all neighbors of some sample are zero. This can happen when `weights`
is a user-defined function.
:pr:`26410` by :user:`Yao Xiao <Charlie-XIAO>`.
- |API| :class:`neighbors.KNeighborsRegressor` now accepts
:class:`metrics.DistanceMetric` objects directly via the `metric` keyword
argument allowing for the use of accelerated third-party
:class:`metrics.DistanceMetric` objects.
:pr:`26267` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.preprocessing`
............................
- |Efficiency| :class:`preprocessing.OrdinalEncoder` avoids calculating
missing indices twice to improve efficiency.
:pr:`27017` by :user:`Xuefeng Xu <xuefeng-xu>`.
- |Efficiency| Improves efficiency in :class:`preprocessing.OneHotEncoder` and
:class:`preprocessing.OrdinalEncoder` when checking for `nan` values.
:pr:`27760` by :user:`Xuefeng Xu <xuefeng-xu>`.
- |Enhancement| Improves warnings in :class:`preprocessing.FunctionTransformer` when
`func` returns a pandas dataframe and the output is configured to be pandas.
:pr:`26944` by `Thomas Fan`_.
- |Enhancement| :class:`preprocessing.TargetEncoder` now supports `target_type`
'multiclass'.
:pr:`26674` by :user:`Lucy Liu <lucyleeow>`.
- |Fix| :class:`preprocessing.OneHotEncoder` and :class:`preprocessing.OrdinalEncoder`
now raise an exception when `nan` is a category and is not the last one in the
user-provided categories.
:pr:`27309` by :user:`Xuefeng Xu <xuefeng-xu>`.
- |Fix| :class:`preprocessing.OneHotEncoder` and :class:`preprocessing.OrdinalEncoder`
now raise an exception if the user-provided categories contain duplicates.
:pr:`27328` by :user:`Xuefeng Xu <xuefeng-xu>`.
- |Fix| :class:`preprocessing.FunctionTransformer` raises an error at `transform` if
the output of `get_feature_names_out` is not consistent with the column names of the
output container, when those are defined.
:pr:`27801` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Raise a `NotFittedError` in :class:`preprocessing.OrdinalEncoder` when calling
`transform` without calling `fit`, since `categories` always needs to be checked.
:pr:`27821` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.tree`
...................
- |Feature| :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,
:class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor` now support
monotonic constraints, useful when features are supposed to have a positive/negative
effect on the target. Missing values in the training data and multi-output targets are
not supported.
:pr:`13649` by :user:`Samuel Ronsin <samronsin>`, initiated by
:user:`Patrick O'Reilly <pat-oreilly>`.
:mod:`sklearn.utils`
....................
- |Enhancement| :func:`sklearn.utils.estimator_html_repr` dynamically adapts
diagram colors based on the browser's `prefers-color-scheme`, providing
improved adaptability to dark mode environments.
:pr:`26862` by :user:`Andrew Goh Yisheng <9y5>`, `Thomas Fan`_, `Adrin
Jalali`_.
- |Enhancement| :class:`~utils.metadata_routing.MetadataRequest` and
:class:`~utils.metadata_routing.MetadataRouter` now have a ``consumes`` method
which can be used to check whether a given set of parameters would be consumed.
:pr:`26831` by `Adrin Jalali`_.
- |Enhancement| Make :func:`sklearn.utils.check_array` attempt to output
`int32`-indexed CSR and COO arrays when converting from DIA arrays if the number of
non-zero entries is small enough. This ensures that estimators implemented in Cython
and that do not accept `int64`-indexed sparse data structures now consistently
accept the same sparse input formats for SciPy sparse matrices and arrays.
:pr:`27372` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :func:`sklearn.utils.check_array` now accepts both matrices and arrays from
the sparse SciPy module. The previous implementation would fail when `copy=True` by
calling NumPy's `np.may_share_memory`, which does not work with SciPy sparse
arrays and does not return the correct result for SciPy sparse matrices.
:pr:`27336` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :func:`~utils.estimator_checks.check_estimators_pickle` with
`readonly_memmap=True` now relies on joblib's own capability to allocate
aligned memory mapped arrays when loading a serialized estimator instead of
calling a dedicated private function that would crash when OpenBLAS
misdetects the CPU architecture.
:pr:`27614` by :user:`Olivier Grisel <ogrisel>`.
- |Fix| Error message in :func:`~utils.check_array` when a sparse matrix was
passed but `accept_sparse` is `False` now suggests using `.toarray()` and not
`X.toarray()`.
:pr:`27757` by :user:`Lucy Liu <lucyleeow>`.
- |Fix| Fix the function :func:`~utils.check_array` to output the right error message
when the input is a Series instead of a DataFrame.
:pr:`28090` by :user:`Stan Furrer <stanFurrer>` and :user:`Yao Xiao <Charlie-XIAO>`.
- |API| :func:`sklearn.utils.extmath.log_logistic` is deprecated and will be removed
in 1.6. Use `-np.logaddexp(0, -x)` instead.
:pr:`27544` by :user:`Christian Lorentzen <lorentzenchr>`.
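The suggested replacement is plain NumPy; `-np.logaddexp(0, -x)` computes `log(sigmoid(x))` without overflowing `exp` for large `|x|`:

```python
import numpy as np

x = np.array([-50.0, -1.0, 0.0, 1.0, 50.0])
# Numerically stable log(1 / (1 + exp(-x))).
log_sig = -np.logaddexp(0, -x)
```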
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 1.3, including:
101AlexMartin, Abhishek Singh Kushwah, Adam Li, Adarsh Wase, Adrin Jalali,
Advik Sinha, Alex, Alexander Al-Feghali, Alexis IMBERT, AlexL, Alex Molas, Anam
Fatima, Andrew Goh, andyscanzio, Aniket Patil, Artem Kislovskiy, Arturo Amor,
ashah002, avm19, Ben Holmes, Ben Mares, Benoit Chevallier-Mames, Bharat
Raghunathan, Binesh Bannerjee, Brendan Lu, Brevin Kunde, Camille Troillard,
Carlo Lemos, Chad Parmet, Christian Clauss, Christian Lorentzen, Christian
Veenhuis, Christos Aridas, Cindy Liang, Claudio Salvatore Arcidiacono, Connor
Boyle, cynthias13w, DaminK, Daniele Ongari, Daniel Schmitz, Daniel Tinoco,
David Brochart, Deborah L. Haar, DevanshKyada27, Dimitri Papadopoulos Orfanos,
Dmitry Nesterov, DUONG, Edoardo Abati, Eitan Hemed, Elabonga Atuo, Elisabeth
Günther, Emma Carballal, Emmanuel Ferdman, epimorphic, Erwan Le Floch, Fabian
Egli, Filip Karlo Došilović, Florian Idelberger, Franck Charras, Gael
Varoquaux, Ganesh Tata, Gleb Levitski, Guillaume Lemaitre, Haoying Zhang,
Harmanan Kohli, Ily, ioangatop, IsaacTrost, Isaac Virshup, Iwona Zdzieblo,
Jakub Kaczmarzyk, James McDermott, Jarrod Millman, JB Mountford, Jérémie du
Boisberranger, Jérôme Dockès, Jiawei Zhang, Joel Nothman, John Cant, John
Hopfensperger, Jona Sassenhagen, Jon Nordby, Julien Jerphanion, Kennedy Waweru,
kevin moore, Kian Eliasi, Kishan Ved, Konstantinos Pitas, Koustav Ghosh, Kushan
Sharma, ldwy4, Linus, Lohit SundaramahaLingam, Loic Esteve, Lorenz, Louis
Fouquet, Lucy Liu, Luis Silvestrin, Lukáš Folwarczný, Lukas Geiger, Malte
Londschien, Marcus Fraaß, Marek Hanuš, Maren Westermann, Mark Elliot, Martin
Larralde, Mateusz Sokół, mathurinm, mecopur, Meekail Zain, Michael Higgins,
Miki Watanabe, Milton Gomez, MN193, Mohammed Hamdy, Mohit Joshi, mrastgoo,
Naman Dhingra, Naoise Holohan, Narendra Singh dangi, Noa Malem-Shinitski,
Nolan, Nurseit Kamchyev, Oleksii Kachaiev, Olivier Grisel, Omar Salman, partev,
Peter Hull, Peter Steinbach, Pierre de Fréminville, Pooja Subramaniam, Puneeth
K, qmarcou, Quentin Barthélemy, Rahil Parikh, Rahul Mahajan, Raj Pulapakura,
Raphael, Ricardo Peres, Riccardo Cappuzzo, Roman Lutz, Salim Dohri, Samuel O.
Ronsin, Sandip Dutta, Sayed Qaiser Ali, scaja, scikit-learn-bot, Sebastian
Berg, Shreesha Kumar Bhat, Shubhal Gupta, Søren Fuglede Jørgensen, Stefanie
Senger, Tamara, Tanjina Afroj, THARAK HEGDE, thebabush, Thomas J. Fan, Thomas
Roehr, Tialo, Tim Head, tongyu, Venkatachalam N, Vijeth Moudgalya, Vincent M,
Vivek Reddy P, Vladimir Fokow, Xiao Yuan, Xuefeng Xu, Yang Tao, Yao Xiao,
Yuchen Zhou, Yuusuke Hiramatsu
ColumnTransformer now replaces passthrough with a corresponding class preprocessing FunctionTransformer in the fitted transformers attribute pr 27204 by Adrin Jalali mod sklearn datasets Enhancement func datasets make sparse spd matrix now uses a more memory efficient sparse layout It also accepts a new keyword sparse format that allows specifying the output format of the sparse matrix By default sparse format None which returns a dense numpy ndarray as before pr 27438 by user Yao Xiao Charlie XIAO Fix func datasets dump svmlight file now does not raise ValueError when X is read only e g a numpy memmap instance pr 28111 by user Yao Xiao Charlie XIAO API func datasets make sparse spd matrix deprecated the keyword argument dim in favor of n dim dim will be removed in version 1 6 pr 27718 by user Adam Li adam2392 mod sklearn decomposition Feature class decomposition PCA now supports class scipy sparse sparray and class scipy sparse spmatrix inputs when using the arpack solver When used on sparse data like func datasets fetch 20newsgroups vectorized this can lead to speed ups of 100x single threaded and 70x lower memory usage Based on user Alexander Tarashansky atarashansky s implementation in scanpy https github com scverse scanpy pr 18689 by user Isaac Virshup ivirshup and user Andrey Portnoy andportnoy Enhancement An auto option was added to the n components parameter of func decomposition non negative factorization class decomposition NMF and class decomposition MiniBatchNMF to automatically infer the number of components from W or H shapes when using a custom initialization The default value of this parameter will change from None to auto in version 1 6 pr 26634 by user Alexandre Landeau AlexL and user Alexandre Vigny avigny Fix func decomposition dict learning online does not ignore anymore the parameter max iter pr 27834 by user Guillaume Lemaitre glemaitre Fix The degree parameter in the class decomposition KernelPCA constructor now accepts real values instead 
of only integral values in accordance with the degree parameter of the class sklearn metrics pairwise polynomial kernel pr 27668 by user Nolan McMahon NolantheNerd API The option max iter None in class decomposition MiniBatchDictionaryLearning class decomposition MiniBatchSparsePCA and func decomposition dict learning online is deprecated and will be removed in version 1 6 Use the default value instead pr 27834 by user Guillaume Lemaitre glemaitre mod sklearn ensemble MajorFeature class ensemble RandomForestClassifier and class ensemble RandomForestRegressor support missing values when the criterion is gini entropy or log loss for classification or squared error friedman mse or poisson for regression pr 26391 by Thomas Fan MajorFeature class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor supports categorical features from dtype which treats columns with Pandas or Polars Categorical dtype as categories in the algorithm categorical features from dtype will become the default in v1 6 Categorical features no longer need to be encoded with numbers When categorical features are numbers the maximum value no longer needs to be smaller than max bins only the number of unique categories must be smaller than max bins pr 26411 by Thomas Fan and pr 27835 by user J r me Dock s jeromedockes MajorFeature class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor got the new parameter max features to specify the proportion of randomly chosen features considered in each split pr 27139 by user Christian Lorentzen lorentzenchr Feature class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble ExtraTreesClassifier and class ensemble ExtraTreesRegressor now support monotonic constraints useful when features are supposed to have a positive negative effect on the target Missing values in the train data and multi output targets are not supported pr 13649 by user Samuel Ronsin 
samronsin initiated by user Patrick O Reilly pat oreilly Efficiency class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor are now a bit faster by reusing the parent node s histogram as children node s histogram in the subtraction trick In effect less memory has to be allocated and deallocated pr 27865 by user Christian Lorentzen lorentzenchr Efficiency class ensemble GradientBoostingClassifier is faster for binary and in particular for multiclass problems thanks to the private loss function module pr 26278 and pr 28095 by user Christian Lorentzen lorentzenchr Efficiency Improves runtime and memory usage for class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor when trained on sparse data pr 26957 by Thomas Fan Efficiency class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor is now faster when scoring is a predefined metric listed in func metrics get scorer names and early stopping is enabled pr 26163 by Thomas Fan Enhancement A fitted property estimators samples was added to all Forest methods including class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble ExtraTreesClassifier and class ensemble ExtraTreesRegressor which allows to retrieve the training sample indices used for each tree estimator pr 26736 by user Adam Li adam2392 Fix Fixes class ensemble IsolationForest when the input is a sparse matrix and contamination is set to a float value pr 27645 by user Guillaume Lemaitre glemaitre Fix Raises a ValueError in class ensemble RandomForestRegressor and class ensemble ExtraTreesRegressor when requesting OOB score with multioutput model for the targets being all rounded to integer It was recognized as a multiclass problem pr 27817 by user Daniele Ongari danieleongari Fix Changes estimator tags to acknowledge that class ensemble VotingClassifier class ensemble VotingRegressor class ensemble StackingClassifier class 
ensemble StackingRegressor support missing values if all estimators support missing values pr 27710 by user Guillaume Lemaitre glemaitre Fix Support loading pickles of class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor when the pickle has been generated on a platform with a different bitness A typical example is to train and pickle the model on 64 bit machine and load the model on a 32 bit machine for prediction pr 28074 by user Christian Lorentzen lorentzenchr and user Lo c Est ve lesteve API In class ensemble AdaBoostClassifier the algorithm argument SAMME R was deprecated and will be removed in 1 6 pr 26830 by user Stefanie Senger StefanieSenger mod sklearn feature extraction API Changed error type from class AttributeError to class exceptions NotFittedError in unfitted instances of class feature extraction DictVectorizer for the following methods func feature extraction DictVectorizer inverse transform func feature extraction DictVectorizer restrict func feature extraction DictVectorizer transform pr 24838 by user Lorenz Hertel LoHertel mod sklearn feature selection Enhancement class feature selection SelectKBest class feature selection SelectPercentile and class feature selection GenericUnivariateSelect now support unsupervised feature selection by providing a score func taking X and y None pr 27721 by user Guillaume Lemaitre glemaitre Enhancement class feature selection SelectKBest and class feature selection GenericUnivariateSelect with mode k best now shows a warning when k is greater than the number of features pr 27841 by Thomas Fan Fix class feature selection RFE and class feature selection RFECV do not check for nans during input validation pr 21807 by Thomas Fan mod sklearn inspection Enhancement class inspection DecisionBoundaryDisplay now accepts a parameter class of interest to select the class of interest when plotting the response provided by response method predict proba or response method decision 
function It allows to plot the decision boundary for both binary and multiclass classifiers pr 27291 by user Guillaume Lemaitre glemaitre Fix meth inspection DecisionBoundaryDisplay from estimator and class inspection PartialDependenceDisplay from estimator now return the correct type for subclasses pr 27675 by user John Cant johncant API class inspection DecisionBoundaryDisplay raise an AttributeError instead of a ValueError when an estimator does not implement the requested response method pr 27291 by user Guillaume Lemaitre glemaitre mod sklearn kernel ridge Fix The degree parameter in the class kernel ridge KernelRidge constructor now accepts real values instead of only integral values in accordance with the degree parameter of the class sklearn metrics pairwise polynomial kernel pr 27668 by user Nolan McMahon NolantheNerd mod sklearn linear model Efficiency class linear model LogisticRegression and class linear model LogisticRegressionCV now have much better convergence for solvers lbfgs and newton cg Both solvers can now reach much higher precision for the coefficients depending on the specified tol Additionally lbfgs can make better use of tol i e stop sooner or reach higher precision This is accomplished by better scaling of the objective function i e using average per sample losses instead of sum of per sample losses pr 26721 by user Christian Lorentzen lorentzenchr Efficiency class linear model LogisticRegression and class linear model LogisticRegressionCV with solver newton cg can now be considerably faster for some data and parameter settings This is accomplished by a better line search convergence check for negligible loss improvements that takes into account gradient information pr 26721 by user Christian Lorentzen lorentzenchr Efficiency Solver newton cg in class linear model LogisticRegression and class linear model LogisticRegressionCV uses a little less memory The effect is proportional to the number of coefficients n features n classes pr 27417 
by user Christian Lorentzen lorentzenchr Fix Ensure that the sigma attribute of class linear model ARDRegression and class linear model BayesianRidge always has a float32 dtype when fitted on float32 data even with the type promotion rules of NumPy 2 pr 27899 by user Olivier Grisel ogrisel API The attribute loss function of class linear model SGDClassifier and class linear model SGDOneClassSVM has been deprecated and will be removed in version 1 6 pr 27979 by user Christian Lorentzen lorentzenchr mod sklearn metrics Efficiency Computing pairwise distances via class metrics DistanceMetric for CSR x CSR Dense x CSR and CSR x Dense datasets is now 1 5x faster pr 26765 by user Meekail Zain micky774 Efficiency Computing distances via class metrics DistanceMetric for CSR x CSR Dense x CSR and CSR x Dense now uses 50 less memory and outputs distances in the same dtype as the provided data pr 27006 by user Meekail Zain micky774 Enhancement Improve the rendering of the plot obtained with the class metrics PrecisionRecallDisplay and class metrics RocCurveDisplay classes the x and y axis limits are set to 0 1 and the aspect ratio between both axis is set to be 1 to get a square plot pr 26366 by user Mojdeh Rastgoo mrastgoo Enhancement Added neg root mean squared log error scorer as scorer pr 26734 by user Alejandro Martin Gil 101AlexMartin Enhancement func metrics confusion matrix now warns when only one label was found in y true and y pred pr 27650 by user Lucy Liu lucyleeow Fix computing pairwise distances with func metrics pairwise euclidean distances no longer raises an exception when X is provided as a float64 array and X norm squared as a float32 array pr 27624 by user J r me Dock s jeromedockes Fix func f1 score now provides correct values when handling various cases in which division by zero occurs by using a formulation that does not depend on the precision and recall values pr 27577 by user Omar Salman OmarManzoor and user Guillaume Lemaitre glemaitre Fix func 
metrics make scorer now raises an error when using a regressor on a scorer requesting a non thresholded decision function from decision function or predict proba Such scorer are specific to classification pr 26840 by user Guillaume Lemaitre glemaitre Fix meth metrics DetCurveDisplay from predictions class metrics PrecisionRecallDisplay from predictions class metrics PredictionErrorDisplay from predictions and class metrics RocCurveDisplay from predictions now return the correct type for subclasses pr 27675 by user John Cant johncant API Deprecated needs threshold and needs proba from func metrics make scorer These parameters will be removed in version 1 6 Instead use response method that accepts predict predict proba or decision function or a list of such values needs proba True is equivalent to response method predict proba and needs threshold True is equivalent to response method decision function predict proba pr 26840 by user Guillaume Lemaitre glemaitre API The squared parameter of func metrics mean squared error and func metrics mean squared log error is deprecated and will be removed in 1 6 Use the new functions func metrics root mean squared error and func metrics root mean squared log error instead pr 26734 by user Alejandro Martin Gil 101AlexMartin mod sklearn model selection Enhancement func model selection learning curve raises a warning when every cross validation fold fails pr 26299 by user Rahil Parikh rprkh Fix class model selection GridSearchCV class model selection RandomizedSearchCV and class model selection HalvingGridSearchCV now don t change the given object in the parameter grid if it s an estimator pr 26786 by Adrin Jalali mod sklearn multioutput Enhancement Add method predict log proba to class multioutput ClassifierChain pr 27720 by user Guillaume Lemaitre glemaitre mod sklearn neighbors Efficiency meth sklearn neighbors KNeighborsRegressor predict and meth sklearn neighbors KNeighborsClassifier predict proba now efficiently support pairs 
of dense and sparse datasets pr 27018 by user Julien Jerphanion jjerphan Efficiency The performance of meth neighbors RadiusNeighborsClassifier predict and of meth neighbors RadiusNeighborsClassifier predict proba has been improved when radius is large and algorithm brute with non Euclidean metrics pr 26828 by user Omar Salman OmarManzoor Fix Improve error message for class neighbors LocalOutlierFactor when it is invoked with n samples n neighbors pr 23317 by user Bharat Raghunathan bharatr21 Fix meth neighbors KNeighborsClassifier predict and meth neighbors KNeighborsClassifier predict proba now raises an error when the weights of all neighbors of some sample are zero This can happen when weights is a user defined function pr 26410 by user Yao Xiao Charlie XIAO API class neighbors KNeighborsRegressor now accepts class metrics DistanceMetric objects directly via the metric keyword argument allowing for the use of accelerated third party class metrics DistanceMetric objects pr 26267 by user Meekail Zain micky774 mod sklearn preprocessing Efficiency class preprocessing OrdinalEncoder avoids calculating missing indices twice to improve efficiency pr 27017 by user Xuefeng Xu xuefeng xu Efficiency Improves efficiency in class preprocessing OneHotEncoder and class preprocessing OrdinalEncoder in checking nan pr 27760 by user Xuefeng Xu xuefeng xu Enhancement Improves warnings in class preprocessing FunctionTransformer when func returns a pandas dataframe and the output is configured to be pandas pr 26944 by Thomas Fan Enhancement class preprocessing TargetEncoder now supports target type multiclass pr 26674 by user Lucy Liu lucyleeow Fix class preprocessing OneHotEncoder and class preprocessing OrdinalEncoder raise an exception when nan is a category and is not the last in the user s provided categories pr 27309 by user Xuefeng Xu xuefeng xu Fix class preprocessing OneHotEncoder and class preprocessing OrdinalEncoder raise an exception if the user provided categories 
contain duplicates pr 27328 by user Xuefeng Xu xuefeng xu Fix class preprocessing FunctionTransformer raises an error at transform if the output of get feature names out is not consistent with the column names of the output container if those are defined pr 27801 by user Guillaume Lemaitre glemaitre Fix Raise a NotFittedError in class preprocessing OrdinalEncoder when calling transform without calling fit since categories always requires to be checked pr 27821 by user Guillaume Lemaitre glemaitre mod sklearn tree Feature class tree DecisionTreeClassifier class tree DecisionTreeRegressor class tree ExtraTreeClassifier and class tree ExtraTreeRegressor now support monotonic constraints useful when features are supposed to have a positive negative effect on the target Missing values in the train data and multi output targets are not supported pr 13649 by user Samuel Ronsin samronsin initiated by user Patrick O Reilly pat oreilly mod sklearn utils Enhancement func sklearn utils estimator html repr dynamically adapts diagram colors based on the browser s prefers color scheme providing improved adaptability to dark mode environments pr 26862 by user Andrew Goh Yisheng 9y5 Thomas Fan Adrin Jalali Enhancement class utils metadata routing MetadataRequest and class utils metadata routing MetadataRouter now have a consumes method which can be used to check whether a given set of parameters would be consumed pr 26831 by Adrin Jalali Enhancement Make func sklearn utils check array attempt to output int32 indexed CSR and COO arrays when converting from DIA arrays if the number of non zero entries is small enough This ensures that estimators implemented in Cython and that do not accept int64 indexed sparse datastucture now consistently accept the same sparse input formats for SciPy sparse matrices and arrays pr 27372 by user Guillaume Lemaitre glemaitre Fix func sklearn utils check array should accept both matrix and array from the sparse SciPy module The previous implementation 
would fail if copy True by calling specific NumPy np may share memory that does not work with SciPy sparse array and does not return the correct result for SciPy sparse matrix pr 27336 by user Guillaume Lemaitre glemaitre Fix func utils estimator checks check estimators pickle with readonly memmap True now relies on joblib s own capability to allocate aligned memory mapped arrays when loading a serialized estimator instead of calling a dedicated private function that would crash when OpenBLAS misdetects the CPU architecture pr 27614 by user Olivier Grisel ogrisel Fix Error message in func utils check array when a sparse matrix was passed but accept sparse is False now suggests to use toarray and not X toarray pr 27757 by user Lucy Liu lucyleeow Fix Fix the function func utils check array to output the right error message when the input is a Series instead of a DataFrame pr 28090 by user Stan Furrer stanFurrer and user Yao Xiao Charlie XIAO API func sklearn extmath log logistic is deprecated and will be removed in 1 6 Use np logaddexp 0 x instead pr 27544 by user Christian Lorentzen lorentzenchr rubric Code and documentation contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1 3 including 101AlexMartin Abhishek Singh Kushwah Adam Li Adarsh Wase Adrin Jalali Advik Sinha Alex Alexander Al Feghali Alexis IMBERT AlexL Alex Molas Anam Fatima Andrew Goh andyscanzio Aniket Patil Artem Kislovskiy Arturo Amor ashah002 avm19 Ben Holmes Ben Mares Benoit Chevallier Mames Bharat Raghunathan Binesh Bannerjee Brendan Lu Brevin Kunde Camille Troillard Carlo Lemos Chad Parmet Christian Clauss Christian Lorentzen Christian Veenhuis Christos Aridas Cindy Liang Claudio Salvatore Arcidiacono Connor Boyle cynthias13w DaminK Daniele Ongari Daniel Schmitz Daniel Tinoco David Brochart Deborah L Haar DevanshKyada27 Dimitri Papadopoulos Orfanos Dmitry Nesterov DUONG Edoardo Abati Eitan Hemed Elabonga Atuo Elisabeth G nther Emma 
Carballal Emmanuel Ferdman epimorphic Erwan Le Floch Fabian Egli Filip Karlo Do ilovi Florian Idelberger Franck Charras Gael Varoquaux Ganesh Tata Gleb Levitski Guillaume Lemaitre Haoying Zhang Harmanan Kohli Ily ioangatop IsaacTrost Isaac Virshup Iwona Zdzieblo Jakub Kaczmarzyk James McDermott Jarrod Millman JB Mountford J r mie du Boisberranger J r me Dock s Jiawei Zhang Joel Nothman John Cant John Hopfensperger Jona Sassenhagen Jon Nordby Julien Jerphanion Kennedy Waweru kevin moore Kian Eliasi Kishan Ved Konstantinos Pitas Koustav Ghosh Kushan Sharma ldwy4 Linus Lohit SundaramahaLingam Loic Esteve Lorenz Louis Fouquet Lucy Liu Luis Silvestrin Luk Folwarczn Lukas Geiger Malte Londschien Marcus Fraa Marek Hanu Maren Westermann Mark Elliot Martin Larralde Mateusz Sok mathurinm mecopur Meekail Zain Michael Higgins Miki Watanabe Milton Gomez MN193 Mohammed Hamdy Mohit Joshi mrastgoo Naman Dhingra Naoise Holohan Narendra Singh dangi Noa Malem Shinitski Nolan Nurseit Kamchyev Oleksii Kachaiev Olivier Grisel Omar Salman partev Peter Hull Peter Steinbach Pierre de Fr minville Pooja Subramaniam Puneeth K qmarcou Quentin Barth lemy Rahil Parikh Rahul Mahajan Raj Pulapakura Raphael Ricardo Peres Riccardo Cappuzzo Roman Lutz Salim Dohri Samuel O Ronsin Sandip Dutta Sayed Qaiser Ali scaja scikit learn bot Sebastian Berg Shreesha Kumar Bhat Shubhal Gupta S ren Fuglede J rgensen Stefanie Senger Tamara Tanjina Afroj THARAK HEGDE thebabush Thomas J Fan Thomas Roehr Tialo Tim Head tongyu Venkatachalam N Vijeth Moudgalya Vincent M Vivek Reddy P Vladimir Fokow Xiao Yuan Xuefeng Xu Yang Tao Yao Xiao Yuchen Zhou Yuusuke Hiramatsu |
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.19
============
.. _changes_0_19:
Version 0.19.2
==============
**July, 2018**
This release is exclusively in order to support Python 3.7.
Related changes
---------------
- ``n_iter_`` may vary from previous releases in
:class:`linear_model.LogisticRegression` with ``solver='lbfgs'`` and
:class:`linear_model.HuberRegressor`. For Scipy <= 1.0.0, the optimizer could
perform more than the requested maximum number of iterations. Now both
estimators will report at most ``max_iter`` iterations even if more were
performed. :issue:`10723` by `Joel Nothman`_.
Version 0.19.1
==============
**October 23, 2017**
This is a bug-fix release with some minor documentation improvements and
enhancements to features released in 0.19.0.
Note there may be minor differences in TSNE output in this release (due to
:issue:`9623`), in the case where multiple samples have equal distance to some
sample.
Changelog
---------
API changes
...........
- Reverted the addition of ``metrics.ndcg_score`` and ``metrics.dcg_score``
which had been merged into version 0.19.0 by error. The implementations
were broken and undocumented.
- ``return_train_score`` which was added to
:class:`model_selection.GridSearchCV`,
:class:`model_selection.RandomizedSearchCV` and
:func:`model_selection.cross_validate` in version 0.19.0 will be changing its
default value from True to False in version 0.21. We found that calculating
training score could have a great effect on cross validation runtime in some
cases. Users should explicitly set ``return_train_score`` to False if
prediction or scoring functions are slow, resulting in a deleterious effect
on CV runtime, or to True if they wish to use the calculated scores.
:issue:`9677` by :user:`Kumar Ashutosh <thechargedneutron>` and `Joel
Nothman`_.
- ``correlation_models`` and ``regression_models`` from the legacy gaussian
processes implementation have been belatedly deprecated. :issue:`9717` by
:user:`Kumar Ashutosh <thechargedneutron>`.
Bug fixes
.........
- Avoid integer overflows in :func:`metrics.matthews_corrcoef`.
:issue:`9693` by :user:`Sam Steingold <sam-s>`.
- Fixed a bug in the objective function for :class:`manifold.TSNE` (both exact
and with the Barnes-Hut approximation) when ``n_components >= 3``.
:issue:`9711` by :user:`goncalo-rodrigues`.
- Fix regression in :func:`model_selection.cross_val_predict` where it
raised an error with ``method='predict_proba'`` for some probabilistic
classifiers. :issue:`9641` by :user:`James Bourbeau <jrbourbeau>`.
- Fixed a bug where :func:`datasets.make_classification` modified its input
``weights``. :issue:`9865` by :user:`Sachin Kelkar <s4chin>`.
- :class:`model_selection.StratifiedShuffleSplit` now works with multioutput
multiclass or multilabel data with more than 1000 columns. :issue:`9922` by
:user:`Charlie Brummitt <crbrummitt>`.
- Fixed a bug with nested and conditional parameter setting, e.g. setting a
pipeline step and its parameter at the same time. :issue:`9945` by `Andreas
Müller`_ and `Joel Nothman`_.
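The pattern fixed by the nested/conditional parameter-setting bug fix above is replacing a pipeline step and setting a parameter of the *new* step in a single ``set_params`` call. An illustrative sketch (step names and values are hypothetical):

```python
# Replace a pipeline step and set one of the replacement step's parameters
# in the same set_params call -- the pattern covered by the fix above.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# Swap the classifier and configure it at once; the top-level step is
# assigned before its nested "clf__C" parameter is applied.
pipe.set_params(clf=LogisticRegression(solver="liblinear"), clf__C=10.0)
```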
Regressions in 0.19.0 fixed in 0.19.1:
- Fixed a bug where parallelised prediction in random forests was not
thread-safe and could (rarely) result in arbitrary errors. :issue:`9830` by
`Joel Nothman`_.
- Fix regression in :func:`model_selection.cross_val_predict` where it no
longer accepted ``X`` as a list. :issue:`9600` by :user:`Rasul Kerimov
<CoderINusE>`.
- Fixed handling of :func:`model_selection.cross_val_predict` for binary
classification with ``method='decision_function'``. :issue:`9593` by
:user:`Reiichiro Nakano <reiinakano>` and core devs.
- Fix regression in :class:`pipeline.Pipeline` where it no longer accepted
``steps`` as a tuple. :issue:`9604` by :user:`Joris Van den Bossche
<jorisvandenbossche>`.
- Fix bug where ``n_iter`` was not properly deprecated, leaving ``n_iter``
unavailable for interim use in
:class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor`,
:class:`linear_model.PassiveAggressiveClassifier`,
:class:`linear_model.PassiveAggressiveRegressor` and
:class:`linear_model.Perceptron`. :issue:`9558` by `Andreas Müller`_.
- Dataset fetchers make sure temporary files are closed before removing them,
which caused errors on Windows. :issue:`9847` by :user:`Joan Massich <massich>`.
- Fixed a regression in :class:`manifold.TSNE` where it no longer supported
metrics other than 'euclidean' and 'precomputed'. :issue:`9623` by :user:`Oli
Blum <oliblum90>`.
Enhancements
............
- Our test suite and :func:`utils.estimator_checks.check_estimator` can now be
run without Nose installed. :issue:`9697` by :user:`Joan Massich <massich>`.
- To improve usability of version 0.19's :class:`pipeline.Pipeline`
caching, ``memory`` now allows ``joblib.Memory`` instances.
  This makes use of the new :func:`utils.validation.check_memory` helper.
  :issue:`9584` by :user:`Kumar Ashutosh <thechargedneutron>`.
- Some fixes to examples: :issue:`9750`, :issue:`9788`, :issue:`9815`
- Made a FutureWarning in SGD-based estimators less verbose. :issue:`9802` by
:user:`Vrishank Bhardwaj <vrishank97>`.
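The ``memory`` enhancement above (accepting ``joblib.Memory`` instances rather than only a cache path) can be sketched as follows; the toy data and the temporary cache directory are illustrative:

```python
# Passing a joblib.Memory instance to Pipeline(memory=...), so the fitted
# transformer is cached across repeated fits. Toy setup for illustration.
import tempfile

import joblib
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=100, n_features=20, random_state=0)

memory = joblib.Memory(location=tempfile.mkdtemp(), verbose=0)
pipe = Pipeline(
    [("pca", PCA(n_components=5)), ("clf", LogisticRegression(solver="liblinear"))],
    memory=memory,
)
pipe.fit(X, y)
```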
Code and Documentation Contributors
-----------------------------------
With thanks to:
Joel Nothman, Loic Esteve, Andreas Mueller, Kumar Ashutosh,
Vrishank Bhardwaj, Hanmin Qin, Rasul Kerimov, James Bourbeau,
Nagarjuna Kumar, Nathaniel Saul, Olivier Grisel, Roman
Yurchak, Reiichiro Nakano, Sachin Kelkar, Sam Steingold,
Yaroslav Halchenko, diegodlh, felix, goncalo-rodrigues,
jkleint, oliblum90, pasbi, Anthony Gitter, Ben Lawson, Charlie
Brummitt, Didi Bar-Zev, Gael Varoquaux, Joan Massich, Joris
Van den Bossche, nielsenmarkus11
Version 0.19
============
**August 12, 2017**
Highlights
----------
We are excited to release a number of great new features including
:class:`neighbors.LocalOutlierFactor` for anomaly detection,
:class:`preprocessing.QuantileTransformer` for robust feature transformation,
and the :class:`multioutput.ClassifierChain` meta-estimator to simply account
for dependencies between classes in multilabel problems. We have some new
algorithms in existing estimators, such as multiplicative update in
:class:`decomposition.NMF` and multinomial
:class:`linear_model.LogisticRegression` with L1 loss (use ``solver='saga'``).
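The new ``saga`` solver mentioned above enables L1-penalised multinomial logistic regression. A brief sketch on the iris data (the scaling step is our addition, to help the solver converge):

```python
# L1-penalised logistic regression with the 'saga' solver. Standardizing the
# features (an illustrative choice) speeds up convergence of saga.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

# L1 penalty is supported by the 'liblinear' and 'saga' solvers; only 'saga'
# also handles the multinomial formulation.
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
clf.fit(X, y)
```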
Cross validation is now able to return the results from multiple metric
evaluations. The new :func:`model_selection.cross_validate` can return many
scores on the test data as well as training set performance and timings, and we
have extended the ``scoring`` and ``refit`` parameters for grid/randomized
search :ref:`to handle multiple metrics <multimetric_grid_search>`.
You can also learn faster. For instance, the :ref:`new option to cache
transformations <pipeline_cache>` in :class:`pipeline.Pipeline` makes grid
search over pipelines including slow transformations much more efficient. And
you can predict faster: if you're sure you know what you're doing, you can turn
off validating that the input is finite using :func:`config_context`.
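As a hedged sketch of this option (the data here is illustrative; only the public :func:`config_context` API is assumed):

```python
import numpy as np
from sklearn import config_context
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)
clf = LogisticRegression().fit(X, y)

# Skip the check that inputs contain no NaN/inf -- faster, but at the
# caller's own risk if the data is not actually finite.
with config_context(assume_finite=True):
    preds = clf.predict(X)
```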
We've made some important fixes too. We've fixed a longstanding implementation
error in :func:`metrics.average_precision_score`, so please be cautious with
prior results reported from that function. A number of errors in the
:class:`manifold.TSNE` implementation have been fixed, particularly in the
default Barnes-Hut approximation. :class:`semi_supervised.LabelSpreading` and
:class:`semi_supervised.LabelPropagation` have had substantial fixes.
LabelPropagation was previously broken. LabelSpreading should now correctly
respect its alpha parameter.
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- :class:`cluster.KMeans` with sparse X and initial centroids given (bug fix)
- :class:`cross_decomposition.PLSRegression`
with ``scale=True`` (bug fix)
- :class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` where ``min_impurity_split`` is used (bug fix)
- gradient boosting ``loss='quantile'`` (bug fix)
- :class:`ensemble.IsolationForest` (bug fix)
- :class:`feature_selection.SelectFdr` (bug fix)
- :class:`linear_model.RANSACRegressor` (bug fix)
- :class:`linear_model.LassoLars` (bug fix)
- :class:`linear_model.LassoLarsIC` (bug fix)
- :class:`manifold.TSNE` (bug fix)
- :class:`neighbors.NearestCentroid` (bug fix)
- :class:`semi_supervised.LabelSpreading` (bug fix)
- :class:`semi_supervised.LabelPropagation` (bug fix)
- tree based models where ``min_weight_fraction_leaf`` is used (enhancement)
- :class:`model_selection.StratifiedKFold` with ``shuffle=True``
(this change, due to :issue:`7823` was not mentioned in the release notes at
the time)
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)
Changelog
---------
New features
............
Classifiers and regressors
- Added :class:`multioutput.ClassifierChain` for multi-label
classification. By :user:`Adam Kleczewski <adamklec>`.
- Added solver ``'saga'`` that implements the improved version of Stochastic
Average Gradient, in :class:`linear_model.LogisticRegression` and
:class:`linear_model.Ridge`. It allows the use of L1 penalty with
multinomial logistic loss, and behaves marginally better than 'sag'
during the first epochs of ridge and logistic regression.
:issue:`8446` by `Arthur Mensch`_.
Other estimators
- Added the :class:`neighbors.LocalOutlierFactor` class for anomaly
detection based on nearest neighbors.
:issue:`5279` by `Nicolas Goix`_ and `Alexandre Gramfort`_.
- Added :class:`preprocessing.QuantileTransformer` class and
:func:`preprocessing.quantile_transform` function for feature
normalization based on quantiles.
:issue:`8363` by :user:`Denis Engemann <dengemann>`,
:user:`Guillaume Lemaitre <glemaitre>`, `Olivier Grisel`_, `Raghav RV`_,
:user:`Thierry Guillemot <tguillemot>`, and `Gael Varoquaux`_.
- The new solver ``'mu'`` implements a Multiplicative Update in
:class:`decomposition.NMF`, allowing the optimization of all
beta-divergences, including the Frobenius norm, the generalized
Kullback-Leibler divergence and the Itakura-Saito divergence.
:issue:`5295` by `Tom Dupre la Tour`_.
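For illustration, a minimal sketch of the new solver on a small non-negative matrix (sizes and data are arbitrary):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.rand(20, 8))  # NMF requires non-negative input

# solver='mu' supports the full family of beta-divergences; here the
# generalized Kullback-Leibler divergence.
model = NMF(n_components=3, solver='mu', beta_loss='kullback-leibler',
            init='random', max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_
```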
Model selection and evaluation
- :class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` now support simultaneous
evaluation of multiple metrics. Refer to the
:ref:`multimetric_grid_search` section of the user guide for more
information. :issue:`7388` by `Raghav RV`_
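A minimal sketch of a multiple-metric search (``refit`` names the metric used to pick ``best_estimator_``; the grid and metric names are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# Metrics are passed as a dict of name -> scorer; cv_results_ then
# contains one set of columns per metric name.
search = GridSearchCV(SVC(), param_grid={'C': [0.1, 1.0]},
                      scoring={'acc': 'accuracy', 'f1': 'f1_macro'},
                      refit='acc', cv=3)
search.fit(X, y)
```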
- Added the :func:`model_selection.cross_validate` which allows evaluation
of multiple metrics. This function returns a dict with more useful
information from cross-validation such as the train scores, fit times and
score times.
Refer to the :ref:`multimetric_cross_validation` section of the user guide
for more information. :issue:`7388` by `Raghav RV`_
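A short sketch of :func:`model_selection.cross_validate` with two metrics (estimator and dataset are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
results = cross_validate(DecisionTreeClassifier(random_state=0), X, y,
                         scoring=['accuracy', 'f1_macro'], cv=3,
                         return_train_score=True)
# The returned dict holds per-fold test and train scores for each
# metric, plus fit and score timings.
keys = sorted(results)
```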
- Added :func:`metrics.mean_squared_log_error`, which computes
the mean squared error of the logarithmic transformation of targets,
particularly useful for targets with an exponential trend.
:issue:`7655` by :user:`Karan Desai <karandesai-96>`.
- Added :func:`metrics.dcg_score` and :func:`metrics.ndcg_score`, which
compute Discounted cumulative gain (DCG) and Normalized discounted
cumulative gain (NDCG).
:issue:`7739` by :user:`David Gasquez <davidgasquez>`.
- Added the :class:`model_selection.RepeatedKFold` and
:class:`model_selection.RepeatedStratifiedKFold`.
:issue:`8120` by `Neeraj Gangwar`_.
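A minimal sketch of the repeated splitter (the toy array is illustrative):

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold

X = np.arange(8).reshape(4, 2)
# Repeats 2-fold CV three times, with different randomization in each
# repetition, yielding n_splits * n_repeats = 6 splits in total.
rkf = RepeatedKFold(n_splits=2, n_repeats=3, random_state=0)
splits = list(rkf.split(X))
```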
Miscellaneous
- Validation that input data contains no NaN or inf can now be suppressed
using :func:`config_context`, at your own risk. This will save on runtime,
and may be particularly useful for prediction time. :issue:`7548` by
`Joel Nothman`_.
- Added a test to ensure parameter listing in docstrings match the
function/class signature. :issue:`9206` by `Alexandre Gramfort`_ and
`Raghav RV`_.
Enhancements
............
Trees and ensembles
- The ``min_weight_fraction_leaf`` constraint in tree construction is now
more efficient, taking a fast path to declare a node a leaf if its weight
is less than 2 * the minimum. Note that the constructed tree will be
different from previous versions where ``min_weight_fraction_leaf`` is
used. :issue:`7441` by :user:`Nelson Liu <nelson-liu>`.
- :class:`ensemble.GradientBoostingClassifier` and :class:`ensemble.GradientBoostingRegressor`
now support sparse input for prediction.
:issue:`6101` by :user:`Ibraim Ganiev <olologin>`.
- :class:`ensemble.VotingClassifier` now allows changing estimators by using
:meth:`ensemble.VotingClassifier.set_params`. An estimator can also be
removed by setting it to ``None``.
:issue:`7674` by :user:`Yichuan Liu <yl565>`.
- :func:`tree.export_graphviz` now shows configurable number of decimal
places. :issue:`8698` by :user:`Guillaume Lemaitre <glemaitre>`.
- Added ``flatten_transform`` parameter to :class:`ensemble.VotingClassifier`
to change output shape of `transform` method to 2 dimensional.
:issue:`7794` by :user:`Ibraim Ganiev <olologin>` and
:user:`Herilalaina Rakotoarison <herilalaina>`.
Linear, kernelized and related models
- :class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor`,
:class:`linear_model.PassiveAggressiveClassifier`,
:class:`linear_model.PassiveAggressiveRegressor` and
:class:`linear_model.Perceptron` now expose ``max_iter`` and
``tol`` parameters, to handle convergence more precisely.
``n_iter`` parameter is deprecated, and the fitted estimator exposes
a ``n_iter_`` attribute, with actual number of iterations before
convergence. :issue:`5036` by `Tom Dupre la Tour`_.
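A hedged sketch of the new convergence controls (dataset and values are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

X, y = load_iris(return_X_y=True)
# max_iter bounds the number of epochs; with a non-None tol, training
# stops early once the loss stops improving by at least tol.
clf = SGDClassifier(max_iter=100, tol=1e-3, random_state=0).fit(X, y)
n_epochs = clf.n_iter_  # actual number of epochs run
```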
- Added ``average`` parameter to perform weight averaging in
:class:`linear_model.PassiveAggressiveClassifier`. :issue:`4939`
by :user:`Andrea Esuli <aesuli>`.
- :class:`linear_model.RANSACRegressor` no longer throws an error
when calling ``fit`` if no inliers are found in its first iteration.
Furthermore, causes of skipped iterations are tracked in newly added
attributes, ``n_skips_*``.
:issue:`7914` by :user:`Michael Horrell <mthorrell>`.
- In :class:`gaussian_process.GaussianProcessRegressor`, method ``predict``
is a lot faster with ``return_std=True``. :issue:`8591` by
:user:`Hadrien Bertrand <hbertrand>`.
- Added ``return_std`` to ``predict`` method of
:class:`linear_model.ARDRegression` and
:class:`linear_model.BayesianRidge`.
:issue:`7838` by :user:`Sergey Feldman <sergeyf>`.
- Memory usage enhancements: Prevent cast from float32 to float64 in:
:class:`linear_model.MultiTaskElasticNet`;
:class:`linear_model.LogisticRegression` when using newton-cg solver; and
:class:`linear_model.Ridge` when using svd, sparse_cg, cholesky or lsqr
solvers. :issue:`8835`, :issue:`8061` by :user:`Joan Massich <massich>` and :user:`Nicolas
Cordier <ncordier>` and :user:`Thierry Guillemot <tguillemot>`.
Other predictors
- Custom metrics for the :mod:`sklearn.neighbors` binary trees now have
fewer constraints: they must take two 1d-arrays and return a float.
:issue:`6288` by `Jake Vanderplas`_.
- ``algorithm='auto'`` in :mod:`sklearn.neighbors` estimators now chooses the most
appropriate algorithm for all input types and metrics. :issue:`9145` by
:user:`Herilalaina Rakotoarison <herilalaina>` and :user:`Reddy Chinthala
<preddy5>`.
Decomposition, manifold learning and clustering
- :class:`cluster.MiniBatchKMeans` and :class:`cluster.KMeans`
now use significantly less memory when assigning data points to their
nearest cluster center. :issue:`7721` by :user:`Jon Crall <Erotemic>`.
- :class:`decomposition.PCA`, :class:`decomposition.IncrementalPCA` and
:class:`decomposition.TruncatedSVD` now expose the singular values
from the underlying SVD. They are stored in the attribute
``singular_values_``, like in :class:`decomposition.IncrementalPCA`.
:issue:`7685` by :user:`Tommy Löfstedt <tomlof>`
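A minimal sketch of the newly exposed attribute (shapes and data are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(50, 5)
pca = PCA(n_components=3).fit(X)
# One singular value per retained component, sorted in decreasing order.
sv = pca.singular_values_
```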
- :class:`decomposition.NMF` now faster when ``beta_loss=0``.
:issue:`9277` by :user:`hongkahjun`.
- Memory improvements for method ``barnes_hut`` in :class:`manifold.TSNE`
:issue:`7089` by :user:`Thomas Moreau <tomMoral>` and `Olivier Grisel`_.
- Optimization schedule improvements for Barnes-Hut :class:`manifold.TSNE`
so the results are closer to those of the reference implementation
`lvdmaaten/bhtsne <https://github.com/lvdmaaten/bhtsne>`_ by :user:`Thomas
Moreau <tomMoral>` and `Olivier Grisel`_.
- Memory usage enhancements: Prevent cast from float32 to float64 in
:class:`decomposition.PCA` and
`decomposition.randomized_svd_low_rank`.
:issue:`9067` by `Raghav RV`_.
Preprocessing and feature selection
- Added ``norm_order`` parameter to :class:`feature_selection.SelectFromModel`
to enable selection of the norm order when ``coef_`` is more than 1D.
:issue:`6181` by :user:`Antoine Wendlinger <antoinewdg>`.
- Added ability to use sparse matrices in :func:`feature_selection.f_regression`
with ``center=True``. :issue:`8065` by :user:`Daniel LeJeune <acadiansith>`.
- Small performance improvement to n-gram creation in
:mod:`sklearn.feature_extraction.text` by binding methods for loops and
special-casing unigrams. :issue:`7567` by :user:`Jaye Doepke <jtdoepke>`
- Relax assumption on the data for the
:class:`kernel_approximation.SkewedChi2Sampler`. Since the Skewed-Chi2
kernel is defined on the open interval :math:`(-\text{skewedness}; +\infty)^d`,
the transform function should not check whether ``X < 0`` but whether ``X <
-self.skewedness``. :issue:`7573` by :user:`Romain Brault <RomainBrault>`.
- Made default kernel parameters kernel-dependent in
:class:`kernel_approximation.Nystroem`.
:issue:`5229` by :user:`Saurabh Bansod <mth4saurabh>` and `Andreas Müller`_.
Model evaluation and meta-estimators
- :class:`pipeline.Pipeline` is now able to cache transformers
within a pipeline by using the ``memory`` constructor parameter.
:issue:`7990` by :user:`Guillaume Lemaitre <glemaitre>`.
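A hedged sketch of the caching behavior (the pipeline steps and cache location are illustrative):

```python
import tempfile
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X, y = load_iris(return_X_y=True)
# Fitted transformers are cached on disk; refitting the same
# transformer on the same data (e.g. during a grid search) hits the
# cache instead of recomputing.
pipe = Pipeline([('reduce', PCA(n_components=2)),
                 ('clf', LogisticRegression())],
                memory=tempfile.mkdtemp())
pipe.fit(X, y)
```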
- :class:`pipeline.Pipeline` steps can now be accessed as attributes of its
``named_steps`` attribute. :issue:`8586` by :user:`Herilalaina
Rakotoarison <herilalaina>`.
- Added ``sample_weight`` parameter to :meth:`pipeline.Pipeline.score`.
:issue:`7723` by :user:`Mikhail Korobov <kmike>`.
- Added ability to set ``n_jobs`` parameter to :func:`pipeline.make_union`.
A ``TypeError`` will be raised for any other kwargs. :issue:`8028`
by :user:`Alexander Booth <alexandercbooth>`.
- :class:`model_selection.GridSearchCV`,
:class:`model_selection.RandomizedSearchCV` and
:func:`model_selection.cross_val_score` now allow estimators with callable
kernels which were previously prohibited.
:issue:`8005` by `Andreas Müller`_ .
- :func:`model_selection.cross_val_predict` now returns output of the
correct shape for all values of the argument ``method``.
:issue:`7863` by :user:`Aman Dalmia <dalmia>`.
- Added ``shuffle`` and ``random_state`` parameters to shuffle training
data before taking prefixes of it based on training sizes in
:func:`model_selection.learning_curve`.
:issue:`7506` by :user:`Narine Kokhlikyan <NarineK>`.
- :class:`model_selection.StratifiedShuffleSplit` now works with multioutput
multiclass (or multilabel) data. :issue:`9044` by `Vlad Niculae`_.
- Speed improvements to :class:`model_selection.StratifiedShuffleSplit`.
:issue:`5991` by :user:`Arthur Mensch <arthurmensch>` and `Joel Nothman`_.
- Add ``shuffle`` parameter to :func:`model_selection.train_test_split`.
:issue:`8845` by :user:`themrmax <themrmax>`.
- :class:`multioutput.MultiOutputRegressor` and :class:`multioutput.MultiOutputClassifier`
now support online learning using ``partial_fit``.
:issue:`8053` by :user:`Peng Yu <yupbank>`.
- Add ``max_train_size`` parameter to :class:`model_selection.TimeSeriesSplit`
:issue:`8282` by :user:`Aman Dalmia <dalmia>`.
- More clustering metrics are now available through :func:`metrics.get_scorer`
and ``scoring`` parameters. :issue:`8117` by `Raghav RV`_.
- A scorer based on :func:`metrics.explained_variance_score` is also available.
:issue:`9259` by :user:`Hanmin Qin <qinhanmin2014>`.
Metrics
- :func:`metrics.matthews_corrcoef` now supports multiclass classification.
:issue:`8094` by :user:`Jon Crall <Erotemic>`.
- Add ``sample_weight`` parameter to :func:`metrics.cohen_kappa_score`.
:issue:`8335` by :user:`Victor Poughon <vpoughon>`.
Miscellaneous
- :func:`utils.estimator_checks.check_estimator` now attempts to ensure that methods
transform, predict, etc. do not set attributes on the estimator.
:issue:`7533` by :user:`Ekaterina Krivich <kiote>`.
- Added type checking to the ``accept_sparse`` parameter in
:mod:`sklearn.utils.validation` methods. This parameter now accepts only boolean,
string, or list/tuple of strings. ``accept_sparse=None`` is deprecated and
should be replaced by ``accept_sparse=False``.
:issue:`7880` by :user:`Josh Karnofsky <jkarno>`.
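A minimal sketch, using :func:`utils.check_array` as the public entry point (the matrix here is illustrative):

```python
import numpy as np
from scipy import sparse
from sklearn.utils import check_array

X = sparse.csr_matrix(np.eye(3))
# accept_sparse now takes only a boolean, a format string, or a
# list/tuple of format strings; None is deprecated (use False).
checked = check_array(X, accept_sparse=['csr', 'csc'])
```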
- Make it possible to load a chunk of an svmlight formatted file by
passing a range of bytes to :func:`datasets.load_svmlight_file`.
:issue:`935` by :user:`Olivier Grisel <ogrisel>`.
- :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`
now accept non-finite features. :issue:`8931` by :user:`Attractadore`.
Bug fixes
.........
Trees and ensembles
- Fixed a memory leak in trees when using trees with ``criterion='mae'``.
:issue:`8002` by `Raghav RV`_.
- Fixed a bug where :class:`ensemble.IsolationForest` used an
  incorrect formula for the average path length.
  :issue:`8549` by `Peter Wang <https://github.com/PTRWang>`_.
- Fixed a bug where :class:`ensemble.AdaBoostClassifier` throws
``ZeroDivisionError`` while fitting data with single class labels.
:issue:`7501` by :user:`Dominik Krzeminski <dokato>`.
- Fixed a bug in :class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` where a float being compared
to ``0.0`` using ``==`` caused a divide by zero error. :issue:`7970` by
:user:`He Chen <chenhe95>`.
- Fix a bug where :class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` ignored the
``min_impurity_split`` parameter.
:issue:`8006` by :user:`Sebastian Pölsterl <sebp>`.
- Fixed ``oob_score`` in :class:`ensemble.BaggingClassifier`.
:issue:`8936` by :user:`Michael Lewis <mlewis1729>`
- Fixed excessive memory usage in prediction for random forests estimators.
:issue:`8672` by :user:`Mike Benfield <mikebenfield>`.
- Fixed a bug where ``sample_weight`` as a list broke random forests in Python 2
:issue:`8068` by :user:`xor`.
- Fixed a bug where :class:`ensemble.IsolationForest` fails when
``max_features`` is less than 1.
:issue:`5732` by :user:`Ishank Gulati <IshankGulati>`.
- Fix a bug where gradient boosting with ``loss='quantile'`` computed
negative errors for negative values of ``ytrue - ypred`` leading to wrong
values when calling ``__call__``.
:issue:`8087` by :user:`Alexis Mignon <AlexisMignon>`
- Fix a bug where :class:`ensemble.VotingClassifier` raises an error
when a numpy array is passed in for weights. :issue:`7983` by
:user:`Vincent Pham <vincentpham1991>`.
- Fixed a bug where :func:`tree.export_graphviz` raised an error
when the length of ``feature_names`` does not match ``n_features`` in the decision
tree. :issue:`8512` by :user:`Li Li <aikinogard>`.
Linear, kernelized and related models
- Fixed a bug where :func:`linear_model.RANSACRegressor.fit` may run until
``max_iter`` if it finds a large inlier group early. :issue:`8251` by
:user:`aivision2020`.
- Fixed a bug where :class:`naive_bayes.MultinomialNB` and
:class:`naive_bayes.BernoulliNB` failed when ``alpha=0``. :issue:`5814` by
:user:`Yichuan Liu <yl565>` and :user:`Herilalaina Rakotoarison
<herilalaina>`.
- Fixed a bug where :class:`linear_model.LassoLars` does not give
the same result as the LassoLars implementation available
in R (lars library). :issue:`7849` by :user:`Jair Montoya Martinez <jmontoyam>`.
- Fixed a bug in `linear_model.RandomizedLasso`,
:class:`linear_model.Lars`, :class:`linear_model.LassoLars`,
:class:`linear_model.LarsCV` and :class:`linear_model.LassoLarsCV`,
where the parameter ``precompute`` was not used consistently across
classes, and some values proposed in the docstring could raise errors.
:issue:`5359` by `Tom Dupre la Tour`_.
- Fix inconsistent results between :class:`linear_model.RidgeCV` and
:class:`linear_model.Ridge` when using ``normalize=True``. :issue:`9302`
by `Alexandre Gramfort`_.
- Fix a bug where :func:`linear_model.LassoLars.fit` sometimes
left ``coef_`` as a list, rather than an ndarray.
:issue:`8160` by :user:`CJ Carey <perimosocordiae>`.
- Fix :func:`linear_model.BayesianRidge.fit` to return
ridge parameter ``alpha_`` and ``lambda_`` consistent with calculated
coefficients ``coef_`` and ``intercept_``.
:issue:`8224` by :user:`Peter Gedeck <gedeck>`.
- Fixed a bug in :class:`svm.OneClassSVM` where it returned floats instead of
integer classes. :issue:`8676` by :user:`Vathsala Achar <VathsalaAchar>`.
- Fix AIC/BIC criterion computation in :class:`linear_model.LassoLarsIC`.
:issue:`9022` by `Alexandre Gramfort`_ and :user:`Mehmet Basbug <mehmetbasbug>`.
- Fixed a memory leak in our LibLinear implementation. :issue:`9024` by
:user:`Sergei Lebedev <superbobry>`
- Fix bug where stratified CV splitters did not work with
:class:`linear_model.LassoCV`. :issue:`8973` by
:user:`Paulo Haddad <paulochf>`.
- Fixed a bug in :class:`gaussian_process.GaussianProcessRegressor`
where predicting the standard deviation or covariance before calling
``fit`` failed with an uninformative error.
:issue:`6573` by :user:`Quazi Marufur Rahman <qmaruf>` and
`Manoj Kumar`_.
Other predictors
- Fix `semi_supervised.BaseLabelPropagation` to correctly implement
``LabelPropagation`` and ``LabelSpreading`` as done in the referenced
papers. :issue:`9239`
by :user:`Andre Ambrosio Boechat <boechat107>`, :user:`Utkarsh Upadhyay
<musically-ut>`, and `Joel Nothman`_.
Decomposition, manifold learning and clustering
- Fixed the implementation of :class:`manifold.TSNE`:
- ``early_exaggeration`` parameter had no effect and is now used for the
first 250 optimization iterations.
- Fixed the ``AssertionError: Tree consistency failed`` exception
reported in :issue:`8992`.
- Improve the learning schedule to match the one from the reference
implementation `lvdmaaten/bhtsne <https://github.com/lvdmaaten/bhtsne>`_.
by :user:`Thomas Moreau <tomMoral>` and `Olivier Grisel`_.
- Fix a bug in :class:`decomposition.LatentDirichletAllocation`
where the ``perplexity`` method was returning incorrect results because
the ``transform`` method returns normalized document topic distributions
as of version 0.18. :issue:`7954` by :user:`Gary Foreman <garyForeman>`.
- Fix output shape and bugs with n_jobs > 1 in
:class:`decomposition.SparseCoder` transform and
:func:`decomposition.sparse_encode`
for one-dimensional data and one component.
This also impacts the output shape of :class:`decomposition.DictionaryLearning`.
:issue:`8086` by `Andreas Müller`_.
- Fixed the implementation of ``explained_variance_``
in :class:`decomposition.PCA`,
`decomposition.RandomizedPCA` and
:class:`decomposition.IncrementalPCA`.
:issue:`9105` by `Hanmin Qin <https://github.com/qinhanmin2014>`_.
- Fixed the implementation of ``noise_variance_`` in :class:`decomposition.PCA`.
:issue:`9108` by `Hanmin Qin <https://github.com/qinhanmin2014>`_.
- Fixed a bug where :class:`cluster.DBSCAN` gives incorrect
result when input is a precomputed sparse matrix with initial
rows all zero. :issue:`8306` by :user:`Akshay Gupta <Akshay0724>`
- Fix a bug regarding fitting :class:`cluster.KMeans` with a sparse
array X and initial centroids, where X's means were unnecessarily being
subtracted from the centroids. :issue:`7872` by :user:`Josh Karnofsky <jkarno>`.
- Fixes to the input validation in :class:`covariance.EllipticEnvelope`.
:issue:`8086` by `Andreas Müller`_.
- Fixed a bug in :class:`covariance.MinCovDet` where inputting data
that produced a singular covariance matrix would cause the helper method
``_c_step`` to throw an exception.
:issue:`3367` by :user:`Jeremy Steward <ThatGeoGuy>`
- Fixed a bug in :class:`manifold.TSNE` affecting convergence of the
gradient descent. :issue:`8768` by :user:`David DeTomaso <deto>`.
- Fixed a bug in :class:`manifold.TSNE` where it stored the incorrect
``kl_divergence_``. :issue:`6507` by :user:`Sebastian Saeger <ssaeger>`.
- Fixed improper scaling in :class:`cross_decomposition.PLSRegression`
with ``scale=True``. :issue:`7819` by :user:`jayzed82 <jayzed82>`.
- :class:`cluster.SpectralCoclustering` and
:class:`cluster.SpectralBiclustering` ``fit`` method conforms
with API by accepting ``y`` and returning the object. :issue:`6126`,
:issue:`7814` by :user:`Laurent Direr <ldirer>` and :user:`Maniteja
Nandana <maniteja123>`.
- Fix bug where :mod:`sklearn.mixture` ``sample`` methods did not return as many
samples as requested. :issue:`7702` by :user:`Levi John Wolf <ljwolf>`.
- Fixed the shrinkage implementation in :class:`neighbors.NearestCentroid`.
:issue:`9219` by `Hanmin Qin <https://github.com/qinhanmin2014>`_.
Preprocessing and feature selection
- For sparse matrices, :func:`preprocessing.normalize` with ``return_norm=True``
  now raises a ``NotImplementedError`` for the 'l1' and 'l2' norms; for the
  'max' norm, the returned norms are the same as for dense matrices.
:issue:`7771` by `Ang Lu <https://github.com/luang008>`_.
- Fix a bug where :class:`feature_selection.SelectFdr` did not
exactly implement Benjamini-Hochberg procedure. It formerly may have
selected fewer features than it should.
:issue:`7490` by :user:`Peng Meng <mpjlu>`.
- Fixed a bug where `linear_model.RandomizedLasso` and
`linear_model.RandomizedLogisticRegression` breaks for
sparse input. :issue:`8259` by :user:`Aman Dalmia <dalmia>`.
- Fix a bug where :class:`feature_extraction.FeatureHasher`
mandatorily applied a sparse random projection to the hashed features,
preventing the use of
:class:`feature_extraction.text.HashingVectorizer` in a
pipeline with :class:`feature_extraction.text.TfidfTransformer`.
:issue:`7565` by :user:`Roman Yurchak <rth>`.
- Fix a bug where :class:`feature_selection.mutual_info_regression` did not
correctly use ``n_neighbors``. :issue:`8181` by :user:`Guillaume Lemaitre
<glemaitre>`.
Model evaluation and meta-estimators
- Fixed a bug where `model_selection.BaseSearchCV.inverse_transform`
returns ``self.best_estimator_.transform()`` instead of
``self.best_estimator_.inverse_transform()``.
:issue:`8344` by :user:`Akshay Gupta <Akshay0724>` and :user:`Rasmus Eriksson <MrMjauh>`.
- Added ``classes_`` attribute to :class:`model_selection.GridSearchCV`,
:class:`model_selection.RandomizedSearchCV`, `grid_search.GridSearchCV`,
and `grid_search.RandomizedSearchCV` that matches the ``classes_``
attribute of ``best_estimator_``. :issue:`7661` and :issue:`8295`
by :user:`Alyssa Batula <abatula>`, :user:`Dylan Werner-Meier <unautre>`,
and :user:`Stephen Hoover <stephen-hoover>`.
- Fixed a bug where :func:`model_selection.validation_curve`
reused the same estimator for each parameter value.
:issue:`7365` by :user:`Aleksandr Sandrovskii <Sundrique>`.
- :func:`model_selection.permutation_test_score` now works with Pandas
types. :issue:`5697` by :user:`Stijn Tonk <equialgo>`.
- Several fixes to input validation in
:class:`multiclass.OutputCodeClassifier`
:issue:`8086` by `Andreas Müller`_.
- :class:`multiclass.OneVsOneClassifier`'s ``partial_fit`` now ensures all
classes are provided up-front. :issue:`6250` by
:user:`Asish Panda <kaichogami>`.
- Fix :func:`multioutput.MultiOutputClassifier.predict_proba` to return a
list of 2d arrays, rather than a 3d array. In the case where different
target columns had different numbers of classes, a ``ValueError`` would be
raised on trying to stack matrices with different dimensions.
:issue:`8093` by :user:`Peter Bull <pjbull>`.
- Cross validation now works with Pandas datatypes that have a
read-only index. :issue:`9507` by `Loic Esteve`_.
Metrics
- :func:`metrics.average_precision_score` no longer linearly
interpolates between operating points, and instead weighs precisions
by the change in recall since the last operating point, as per the
`Wikipedia entry <https://en.wikipedia.org/wiki/Average_precision>`_.
(`#7356 <https://github.com/scikit-learn/scikit-learn/pull/7356>`_). By
:user:`Nick Dingwall <ndingwall>` and `Gael Varoquaux`_.
- Fix a bug in `metrics.classification._check_targets`
which would return ``'binary'`` if ``y_true`` and ``y_pred`` were
both ``'binary'`` but the union of ``y_true`` and ``y_pred`` was
``'multiclass'``. :issue:`8377` by `Loic Esteve`_.
- Fixed an integer overflow bug in :func:`metrics.confusion_matrix` and
hence :func:`metrics.cohen_kappa_score`. :issue:`8354`, :issue:`7929`
by `Joel Nothman`_ and :user:`Jon Crall <Erotemic>`.
- Fixed passing of ``gamma`` parameter to the ``chi2`` kernel in
:func:`metrics.pairwise.pairwise_kernels` :issue:`5211` by
:user:`Nick Rhinehart <nrhine1>`,
:user:`Saurabh Bansod <mth4saurabh>` and `Andreas Müller`_.
Miscellaneous
- Fixed a bug when :func:`datasets.make_classification` fails
when generating more than 30 features. :issue:`8159` by
:user:`Herilalaina Rakotoarison <herilalaina>`.
- Fixed a bug where :func:`datasets.make_moons` gives an
incorrect result when ``n_samples`` is odd.
:issue:`8198` by :user:`Josh Levy <levy5674>`.
- Some ``fetch_`` functions in :mod:`sklearn.datasets` were ignoring the
``download_if_missing`` keyword. :issue:`7944` by :user:`Ralf Gommers <rgommers>`.
- Fix estimators to accept a ``sample_weight`` parameter of type
``pandas.Series`` in their ``fit`` function. :issue:`7825` by
`Kathleen Chen`_.
- Fix a bug in cases where ``numpy.cumsum`` may be numerically unstable,
raising an exception if instability is identified. :issue:`7376` and
:issue:`7331` by `Joel Nothman`_ and :user:`yangarbiter`.
- Fix a bug where `base.BaseEstimator.__getstate__`
obstructed pickling customizations of child-classes, when used in a
multiple inheritance context.
:issue:`8316` by :user:`Holger Peters <HolgerPeters>`.
- Update Sphinx-Gallery from 0.1.4 to 0.1.7 for resolving links in
documentation build with Sphinx>1.5 :issue:`8010`, :issue:`7986` by
:user:`Oscar Najera <Titan-C>`
- Add ``data_home`` parameter to :func:`sklearn.datasets.fetch_kddcup99`.
:issue:`9289` by `Loic Esteve`_.
- Fix dataset loaders using Python 3 version of makedirs to also work in
Python 2. :issue:`9284` by :user:`Sebastin Santy <SebastinSanty>`.
- Several minor issues were fixed with thanks to the alerts of
`lgtm.com <https://lgtm.com/>`_. :issue:`9278` by :user:`Jean Helie <jhelie>`,
among others.
API changes summary
-------------------
Trees and ensembles
- Gradient boosting base models are no longer estimators. By `Andreas Müller`_.
- All tree based estimators now accept a ``min_impurity_decrease``
parameter in lieu of the ``min_impurity_split``, which is now deprecated.
With ``min_impurity_decrease``, a node is split only if the split
decreases the weighted impurity by at least ``min_impurity_decrease``.
:issue:`8449` by `Raghav RV`_.
Linear, kernelized and related models
- ``n_iter`` parameter is deprecated in :class:`linear_model.SGDClassifier`,
:class:`linear_model.SGDRegressor`,
:class:`linear_model.PassiveAggressiveClassifier`,
:class:`linear_model.PassiveAggressiveRegressor` and
:class:`linear_model.Perceptron`. By `Tom Dupre la Tour`_.
Other predictors
- `neighbors.LSHForest` has been deprecated and will be
removed in 0.21 due to poor performance.
:issue:`9078` by :user:`Laurent Direr <ldirer>`.
- :class:`neighbors.NearestCentroid` no longer purports to support
``metric='precomputed'`` which now raises an error. :issue:`8515` by
:user:`Sergul Aydore <sergulaydore>`.
- The ``alpha`` parameter of :class:`semi_supervised.LabelPropagation` now
  has no effect and is deprecated; it will be removed in 0.21. :issue:`9239`
by :user:`Andre Ambrosio Boechat <boechat107>`, :user:`Utkarsh Upadhyay
<musically-ut>`, and `Joel Nothman`_.
Decomposition, manifold learning and clustering
- Deprecate the ``doc_topic_distr`` argument of the ``perplexity`` method
in :class:`decomposition.LatentDirichletAllocation` because the
user no longer has access to the unnormalized document topic distribution
needed for the perplexity calculation. :issue:`7954` by
:user:`Gary Foreman <garyForeman>`.
- The ``n_topics`` parameter of :class:`decomposition.LatentDirichletAllocation`
has been renamed to ``n_components`` and will be removed in version 0.21.
:issue:`8922` by :user:`Attractadore`.
- :meth:`decomposition.SparsePCA.transform`'s ``ridge_alpha`` parameter is
deprecated in preference for class parameter.
:issue:`8137` by :user:`Naoya Kanai <naoyak>`.
- :class:`cluster.DBSCAN` now has a ``metric_params`` parameter.
:issue:`8139` by :user:`Naoya Kanai <naoyak>`.
Preprocessing and feature selection
- :class:`feature_selection.SelectFromModel` now has a ``partial_fit``
method only if the underlying estimator does. By `Andreas Müller`_.
- :class:`feature_selection.SelectFromModel` now validates the ``threshold``
parameter and sets the ``threshold_`` attribute during the call to
``fit``, and no longer during the call to ``transform``. By `Andreas
Müller`_.
- The ``non_negative`` parameter in :class:`feature_extraction.FeatureHasher`
has been deprecated, and replaced with a more principled alternative,
``alternate_sign``.
:issue:`7565` by :user:`Roman Yurchak <rth>`.
- `linear_model.RandomizedLogisticRegression`,
and `linear_model.RandomizedLasso` have been deprecated and will
be removed in version 0.21.
:issue:`8995` by :user:`Ramana.S <sentient07>`.
Model evaluation and meta-estimators
- Deprecate the ``fit_params`` constructor input to the
:class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` in favor
of passing keyword parameters to the ``fit`` methods
of those classes. Data-dependent parameters needed for model
training should be passed as keyword arguments to ``fit``,
and conforming to this convention will allow the hyperparameter
selection classes to be used with tools such as
:func:`model_selection.cross_val_predict`.
:issue:`2879` by :user:`Stephen Hoover <stephen-hoover>`.
- In version 0.21, the default behavior of splitters that use the
``test_size`` and ``train_size`` parameter will change, such that
specifying ``train_size`` alone will cause ``test_size`` to be the
remainder. :issue:`7459` by :user:`Nelson Liu <nelson-liu>`.
- :class:`multiclass.OneVsRestClassifier` now has ``partial_fit``,
``decision_function`` and ``predict_proba`` methods only when the
underlying estimator does. :issue:`7812` by `Andreas Müller`_ and
:user:`Mikhail Korobov <kmike>`.
- :class:`multiclass.OneVsRestClassifier` now has a ``partial_fit`` method
only if the underlying estimator does. By `Andreas Müller`_.
- The ``decision_function`` output shape for binary classification in
:class:`multiclass.OneVsRestClassifier` and
:class:`multiclass.OneVsOneClassifier` is now ``(n_samples,)`` to conform
to scikit-learn conventions. :issue:`9100` by `Andreas Müller`_.
- The :func:`multioutput.MultiOutputClassifier.predict_proba`
function used to return a 3d array (``n_samples``, ``n_classes``,
``n_outputs``). In the case where different target columns had different
numbers of classes, a ``ValueError`` would be raised on trying to stack
matrices with different dimensions. This function now returns a list of
arrays where the length of the list is ``n_outputs``, and each array is
(``n_samples``, ``n_classes``) for that particular output.
:issue:`8093` by :user:`Peter Bull <pjbull>`.
- The ``named_steps`` attribute of :class:`pipeline.Pipeline` is now a
  :class:`utils.Bunch` rather than a plain ``dict``, to enable tab
  completion in interactive environments. Where a step name conflicts with
  an existing ``dict`` attribute, the ``dict`` behavior takes priority.
:issue:`8481` by :user:`Herilalaina Rakotoarison <herilalaina>`.
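The new ``predict_proba`` return shape described above can be sketched as follows (the toy data is made up); two target columns with different numbers of classes is exactly the case that previously could not be stacked into one 3d array:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.RandomState(0)
X = rng.randn(30, 4)
# Two target columns with different numbers of classes (2 vs. 3).
Y = np.column_stack([np.arange(30) % 2, np.arange(30) % 3])

clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
probas = clf.predict_proba(X)

assert isinstance(probas, list) and len(probas) == 2  # one array per output
assert probas[0].shape == (30, 2)
assert probas[1].shape == (30, 3)
```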
Miscellaneous
- Deprecate the ``y`` parameter in ``transform`` and ``inverse_transform``.
  These methods should not accept a ``y`` parameter, as they are used at
  prediction time.
:issue:`8174` by :user:`Tahar Zanouda <tzano>`, `Alexandre Gramfort`_
and `Raghav RV`_.
- SciPy >= 0.13.3 and NumPy >= 1.8.2 are now the minimum supported versions
for scikit-learn. The following backported functions in
:mod:`sklearn.utils` have been removed or deprecated accordingly.
:issue:`8854` and :issue:`8874` by :user:`Naoya Kanai <naoyak>`.
- The ``store_covariances`` and ``covariances_`` parameters of
  :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`
  have been renamed to ``store_covariance`` and ``covariance_`` to be
  consistent with the corresponding parameter names of
  :class:`discriminant_analysis.LinearDiscriminantAnalysis`. They will be
  removed in version 0.21. :issue:`7998` by :user:`Jiacheng <mrbeann>`.
Removed in 0.19:
- ``utils.fixes.argpartition``
- ``utils.fixes.array_equal``
- ``utils.fixes.astype``
- ``utils.fixes.bincount``
- ``utils.fixes.expit``
- ``utils.fixes.frombuffer_empty``
- ``utils.fixes.in1d``
- ``utils.fixes.norm``
- ``utils.fixes.rankdata``
- ``utils.fixes.safe_copy``
Deprecated in 0.19, to be removed in 0.21:
- ``utils.arpack.eigs``
- ``utils.arpack.eigsh``
- ``utils.arpack.svds``
- ``utils.extmath.fast_dot``
- ``utils.extmath.logsumexp``
- ``utils.extmath.norm``
- ``utils.extmath.pinvh``
- ``utils.graph.graph_laplacian``
- ``utils.random.choice``
- ``utils.sparsetools.connected_components``
- ``utils.stats.rankdata``
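At the new minimum versions, the removed and deprecated backports above are available directly from NumPy and SciPy. A quick sketch of a few of the upstream equivalents:

```python
import numpy as np
from scipy.special import expit
from scipy.stats import rankdata

# utils.fixes.expit     -> scipy.special.expit (logistic sigmoid)
assert expit(0.0) == 0.5
# utils.stats.rankdata  -> scipy.stats.rankdata
assert list(rankdata([10, 30, 20])) == [1.0, 3.0, 2.0]
# utils.fixes.bincount  -> numpy.bincount (minlength is supported upstream)
assert np.bincount([0, 1, 1], minlength=4).tolist() == [1, 2, 0, 0]
```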
- Estimators with both methods ``decision_function`` and ``predict_proba``
are now required to have a monotonic relation between them. The
method ``check_decision_proba_consistency`` has been added in
``utils.estimator_checks`` to check their consistency.
:issue:`7578` by :user:`Shubham Bhardwaj <shubham0704>`
- All checks in ``utils.estimator_checks``, in particular
:func:`utils.estimator_checks.check_estimator` now accept estimator
instances. Most other checks do not accept
estimator classes any more. :issue:`9019` by `Andreas Müller`_.
- Ensure that estimators' attributes ending with ``_`` are not set
in the constructor but only in the ``fit`` method. Most notably,
ensemble estimators (deriving from `ensemble.BaseEnsemble`)
now only have ``self.estimators_`` available after ``fit``.
:issue:`7464` by `Lars Buitinck`_ and `Loic Esteve`_.
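A minimal illustration of the monotonicity property checked by ``check_decision_proba_consistency`` (the toy data is made up): since ``predict_proba`` is a strictly increasing transform of ``decision_function``, both scores must rank the samples identically.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = (X[:, 0] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
d = clf.decision_function(X)      # raw scores, shape (n_samples,)
p = clf.predict_proba(X)[:, 1]    # sigmoid of the raw scores

# A monotonic relation means the two score vectors induce the same order.
assert (np.argsort(d) == np.argsort(p)).all()
```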
Code and Documentation Contributors
-----------------------------------
Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 0.18, including:
Joel Nothman, Loic Esteve, Andreas Mueller, Guillaume Lemaitre, Olivier Grisel,
Hanmin Qin, Raghav RV, Alexandre Gramfort, themrmax, Aman Dalmia, Gael
Varoquaux, Naoya Kanai, Tom Dupré la Tour, Rishikesh, Nelson Liu, Taehoon Lee,
Nelle Varoquaux, Aashil, Mikhail Korobov, Sebastin Santy, Joan Massich, Roman
Yurchak, RAKOTOARISON Herilalaina, Thierry Guillemot, Alexandre Abadie, Carol
Willing, Balakumaran Manoharan, Josh Karnofsky, Vlad Niculae, Utkarsh Upadhyay,
Dmitry Petrov, Minghui Liu, Srivatsan, Vincent Pham, Albert Thomas, Jake
VanderPlas, Attractadore, JC Liu, alexandercbooth, chkoar, Óscar Nájera,
Aarshay Jain, Kyle Gilliam, Ramana Subramanyam, CJ Carey, Clement Joudet, David
Robles, He Chen, Joris Van den Bossche, Karan Desai, Katie Luangkote, Leland
McInnes, Maniteja Nandana, Michele Lacchia, Sergei Lebedev, Shubham Bhardwaj,
akshay0724, omtcyfz, rickiepark, waterponey, Vathsala Achar, jbDelafosse, Ralf
Gommers, Ekaterina Krivich, Vivek Kumar, Ishank Gulati, Dave Elliott, ldirer,
Reiichiro Nakano, Levi John Wolf, Mathieu Blondel, Sid Kapur, Dougal J.
Sutherland, midinas, mikebenfield, Sourav Singh, Aseem Bansal, Ibraim Ganiev,
Stephen Hoover, AishwaryaRK, Steven C. Howell, Gary Foreman, Neeraj Gangwar,
Tahar, Jon Crall, dokato, Kathy Chen, ferria, Thomas Moreau, Charlie Brummitt,
Nicolas Goix, Adam Kleczewski, Sam Shleifer, Nikita Singh, Basil Beirouti,
Giorgio Patrini, Manoj Kumar, Rafael Possas, James Bourbeau, James A. Bednar,
Janine Harper, Jaye, Jean Helie, Jeremy Steward, Artsiom, John Wei, Jonathan
LIgo, Jonathan Rahn, seanpwilliams, Arthur Mensch, Josh Levy, Julian Kuhlmann,
Julien Aubert, Jörn Hees, Kai, shivamgargsya, Kat Hempstalk, Kaushik
Lakshmikanth, Kennedy, Kenneth Lyons, Kenneth Myers, Kevin Yap, Kirill Bobyrev,
Konstantin Podshumok, Arthur Imbert, Lee Murray, toastedcornflakes, Lera, Li
Li, Arthur Douillard, Mainak Jas, tobycheese, Manraj Singh, Manvendra Singh,
Marc Meketon, MarcoFalke, Matthew Brett, Matthias Gilch, Mehul Ahuja, Melanie
Goetz, Meng, Peng, Michael Dezube, Michal Baumgartner, vibrantabhi19, Artem
Golubin, Milen Paskov, Antonin Carette, Morikko, MrMjauh, NALEPA Emmanuel,
Namiya, Antoine Wendlinger, Narine Kokhlikyan, NarineK, Nate Guerin, Angus
Williams, Ang Lu, Nicole Vavrova, Nitish Pandey, Okhlopkov Daniil Olegovich,
Andy Craze, Om Prakash, Parminder Singh, Patrick Carlson, Patrick Pei, Paul
Ganssle, Paulo Haddad, Paweł Lorek, Peng Yu, Pete Bachant, Peter Bull, Peter
Csizsek, Peter Wang, Pieter Arthur de Jong, Ping-Yao, Chang, Preston Parry,
Puneet Mathur, Quentin Hibon, Andrew Smith, Andrew Jackson, 1kastner, Rameshwar
Bhaskaran, Rebecca Bilbro, Remi Rampin, Andrea Esuli, Rob Hall, Robert
Bradshaw, Romain Brault, Aman Pratik, Ruifeng Zheng, Russell Smith, Sachin
Agarwal, Sailesh Choyal, Samson Tan, Samuël Weber, Sarah Brown, Sebastian
Pölsterl, Sebastian Raschka, Sebastian Saeger, Alyssa Batula, Abhyuday Pratap
Singh, Sergey Feldman, Sergul Aydore, Sharan Yalburgi, willduan, Siddharth
Gupta, Sri Krishna, Almer, Stijn Tonk, Allen Riddell, Theofilos Papapanagiotou,
Alison, Alexis Mignon, Tommy Boucher, Tommy Löfstedt, Toshihiro Kamishima,
Tyler Folkman, Tyler Lanigan, Alexander Junge, Varun Shenoy, Victor Poughon,
Vilhelm von Ehrenheim, Aleksandr Sandrovskii, Alan Yee, Vlasios Vasileiou,
Warut Vijitbenjaronk, Yang Zhang, Yaroslav Halchenko, Yichuan Liu, Yuichi
Fujikawa, affanv14, aivision2020, xor, andreh7, brady salz, campustrampus,
Agamemnon Krasoulis, ditenberg, elena-sharova, filipj8, fukatani, gedeck,
guiniol, guoci, hakaa1, hongkahjun, i-am-xhy, jakirkham, jaroslaw-weber,
jayzed82, jeroko, jmontoyam, jonathan.striebel, josephsalmon, jschendel,
leereeves, martin-hahn, mathurinm, mehak-sachdeva, mlewis1729, mlliou112,
mthorrell, ndingwall, nuffe, yangarbiter, plagree, pldtc325, Breno Freitas,
Brett Olsen, Brian A. Alfano, Brian Burns, polmauri, Brandon Carter, Charlton
Austin, Chayant T15h, Chinmaya Pancholi, Christian Danielsen, Chung Yen,
Chyi-Kwei Yau, pravarmahajan, DOHMATOB Elvis, Daniel LeJeune, Daniel Hnyk,
Darius Morawiec, David DeTomaso, David Gasquez, David Haberthür, David
Heryanto, David Kirkby, David Nicholson, rashchedrin, Deborah Gertrude Digges,
Denis Engemann, Devansh D, Dickson, Bob Baxley, Don86, E. Lynch-Klarup, Ed
Rogers, Elizabeth Ferriss, Ellen-Co2, Fabian Egli, Fang-Chieh Chou, Bing Tian
Dai, Greg Stupp, Grzegorz Szpak, Bertrand Thirion, Hadrien Bertrand, Harizo
Rajaona, zxcvbnius, Henry Lin, Holger Peters, Icyblade Dai, Igor
Andriushchenko, Ilya, Isaac Laughlin, Iván Vallés, Aurélien Bellet, JPFrancoia,
Jacob Schreiber, Asish Mahapatra | scikit-learn | include contributors rst currentmodule sklearn Version 0 19 changes 0 19 Version 0 19 2 July 2018 This release is exclusively in order to support Python 3 7 Related changes n iter may vary from previous releases in class linear model LogisticRegression with solver lbfgs and class linear model HuberRegressor For Scipy 1 0 0 the optimizer could perform more than the requested maximum number of iterations Now both estimators will report at most max iter iterations even if more were performed issue 10723 by Joel Nothman Version 0 19 1 October 23 2017 This is a bug fix release with some minor documentation improvements and enhancements to features released in 0 19 0 Note there may be minor differences in TSNE output in this release due to issue 9623 in the case where multiple samples have equal distance to some sample Changelog API changes Reverted the addition of metrics ndcg score and metrics dcg score which had been merged into version 0 19 0 by error The implementations were broken and undocumented return train score which was added to class model selection GridSearchCV class model selection RandomizedSearchCV and func model selection cross validate in version 0 19 0 will be changing its default value from True to False in version 0 21 We found that calculating training score could have a great effect on cross validation runtime in some cases Users should explicitly set return train score to False if prediction or scoring functions are slow resulting in a deleterious effect on CV runtime or to True if they wish to use the calculated scores issue 9677 by user Kumar Ashutosh thechargedneutron and Joel Nothman correlation models and regression models from the legacy gaussian processes implementation have been belatedly deprecated issue 9717 by user Kumar Ashutosh thechargedneutron Bug fixes Avoid integer overflows in func metrics matthews corrcoef issue 9693 by user Sam Steingold sam s Fixed a bug in the objective 
function for class manifold TSNE both exact and with the Barnes Hut approximation when n components 3 issue 9711 by user goncalo rodrigues Fix regression in func model selection cross val predict where it raised an error with method predict proba for some probabilistic classifiers issue 9641 by user James Bourbeau jrbourbeau Fixed a bug where func datasets make classification modified its input weights issue 9865 by user Sachin Kelkar s4chin class model selection StratifiedShuffleSplit now works with multioutput multiclass or multilabel data with more than 1000 columns issue 9922 by user Charlie Brummitt crbrummitt Fixed a bug with nested and conditional parameter setting e g setting a pipeline step and its parameter at the same time issue 9945 by Andreas M ller and Joel Nothman Regressions in 0 19 0 fixed in 0 19 1 Fixed a bug where parallelised prediction in random forests was not thread safe and could rarely result in arbitrary errors issue 9830 by Joel Nothman Fix regression in func model selection cross val predict where it no longer accepted X as a list issue 9600 by user Rasul Kerimov CoderINusE Fixed handling of func model selection cross val predict for binary classification with method decision function issue 9593 by user Reiichiro Nakano reiinakano and core devs Fix regression in class pipeline Pipeline where it no longer accepted steps as a tuple issue 9604 by user Joris Van den Bossche jorisvandenbossche Fix bug where n iter was not properly deprecated leaving n iter unavailable for interim use in class linear model SGDClassifier class linear model SGDRegressor class linear model PassiveAggressiveClassifier class linear model PassiveAggressiveRegressor and class linear model Perceptron issue 9558 by Andreas M ller Dataset fetchers make sure temporary files are closed before removing them which caused errors on Windows issue 9847 by user Joan Massich massich Fixed a regression in class manifold TSNE where it no longer supported metrics other than 
euclidean and precomputed issue 9623 by user Oli Blum oliblum90 Enhancements Our test suite and func utils estimator checks check estimator can now be run without Nose installed issue 9697 by user Joan Massich massich To improve usability of version 0 19 s class pipeline Pipeline caching memory now allows joblib Memory instances This make use of the new func utils validation check memory helper issue 9584 by user Kumar Ashutosh thechargedneutron Some fixes to examples issue 9750 issue 9788 issue 9815 Made a FutureWarning in SGD based estimators less verbose issue 9802 by user Vrishank Bhardwaj vrishank97 Code and Documentation Contributors With thanks to Joel Nothman Loic Esteve Andreas Mueller Kumar Ashutosh Vrishank Bhardwaj Hanmin Qin Rasul Kerimov James Bourbeau Nagarjuna Kumar Nathaniel Saul Olivier Grisel Roman Yurchak Reiichiro Nakano Sachin Kelkar Sam Steingold Yaroslav Halchenko diegodlh felix goncalo rodrigues jkleint oliblum90 pasbi Anthony Gitter Ben Lawson Charlie Brummitt Didi Bar Zev Gael Varoquaux Joan Massich Joris Van den Bossche nielsenmarkus11 Version 0 19 August 12 2017 Highlights We are excited to release a number of great new features including class neighbors LocalOutlierFactor for anomaly detection class preprocessing QuantileTransformer for robust feature transformation and the class multioutput ClassifierChain meta estimator to simply account for dependencies between classes in multilabel problems We have some new algorithms in existing estimators such as multiplicative update in class decomposition NMF and multinomial class linear model LogisticRegression with L1 loss use solver saga Cross validation is now able to return the results from multiple metric evaluations The new func model selection cross validate can return many scores on the test data as well as training set performance and timings and we have extended the scoring and refit parameters for grid randomized search ref to handle multiple metrics multimetric grid search You can 
also learn faster For instance the ref new option to cache transformations pipeline cache in class pipeline Pipeline makes grid search over pipelines including slow transformations much more efficient And you can predict faster if you re sure you know what you re doing you can turn off validating that the input is finite using func config context We ve made some important fixes too We ve fixed a longstanding implementation error in func metrics average precision score so please be cautious with prior results reported from that function A number of errors in the class manifold TSNE implementation have been fixed particularly in the default Barnes Hut approximation class semi supervised LabelSpreading and class semi supervised LabelPropagation have had substantial fixes LabelPropagation was previously broken LabelSpreading should now correctly respect its alpha parameter Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures class cluster KMeans with sparse X and initial centroids given bug fix class cross decomposition PLSRegression with scale True bug fix class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor where min impurity split is used bug fix gradient boosting loss quantile bug fix class ensemble IsolationForest bug fix class feature selection SelectFdr bug fix class linear model RANSACRegressor bug fix class linear model LassoLars bug fix class linear model LassoLarsIC bug fix class manifold TSNE bug fix class neighbors NearestCentroid bug fix class semi supervised LabelSpreading bug fix class semi supervised LabelPropagation bug fix tree based models where min weight fraction leaf is used enhancement class model selection StratifiedKFold with shuffle True this change due to issue 7823 was not mentioned in the release 
notes at the time Details are listed in the changelog below While we are trying to better inform users by providing this information we cannot assure that this list is complete Changelog New features Classifiers and regressors Added class multioutput ClassifierChain for multi label classification By user Adam Kleczewski adamklec Added solver saga that implements the improved version of Stochastic Average Gradient in class linear model LogisticRegression and class linear model Ridge It allows the use of L1 penalty with multinomial logistic loss and behaves marginally better than sag during the first epochs of ridge and logistic regression issue 8446 by Arthur Mensch Other estimators Added the class neighbors LocalOutlierFactor class for anomaly detection based on nearest neighbors issue 5279 by Nicolas Goix and Alexandre Gramfort Added class preprocessing QuantileTransformer class and func preprocessing quantile transform function for features normalization based on quantiles issue 8363 by user Denis Engemann dengemann user Guillaume Lemaitre glemaitre Olivier Grisel Raghav RV user Thierry Guillemot tguillemot and Gael Varoquaux The new solver mu implements a Multiplicate Update in class decomposition NMF allowing the optimization of all beta divergences including the Frobenius norm the generalized Kullback Leibler divergence and the Itakura Saito divergence issue 5295 by Tom Dupre la Tour Model selection and evaluation class model selection GridSearchCV and class model selection RandomizedSearchCV now support simultaneous evaluation of multiple metrics Refer to the ref multimetric grid search section of the user guide for more information issue 7388 by Raghav RV Added the func model selection cross validate which allows evaluation of multiple metrics This function returns a dict with more useful information from cross validation such as the train scores fit times and score times Refer to ref multimetric cross validation section of the userguide for more information 
issue 7388 by Raghav RV Added func metrics mean squared log error which computes the mean square error of the logarithmic transformation of targets particularly useful for targets with an exponential trend issue 7655 by user Karan Desai karandesai 96 Added func metrics dcg score and func metrics ndcg score which compute Discounted cumulative gain DCG and Normalized discounted cumulative gain NDCG issue 7739 by user David Gasquez davidgasquez Added the class model selection RepeatedKFold and class model selection RepeatedStratifiedKFold issue 8120 by Neeraj Gangwar Miscellaneous Validation that input data contains no NaN or inf can now be suppressed using func config context at your own risk This will save on runtime and may be particularly useful for prediction time issue 7548 by Joel Nothman Added a test to ensure parameter listing in docstrings match the function class signature issue 9206 by Alexandre Gramfort and Raghav RV Enhancements Trees and ensembles The min weight fraction leaf constraint in tree construction is now more efficient taking a fast path to declare a node a leaf if its weight is less than 2 the minimum Note that the constructed tree will be different from previous versions where min weight fraction leaf is used issue 7441 by user Nelson Liu nelson liu class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor now support sparse input for prediction issue 6101 by user Ibraim Ganiev olologin class ensemble VotingClassifier now allows changing estimators by using meth ensemble VotingClassifier set params An estimator can also be removed by setting it to None issue 7674 by user Yichuan Liu yl565 func tree export graphviz now shows configurable number of decimal places issue 8698 by user Guillaume Lemaitre glemaitre Added flatten transform parameter to class ensemble VotingClassifier to change output shape of transform method to 2 dimensional issue 7794 by user Ibraim Ganiev olologin and user Herilalaina Rakotoarison 
herilalaina Linear kernelized and related models class linear model SGDClassifier class linear model SGDRegressor class linear model PassiveAggressiveClassifier class linear model PassiveAggressiveRegressor and class linear model Perceptron now expose max iter and tol parameters to handle convergence more precisely n iter parameter is deprecated and the fitted estimator exposes a n iter attribute with actual number of iterations before convergence issue 5036 by Tom Dupre la Tour Added average parameter to perform weight averaging in class linear model PassiveAggressiveClassifier issue 4939 by user Andrea Esuli aesuli class linear model RANSACRegressor no longer throws an error when calling fit if no inliers are found in its first iteration Furthermore causes of skipped iterations are tracked in newly added attributes n skips issue 7914 by user Michael Horrell mthorrell In class gaussian process GaussianProcessRegressor method predict is a lot faster with return std True issue 8591 by user Hadrien Bertrand hbertrand Added return std to predict method of class linear model ARDRegression and class linear model BayesianRidge issue 7838 by user Sergey Feldman sergeyf Memory usage enhancements Prevent cast from float32 to float64 in class linear model MultiTaskElasticNet class linear model LogisticRegression when using newton cg solver and class linear model Ridge when using svd sparse cg cholesky or lsqr solvers issue 8835 issue 8061 by user Joan Massich massich and user Nicolas Cordier ncordier and user Thierry Guillemot tguillemot Other predictors Custom metrics for the mod sklearn neighbors binary trees now have fewer constraints they must take two 1d arrays and return a float issue 6288 by Jake Vanderplas algorithm auto in mod sklearn neighbors estimators now chooses the most appropriate algorithm for all input types and metrics issue 9145 by user Herilalaina Rakotoarison herilalaina and user Reddy Chinthala preddy5 Decomposition manifold learning and clustering 
class cluster MiniBatchKMeans and class cluster KMeans now use significantly less memory when assigning data points to their nearest cluster center issue 7721 by user Jon Crall Erotemic class decomposition PCA class decomposition IncrementalPCA and class decomposition TruncatedSVD now expose the singular values from the underlying SVD They are stored in the attribute singular values like in class decomposition IncrementalPCA issue 7685 by user Tommy L fstedt tomlof class decomposition NMF now faster when beta loss 0 issue 9277 by user hongkahjun Memory improvements for method barnes hut in class manifold TSNE issue 7089 by user Thomas Moreau tomMoral and Olivier Grisel Optimization schedule improvements for Barnes Hut class manifold TSNE so the results are closer to the one from the reference implementation lvdmaaten bhtsne https github com lvdmaaten bhtsne by user Thomas Moreau tomMoral and Olivier Grisel Memory usage enhancements Prevent cast from float32 to float64 in class decomposition PCA and decomposition randomized svd low rank issue 9067 by Raghav RV Preprocessing and feature selection Added norm order parameter to class feature selection SelectFromModel to enable selection of the norm order when coef is more than 1D issue 6181 by user Antoine Wendlinger antoinewdg Added ability to use sparse matrices in func feature selection f regression with center True issue 8065 by user Daniel LeJeune acadiansith Small performance improvement to n gram creation in mod sklearn feature extraction text by binding methods for loops and special casing unigrams issue 7567 by user Jaye Doepke jtdoepke Relax assumption on the data for the class kernel approximation SkewedChi2Sampler Since the Skewed Chi2 kernel is defined on the open interval math skewedness infty d the transform function should not check whether X 0 but whether X self skewedness issue 7573 by user Romain Brault RomainBrault Made default kernel parameters kernel dependent in class kernel approximation 
Nystroem issue 5229 by user Saurabh Bansod mth4saurabh and Andreas M ller Model evaluation and meta estimators class pipeline Pipeline is now able to cache transformers within a pipeline by using the memory constructor parameter issue 7990 by user Guillaume Lemaitre glemaitre class pipeline Pipeline steps can now be accessed as attributes of its named steps attribute issue 8586 by user Herilalaina Rakotoarison herilalaina Added sample weight parameter to meth pipeline Pipeline score issue 7723 by user Mikhail Korobov kmike Added ability to set n jobs parameter to func pipeline make union A TypeError will be raised for any other kwargs issue 8028 by user Alexander Booth alexandercbooth class model selection GridSearchCV class model selection RandomizedSearchCV and func model selection cross val score now allow estimators with callable kernels which were previously prohibited issue 8005 by Andreas M ller func model selection cross val predict now returns output of the correct shape for all values of the argument method issue 7863 by user Aman Dalmia dalmia Added shuffle and random state parameters to shuffle training data before taking prefixes of it based on training sizes in func model selection learning curve issue 7506 by user Narine Kokhlikyan NarineK class model selection StratifiedShuffleSplit now works with multioutput multiclass or multilabel data issue 9044 by Vlad Niculae Speed improvements to class model selection StratifiedShuffleSplit issue 5991 by user Arthur Mensch arthurmensch and Joel Nothman Add shuffle parameter to func model selection train test split issue 8845 by user themrmax themrmax class multioutput MultiOutputRegressor and class multioutput MultiOutputClassifier now support online learning using partial fit issue 8053 by user Peng Yu yupbank Add max train size parameter to class model selection TimeSeriesSplit issue 8282 by user Aman Dalmia dalmia More clustering metrics are now available through func metrics get scorer and scoring 
parameters issue 8117 by Raghav RV A scorer based on func metrics explained variance score is also available issue 9259 by user Hanmin Qin qinhanmin2014 Metrics func metrics matthews corrcoef now support multiclass classification issue 8094 by user Jon Crall Erotemic Add sample weight parameter to func metrics cohen kappa score issue 8335 by user Victor Poughon vpoughon Miscellaneous func utils estimator checks check estimator now attempts to ensure that methods transform predict etc do not set attributes on the estimator issue 7533 by user Ekaterina Krivich kiote Added type checking to the accept sparse parameter in mod sklearn utils validation methods This parameter now accepts only boolean string or list tuple of strings accept sparse None is deprecated and should be replaced by accept sparse False issue 7880 by user Josh Karnofsky jkarno Make it possible to load a chunk of an svmlight formatted file by passing a range of bytes to func datasets load svmlight file issue 935 by user Olivier Grisel ogrisel class dummy DummyClassifier and class dummy DummyRegressor now accept non finite features issue 8931 by user Attractadore Bug fixes Trees and ensembles Fixed a memory leak in trees when using trees with criterion mae issue 8002 by Raghav RV Fixed a bug where class ensemble IsolationForest uses an an incorrect formula for the average path length issue 8549 by Peter Wang https github com PTRWang Fixed a bug where class ensemble AdaBoostClassifier throws ZeroDivisionError while fitting data with single class labels issue 7501 by user Dominik Krzeminski dokato Fixed a bug in class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor where a float being compared to 0 0 using caused a divide by zero error issue 7970 by user He Chen chenhe95 Fix a bug where class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor ignored the min impurity split parameter issue 8006 by user Sebastian P lsterl sebp Fixed oob score 
in class ensemble BaggingClassifier issue 8936 by user Michael Lewis mlewis1729 Fixed excessive memory usage in prediction for random forests estimators issue 8672 by user Mike Benfield mikebenfield Fixed a bug where sample weight as a list broke random forests in Python 2 issue 8068 by user xor Fixed a bug where class ensemble IsolationForest fails when max features is less than 1 issue 5732 by user Ishank Gulati IshankGulati Fix a bug where gradient boosting with loss quantile computed negative errors for negative values of ytrue ypred leading to wrong values when calling call issue 8087 by user Alexis Mignon AlexisMignon Fix a bug where class ensemble VotingClassifier raises an error when a numpy array is passed in for weights issue 7983 by user Vincent Pham vincentpham1991 Fixed a bug where func tree export graphviz raised an error when the length of features names does not match n features in the decision tree issue 8512 by user Li Li aikinogard Linear kernelized and related models Fixed a bug where func linear model RANSACRegressor fit may run until max iter if it finds a large inlier group early issue 8251 by user aivision2020 Fixed a bug where class naive bayes MultinomialNB and class naive bayes BernoulliNB failed when alpha 0 issue 5814 by user Yichuan Liu yl565 and user Herilalaina Rakotoarison herilalaina Fixed a bug where class linear model LassoLars does not give the same result as the LassoLars implementation available in R lars library issue 7849 by user Jair Montoya Martinez jmontoyam Fixed a bug in linear model RandomizedLasso class linear model Lars class linear model LassoLars class linear model LarsCV and class linear model LassoLarsCV where the parameter precompute was not used consistently across classes and some values proposed in the docstring could raise errors issue 5359 by Tom Dupre la Tour Fix inconsistent results between class linear model RidgeCV and class linear model Ridge when using normalize True issue 9302 by Alexandre Gramfort 
Fix a bug where func linear model LassoLars fit sometimes left coef as a list rather than an ndarray issue 8160 by user CJ Carey perimosocordiae Fix func linear model BayesianRidge fit to return ridge parameter alpha and lambda consistent with calculated coefficients coef and intercept issue 8224 by user Peter Gedeck gedeck Fixed a bug in class svm OneClassSVM where it returned floats instead of integer classes issue 8676 by user Vathsala Achar VathsalaAchar Fix AIC BIC criterion computation in class linear model LassoLarsIC issue 9022 by Alexandre Gramfort and user Mehmet Basbug mehmetbasbug Fixed a memory leak in our LibLinear implementation issue 9024 by user Sergei Lebedev superbobry Fix bug where stratified CV splitters did not work with class linear model LassoCV issue 8973 by user Paulo Haddad paulochf Fixed a bug in class gaussian process GaussianProcessRegressor when the standard deviation and covariance predicted without fit would fail with a unmeaningful error by default issue 6573 by user Quazi Marufur Rahman qmaruf and Manoj Kumar Other predictors Fix semi supervised BaseLabelPropagation to correctly implement LabelPropagation and LabelSpreading as done in the referenced papers issue 9239 by user Andre Ambrosio Boechat boechat107 user Utkarsh Upadhyay musically ut and Joel Nothman Decomposition manifold learning and clustering Fixed the implementation of class manifold TSNE early exageration parameter had no effect and is now used for the first 250 optimization iterations Fixed the AssertionError Tree consistency failed exception reported in issue 8992 Improve the learning schedule to match the one from the reference implementation lvdmaaten bhtsne https github com lvdmaaten bhtsne by user Thomas Moreau tomMoral and Olivier Grisel Fix a bug in class decomposition LatentDirichletAllocation where the perplexity method was returning incorrect results because the transform method returns normalized document topic distributions as of version 0 18 issue 7954 
by user Gary Foreman garyForeman Fix output shape and bugs with n jobs 1 in class decomposition SparseCoder transform and func decomposition sparse encode for one dimensional data and one component This also impacts the output shape of class decomposition DictionaryLearning issue 8086 by Andreas M ller Fixed the implementation of explained variance in class decomposition PCA decomposition RandomizedPCA and class decomposition IncrementalPCA issue 9105 by Hanmin Qin https github com qinhanmin2014 Fixed the implementation of noise variance in class decomposition PCA issue 9108 by Hanmin Qin https github com qinhanmin2014 Fixed a bug where class cluster DBSCAN gives incorrect result when input is a precomputed sparse matrix with initial rows all zero issue 8306 by user Akshay Gupta Akshay0724 Fix a bug regarding fitting class cluster KMeans with a sparse array X and initial centroids where X s means were unnecessarily being subtracted from the centroids issue 7872 by user Josh Karnofsky jkarno Fixes to the input validation in class covariance EllipticEnvelope issue 8086 by Andreas M ller Fixed a bug in class covariance MinCovDet where inputting data that produced a singular covariance matrix would cause the helper method c step to throw an exception issue 3367 by user Jeremy Steward ThatGeoGuy Fixed a bug in class manifold TSNE affecting convergence of the gradient descent issue 8768 by user David DeTomaso deto Fixed a bug in class manifold TSNE where it stored the incorrect kl divergence issue 6507 by user Sebastian Saeger ssaeger Fixed improper scaling in class cross decomposition PLSRegression with scale True issue 7819 by user jayzed82 jayzed82 class cluster SpectralCoclustering and class cluster SpectralBiclustering fit method conforms with API by accepting y and returning the object issue 6126 issue 7814 by user Laurent Direr ldirer and user Maniteja Nandana maniteja123 Fix bug where mod sklearn mixture sample methods did not return as many samples as requested 
issue 7702 by user Levi John Wolf ljwolf Fixed the shrinkage implementation in class neighbors NearestCentroid issue 9219 by Hanmin Qin https github com qinhanmin2014 Preprocessing and feature selection For sparse matrices func preprocessing normalize with return norm True will now raise a NotImplementedError with l1 or l2 norm and with norm max the norms returned will be the same as for dense matrices issue 7771 by Ang Lu https github com luang008 Fix a bug where class feature selection SelectFdr did not exactly implement Benjamini Hochberg procedure It formerly may have selected fewer features than it should issue 7490 by user Peng Meng mpjlu Fixed a bug where linear model RandomizedLasso and linear model RandomizedLogisticRegression breaks for sparse input issue 8259 by user Aman Dalmia dalmia Fix a bug where class feature extraction FeatureHasher mandatorily applied a sparse random projection to the hashed features preventing the use of class feature extraction text HashingVectorizer in a pipeline with class feature extraction text TfidfTransformer issue 7565 by user Roman Yurchak rth Fix a bug where class feature selection mutual info regression did not correctly use n neighbors issue 8181 by user Guillaume Lemaitre glemaitre Model evaluation and meta estimators Fixed a bug where model selection BaseSearchCV inverse transform returns self best estimator transform instead of self best estimator inverse transform issue 8344 by user Akshay Gupta Akshay0724 and user Rasmus Eriksson MrMjauh Added classes attribute to class model selection GridSearchCV class model selection RandomizedSearchCV grid search GridSearchCV and grid search RandomizedSearchCV that matches the classes attribute of best estimator issue 7661 and issue 8295 by user Alyssa Batula abatula user Dylan Werner Meier unautre and user Stephen Hoover stephen hoover Fixed a bug where func model selection validation curve reused the same estimator for each parameter value issue 7365 by user Aleksandr 
Sandrovskii Sundrique func model selection permutation test score now works with Pandas types issue 5697 by user Stijn Tonk equialgo Several fixes to input validation in class multiclass OutputCodeClassifier issue 8086 by Andreas M ller class multiclass OneVsOneClassifier s partial fit now ensures all classes are provided up front issue 6250 by user Asish Panda kaichogami Fix func multioutput MultiOutputClassifier predict proba to return a list of 2d arrays rather than a 3d array In the case where different target columns had different numbers of classes a ValueError would be raised on trying to stack matrices with different dimensions issue 8093 by user Peter Bull pjbull Cross validation now works with Pandas datatypes that have a read only index issue 9507 by Loic Esteve Metrics func metrics average precision score no longer linearly interpolates between operating points and instead weighs precisions by the change in recall since the last operating point as per the Wikipedia entry https en wikipedia org wiki Average precision 7356 https github com scikit learn scikit learn pull 7356 By user Nick Dingwall ndingwall and Gael Varoquaux Fix a bug in metrics classification check targets which would return binary if y true and y pred were both binary but the union of y true and y pred was multiclass issue 8377 by Loic Esteve Fixed an integer overflow bug in func metrics confusion matrix and hence func metrics cohen kappa score issue 8354 issue 7929 by Joel Nothman and user Jon Crall Erotemic Fixed passing of gamma parameter to the chi2 kernel in func metrics pairwise pairwise kernels issue 5211 by user Nick Rhinehart nrhine1 user Saurabh Bansod mth4saurabh and Andreas M ller Miscellaneous Fixed a bug when func datasets make classification fails when generating more than 30 features issue 8159 by user Herilalaina Rakotoarison herilalaina Fixed a bug where func datasets make moons gives an incorrect result when n samples is odd issue 8198 by user Josh Levy levy5674 Some 
fetch functions in mod sklearn datasets were ignoring the download if missing keyword issue 7944 by user Ralf Gommers rgommers Fix estimators to accept a sample weight parameter of type pandas Series in their fit function issue 7825 by Kathleen Chen Fix a bug in cases where numpy cumsum may be numerically unstable raising an exception if instability is identified issue 7376 and issue 7331 by Joel Nothman and user yangarbiter Fix a bug where base BaseEstimator getstate obstructed pickling customizations of child classes when used in a multiple inheritance context issue 8316 by user Holger Peters HolgerPeters Update Sphinx Gallery from 0 1 4 to 0 1 7 for resolving links in documentation build with Sphinx 1 5 issue 8010 issue 7986 by user Oscar Najera Titan C Add data home parameter to func sklearn datasets fetch kddcup99 issue 9289 by Loic Esteve Fix dataset loaders using Python 3 version of makedirs to also work in Python 2 issue 9284 by user Sebastin Santy SebastinSanty Several minor issues were fixed with thanks to the alerts of lgtm com https lgtm com issue 9278 by user Jean Helie jhelie among others API changes summary Trees and ensembles Gradient boosting base models are no longer estimators By Andreas M ller All tree based estimators now accept a min impurity decrease parameter in lieu of the min impurity split which is now deprecated The min impurity decrease helps stop splitting the nodes in which the weighted impurity decrease from splitting is no longer at least min impurity decrease issue 8449 by Raghav RV Linear kernelized and related models n iter parameter is deprecated in class linear model SGDClassifier class linear model SGDRegressor class linear model PassiveAggressiveClassifier class linear model PassiveAggressiveRegressor and class linear model Perceptron By Tom Dupre la Tour Other predictors neighbors LSHForest has been deprecated and will be removed in 0 21 due to poor performance issue 9078 by user Laurent Direr ldirer class neighbors 
NearestCentroid no longer purports to support metric precomputed which now raises an error issue 8515 by user Sergul Aydore sergulaydore The alpha parameter of class semi supervised LabelPropagation now has no effect and is deprecated to be removed in 0 21 issue 9239 by user Andre Ambrosio Boechat boechat107 user Utkarsh Upadhyay musically ut and Joel Nothman Decomposition manifold learning and clustering Deprecate the doc topic distr argument of the perplexity method in class decomposition LatentDirichletAllocation because the user no longer has access to the unnormalized document topic distribution needed for the perplexity calculation issue 7954 by user Gary Foreman garyForeman The n topics parameter of class decomposition LatentDirichletAllocation has been renamed to n components and will be removed in version 0 21 issue 8922 by user Attractadore meth decomposition SparsePCA transform s ridge alpha parameter is deprecated in preference for class parameter issue 8137 by user Naoya Kanai naoyak class cluster DBSCAN now has a metric params parameter issue 8139 by user Naoya Kanai naoyak Preprocessing and feature selection class feature selection SelectFromModel now has a partial fit method only if the underlying estimator does By Andreas M ller class feature selection SelectFromModel now validates the threshold parameter and sets the threshold attribute during the call to fit and no longer during the call to transform By Andreas M ller The non negative parameter in class feature extraction FeatureHasher has been deprecated and replaced with a more principled alternative alternate sign issue 7565 by user Roman Yurchak rth linear model RandomizedLogisticRegression and linear model RandomizedLasso have been deprecated and will be removed in version 0 21 issue 8995 by user Ramana S sentient07 Model evaluation and meta estimators Deprecate the fit params constructor input to the class model selection GridSearchCV and class model selection RandomizedSearchCV in favor of 
passing keyword parameters to the fit methods of those classes Data dependent parameters needed for model training should be passed as keyword arguments to fit and conforming to this convention will allow the hyperparameter selection classes to be used with tools such as func model selection cross val predict issue 2879 by user Stephen Hoover stephen hoover In version 0 21 the default behavior of splitters that use the test size and train size parameter will change such that specifying train size alone will cause test size to be the remainder issue 7459 by user Nelson Liu nelson liu class multiclass OneVsRestClassifier now has partial fit decision function and predict proba methods only when the underlying estimator does issue 7812 by Andreas M ller and user Mikhail Korobov kmike class multiclass OneVsRestClassifier now has a partial fit method only if the underlying estimator does By Andreas M ller The decision function output shape for binary classification in class multiclass OneVsRestClassifier and class multiclass OneVsOneClassifier is now n samples to conform to scikit learn conventions issue 9100 by Andreas M ller The func multioutput MultiOutputClassifier predict proba function used to return a 3d array n samples n classes n outputs In the case where different target columns had different numbers of classes a ValueError would be raised on trying to stack matrices with different dimensions This function now returns a list of arrays where the length of the list is n outputs and each array is n samples n classes for that particular output issue 8093 by user Peter Bull pjbull Replace attribute named steps dict to class utils Bunch in class pipeline Pipeline to enable tab completion in interactive environment In the case conflict value on named steps and dict attribute dict behavior will be prioritized issue 8481 by user Herilalaina Rakotoarison herilalaina Miscellaneous Deprecate the y parameter in transform and inverse transform The method should not accept y 
parameter as it s used at the prediction time issue 8174 by user Tahar Zanouda tzano Alexandre Gramfort and Raghav RV SciPy 0 13 3 and NumPy 1 8 2 are now the minimum supported versions for scikit learn The following backported functions in mod sklearn utils have been removed or deprecated accordingly issue 8854 and issue 8874 by user Naoya Kanai naoyak The store covariances and covariances parameters of class discriminant analysis QuadraticDiscriminantAnalysis has been renamed to store covariance and covariance to be consistent with the corresponding parameter names of the class discriminant analysis LinearDiscriminantAnalysis They will be removed in version 0 21 issue 7998 by user Jiacheng mrbeann Removed in 0 19 utils fixes argpartition utils fixes array equal utils fixes astype utils fixes bincount utils fixes expit utils fixes frombuffer empty utils fixes in1d utils fixes norm utils fixes rankdata utils fixes safe copy Deprecated in 0 19 to be removed in 0 21 utils arpack eigs utils arpack eigsh utils arpack svds utils extmath fast dot utils extmath logsumexp utils extmath norm utils extmath pinvh utils graph graph laplacian utils random choice utils sparsetools connected components utils stats rankdata Estimators with both methods decision function and predict proba are now required to have a monotonic relation between them The method check decision proba consistency has been added in utils estimator checks to check their consistency issue 7578 by user Shubham Bhardwaj shubham0704 All checks in utils estimator checks in particular func utils estimator checks check estimator now accept estimator instances Most other checks do not accept estimator classes any more issue 9019 by Andreas M ller Ensure that estimators attributes ending with are not set in the constructor but only in the fit method Most notably ensemble estimators deriving from ensemble BaseEnsemble now only have self estimators available after fit issue 7464 by Lars Buitinck and Loic Esteve Code 
and Documentation Contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0 18 including Joel Nothman Loic Esteve Andreas Mueller Guillaume Lemaitre Olivier Grisel Hanmin Qin Raghav RV Alexandre Gramfort themrmax Aman Dalmia Gael Varoquaux Naoya Kanai Tom Dupr la Tour Rishikesh Nelson Liu Taehoon Lee Nelle Varoquaux Aashil Mikhail Korobov Sebastin Santy Joan Massich Roman Yurchak RAKOTOARISON Herilalaina Thierry Guillemot Alexandre Abadie Carol Willing Balakumaran Manoharan Josh Karnofsky Vlad Niculae Utkarsh Upadhyay Dmitry Petrov Minghui Liu Srivatsan Vincent Pham Albert Thomas Jake VanderPlas Attractadore JC Liu alexandercbooth chkoar scar N jera Aarshay Jain Kyle Gilliam Ramana Subramanyam CJ Carey Clement Joudet David Robles He Chen Joris Van den Bossche Karan Desai Katie Luangkote Leland McInnes Maniteja Nandana Michele Lacchia Sergei Lebedev Shubham Bhardwaj akshay0724 omtcyfz rickiepark waterponey Vathsala Achar jbDelafosse Ralf Gommers Ekaterina Krivich Vivek Kumar Ishank Gulati Dave Elliott ldirer Reiichiro Nakano Levi John Wolf Mathieu Blondel Sid Kapur Dougal J Sutherland midinas mikebenfield Sourav Singh Aseem Bansal Ibraim Ganiev Stephen Hoover AishwaryaRK Steven C Howell Gary Foreman Neeraj Gangwar Tahar Jon Crall dokato Kathy Chen ferria Thomas Moreau Charlie Brummitt Nicolas Goix Adam Kleczewski Sam Shleifer Nikita Singh Basil Beirouti Giorgio Patrini Manoj Kumar Rafael Possas James Bourbeau James A Bednar Janine Harper Jaye Jean Helie Jeremy Steward Artsiom John Wei Jonathan LIgo Jonathan Rahn seanpwilliams Arthur Mensch Josh Levy Julian Kuhlmann Julien Aubert J rn Hees Kai shivamgargsya Kat Hempstalk Kaushik Lakshmikanth Kennedy Kenneth Lyons Kenneth Myers Kevin Yap Kirill Bobyrev Konstantin Podshumok Arthur Imbert Lee Murray toastedcornflakes Lera Li Li Arthur Douillard Mainak Jas tobycheese Manraj Singh Manvendra Singh Marc Meketon MarcoFalke Matthew Brett Matthias Gilch Mehul Ahuja 
Melanie Goetz Meng Peng Michael Dezube Michal Baumgartner vibrantabhi19 Artem Golubin Milen Paskov Antonin Carette Morikko MrMjauh NALEPA Emmanuel Namiya Antoine Wendlinger Narine Kokhlikyan NarineK Nate Guerin Angus Williams Ang Lu Nicole Vavrova Nitish Pandey Okhlopkov Daniil Olegovich Andy Craze Om Prakash Parminder Singh Patrick Carlson Patrick Pei Paul Ganssle Paulo Haddad Pawe Lorek Peng Yu Pete Bachant Peter Bull Peter Csizsek Peter Wang Pieter Arthur de Jong Ping Yao Chang Preston Parry Puneet Mathur Quentin Hibon Andrew Smith Andrew Jackson 1kastner Rameshwar Bhaskaran Rebecca Bilbro Remi Rampin Andrea Esuli Rob Hall Robert Bradshaw Romain Brault Aman Pratik Ruifeng Zheng Russell Smith Sachin Agarwal Sailesh Choyal Samson Tan Samu l Weber Sarah Brown Sebastian P lsterl Sebastian Raschka Sebastian Saeger Alyssa Batula Abhyuday Pratap Singh Sergey Feldman Sergul Aydore Sharan Yalburgi willduan Siddharth Gupta Sri Krishna Almer Stijn Tonk Allen Riddell Theofilos Papapanagiotou Alison Alexis Mignon Tommy Boucher Tommy L fstedt Toshihiro Kamishima Tyler Folkman Tyler Lanigan Alexander Junge Varun Shenoy Victor Poughon Vilhelm von Ehrenheim Aleksandr Sandrovskii Alan Yee Vlasios Vasileiou Warut Vijitbenjaronk Yang Zhang Yaroslav Halchenko Yichuan Liu Yuichi Fujikawa affanv14 aivision2020 xor andreh7 brady salz campustrampus Agamemnon Krasoulis ditenberg elena sharova filipj8 fukatani gedeck guiniol guoci hakaa1 hongkahjun i am xhy jakirkham jaroslaw weber jayzed82 jeroko jmontoyam jonathan striebel josephsalmon jschendel leereeves martin hahn mathurinm mehak sachdeva mlewis1729 mlliou112 mthorrell ndingwall nuffe yangarbiter plagree pldtc325 Breno Freitas Brett Olsen Brian A Alfano Brian Burns polmauri Brandon Carter Charlton Austin Chayant T15h Chinmaya Pancholi Christian Danielsen Chung Yen Chyi Kwei Yau pravarmahajan DOHMATOB Elvis Daniel LeJeune Daniel Hnyk Darius Morawiec David DeTomaso David Gasquez David Haberth r David Heryanto David Kirkby David 
Nicholson rashchedrin Deborah Gertrude Digges Denis Engemann Devansh D Dickson Bob Baxley Don86 E Lynch Klarup Ed Rogers Elizabeth Ferriss Ellen Co2 Fabian Egli Fang Chieh Chou Bing Tian Dai Greg Stupp Grzegorz Szpak Bertrand Thirion Hadrien Bertrand Harizo Rajaona zxcvbnius Henry Lin Holger Peters Icyblade Dai Igor Andriushchenko Ilya Isaac Laughlin Iv n Vall s Aur lien Bellet JPFrancoia Jacob Schreiber Asish Mahapatra |
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.18
============
.. warning::
Scikit-learn 0.18 is the last major release of scikit-learn to support Python 2.6.
Later versions of scikit-learn will require Python 2.7 or above.
.. _changes_0_18_2:
Version 0.18.2
==============
**June 20, 2017**
Changelog
---------
- Fixes for compatibility with NumPy 1.13.0: :issue:`7946` :issue:`8355` by
`Loic Esteve`_.
- Minor compatibility changes in the examples :issue:`9010` :issue:`8040`
:issue:`9149`.
Code Contributors
-----------------
Aman Dalmia, Loic Esteve, Nate Guerin, Sergei Lebedev
.. _changes_0_18_1:
Version 0.18.1
==============
**November 11, 2016**
Changelog
---------
Enhancements
............
- Improved ``sample_without_replacement`` speed by utilizing
numpy.random.permutation for most cases. As a result,
samples may differ in this release for a fixed random state.
Affected estimators:
- :class:`ensemble.BaggingClassifier`
- :class:`ensemble.BaggingRegressor`
- :class:`linear_model.RANSACRegressor`
- :class:`model_selection.RandomizedSearchCV`
- :class:`random_projection.SparseRandomProjection`
This also affects the :meth:`datasets.make_classification`
method.
Bug fixes
.........
- Fix issue where ``min_grad_norm`` and ``n_iter_without_progress``
parameters were not being utilised by :class:`manifold.TSNE`.
:issue:`6497` by :user:`Sebastian Säger <ssaeger>`
- Fix bug for svm's decision values when ``decision_function_shape``
is ``ovr`` in :class:`svm.SVC`.
:class:`svm.SVC`'s decision_function was incorrect from versions
0.17.0 through 0.18.0.
:issue:`7724` by `Bing Tian Dai`_
- Attribute ``explained_variance_ratio`` of
:class:`discriminant_analysis.LinearDiscriminantAnalysis` calculated
with SVD and Eigen solver are now of the same length. :issue:`7632`
by :user:`JPFrancoia <JPFrancoia>`
- Fixes issue in :ref:`univariate_feature_selection` where score
functions were not accepting multi-label targets. :issue:`7676`
by :user:`Mohammed Affan <affanv14>`
- Fixed setting parameters when calling ``fit`` multiple times on
:class:`feature_selection.SelectFromModel`. :issue:`7756` by `Andreas Müller`_
- Fixes issue in ``partial_fit`` method of
:class:`multiclass.OneVsRestClassifier` when number of classes used in
``partial_fit`` was less than the total number of classes in the
data. :issue:`7786` by `Srivatsan Ramesh`_
- Fixes issue in :class:`calibration.CalibratedClassifierCV` where
  the predicted probabilities did not sum to 1 for each sample, and
  ``CalibratedClassifierCV`` now handles the case where the training set
  has fewer classes than the full data. :issue:`7799` by
  `Srivatsan Ramesh`_
- Fix a bug where :class:`sklearn.feature_selection.SelectFdr` did not
exactly implement Benjamini-Hochberg procedure. It formerly may have
selected fewer features than it should.
:issue:`7490` by :user:`Peng Meng <mpjlu>`.
- :class:`sklearn.manifold.LocallyLinearEmbedding` now correctly handles
integer inputs. :issue:`6282` by `Jake Vanderplas`_.
- The ``min_weight_fraction_leaf`` parameter of tree-based classifiers and
regressors now assumes uniform sample weights by default if the
``sample_weight`` argument is not passed to the ``fit`` function.
Previously, the parameter was silently ignored. :issue:`7301`
by :user:`Nelson Liu <nelson-liu>`.
- Numerical issue with :class:`linear_model.RidgeCV` on centered data when
`n_features > n_samples`. :issue:`6178` by `Bertrand Thirion`_
- Tree splitting criterion classes' cloning/pickling is now memory safe.
  :issue:`7680` by :user:`Ibraim Ganiev <olologin>`.
- Fixed a bug where :class:`decomposition.NMF` sets its ``n_iters_``
attribute in `transform()`. :issue:`7553` by :user:`Ekaterina
Krivich <kiote>`.
- :class:`sklearn.linear_model.LogisticRegressionCV` now correctly handles
string labels. :issue:`5874` by `Raghav RV`_.
- Fixed a bug where :func:`sklearn.model_selection.train_test_split` raised
an error when ``stratify`` is a list of string labels. :issue:`7593` by
`Raghav RV`_.
- Fixed a bug where :class:`sklearn.model_selection.GridSearchCV` and
:class:`sklearn.model_selection.RandomizedSearchCV` were not pickleable
because of a pickling bug in ``np.ma.MaskedArray``. :issue:`7594` by
`Raghav RV`_.
- All cross-validation utilities in :mod:`sklearn.model_selection` now
permit one time cross-validation splitters for the ``cv`` parameter. Also
non-deterministic cross-validation splitters (where multiple calls to
``split`` produce dissimilar splits) can be used as ``cv`` parameter.
The :class:`sklearn.model_selection.GridSearchCV` will cross-validate each
parameter setting on the split produced by the first ``split`` call
to the cross-validation splitter. :issue:`7660` by `Raghav RV`_.
- Fix bug where :meth:`preprocessing.MultiLabelBinarizer.fit_transform`
returned an invalid CSR matrix.
:issue:`7750` by :user:`CJ Carey <perimosocordiae>`.
- Fixed a bug where :func:`metrics.pairwise.cosine_distances` could return a
small negative distance. :issue:`7732` by :user:`Artsion <asanakoy>`.
API changes summary
-------------------
Trees and forests
- The ``min_weight_fraction_leaf`` parameter of tree-based classifiers and
regressors now assumes uniform sample weights by default if the
``sample_weight`` argument is not passed to the ``fit`` function.
Previously, the parameter was silently ignored. :issue:`7301` by :user:`Nelson
Liu <nelson-liu>`.
- Tree splitting criterion classes' cloning/pickling is now memory safe.
:issue:`7680` by :user:`Ibraim Ganiev <olologin>`.
Linear, kernelized and related models
- Length of ``explained_variance_ratio`` of
  :class:`discriminant_analysis.LinearDiscriminantAnalysis`
  changed for both Eigen and SVD solvers. The attribute now has a length
  of min(n_components, n_classes - 1). :issue:`7632`
  by :user:`JPFrancoia <JPFrancoia>`
- Numerical issue with :class:`linear_model.RidgeCV` on centered data when
``n_features > n_samples``. :issue:`6178` by `Bertrand Thirion`_
.. _changes_0_18:
Version 0.18
============
**September 28, 2016**
.. _model_selection_changes:
Model Selection Enhancements and API Changes
--------------------------------------------
- **The model_selection module**
The new module :mod:`sklearn.model_selection`, which groups together the
functionalities of formerly `sklearn.cross_validation`,
`sklearn.grid_search` and `sklearn.learning_curve`, introduces new
possibilities such as nested cross-validation and better manipulation of
parameter searches with Pandas.
Many things will stay the same but there are some key differences. Read
below to know more about the changes.
- **Data-independent CV splitters enabling nested cross-validation**
The new cross-validation splitters, defined in the
:mod:`sklearn.model_selection`, are no longer initialized with any
data-dependent parameters such as ``y``. Instead they expose a
`split` method that takes in the data and yields a generator for the
different splits.
This change makes it possible to use the cross-validation splitters to
perform nested cross-validation, facilitated by
:class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` utilities.
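The data-independent splitters make nested cross-validation a one-liner; a minimal sketch, with an estimator and parameter grid chosen here purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Inner loop selects hyperparameters; outer loop estimates generalization.
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=3, shuffle=True, random_state=1)

search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=inner_cv)
scores = cross_val_score(search, X, y, cv=outer_cv)  # one score per outer split
```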
- **The enhanced cv_results_ attribute**
The new ``cv_results_`` attribute (of :class:`model_selection.GridSearchCV`
and :class:`model_selection.RandomizedSearchCV`) introduced in lieu of the
``grid_scores_`` attribute is a dict of 1D arrays with elements in each
array corresponding to the parameter settings (i.e. search candidates).
The ``cv_results_`` dict can be easily imported into ``pandas`` as a
``DataFrame`` for exploring the search results.
The ``cv_results_`` arrays include scores for each cross-validation split
(with keys such as ``'split0_test_score'``), as well as their mean
(``'mean_test_score'``) and standard deviation (``'std_test_score'``).
The ranks for the search candidates (based on their mean
cross-validation score) are available at ``cv_results_['rank_test_score']``.
The values for each parameter are stored separately as numpy
masked object arrays. The value for a given search candidate is masked if
the corresponding parameter is not applicable. Additionally, a list of all
the parameter dicts is stored at ``cv_results_['params']``.
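For instance, the dict can be loaded straight into a ``DataFrame`` (a minimal sketch; the grid is illustrative only):

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), param_grid={"C": [1, 10]}, cv=3).fit(X, y)

# Columns include 'params', 'mean_test_score', 'std_test_score',
# 'rank_test_score' and the masked 'param_C' array.
df = pd.DataFrame(search.cv_results_)
print(df[["param_C", "mean_test_score", "rank_test_score"]])
```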
- **Parameters n_folds and n_iter renamed to n_splits**
Some parameter names have changed:
The ``n_folds`` parameter in new :class:`model_selection.KFold`,
:class:`model_selection.GroupKFold` (see below for the name change),
and :class:`model_selection.StratifiedKFold` is now renamed to
``n_splits``. The ``n_iter`` parameter in
:class:`model_selection.ShuffleSplit`, the new class
:class:`model_selection.GroupShuffleSplit` and
:class:`model_selection.StratifiedShuffleSplit` is now renamed to
``n_splits``.
- **Rename of splitter classes which accepts group labels along with data**
The cross-validation splitters ``LabelKFold``,
``LabelShuffleSplit``, ``LeaveOneLabelOut`` and ``LeavePLabelOut`` have
been renamed to :class:`model_selection.GroupKFold`,
:class:`model_selection.GroupShuffleSplit`,
:class:`model_selection.LeaveOneGroupOut` and
:class:`model_selection.LeavePGroupsOut` respectively.
Note the change from singular to plural form in
:class:`model_selection.LeavePGroupsOut`.
- **Fit parameter labels renamed to groups**
The ``labels`` parameter in the `split` method of the newly renamed
splitters :class:`model_selection.GroupKFold`,
:class:`model_selection.LeaveOneGroupOut`,
:class:`model_selection.LeavePGroupsOut`,
:class:`model_selection.GroupShuffleSplit` is renamed to ``groups``
following the new nomenclature of their class names.
- **Parameter n_labels renamed to n_groups**
The parameter ``n_labels`` in the newly renamed
:class:`model_selection.LeavePGroupsOut` is changed to ``n_groups``.
- **Training scores and Timing information**
``cv_results_`` also includes the training scores for each
cross-validation split (with keys such as ``'split0_train_score'``), as
well as their mean (``'mean_train_score'``) and standard deviation
(``'std_train_score'``). To avoid the cost of evaluating training score,
set ``return_train_score=False``.
Additionally the mean and standard deviation of the times taken to split,
train and score the model across all the cross-validation splits is
available at the key ``'mean_time'`` and ``'std_time'`` respectively.
Changelog
---------
New features
............
Classifiers and Regressors
- The Gaussian Process module has been reimplemented and now offers classification
and regression estimators through :class:`gaussian_process.GaussianProcessClassifier`
and :class:`gaussian_process.GaussianProcessRegressor`. Among other things, the new
implementation supports kernel engineering, gradient-based hyperparameter optimization or
sampling of functions from GP prior and GP posterior. Extensive documentation and
examples are provided. By `Jan Hendrik Metzen`_.
- Added new supervised learning algorithm: :ref:`Multi-layer Perceptron <multilayer_perceptron>`
:issue:`3204` by :user:`Issam H. Laradji <IssamLaradji>`
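A minimal usage sketch (the layer size and iteration budget are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# One hidden layer with 16 units; max_iter raised so training can converge.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```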
- Added :class:`linear_model.HuberRegressor`, a linear model robust to outliers.
:issue:`5291` by `Manoj Kumar`_.
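The robustness to outliers can be seen on synthetic data (a sketch; the data generation is made up for illustration):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X.ravel() + rng.normal(scale=0.1, size=100)
y[:5] += 20.0  # corrupt a few targets with gross outliers

# The fitted slope stays close to 3 despite the corrupted samples.
huber = HuberRegressor().fit(X, y)
```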
- Added the :class:`multioutput.MultiOutputRegressor` meta-estimator. It
converts single output regressors to multi-output regressors by fitting
one regressor per output. By :user:`Tim Head <betatim>`.
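A minimal sketch wrapping a single-output regressor (the data here is synthetic and purely illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.RandomState(0)
X = rng.rand(100, 4)
# Two targets; GradientBoostingRegressor alone handles only one.
Y = np.column_stack([X[:, 0] + 2 * X[:, 1], 3 * X[:, 2] - X[:, 3]])

est = MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X, Y)
pred = est.predict(X[:2])  # shape (2, 2): one column per output
```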
Other estimators
- New :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`
replace former mixture models, employing faster inference
for sounder results. :issue:`7295` by :user:`Wei Xue <xuewei4d>` and
:user:`Thierry Guillemot <tguillemot>`.
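A minimal sketch of the new API (synthetic two-cluster data for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(6.0, 1.0, size=(100, 2))])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gm.predict(X)       # hard assignments
proba = gm.predict_proba(X)  # soft responsibilities, shape (200, 2)
```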
- Class `decomposition.RandomizedPCA` is now factored into :class:`decomposition.PCA`
  and is available by calling it with the parameter ``svd_solver='randomized'``.
The default number of ``n_iter`` for ``'randomized'`` has changed to 4. The old
behavior of PCA is recovered by ``svd_solver='full'``. An additional solver
calls ``arpack`` and performs truncated (non-randomized) SVD. By default,
the best solver is selected depending on the size of the input and the
number of components requested. :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.
- Added two functions for mutual information estimation:
:func:`feature_selection.mutual_info_classif` and
:func:`feature_selection.mutual_info_regression`. These functions can be
used in :class:`feature_selection.SelectKBest` and
:class:`feature_selection.SelectPercentile` as score functions.
By :user:`Andrea Bravi <AndreaBravi>` and :user:`Nikolay Mayorov <nmayorov>`.
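For example, used as a score function in univariate selection (a minimal sketch):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_iris(return_X_y=True)

# Keep the two features with the highest estimated mutual information with y.
selector = SelectKBest(score_func=mutual_info_classif, k=2).fit(X, y)
X_reduced = selector.transform(X)  # shape (150, 2)
```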
- Added the :class:`ensemble.IsolationForest` class for anomaly detection based on
random forests. By `Nicolas Goix`_.
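A minimal sketch on synthetic data (the data and sizes here are illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X_train = rng.normal(0.0, 1.0, size=(200, 2))      # inliers
X_outliers = rng.uniform(-6.0, 6.0, size=(10, 2))  # scattered outliers

iso = IsolationForest(random_state=0).fit(X_train)
pred = iso.predict(np.vstack([X_train, X_outliers]))  # +1 inlier, -1 outlier
```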
- Added ``algorithm="elkan"`` to :class:`cluster.KMeans` implementing
Elkan's fast K-Means algorithm. By `Andreas Müller`_.
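Selecting the new algorithm (a minimal sketch with synthetic blobs):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Elkan's algorithm uses the triangle inequality to skip distance
# computations; on dense data it matches the classic Lloyd iteration.
km = KMeans(n_clusters=3, algorithm="elkan", n_init=10, random_state=0).fit(X)
```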
Model selection and evaluation
- Added :func:`metrics.fowlkes_mallows_score`, the Fowlkes-Mallows
  Index which measures the similarity of two clusterings of a set of points.
  By :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.
- Added `metrics.calinski_harabaz_score`, which computes the Calinski
and Harabaz score to evaluate the resulting clustering of a set of points.
By :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.
- Added new cross-validation splitter
:class:`model_selection.TimeSeriesSplit` to handle time series data.
:issue:`6586` by :user:`YenChen Lin <yenchenlin>`
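Each successive split trains on all observations preceding the test indices (a minimal sketch):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(6, 2)  # six time-ordered samples
tscv = TimeSeriesSplit(n_splits=3)

splits = list(tscv.split(X))
for train_idx, test_idx in splits:
    # Training indices always come strictly before the test indices.
    assert train_idx.max() < test_idx.min()
```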
- The cross-validation iterators are replaced by cross-validation splitters
available from :mod:`sklearn.model_selection`, allowing for nested
cross-validation. See :ref:`model_selection_changes` for more information.
:issue:`4294` by `Raghav RV`_.
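A minimal usage sketch of the new :class:`model_selection.TimeSeriesSplit` splitter (synthetic data; the loop body only checks the temporal ordering property):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(10, 2)

# Each split's training fold strictly precedes its test fold in time.
tscv = TimeSeriesSplit(n_splits=3)
for train_idx, test_idx in tscv.split(X):
    assert train_idx.max() < test_idx.min()
```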
Enhancements
............
Trees and ensembles
- Added a new splitting criterion for :class:`tree.DecisionTreeRegressor`,
the mean absolute error. This criterion can also be used in
:class:`ensemble.ExtraTreesRegressor`,
:class:`ensemble.RandomForestRegressor`, and the gradient boosting
estimators. :issue:`6667` by :user:`Nelson Liu <nelson-liu>`.
- Added weighted impurity-based early stopping criterion for decision tree
  growth. :issue:`6954` by :user:`Nelson Liu <nelson-liu>`.
- The random forest, extra tree and decision tree estimators now have a
method ``decision_path`` which returns the decision path of samples in
the tree. By `Arnaud Joly`_.
- A new example has been added unveiling the decision tree structure.
By `Arnaud Joly`_.
- Random forest, extra trees, decision tree and gradient boosting estimators
  now accept the parameters ``min_samples_split`` and ``min_samples_leaf``
  provided as a percentage of the training samples. By :user:`yelite <yelite>` and `Arnaud Joly`_.
- Gradient boosting estimators accept the parameter ``criterion`` to specify
  the splitting criterion used in the built decision trees.
:issue:`6667` by :user:`Nelson Liu <nelson-liu>`.
- The memory footprint is reduced (sometimes greatly) for
`ensemble.bagging.BaseBagging` and classes that inherit from it,
  i.e., :class:`ensemble.BaggingClassifier`,
:class:`ensemble.BaggingRegressor`, and :class:`ensemble.IsolationForest`,
by dynamically generating attribute ``estimators_samples_`` only when it is
needed. By :user:`David Staub <staubda>`.
- Added ``n_jobs`` and ``sample_weight`` parameters for
:class:`ensemble.VotingClassifier` to fit underlying estimators in parallel.
:issue:`5805` by :user:`Ibraim Ganiev <olologin>`.
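A minimal sketch of the new ``decision_path`` method and the fractional ``min_samples_split`` parameter mentioned above (Iris data; variable names are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Sparse indicator matrix: entry (i, j) is nonzero when sample i
# traverses node j of the fitted tree.
indicator = clf.decision_path(X[:5])
print(indicator.shape[0])  # 5

# min_samples_split may now also be given as a fraction of the samples.
clf_frac = DecisionTreeClassifier(min_samples_split=0.1,
                                  random_state=0).fit(X, y)
```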
Linear, kernelized and related models
- In :class:`linear_model.LogisticRegression`, the SAG solver is now
available in the multinomial case. :issue:`5251` by `Tom Dupre la Tour`_.
- :class:`linear_model.RANSACRegressor`, :class:`svm.LinearSVC` and
:class:`svm.LinearSVR` now support ``sample_weight``.
By :user:`Imaculate <Imaculate>`.
- Add parameter ``loss`` to :class:`linear_model.RANSACRegressor` to measure the
error on the samples for every trial. By `Manoj Kumar`_.
- Prediction of out-of-sample events with Isotonic Regression
(:class:`isotonic.IsotonicRegression`) is now much faster (over 1000x in tests with synthetic
data). By :user:`Jonathan Arfa <jarfa>`.
- Isotonic regression (:class:`isotonic.IsotonicRegression`) now uses a better algorithm to avoid
`O(n^2)` behavior in pathological cases, and is also generally faster
  (:issue:`6691`). By `Antony Lee`_.
- :class:`naive_bayes.GaussianNB` now accepts data-independent class-priors
through the parameter ``priors``. By :user:`Guillaume Lemaitre <glemaitre>`.
- :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso`
  now work with ``np.float32`` input data without converting it
  into ``np.float64``. This reduces memory
  consumption. :issue:`6913` by :user:`YenChen Lin <yenchenlin>`.
- :class:`semi_supervised.LabelPropagation` and :class:`semi_supervised.LabelSpreading`
now accept arbitrary kernel functions in addition to strings ``knn`` and ``rbf``.
:issue:`5762` by :user:`Utkarsh Upadhyay <musically-ut>`.
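As a sketch of the new data-independent class priors in :class:`naive_bayes.GaussianNB` (tiny synthetic data; values are illustrative):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[0.0], [0.2], [1.0], [1.2]])
y = np.array([0, 0, 1, 1])

# Fix the class priors via the new `priors` parameter instead of
# estimating them from the training data.
clf = GaussianNB(priors=[0.8, 0.2]).fit(X, y)
print(clf.class_prior_)  # [0.8, 0.2]
```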
Decomposition, manifold learning and clustering
- Added ``inverse_transform`` function to :class:`decomposition.NMF` to compute
data matrix of original shape. By :user:`Anish Shah <AnishShah>`.
- :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now work
  with ``np.float32`` and ``np.float64`` input data without converting it.
  Using ``np.float32`` thus reduces memory consumption.
:issue:`6846` by :user:`Sebastian Säger <ssaeger>` and
:user:`YenChen Lin <yenchenlin>`.
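A minimal sketch of the new ``inverse_transform`` of :class:`decomposition.NMF` (random non-negative data; names are illustrative):

```python
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.RandomState(0).rand(6, 4))
nmf = NMF(n_components=2, init='random', random_state=0, max_iter=1000)
W = nmf.fit_transform(X)

# Map the reduced representation back to a matrix of the original shape.
X_approx = nmf.inverse_transform(W)
print(X_approx.shape)  # (6, 4)
```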
Preprocessing and feature selection
- :class:`preprocessing.RobustScaler` now accepts ``quantile_range`` parameter.
:issue:`5929` by :user:`Konstantin Podshumok <podshumok>`.
- :class:`feature_extraction.FeatureHasher` now accepts string values.
:issue:`6173` by :user:`Ryad Zenine <ryadzenine>` and
:user:`Devashish Deshpande <dsquareindia>`.
- Keyword arguments can now be supplied to ``func`` in
:class:`preprocessing.FunctionTransformer` by means of the ``kw_args``
parameter. By `Brian McFee`_.
- :class:`feature_selection.SelectKBest` and :class:`feature_selection.SelectPercentile`
now accept score functions that take X, y as input and return only the scores.
By :user:`Nikolay Mayorov <nmayorov>`.
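A minimal sketch of the new ``quantile_range`` parameter of :class:`preprocessing.RobustScaler` (tiny synthetic data with an outlier):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # last row is an outlier

# Scale using the 10th-90th percentile range instead of the default
# interquartile range (25.0, 75.0).
scaler = RobustScaler(quantile_range=(10.0, 90.0))
X_scaled = scaler.fit_transform(X)
```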
Model evaluation and meta-estimators
- :class:`multiclass.OneVsOneClassifier` and :class:`multiclass.OneVsRestClassifier`
now support ``partial_fit``. By :user:`Asish Panda <kaichogami>` and
:user:`Philipp Dowling <phdowling>`.
- Added support for substituting or disabling :class:`pipeline.Pipeline`
and :class:`pipeline.FeatureUnion` components using the ``set_params``
interface that powers `sklearn.grid_search`.
  See :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`.
By `Joel Nothman`_ and :user:`Robert McGibbon <rmcgibbo>`.
- The new ``cv_results_`` attribute of :class:`model_selection.GridSearchCV`
(and :class:`model_selection.RandomizedSearchCV`) can be easily imported
into pandas as a ``DataFrame``. Ref :ref:`model_selection_changes` for
more information. :issue:`6697` by `Raghav RV`_.
- Generalization of :func:`model_selection.cross_val_predict`.
One can pass method names such as `predict_proba` to be used in the cross
validation framework instead of the default `predict`.
By :user:`Ori Ziv <zivori>` and :user:`Sears Merritt <merritts>`.
- The training scores and time taken for training followed by scoring for
each search candidate are now available at the ``cv_results_`` dict.
See :ref:`model_selection_changes` for more information.
:issue:`7325` by :user:`Eugene Chen <eyc88>` and `Raghav RV`_.
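A minimal sketch of the generalized :func:`model_selection.cross_val_predict` described above (Iris data; the estimator choice is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)

# method='predict_proba' returns out-of-fold class probabilities
# instead of hard predictions.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=3, method='predict_proba')
print(proba.shape)  # (150, 3)
```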
Metrics
- Added ``labels`` flag to :func:`metrics.log_loss` to explicitly provide
the labels when the number of classes in ``y_true`` and ``y_pred`` differ.
:issue:`7239` by :user:`Hong Guangguo <hongguangguo>` with help from
:user:`Mads Jensen <indianajensen>` and :user:`Nelson Liu <nelson-liu>`.
- Support sparse contingency matrices in cluster evaluation
(`metrics.cluster.supervised`) to scale to a large number of
clusters.
:issue:`7419` by :user:`Gregory Stupp <stuppie>` and `Joel Nothman`_.
- Add ``sample_weight`` parameter to :func:`metrics.matthews_corrcoef`.
By :user:`Jatin Shah <jatinshah>` and `Raghav RV`_.
- Speed up :func:`metrics.silhouette_score` by using vectorized operations.
By `Manoj Kumar`_.
- Add ``sample_weight`` parameter to :func:`metrics.confusion_matrix`.
By :user:`Bernardo Stein <DanielSidhion>`.
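A minimal sketch of the new ``labels`` flag of :func:`metrics.log_loss` (toy probabilities; values are illustrative):

```python
from sklearn.metrics import log_loss

y_true = [0, 0, 1]  # class 2 never occurs in y_true
y_prob = [[0.8, 0.1, 0.1],
          [0.7, 0.2, 0.1],
          [0.2, 0.7, 0.1]]

# `labels` declares the full label set so the column layout of y_prob
# is unambiguous even though y_true contains only two classes.
loss = log_loss(y_true, y_prob, labels=[0, 1, 2])
```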
Miscellaneous
- Added ``n_jobs`` parameter to :class:`feature_selection.RFECV` to compute
  the score on the test folds in parallel. By `Manoj Kumar`_.
- Codebase does not contain C/C++ cython generated files: they are
generated during build. Distribution packages will still contain generated
C/C++ files. By :user:`Arthur Mensch <arthurmensch>`.
- Reduce the memory usage for 32-bit float input arrays of
`utils.sparse_func.mean_variance_axis` and
`utils.sparse_func.incr_mean_variance_axis` by supporting cython
fused types. By :user:`YenChen Lin <yenchenlin>`.
- The `ignore_warnings` decorator now accepts a ``category`` argument to ignore only
  the warnings of a specified type. By :user:`Thierry Guillemot <tguillemot>`.
- Added parameter ``return_X_y`` and return type ``(data, target) : tuple`` option to
:func:`datasets.load_iris` dataset
:issue:`7049`,
:func:`datasets.load_breast_cancer` dataset
:issue:`7152`,
:func:`datasets.load_digits` dataset,
:func:`datasets.load_diabetes` dataset,
:func:`datasets.load_linnerud` dataset,
`datasets.load_boston` dataset
:issue:`7154` by
  :user:`Manvendra Singh <manu-chroma>`.
- Simplification of the ``clone`` function, deprecating support for estimators
that modify parameters in ``__init__``. :issue:`5540` by `Andreas Müller`_.
- When unpickling a scikit-learn estimator in a different version than the one
the estimator was trained with, a ``UserWarning`` is raised, see :ref:`the documentation
on model persistence <persistence_limitations>` for more details. (:issue:`7248`)
By `Andreas Müller`_.
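A minimal sketch of the new ``return_X_y`` option on the dataset loaders listed above:

```python
from sklearn.datasets import load_iris

# return_X_y=True yields a (data, target) tuple instead of a Bunch object.
X, y = load_iris(return_X_y=True)
print(X.shape, y.shape)  # (150, 4) (150,)
```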
Bug fixes
.........
Trees and ensembles
- Random forest, extra trees, decision trees and gradient boosting
  estimators no longer accept ``min_samples_split=1``, as at least 2 samples
  are required to split a decision tree node. By `Arnaud Joly`_.
- :class:`ensemble.VotingClassifier` now raises ``NotFittedError`` if ``predict``,
``transform`` or ``predict_proba`` are called on the non-fitted estimator.
  By `Sebastian Raschka`_.
- Fix bug where :class:`ensemble.AdaBoostClassifier` and
:class:`ensemble.AdaBoostRegressor` would perform poorly if the
``random_state`` was fixed
(:issue:`7411`). By `Joel Nothman`_.
- Fix bug in ensembles with randomization where the ensemble would not
set ``random_state`` on base estimators in a pipeline or similar nesting.
  (:issue:`7411`). Note, results for :class:`ensemble.BaggingClassifier`,
:class:`ensemble.BaggingRegressor`, :class:`ensemble.AdaBoostClassifier`
and :class:`ensemble.AdaBoostRegressor` will now differ from previous
versions. By `Joel Nothman`_.
Linear, kernelized and related models
- Fixed incorrect gradient computation for ``loss='squared_epsilon_insensitive'`` in
:class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor`
(:issue:`6764`). By :user:`Wenhua Yang <geekoala>`.
- Fix bug in :class:`linear_model.LogisticRegressionCV` where
  ``solver='liblinear'`` did not accept ``class_weight='balanced'``.
(:issue:`6817`). By `Tom Dupre la Tour`_.
- Fix bug in :class:`neighbors.RadiusNeighborsClassifier` where an error
occurred when there were outliers being labelled and a weight function
specified (:issue:`6902`). By
`LeonieBorne <https://github.com/LeonieBorne>`_.
- Fix :class:`linear_model.ElasticNet` sparse decision function to match
output with dense in the multioutput case.
Decomposition, manifold learning and clustering
- `decomposition.RandomizedPCA` default number of `iterated_power` is 4 instead of 3.
:issue:`5141` by :user:`Giorgio Patrini <giorgiop>`.
- :func:`utils.extmath.randomized_svd` performs 4 power iterations by default, instead of 0.
In practice this is enough for obtaining a good approximation of the
true eigenvalues/vectors in the presence of noise. When `n_components` is
small (``< .1 * min(X.shape)``) `n_iter` is set to 7, unless the user specifies
a higher number. This improves precision with few components.
  :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.
- Whiten/non-whiten inconsistency between components of :class:`decomposition.PCA`
and `decomposition.RandomizedPCA` (now factored into PCA, see the
New features) is fixed. `components_` are stored with no whitening.
:issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.
- Fixed bug in :func:`manifold.spectral_embedding` where diagonal of unnormalized
Laplacian matrix was incorrectly set to 1. :issue:`4995` by :user:`Peter Fischer <yanlend>`.
- Fixed incorrect initialization of `utils.arpack.eigsh` on all
occurrences. Affects `cluster.bicluster.SpectralBiclustering`,
:class:`decomposition.KernelPCA`, :class:`manifold.LocallyLinearEmbedding`,
and :class:`manifold.SpectralEmbedding` (:issue:`5012`). By
:user:`Peter Fischer <yanlend>`.
- Attribute ``explained_variance_ratio_`` calculated with the SVD solver
of :class:`discriminant_analysis.LinearDiscriminantAnalysis` now returns
  correct results. By :user:`JPFrancoia <JPFrancoia>`.
Preprocessing and feature selection
- `preprocessing.data._transform_selected` now always passes a copy
  of ``X`` to the transform function when ``copy=True`` (:issue:`7194`). By `Caio
Oliveira <https://github.com/caioaao>`_.
Model evaluation and meta-estimators
- :class:`model_selection.StratifiedKFold` now raises an error if the number
  of labels for each individual class is less than ``n_folds``.
:issue:`6182` by :user:`Devashish Deshpande <dsquareindia>`.
- Fixed bug in :class:`model_selection.StratifiedShuffleSplit`
where train and test sample could overlap in some edge cases,
see :issue:`6121` for
more details. By `Loic Esteve`_.
- Fix in :class:`sklearn.model_selection.StratifiedShuffleSplit` to
return splits of size ``train_size`` and ``test_size`` in all cases
(:issue:`6472`). By `Andreas Müller`_.
- Cross-validation of :class:`multiclass.OneVsOneClassifier` and
:class:`multiclass.OneVsRestClassifier` now works with precomputed kernels.
:issue:`7350` by :user:`Russell Smith <rsmith54>`.
- Fix incomplete ``predict_proba`` method delegation from
:class:`model_selection.GridSearchCV` to
:class:`linear_model.SGDClassifier` (:issue:`7159`)
by `Yichuan Liu <https://github.com/yl565>`_.
Metrics
- Fix bug in :func:`metrics.silhouette_score` in which clusters of
size 1 were incorrectly scored. They should get a score of 0.
By `Joel Nothman`_.
- Fix bug in :func:`metrics.silhouette_samples` so that it now works with
arbitrary labels, not just those ranging from 0 to n_clusters - 1.
- Fix bug where expected and adjusted mutual information were incorrect if
cluster contingency cells exceeded ``2**16``. By `Joel Nothman`_.
- :func:`metrics.pairwise_distances` now converts arrays to
boolean arrays when required in ``scipy.spatial.distance``.
:issue:`5460` by `Tom Dupre la Tour`_.
- Fix sparse input support in :func:`metrics.silhouette_score` as well as
  the example ``examples/text/document_clustering.py``. By :user:`YenChen Lin <yenchenlin>`.
- :func:`metrics.roc_curve` and :func:`metrics.precision_recall_curve` no
longer round ``y_score`` values when creating ROC curves; this was causing
problems for users with very small differences in scores (:issue:`7353`).
Miscellaneous
- `model_selection.tests._search._check_param_grid` now works correctly with all types
  that extend/implement `Sequence` (except string), including ``range`` (Python 3.x) and ``xrange``
(Python 2.x). :issue:`7323` by Viacheslav Kovalevskyi.
- :func:`utils.extmath.randomized_range_finder` is more numerically stable when many
power iterations are requested, since it applies LU normalization by default.
  If ``n_iter < 2`` numerical issues are unlikely, so no normalization is applied.
  Other normalization options are available: ``'none'``, ``'LU'`` and ``'QR'``.
:issue:`5141` by :user:`Giorgio Patrini <giorgiop>`.
- Fix a bug where some formats of ``scipy.sparse`` matrix, and estimators
with them as parameters, could not be passed to :func:`base.clone`.
By `Loic Esteve`_.
- :func:`datasets.load_svmlight_file` is now able to read long int QID values.
:issue:`7101` by :user:`Ibraim Ganiev <olologin>`.
API changes summary
-------------------
Linear, kernelized and related models
- ``residual_metric`` has been deprecated in :class:`linear_model.RANSACRegressor`.
Use ``loss`` instead. By `Manoj Kumar`_.
- Access to public attributes ``.X_`` and ``.y_`` has been deprecated in
:class:`isotonic.IsotonicRegression`. By :user:`Jonathan Arfa <jarfa>`.
Decomposition, manifold learning and clustering
- The old `mixture.DPGMM` is deprecated in favor of the new
:class:`mixture.BayesianGaussianMixture` (with the parameter
``weight_concentration_prior_type='dirichlet_process'``).
The new class solves the computational
problems of the old class and computes the Gaussian mixture with a
Dirichlet process prior faster than before.
:issue:`7295` by :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.
- The old `mixture.VBGMM` is deprecated in favor of the new
:class:`mixture.BayesianGaussianMixture` (with the parameter
``weight_concentration_prior_type='dirichlet_distribution'``).
The new class solves the computational
problems of the old class and computes the Variational Bayesian Gaussian
mixture faster than before.
:issue:`6651` by :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.
- The old `mixture.GMM` is deprecated in favor of the new
:class:`mixture.GaussianMixture`. The new class computes the Gaussian mixture
  faster than before, and some of the computational problems have been solved.
:issue:`6666` by :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.
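A minimal sketch of the replacement :class:`mixture.GaussianMixture` API (synthetic two-cluster data; names are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 1)), rng.normal(8, 1, (50, 1))])

# GaussianMixture replaces the deprecated mixture.GMM.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gm.predict(X)
```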
Model evaluation and meta-estimators
- The `sklearn.cross_validation`, `sklearn.grid_search` and
  `sklearn.learning_curve` modules have been deprecated and the classes and
functions have been reorganized into the :mod:`sklearn.model_selection`
module. Ref :ref:`model_selection_changes` for more information.
:issue:`4294` by `Raghav RV`_.
- The ``grid_scores_`` attribute of :class:`model_selection.GridSearchCV`
and :class:`model_selection.RandomizedSearchCV` is deprecated in favor of
the attribute ``cv_results_``.
Ref :ref:`model_selection_changes` for more information.
:issue:`6697` by `Raghav RV`_.
- The parameters ``n_iter`` or ``n_folds`` in old CV splitters are replaced
by the new parameter ``n_splits`` since it can provide a consistent
and unambiguous interface to represent the number of train-test splits.
:issue:`7187` by :user:`YenChen Lin <yenchenlin>`.
- ``classes`` parameter was renamed to ``labels`` in
:func:`metrics.hamming_loss`. :issue:`7260` by :user:`Sebastián Vanrell <srvanrell>`.
- The splitter classes ``LabelKFold``, ``LabelShuffleSplit``,
  ``LeaveOneLabelOut`` and ``LeavePLabelOut`` are renamed to
:class:`model_selection.GroupKFold`,
:class:`model_selection.GroupShuffleSplit`,
:class:`model_selection.LeaveOneGroupOut`
and :class:`model_selection.LeavePGroupsOut` respectively.
Also the parameter ``labels`` in the `split` method of the newly
renamed splitters :class:`model_selection.LeaveOneGroupOut` and
:class:`model_selection.LeavePGroupsOut` is renamed to
``groups``. Additionally in :class:`model_selection.LeavePGroupsOut`,
the parameter ``n_labels`` is renamed to ``n_groups``.
:issue:`6660` by `Raghav RV`_.
- Error and loss names for ``scoring`` parameters are now prefixed by
``'neg_'``, such as ``neg_mean_squared_error``. The unprefixed versions
are deprecated and will be removed in version 0.20.
:issue:`7261` by :user:`Tim Head <betatim>`.
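A minimal sketch of the new ``neg_``-prefixed scorer names (diabetes data; the estimator choice is illustrative):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

# Loss-based scorers are negated so that greater is always better.
scores = cross_val_score(Ridge(), X, y, cv=3,
                         scoring='neg_mean_squared_error')
print((scores <= 0).all())  # True
```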
Code Contributors
-----------------
Aditya Joshi, Alejandro, Alexander Fabisch, Alexander Loginov, Alexander
Minyushkin, Alexander Rudy, Alexandre Abadie, Alexandre Abraham, Alexandre
Gramfort, Alexandre Saint, alexfields, Alvaro Ulloa, alyssaq, Amlan Kar,
Andreas Mueller, andrew giessel, Andrew Jackson, Andrew McCulloh, Andrew
Murray, Anish Shah, Arafat, Archit Sharma, Ariel Rokem, Arnaud Joly, Arnaud
Rachez, Arthur Mensch, Ash Hoover, asnt, b0noI, Behzad Tabibian, Bernardo,
Bernhard Kratzwald, Bhargav Mangipudi, blakeflei, Boyuan Deng, Brandon Carter,
Brett Naul, Brian McFee, Caio Oliveira, Camilo Lamus, Carol Willing, Cass,
CeShine Lee, Charles Truong, Chyi-Kwei Yau, CJ Carey, codevig, Colin Ni, Dan
Shiebler, Daniel, Daniel Hnyk, David Ellis, David Nicholson, David Staub, David
Thaler, David Warshaw, Davide Lasagna, Deborah, definitelyuncertain, Didi
Bar-Zev, djipey, dsquareindia, edwinENSAE, Elias Kuthe, Elvis DOHMATOB, Ethan
White, Fabian Pedregosa, Fabio Ticconi, fisache, Florian Wilhelm, Francis,
Francis O'Donovan, Gael Varoquaux, Ganiev Ibraim, ghg, Gilles Louppe, Giorgio
Patrini, Giovanni Cherubin, Giovanni Lanzani, Glenn Qian, Gordon
Mohr, govin-vatsan, Graham Clenaghan, Greg Reda, Greg Stupp, Guillaume
Lemaitre, Gustav Mörtberg, halwai, Harizo Rajaona, Harry Mavroforakis,
hashcode55, hdmetor, Henry Lin, Hobson Lane, Hugo Bowne-Anderson,
Igor Andriushchenko, Imaculate, Inki Hwang, Isaac Sijaranamual,
Ishank Gulati, Issam Laradji, Iver Jordal, jackmartin, Jacob Schreiber, Jake
Vanderplas, James Fiedler, James Routley, Jan Zikes, Janna Brettingen, jarfa, Jason
Laska, jblackburne, jeff levesque, Jeffrey Blackburne, Jeffrey04, Jeremy Hintz,
jeremynixon, Jeroen, Jessica Yung, Jill-Jênn Vie, Jimmy Jia, Jiyuan Qian, Joel
Nothman, johannah, John, John Boersma, John Kirkham, John Moeller,
jonathan.striebel, joncrall, Jordi, Joseph Munoz, Joshua Cook, JPFrancoia,
jrfiedler, JulianKahnert, juliathebrave, kaichogami, KamalakerDadi, Kenneth
Lyons, Kevin Wang, kingjr, kjell, Konstantin Podshumok, Kornel Kielczewski,
Krishna Kalyan, krishnakalyan3, Kvle Putnam, Kyle Jackson, Lars Buitinck,
ldavid, LeiG, LeightonZhang, Leland McInnes, Liang-Chi Hsieh, Lilian Besson,
lizsz, Loic Esteve, Louis Tiao, Léonie Borne, Mads Jensen, Maniteja Nandana,
Manoj Kumar, Manvendra Singh, Marco, Mario Krell, Mark Bao, Mark Szepieniec,
Martin Madsen, MartinBpr, MaryanMorel, Massil, Matheus, Mathieu Blondel,
Mathieu Dubois, Matteo, Matthias Ekman, Max Moroz, Michael Scherer, michiaki
ariga, Mikhail Korobov, Moussa Taifi, mrandrewandrade, Mridul Seth, nadya-p,
Naoya Kanai, Nate George, Nelle Varoquaux, Nelson Liu, Nick James,
NickleDave, Nico, Nicolas Goix, Nikolay Mayorov, ningchi, nlathia,
okbalefthanded, Okhlopkov, Olivier Grisel, Panos Louridas, Paul Strickland,
Perrine Letellier, pestrickland, Peter Fischer, Pieter, Ping-Yao, Chang,
practicalswift, Preston Parry, Qimu Zheng, Rachit Kansal, Raghav RV,
Ralf Gommers, Ramana.S, Rammig, Randy Olson, Rob Alexander, Robert Lutz,
Robin Schucker, Rohan Jain, Ruifeng Zheng, Ryan Yu, Rémy Léone, saihttam,
Saiwing Yeung, Sam Shleifer, Samuel St-Jean, Sartaj Singh, Sasank Chilamkurthy,
saurabh.bansod, Scott Andrews, Scott Lowe, seales, Sebastian Raschka, Sebastian
Saeger, Sebastián Vanrell, Sergei Lebedev, shagun Sodhani, shanmuga cv,
Shashank Shekhar, shawpan, shengxiduan, Shota, shuckle16, Skipper Seabold,
sklearn-ci, SmedbergM, srvanrell, Sébastien Lerique, Taranjeet, themrmax,
Thierry, Thierry Guillemot, Thomas, Thomas Hallock, Thomas Moreau, Tim Head,
tKammy, toastedcornflakes, Tom, TomDLT, Toshihiro Kamishima, tracer0tong, Trent
Hauck, trevorstephens, Tue Vo, Varun, Varun Jewalikar, Viacheslav, Vighnesh
Birodkar, Vikram, Villu Ruusmann, Vinayak Mehta, walter, waterponey, Wenhua
Yang, Wenjian Huang, Will Welch, wyseguy7, xyguo, yanlend, Yaroslav Halchenko,
yelite, Yen, YenChenLin, Yichuan Liu, Yoav Ram, Yoshiki, Zheng RuiFeng, zivori, Óscar Nájera | scikit-learn | include contributors rst currentmodule sklearn Version 0 18 warning Scikit learn 0 18 is the last major release of scikit learn to support Python 2 6 Later versions of scikit learn will require Python 2 7 or above changes 0 18 2 Version 0 18 2 June 20 2017 Changelog Fixes for compatibility with NumPy 1 13 0 issue 7946 issue 8355 by Loic Esteve Minor compatibility changes in the examples issue 9010 issue 8040 issue 9149 Code Contributors Aman Dalmia Loic Esteve Nate Guerin Sergei Lebedev changes 0 18 1 Version 0 18 1 November 11 2016 Changelog Enhancements Improved sample without replacement speed by utilizing numpy random permutation for most cases As a result samples may differ in this release for a fixed random state Affected estimators class ensemble BaggingClassifier class ensemble BaggingRegressor class linear model RANSACRegressor class model selection RandomizedSearchCV class random projection SparseRandomProjection This also affects the meth datasets make classification method Bug fixes Fix issue where min grad norm and n iter without progress parameters were not being utilised by class manifold TSNE issue 6497 by user Sebastian S ger ssaeger Fix bug for svm s decision values when decision function shape is ovr in class svm SVC class svm SVC s decision function was incorrect from versions 0 17 0 through 0 18 0 issue 7724 by Bing Tian Dai Attribute explained variance ratio of class discriminant analysis LinearDiscriminantAnalysis calculated with SVD and Eigen solver are now of the same length issue 7632 by user JPFrancoia JPFrancoia Fixes issue in ref univariate feature selection where score functions were not accepting multi label targets issue 7676 by user Mohammed Affan affanv14 Fixed setting parameters when calling fit multiple times on class feature selection SelectFromModel issue 7756 by Andreas M ller Fixes issue in partial fit method of class 
multiclass OneVsRestClassifier when number of classes used in partial fit was less than the total number of classes in the data issue 7786 by Srivatsan Ramesh Fixes issue in class calibration CalibratedClassifierCV where the sum of probabilities of each class for a data was not 1 and CalibratedClassifierCV now handles the case where the training set has less number of classes than the total data issue 7799 by Srivatsan Ramesh Fix a bug where class sklearn feature selection SelectFdr did not exactly implement Benjamini Hochberg procedure It formerly may have selected fewer features than it should issue 7490 by user Peng Meng mpjlu class sklearn manifold LocallyLinearEmbedding now correctly handles integer inputs issue 6282 by Jake Vanderplas The min weight fraction leaf parameter of tree based classifiers and regressors now assumes uniform sample weights by default if the sample weight argument is not passed to the fit function Previously the parameter was silently ignored issue 7301 by user Nelson Liu nelson liu Numerical issue with class linear model RidgeCV on centered data when n features n samples issue 6178 by Bertrand Thirion Tree splitting criterion classes cloning pickling is now memory safe issue 7680 by user Ibraim Ganiev olologin Fixed a bug where class decomposition NMF sets its n iters attribute in transform issue 7553 by user Ekaterina Krivich kiote class sklearn linear model LogisticRegressionCV now correctly handles string labels issue 5874 by Raghav RV Fixed a bug where func sklearn model selection train test split raised an error when stratify is a list of string labels issue 7593 by Raghav RV Fixed a bug where class sklearn model selection GridSearchCV and class sklearn model selection RandomizedSearchCV were not pickleable because of a pickling bug in np ma MaskedArray issue 7594 by Raghav RV All cross validation utilities in mod sklearn model selection now permit one time cross validation splitters for the cv parameter Also non deterministic 
cross validation splitters where multiple calls to split produce dissimilar splits can be used as cv parameter The class sklearn model selection GridSearchCV will cross validate each parameter setting on the split produced by the first split call to the cross validation splitter issue 7660 by Raghav RV Fix bug where meth preprocessing MultiLabelBinarizer fit transform returned an invalid CSR matrix issue 7750 by user CJ Carey perimosocordiae Fixed a bug where func metrics pairwise cosine distances could return a small negative distance issue 7732 by user Artsion asanakoy API changes summary Trees and forests The min weight fraction leaf parameter of tree based classifiers and regressors now assumes uniform sample weights by default if the sample weight argument is not passed to the fit function Previously the parameter was silently ignored issue 7301 by user Nelson Liu nelson liu Tree splitting criterion classes cloning pickling is now memory safe issue 7680 by user Ibraim Ganiev olologin Linear kernelized and related models Length of explained variance ratio of class discriminant analysis LinearDiscriminantAnalysis changed for both Eigen and SVD solvers The attribute has now a length of min n components n classes 1 issue 7632 by user JPFrancoia JPFrancoia Numerical issue with class linear model RidgeCV on centered data when n features n samples issue 6178 by Bertrand Thirion changes 0 18 Version 0 18 September 28 2016 model selection changes Model Selection Enhancements and API Changes The model selection module The new module mod sklearn model selection which groups together the functionalities of formerly sklearn cross validation sklearn grid search and sklearn learning curve introduces new possibilities such as nested cross validation and better manipulation of parameter searches with Pandas Many things will stay the same but there are some key differences Read below to know more about the changes Data independent CV splitters enabling nested cross validation 
The new cross validation splitters defined in the mod sklearn model selection are no longer initialized with any data dependent parameters such as y Instead they expose a split method that takes in the data and yields a generator for the different splits This change makes it possible to use the cross validation splitters to perform nested cross validation facilitated by class model selection GridSearchCV and class model selection RandomizedSearchCV utilities The enhanced cv results attribute The new cv results attribute of class model selection GridSearchCV and class model selection RandomizedSearchCV introduced in lieu of the grid scores attribute is a dict of 1D arrays with elements in each array corresponding to the parameter settings i e search candidates The cv results dict can be easily imported into pandas as a DataFrame for exploring the search results The cv results arrays include scores for each cross validation split with keys such as split0 test score as well as their mean mean test score and standard deviation std test score The ranks for the search candidates based on their mean cross validation score is available at cv results rank test score The parameter values for each parameter is stored separately as numpy masked object arrays The value for that search candidate is masked if the corresponding parameter is not applicable Additionally a list of all the parameter dicts are stored at cv results params Parameters n folds and n iter renamed to n splits Some parameter names have changed The n folds parameter in new class model selection KFold class model selection GroupKFold see below for the name change and class model selection StratifiedKFold is now renamed to n splits The n iter parameter in class model selection ShuffleSplit the new class class model selection GroupShuffleSplit and class model selection StratifiedShuffleSplit is now renamed to n splits Rename of splitter classes which accepts group labels along with data The cross validation 
splitters ``LabelKFold``, ``LabelShuffleSplit``, ``LeaveOneLabelOut`` and
``LeavePLabelOut`` have been renamed to :class:`model_selection.GroupKFold`,
:class:`model_selection.GroupShuffleSplit`,
:class:`model_selection.LeaveOneGroupOut` and
:class:`model_selection.LeavePGroupsOut` respectively. Note the change from
singular to plural form in :class:`model_selection.LeavePGroupsOut`.

**Fit parameter labels renamed to groups**

The ``labels`` parameter in the ``split`` method of the newly renamed
splitters :class:`model_selection.GroupKFold`,
:class:`model_selection.LeaveOneGroupOut`,
:class:`model_selection.LeavePGroupsOut` and
:class:`model_selection.GroupShuffleSplit` is renamed to ``groups``,
following the new nomenclature of their class names.

**Parameter n_labels renamed to n_groups**

The parameter ``n_labels`` in the newly renamed
:class:`model_selection.LeavePGroupsOut` is changed to ``n_groups``.

**Training scores and Timing information**

``cv_results_`` also includes the training scores for each cross-validation
split (with keys such as ``'split0_train_score'``), as well as their mean
(``'mean_train_score'``) and standard deviation (``'std_train_score'``). To
avoid the cost of evaluating training score, set ``return_train_score=False``.

Additionally, the mean and standard deviation of the times taken to split,
train and score the model across all the cross-validation splits are
available at the keys ``'mean_time'`` and ``'std_time'`` respectively.

Changelog
---------

New features
............

**Classifiers and Regressors**

- The Gaussian Process module has been reimplemented and now offers
  classification and regression estimators through
  :class:`gaussian_process.GaussianProcessClassifier` and
  :class:`gaussian_process.GaussianProcessRegressor`. Among other things, the
  new implementation supports kernel engineering, gradient-based
  hyperparameter optimization or sampling of functions from GP prior and GP
  posterior. Extensive documentation and examples are provided. By
  `Jan Hendrik Metzen`_.

- Added the new supervised learning algorithm
  :ref:`Multi-layer Perceptron <multilayer_perceptron>`. :issue:`3204` by
  :user:`Issam H. Laradji <IssamLaradji>`.

- Added :class:`linear_model.HuberRegressor`, a linear model robust to
  outliers. :issue:`5291` by `Manoj Kumar`_.

- Added the :class:`multioutput.MultiOutputRegressor` meta-estimator. It
  converts single output regressors to multi-output regressors by fitting one
  regressor per output. By :user:`Tim Head <betatim>`.

**Other estimators**

- New :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture` replace former mixture models,
  employing faster inference for sounder results. :issue:`7295` by
  :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.

- Class `decomposition.RandomizedPCA` is now factored into
  :class:`decomposition.PCA` and it is available calling with parameter
  ``svd_solver='randomized'``. The default number of ``n_iter`` for
  ``'randomized'`` has changed to 4. The old behavior of PCA is recovered by
  ``svd_solver='full'``. An additional solver calls ``arpack`` and performs a
  truncated (non-randomized) SVD. By default, the best solver is selected
  depending on the size of the input and the number of components requested.
  :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.

- Added two functions for mutual information estimation:
  :func:`feature_selection.mutual_info_classif` and
  :func:`feature_selection.mutual_info_regression`. These functions can be
  used in :class:`feature_selection.SelectKBest` and
  :class:`feature_selection.SelectPercentile` as score functions. By
  :user:`Andrea Bravi <AndreaBravi>` and :user:`Nikolay Mayorov <nmayorov>`.

- Added the :class:`ensemble.IsolationForest` class for anomaly detection
  based on random forests. By `Nicolas Goix`_.

- Added ``algorithm="elkan"`` to :class:`cluster.KMeans` implementing Elkan's
  fast K-Means algorithm. By `Andreas Müller`_.

**Model selection and evaluation**

- Added :func:`metrics.fowlkes_mallows_score`, the Fowlkes-Mallows Index
  which measures the similarity of two clusterings of a set of points. By
  :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.

- Added `metrics.calinski_harabaz_score`, which computes the Calinski and
  Harabaz score to evaluate the resulting clustering of a set of points. By
  :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.

- Added the new cross-validation splitter
  :class:`model_selection.TimeSeriesSplit` to handle time series data.
  :issue:`6586` by :user:`YenChen Lin <yenchenlin>`.

- The cross-validation iterators are replaced by cross-validation splitters
  available from :mod:`sklearn.model_selection`, allowing for nested
  cross-validation. See :ref:`model_selection_changes` for more information.
  :issue:`4294` by `Raghav RV`_.

Enhancements
............

**Trees and ensembles**

- Added a new splitting criterion for :class:`tree.DecisionTreeRegressor`,
  the mean absolute error. This criterion can also be used in
  :class:`ensemble.ExtraTreesRegressor`,
  :class:`ensemble.RandomForestRegressor`, and the gradient boosting
  estimators. :issue:`6667` by :user:`Nelson Liu <nelson-liu>`.

- Added weighted impurity-based early stopping criterion for decision tree
  growth. :issue:`6954` by :user:`Nelson Liu <nelson-liu>`.

- The random forest, extra tree and decision tree estimators now have a
  method ``decision_path`` which returns the decision path of samples in the
  tree. By `Arnaud Joly`_.

- A new example has been added unveiling the decision tree structure. By
  `Arnaud Joly`_.

- Random forest, extra trees, decision trees and gradient boosting estimators
  accept the parameters ``min_samples_split`` and ``min_samples_leaf``
  provided as a percentage of the training samples. By :user:`yelite` and
  `Arnaud Joly`_.

- Gradient boosting estimators accept the parameter ``criterion`` to specify
  the splitting criterion used in built decision trees. :issue:`6667` by
  :user:`Nelson Liu <nelson-liu>`.

- The memory footprint is reduced (sometimes greatly) for
  `ensemble.bagging.BaseBagging` and classes that inherit from it, i.e.
  :class:`ensemble.BaggingClassifier`, :class:`ensemble.BaggingRegressor`,
  and :class:`ensemble.IsolationForest`, by dynamically generating the
  attribute ``estimators_samples_`` only when it is needed. By
  :user:`David Staub <staubda>`.

- Added ``n_jobs`` and ``sample_weight`` parameters for
  :class:`ensemble.VotingClassifier` to fit underlying estimators in
  parallel. :issue:`5805` by :user:`Ibraim Ganiev <olologin>`.

**Linear, kernelized and related models**

- In :class:`linear_model.LogisticRegression`, the SAG solver is now
  available in the multinomial case. :issue:`5251` by `Tom Dupré la Tour`_.

- :class:`linear_model.RANSACRegressor`, :class:`svm.LinearSVC` and
  :class:`svm.LinearSVR` now support ``sample_weight``. By :user:`Imaculate`.

- Added parameter ``loss`` to :class:`linear_model.RANSACRegressor` to
  measure the error on the samples for every trial. By `Manoj Kumar`_.

- Prediction of out-of-sample events with Isotonic Regression
  (:class:`isotonic.IsotonicRegression`) is now much faster (over 1000x in
  tests with synthetic data). By :user:`Jonathan Arfa <jarfa>`.

- Isotonic regression (:class:`isotonic.IsotonicRegression`) now uses a
  better algorithm to avoid ``O(n^2)`` behavior in pathological cases, and is
  also generally faster. :issue:`6691` by `Antony Lee`_.

- :class:`naive_bayes.GaussianNB` now accepts data-independent class-priors
  through the parameter ``priors``. By :user:`Guillaume Lemaitre <glemaitre>`.

- :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso` now work
  with ``np.float32`` input data without converting it into ``np.float64``.
  This allows to reduce the memory consumption. :issue:`6913` by
  :user:`YenChen Lin <yenchenlin>`.

- :class:`semi_supervised.LabelPropagation` and
  :class:`semi_supervised.LabelSpreading` now accept arbitrary kernel
  functions in addition to strings ``knn`` and ``rbf``. :issue:`5762` by
  :user:`Utkarsh Upadhyay <musically-ut>`.

**Decomposition, manifold learning and clustering**

- Added an ``inverse_transform`` function to :class:`decomposition.NMF` to
  compute the data matrix of original shape. By
  :user:`Anish Shah <AnishShah>`.

- :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now work with
  ``np.float32`` and ``np.float64`` input data without converting it. This
  allows to reduce the memory consumption by using ``np.float32``.
  :issue:`6846` by :user:`Sebastian Säger <ssaeger>` and
  :user:`YenChen Lin <yenchenlin>`.

**Preprocessing and feature selection**

- :class:`preprocessing.RobustScaler` now accepts a ``quantile_range``
  parameter. :issue:`5929` by :user:`Konstantin Podshumok <podshumok>`.

- :class:`feature_extraction.FeatureHasher` now accepts string values.
  :issue:`6173` by :user:`Ryad Zenine <ryadzenine>` and
  :user:`Devashish Deshpande <dsquareindia>`.

- Keyword arguments can now be supplied to ``func`` in
  :class:`preprocessing.FunctionTransformer` by means of the ``kw_args``
  parameter. By `Brian McFee`_.

- :class:`feature_selection.SelectKBest` and
  :class:`feature_selection.SelectPercentile` now accept score functions
  that take ``X``, ``y`` as input and return only the scores. By
  :user:`Nikolay Mayorov <nmayorov>`.

**Model evaluation and meta-estimators**

- :class:`multiclass.OneVsOneClassifier` and
  :class:`multiclass.OneVsRestClassifier` now support ``partial_fit``. By
  :user:`Asish Panda <kaichogami>` and :user:`Philipp Dowling <phdowling>`.

- Added support for substituting or disabling :class:`pipeline.Pipeline` and
  :class:`pipeline.FeatureUnion` components using the ``set_params``
  interface that powers `sklearn.grid_search`. See
  :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`. By
  `Joel Nothman`_ and :user:`Robert McGibbon <rmcgibbo>`.

- The new ``cv_results_`` attribute of :class:`model_selection.GridSearchCV`
  and :class:`model_selection.RandomizedSearchCV` can be easily imported into
  pandas as a ``DataFrame``. Ref :ref:`model_selection_changes` for more
  information. :issue:`6697` by `Raghav RV`_.

- Generalization of :func:`model_selection.cross_val_predict`. One can pass
  method names such as ``predict_proba`` to be used in the cross-validation
  framework instead of the default ``predict``. By :user:`Ori Ziv <zivori>`
  and :user:`Sears Merritt <merritts>`.

- The training scores and time taken for training followed by scoring for
  each search candidate are now available at the ``cv_results_`` dict. See
  :ref:`model_selection_changes` for more information. :issue:`7325` by
  :user:`Eugene Chen <eyc88>` and `Raghav RV`_.

**Metrics**

- Added a ``labels`` flag to :class:`metrics.log_loss` to explicitly provide
  the labels when the number of classes in ``y_true`` and ``y_pred`` differ.
  :issue:`7239` by :user:`Hong Guangguo <hongguangguo>` with help from
  :user:`Mads Jensen <indianajensen>` and :user:`Nelson Liu <nelson-liu>`.

- Support sparse contingency matrices in cluster evaluation
  (`metrics.cluster.supervised`) to scale to a large number of clusters.
  :issue:`7419` by :user:`Gregory Stupp <stuppie>` and `Joel Nothman`_.

- Added the ``sample_weight`` parameter to :func:`metrics.matthews_corrcoef`.
  By :user:`Jatin Shah <jatinshah>` and `Raghav RV`_.

- Speed up :func:`metrics.silhouette_score` by using vectorized operations.
  By `Manoj Kumar`_.

- Added the ``sample_weight`` parameter to :func:`metrics.confusion_matrix`.
  By :user:`Bernardo Stein <DanielSidhion>`.

**Miscellaneous**

- Added the ``n_jobs`` parameter to :class:`feature_selection.RFECV` to
  compute the score on the test folds in parallel. By `Manoj Kumar`_.

- The codebase does not contain C/C++ cython generated files: they are
  generated during build. Distribution packages will still contain generated
  C/C++ files. By :user:`Arthur Mensch <arthurmensch>`.

- Reduce the memory usage for 32-bit float input arrays of
  `utils.sparse_func.mean_variance_axis` and
  `utils.sparse_func.incr_mean_variance_axis` by supporting cython fused
  types. By :user:`YenChen Lin <yenchenlin>`.

- The `ignore_warnings` now accept a category argument to ignore only the
  warnings of a specified type. By :user:`Thierry Guillemot <tguillemot>`.

- Added parameter ``return_X_y`` and return type ``(data, target)`` tuple
  option to :func:`datasets.load_iris` dataset (:issue:`7049`),
  :func:`datasets.load_breast_cancer` dataset (:issue:`7152`),
  :func:`datasets.load_digits` dataset, :func:`datasets.load_diabetes`
  dataset, :func:`datasets.load_linnerud` dataset, `datasets.load_boston`
  dataset (:issue:`7154`) by :user:`Manvendra Singh <manu-chroma>`.

- Simplification of the ``clone`` function; deprecate support for estimators
  that modify parameters in ``__init__``. :issue:`5540` by `Andreas Müller`_.

- When unpickling a scikit-learn estimator in a different version than the
  one the estimator was trained with, a ``UserWarning`` is raised. See
  :ref:`the documentation on model persistence <persistence_limitations>`
  for more details. (:issue:`7248`) By `Andreas Müller`_.

Bug fixes
.........

**Trees and ensembles**

- Random forest, extra trees, decision trees and gradient boosting won't
  accept anymore ``min_samples_split=1``, as at least 2 samples are required
  to split a decision tree node. By `Arnaud Joly`_.

- :class:`ensemble.VotingClassifier` now raises ``NotFittedError`` if
  ``predict``, ``transform`` or ``predict_proba`` are called on the
  non-fitted estimator. By `Sebastian Raschka`_.

- Fix a bug where :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` would perform poorly if the
  ``random_state`` was fixed. (:issue:`7411`) By `Joel Nothman`_.

- Fix a bug in ensembles with randomization where the ensemble would not set
  ``random_state`` on base estimators in a pipeline or similar nesting.
  (:issue:`7411`) Note, results for :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor`, :class:`ensemble.AdaBoostClassifier`
  and :class:`ensemble.AdaBoostRegressor` will now differ from previous
  versions. By `Joel Nothman`_.

**Linear, kernelized and related models**

- Fixed incorrect gradient computation for
  ``loss='squared_epsilon_insensitive'`` in
  :class:`linear_model.SGDClassifier` and
  :class:`linear_model.SGDRegressor`. (:issue:`6764`) By
  :user:`Wenhua Yang <geekoala>`.

- Fix a bug in :class:`linear_model.LogisticRegressionCV` where
  ``solver='liblinear'`` did not accept ``class_weights='balanced'``.
  (:issue:`6817`) By `Tom Dupré la Tour`_.

- Fix a bug in :class:`neighbors.RadiusNeighborsClassifier` where an error
  occurred when there were outliers being labelled and a weight function
  specified. (:issue:`6902`) By
  `LeonieBorne <https://github.com/LeonieBorne>`_.

- Fix :class:`linear_model.ElasticNet` sparse ``decision_function`` to match
  output with dense in the multioutput case.

**Decomposition, manifold learning and clustering**

- `decomposition.RandomizedPCA` default number of ``iterated_power`` is 4
  instead of 3. :issue:`5141` by :user:`Giorgio Patrini <giorgiop>`.

- :func:`utils.extmath.randomized_svd` performs 4 power iterations by
  default, instead of 0. In practice this is enough for obtaining a good
  approximation of the true eigenvalues/vectors in the presence of noise.
  When ``n_components`` is small (``< .1 * min(X.shape)``), ``n_iter`` is
  set to 7, unless the user specifies a higher number. This improves
  precision with few components. :issue:`5299` by
  :user:`Giorgio Patrini <giorgiop>`.

- Whiten/non-whiten inconsistency between components of
  :class:`decomposition.PCA` and `decomposition.RandomizedPCA` (now factored
  into PCA, see the New features) is fixed. ``components_`` are stored with
  no whitening. :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.

- Fixed a bug in :func:`manifold.spectral_embedding` where the diagonal of
  the unnormalized Laplacian matrix was incorrectly set to 1. :issue:`4995`
  by :user:`Peter Fischer <yanlend>`.

- Fixed incorrect initialization of `utils.arpack.eigsh` on all occurrences.
  Affects `cluster.bicluster.SpectralBiclustering`,
  :class:`decomposition.KernelPCA`, :class:`manifold.LocallyLinearEmbedding`,
  and :class:`manifold.SpectralEmbedding`. (:issue:`5012`) By
  :user:`Peter Fischer <yanlend>`.

- Attribute ``explained_variance_ratio_`` calculated with the SVD solver of
  :class:`discriminant_analysis.LinearDiscriminantAnalysis` now returns
  correct results. By :user:`JPFrancoia`.

**Preprocessing and feature selection**

- `preprocessing.data._transform_selected` now always passes a copy of ``X``
  to the transform function when ``copy=True``. (:issue:`7194`) By
  `Caio Oliveira <https://github.com/caioaao>`_.

**Model evaluation and meta-estimators**

- :class:`model_selection.StratifiedKFold` now raises an error if all the
  ``n_labels`` for individual classes are less than ``n_folds``.
  :issue:`6182` by :user:`Devashish Deshpande <dsquareindia>`.

- Fixed a bug in :class:`model_selection.StratifiedShuffleSplit` where train
  and test samples could overlap in some edge cases. See :issue:`6121` for
  more details. By `Loic Esteve`_.

- Fix in :class:`sklearn.model_selection.StratifiedShuffleSplit` to return
  splits of size ``train_size`` and ``test_size`` in all cases.
  (:issue:`6472`) By `Andreas Müller`_.

- Cross-validation of :class:`multiclass.OneVsOneClassifier` and
  :class:`multiclass.OneVsRestClassifier` now works with precomputed
  kernels. :issue:`7350` by :user:`Russell Smith <rsmith54>`.

- Fix incomplete ``predict_proba`` method delegation from
  :class:`model_selection.GridSearchCV` to
  :class:`linear_model.SGDClassifier`. (:issue:`7159`) By
  `Yichuan Liu <https://github.com/yl565>`_.

**Metrics**

- Fix a bug in :func:`metrics.silhouette_score` in which clusters of size 1
  were incorrectly scored. They should get a score of 0. By `Joel Nothman`_.

- Fix a bug in :func:`metrics.silhouette_samples` so that it now works with
  arbitrary labels, not just those ranging from 0 to ``n_clusters - 1``.

- Fix a bug where expected and adjusted mutual information were incorrect if
  cluster contingency cells exceeded ``2**16``. By `Joel Nothman`_.

- :func:`metrics.pairwise.pairwise_distances` now converts arrays to boolean
  arrays when required in ``scipy.spatial.distance``. :issue:`5460` by
  `Tom Dupré la Tour`_.

- Fix sparse input support in :func:`metrics.silhouette_score` as well as
  the example ``examples/text/document_clustering.py``. By
  :user:`YenChen Lin <yenchenlin>`.

- :func:`metrics.roc_curve` and :func:`metrics.precision_recall_curve` no
  longer round ``y_score`` values when creating ROC curves; this was causing
  problems for users with very small differences in scores. (:issue:`7353`)

**Miscellaneous**

- `model_selection.tests._search._check_param_grid` now works correctly with
  all types that extends/implements `Sequence` (except string), including
  range (Python 3.x) and xrange (Python 2.x). :issue:`7323` by
  Viacheslav Kovalevskyi.

- :func:`utils.extmath.randomized_range_finder` is more numerically stable
  when many power iterations are requested, since it applies LU
  normalization by default. If ``n_iter < 2`` numerical issues are unlikely,
  thus no normalization is applied. Other normalization options are
  available: ``'none'``, ``'LU'`` and ``'QR'``. :issue:`5141` by
  :user:`Giorgio Patrini <giorgiop>`.

- Fix a bug where some formats of ``scipy.sparse`` matrix, and estimators
  with them as parameters, could not be passed to :func:`base.clone`. By
  `Loic Esteve`_.

- :func:`datasets.load_svmlight_file` now is able to read long int QID
  values. :issue:`7101` by :user:`Ibraim Ganiev <olologin>`.

API changes summary
...................

**Linear, kernelized and related models**

- ``residual_metric`` has been deprecated in
  :class:`linear_model.RANSACRegressor`. Use ``loss`` instead. By
  `Manoj Kumar`_.

- Access to public attributes ``.X_`` and ``.y_`` has been deprecated in
  :class:`isotonic.IsotonicRegression`. By :user:`Jonathan Arfa <jarfa>`.

**Decomposition, manifold learning and clustering**

- The old `mixture.DPGMM` is deprecated in favor of the new
  :class:`mixture.BayesianGaussianMixture` (with the parameter
  ``weight_concentration_prior_type='dirichlet_process'``). The new class
  solves the computational problems of the old class and computes the
  Gaussian mixture with a Dirichlet process prior faster than before.
  :issue:`7295` by :user:`Wei Xue <xuewei4d>` and
  :user:`Thierry Guillemot <tguillemot>`.

- The old `mixture.VBGMM` is deprecated in favor of the new
  :class:`mixture.BayesianGaussianMixture` (with the parameter
  ``weight_concentration_prior_type='dirichlet_distribution'``). The new
  class solves the computational problems of the old class and computes the
  Variational Bayesian Gaussian mixture faster than before. :issue:`6651` by
  :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.

- The old `mixture.GMM` is deprecated in favor of the new
  :class:`mixture.GaussianMixture`. The new class computes the Gaussian
  mixture faster than before and some of the computational problems have
  been solved. :issue:`6666` by :user:`Wei Xue <xuewei4d>` and
  :user:`Thierry Guillemot <tguillemot>`.

**Model evaluation and meta-estimators**

- The `sklearn.cross_validation`, `sklearn.grid_search` and
  `sklearn.learning_curve` have been deprecated and the classes and
  functions have been reorganized into the :mod:`sklearn.model_selection`
  module. Ref :ref:`model_selection_changes` for more information.
  :issue:`4294` by `Raghav RV`_.

- The ``grid_scores_`` attribute of :class:`model_selection.GridSearchCV`
  and :class:`model_selection.RandomizedSearchCV` is deprecated in favor of
  the attribute ``cv_results_``. Ref :ref:`model_selection_changes` for more
  information. :issue:`6697` by `Raghav RV`_.

- The parameters ``n_iter`` or ``n_folds`` in old CV splitters are replaced
  by the new parameter ``n_splits`` since it can provide a consistent and
  unambiguous interface to represent the number of train-test splits.
  :issue:`7187` by :user:`YenChen Lin <yenchenlin>`.

- ``classes`` parameter was renamed to ``labels`` in
  :func:`metrics.hamming_loss`. :issue:`7260` by
  :user:`Sebastián Vanrell <srvanrell>`.

- The splitter classes ``LabelKFold``, ``LabelShuffleSplit``,
  ``LeaveOneLabelOut`` and ``LeavePLabelsOut`` are renamed to
  :class:`model_selection.GroupKFold`,
  :class:`model_selection.GroupShuffleSplit`,
  :class:`model_selection.LeaveOneGroupOut` and
  :class:`model_selection.LeavePGroupsOut` respectively. Also the parameter
  ``labels`` in the ``split`` method of the newly renamed splitters
  :class:`model_selection.LeaveOneGroupOut` and
  :class:`model_selection.LeavePGroupsOut` is renamed to ``groups``.
  Additionally in :class:`model_selection.LeavePGroupsOut`, the parameter
  ``n_labels`` is renamed to ``n_groups``. :issue:`6660` by `Raghav RV`_.

- Error and loss names for ``scoring`` parameters are now prefixed by
  ``'neg_'``, such as ``neg_mean_squared_error``. The unprefixed versions
  are deprecated and will be removed in version 0.20. :issue:`7261` by
  :user:`Tim Head <betatim>`.

Code Contributors
-----------------

Aditya Joshi, Alejandro, Alexander Fabisch, Alexander Loginov, Alexander
Minyushkin, Alexander Rudy, Alexandre Abadie, Alexandre Abraham, Alexandre
Gramfort, Alexandre Saint, alexfields, Alvaro Ulloa, alyssaq, Amlan Kar,
Andreas Mueller, andrew giessel, Andrew Jackson, Andrew McCulloh, Andrew
Murray, Anish Shah, Arafat, Archit Sharma, Ariel Rokem, Arnaud Joly, Arnaud
Rachez, Arthur Mensch, Ash Hoover, asnt, b0noI, Behzad Tabibian, Bernardo,
Bernhard Kratzwald, Bhargav Mangipudi, blakeflei, Boyuan Deng, Brandon
Carter, Brett Naul, Brian McFee, Caio Oliveira, Camilo Lamus, Carol Willing,
Cass, CeShine Lee, Charles Truong, Chyi-Kwei Yau, CJ Carey, codevig, Colin
Ni, Dan Shiebler, Daniel, Daniel Hnyk, David Ellis, David Nicholson, David
Staub, David Thaler, David Warshaw, Davide Lasagna, Deborah,
definitelyuncertain, Didi Bar-Zev, djipey, dsquareindia, edwinENSAE, Elias
Kuthe, Elvis DOHMATOB, Ethan White, Fabian Pedregosa, Fabio Ticconi, fisache,
Florian Wilhelm, Francis, Francis O'Donovan, Gael Varoquaux, Ganiev Ibraim,
ghg, Gilles Louppe, Giorgio Patrini, Giovanni Cherubin, Giovanni Lanzani,
Glenn Qian, Gordon Mohr, govin-vatsan, Graham Clenaghan, Greg Reda, Greg
Stupp, Guillaume Lemaitre, Gustav Mörtberg, halwai, Harizo Rajaona, Harry
Mavroforakis, hashcode55, hdmetor, Henry Lin, Hobson Lane, Hugo
Bowne-Anderson, Igor Andriushchenko, Imaculate, Inki Hwang, Isaac
Sijaranamual, Ishank Gulati, Issam Laradji, Iver Jordal, jackmartin, Jacob
Schreiber, Jake Vanderplas, James Fiedler, James Routley, Jan Zikes, Janna
Brettingen, jarfa, Jason Laska, jblackburne, jeff levesque, Jeffrey
Blackburne, Jeffrey04, Jeremy Hintz, jeremynixon, Jeroen, Jessica Yung,
Jill-Jênn Vie, Jimmy Jia, Jiyuan Qian, Joel Nothman, johannah, John, John
Boersma, John Kirkham, John Moeller, jonathan striebel, joncrall, Jordi,
Joseph Munoz, Joshua Cook, JPFrancoia, jrfiedler, JulianKahnert,
juliathebrave, kaichogami, KamalakerDadi, Kenneth Lyons, Kevin Wang, kingjr,
kjell, Konstantin Podshumok, Kornel Kielczewski, Krishna Kalyan,
krishnakalyan3, Kvle Putnam, Kyle Jackson, Lars Buitinck, ldavid, LeiG,
LeightonZhang, Leland McInnes, Liang-Chi Hsieh, Lilian Besson, lizsz, Loic
Esteve, Louis Tiao, Léonie Borne, Mads Jensen, Maniteja Nandana, Manoj
Kumar, Manvendra Singh, Marco, Mario Krell, Mark Bao, Mark Szepieniec,
Martin Madsen, MartinBpr, MaryanMorel, Massil, Matheus, Mathieu Blondel,
Mathieu Dubois, Matteo, Matthias Ekman, Max Moroz, Michael Scherer, michiaki
ariga, Mikhail Korobov, Moussa Taifi, mrandrewandrade, Mridul Seth, nadya p,
Naoya Kanai, Nate George, Nelle Varoquaux, Nelson Liu, Nick James,
NickleDave, Nico, Nicolas Goix, Nikolay Mayorov, ningchi, nlathia,
okbalefthanded, Okhlopkov, Olivier Grisel, Panos Louridas, Paul Strickland,
Perrine Letellier, pestrickland, Peter Fischer, Pieter, Ping-Yao Chang,
practicalswift, Preston Parry, Qimu Zheng, Rachit Kansal, Raghav RV, Ralf
Gommers, Ramana S, Rammig, Randy Olson, Rob Alexander, Robert Lutz, Robin
Schucker, Rohan Jain, Ruifeng Zheng, Ryan Yu, Rémy Léone, saihttam, Saiwing
Yeung, Sam Shleifer, Samuel St-Jean, Sartaj Singh, Sasank Chilamkurthy,
saurabh bansod, Scott Andrews, Scott Lowe, seales, Sebastian Raschka,
Sebastian Saeger, Sebastián Vanrell, Sergei Lebedev, shagun Sodhani,
shanmuga cv, Shashank Shekhar, shawpan, shengxiduan, Shota, shuckle16,
Skipper Seabold, sklearn-ci, SmedbergM, srvanrell, Sébastien Lerique,
Taranjeet, themrmax, Thierry, Thierry Guillemot, Thomas, Thomas Hallock,
Thomas Moreau, Tim Head, tKammy, toastedcornflakes, Tom, TomDLT, Toshihiro
Kamishima, tracer0tong, Trent Hauck, trevorstephens, Tue Vo, Varun, Varun
Jewalikar, Viacheslav, Vighnesh Birodkar, Vikram, Villu Ruusmann, Vinayak
Mehta, walter, waterponey, Wenhua Yang, Wenjian Huang, Will Welch, wyseguy7,
xyguo, yanlend, Yaroslav Halchenko, yelite, Yen, YenChenLin, Yichuan Liu,
Yoav Ram, Yoshiki, Zheng RuiFeng, zivori, Óscar Nájera

.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_0_24:
============
Version 0.24
============
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_0_24_0.py`.
.. include:: changelog_legend.inc
.. _changes_0_24_2:
Version 0.24.2
==============
**April 2021**
Changelog
---------
:mod:`sklearn.compose`
......................
- |Fix| `compose.ColumnTransformer.get_feature_names` does not call
`get_feature_names` on transformers with an empty column selection.
:pr:`19579` by `Thomas Fan`_.
:mod:`sklearn.cross_decomposition`
..................................
- |Fix| Fixed a regression in :class:`cross_decomposition.CCA`. :pr:`19646`
by `Thomas Fan`_.
- |Fix| :class:`cross_decomposition.PLSRegression` now raises a warning for
constant `y` residuals instead of a `StopIteration` error. :pr:`19922`
by `Thomas Fan`_.
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed a bug in :class:`decomposition.KernelPCA`'s
``inverse_transform``. :pr:`19732` by :user:`Kei Ishikawa <kstoneriv3>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| Fixed a bug in :class:`ensemble.HistGradientBoostingRegressor` `fit`
with `sample_weight` parameter and `least_absolute_deviation` loss function.
:pr:`19407` by :user:`Vadim Ushtanit <vadim-ushtanit>`.
:mod:`sklearn.feature_extraction`
.................................
- |Fix| Fixed a bug to support multiple strings for a category when
`sparse=False` in :class:`feature_extraction.DictVectorizer`.
:pr:`19982` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.gaussian_process`
...............................
- |Fix| Avoid explicitly forming inverse covariance matrix in
:class:`gaussian_process.GaussianProcessRegressor` when set to output
standard deviation. With certain covariance matrices this inverse is unstable
to compute explicitly. Calling Cholesky solver mitigates this issue in
computation.
:pr:`19939` by :user:`Ian Halvic <iwhalvic>`.
- |Fix| Avoid division by zero when scaling constant target in
:class:`gaussian_process.GaussianProcessRegressor`. It was due to a std. dev.
equal to 0. Now, such a case is detected and the std. dev. is set to 1,
avoiding a division by zero and thus the presence of NaN values in the
normalized target.
:pr:`19703` by :user:`sobkevich`, :user:`Boris Villazón-Terrazas <boricles>`
and :user:`Alexandr Fonari <afonari>`.
:mod:`sklearn.linear_model`
...........................
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression`: the
sample_weight object is not modified anymore. :pr:`19182` by
:user:`Yosuke KOBAYASHI <m7142yosuke>`.
:mod:`sklearn.metrics`
......................
- |Fix| :func:`metrics.top_k_accuracy_score` now supports multiclass
problems where only two classes appear in `y_true` and all the classes
are specified in `labels`.
:pr:`19721` by :user:`Joris Clement <flyingdutchman23>`.
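A minimal sketch of the fixed case (assuming scikit-learn >= 0.24): only two classes appear in `y_true`, while `labels` declares all three classes covered by the score columns.

```python
import numpy as np
from sklearn.metrics import top_k_accuracy_score

# Only classes 0 and 1 appear in y_true, but the score columns cover the
# three classes declared in `labels`.
y_true = np.array([0, 1, 1, 0])
y_score = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.7, 0.2],
    [0.7, 0.2, 0.1],
])
acc = top_k_accuracy_score(y_true, y_score, k=2, labels=[0, 1, 2])
print(acc)
```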
:mod:`sklearn.model_selection`
..............................
- |Fix| :class:`model_selection.RandomizedSearchCV` and
:class:`model_selection.GridSearchCV` now correctly show the score for
single metrics and verbose > 2. :pr:`19659` by `Thomas Fan`_.
- |Fix| Some values in the `cv_results_` attribute of
:class:`model_selection.HalvingRandomSearchCV` and
:class:`model_selection.HalvingGridSearchCV` were not properly converted to
numpy arrays. :pr:`19211` by `Nicolas Hug`_.
- |Fix| The `fit` method of the successive halving parameter search
(:class:`model_selection.HalvingGridSearchCV`, and
:class:`model_selection.HalvingRandomSearchCV`) now correctly handles the
`groups` parameter. :pr:`19847` by :user:`Xiaoyu Chai <xiaoyuchai>`.
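A short sketch of passing `groups` to a halving search; the estimator, parameter grid and group layout here are illustrative choices, not part of the fix itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, HalvingGridSearchCV

X, y = make_classification(n_samples=120, random_state=0)
groups = np.arange(120) % 6  # six synthetic groups

search = HalvingGridSearchCV(
    LogisticRegression(max_iter=1000),
    {"C": [0.1, 1.0]},
    cv=GroupKFold(n_splits=3),
    random_state=0,
)
# `groups` is now forwarded correctly to the group-aware CV splitter.
search.fit(X, y, groups=groups)
print(search.best_params_)
```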
:mod:`sklearn.multioutput`
..........................
- |Fix| :class:`multioutput.MultiOutputRegressor` now works with estimators
that dynamically define `predict` during fitting, such as
:class:`ensemble.StackingRegressor`. :pr:`19308` by `Thomas Fan`_.
:mod:`sklearn.preprocessing`
............................
- |Fix| Validate the constructor parameter `handle_unknown` in
:class:`preprocessing.OrdinalEncoder` to only allow for `'error'` and
`'use_encoded_value'` strategies.
:pr:`19234` by :user:`Guillaume Lemaitre <glemaitre>`.
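A minimal sketch of the `'use_encoded_value'` strategy (available since 0.24); categories unseen during `fit` map to `unknown_value`:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
enc.fit(np.array([["cat"], ["dog"]]))
# "fish" was never seen during fit, so it is encoded as -1.
out = enc.transform(np.array([["dog"], ["fish"]]))
print(out)
```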
- |Fix| Fix support for encoder categories with `dtype='S'` in
:class:`preprocessing.OneHotEncoder` and
:class:`preprocessing.OrdinalEncoder`.
:pr:`19727` by :user:`Andrew Delong <andrewdelong>`.
- |Fix| :meth:`preprocessing.OrdinalEncoder.transform` correctly handles
unknown values for string dtypes. :pr:`19888` by `Thomas Fan`_.
- |Fix| :meth:`preprocessing.OneHotEncoder.fit` no longer alters the `drop`
parameter. :pr:`19924` by `Thomas Fan`_.
:mod:`sklearn.semi_supervised`
..............................
- |Fix| Avoid NaN during label propagation in
:class:`~sklearn.semi_supervised.LabelPropagation`.
:pr:`19271` by :user:`Zhaowei Wang <ThuWangzw>`.
:mod:`sklearn.tree`
...................
- |Fix| Fix a bug in `fit` of `tree.BaseDecisionTree` that caused
segmentation faults under certain conditions. `fit` now deep copies the
`Criterion` object to prevent shared concurrent accesses.
:pr:`19580` by :user:`Samuel Brice <samdbrice>` and
:user:`Alex Adamson <aadamson>` and
:user:`Wil Yegelwel <wyegelwel>`.
:mod:`sklearn.utils`
....................
- |Fix| Better scope the CSS provided by :func:`utils.estimator_html_repr`
by giving CSS ids to the HTML representation. :pr:`19417` by `Thomas Fan`_.
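For reference, the helper renders any estimator as a self-contained HTML snippet; a minimal sketch:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.utils import estimator_html_repr

# The returned string embeds the scoped CSS plus the estimator diagram.
html = estimator_html_repr(LogisticRegression())
print(len(html))
```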
.. _changes_0_24_1:
Version 0.24.1
==============
**January 2021**
Packaging
---------
The 0.24.0 scikit-learn wheels were not working with macOS <10.15 due to
`libomp`. The version of `libomp` used to build the wheels was too recent for
older macOS versions. This issue has been fixed for 0.24.1 scikit-learn wheels.
Scikit-learn wheels published on PyPI.org now officially support macOS 10.13
and later.
Changelog
---------
:mod:`sklearn.metrics`
......................
- |Fix| Fix numerical stability bug that could happen in
:func:`metrics.adjusted_mutual_info_score` and
:func:`metrics.mutual_info_score` with NumPy 1.20+.
:pr:`19179` by `Thomas Fan`_.
:mod:`sklearn.semi_supervised`
..............................
- |Fix| :class:`semi_supervised.SelfTrainingClassifier` now accepts
meta-estimators (e.g. :class:`ensemble.StackingClassifier`). The validation
of this estimator is done on the fitted estimator, once the existence of
the method `predict_proba` is known.
:pr:`19126` by :user:`Guillaume Lemaitre <glemaitre>`.
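A minimal sketch of self-training (unlabeled samples are marked with -1); a plain `SVC` stands in here, but the same pattern now also works with meta-estimators such as `ensemble.StackingClassifier`:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
y_semi = y.copy()
y_semi[50:] = -1  # -1 marks unlabeled samples

# Any classifier exposing predict_proba after fitting works here.
clf = SelfTrainingClassifier(SVC(probability=True, random_state=0))
clf.fit(X, y_semi)
preds = clf.predict(X[:3])
print(preds)
```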
.. _changes_0_24:
Version 0.24.0
==============
**December 2020**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Fix| :class:`decomposition.KernelPCA` behaviour is now more consistent
between 32-bits and 64-bits data when the kernel has small positive
eigenvalues.
- |Fix| :class:`decomposition.TruncatedSVD` becomes deterministic by exposing
a `random_state` parameter.
- |Fix| :class:`linear_model.Perceptron` when `penalty='elasticnet'`.
- |Fix| Change in the random sampling procedures for the center initialization
of :class:`cluster.KMeans`.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)
Changelog
---------
:mod:`sklearn.base`
...................
- |Fix| :meth:`base.BaseEstimator.get_params` now will raise an
`AttributeError` if a parameter cannot be retrieved as
an instance attribute. Previously it would return `None`.
:pr:`17448` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.
:mod:`sklearn.calibration`
..........................
- |Efficiency| :meth:`calibration.CalibratedClassifierCV.fit` now supports
parallelization via `joblib.Parallel` using argument `n_jobs`.
:pr:`17107` by :user:`Julien Jerphanion <jjerphan>`.
- |Enhancement| Allow :class:`calibration.CalibratedClassifierCV` to be used
with a prefit :class:`pipeline.Pipeline` where the data, `X`, is not
array-like, a sparse matrix or a dataframe at the start. :pr:`17546` by
:user:`Lucy Liu <lucyleeow>`.
- |Enhancement| Add `ensemble` parameter to
:class:`calibration.CalibratedClassifierCV`, which enables implementation
of calibration via an ensemble of calibrators (current method) or
just one calibrator using all the data (similar to the built-in feature of
:mod:`sklearn.svm` estimators with the `probability=True` parameter).
:pr:`17856` by :user:`Lucy Liu <lucyleeow>` and
:user:`Andrea Esuli <aesuli>`.
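A minimal sketch of the new parameter (assuming scikit-learn >= 0.24): with `ensemble=False`, cross-validated predictions feed a single calibrator instead of one calibrator per fold.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, random_state=0)

# LinearSVC has no predict_proba of its own; calibration adds one.
clf = CalibratedClassifierCV(LinearSVC(random_state=0), ensemble=False, cv=3)
clf.fit(X, y)
proba = clf.predict_proba(X[:5])
print(proba.shape)
```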
:mod:`sklearn.cluster`
......................
- |Enhancement| :class:`cluster.AgglomerativeClustering` has a new parameter
`compute_distances`. When set to `True`, distances between clusters are
computed and stored in the `distances_` attribute even when the parameter
`distance_threshold` is not used. This new parameter is useful to produce
dendrogram visualizations, but introduces a computational and memory
overhead. :pr:`17984` by :user:`Michael Riedmann <mriedmann>`,
:user:`Emilie Delattre <EmilieDel>`, and
:user:`Francesco Casalegno <FrancescoCasalegno>`.
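A minimal sketch of the new parameter; `distances_` is populated even though `distance_threshold` is not used:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[0.0], [0.1], [5.0], [5.1]])
model = AgglomerativeClustering(n_clusters=2, compute_distances=True).fit(X)

# One merge distance per internal node of the dendrogram (n_samples - 1).
distances = model.distances_
print(distances)
```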
- |Enhancement| :class:`cluster.SpectralClustering` and
:func:`cluster.spectral_clustering` have a new keyword argument `verbose`.
When set to `True`, additional messages will be displayed which can aid with
debugging. :pr:`18052` by :user:`Sean O. Stalley <sstalley>`.
- |Enhancement| Added :func:`cluster.kmeans_plusplus` as a public function.
Initialization by KMeans++ can now be called separately to generate
initial cluster centroids. :pr:`17937` by :user:`g-walsh`.
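A minimal sketch of calling the initialization on its own:

```python
import numpy as np
from sklearn.cluster import kmeans_plusplus

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
# Returns the chosen centroids and their row indices in X.
centers, indices = kmeans_plusplus(X, n_clusters=2, random_state=0)
print(centers, indices)
```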
- |API| :class:`cluster.MiniBatchKMeans` attributes, `counts_` and
`init_size_`, are deprecated and will be removed in 1.1 (renaming of 0.26).
:pr:`17864` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.compose`
......................
- |Fix| :class:`compose.ColumnTransformer` will skip transformers when the
column selector is a list of bools that are all False. :pr:`17616` by
`Thomas Fan`_.
- |Fix| :class:`compose.ColumnTransformer` now displays the remainder in the
diagram display. :pr:`18167` by `Thomas Fan`_.
- |Fix| :class:`compose.ColumnTransformer` enforces strict count and order
of column names between `fit` and `transform` by raising an error instead
of a warning, following the deprecation cycle.
:pr:`18256` by :user:`Madhura Jayratne <madhuracj>`.
:mod:`sklearn.covariance`
.........................
- |API| Deprecates `cv_alphas_` in favor of `cv_results_['alphas']` and
`grid_scores_` in favor of split scores in `cv_results_` in
:class:`covariance.GraphicalLassoCV`. `cv_alphas_` and `grid_scores_` will be
removed in version 1.1 (renaming of 0.26).
:pr:`16392` by `Thomas Fan`_.
:mod:`sklearn.cross_decomposition`
..................................
- |Fix| Fixed a bug in :class:`cross_decomposition.PLSSVD` which would
sometimes return components in the reversed order of importance.
:pr:`17095` by `Nicolas Hug`_.
- |Fix| Fixed a bug in :class:`cross_decomposition.PLSSVD`,
:class:`cross_decomposition.CCA`, and
:class:`cross_decomposition.PLSCanonical`, which would lead to incorrect
predictions for `est.transform(Y)` when the training data is single-target.
:pr:`17095` by `Nicolas Hug`_.
- |Fix| Increases the stability of :class:`cross_decomposition.CCA` :pr:`18746`
by `Thomas Fan`_.
- |API| The bounds of the `n_components` parameter are now restricted:
- into `[1, min(n_samples, n_features, n_targets)]`, for
:class:`cross_decomposition.PLSSVD`, :class:`cross_decomposition.CCA`,
and :class:`cross_decomposition.PLSCanonical`.
- into `[1, n_features]` for :class:`cross_decomposition.PLSRegression`.
An error will be raised in 1.1 (renaming of 0.26).
:pr:`17095` by `Nicolas Hug`_.
- |API| For :class:`cross_decomposition.PLSSVD`,
:class:`cross_decomposition.CCA`, and
:class:`cross_decomposition.PLSCanonical`, the `x_scores_` and `y_scores_`
attributes were deprecated and will be removed in 1.1 (renaming of 0.26).
They can be retrieved by calling `transform` on the training data.
The `norm_y_weights` attribute will also be removed.
:pr:`17095` by `Nicolas Hug`_.
- |API| For :class:`cross_decomposition.PLSRegression`,
:class:`cross_decomposition.PLSCanonical`,
:class:`cross_decomposition.CCA`, and
:class:`cross_decomposition.PLSSVD`, the `x_mean_`, `y_mean_`, `x_std_`, and
`y_std_` attributes were deprecated and will be removed in 1.1
(renaming of 0.26).
:pr:`18768` by :user:`Maren Westermann <marenwestermann>`.
- |Fix| :class:`decomposition.TruncatedSVD` becomes deterministic by using
`random_state`, which controls the initialization of the weights of the
underlying ARPACK solver.
:pr:`18302` by :user:`Gaurav Desai <gauravkdesai>` and
:user:`Ivan Panico <FollowKenny>`.
:mod:`sklearn.datasets`
.......................
- |Feature| :func:`datasets.fetch_openml` now validates md5 checksum of arff
files downloaded or cached to ensure data integrity.
:pr:`14800` by :user:`Shashank Singh <shashanksingh28>` and `Joel Nothman`_.
- |Enhancement| :func:`datasets.fetch_openml` now allows argument `as_frame`
to be 'auto', which tries to convert returned data to pandas DataFrame
unless data is sparse.
:pr:`17396` by :user:`Jiaxiang <fujiaxiang>`.
- |Enhancement| :func:`datasets.fetch_covtype` now supports the optional
argument `as_frame`; when it is set to True, the returned Bunch object's
`data` and `frame` members are pandas DataFrames, and the `target` member is
a pandas Series.
:pr:`17491` by :user:`Alex Liang <tianchuliang>`.
- |Enhancement| :func:`datasets.fetch_kddcup99` now supports the optional
argument `as_frame`; when it is set to True, the returned Bunch object's
`data` and `frame` members are pandas DataFrames, and the `target` member is
a pandas Series.
:pr:`18280` by :user:`Alex Liang <tianchuliang>` and
`Guillaume Lemaitre`_.
- |Enhancement| :func:`datasets.fetch_20newsgroups_vectorized` now supports
loading as a pandas ``DataFrame`` by setting ``as_frame=True``.
:pr:`17499` by :user:`Brigitta Sipőcz <bsipocz>` and
`Guillaume Lemaitre`_.
- |API| The default value of `as_frame` in :func:`datasets.fetch_openml` is
changed from False to 'auto'.
:pr:`17610` by :user:`Jiaxiang <fujiaxiang>`.
:mod:`sklearn.decomposition`
............................
- |API| For :class:`decomposition.NMF`, the default value of the `init`
parameter, used when `init=None` and
`n_components <= min(n_samples, n_features)`, will be changed from
`'nndsvd'` to `'nndsvda'` in 1.1 (renaming of 0.26).
:pr:`18525` by :user:`Chiara Marmo <cmarmo>`.
- |Enhancement| :func:`decomposition.FactorAnalysis` now supports the optional
argument `rotation`, which can take the value `None`, `'varimax'` or
`'quartimax'`. :pr:`11064` by :user:`Jona Sassenhagen <jona-sassenhagen>`.
- |Enhancement| :class:`decomposition.NMF` now supports the optional parameter
`regularization`, which can take the values `None`, 'components',
'transformation' or 'both', in accordance with
`decomposition.NMF.non_negative_factorization`.
:pr:`17414` by :user:`Bharat Raghunathan <bharatr21>`.
- |Fix| :class:`decomposition.KernelPCA` behaviour is now more consistent
between 32-bit and 64-bit data input when the kernel has small positive
eigenvalues. Small positive eigenvalues were not correctly discarded for
32-bit data.
:pr:`18149` by :user:`Sylvain Marié <smarie>`.
- |Fix| Fix :class:`decomposition.SparseCoder` such that it follows
scikit-learn API and support cloning. The attribute `components_` is
deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).
This attribute was redundant with the `dictionary` attribute and constructor
parameter.
:pr:`17679` by :user:`Xavier Dupré <sdpython>`.
- |Fix| :meth:`decomposition.TruncatedSVD.fit_transform` consistently returns
the same as :meth:`decomposition.TruncatedSVD.fit` followed by
:meth:`decomposition.TruncatedSVD.transform`.
:pr:`18528` by :user:`Albert Villanova del Moral <albertvillanova>` and
:user:`Ruifeng Zheng <zhengruifeng>`.
:mod:`sklearn.discriminant_analysis`
....................................
- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` can
now use custom covariance estimate by setting the `covariance_estimator`
parameter. :pr:`14446` by :user:`Hugo Richard <hugorichard>`.
:mod:`sklearn.ensemble`
.......................
- |MajorFeature| :class:`ensemble.HistGradientBoostingRegressor` and
:class:`ensemble.HistGradientBoostingClassifier` now have native
support for categorical features with the `categorical_features`
parameter. :pr:`18394` by `Nicolas Hug`_ and `Thomas Fan`_.
- |Feature| :class:`ensemble.HistGradientBoostingRegressor` and
:class:`ensemble.HistGradientBoostingClassifier` now support the
method `staged_predict`, which allows monitoring of each stage.
:pr:`16985` by :user:`Hao Chun Chang <haochunchang>`.
- |Efficiency| Break cyclic references in the tree nodes used internally in
:class:`ensemble.HistGradientBoostingRegressor` and
:class:`ensemble.HistGradientBoostingClassifier` to allow for the timely
garbage collection of large intermediate datastructures and to improve memory
usage in `fit`. :pr:`18334` by `Olivier Grisel`_, `Nicolas Hug`_, `Thomas
Fan`_ and `Andreas Müller`_.
- |Efficiency| Histogram initialization is now done in parallel in
:class:`ensemble.HistGradientBoostingRegressor` and
:class:`ensemble.HistGradientBoostingClassifier` which results in speed
improvement for problems that build a lot of nodes on multicore machines.
:pr:`18341` by `Olivier Grisel`_, `Nicolas Hug`_, `Thomas Fan`_, and
:user:`Egor Smirnov <SmirnovEgorRu>`.
- |Fix| Fixed a bug in
:class:`ensemble.HistGradientBoostingRegressor` and
:class:`ensemble.HistGradientBoostingClassifier` which can now accept data
with `uint8` dtype in `predict`. :pr:`18410` by `Nicolas Hug`_.
- |API| The attribute ``n_classes_`` is now deprecated in
:class:`ensemble.GradientBoostingRegressor` and returns `1`.
:pr:`17702` by :user:`Simona Maggio <simonamaggio>`.
- |API| Mean absolute error ('mae') is now deprecated for the parameter
``criterion`` in :class:`ensemble.GradientBoostingRegressor` and
:class:`ensemble.GradientBoostingClassifier`.
:pr:`18326` by :user:`Madhura Jayaratne <madhuracj>`.
:mod:`sklearn.exceptions`
.........................
- |API| `exceptions.ChangedBehaviorWarning` and
`exceptions.NonBLASDotWarning` are deprecated and will be removed in
1.1 (renaming of 0.26).
:pr:`17804` by `Adrin Jalali`_.
:mod:`sklearn.feature_extraction`
.................................
- |Enhancement| :class:`feature_extraction.DictVectorizer` accepts multiple
values for one categorical feature. :pr:`17367` by :user:`Peng Yu <yupbank>`
and :user:`Chiara Marmo <cmarmo>`.
- |Fix| :class:`feature_extraction.text.CountVectorizer` raises an error if a
custom token pattern that captures more than one group is provided.
:pr:`15427` by :user:`Gangesh Gudmalwar <ggangesh>` and
:user:`Erin R Hoffman <hoffm386>`.
:mod:`sklearn.feature_selection`
................................
- |Feature| Added :class:`feature_selection.SequentialFeatureSelector`
which implements forward and backward sequential feature selection.
:pr:`6545` by `Sebastian Raschka`_ and :pr:`17159` by `Nicolas Hug`_.
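  A brief sketch of the new selector (the estimator and feature count are
  arbitrary choices for illustration):

  ```python
  from sklearn.datasets import load_iris
  from sklearn.feature_selection import SequentialFeatureSelector
  from sklearn.neighbors import KNeighborsClassifier

  X, y = load_iris(return_X_y=True)
  # Greedy forward selection of 2 of the 4 iris features.
  sfs = SequentialFeatureSelector(
      KNeighborsClassifier(n_neighbors=3), n_features_to_select=2
  )
  sfs.fit(X, y)
  mask = sfs.get_support()  # boolean mask of the selected features
  ```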
- |Feature| A new parameter `importance_getter` was added to
:class:`feature_selection.RFE`, :class:`feature_selection.RFECV` and
:class:`feature_selection.SelectFromModel`, allowing the user to specify an
attribute name/path or a `callable` for extracting feature importance from
the estimator. :pr:`15361` by :user:`Venkatachalam N <venkyyuvy>`.
- |Efficiency| Reduce memory footprint in
:func:`feature_selection.mutual_info_classif`
and :func:`feature_selection.mutual_info_regression` by calling
:class:`neighbors.KDTree` for counting nearest neighbors. :pr:`17878` by
:user:`Noel Rogers <noelano>`.
- |Enhancement| :class:`feature_selection.RFE` supports the option for the
number of `n_features_to_select` to be given as a float representing the
percentage of features to select.
:pr:`17090` by :user:`Lisa Schwetlick <lschwetlick>` and
:user:`Marija Vlajic Wheeler <marijavlajic>`.
:mod:`sklearn.gaussian_process`
...............................
- |Enhancement| A new method
`gaussian_process.kernel._check_bounds_params` is called after
fitting a Gaussian Process and raises a ``ConvergenceWarning`` if the bounds
of the hyperparameters are too tight.
:issue:`12638` by :user:`Sylvain Lannuzel <SylvainLan>`.
:mod:`sklearn.impute`
.....................
- |Feature| :class:`impute.SimpleImputer` now supports a list of strings
when ``strategy='most_frequent'`` or ``strategy='constant'``.
:pr:`17526` by :user:`Ayako YAGI <yagi-3>` and
:user:`Juan Carlos Alfaro Jiménez <alfaro96>`.
- |Feature| Added method :meth:`impute.SimpleImputer.inverse_transform` to
revert imputed data to original when instantiated with
``add_indicator=True``. :pr:`17612` by :user:`Srimukh Sripada <d3b0unce>`.
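  A small sketch of the round trip (data is illustrative; `add_indicator=True`
  is required so the imputed positions can be recovered):

  ```python
  import numpy as np
  from sklearn.impute import SimpleImputer

  X = np.array([[1.0, np.nan],
                [2.0, 3.0],
                [np.nan, 4.0]])
  imp = SimpleImputer(strategy="mean", add_indicator=True)
  Xt = imp.fit_transform(X)            # imputed values + indicator columns
  X_back = imp.inverse_transform(Xt)   # NaNs restored at imputed positions
  ```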
- |Fix| Replaced the default values of the `min_value` and `max_value`
parameters in :class:`impute.IterativeImputer` with `-np.inf` and `np.inf`,
respectively, instead of `None`. The behaviour of the class does not change,
since `None` already defaulted to these values.
:pr:`16493` by :user:`Darshan N <DarshanGowda0>`.
- |Fix| :class:`impute.IterativeImputer` will not attempt to set the
estimator's `random_state` attribute, allowing it to be used with more external classes.
:pr:`15636` by :user:`David Cortes <david-cortes>`.
- |Efficiency| :class:`impute.SimpleImputer` is now faster with `object` dtype
arrays when `strategy='most_frequent'`.
:pr:`18987` by :user:`David Katz <DavidKatz-il>`.
:mod:`sklearn.inspection`
.........................
- |Feature| :func:`inspection.partial_dependence` and
`inspection.plot_partial_dependence` now support calculating and
plotting Individual Conditional Expectation (ICE) curves controlled by the
``kind`` parameter.
:pr:`16619` by :user:`Madhura Jayratne <madhuracj>`.
- |Feature| Add `sample_weight` parameter to
:func:`inspection.permutation_importance`. :pr:`16906` by
:user:`Roei Kahny <RoeiKa>`.
- |API| Positional arguments are deprecated in
:meth:`inspection.PartialDependenceDisplay.plot` and will error in 1.1
(renaming of 0.26).
:pr:`18293` by `Thomas Fan`_.
:mod:`sklearn.isotonic`
.......................
- |Feature| Expose fitted attributes ``X_thresholds_`` and ``y_thresholds_``
that hold the de-duplicated interpolation thresholds of an
:class:`isotonic.IsotonicRegression` instance for model inspection purpose.
:pr:`16289` by :user:`Masashi Kishimoto <kishimoto-banana>` and
:user:`Olivier Grisel <ogrisel>`.
- |Enhancement| :class:`isotonic.IsotonicRegression` now accepts a 2d array
with 1 feature as its input array. :pr:`17379` by :user:`Jiaxiang <fujiaxiang>`.
- |Fix| Add tolerance when determining duplicate X values to prevent
inf values from being predicted by :class:`isotonic.IsotonicRegression`.
:pr:`18639` by :user:`Lucy Liu <lucyleeow>`.
:mod:`sklearn.kernel_approximation`
...................................
- |Feature| Added class :class:`kernel_approximation.PolynomialCountSketch`
which implements the Tensor Sketch algorithm for polynomial kernel feature
map approximation.
:pr:`13003` by :user:`Daniel López Sánchez <lopeLH>`.
- |Efficiency| :class:`kernel_approximation.Nystroem` now supports
parallelization via `joblib.Parallel` using argument `n_jobs`.
:pr:`18545` by :user:`Laurenz Reitsam <LaurenzReitsam>`.
:mod:`sklearn.linear_model`
...........................
- |Feature| :class:`linear_model.LinearRegression` now forces coefficients
to be all positive when ``positive`` is set to ``True``.
:pr:`17578` by :user:`Joseph Knox <jknox13>`,
:user:`Nelle Varoquaux <NelleV>` and :user:`Chiara Marmo <cmarmo>`.
- |Enhancement| :class:`linear_model.RidgeCV` now supports finding an optimal
regularization value `alpha` for each target separately by setting
``alpha_per_target=True``. This is only supported when using the default
efficient leave-one-out cross-validation scheme ``cv=None``. :pr:`6624` by
:user:`Marijn van Vliet <wmvanvliet>`.
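  A hedged sketch with two synthetic targets of different noise levels (the
  alpha grid is arbitrary):

  ```python
  import numpy as np
  from sklearn.linear_model import RidgeCV

  rng = np.random.RandomState(0)
  X = rng.randn(60, 4)
  # Two targets: one clean, one noisy, so different alphas may be optimal.
  Y = np.c_[X @ rng.randn(4), X @ rng.randn(4) + 5 * rng.randn(60)]
  reg = RidgeCV(alphas=[1e-3, 1e-1, 1e1], alpha_per_target=True).fit(X, Y)
  # reg.alpha_ now holds one regularization value per target.
  ```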
- |Fix| Fixes bug in :class:`linear_model.TheilSenRegressor` where
`predict` and `score` would fail when `fit_intercept=False` and there was
one feature during fitting. :pr:`18121` by `Thomas Fan`_.
- |Fix| Fixes bug in :class:`linear_model.ARDRegression` where `predict`
was raising an error when `normalize=True` and `return_std=True` because
`X_offset_` and `X_scale_` were undefined.
:pr:`18607` by :user:`fhaselbeck <fhaselbeck>`.
- |Fix| Added the missing `l1_ratio` parameter in
:class:`linear_model.Perceptron`, to be used when `penalty='elasticnet'`.
This changes the default from 0 to 0.15. :pr:`18622` by
:user:`Haesun Park <rickiepark>`.
:mod:`sklearn.manifold`
.......................
- |Efficiency| Improved Local Linear Embedding (LLE), which raised a
`MemoryError` exception when used with large inputs. Fixes :issue:`10493`.
:pr:`17997` by :user:`Bertrand Maisonneuve <bmaisonn>`.
- |Enhancement| Add `square_distances` parameter to :class:`manifold.TSNE`,
which provides backward compatibility during deprecation of legacy squaring
behavior. Distances will be squared by default in 1.1 (renaming of 0.26),
and this parameter will be removed in 1.3. :pr:`17662` by
:user:`Joshua Newton <joshuacwnewton>`.
- |Fix| :class:`manifold.MDS` now correctly sets its `_pairwise` attribute.
:pr:`18278` by `Thomas Fan`_.
:mod:`sklearn.metrics`
......................
- |Feature| Added :func:`metrics.cluster.pair_confusion_matrix` implementing
the confusion matrix arising from pairs of elements from two clusterings.
:pr:`17412` by :user:`Uwe F Mayer <ufmayer>`.
- |Feature| Added the new metric :func:`metrics.top_k_accuracy_score`. It is a
generalization of :func:`metrics.accuracy_score`; the difference is
that a prediction is considered correct as long as the true label is
associated with one of the `k` highest predicted scores.
:func:`metrics.accuracy_score` is the special case of `k = 1`.
:pr:`16625` by :user:`Geoffrey Bolmier <gbolmier>`.
- |Feature| Added :func:`metrics.det_curve` to compute Detection Error Tradeoff
curve classification metric.
:pr:`10591` by :user:`Jeremy Karnowski <jkarnows>` and
:user:`Daniel Mohns <dmohns>`.
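  A minimal sketch with toy scores (values illustrative only):

  ```python
  from sklearn.metrics import det_curve

  y_true = [0, 0, 0, 1, 1, 1]
  y_score = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
  # fpr and fnr trace the Detection Error Tradeoff as the threshold varies.
  fpr, fnr, thresholds = det_curve(y_true, y_score)
  ```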
- |Feature| Added `metrics.plot_det_curve` and
:class:`metrics.DetCurveDisplay` to ease the plot of DET curves.
:pr:`18176` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Feature| Added :func:`metrics.mean_absolute_percentage_error` metric and
the associated scorer for regression problems. :issue:`10708` fixed with the
PR :pr:`15007` by :user:`Ashutosh Hathidara <ashutosh1919>`. The scorer and
some practical test cases were taken from PR :pr:`10711` by
:user:`Mohamed Ali Jamaoui <mohamed-ali>`.
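  A short sketch on toy values (MAPE averages the absolute relative errors):

  ```python
  from sklearn.metrics import mean_absolute_percentage_error

  y_true = [3, 5, 2.5, 7]
  y_pred = [2.5, 5, 4, 8]
  # mean of |0.5/3|, |0/5|, |1.5/2.5|, |1/7|
  mape = mean_absolute_percentage_error(y_true, y_pred)
  ```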
- |Feature| Added :func:`metrics.rand_score` implementing the (unadjusted)
Rand index.
:pr:`17412` by :user:`Uwe F Mayer <ufmayer>`.
- |Feature| `metrics.plot_confusion_matrix` now supports making colorbar
optional in the matplotlib plot by setting `colorbar=False`. :pr:`17192` by
:user:`Avi Gupta <avigupta2612>`.
- |Enhancement| Add `sample_weight` parameter to
:func:`metrics.median_absolute_error`. :pr:`17225` by
:user:`Lucy Liu <lucyleeow>`.
- |Enhancement| Add `pos_label` parameter in
`metrics.plot_precision_recall_curve` in order to specify the positive
class to be used when computing the precision and recall statistics.
:pr:`17569` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| Add `pos_label` parameter in
`metrics.plot_roc_curve` in order to specify the positive
class to be used when computing the roc auc statistics.
:pr:`17651` by :user:`Clara Matos <claramatos>`.
- |Fix| Fixed a bug in
:func:`metrics.classification_report` which was raising AttributeError
when called with `output_dict=True` for 0-length values.
:pr:`17777` by :user:`Shubhanshu Mishra <napsternxg>`.
- |Fix| Fixed a bug in
:func:`metrics.jaccard_score` which recommended the `zero_division`
parameter when called with no true or predicted samples.
:pr:`17826` by :user:`Richard Decal <crypdick>` and
:user:`Joseph Willard <josephwillard>`.
- |Fix| Fixed a bug in :func:`metrics.hinge_loss` where an error occurred when
``y_true`` was missing some labels that were provided explicitly in the
``labels`` parameter.
:pr:`17935` by :user:`Cary Goltermann <Ultramann>`.
- |Fix| Fix scorers that accept a `pos_label` parameter and compute their
metrics from values returned by `decision_function` or `predict_proba`.
Previously, they would return erroneous values when `pos_label` did not
correspond to `classifier.classes_[1]`. This is especially important when
training classifiers directly with string labeled target classes.
:pr:`18114` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fixed bug in `metrics.plot_confusion_matrix` where an error occurred
when `y_true` contains labels that were not previously seen by the classifier
while the `labels` and `display_labels` parameters are set to `None`.
:pr:`18405` by :user:`Thomas J. Fan <thomasjpfan>` and
:user:`Yakov Pchelintsev <kyouma>`.
:mod:`sklearn.model_selection`
..............................
- |MajorFeature| Added (experimental) parameter search estimators
:class:`model_selection.HalvingRandomSearchCV` and
:class:`model_selection.HalvingGridSearchCV` which implement Successive
Halving, and can be used as drop-in replacements for
:class:`model_selection.RandomizedSearchCV` and
:class:`model_selection.GridSearchCV`. :pr:`13900` by `Nicolas Hug`_, `Joel
Nothman`_ and `Andreas Müller`_.
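  Since these estimators are experimental, they require an explicit enabling
  import; a minimal sketch (estimator and grid are arbitrary):

  ```python
  from sklearn.datasets import load_iris
  from sklearn.experimental import enable_halving_search_cv  # noqa: required
  from sklearn.model_selection import HalvingGridSearchCV
  from sklearn.svm import SVC

  X, y = load_iris(return_X_y=True)
  # Candidates are evaluated on growing resource budgets; only the best
  # 1/factor survive each round.
  search = HalvingGridSearchCV(
      SVC(), {"C": [0.1, 1.0, 10.0]}, factor=3, random_state=0
  ).fit(X, y)
  ```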
- |Feature| :class:`model_selection.RandomizedSearchCV` and
:class:`model_selection.GridSearchCV` now have the method ``score_samples``.
:pr:`17478` by :user:`Teon Brooks <teonbrooks>` and
:user:`Mohamed Maskani <maskani-moh>`.
- |Enhancement| :class:`model_selection.TimeSeriesSplit` has two new keyword
arguments `test_size` and `gap`. `test_size` allows the out-of-sample
time series length to be fixed for all folds. `gap` removes a fixed number of
samples between the train and test set on each fold.
:pr:`13204` by :user:`Kyle Kosic <kykosic>`.
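  A small sketch of the two keywords on a 10-sample series (sizes are
  illustrative):

  ```python
  import numpy as np
  from sklearn.model_selection import TimeSeriesSplit

  X = np.arange(10).reshape(-1, 1)
  # Fixed test folds of 2 samples; 1 sample dropped between train and test.
  tscv = TimeSeriesSplit(n_splits=3, test_size=2, gap=1)
  splits = list(tscv.split(X))
  ```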
- |Enhancement| :func:`model_selection.permutation_test_score` and
:func:`model_selection.validation_curve` now accept fit_params
to pass additional estimator parameters.
:pr:`18527` by :user:`Gaurav Dhingra <gxyd>`,
:user:`Julien Jerphanion <jjerphan>` and :user:`Amanda Dsouza <amy12xx>`.
- |Enhancement| :func:`model_selection.cross_val_score`,
:func:`model_selection.cross_validate`,
:class:`model_selection.GridSearchCV`, and
:class:`model_selection.RandomizedSearchCV` allows estimator to fail scoring
and replace the score with `error_score`. If `error_score="raise"`, the error
will be raised.
:pr:`18343` by `Guillaume Lemaitre`_ and :user:`Devi Sandeep <dsandeep0138>`.
- |Enhancement| :func:`model_selection.learning_curve` now accept fit_params
to pass additional estimator parameters.
:pr:`18595` by :user:`Amanda Dsouza <amy12xx>`.
- |Fix| Fixed the `len` of :class:`model_selection.ParameterSampler` when
all distributions are lists and `n_iter` is more than the number of unique
parameter combinations. :pr:`18222` by `Nicolas Hug`_.
- |Fix| A fix to raise warning when one or more CV splits of
:class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` results in non-finite scores.
:pr:`18266` by :user:`Subrat Sahu <subrat93>`,
:user:`Nirvan <Nirvan101>` and :user:`Arthur Book <ArthurBook>`.
- |Enhancement| :class:`model_selection.GridSearchCV`,
:class:`model_selection.RandomizedSearchCV` and
:func:`model_selection.cross_validate` support `scoring` being a callable
returning a dictionary of multiple metric name/value associations.
:pr:`15126` by `Thomas Fan`_.
:mod:`sklearn.multiclass`
.........................
- |Enhancement| :class:`multiclass.OneVsOneClassifier` now accepts
inputs with missing values. Hence, estimators which can handle
missing values (for example, a pipeline with an imputation step) can be used
as an estimator for multiclass wrappers.
:pr:`17987` by :user:`Venkatachalam N <venkyyuvy>`.
- |Fix| A fix to allow :class:`multiclass.OutputCodeClassifier` to accept
sparse input data in its `fit` and `predict` methods. The check for
validity of the input is now delegated to the base estimator.
:pr:`17233` by :user:`Zolisa Bleki <zoj613>`.
:mod:`sklearn.multioutput`
..........................
- |Enhancement| :class:`multioutput.MultiOutputClassifier` and
:class:`multioutput.MultiOutputRegressor` now accept inputs
with missing values. Hence, estimators which can handle missing
values (for example, a pipeline with an imputation step, or
HistGradientBoosting estimators) can be used as an estimator for
multioutput wrappers.
:pr:`17987` by :user:`Venkatachalam N <venkyyuvy>`.
- |Fix| A fix to accept tuples for the ``order`` parameter
in :class:`multioutput.ClassifierChain`.
:pr:`18124` by :user:`Gus Brocchini <boldloop>` and
:user:`Amanda Dsouza <amy12xx>`.
:mod:`sklearn.naive_bayes`
..........................
- |Enhancement| Adds a parameter `min_categories` to
:class:`naive_bayes.CategoricalNB` that allows a minimum number of categories
per feature to be specified. This allows categories unseen during training
to be accounted for.
:pr:`16326` by :user:`George Armstrong <gwarmstrong>`.
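  A hedged sketch on toy data: reserving extra categories lets `predict`
  handle a category index that never occurred during training:

  ```python
  import numpy as np
  from sklearn.naive_bayes import CategoricalNB

  X = np.array([[0], [1], [0], [1]])
  y = [0, 0, 1, 1]
  # Reserve 3 categories per feature even though only 0 and 1 were seen.
  clf = CategoricalNB(min_categories=3).fit(X, y)
  pred = clf.predict(np.array([[2]]))  # category 2 was unseen during fit
  ```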
- |API| The attributes ``coef_`` and ``intercept_`` are now deprecated in
:class:`naive_bayes.MultinomialNB`, :class:`naive_bayes.ComplementNB`,
:class:`naive_bayes.BernoulliNB` and :class:`naive_bayes.CategoricalNB`,
and will be removed in v1.1 (renaming of 0.26).
:pr:`17427` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.
:mod:`sklearn.neighbors`
........................
- |Efficiency| Speed up ``seuclidean``, ``wminkowski``, ``mahalanobis`` and
``haversine`` metrics in `neighbors.DistanceMetric` by avoiding
unexpected GIL acquisition in Cython when setting ``n_jobs>1`` in
:class:`neighbors.KNeighborsClassifier`,
:class:`neighbors.KNeighborsRegressor`,
:class:`neighbors.RadiusNeighborsClassifier`,
:class:`neighbors.RadiusNeighborsRegressor`,
:func:`metrics.pairwise_distances`
and by validating data out of loops.
:pr:`17038` by :user:`Wenbo Zhao <webber26232>`.
- |Efficiency| `neighbors.NeighborsBase` benefits from an improved
`algorithm = 'auto'` heuristic. In addition to the previous set of rules,
now, when the number of features exceeds 15, `brute` is selected, assuming
the data intrinsic dimensionality is too high for tree-based methods.
:pr:`17148` by :user:`Geoffrey Bolmier <gbolmier>`.
- |Fix| `neighbors.BinaryTree`
will raise a `ValueError` when fitting on data array having points with
different dimensions.
:pr:`18691` by :user:`Chiara Marmo <cmarmo>`.
- |Fix| :class:`neighbors.NearestCentroid` with a numerical `shrink_threshold`
will raise a `ValueError` when fitting on data with all constant features.
:pr:`18370` by :user:`Trevor Waite <trewaite>`.
- |Fix| In methods `radius_neighbors` and
`radius_neighbors_graph` of :class:`neighbors.NearestNeighbors`,
:class:`neighbors.RadiusNeighborsClassifier`,
:class:`neighbors.RadiusNeighborsRegressor`, and
:class:`neighbors.RadiusNeighborsTransformer`, using `sort_results=True` now
correctly sorts the results even when fitting with the "brute" algorithm.
:pr:`18612` by `Tom Dupre la Tour`_.
:mod:`sklearn.neural_network`
.............................
- |Efficiency| Neural net training and prediction are now a little faster.
:pr:`17603`, :pr:`17604`, :pr:`17606`, :pr:`17608`, :pr:`17609`, :pr:`17633`,
:pr:`17661`, :pr:`17932` by :user:`Alex Henrie <alexhenrie>`.
- |Enhancement| Avoid converting float32 input to float64 in
:class:`neural_network.BernoulliRBM`.
:pr:`16352` by :user:`Arthur Imbert <Henley13>`.
- |Enhancement| Support 32-bit computations in
:class:`neural_network.MLPClassifier` and
:class:`neural_network.MLPRegressor`.
:pr:`17759` by :user:`Srimukh Sripada <d3b0unce>`.
- |Fix| Fix method :meth:`neural_network.MLPClassifier.fit`
not iterating to ``max_iter`` if warm started.
:pr:`18269` by :user:`Norbert Preining <norbusan>` and
:user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.pipeline`
.......................
- |Enhancement| References to transformers passed through ``transformer_weights``
to :class:`pipeline.FeatureUnion` that aren't present in ``transformer_list``
will raise a ``ValueError``.
:pr:`17876` by :user:`Cary Goltermann <Ultramann>`.
- |Fix| A slice of a :class:`pipeline.Pipeline` now inherits the parameters of
the original pipeline (`memory` and `verbose`).
:pr:`18429` by :user:`Albert Villanova del Moral <albertvillanova>` and
:user:`Paweł Biernat <pwl>`.
:mod:`sklearn.preprocessing`
............................
- |Feature| :class:`preprocessing.OneHotEncoder` now supports missing
values by treating them as a category. :pr:`17317` by `Thomas Fan`_.
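  A minimal sketch (toy data): a missing entry simply becomes one more
  one-hot column:

  ```python
  import numpy as np
  from sklearn.preprocessing import OneHotEncoder

  X = np.array([["a"], ["b"], [np.nan]], dtype=object)
  enc = OneHotEncoder()
  # np.nan is treated as a third category -> three output columns.
  out = enc.fit_transform(X).toarray()
  ```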
- |Feature| Add a new ``handle_unknown`` parameter with a
``use_encoded_value`` option, along with a new ``unknown_value`` parameter,
to :class:`preprocessing.OrdinalEncoder` to allow unknown categories during
transform and set the encoded value of the unknown categories.
:pr:`17406` by :user:`Felix Wick <FelixWick>` and :pr:`18406` by
`Nicolas Hug`_.
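  A short sketch with toy categories (the sentinel `-1` is an arbitrary
  choice for `unknown_value`):

  ```python
  from sklearn.preprocessing import OrdinalEncoder

  enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
  enc.fit([["cat"], ["dog"]])
  # "fish" was not seen during fit -> encoded with the sentinel value.
  out = enc.transform([["cat"], ["fish"]])
  ```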
- |Feature| Add ``clip`` parameter to :class:`preprocessing.MinMaxScaler`,
which clips the transformed values of test data to ``feature_range``.
:pr:`17833` by :user:`Yashika Sharma <yashika51>`.
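  A minimal sketch (toy fit range): out-of-range test values are clipped to
  the default ``feature_range`` of ``(0, 1)``:

  ```python
  import numpy as np
  from sklearn.preprocessing import MinMaxScaler

  scaler = MinMaxScaler(clip=True).fit(np.array([[0.0], [10.0]]))
  # 15 would map to 1.5 and -5 to -0.5 without clipping.
  out = scaler.transform(np.array([[15.0], [-5.0]]))
  ```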
- |Feature| Add ``sample_weight`` parameter to
:class:`preprocessing.StandardScaler`. Allows setting
individual weights for each sample. :pr:`18510` and
:pr:`18447` and :pr:`16066` and :pr:`18682` by
:user:`Maria Telenczuk <maikia>` and :user:`Albert Villanova <albertvillanova>`
and :user:`panpiort8` and :user:`Alex Gramfort <agramfort>`.
- |Enhancement| Verbose output of :class:`model_selection.GridSearchCV` has
been improved for readability. :pr:`16935` by :user:`Raghav Rajagopalan
<raghavrv>` and :user:`Chiara Marmo <cmarmo>`.
- |Enhancement| Add ``unit_variance`` to :class:`preprocessing.RobustScaler`,
which scales output data such that normally distributed features have a
variance of 1. :pr:`17193` by :user:`Lucy Liu <lucyleeow>` and
:user:`Mabel Villalba <mabelvj>`.
- |Enhancement| Add `dtype` parameter to
:class:`preprocessing.KBinsDiscretizer`.
:pr:`16335` by :user:`Arthur Imbert <Henley13>`.
- |Fix| Raise error on
:meth:`sklearn.preprocessing.OneHotEncoder.inverse_transform`
when `handle_unknown='error'` and `drop=None` for samples
encoded as all zeros. :pr:`14982` by
:user:`Kevin Winata <kwinata>`.
:mod:`sklearn.semi_supervised`
..............................
- |MajorFeature| Added :class:`semi_supervised.SelfTrainingClassifier`, a
meta-classifier that allows any supervised classifier to function as a
semi-supervised classifier that can learn from unlabeled data. :issue:`11682`
by :user:`Oliver Rausch <orausch>` and :user:`Patrice Becker <pr0duktiv>`.
- |Fix| Fix incorrect encoding when using unicode string dtypes in
:class:`preprocessing.OneHotEncoder` and
:class:`preprocessing.OrdinalEncoder`. :pr:`15763` by `Thomas Fan`_.
:mod:`sklearn.svm`
..................
- |Enhancement| Invoke the SciPy BLAS API for the SVM kernel function in ``fit``,
``predict`` and related methods of :class:`svm.SVC`, :class:`svm.NuSVC`,
:class:`svm.SVR`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`.
:pr:`16530` by :user:`Shuhua Fan <jim0421>`.
:mod:`sklearn.tree`
...................
- |Feature| :class:`tree.DecisionTreeRegressor` now supports the new splitting
criterion ``'poisson'`` useful for modeling count data. :pr:`17386` by
:user:`Christian Lorentzen <lorentzenchr>`.
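  A hedged sketch on synthetic count data (features and rate are arbitrary;
  the criterion requires non-negative targets with a positive sum):

  ```python
  import numpy as np
  from sklearn.tree import DecisionTreeRegressor

  rng = np.random.RandomState(0)
  X = rng.rand(100, 2)
  y = rng.poisson(lam=3.0, size=100)  # non-negative count targets
  reg = DecisionTreeRegressor(criterion="poisson", max_depth=3).fit(X, y)
  pred = reg.predict(X)
  ```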
- |Enhancement| :func:`tree.plot_tree` now uses colors from the matplotlib
configuration settings. :pr:`17187` by `Andreas Müller`_.
- |API| The parameter ``X_idx_sorted`` is now deprecated in
:meth:`tree.DecisionTreeClassifier.fit` and
:meth:`tree.DecisionTreeRegressor.fit`, and has no effect.
:pr:`17614` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.
:mod:`sklearn.utils`
....................
- |Enhancement| Add ``check_methods_sample_order_invariance`` to
:func:`~utils.estimator_checks.check_estimator`, which checks that
estimator methods are invariant if applied to the same dataset
with different sample order. :pr:`17598` by :user:`Jason Ngo <ngojason9>`.
- |Enhancement| Add support for weights in
`utils.sparse_func.incr_mean_variance_axis`.
By :user:`Maria Telenczuk <maikia>` and :user:`Alex Gramfort <agramfort>`.
- |Fix| Raise ValueError with clear error message in :func:`utils.check_array`
for sparse DataFrames with mixed types.
:pr:`17992` by :user:`Thomas J. Fan <thomasjpfan>` and
:user:`Alex Shacked <alexshacked>`.
- |Fix| Allow serialized tree based models to be unpickled on a machine
with different endianness.
:pr:`17644` by :user:`Qi Zhang <qzhang90>`.
- |Fix| Raise a proper error when `axis=1` and the
dimensions do not match in `utils.sparse_func.incr_mean_variance_axis`.
By :user:`Alex Gramfort <agramfort>`.
Miscellaneous
.............
- |Enhancement| Calls to ``repr`` are now faster
when `print_changed_only=True`, especially with meta-estimators.
:pr:`18508` by :user:`Nathan C. <Xethan>`.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.23, including:
Abo7atm, Adam Spannbauer, Adrin Jalali, adrinjalali, Agamemnon Krasoulis,
Akshay Deodhar, Albert Villanova del Moral, Alessandro Gentile, Alex Henrie,
Alex Itkes, Alex Liang, Alexander Lenail, alexandracraciun, Alexandre Gramfort,
alexshacked, Allan D Butler, Amanda Dsouza, amy12xx, Anand Tiwari, Anderson
Nelson, Andreas Mueller, Ankit Choraria, Archana Subramaniyan, Arthur Imbert,
Ashutosh Hathidara, Ashutosh Kushwaha, Atsushi Nukariya, Aura Munoz, AutoViz
and Auto_ViML, Avi Gupta, Avinash Anakal, Ayako YAGI, barankarakus,
barberogaston, beatrizsmg, Ben Mainye, Benjamin Bossan, Benjamin Pedigo, Bharat
Raghunathan, Bhavika Devnani, Biprateep Dey, bmaisonn, Bo Chang, Boris
Villazón-Terrazas, brigi, Brigitta Sipőcz, Bruno Charron, Byron Smith, Cary
Goltermann, Cat Chenal, CeeThinwa, chaitanyamogal, Charles Patel, Chiara Marmo,
Christian Kastner, Christian Lorentzen, Christoph Deil, Christos Aridas, Clara
Matos, clmbst, Coelhudo, crispinlogan, Cristina Mulas, Daniel López, Daniel
Mohns, darioka, Darshan N, david-cortes, Declan O'Neill, Deeksha Madan,
Elizabeth DuPre, Eric Fiegel, Eric Larson, Erich Schubert, Erin Khoo, Erin R
Hoffman, eschibli, Felix Wick, fhaselbeck, Forrest Koch, Francesco Casalegno,
Frans Larsson, Gael Varoquaux, Gaurav Desai, Gaurav Sheni, genvalen, Geoffrey
Bolmier, George Armstrong, George Kiragu, Gesa Stupperich, Ghislain Antony
Vaillant, Gim Seng, Gordon Walsh, Gregory R. Lee, Guillaume Chevalier,
Guillaume Lemaitre, Haesun Park, Hannah Bohle, Hao Chun Chang, Harry Scholes,
Harsh Soni, Henry, Hirofumi Suzuki, Hitesh Somani, Hoda1394, Hugo Le Moine,
hugorichard, indecisiveuser, Isuru Fernando, Ivan Wiryadi, j0rd1smit, Jaehyun
Ahn, Jake Tae, James Hoctor, Jan Vesely, Jeevan Anand Anne, JeroenPeterBos,
JHayes, Jiaxiang, Jie Zheng, Jigna Panchal, jim0421, Jin Li, Joaquin
Vanschoren, Joel Nothman, Jona Sassenhagen, Jonathan, Jorge Gorbe Moya, Joseph
Lucas, Joshua Newton, Juan Carlos Alfaro Jiménez, Julien Jerphanion, Justin
Huber, Jérémie du Boisberranger, Kartik Chugh, Katarina Slama, kaylani2,
Kendrick Cetina, Kenny Huynh, Kevin Markham, Kevin Winata, Kiril Isakov,
kishimoto, Koki Nishihara, Krum Arnaudov, Kyle Kosic, Lauren Oldja, Laurenz
Reitsam, Lisa Schwetlick, Louis Douge, Louis Guitton, Lucy Liu, Madhura
Jayaratne, maikia, Manimaran, Manuel López-Ibáñez, Maren Westermann, Maria
Telenczuk, Mariam-ke, Marijn van Vliet, Markus Löning, Martin Scheubrein,
Martina G. Vilas, Martina Megasari, Mateusz Górski, mathschy, mathurinm,
Matthias Bussonnier, Max Del Giudice, Michael, Milan Straka, Muoki Caleb, N.
Haiat, Nadia Tahiri, Ph. D, Naoki Hamada, Neil Botelho, Nicolas Hug, Nils
Werner, noelano, Norbert Preining, oj_lappi, Oleh Kozynets, Olivier Grisel,
Pankaj Jindal, Pardeep Singh, Parthiv Chigurupati, Patrice Becker, Pete Green,
pgithubs, Poorna Kumar, Prabakaran Kumaresshan, Probinette4, pspachtholz,
pwalchessen, Qi Zhang, rachel fischoff, Rachit Toshniwal, Rafey Iqbal Rahman,
Rahul Jakhar, Ram Rachum, RamyaNP, rauwuckl, Ravi Kiran Boggavarapu, Ray Bell,
Reshama Shaikh, Richard Decal, Rishi Advani, Rithvik Rao, Rob Romijnders, roei,
Romain Tavenard, Roman Yurchak, Ruby Werman, Ryotaro Tsukada, sadak, Saket
Khandelwal, Sam, Sam Ezebunandu, Sam Kimbinyi, Sarah Brown, Saurabh Jain, Sean
O. Stalley, Sergio, Shail Shah, Shane Keller, Shao Yang Hong, Shashank Singh,
Shooter23, Shubhanshu Mishra, simonamaggio, Soledad Galli, Srimukh Sripada,
Stephan Steinfurt, subrat93, Sunitha Selvan, Swier, Sylvain Marié, SylvainLan,
t-kusanagi2, Teon L Brooks, Terence Honles, Thijs van den Berg, Thomas J Fan,
Thomas J. Fan, Thomas S Benjamin, Thomas9292, Thorben Jensen, tijanajovanovic,
Timo Kaufmann, tnwei, Tom Dupré la Tour, Trevor Waite, ufmayer, Umberto Lupo,
Venkatachalam N, Vikas Pandey, Vinicius Rios Fuck, Violeta, watchtheblur, Wenbo
Zhao, willpeppo, xavier dupré, Xethan, Xue Qianming, xun-tang, yagi-3, Yakov
Pchelintsev, Yashika Sharma, Yi-Yan Ge, Yue Wu, Yutaro Ikeda, Zaccharie Ramzi,
zoj613, Zhao Feng. | scikit-learn | include contributors rst currentmodule sklearn release notes 0 24 Version 0 24 For a short description of the main highlights of the release please refer to ref sphx glr auto examples release highlights plot release highlights 0 24 0 py include changelog legend inc changes 0 24 2 Version 0 24 2 April 2021 Changelog mod sklearn compose Fix compose ColumnTransformer get feature names does not call get feature names on transformers with an empty column selection pr 19579 by Thomas Fan mod sklearn cross decomposition Fix Fixed a regression in class cross decomposition CCA pr 19646 by Thomas Fan Fix class cross decomposition PLSRegression raises warning for constant y residuals instead of a StopIteration error pr 19922 by Thomas Fan mod sklearn decomposition Fix Fixed a bug in class decomposition KernelPCA s inverse transform pr 19732 by user Kei Ishikawa kstoneriv3 mod sklearn ensemble Fix Fixed a bug in class ensemble HistGradientBoostingRegressor fit with sample weight parameter and least absolute deviation loss function pr 19407 by user Vadim Ushtanit vadim ushtanit mod sklearn feature extraction Fix Fixed a bug to support multiple strings for a category when sparse False in class feature extraction DictVectorizer pr 19982 by user Guillaume Lemaitre glemaitre mod sklearn gaussian process Fix Avoid explicitly forming inverse covariance matrix in class gaussian process GaussianProcessRegressor when set to output standard deviation With certain covariance matrices this inverse is unstable to compute explicitly Calling Cholesky solver mitigates this issue in computation pr 19939 by user Ian Halvic iwhalvic Fix Avoid division by zero when scaling constant target in class gaussian process GaussianProcessRegressor It was due to a std dev equal to 0 Now such case is detected and the std dev is affected to 1 avoiding a division by zero and thus the presence of NaN values in the normalized target pr 19703 by user sobkevich user Boris Villaz 
n Terrazas boricles and user Alexandr Fonari afonari mod sklearn linear model Fix Fixed a bug in class linear model LogisticRegression the sample weight object is not modified anymore pr 19182 by user Yosuke KOBAYASHI m7142yosuke mod sklearn metrics Fix func metrics top k accuracy score now supports multiclass problems where only two classes appear in y true and all the classes are specified in labels pr 19721 by user Joris Clement flyingdutchman23 mod sklearn model selection Fix class model selection RandomizedSearchCV and class model selection GridSearchCV now correctly shows the score for single metrics and verbose 2 pr 19659 by Thomas Fan Fix Some values in the cv results attribute of class model selection HalvingRandomSearchCV and class model selection HalvingGridSearchCV were not properly converted to numpy arrays pr 19211 by Nicolas Hug Fix The fit method of the successive halving parameter search class model selection HalvingGridSearchCV and class model selection HalvingRandomSearchCV now correctly handles the groups parameter pr 19847 by user Xiaoyu Chai xiaoyuchai mod sklearn multioutput Fix class multioutput MultiOutputRegressor now works with estimators that dynamically define predict during fitting such as class ensemble StackingRegressor pr 19308 by Thomas Fan mod sklearn preprocessing Fix Validate the constructor parameter handle unknown in class preprocessing OrdinalEncoder to only allow for error and use encoded value strategies pr 19234 by Guillaume Lemaitre glemaitre Fix Fix encoder categories having dtype S class preprocessing OneHotEncoder and class preprocessing OrdinalEncoder pr 19727 by user Andrew Delong andrewdelong Fix meth preprocessing OrdinalEncoder transform correctly handles unknown values for string dtypes pr 19888 by Thomas Fan Fix meth preprocessing OneHotEncoder fit no longer alters the drop parameter pr 19924 by Thomas Fan mod sklearn semi supervised Fix Avoid NaN during label propagation in class sklearn semi supervised 
LabelPropagation pr 19271 by user Zhaowei Wang ThuWangzw mod sklearn tree Fix Fix a bug in fit of tree BaseDecisionTree that caused segmentation faults under certain conditions fit now deep copies the Criterion object to prevent shared concurrent accesses pr 19580 by user Samuel Brice samdbrice and user Alex Adamson aadamson and user Wil Yegelwel wyegelwel mod sklearn utils Fix Better contains the CSS provided by func utils estimator html repr by giving CSS ids to the html representation pr 19417 by Thomas Fan changes 0 24 1 Version 0 24 1 January 2021 Packaging The 0 24 0 scikit learn wheels were not working with MacOS 1 15 due to libomp The version of libomp used to build the wheels was too recent for older macOS versions This issue has been fixed for 0 24 1 scikit learn wheels Scikit learn wheels published on PyPI org now officially support macOS 10 13 and later Changelog mod sklearn metrics Fix Fix numerical stability bug that could happen in func metrics adjusted mutual info score and func metrics mutual info score with NumPy 1 20 pr 19179 by Thomas Fan mod sklearn semi supervised Fix class semi supervised SelfTrainingClassifier is now accepting meta estimator e g class ensemble StackingClassifier The validation of this estimator is done on the fitted estimator once we know the existence of the method predict proba pr 19126 by user Guillaume Lemaitre glemaitre changes 0 24 Version 0 24 0 December 2020 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Fix class decomposition KernelPCA behaviour is now more consistent between 32 bits and 64 bits data when the kernel has small positive eigenvalues Fix class decomposition TruncatedSVD becomes deterministic by exposing a random state parameter Fix class linear model Perceptron when penalty elasticnet 
Fix Change in the random sampling procedures for the center initialization of class cluster KMeans Details are listed in the changelog below While we are trying to better inform users by providing this information we cannot assure that this list is complete Changelog mod sklearn base Fix meth base BaseEstimator get params now will raise an AttributeError if a parameter cannot be retrieved as an instance attribute Previously it would return None pr 17448 by user Juan Carlos Alfaro Jim nez alfaro96 mod sklearn calibration Efficiency class calibration CalibratedClassifierCV fit now supports parallelization via joblib Parallel using argument n jobs pr 17107 by user Julien Jerphanion jjerphan Enhancement Allow class calibration CalibratedClassifierCV use with prefit class pipeline Pipeline where data is not X is not array like sparse matrix or dataframe at the start pr 17546 by user Lucy Liu lucyleeow Enhancement Add ensemble parameter to class calibration CalibratedClassifierCV which enables implementation of calibration via an ensemble of calibrators current method or just one calibrator using all the data similar to the built in feature of mod sklearn svm estimators with the probabilities True parameter pr 17856 by user Lucy Liu lucyleeow and user Andrea Esuli aesuli mod sklearn cluster Enhancement class cluster AgglomerativeClustering has a new parameter compute distances When set to True distances between clusters are computed and stored in the distances attribute even when the parameter distance threshold is not used This new parameter is useful to produce dendrogram visualizations but introduces a computational and memory overhead pr 17984 by user Michael Riedmann mriedmann user Emilie Delattre EmilieDel and user Francesco Casalegno FrancescoCasalegno Enhancement class cluster SpectralClustering and func cluster spectral clustering have a new keyword argument verbose When set to True additional messages will be displayed which can aid with debugging pr 18052 by 
user Sean O Stalley sstalley Enhancement Added func cluster kmeans plusplus as public function Initialization by KMeans can now be called separately to generate initial cluster centroids pr 17937 by user g walsh API class cluster MiniBatchKMeans attributes counts and init size are deprecated and will be removed in 1 1 renaming of 0 26 pr 17864 by user J r mie du Boisberranger jeremiedbb mod sklearn compose Fix class compose ColumnTransformer will skip transformers the column selector is a list of bools that are False pr 17616 by Thomas Fan Fix class compose ColumnTransformer now displays the remainder in the diagram display pr 18167 by Thomas Fan Fix class compose ColumnTransformer enforces strict count and order of column names between fit and transform by raising an error instead of a warning following the deprecation cycle pr 18256 by user Madhura Jayratne madhuracj mod sklearn covariance API Deprecates cv alphas in favor of cv results alphas and grid scores in favor of split scores in cv results in class covariance GraphicalLassoCV cv alphas and grid scores will be removed in version 1 1 renaming of 0 26 pr 16392 by Thomas Fan mod sklearn cross decomposition Fix Fixed a bug in class cross decomposition PLSSVD which would sometimes return components in the reversed order of importance pr 17095 by Nicolas Hug Fix Fixed a bug in class cross decomposition PLSSVD class cross decomposition CCA and class cross decomposition PLSCanonical which would lead to incorrect predictions for est transform Y when the training data is single target pr 17095 by Nicolas Hug Fix Increases the stability of class cross decomposition CCA pr 18746 by Thomas Fan API The bounds of the n components parameter is now restricted into 1 min n samples n features n targets for class cross decomposition PLSSVD class cross decomposition CCA and class cross decomposition PLSCanonical into 1 n features or class cross decomposition PLSRegression An error will be raised in 1 1 renaming of 0 26 pr 
17095 by Nicolas Hug API For class cross decomposition PLSSVD class cross decomposition CCA and class cross decomposition PLSCanonical the x scores and y scores attributes were deprecated and will be removed in 1 1 renaming of 0 26 They can be retrieved by calling transform on the training data The norm y weights attribute will also be removed pr 17095 by Nicolas Hug API For class cross decomposition PLSRegression class cross decomposition PLSCanonical class cross decomposition CCA and class cross decomposition PLSSVD the x mean y mean x std and y std attributes were deprecated and will be removed in 1 1 renaming of 0 26 pr 18768 by user Maren Westermann marenwestermann Fix class decomposition TruncatedSVD becomes deterministic by using the random state It controls the weights initialization of the underlying ARPACK solver pr 18302 by user Gaurav Desai gauravkdesai and user Ivan Panico FollowKenny mod sklearn datasets Feature func datasets fetch openml now validates md5 checksum of arff files downloaded or cached to ensure data integrity pr 14800 by user Shashank Singh shashanksingh28 and Joel Nothman Enhancement func datasets fetch openml now allows argument as frame to be auto which tries to convert returned data to pandas DataFrame unless data is sparse pr 17396 by user Jiaxiang fujiaxiang Enhancement func datasets fetch covtype now supports the optional argument as frame when it is set to True the returned Bunch object s data and frame members are pandas DataFrames and the target member is a pandas Series pr 17491 by user Alex Liang tianchuliang Enhancement func datasets fetch kddcup99 now supports the optional argument as frame when it is set to True the returned Bunch object s data and frame members are pandas DataFrames and the target member is a pandas Series pr 18280 by user Alex Liang tianchuliang and Guillaume Lemaitre Enhancement func datasets fetch 20newsgroups vectorized now supports loading as a pandas DataFrame by setting as frame True pr 17499 by 
user Brigitta Sip cz bsipocz and Guillaume Lemaitre API The default value of as frame in func datasets fetch openml is changed from False to auto pr 17610 by user Jiaxiang fujiaxiang mod sklearn decomposition API For class decomposition NMF the init value when init None and n components min n samples n features will be changed from nndsvd to nndsvda in 1 1 renaming of 0 26 pr 18525 by user Chiara Marmo cmarmo Enhancement func decomposition FactorAnalysis now supports the optional argument rotation which can take the value None varimax or quartimax pr 11064 by user Jona Sassenhagen jona sassenhagen Enhancement class decomposition NMF now supports the optional parameter regularization which can take the values None components transformation or both in accordance with decomposition NMF non negative factorization pr 17414 by user Bharat Raghunathan bharatr21 Fix class decomposition KernelPCA behaviour is now more consistent between 32 bits and 64 bits data input when the kernel has small positive eigenvalues Small positive eigenvalues were not correctly discarded for 32 bits data pr 18149 by user Sylvain Mari smarie Fix Fix class decomposition SparseCoder such that it follows scikit learn API and support cloning The attribute components is deprecated in 0 24 and will be removed in 1 1 renaming of 0 26 This attribute was redundant with the dictionary attribute and constructor parameter pr 17679 by user Xavier Dupr sdpython Fix meth decomposition TruncatedSVD fit transform consistently returns the same as meth decomposition TruncatedSVD fit followed by meth decomposition TruncatedSVD transform pr 18528 by user Albert Villanova del Moral albertvillanova and user Ruifeng Zheng zhengruifeng mod sklearn discriminant analysis Enhancement class discriminant analysis LinearDiscriminantAnalysis can now use custom covariance estimate by setting the covariance estimator parameter pr 14446 by user Hugo Richard hugorichard mod sklearn ensemble MajorFeature class ensemble 
HistGradientBoostingRegressor and class ensemble HistGradientBoostingClassifier now have native support for categorical features with the categorical features parameter pr 18394 by Nicolas Hug and Thomas Fan Feature class ensemble HistGradientBoostingRegressor and class ensemble HistGradientBoostingClassifier now support the method staged predict which allows monitoring of each stage pr 16985 by user Hao Chun Chang haochunchang Efficiency break cyclic references in the tree nodes used internally in class ensemble HistGradientBoostingRegressor and class ensemble HistGradientBoostingClassifier to allow for the timely garbage collection of large intermediate datastructures and to improve memory usage in fit pr 18334 by Olivier Grisel Nicolas Hug Thomas Fan and Andreas M ller Efficiency Histogram initialization is now done in parallel in class ensemble HistGradientBoostingRegressor and class ensemble HistGradientBoostingClassifier which results in speed improvement for problems that build a lot of nodes on multicore machines pr 18341 by Olivier Grisel Nicolas Hug Thomas Fan and user Egor Smirnov SmirnovEgorRu Fix Fixed a bug in class ensemble HistGradientBoostingRegressor and class ensemble HistGradientBoostingClassifier which can now accept data with uint8 dtype in predict pr 18410 by Nicolas Hug API The parameter n classes is now deprecated in class ensemble GradientBoostingRegressor and returns 1 pr 17702 by user Simona Maggio simonamaggio API Mean absolute error mae is now deprecated for the parameter criterion in class ensemble GradientBoostingRegressor and class ensemble GradientBoostingClassifier pr 18326 by user Madhura Jayaratne madhuracj mod sklearn exceptions API exceptions ChangedBehaviorWarning and exceptions NonBLASDotWarning are deprecated and will be removed in 1 1 renaming of 0 26 pr 17804 by Adrin Jalali mod sklearn feature extraction Enhancement class feature extraction DictVectorizer accepts multiple values for one categorical feature pr 17367 by 
user Peng Yu yupbank and user Chiara Marmo cmarmo Fix class feature extraction text CountVectorizer raises an issue if a custom token pattern which capture more than one group is provided pr 15427 by user Gangesh Gudmalwar ggangesh and user Erin R Hoffman hoffm386 mod sklearn feature selection Feature Added class feature selection SequentialFeatureSelector which implements forward and backward sequential feature selection pr 6545 by Sebastian Raschka and pr 17159 by Nicolas Hug Feature A new parameter importance getter was added to class feature selection RFE class feature selection RFECV and class feature selection SelectFromModel allowing the user to specify an attribute name path or a callable for extracting feature importance from the estimator pr 15361 by user Venkatachalam N venkyyuvy Efficiency Reduce memory footprint in func feature selection mutual info classif and func feature selection mutual info regression by calling class neighbors KDTree for counting nearest neighbors pr 17878 by user Noel Rogers noelano Enhancement class feature selection RFE supports the option for the number of n features to select to be given as a float representing the percentage of features to select pr 17090 by user Lisa Schwetlick lschwetlick and user Marija Vlajic Wheeler marijavlajic mod sklearn gaussian process Enhancement A new method gaussian process kernel check bounds params is called after fitting a Gaussian Process and raises a ConvergenceWarning if the bounds of the hyperparameters are too tight issue 12638 by user Sylvain Lannuzel SylvainLan mod sklearn impute Feature class impute SimpleImputer now supports a list of strings when strategy most frequent or strategy constant pr 17526 by user Ayako YAGI yagi 3 and user Juan Carlos Alfaro Jim nez alfaro96 Feature Added method meth impute SimpleImputer inverse transform to revert imputed data to original when instantiated with add indicator True pr 17612 by user Srimukh Sripada d3b0unce Fix replace the default values in 
class impute IterativeImputer of min value and max value parameters to np inf and np inf respectively instead of None However the behaviour of the class does not change since None was defaulting to these values already pr 16493 by user Darshan N DarshanGowda0 Fix class impute IterativeImputer will not attempt to set the estimator s random state attribute allowing to use it with more external classes pr 15636 by user David Cortes david cortes Efficiency class impute SimpleImputer is now faster with object dtype array when strategy most frequent in class sklearn impute SimpleImputer pr 18987 by user David Katz DavidKatz il mod sklearn inspection Feature func inspection partial dependence and inspection plot partial dependence now support calculating and plotting Individual Conditional Expectation ICE curves controlled by the kind parameter pr 16619 by user Madhura Jayratne madhuracj Feature Add sample weight parameter to func inspection permutation importance pr 16906 by user Roei Kahny RoeiKa API Positional arguments are deprecated in meth inspection PartialDependenceDisplay plot and will error in 1 1 renaming of 0 26 pr 18293 by Thomas Fan mod sklearn isotonic Feature Expose fitted attributes X thresholds and y thresholds that hold the de duplicated interpolation thresholds of an class isotonic IsotonicRegression instance for model inspection purpose pr 16289 by user Masashi Kishimoto kishimoto banana and user Olivier Grisel ogrisel Enhancement class isotonic IsotonicRegression now accepts 2d array with 1 feature as input array pr 17379 by user Jiaxiang fujiaxiang Fix Add tolerance when determining duplicate X values to prevent inf values from being predicted by class isotonic IsotonicRegression pr 18639 by user Lucy Liu lucyleeow mod sklearn kernel approximation Feature Added class class kernel approximation PolynomialCountSketch which implements the Tensor Sketch algorithm for polynomial kernel feature map approximation pr 13003 by user Daniel L pez S nchez 
lopeLH Efficiency class kernel approximation Nystroem now supports parallelization via joblib Parallel using argument n jobs pr 18545 by user Laurenz Reitsam LaurenzReitsam mod sklearn linear model Feature class linear model LinearRegression now forces coefficients to be all positive when positive is set to True pr 17578 by user Joseph Knox jknox13 user Nelle Varoquaux NelleV and user Chiara Marmo cmarmo Enhancement class linear model RidgeCV now supports finding an optimal regularization value alpha for each target separately by setting alpha per target True This is only supported when using the default efficient leave one out cross validation scheme cv None pr 6624 by user Marijn van Vliet wmvanvliet Fix Fixes bug in class linear model TheilSenRegressor where predict and score would fail when fit intercept False and there was one feature during fitting pr 18121 by Thomas Fan Fix Fixes bug in class linear model ARDRegression where predict was raising an error when normalize True and return std True because X offset and X scale were undefined pr 18607 by user fhaselbeck fhaselbeck Fix Added the missing l1 ratio parameter in class linear model Perceptron to be used when penalty elasticnet This changes the default from 0 to 0 15 pr 18622 by user Haesun Park rickiepark mod sklearn manifold Efficiency Fixed issue 10493 Improve Local Linear Embedding LLE that raised MemoryError exception when used with large inputs pr 17997 by user Bertrand Maisonneuve bmaisonn Enhancement Add square distances parameter to class manifold TSNE which provides backward compatibility during deprecation of legacy squaring behavior Distances will be squared by default in 1 1 renaming of 0 26 and this parameter will be removed in 1 3 pr 17662 by user Joshua Newton joshuacwnewton Fix class manifold MDS now correctly sets its pairwise attribute pr 18278 by Thomas Fan mod sklearn metrics Feature Added func metrics cluster pair confusion matrix implementing the confusion matrix arising from pairs 
of elements from two clusterings pr 17412 by user Uwe F Mayer ufmayer Feature new metric func metrics top k accuracy score It s a generalization of func metrics top k accuracy score the difference is that a prediction is considered correct as long as the true label is associated with one of the k highest predicted scores func metrics accuracy score is the special case of k 1 pr 16625 by user Geoffrey Bolmier gbolmier Feature Added func metrics det curve to compute Detection Error Tradeoff curve classification metric pr 10591 by user Jeremy Karnowski jkarnows and user Daniel Mohns dmohns Feature Added metrics plot det curve and class metrics DetCurveDisplay to ease the plot of DET curves pr 18176 by user Guillaume Lemaitre glemaitre Feature Added func metrics mean absolute percentage error metric and the associated scorer for regression problems issue 10708 fixed with the PR pr 15007 by user Ashutosh Hathidara ashutosh1919 The scorer and some practical test cases were taken from PR pr 10711 by user Mohamed Ali Jamaoui mohamed ali Feature Added func metrics rand score implementing the unadjusted Rand index pr 17412 by user Uwe F Mayer ufmayer Feature metrics plot confusion matrix now supports making colorbar optional in the matplotlib plot by setting colorbar False pr 17192 by user Avi Gupta avigupta2612 Enhancement Add sample weight parameter to func metrics median absolute error pr 17225 by user Lucy Liu lucyleeow Enhancement Add pos label parameter in metrics plot precision recall curve in order to specify the positive class to be used when computing the precision and recall statistics pr 17569 by user Guillaume Lemaitre glemaitre Enhancement Add pos label parameter in metrics plot roc curve in order to specify the positive class to be used when computing the roc auc statistics pr 17651 by user Clara Matos claramatos Fix Fixed a bug in func metrics classification report which was raising AttributeError when called with output dict True for 0 length values pr 17777 
by user Shubhanshu Mishra napsternxg Fix Fixed a bug in func metrics classification report which was raising AttributeError when called with output dict True for 0 length values pr 17777 by user Shubhanshu Mishra napsternxg Fix Fixed a bug in func metrics jaccard score which recommended the zero division parameter when called with no true or predicted samples pr 17826 by user Richard Decal crypdick and user Joseph Willard josephwillard Fix bug in func metrics hinge loss where error occurs when y true is missing some labels that are provided explicitly in the labels parameter pr 17935 by user Cary Goltermann Ultramann Fix Fix scorers that accept a pos label parameter and compute their metrics from values returned by decision function or predict proba Previously they would return erroneous values when pos label was not corresponding to classifier classes 1 This is especially important when training classifiers directly with string labeled target classes pr 18114 by user Guillaume Lemaitre glemaitre Fix Fixed bug in metrics plot confusion matrix where error occurs when y true contains labels that were not previously seen by the classifier while the labels and display labels parameters are set to None pr 18405 by user Thomas J Fan thomasjpfan and user Yakov Pchelintsev kyouma mod sklearn model selection MajorFeature Added experimental parameter search estimators class model selection HalvingRandomSearchCV and class model selection HalvingGridSearchCV which implement Successive Halving and can be used as a drop in replacements for class model selection RandomizedSearchCV and class model selection GridSearchCV pr 13900 by Nicolas Hug Joel Nothman and Andreas M ller Feature class model selection RandomizedSearchCV and class model selection GridSearchCV now have the method score samples pr 17478 by user Teon Brooks teonbrooks and user Mohamed Maskani maskani moh Enhancement class model selection TimeSeriesSplit has two new keyword arguments test size and gap test size 
allows the out of sample time series length to be fixed for all folds gap removes a fixed number of samples between the train and test set on each fold pr 13204 by user Kyle Kosic kykosic Enhancement func model selection permutation test score and func model selection validation curve now accept fit params to pass additional estimator parameters pr 18527 by user Gaurav Dhingra gxyd user Julien Jerphanion jjerphan and user Amanda Dsouza amy12xx Enhancement func model selection cross val score func model selection cross validate class model selection GridSearchCV and class model selection RandomizedSearchCV allows estimator to fail scoring and replace the score with error score If error score raise the error will be raised pr 18343 by Guillaume Lemaitre and user Devi Sandeep dsandeep0138 Enhancement func model selection learning curve now accept fit params to pass additional estimator parameters pr 18595 by user Amanda Dsouza amy12xx Fix Fixed the len of class model selection ParameterSampler when all distributions are lists and n iter is more than the number of unique parameter combinations pr 18222 by Nicolas Hug Fix A fix to raise warning when one or more CV splits of class model selection GridSearchCV and class model selection RandomizedSearchCV results in non finite scores pr 18266 by user Subrat Sahu subrat93 user Nirvan Nirvan101 and user Arthur Book ArthurBook Enhancement class model selection GridSearchCV class model selection RandomizedSearchCV and func model selection cross validate support scoring being a callable returning a dictionary of of multiple metric names values association pr 15126 by Thomas Fan mod sklearn multiclass Enhancement class multiclass OneVsOneClassifier now accepts the inputs with missing values Hence estimators which can handle missing values may be a pipeline with imputation step can be used as a estimator for multiclass wrappers pr 17987 by user Venkatachalam N venkyyuvy Fix A fix to allow class multiclass OutputCodeClassifier to 
accept sparse input data in its fit and predict methods The check for validity of the input is now delegated to the base estimator pr 17233 by user Zolisa Bleki zoj613 mod sklearn multioutput Enhancement class multioutput MultiOutputClassifier and class multioutput MultiOutputRegressor now accepts the inputs with missing values Hence estimators which can handle missing values may be a pipeline with imputation step HistGradientBoosting estimators can be used as a estimator for multiclass wrappers pr 17987 by user Venkatachalam N venkyyuvy Fix A fix to accept tuples for the order parameter in class multioutput ClassifierChain pr 18124 by user Gus Brocchini boldloop and user Amanda Dsouza amy12xx mod sklearn naive bayes Enhancement Adds a parameter min categories to class naive bayes CategoricalNB that allows a minimum number of categories per feature to be specified This allows categories unseen during training to be accounted for pr 16326 by user George Armstrong gwarmstrong API The attributes coef and intercept are now deprecated in class naive bayes MultinomialNB class naive bayes ComplementNB class naive bayes BernoulliNB and class naive bayes CategoricalNB and will be removed in v1 1 renaming of 0 26 pr 17427 by user Juan Carlos Alfaro Jim nez alfaro96 mod sklearn neighbors Efficiency Speed up seuclidean wminkowski mahalanobis and haversine metrics in neighbors DistanceMetric by avoiding unexpected GIL acquiring in Cython when setting n jobs 1 in class neighbors KNeighborsClassifier class neighbors KNeighborsRegressor class neighbors RadiusNeighborsClassifier class neighbors RadiusNeighborsRegressor func metrics pairwise distances and by validating data out of loops pr 17038 by user Wenbo Zhao webber26232 Efficiency neighbors NeighborsBase benefits of an improved algorithm auto heuristic In addition to the previous set of rules now when the number of features exceeds 15 brute is selected assuming the data intrinsic dimensionality is too high for tree based 
  methods. :pr:`17148` by :user:`Geoffrey Bolmier <gbolmier>`.

- |Fix| :class:`neighbors.BinaryTree` will raise a `ValueError` when fitting on
  a data array having points with different dimensions.
  :pr:`18691` by :user:`Chiara Marmo <cmarmo>`.

- |Fix| :class:`neighbors.NearestCentroid` with a numerical `shrink_threshold`
  will raise a `ValueError` when fitting on data with all constant features.
  :pr:`18370` by :user:`Trevor Waite <trewaite>`.

- |Fix| In methods `radius_neighbors` and `radius_neighbors_graph` of
  :class:`neighbors.NearestNeighbors`,
  :class:`neighbors.RadiusNeighborsClassifier`,
  :class:`neighbors.RadiusNeighborsRegressor`, and
  :class:`neighbors.RadiusNeighborsTransformer`, using `sort_results=True` now
  correctly sorts the results even when fitting with the "brute" algorithm.
  :pr:`18612` by `Tom Dupré la Tour`_.

:mod:`sklearn.neural_network`
.............................

- |Efficiency| Neural net training and prediction are now a little faster.
  :pr:`17603`, :pr:`17604`, :pr:`17606`, :pr:`17608`, :pr:`17609`, :pr:`17633`,
  :pr:`17661`, :pr:`17932` by :user:`Alex Henrie <alexhenrie>`.

- |Enhancement| Avoid converting float32 input to float64 in
  :class:`neural_network.BernoulliRBM`.
  :pr:`16352` by :user:`Arthur Imbert <Henley13>`.

- |Enhancement| Support 32-bit computations in
  :class:`neural_network.MLPClassifier` and :class:`neural_network.MLPRegressor`.
  :pr:`17759` by :user:`Srimukh Sripada <d3b0unce>`.

- |Fix| Fix :meth:`neural_network.MLPClassifier.fit` not iterating to
  `max_iter` if warm started.
  :pr:`18269` by :user:`Norbert Preining <norbusan>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.pipeline`
.......................

- |Enhancement| References to transformers passed through `transformer_weights`
  to :class:`pipeline.FeatureUnion` that aren't present in `transformer_list`
  will raise a `ValueError`.
  :pr:`17876` by :user:`Cary Goltermann <Ultramann>`.

- |Fix| A slice of a :class:`pipeline.Pipeline` now inherits the parameters of
  the original pipeline (`memory` and `verbose`).
  :pr:`18429` by :user:`Albert Villanova del Moral <albertvillanova>` and
  :user:`Paweł Biernat <pwl>`.

:mod:`sklearn.preprocessing`
............................

- |Feature| :class:`preprocessing.OneHotEncoder` now supports missing values by
  treating them as a category. :pr:`17317` by `Thomas Fan`_.

- |Feature| Add a new `handle_unknown` parameter with a `use_encoded_value`
  option, along with a new `unknown_value` parameter, to
  :class:`preprocessing.OrdinalEncoder` to allow unknown categories during
  transform and set the encoded value of the unknown categories.
  :pr:`17406` by :user:`Felix Wick <FelixWick>` and :pr:`18406` by
  `Nicolas Hug`_.

- |Feature| Add `clip` parameter to :class:`preprocessing.MinMaxScaler`, which
  clips the transformed values of test data to `feature_range`.
  :pr:`17833` by :user:`Yashika Sharma <yashika51>`.

- |Feature| Add `sample_weight` parameter to
  :class:`preprocessing.StandardScaler`. Allows setting individual weights for
  each sample. :pr:`18510`, :pr:`18447`, :pr:`16066` and :pr:`18682` by
  :user:`Maria Telenczuk <maikia>`, :user:`Albert Villanova <albertvillanova>`,
  :user:`panpiort8` and :user:`Alex Gramfort <agramfort>`.

- |Enhancement| Verbose output of :class:`model_selection.GridSearchCV` has
  been improved for readability.
  :pr:`16935` by :user:`Raghav Rajagopalan <raghavrv>` and
  :user:`Chiara Marmo <cmarmo>`.

- |Enhancement| Add `unit_variance` to :class:`preprocessing.RobustScaler`,
  which scales output data such that normally distributed features have a
  variance of 1. :pr:`17193` by :user:`Lucy Liu <lucyleeow>` and
  :user:`Mabel Villalba <mabelvj>`.

- |Enhancement| Add `dtype` parameter to
  :class:`preprocessing.KBinsDiscretizer`.
  :pr:`16335` by :user:`Arthur Imbert <Henley13>`.

- |Fix| Raise error on :meth:`preprocessing.OneHotEncoder.inverse_transform`
  when `handle_unknown='error'` and `drop=None` for samples encoded as all
  zeros. :pr:`14982` by :user:`Kevin Winata <kwinata>`.

- |Fix| Fix incorrect encoding when using unicode string dtypes in
  :class:`preprocessing.OneHotEncoder` and
  :class:`preprocessing.OrdinalEncoder`. :pr:`15763` by `Thomas Fan`_.

:mod:`sklearn.semi_supervised`
..............................

- |MajorFeature| Added :class:`semi_supervised.SelfTrainingClassifier`, a
  meta-classifier that allows any supervised classifier to function as a
  semi-supervised classifier that can learn from unlabeled data.
  :issue:`11682` by :user:`Oliver Rausch <orausch>` and
  :user:`Patrice Becker <pr0duktiv>`.

:mod:`sklearn.svm`
..................

- |Enhancement| Invoke SciPy BLAS API for SVM kernel function in `fit`,
  `predict` and related methods of :class:`svm.SVC`, :class:`svm.NuSVC`,
  :class:`svm.SVR`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`.
  :pr:`16530` by :user:`Shuhua Fan <jim0421>`.

:mod:`sklearn.tree`
...................

- |Feature| :class:`tree.DecisionTreeRegressor` now supports the new splitting
  criterion `'poisson'`, useful for modeling count data.
  :pr:`17386` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| :func:`tree.plot_tree` now uses colors from the matplotlib
  configuration settings. :pr:`17187` by `Andreas Müller`_.

- |API| The parameter `X_idx_sorted` is now deprecated in
  :meth:`tree.DecisionTreeClassifier.fit` and
  :meth:`tree.DecisionTreeRegressor.fit`, and has no effect.
  :pr:`17614` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.

:mod:`sklearn.utils`
....................

- |Enhancement| Add `check_methods_sample_order_invariance` to
  :func:`utils.estimator_checks.check_estimator`, which checks that estimator
  methods are invariant if applied to the same dataset with different sample
  order. :pr:`17598` by :user:`Jason Ngo <ngojason9>`.

- |Enhancement| Add support for weights in
  :func:`utils.sparsefuncs.incr_mean_variance_axis`.
  By :user:`Maria Telenczuk <maikia>` and :user:`Alex Gramfort <agramfort>`.

- |Fix| Raise `ValueError` with a clear error message in
  :func:`utils.check_array` for sparse DataFrames with mixed types.
  :pr:`17992` by :user:`Thomas J. Fan <thomasjpfan>` and
  :user:`Alex Shacked <alexshacked>`.

- |Fix| Allow serialized tree-based models to be unpickled on a machine with
  different endianness. :pr:`17644` by :user:`Qi Zhang <qzhang90>`.

- |Fix| Check that we raise a proper error when `axis=1` and the dimensions do
  not match in :func:`utils.sparsefuncs.incr_mean_variance_axis`.
  By :user:`Alex Gramfort <agramfort>`.

Miscellaneous
.............

- |Enhancement| Calls to `repr` are now faster when `print_changed_only=True`,
  especially with meta-estimators. :pr:`18508` by :user:`Nathan C. <Xethan>`.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.23, including:

Abo7atm, Adam Spannbauer, Adrin Jalali, adrinjalali, Agamemnon Krasoulis,
Akshay Deodhar, Albert Villanova del Moral, Alessandro Gentile, Alex Henrie,
Alex Itkes, Alex Liang, Alexander Lenail, alexandracraciun, Alexandre Gramfort,
alexshacked, Allan D Butler, Amanda Dsouza, amy12xx, Anand Tiwari, Anderson
Nelson, Andreas Mueller, Ankit Choraria, Archana Subramaniyan, Arthur Imbert,
Ashutosh Hathidara, Ashutosh Kushwaha, Atsushi Nukariya, Aura Munoz, AutoViz
and Auto_ViML, Avi Gupta, Avinash Anakal, Ayako YAGI, barankarakus,
barberogaston, beatrizsmg, Ben Mainye, Benjamin Bossan, Benjamin Pedigo, Bharat
Raghunathan, Bhavika Devnani, Biprateep Dey, bmaisonn, Bo Chang, Boris Villazón
Terrazas, brigi, Brigitta Sipőcz, Bruno Charron, Byron Smith, Cary Goltermann,
Cat Chenal, CeeThinwa, chaitanyamogal, Charles Patel, Chiara Marmo, Christian
Kastner, Christian Lorentzen, Christoph Deil, Christos Aridas, Clara Matos,
clmbst, Coelhudo, crispinlogan, Cristina Mulas, Daniel López, Daniel Mohns,
darioka, Darshan N, david-cortes, Declan O'Neill, Deeksha Madan, Elizabeth
DuPre, Eric Fiegel, Eric Larson, Erich Schubert, Erin Khoo, Erin R Hoffman,
eschibli, Felix Wick, fhaselbeck, Forrest Koch, Francesco Casalegno, Frans
Larsson, Gael Varoquaux, Gaurav Desai, Gaurav Sheni, genvalen, Geoffrey
Bolmier, George Armstrong, George Kiragu, Gesa Stupperich, Ghislain Antony
Vaillant, Gim Seng, Gordon Walsh, Gregory R. Lee, Guillaume Chevalier,
Guillaume Lemaitre, Haesun Park, Hannah Bohle, Hao Chun Chang, Harry Scholes,
Harsh Soni, Henry, Hirofumi Suzuki, Hitesh Somani, Hoda1394, Hugo Le Moine,
hugorichard, indecisiveuser, Isuru Fernando, Ivan Wiryadi, j0rd1smit, Jaehyun
Ahn, Jake Tae, James Hoctor, Jan Vesely, Jeevan Anand Anne, JeroenPeterBos,
JHayes, Jiaxiang, Jie Zheng, Jigna Panchal, jim0421, Jin Li, Joaquin
Vanschoren, Joel Nothman, Jona Sassenhagen, Jonathan, Jorge Gorbe Moya, Joseph
Lucas, Joshua Newton, Juan Carlos Alfaro Jiménez, Julien Jerphanion, Justin
Huber, Jérémie du Boisberranger, Kartik Chugh, Katarina Slama, kaylani2,
Kendrick Cetina, Kenny Huynh, Kevin Markham, Kevin Winata, Kiril Isakov,
kishimoto, Koki Nishihara, Krum Arnaudov, Kyle Kosic, Lauren Oldja, Laurenz
Reitsam, Lisa Schwetlick, Louis Douge, Louis Guitton, Lucy Liu, Madhura
Jayaratne, maikia, Manimaran, Manuel López-Ibáñez, Maren Westermann, Maria
Telenczuk, Mariam-ke, Marijn van Vliet, Markus Löning, Martin Scheubrein,
Martina G. Vilas, Martina Megasari, Mateusz Górski, mathschy, mathurinm,
Matthias Bussonnier, Max Del Giudice, Michael, Milan Straka, Muoki Caleb,
N. Haiat, Nadia Tahiri, Ph.D, Naoki Hamada, Neil Botelho, Nicolas Hug, Nils
Werner, noelano, Norbert Preining, oj-lappi, Oleh Kozynets, Olivier Grisel,
Pankaj Jindal, Pardeep Singh, Parthiv Chigurupati, Patrice Becker, Pete Green,
pgithubs, Poorna Kumar, Prabakaran Kumaresshan, Probinette4, pspachtholz,
pwalchessen, Qi Zhang, rachel fischoff, Rachit Toshniwal, Rafey Iqbal Rahman,
Rahul Jakhar, Ram Rachum, RamyaNP, rauwuckl, Ravi Kiran Boggavarapu, Ray Bell,
Reshama Shaikh, Richard Decal, Rishi Advani, Rithvik Rao, Rob Romijnders, roei,
Romain Tavenard, Roman Yurchak, Ruby Werman, Ryotaro Tsukada, sadak, Saket
Khandelwal, Sam, Sam Ezebunandu, Sam Kimbinyi, Sarah Brown, Saurabh Jain, Sean
O. Stalley, Sergio, Shail Shah, Shane Keller, Shao Yang Hong, Shashank Singh,
Shooter23, Shubhanshu Mishra, simonamaggio, Soledad Galli, Srimukh Sripada,
Stephan Steinfurt, subrat93, Sunitha Selvan, Swier, Sylvain Marié, SylvainLan,
t-kusanagi2, Teon L Brooks, Terence Honles, Thijs van den Berg, Thomas J. Fan,
Thomas S Benjamin, Thomas9292, Thorben Jensen, tijanajovanovic, Timo Kaufmann,
tnwei, Tom Dupré la Tour, Trevor Waite, ufmayer, Umberto Lupo, Venkatachalam N,
Vikas Pandey, Vinicius Rios Fuck, Violeta, watchtheblur, Wenbo Zhao, willpeppo,
xavier dupré, Xethan, Xue Qianming, xun-tang, yagi-3, Yakov Pchelintsev,
Yashika Sharma, Yi Yan Ge, Yue Wu, Yutaro Ikeda, Zaccharie Ramzi, zoj613,
Zhao Feng
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_1_3:
===========
Version 1.3
===========
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_3_0.py`.
.. include:: changelog_legend.inc
.. _changes_1_3_2:
Version 1.3.2
=============
**October 2023**
Changelog
---------
:mod:`sklearn.datasets`
.......................
- |Fix| All dataset fetchers now accept `data_home` as any object that implements
the :class:`os.PathLike` interface, for instance, :class:`pathlib.Path`.
:pr:`27468` by :user:`Yao Xiao <Charlie-XIAO>`.
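The mechanics of accepting any `os.PathLike` can be sketched with the standard library alone. The helper name below is hypothetical, chosen only to illustrate the pattern; it is not scikit-learn's internal code:

```python
import os
import pathlib

def resolve_data_home(data_home):
    """Accept a str or any os.PathLike (e.g. pathlib.Path) and return a
    plain string path. Illustrative sketch of the fetchers' new behaviour,
    not scikit-learn's implementation."""
    if data_home is None:
        data_home = os.path.join(os.path.expanduser("~"), "scikit_learn_data")
    return os.fspath(data_home)

# Both spellings now resolve to the same location.
as_str = resolve_data_home("/tmp/sklearn_data")
as_path = resolve_data_home(pathlib.Path("/tmp/sklearn_data"))
print(as_str == as_path)  # True
```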
:mod:`sklearn.decomposition`
............................
- |Fix| Fixes a bug in :class:`decomposition.KernelPCA` by forcing the output of
the internal :class:`preprocessing.KernelCenterer` to be a default array. When the
arpack solver is used, it expects an array with a `dtype` attribute.
:pr:`27583` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fixes a bug for metrics using `zero_division=np.nan`
(e.g. :func:`~metrics.precision_score`) within a parallel loop
(e.g. :func:`~model_selection.cross_val_score`) where the singleton for `np.nan`
will be different in the sub-processes.
:pr:`27573` by :user:`Guillaume Lemaitre <glemaitre>`.
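The pitfall behind this fix can be reproduced with the standard library: sending a value to a worker process implies a pickle round-trip, which produces a *new* float object, so identity checks against the parent's `np.nan` object fail even though the value is still NaN. This is only an illustration of the mechanism, not scikit-learn's code:

```python
import math
import pickle

nan = float("nan")

# Within one process, identity against this exact object holds.
assert nan is nan

# A pickle round-trip (as happens when joblib ships arguments to a
# sub-process) creates a new float object: identity breaks, value survives.
roundtripped = pickle.loads(pickle.dumps(nan))
print(roundtripped is nan)       # False: a different object
print(math.isnan(roundtripped))  # True: still NaN by value
```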
:mod:`sklearn.tree`
...................
- |Fix| Do not leak data via non-initialized memory in decision tree pickle files and make
the generation of those files deterministic. :pr:`27580` by :user:`Loïc Estève <lesteve>`.
.. _changes_1_3_1:
Version 1.3.1
=============
**September 2023**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Fix| Ridge models with `solver='sparse_cg'` may have slightly different
results with scipy>=1.12, because of an underlying change in the scipy solver
(see `scipy#18488 <https://github.com/scipy/scipy/pull/18488>`_ for more
details)
:pr:`26814` by :user:`Loïc Estève <lesteve>`
Changes impacting all modules
-----------------------------
- |Fix| The `set_output` API correctly works with list input. :pr:`27044` by
`Thomas Fan`_.
Changelog
---------
:mod:`sklearn.calibration`
..........................
- |Fix| :class:`calibration.CalibratedClassifierCV` can now handle models that
produce large prediction scores. Before it was numerically unstable.
:pr:`26913` by :user:`Omar Salman <OmarManzoor>`.
:mod:`sklearn.cluster`
......................
- |Fix| :class:`cluster.BisectingKMeans` could crash when predicting on data
with a different scale than the data used to fit the model.
:pr:`27167` by `Olivier Grisel`_.
- |Fix| :class:`cluster.BisectingKMeans` now works with data that has a single feature.
:pr:`27243` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.cross_decomposition`
..................................
- |Fix| :class:`cross_decomposition.PLSRegression` now automatically ravels the output
of `predict` if fitted with one dimensional `y`.
:pr:`26602` by :user:`Yao Xiao <Charlie-XIAO>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| Fix a bug in :class:`ensemble.AdaBoostClassifier` with `algorithm="SAMME"`
where the decision function of each weak learner should be symmetric (i.e.
the scores should sum to zero for a sample).
:pr:`26521` by :user:`Guillaume Lemaitre <glemaitre>`.
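The symmetry constraint can be stated in a few lines of numpy: centering each sample's class scores makes them sum to zero. The helper below is a hypothetical sketch of that constraint, not the estimator's internal fix:

```python
import numpy as np

def symmetrize(scores):
    """Center per-sample class scores so they sum to zero, as the SAMME
    algorithm requires of each weak learner's decision function.
    Illustrative sketch, not scikit-learn's implementation."""
    return scores - scores.mean(axis=1, keepdims=True)

raw = np.array([[2.0, 1.0, 0.0],
                [0.5, 0.5, 2.0]])
sym = symmetrize(raw)
print(sym.sum(axis=1))  # [0. 0.]
```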
:mod:`sklearn.feature_selection`
................................
- |Fix| :func:`feature_selection.mutual_info_regression` now correctly computes the
result when `X` is of integer dtype. :pr:`26748` by :user:`Yao Xiao <Charlie-XIAO>`.
:mod:`sklearn.impute`
.....................
- |Fix| :class:`impute.KNNImputer` now correctly adds a missing indicator column in
``transform`` when ``add_indicator`` is set to ``True`` and missing values are observed
during ``fit``. :pr:`26600` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.
:mod:`sklearn.metrics`
......................
- |Fix| Scorers used with :func:`metrics.get_scorer` handle properly
multilabel-indicator matrix.
:pr:`27002` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.mixture`
......................
- |Fix| The initialization of :class:`mixture.GaussianMixture` from user-provided
`precisions_init` for `covariance_type` of `full` or `tied` was not correct,
and has been fixed.
:pr:`26416` by :user:`Yang Tao <mchikyt3>`.
:mod:`sklearn.neighbors`
........................
- |Fix| :meth:`neighbors.KNeighborsClassifier.predict` no longer raises an
exception for `pandas.DataFrames` input.
:pr:`26772` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Reintroduce `sklearn.neighbors.BallTree.valid_metrics` and
`sklearn.neighbors.KDTree.valid_metrics` as public class attributes.
:pr:`26754` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| :class:`sklearn.model_selection.HalvingRandomSearchCV` no longer raises
when the input to the `param_distributions` parameter is a list of dicts.
:pr:`26893` by :user:`Stefanie Senger <StefanieSenger>`.
- |Fix| Neighbors based estimators now correctly work when `metric="minkowski"` and the
metric parameter `p` is in the range `0 < p < 1`, regardless of the `dtype` of `X`.
:pr:`26760` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.
:mod:`sklearn.preprocessing`
............................
- |Fix| :class:`preprocessing.LabelEncoder` correctly accepts `y` as a keyword
argument. :pr:`26940` by `Thomas Fan`_.
- |Fix| :class:`preprocessing.OneHotEncoder` shows a more informative error message
when `sparse_output=True` and the output is configured to be pandas.
:pr:`26931` by `Thomas Fan`_.
:mod:`sklearn.tree`
...................
- |Fix| :func:`tree.plot_tree` now accepts `class_names=True` as documented.
:pr:`26903` by :user:`Thomas Roehr <2maz>`
- |Fix| The `feature_names` parameter of :func:`tree.plot_tree` now accepts any kind of
array-like instead of just a list. :pr:`27292` by :user:`Rahil Parikh <rprkh>`.
.. _changes_1_3:
Version 1.3.0
=============
**June 2023**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Enhancement| :meth:`multiclass.OutputCodeClassifier.predict` now uses a more
efficient pairwise distance reduction. As a consequence, the tie-breaking
strategy is different and thus the predicted labels may be different.
:pr:`25196` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| The `fit_transform` method of :class:`decomposition.DictionaryLearning`
is more efficient but may produce different results as in previous versions when
`transform_algorithm` is not the same as `fit_algorithm` and the number of iterations
is small. :pr:`24871` by :user:`Omar Salman <OmarManzoor>`.
- |Enhancement| The `sample_weight` parameter now will be used in centroids
initialization for :class:`cluster.KMeans`, :class:`cluster.BisectingKMeans`
and :class:`cluster.MiniBatchKMeans`.
This change will break backward compatibility, since numbers generated
from the same random seed will be different.
:pr:`25752` by :user:`Gleb Levitski <glevv>`,
:user:`Jérémie du Boisberranger <jeremiedbb>`,
:user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Treat more consistently small values in the `W` and `H` matrices during the
`fit` and `transform` steps of :class:`decomposition.NMF` and
:class:`decomposition.MiniBatchNMF` which can produce different results than previous
versions. :pr:`25438` by :user:`Yotam Avidar-Constantini <yotamcons>`.
- |Fix| :class:`decomposition.KernelPCA` may produce different results through
`inverse_transform` if `gamma` is `None`. Now it will be chosen correctly as
`1/n_features` of the data that it is fitted on, while previously it might be
incorrectly chosen as `1/n_features` of the data passed to `inverse_transform`.
A new attribute `gamma_` is provided for revealing the actual value of `gamma`
used each time the kernel is called.
:pr:`26337` by :user:`Yao Xiao <Charlie-XIAO>`.
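The resolution rule can be shown with plain numpy: `gamma=None` must be bound once, from the shape of the *fitted* data, rather than re-derived from whatever array a later call receives. A minimal sketch (not the estimator's code):

```python
import numpy as np

rng = np.random.default_rng(0)
X_fit = rng.normal(size=(50, 4))    # KernelPCA fitted on 4 features
X_other = rng.normal(size=(10, 7))  # array passed to a later call

# Correct: gamma resolved at fit time and stored, like the new `gamma_`.
gamma_ = 1.0 / X_fit.shape[1]       # 0.25
# Incorrect (the pre-fix behaviour): re-derived from the later input.
wrong = 1.0 / X_other.shape[1]      # 1/7

print(gamma_, wrong)
```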
Changed displays
----------------
- |Enhancement| :class:`model_selection.LearningCurveDisplay` displays both the
train and test curves by default. You can set `score_type="test"` to keep the
past behaviour.
:pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :class:`model_selection.ValidationCurveDisplay` now accepts passing a
list to the `param_range` parameter.
:pr:`27311` by :user:`Arturo Amor <ArturoAmorQ>`.
Changes impacting all modules
-----------------------------
- |Enhancement| The `get_feature_names_out` method of the following classes now
raises a `NotFittedError` if the instance is not fitted. This ensures the error is
consistent in all estimators with the `get_feature_names_out` method.
- :class:`impute.MissingIndicator`
- :class:`feature_extraction.DictVectorizer`
- :class:`feature_extraction.text.TfidfTransformer`
- :class:`feature_selection.GenericUnivariateSelect`
- :class:`feature_selection.RFE`
- :class:`feature_selection.RFECV`
- :class:`feature_selection.SelectFdr`
- :class:`feature_selection.SelectFpr`
- :class:`feature_selection.SelectFromModel`
- :class:`feature_selection.SelectFwe`
- :class:`feature_selection.SelectKBest`
- :class:`feature_selection.SelectPercentile`
- :class:`feature_selection.SequentialFeatureSelector`
- :class:`feature_selection.VarianceThreshold`
- :class:`kernel_approximation.AdditiveChi2Sampler`
- :class:`impute.IterativeImputer`
- :class:`impute.KNNImputer`
- :class:`impute.SimpleImputer`
- :class:`isotonic.IsotonicRegression`
- :class:`preprocessing.Binarizer`
- :class:`preprocessing.KBinsDiscretizer`
- :class:`preprocessing.MaxAbsScaler`
- :class:`preprocessing.MinMaxScaler`
- :class:`preprocessing.Normalizer`
- :class:`preprocessing.OrdinalEncoder`
- :class:`preprocessing.PowerTransformer`
- :class:`preprocessing.QuantileTransformer`
- :class:`preprocessing.RobustScaler`
- :class:`preprocessing.SplineTransformer`
- :class:`preprocessing.StandardScaler`
- :class:`random_projection.GaussianRandomProjection`
- :class:`random_projection.SparseRandomProjection`
The `NotFittedError` displays an informative message asking to fit the instance
with the appropriate arguments.
:pr:`25294`, :pr:`25308`, :pr:`25291`, :pr:`25367`, :pr:`25402`,
by :user:`John Pangas <jpangas>`, :user:`Rahil Parikh <rprkh>` ,
and :user:`Alex Buzenet <albuzenet>`.
- |Enhancement| Added a multi-threaded Cython routine to compute squared
Euclidean distances (sometimes followed by a fused reduction operation) for a
pair of datasets consisting of a sparse CSR matrix and a dense NumPy array.
This can improve the performance of the following functions and estimators:
- :func:`sklearn.metrics.pairwise_distances_argmin`
- :func:`sklearn.metrics.pairwise_distances_argmin_min`
- :class:`sklearn.cluster.AffinityPropagation`
- :class:`sklearn.cluster.Birch`
- :class:`sklearn.cluster.MeanShift`
- :class:`sklearn.cluster.OPTICS`
- :class:`sklearn.cluster.SpectralClustering`
- :func:`sklearn.feature_selection.mutual_info_regression`
- :class:`sklearn.neighbors.KNeighborsClassifier`
- :class:`sklearn.neighbors.KNeighborsRegressor`
- :class:`sklearn.neighbors.RadiusNeighborsClassifier`
- :class:`sklearn.neighbors.RadiusNeighborsRegressor`
- :class:`sklearn.neighbors.LocalOutlierFactor`
- :class:`sklearn.neighbors.NearestNeighbors`
- :class:`sklearn.manifold.Isomap`
- :class:`sklearn.manifold.LocallyLinearEmbedding`
- :class:`sklearn.manifold.TSNE`
- :func:`sklearn.manifold.trustworthiness`
- :class:`sklearn.semi_supervised.LabelPropagation`
- :class:`sklearn.semi_supervised.LabelSpreading`
A typical example of this performance improvement happens when passing a sparse
CSR matrix to the `predict` or `transform` method of estimators that rely on
a dense NumPy representation to store their fitted parameters (or the reverse).
For instance, :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` is now up
to 2 times faster for this case on commonly available laptops.
:pr:`25044` by :user:`Julien Jerphanion <jjerphan>`.
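The algebra the routine exploits is the expansion ``||x - y||^2 = ||x||^2 + ||y||^2 - 2 x·y``, which turns the sparse-dense distance computation into a sparse-dense matrix product. The following numpy/scipy sketch only illustrates that decomposition; the actual speedup comes from scikit-learn's multi-threaded Cython implementation of it:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
X = sparse.random(5, 8, density=0.3, random_state=0, format="csr")
Y = rng.normal(size=(3, 8))

# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y : the cross term is a
# sparse-dense matrix product.
sq_norms_X = np.asarray(X.multiply(X).sum(axis=1)).ravel()
sq_norms_Y = (Y ** 2).sum(axis=1)
d2 = np.asarray(sq_norms_X[:, None] + sq_norms_Y[None, :] - 2.0 * (X @ Y.T))

# Check against the naive dense computation.
dense = X.toarray()
ref = ((dense[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
print(np.allclose(d2, ref))  # True
```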
- |Enhancement| All estimators that internally rely on OpenMP multi-threading
(via Cython) now use a number of threads equal to the number of physical
(instead of logical) cores by default. In the past, we observed that using as
many threads as logical cores on SMT hosts could sometimes cause severe
performance problems depending on the algorithms and the shape of the data.
Note that it is still possible to manually adjust the number of threads used
by OpenMP as documented in :ref:`parallelism`.
:pr:`26082` by :user:`Jérémie du Boisberranger <jeremiedbb>` and
:user:`Olivier Grisel <ogrisel>`.
Experimental / Under Development
--------------------------------
- |MajorFeature| :ref:`Metadata routing <metadata_routing>`'s related base
methods are included in this release. This feature is only available via the
`enable_metadata_routing` feature flag which can be enabled using
:func:`sklearn.set_config` and :func:`sklearn.config_context`. For now this
feature is mostly useful for third party developers to prepare their code
base for metadata routing, and we strongly recommend that they also hide it
behind the same feature flag, rather than having it enabled by default.
:pr:`24027` by `Adrin Jalali`_, :user:`Benjamin Bossan <BenjaminBossan>`, and
:user:`Omar Salman <OmarManzoor>`.
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123456 is the *pull request* number, not the issue number.
`sklearn`
.........
- |Feature| Added a new option `skip_parameter_validation`, to the function
:func:`sklearn.set_config` and context manager :func:`sklearn.config_context`, that
allows skipping the validation of the parameters passed to the estimators and public
functions. This can be useful to speed up the code but should be used with care
because it can lead to unexpected behaviors or raise obscure error messages when
setting invalid parameters.
:pr:`25815` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.base`
...................
- |Feature| A `__sklearn_clone__` protocol is now available to override the
default behavior of :func:`base.clone`. :pr:`24568` by `Thomas Fan`_.
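The protocol's dispatch rule can be sketched in a few lines: cloning defers to an estimator's `__sklearn_clone__` when it defines one. The `clone` below is a stand-in (scikit-learn's default path rebuilds the estimator from `get_params`, not via `deepcopy`); it only illustrates the override point:

```python
import copy

def clone(estimator):
    """Sketch of the dispatch rule, not base.clone itself."""
    if hasattr(type(estimator), "__sklearn_clone__"):
        return estimator.__sklearn_clone__()
    return copy.deepcopy(estimator)  # stand-in for the default re-construction

class FrozenEstimator:
    """Overrides cloning to return itself, e.g. to keep a fitted model
    intact inside meta-estimators."""
    def __sklearn_clone__(self):
        return self

frozen = FrozenEstimator()
print(clone(frozen) is frozen)  # True: the override short-circuits cloning
```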
- |Fix| :class:`base.TransformerMixin` now keeps a namedtuple's class
if `transform` returns a namedtuple. :pr:`26121` by `Thomas Fan`_.
:mod:`sklearn.calibration`
..........................
- |Fix| :class:`calibration.CalibratedClassifierCV` now does not enforce sample
alignment on `fit_params`. :pr:`25805` by `Adrin Jalali`_.
:mod:`sklearn.cluster`
......................
- |MajorFeature| Added :class:`cluster.HDBSCAN`, a modern hierarchical density-based
clustering algorithm. Similarly to :class:`cluster.OPTICS`, it can be seen as a
generalization of :class:`cluster.DBSCAN` by allowing for hierarchical instead of flat
clustering, however it varies in its approach from :class:`cluster.OPTICS`. This
algorithm is very robust with respect to its hyperparameters' values and can
be used on a wide variety of data without much, if any, tuning.
This implementation is an adaptation from the original implementation of HDBSCAN in
`scikit-learn-contrib/hdbscan <https://github.com/scikit-learn-contrib/hdbscan>`_,
by :user:`Leland McInnes <lmcinnes>` et al.
:pr:`26385` by :user:`Meekail Zain <micky774>`
- |Enhancement| The `sample_weight` parameter now will be used in centroids
initialization for :class:`cluster.KMeans`, :class:`cluster.BisectingKMeans`
and :class:`cluster.MiniBatchKMeans`.
This change will break backward compatibility, since numbers generated
from the same random seed will be different.
:pr:`25752` by :user:`Gleb Levitski <glevv>`,
:user:`Jérémie du Boisberranger <jeremiedbb>`,
:user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :class:`cluster.KMeans`, :class:`cluster.MiniBatchKMeans` and
:func:`cluster.k_means` now correctly handle the combination of `n_init="auto"`
and `init` being an array-like, running one initialization in that case.
:pr:`26657` by :user:`Binesh Bannerjee <bnsh>`.
- |API| The `sample_weight` parameter in `predict` for
:meth:`cluster.KMeans.predict` and :meth:`cluster.MiniBatchKMeans.predict`
is now deprecated and will be removed in v1.5.
:pr:`25251` by :user:`Gleb Levitski <glevv>`.
- |API| The `Xred` argument in :func:`cluster.FeatureAgglomeration.inverse_transform`
is renamed to `Xt` and will be removed in v1.5. :pr:`26503` by `Adrin Jalali`_.
:mod:`sklearn.compose`
......................
- |Fix| :class:`compose.ColumnTransformer` raises an informative error when the individual
transformers of `ColumnTransformer` output pandas dataframes with indexes that are
not consistent with each other and the output is configured to be pandas.
:pr:`26286` by `Thomas Fan`_.
- |Fix| :class:`compose.ColumnTransformer` correctly sets the output of the
remainder when `set_output` is called. :pr:`26323` by `Thomas Fan`_.
:mod:`sklearn.covariance`
.........................
- |Fix| Allows `alpha=0` in :class:`covariance.GraphicalLasso` to be
consistent with :func:`covariance.graphical_lasso`.
:pr:`26033` by :user:`Genesis Valencia <genvalen>`.
- |Fix| :func:`covariance.empirical_covariance` now gives an informative
error message when input is not appropriate.
:pr:`26108` by :user:`Quentin Barthélemy <qbarthelemy>`.
- |API| Deprecates `cov_init` in :func:`covariance.graphical_lasso` in 1.3 since
the parameter has no effect. It will be removed in 1.5.
:pr:`26033` by :user:`Genesis Valencia <genvalen>`.
- |API| Adds `costs_` fitted attribute in :class:`covariance.GraphicalLasso` and
:class:`covariance.GraphicalLassoCV`.
:pr:`26033` by :user:`Genesis Valencia <genvalen>`.
- |API| Adds `covariance` parameter in :class:`covariance.GraphicalLasso`.
:pr:`26033` by :user:`Genesis Valencia <genvalen>`.
- |API| Adds `eps` parameter in :class:`covariance.GraphicalLasso`,
:func:`covariance.graphical_lasso`, and :class:`covariance.GraphicalLassoCV`.
:pr:`26033` by :user:`Genesis Valencia <genvalen>`.
:mod:`sklearn.datasets`
.......................
- |Enhancement| Allows overwriting the parameters used to open the ARFF file using
the parameter `read_csv_kwargs` in :func:`datasets.fetch_openml` when using the
pandas parser.
:pr:`26433` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :func:`datasets.fetch_openml` returns improved data types when
`as_frame=True` and `parser="liac-arff"`. :pr:`26386` by `Thomas Fan`_.
- |Fix| Following the ARFF specs, only the marker `"?"` is now considered a missing
value when opening ARFF files fetched using :func:`datasets.fetch_openml` when using
the pandas parser. The parameter `read_csv_kwargs` allows overwriting this behaviour.
:pr:`26551` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :func:`datasets.fetch_openml` will consistently use `np.nan` as missing marker
with both parsers `"pandas"` and `"liac-arff"`.
:pr:`26579` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| The `data_transposed` argument of :func:`datasets.make_sparse_coded_signal`
is deprecated and will be removed in v1.5.
:pr:`25784` by :user:`Jérémie du Boisberranger`.
:mod:`sklearn.decomposition`
............................
- |Efficiency| :class:`decomposition.MiniBatchDictionaryLearning` and
:class:`decomposition.MiniBatchSparsePCA` are now faster for small batch sizes by
avoiding duplicate validations.
:pr:`25490` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Enhancement| :class:`decomposition.DictionaryLearning` now accepts the parameter
`callback` for consistency with the function :func:`decomposition.dict_learning`.
:pr:`24871` by :user:`Omar Salman <OmarManzoor>`.
- |Fix| Treat more consistently small values in the `W` and `H` matrices during the
`fit` and `transform` steps of :class:`decomposition.NMF` and
:class:`decomposition.MiniBatchNMF` which can produce different results than previous
versions. :pr:`25438` by :user:`Yotam Avidar-Constantini <yotamcons>`.
- |API| The `W` argument in :func:`decomposition.NMF.inverse_transform` and
:class:`decomposition.MiniBatchNMF.inverse_transform` is renamed to `Xt` and
will be removed in v1.5. :pr:`26503` by `Adrin Jalali`_.
:mod:`sklearn.discriminant_analysis`
....................................
- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now
supports `PyTorch <https://pytorch.org/>`__. See
:ref:`array_api` for more details. :pr:`25956` by `Thomas Fan`_.
:mod:`sklearn.ensemble`
.......................
- |Feature| :class:`ensemble.HistGradientBoostingRegressor` now supports
the Gamma deviance loss via `loss="gamma"`.
Using the Gamma deviance as loss function comes in handy for modelling
skew-distributed, strictly positive-valued targets.
:pr:`22409` by :user:`Christian Lorentzen <lorentzenchr>`.
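The loss being referred to is the Gamma deviance, whose unit deviance is `d(y, mu) = 2 * (log(mu / y) + y / mu - 1)`, zero exactly when predictions match the targets. A numpy sketch, written only to illustrate the loss (the estimator computes its own, equivalent-up-to-constants version internally):

```python
import numpy as np

def gamma_deviance(y_true, y_pred):
    """Mean Gamma deviance; defined for strictly positive targets and
    predictions. Illustrative only."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(2.0 * (np.log(y_pred / y_true) + y_true / y_pred - 1.0))

print(gamma_deviance([1.0, 2.0, 4.0], [1.0, 2.0, 4.0]))      # 0.0: perfect fit
print(gamma_deviance([1.0, 2.0, 4.0], [2.0, 2.0, 2.0]) > 0)  # True
```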
- |Feature| Compute a custom out-of-bag score by passing a callable to
:class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor`.
:pr:`25177` by `Tim Head`_.
- |Feature| :class:`ensemble.GradientBoostingClassifier` now exposes
out-of-bag scores via the `oob_scores_` or `oob_score_` attributes.
:pr:`24882` by :user:`Ashwin Mathur <awinml>`.
- |Efficiency| :class:`ensemble.IsolationForest` predict time is now faster
(typically by a factor of 8 or more). Internally, the estimator now precomputes
decision path lengths per tree at `fit` time. It is therefore not possible
to load an estimator trained with scikit-learn 1.2 to make it predict with
scikit-learn 1.3: retraining with scikit-learn 1.3 is required.
:pr:`25186` by :user:`Felipe Breve Siola <fsiola>`.
- |Efficiency| :class:`ensemble.RandomForestClassifier` and
:class:`ensemble.RandomForestRegressor` with `warm_start=True` now only
recomputes out-of-bag scores when there are actually more `n_estimators`
in subsequent `fit` calls.
:pr:`26318` by :user:`Joshua Choo Yun Keat <choo8>`.
- |Enhancement| :class:`ensemble.BaggingClassifier` and
:class:`ensemble.BaggingRegressor` expose the `allow_nan` tag from the
underlying estimator. :pr:`25506` by `Thomas Fan`_.
- |Fix| :meth:`ensemble.RandomForestClassifier.fit` sets `max_samples = 1`
when `max_samples` is a float and `round(n_samples * max_samples) < 1`.
:pr:`25601` by :user:`Jan Fidor <JanFidor>`.
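The arithmetic behind the fix fits in one line: a float `max_samples` selects `round(n_samples * max_samples)` bootstrap samples, and the result is clamped to at least 1 so the forest never draws an empty bootstrap. The helper name is hypothetical, used only to illustrate the rule:

```python
def resolve_max_samples(n_samples, max_samples):
    """Float `max_samples` selects round(n_samples * max_samples)
    bootstrap samples, but never fewer than 1 (the fixed behaviour)."""
    return max(round(n_samples * max_samples), 1)

print(resolve_max_samples(100, 0.5))    # 50
print(resolve_max_samples(100, 0.004))  # round(0.4) == 0 -> clamped to 1
```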
- |Fix| :meth:`ensemble.IsolationForest.fit` no longer warns about missing
feature names when called with `contamination` not `"auto"` on a pandas
dataframe.
:pr:`25931` by :user:`Yao Xiao <Charlie-XIAO>`.
- |Fix| :class:`ensemble.HistGradientBoostingRegressor` and
:class:`ensemble.HistGradientBoostingClassifier` treats negative values for
categorical features consistently as missing values, following LightGBM's and
pandas' conventions.
:pr:`25629` by `Thomas Fan`_.
- |Fix| Fix deprecation of `base_estimator` in :class:`ensemble.AdaBoostClassifier`
and :class:`ensemble.AdaBoostRegressor` that was introduced in :pr:`23819`.
:pr:`26242` by :user:`Marko Toplak <markotoplak>`.
:mod:`sklearn.exceptions`
.........................
- |Feature| Added :class:`exceptions.InconsistentVersionWarning` which is raised
when a scikit-learn estimator is unpickled with a scikit-learn version that is
inconsistent with the scikit-learn version the estimator was pickled with.
:pr:`25297` by `Thomas Fan`_.
:mod:`sklearn.feature_extraction`
.................................
- |API| :class:`feature_extraction.image.PatchExtractor` now follows the
transformer API of scikit-learn. This class is defined as a stateless transformer
meaning that it is not required to call `fit` before calling `transform`.
Parameter validation only happens at `fit` time.
:pr:`24230` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.feature_selection`
................................
- |Enhancement| All selectors in :mod:`sklearn.feature_selection` will preserve
a DataFrame's dtype when transformed. :pr:`25102` by `Thomas Fan`_.
- |Fix| :class:`feature_selection.SequentialFeatureSelector`'s `cv` parameter
now supports generators. :pr:`25973` by :user:`Yao Xiao <Charlie-XIAO>`.
:mod:`sklearn.impute`
.....................
- |Enhancement| Added the parameter `fill_value` to :class:`impute.IterativeImputer`.
:pr:`25232` by :user:`Thijs van Weezel <ValueInvestorThijs>`.
- |Fix| :class:`impute.IterativeImputer` now correctly preserves the Pandas
Index when `set_config(transform_output="pandas")` is used. :pr:`26454` by `Thomas Fan`_.
:mod:`sklearn.inspection`
.........................
- |Enhancement| Added support for `sample_weight` in
:func:`inspection.partial_dependence` and
:meth:`inspection.PartialDependenceDisplay.from_estimator`. This allows for
weighted averaging when aggregating over each value of the grid on which the
inspection is performed. The option is only available when `method` is set to `brute`.
:pr:`25209` and :pr:`26644` by :user:`Carlo Lemos <vitaliset>`.
- |API| :func:`inspection.partial_dependence` returns a :class:`utils.Bunch` with
new key: `grid_values`. The `values` key is deprecated in favor of `grid_values`
and the `values` key will be removed in 1.5.
:pr:`21809` and :pr:`25732` by `Thomas Fan`_.
:mod:`sklearn.kernel_approximation`
...................................
- |Fix| :class:`kernel_approximation.AdditiveChi2Sampler` is now stateless.
The `sample_interval_` attribute is deprecated and will be removed in 1.5.
:pr:`25190` by :user:`Vincent Maladière <Vincent-Maladiere>`.
:mod:`sklearn.linear_model`
...........................
- |Efficiency| Avoid data scaling when `sample_weight=None` and other
unnecessary data copies and unexpected dense to sparse data conversion in
:class:`linear_model.LinearRegression`.
:pr:`26207` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| :class:`linear_model.SGDClassifier`,
:class:`linear_model.SGDRegressor` and :class:`linear_model.SGDOneClassSVM`
now preserve dtype for `numpy.float32`.
:pr:`25587` by :user:`Omar Salman <OmarManzoor>`.
- |Enhancement| The `n_iter_` attribute has been included in
:class:`linear_model.ARDRegression` to expose the actual number of iterations
required to reach the stopping criterion.
:pr:`25697` by :user:`John Pangas <jpangas>`.
- |Fix| Use a more robust criterion to detect convergence of
:class:`linear_model.LogisticRegression` with `penalty="l1"` and `solver="liblinear"`
on linearly separable problems.
:pr:`25214` by `Tom Dupre la Tour`_.
- |Fix| Fix a crash when calling `fit` on
:class:`linear_model.LogisticRegression` with `solver="newton-cholesky"` and
`max_iter=0` which failed to inspect the state of the model prior to the
first parameter update.
:pr:`26653` by :user:`Olivier Grisel <ogrisel>`.
- |API| Deprecates `n_iter` in favor of `max_iter` in
:class:`linear_model.BayesianRidge` and :class:`linear_model.ARDRegression`.
`n_iter` will be removed in scikit-learn 1.5. This change makes those
estimators consistent with the rest of estimators.
:pr:`25697` by :user:`John Pangas <jpangas>`.
:mod:`sklearn.manifold`
.......................
- |Fix| :class:`manifold.Isomap` now correctly preserves the Pandas
Index when `set_config(transform_output="pandas")` is used. :pr:`26454` by `Thomas Fan`_.
:mod:`sklearn.metrics`
......................
- |Feature| Adds `zero_division=np.nan` to multiple classification metrics:
:func:`metrics.precision_score`, :func:`metrics.recall_score`,
:func:`metrics.f1_score`, :func:`metrics.fbeta_score`,
:func:`metrics.precision_recall_fscore_support`,
:func:`metrics.classification_report`. When `zero_division=np.nan` and there is a
zero division, the metric is undefined and is excluded from averaging. When not used
for averages, the value returned is `np.nan`.
:pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.
- |Feature| :func:`metrics.average_precision_score` now supports the
multiclass case.
:pr:`17388` by :user:`Geoffrey Bolmier <gbolmier>` and
:pr:`24769` by :user:`Ashwin Mathur <awinml>`.
- |Efficiency| The computation of the expected mutual information in
:func:`metrics.adjusted_mutual_info_score` is now faster when the number of
unique labels is large and its memory usage is reduced in general.
:pr:`25713` by :user:`Kshitij Mathur <Kshitij68>`,
:user:`Guillaume Lemaitre <glemaitre>`, :user:`Omar Salman <OmarManzoor>` and
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Enhancement| :func:`metrics.silhouette_samples` now accepts a sparse
matrix of pairwise distances between samples, or a feature array.
:pr:`18723` by :user:`Sahil Gupta <sahilgupta2105>` and
:pr:`24677` by :user:`Ashwin Mathur <awinml>`.
- |Enhancement| A new parameter `drop_intermediate` was added to
:func:`metrics.precision_recall_curve`,
:func:`metrics.PrecisionRecallDisplay.from_estimator`,
:func:`metrics.PrecisionRecallDisplay.from_predictions`,
which drops some suboptimal thresholds to create lighter precision-recall
curves.
:pr:`24668` by :user:`dberenbaum`.
- |Enhancement| :meth:`metrics.RocCurveDisplay.from_estimator` and
:meth:`metrics.RocCurveDisplay.from_predictions` now accept two new keywords,
`plot_chance_level` and `chance_level_kw` to plot the baseline chance
level. This line is exposed in the `chance_level_` attribute.
:pr:`25987` by :user:`Yao Xiao <Charlie-XIAO>`.
- |Enhancement| :meth:`metrics.PrecisionRecallDisplay.from_estimator` and
:meth:`metrics.PrecisionRecallDisplay.from_predictions` now accept two new
keywords, `plot_chance_level` and `chance_level_kw` to plot the baseline
chance level. This line is exposed in the `chance_level_` attribute.
:pr:`26019` by :user:`Yao Xiao <Charlie-XIAO>`.
- |Fix| :func:`metrics.pairwise.manhattan_distances` now supports readonly sparse datasets.
:pr:`25432` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| Fixed :func:`metrics.classification_report` so that empty input will return
`np.nan`. Previously, "macro avg" and "weighted avg" would inconsistently
return e.g. `f1-score=np.nan` and `f1-score=0.0` respectively. Now they
both return `np.nan`.
:pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.
- |Fix| :func:`metrics.ndcg_score` now gives a meaningful error message for input of
length 1.
:pr:`25672` by :user:`Lene Preuss <lene>` and :user:`Wei-Chun Chu <wcchu>`.
- |Fix| :func:`metrics.log_loss` raises a warning if the values of the parameter
`y_pred` are not normalized, instead of actually normalizing them in the metric.
Starting from 1.5 this will raise an error.
:pr:`25299` by :user:`Omar Salman <OmarManzoor>`.
- |Fix| In :func:`metrics.roc_curve`, use the threshold value `np.inf` instead of
arbitrary `max(y_score) + 1`. This threshold is associated with the ROC curve point
`tpr=0` and `fpr=0`.
:pr:`26194` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| The `'matching'` metric has been removed when using SciPy>=1.9
to be consistent with `scipy.spatial.distance` which does not support
`'matching'` anymore.
:pr:`26264` by :user:`Barata T. Onggo <magnusbarata>`.
- |API| The `eps` parameter of the :func:`metrics.log_loss` has been deprecated and
will be removed in 1.5. :pr:`25299` by :user:`Omar Salman <OmarManzoor>`.
:mod:`sklearn.gaussian_process`
...............................
- |Fix| :class:`gaussian_process.GaussianProcessRegressor` has a new argument
`n_targets`, which is used to decide the number of outputs when sampling
from the prior distributions. :pr:`23099` by :user:`Zhehao Liu <MaxwellLZH>`.
:mod:`sklearn.mixture`
......................
- |Efficiency| :class:`mixture.GaussianMixture` is now more efficient and bypasses
unnecessary initialization when the weights, means, and precisions are
provided by the user.
:pr:`26021` by :user:`Jiawei Zhang <jiawei-zhang-a>`.
:mod:`sklearn.model_selection`
..............................
- |MajorFeature| Added the class :class:`model_selection.ValidationCurveDisplay`
that allows easy plotting of validation curves obtained by the function
:func:`model_selection.validation_curve`.
:pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| The parameter `log_scale` in the class
:class:`model_selection.LearningCurveDisplay` has been deprecated in 1.3 and
will be removed in 1.5. The default scale can be overridden by setting it
directly on the `ax` object and will be set automatically from the spacing
of the data points otherwise.
:pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :func:`model_selection.cross_validate` accepts a new parameter
`return_indices` to return the train-test indices of each cv split.
:pr:`25659` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.multioutput`
..........................
- |Fix| :func:`getattr` on :meth:`multioutput.MultiOutputRegressor.partial_fit`
and :meth:`multioutput.MultiOutputClassifier.partial_fit` now correctly raise
an `AttributeError` if done before calling `fit`. :pr:`26333` by `Adrin
Jalali`_.
:mod:`sklearn.naive_bayes`
..........................
- |Fix| :class:`naive_bayes.GaussianNB` no longer raises a `ZeroDivisionError`
when the provided `sample_weight` reduces the problem to a single class in `fit`.
:pr:`24140` by :user:`Jonathan Ohayon <Johayon>` and :user:`Chiara Marmo <cmarmo>`.
:mod:`sklearn.neighbors`
........................
- |Enhancement| The performance of :meth:`neighbors.KNeighborsClassifier.predict`
and of :meth:`neighbors.KNeighborsClassifier.predict_proba` has been improved
when `n_neighbors` is large and `algorithm="brute"` with non-Euclidean metrics.
:pr:`24076` by :user:`Meekail Zain <micky774>`, :user:`Julien Jerphanion <jjerphan>`.
- |Fix| Remove support for `KulsinskiDistance` in :class:`neighbors.BallTree`. This
dissimilarity is not a metric and cannot be supported by the BallTree.
:pr:`25417` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| The support for metrics other than `euclidean` and `manhattan` and for
callables in :class:`neighbors.NearestCentroid` is deprecated and will be removed in
version 1.5. :pr:`24083` by :user:`Valentin Laurent <Valentin-Laurent>`.
:mod:`sklearn.neural_network`
.............................
- |Fix| :class:`neural_network.MLPRegressor` and :class:`neural_network.MLPClassifier`
report the correct `n_iter_` when `warm_start=True`. It corresponds to the number
of iterations performed on the current call to `fit` instead of the total number
of iterations performed since the initialization of the estimator.
:pr:`25443` by :user:`Marvin Krawutschke <Marvvxi>`.
:mod:`sklearn.pipeline`
.......................
- |Feature| :class:`pipeline.FeatureUnion` can now use indexing notation (e.g.
`feature_union["scalar"]`) to access transformers by name. :pr:`25093` by
`Thomas Fan`_.
- |Feature| :class:`pipeline.FeatureUnion` can now access the
`feature_names_in_` attribute if the `X` value seen during `.fit` has a
`columns` attribute and all columns are strings, e.g. when `X` is a
`pandas.DataFrame`.
:pr:`25220` by :user:`Ian Thompson <it176131>`.
- |Fix| :meth:`pipeline.Pipeline.fit_transform` now raises an `AttributeError`
if the last step of the pipeline does not support `fit_transform`.
:pr:`26325` by `Adrin Jalali`_.
:mod:`sklearn.preprocessing`
............................
- |MajorFeature| Introduces :class:`preprocessing.TargetEncoder` which is a
categorical encoding based on target mean conditioned on the value of the
category. :pr:`25334` by `Thomas Fan`_.
- |Feature| :class:`preprocessing.OrdinalEncoder` now supports grouping
infrequent categories into a single feature. Grouping infrequent categories
is enabled by specifying how to select infrequent categories with
`min_frequency` or `max_categories`. :pr:`25677` by `Thomas Fan`_.
- |Enhancement| :class:`preprocessing.PolynomialFeatures` now calculates the
number of expanded terms a priori when dealing with sparse `csr` matrices
in order to optimize the choice of `dtype` for `indices` and `indptr`. It
can now output `csr` matrices with `np.int32` `indices/indptr` components
when there are few enough elements, and will automatically use `np.int64`
for sufficiently large matrices.
:pr:`20524` by :user:`niuk-a <niuk-a>` and
:pr:`23731` by :user:`Meekail Zain <micky774>`.
- |Enhancement| A new parameter `sparse_output` was added to
:class:`preprocessing.SplineTransformer`, available as of SciPy 1.8. If
`sparse_output=True`, :class:`preprocessing.SplineTransformer` returns a sparse
CSR matrix. :pr:`24145` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| Adds a `feature_name_combiner` parameter to
:class:`preprocessing.OneHotEncoder`. This specifies a custom callable to
create feature names to be returned by
:meth:`preprocessing.OneHotEncoder.get_feature_names_out`. The callable
combines input arguments `(input_feature, category)` to a string.
:pr:`22506` by :user:`Mario Kostelac <mariokostelac>`.
- |Enhancement| Added support for `sample_weight` in
:class:`preprocessing.KBinsDiscretizer`. This allows specifying the parameter
`sample_weight` for each sample to be used while fitting. The option is only
available when `strategy` is set to `quantile` or `kmeans`.
:pr:`24935` by :user:`Seladus <seladus>`, :user:`Guillaume Lemaitre <glemaitre>`, and
:user:`Dea María Léon <deamarialeon>`, :pr:`25257` by :user:`Gleb Levitski <glevv>`.
- |Enhancement| Subsampling through the `subsample` parameter can now be used in
:class:`preprocessing.KBinsDiscretizer` regardless of the strategy used.
:pr:`26424` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :class:`preprocessing.PowerTransformer` now correctly preserves the Pandas
Index when `set_config(transform_output="pandas")` is used. :pr:`26454` by `Thomas Fan`_.
- |Fix| :class:`preprocessing.PowerTransformer` now correctly raises error when
using `method="box-cox"` on data with a constant `np.nan` column.
:pr:`26400` by :user:`Yao Xiao <Charlie-XIAO>`.
- |Fix| :class:`preprocessing.PowerTransformer` with `method="yeo-johnson"` now leaves
constant features unchanged instead of transforming with an arbitrary value for
the `lambdas_` fitted parameter.
:pr:`26566` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| The default value of the `subsample` parameter of
:class:`preprocessing.KBinsDiscretizer` will change from `None` to `200_000` in
version 1.5 when `strategy="kmeans"` or `strategy="uniform"`.
:pr:`26424` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.svm`
..................
- |API| `dual` parameter now accepts `auto` option for
:class:`svm.LinearSVC` and :class:`svm.LinearSVR`.
:pr:`26093` by :user:`Gleb Levitski <glevv>`.
:mod:`sklearn.tree`
...................
- |MajorFeature| :class:`tree.DecisionTreeRegressor` and
:class:`tree.DecisionTreeClassifier` support missing values when
`splitter='best'` and criterion is `gini`, `entropy`, or `log_loss`,
for classification or `squared_error`, `friedman_mse`, or `poisson`
for regression. :pr:`23595`, :pr:`26376` by `Thomas Fan`_.
- |Enhancement| Adds a `class_names` parameter to
:func:`tree.export_text`. This allows specifying the parameter `class_names`
for each target class in ascending numerical order.
:pr:`25387` by :user:`William M <Akbeeh>` and :user:`crispinlogan <crispinlogan>`.
- |Fix| :func:`tree.export_graphviz` and :func:`tree.export_text` now accept
`feature_names` and `class_names` as array-like rather than lists.
:pr:`26289` by :user:`Yao Xiao <Charlie-XIAO>`.
:mod:`sklearn.utils`
....................
- |Fix| Fixes :func:`utils.check_array` to properly convert pandas
extension arrays. :pr:`25813` and :pr:`26106` by `Thomas Fan`_.
- |Fix| :func:`utils.check_array` now supports pandas DataFrames with
extension arrays and object dtypes by returning an ndarray with object dtype.
:pr:`25814` by `Thomas Fan`_.
- |API| `utils.estimator_checks.check_transformers_unfitted_stateless` has been
introduced to ensure stateless transformers don't raise `NotFittedError`
during `transform` with no prior call to `fit` or `fit_transform`.
:pr:`25190` by :user:`Vincent Maladière <Vincent-Maladiere>`.
- |API| A `FutureWarning` is now raised when instantiating a class which inherits from
a deprecated base class (i.e. decorated by :class:`utils.deprecated`) and which
overrides the `__init__` method.
:pr:`25733` by :user:`Brigitta Sipőcz <bsipocz>` and
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.semi_supervised`
..............................
- |Enhancement| :meth:`semi_supervised.LabelSpreading.fit` and
:meth:`semi_supervised.LabelPropagation.fit` now accept sparse matrices.
:pr:`19664` by :user:`Kaushik Amar Das <cozek>`.
Miscellaneous
.............
- |Enhancement| Replace obsolete exceptions `EnvironmentError`, `IOError` and
`WindowsError`.
:pr:`26466` by :user:`Dimitri Papadopoulos ORfanos <DimitriPapadopoulos>`.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 1.2, including:
2357juan, Abhishek Singh Kushwah, Adam Handke, Adam Kania, Adam Li, adienes,
Admir Demiraj, adoublet, Adrin Jalali, A.H.Mansouri, Ahmedbgh, Ala-Na, Alex
Buzenet, AlexL, Ali H. El-Kassas, amay, András Simon, André Pedersen, Andrew
Wang, Ankur Singh, annegnx, Ansam Zedan, Anthony22-dev, Artur Hermano, Arturo
Amor, as-90, ashah002, Ashish Dutt, Ashwin Mathur, AymericBasset, Azaria
Gebremichael, Barata Tripramudya Onggo, Benedek Harsanyi, Benjamin Bossan,
Bharat Raghunathan, Binesh Bannerjee, Boris Feld, Brendan Lu, Brevin Kunde,
cache-missing, Camille Troillard, Carla J, carlo, Carlo Lemos, c-git, Changyao
Chen, Chiara Marmo, Christian Lorentzen, Christian Veenhuis, Christine P. Chai,
crispinlogan, Da-Lan, DanGonite57, Dave Berenbaum, davidblnc, david-cortes,
Dayne, Dea María Léon, Denis, Dimitri Papadopoulos Orfanos, Dimitris
Litsidis, Dmitry Nesterov, Dominic Fox, Dominik Prodinger, Edern, Ekaterina
Butyugina, Elabonga Atuo, Emir, farhan khan, Felipe Siola, futurewarning, Gael
Varoquaux, genvalen, Gleb Levitski, Guillaume Lemaitre, gunesbayir, Haesun
Park, hujiahong726, i-aki-y, Ian Thompson, Ido M, Ily, Irene, Jack McIvor,
jakirkham, James Dean, JanFidor, Jarrod Millman, JB Mountford, Jérémie du
Boisberranger, Jessicakk0711, Jiawei Zhang, Joey Ortiz, JohnathanPi, John
Pangas, Joshua Choo Yun Keat, Joshua Hedlund, JuliaSchoepp, Julien Jerphanion,
jygerardy, ka00ri, Kaushik Amar Das, Kento Nozawa, Kian Eliasi, Kilian Kluge,
Lene Preuss, Linus, Logan Thomas, Loic Esteve, Louis Fouquet, Lucy Liu, Madhura
Jayaratne, Marc Torrellas Socastro, Maren Westermann, Mario Kostelac, Mark
Harfouche, Marko Toplak, Marvin Krawutschke, Masanori Kanazu, mathurinm, Matt
Haberland, Max Halford, maximeSaur, Maxwell Liu, m. bou, mdarii, Meekail Zain,
Mikhail Iljin, murezzda, Nawazish Alam, Nicola Fanelli, Nightwalkx, Nikolay
Petrov, Nishu Choudhary, NNLNR, npache, Olivier Grisel, Omar Salman, ouss1508,
PAB, Pandata, partev, Peter Piontek, Phil, pnucci, Pooja M, Pooja Subramaniam,
precondition, Quentin Barthélemy, Rafal Wojdyla, Raghuveer Bhat, Rahil Parikh,
Ralf Gommers, ram vikram singh, Rushil Desai, Sadra Barikbin, SANJAI_3, Sashka
Warner, Scott Gigante, Scott Gustafson, searchforpassion, Seoeun
Hong, Shady el Gewily, Shiva chauhan, Shogo Hida, Shreesha Kumar Bhat, sonnivs,
Sortofamudkip, Stanislav (Stanley) Modrak, Stefanie Senger, Steven Van
Vaerenbergh, Tabea Kossen, Théophile Baranger, Thijs van Weezel, Thomas A
Caswell, Thomas Germer, Thomas J. Fan, Tim Head, Tim P, Tom Dupré la Tour,
tomiock, tspeng, Valentin Laurent, Veghit, VIGNESH D, Vijeth Moudgalya, Vinayak
Mehta, Vincent M, Vincent-violet, Vyom Pathak, William M, windiana42, Xiao
Yuan, Yao Xiao, Yaroslav Halchenko, Yotam Avidar-Constantini, Yuchen Zhou,
Yusuf Raji, zeeshan lone | scikit-learn | include contributors rst currentmodule sklearn release notes 1 3 Version 1 3 For a short description of the main highlights of the release please refer to ref sphx glr auto examples release highlights plot release highlights 1 3 0 py include changelog legend inc changes 1 3 2 Version 1 3 2 October 2023 Changelog mod sklearn datasets Fix All dataset fetchers now accept data home as any object that implements the class os PathLike interface for instance class pathlib Path pr 27468 by user Yao Xiao Charlie XIAO mod sklearn decomposition Fix Fixes a bug in class decomposition KernelPCA by forcing the output of the internal class preprocessing KernelCenterer to be a default array When the arpack solver is used it expects an array with a dtype attribute pr 27583 by user Guillaume Lemaitre glemaitre mod sklearn metrics Fix Fixes a bug for metrics using zero division np nan e g func metrics precision score within a paralell loop e g func model selection cross val score where the singleton for np nan will be different in the sub processes pr 27573 by user Guillaume Lemaitre glemaitre mod sklearn tree Fix Do not leak data via non initialized memory in decision tree pickle files and make the generation of those files deterministic pr 27580 by user Lo c Est ve lesteve changes 1 3 1 Version 1 3 1 September 2023 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Fix Ridge models with solver sparse cg may have slightly different results with scipy 1 12 because of an underlying change in the scipy solver see scipy 18488 https github com scipy scipy pull 18488 for more details pr 26814 by user Lo c Est ve lesteve Changes impacting all modules Fix The set output API correctly works with list input pr 27044 by Thomas Fan Changelog 
mod sklearn calibration Fix class calibration CalibratedClassifierCV can now handle models that produce large prediction scores Before it was numerically unstable pr 26913 by user Omar Salman OmarManzoor mod sklearn cluster Fix class cluster BisectingKMeans could crash when predicting on data with a different scale than the data used to fit the model pr 27167 by Olivier Grisel Fix class cluster BisectingKMeans now works with data that has a single feature pr 27243 by user J r mie du Boisberranger jeremiedbb mod sklearn cross decomposition Fix class cross decomposition PLSRegression now automatically ravels the output of predict if fitted with one dimensional y pr 26602 by user Yao Xiao Charlie XIAO mod sklearn ensemble Fix Fix a bug in class ensemble AdaBoostClassifier with algorithm SAMME where the decision function of each weak learner should be symmetric i e the sum of the scores should sum to zero for a sample pr 26521 by user Guillaume Lemaitre glemaitre mod sklearn feature selection Fix func feature selection mutual info regression now correctly computes the result when X is of integer dtype pr 26748 by user Yao Xiao Charlie XIAO mod sklearn impute Fix class impute KNNImputer now correctly adds a missing indicator column in transform when add indicator is set to True and missing values are observed during fit pr 26600 by user Shreesha Kumar Bhat Shreesha3112 mod sklearn metrics Fix Scorers used with func metrics get scorer handle properly multilabel indicator matrix pr 27002 by user Guillaume Lemaitre glemaitre mod sklearn mixture Fix The initialization of class mixture GaussianMixture from user provided precisions init for covariance type of full or tied was not correct and has been fixed pr 26416 by user Yang Tao mchikyt3 mod sklearn neighbors Fix meth neighbors KNeighborsClassifier predict no longer raises an exception for pandas DataFrames input pr 26772 by user J r mie du Boisberranger jeremiedbb Fix Reintroduce sklearn neighbors BallTree valid metrics 
and sklearn neighbors KDTree valid metrics as public class attributes pr 26754 by user Julien Jerphanion jjerphan Fix class sklearn model selection HalvingRandomSearchCV no longer raises when the input to the param distributions parameter is a list of dicts pr 26893 by user Stefanie Senger StefanieSenger Fix Neighbors based estimators now correctly work when metric minkowski and the metric parameter p is in the range 0 p 1 regardless of the dtype of X pr 26760 by user Shreesha Kumar Bhat Shreesha3112 mod sklearn preprocessing Fix class preprocessing LabelEncoder correctly accepts y as a keyword argument pr 26940 by Thomas Fan Fix class preprocessing OneHotEncoder shows a more informative error message when sparse output True and the output is configured to be pandas pr 26931 by Thomas Fan mod sklearn tree Fix func tree plot tree now accepts class names True as documented pr 26903 by user Thomas Roehr 2maz Fix The feature names parameter of func tree plot tree now accepts any kind of array like instead of just a list pr 27292 by user Rahil Parikh rprkh changes 1 3 Version 1 3 0 June 2023 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Enhancement meth multiclass OutputCodeClassifier predict now uses a more efficient pairwise distance reduction As a consequence the tie breaking strategy is different and thus the predicted labels may be different pr 25196 by user Guillaume Lemaitre glemaitre Enhancement The fit transform method of class decomposition DictionaryLearning is more efficient but may produce different results as in previous versions when transform algorithm is not the same as fit algorithm and the number of iterations is small pr 24871 by user Omar Salman OmarManzoor Enhancement The sample weight parameter now will be used in centroids 
initialization for class cluster KMeans class cluster BisectingKMeans and class cluster MiniBatchKMeans This change will break backward compatibility since numbers generated from same random seeds will be different pr 25752 by user Gleb Levitski glevv user J r mie du Boisberranger jeremiedbb user Guillaume Lemaitre glemaitre Fix Treat more consistently small values in the W and H matrices during the fit and transform steps of class decomposition NMF and class decomposition MiniBatchNMF which can produce different results than previous versions pr 25438 by user Yotam Avidar Constantini yotamcons Fix class decomposition KernelPCA may produce different results through inverse transform if gamma is None Now it will be chosen correctly as 1 n features of the data that it is fitted on while previously it might be incorrectly chosen as 1 n features of the data passed to inverse transform A new attribute gamma is provided for revealing the actual value of gamma used each time the kernel is called pr 26337 by user Yao Xiao Charlie XIAO Changed displays Enhancement class model selection LearningCurveDisplay displays both the train and test curves by default You can set score type test to keep the past behaviour pr 25120 by user Guillaume Lemaitre glemaitre Fix class model selection ValidationCurveDisplay now accepts passing a list to the param range parameter pr 27311 by user Arturo Amor ArturoAmorQ Changes impacting all modules Enhancement The get feature names out method of the following classes now raises a NotFittedError if the instance is not fitted This ensures the error is consistent in all estimators with the get feature names out method class impute MissingIndicator class feature extraction DictVectorizer class feature extraction text TfidfTransformer class feature selection GenericUnivariateSelect class feature selection RFE class feature selection RFECV class feature selection SelectFdr class feature selection SelectFpr class feature selection SelectFromModel 
class feature selection SelectFwe class feature selection SelectKBest class feature selection SelectPercentile class feature selection SequentialFeatureSelector class feature selection VarianceThreshold class kernel approximation AdditiveChi2Sampler class impute IterativeImputer class impute KNNImputer class impute SimpleImputer class isotonic IsotonicRegression class preprocessing Binarizer class preprocessing KBinsDiscretizer class preprocessing MaxAbsScaler class preprocessing MinMaxScaler class preprocessing Normalizer class preprocessing OrdinalEncoder class preprocessing PowerTransformer class preprocessing QuantileTransformer class preprocessing RobustScaler class preprocessing SplineTransformer class preprocessing StandardScaler class random projection GaussianRandomProjection class random projection SparseRandomProjection The NotFittedError displays an informative message asking to fit the instance with the appropriate arguments pr 25294 pr 25308 pr 25291 pr 25367 pr 25402 by user John Pangas jpangas user Rahil Parikh rprkh and user Alex Buzenet albuzenet Enhancement Added a multi threaded Cython routine to the compute squared Euclidean distances sometimes followed by a fused reduction operation for a pair of datasets consisting of a sparse CSR matrix and a dense NumPy This can improve the performance of following functions and estimators func sklearn metrics pairwise distances argmin func sklearn metrics pairwise distances argmin min class sklearn cluster AffinityPropagation class sklearn cluster Birch class sklearn cluster MeanShift class sklearn cluster OPTICS class sklearn cluster SpectralClustering func sklearn feature selection mutual info regression class sklearn neighbors KNeighborsClassifier class sklearn neighbors KNeighborsRegressor class sklearn neighbors RadiusNeighborsClassifier class sklearn neighbors RadiusNeighborsRegressor class sklearn neighbors LocalOutlierFactor class sklearn neighbors NearestNeighbors class sklearn manifold Isomap 
class sklearn manifold LocallyLinearEmbedding class sklearn manifold TSNE func sklearn manifold trustworthiness class sklearn semi supervised LabelPropagation class sklearn semi supervised LabelSpreading A typical example of this performance improvement happens when passing a sparse CSR matrix to the predict or transform method of estimators that rely on a dense NumPy representation to store their fitted parameters or the reverse For instance meth sklearn neighbors NearestNeighbors kneighbors is now up to 2 times faster for this case on commonly available laptops pr 25044 by user Julien Jerphanion jjerphan Enhancement All estimators that internally rely on OpenMP multi threading via Cython now use a number of threads equal to the number of physical instead of logical cores by default In the past we observed that using as many threads as logical cores on SMT hosts could sometimes cause severe performance problems depending on the algorithms and the shape of the data Note that it is still possible to manually adjust the number of threads used by OpenMP as documented in ref parallelism pr 26082 by user J r mie du Boisberranger jeremiedbb and user Olivier Grisel ogrisel Experimental Under Development MajorFeature ref Metadata routing metadata routing s related base methods are included in this release This feature is only available via the enable metadata routing feature flag which can be enabled using func sklearn set config and func sklearn config context For now this feature is mostly useful for third party developers to prepare their code base for metadata routing and we strongly recommend that they also hide it behind the same feature flag rather than having it enabled by default pr 24027 by Adrin Jalali user Benjamin Bossan BenjaminBossan and user Omar Salman OmarManzoor Changelog Entries should be grouped by module in alphabetic order and prefixed with one of the labels MajorFeature Feature Efficiency Enhancement Fix or API see whats new rst for descriptions 
..
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with: :pr:`123456` by :user:`Joe Bloggs <joeongithub>`,
    where 123456 is the *pull request* number, not the issue number.

:mod:`sklearn`
..............

- |Feature| Added a new option ``skip_parameter_validation`` to the function
  :func:`sklearn.set_config` and context manager :func:`sklearn.config_context`
  that allows to skip the validation of the parameters passed to the estimators
  and public functions. This can be useful to speed up the code but should be
  used with care because it can lead to unexpected behaviors or raise obscure
  error messages when setting invalid parameters.
  :pr:`25815` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.base`
...................

- |Feature| A ``__sklearn_clone__`` protocol is now available to override the
  default behavior of :func:`base.clone`. :pr:`24568` by `Thomas Fan`_.

- |Fix| :class:`base.TransformerMixin` now keeps a namedtuple's class if
  ``transform`` returns a namedtuple. :pr:`26121` by `Thomas Fan`_.

:mod:`sklearn.calibration`
..........................

- |Fix| :class:`calibration.CalibratedClassifierCV` now does not enforce sample
  alignment on ``fit_params``. :pr:`25805` by `Adrin Jalali`_.

:mod:`sklearn.cluster`
......................

- |MajorFeature| Added :class:`cluster.HDBSCAN`, a modern hierarchical
  density-based clustering algorithm. Similarly to :class:`cluster.OPTICS`, it
  can be seen as a generalization of :class:`cluster.DBSCAN` by allowing for
  hierarchical instead of flat clustering, however it varies in its approach
  from :class:`cluster.OPTICS`. This algorithm is very robust with respect to
  its hyperparameters' values and can be used on a wide variety of data without
  much, if any, tuning. This implementation is an adaptation from the original
  implementation of HDBSCAN in `scikit-learn-contrib/hdbscan
  <https://github.com/scikit-learn-contrib/hdbscan>`_, by
  :user:`Leland McInnes <lmcinnes>` et al.
  :pr:`26385` by :user:`Meekail Zain <micky774>`.

- |Enhancement| The ``sample_weight`` parameter now will be used in centroids
  initialization for :class:`cluster.KMeans`, :class:`cluster.BisectingKMeans`
  and :class:`cluster.MiniBatchKMeans`. This change will break backward
  compatibility, since numbers generated from same random seeds will be
  different. :pr:`25752` by :user:`Gleb Levitski <glevv>`,
  :user:`Jérémie du Boisberranger <jeremiedbb>`,
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :class:`cluster.KMeans`, :class:`cluster.MiniBatchKMeans` and
  :func:`cluster.k_means` now correctly handle the combination of
  ``n_init="auto"`` and ``init`` being an array-like, running one
  initialization in that case. :pr:`26657` by :user:`Binesh Bannerjee <bnsh>`.

- |API| The ``sample_weight`` parameter in ``predict`` for
  :meth:`cluster.KMeans.predict` and :meth:`cluster.MiniBatchKMeans.predict`
  is now deprecated and will be removed in v1.5.
  :pr:`25251` by :user:`Gleb Levitski <glevv>`.

- |API| The ``Xred`` argument in
  :func:`cluster.FeatureAgglomeration.inverse_transform` is renamed to ``Xt``
  and will be removed in v1.5. :pr:`26503` by `Adrin Jalali`_.

:mod:`sklearn.compose`
......................

- |Fix| :class:`compose.ColumnTransformer` raises an informative error when the
  individual transformers of ``ColumnTransformer`` output pandas dataframes
  with indexes that are not consistent with each other and the output is
  configured to be pandas. :pr:`26286` by `Thomas Fan`_.

- |Fix| :class:`compose.ColumnTransformer` correctly sets the output of the
  remainder when ``set_output`` is called. :pr:`26323` by `Thomas Fan`_.

:mod:`sklearn.covariance`
.........................

- |Fix| Allows ``alpha=0`` in :class:`covariance.GraphicalLasso` to be
  consistent with :func:`covariance.graphical_lasso`.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

- |Fix| :func:`covariance.empirical_covariance` now gives an informative error
  message when input is not appropriate.
  :pr:`26108` by :user:`Quentin Barthélemy <qbarthelemy>`.

- |API| Deprecates ``cov_init`` in :func:`covariance.graphical_lasso` in 1.3
  since the parameter has no effect. It will be removed in 1.5.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

- |API| Adds ``costs_`` fitted attribute in :class:`covariance.GraphicalLasso`
  and :class:`covariance.GraphicalLassoCV`.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

- |API| Adds ``covariance`` parameter in :class:`covariance.GraphicalLasso`.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

- |API| Adds ``eps`` parameter in :class:`covariance.GraphicalLasso`,
  :func:`covariance.graphical_lasso` and :class:`covariance.GraphicalLassoCV`.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

:mod:`sklearn.datasets`
.......................

- |Enhancement| Allows to overwrite the parameters used to open the ARFF file
  using the parameter ``read_csv_kwargs`` in :func:`datasets.fetch_openml` when
  using the pandas parser.
  :pr:`26433` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`datasets.fetch_openml` returns improved data types when
  ``as_frame=True`` and ``parser="liac-arff"``. :pr:`26386` by `Thomas Fan`_.

- |Fix| Following the ARFF specs, only the marker ``"?"`` is now considered as
  a missing value when opening ARFF files fetched using
  :func:`datasets.fetch_openml` when using the pandas parser. The parameter
  ``read_csv_kwargs`` allows to overwrite this behaviour.
  :pr:`26551` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`datasets.fetch_openml` will consistently use ``np.nan`` as
  missing marker with both parsers ``"pandas"`` and ``"liac-arff"``.
  :pr:`26579` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The ``data_transposed`` argument of
  :func:`datasets.make_sparse_coded_signal` is deprecated and will be removed
  in v1.5. :pr:`25784` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.decomposition`
............................

- |Efficiency| :class:`decomposition.MiniBatchDictionaryLearning` and
  :class:`decomposition.MiniBatchSparsePCA` are now faster for small batch
  sizes by avoiding duplicate validations.
  :pr:`25490` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :class:`decomposition.DictionaryLearning` now accepts the
  parameter ``callback`` for consistency with the function
  :func:`decomposition.dict_learning`.
  :pr:`24871` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| Treat more consistently small values in the ``W`` and ``H`` matrices
  during the ``fit`` and ``transform`` steps of :class:`decomposition.NMF` and
  :class:`decomposition.MiniBatchNMF`, which can produce different results than
  previous versions.
  :pr:`25438` by :user:`Yotam Avidar-Constantini <yotamcons>`.

- |API| The ``W`` argument in :func:`decomposition.NMF.inverse_transform` and
  :class:`decomposition.MiniBatchNMF.inverse_transform` is renamed to ``Xt``
  and will be removed in v1.5. :pr:`26503` by `Adrin Jalali`_.

:mod:`sklearn.discriminant_analysis`
....................................

- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now
  supports `PyTorch <https://pytorch.org/>`_. See :ref:`array_api` for more
  details. :pr:`25956` by `Thomas Fan`_.

:mod:`sklearn.ensemble`
.......................

- |Feature| :class:`ensemble.HistGradientBoostingRegressor` now supports the
  Gamma deviance loss via ``loss="gamma"``. Using the Gamma deviance as loss
  function comes in handy for modelling skewed distributed, strictly positive
  valued targets. :pr:`22409` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| Compute a custom out-of-bag score by passing a callable to
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier` and
  :class:`ensemble.ExtraTreesRegressor`. :pr:`25177` by `Tim Head`_.

- |Feature| :class:`ensemble.GradientBoostingClassifier` now exposes
  out-of-bag scores via the ``oob_scores_`` or ``oob_score_`` attributes.
  :pr:`24882` by :user:`Ashwin Mathur <awinml>`.

- |Efficiency| :class:`ensemble.IsolationForest` predict time is now faster
  (typically by a factor of 8 or more). Internally, the estimator now
  precomputes decision path lengths per tree at ``fit`` time. It is therefore
  not possible to load an estimator trained with scikit-learn 1.2 to make it
  predict with scikit-learn 1.3: retraining with scikit-learn 1.3 is required.
  :pr:`25186` by :user:`Felipe Breve Siola <fsiola>`.

- |Efficiency| :class:`ensemble.RandomForestClassifier` and
  :class:`ensemble.RandomForestRegressor` with ``warm_start=True`` now only
  recompute out-of-bag scores when there are actually more ``n_estimators``
  in subsequent ``fit`` calls.
  :pr:`26318` by :user:`Joshua Choo Yun Keat <choo8>`.

- |Enhancement| :class:`ensemble.BaggingClassifier` and
  :class:`ensemble.BaggingRegressor` expose the ``allow_nan`` tag from the
  underlying estimator. :pr:`25506` by `Thomas Fan`_.

- |Fix| :meth:`ensemble.RandomForestClassifier.fit` sets ``max_samples = 1``
  when ``max_samples`` is a float and ``round(n_samples * max_samples) < 1``.
  :pr:`25601` by :user:`Jan Fidor <JanFidor>`.

- |Fix| :meth:`ensemble.IsolationForest.fit` no longer warns about missing
  feature names when called with ``contamination`` not ``"auto"`` on a pandas
  dataframe. :pr:`25931` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Fix| :class:`ensemble.HistGradientBoostingRegressor` and
  :class:`ensemble.HistGradientBoostingClassifier` treats negative values for
  categorical features consistently as missing values, following LightGBM's
  and pandas' conventions. :pr:`25629` by `Thomas Fan`_.

- |Fix| Fix deprecation of ``base_estimator`` in
  :class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`
  that was introduced in :pr:`23819`.
  :pr:`26242` by :user:`Marko Toplak <markotoplak>`.

:mod:`sklearn.exceptions`
.........................

- |Feature| Added :class:`exceptions.InconsistentVersionWarning` which is
  raised when a scikit-learn estimator is unpickled with a scikit-learn version
  that is inconsistent with the scikit-learn version the estimator was pickled
  with. :pr:`25297` by `Thomas Fan`_.

:mod:`sklearn.feature_extraction`
.................................

- |API| :class:`feature_extraction.image.PatchExtractor` now follows the
  transformer API of scikit-learn. This class is defined as a stateless
  transformer, meaning that it is not required to call ``fit`` before calling
  ``transform``. Parameter validation only happens at ``fit`` time.
  :pr:`24230` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_selection`
................................

- |Enhancement| All selectors in :mod:`sklearn.feature_selection` will preserve
  a DataFrame's dtype when transformed. :pr:`25102` by `Thomas Fan`_.

- |Fix| :class:`feature_selection.SequentialFeatureSelector`'s ``cv`` parameter
  now supports generators. :pr:`25973` by :user:`Yao Xiao <Charlie-XIAO>`.

:mod:`sklearn.impute`
.....................

- |Enhancement| Added the parameter ``fill_value`` to
  :class:`impute.IterativeImputer`.
  :pr:`25232` by :user:`Thijs van Weezel <ValueInvestorThijs>`.

- |Fix| :class:`impute.IterativeImputer` now correctly preserves the Pandas
  Index when ``set_config(transform_output="pandas")``.
  :pr:`26454` by `Thomas Fan`_.

:mod:`sklearn.inspection`
.........................

- |Enhancement| Added support for ``sample_weight`` in
  :func:`inspection.partial_dependence` and
  :meth:`inspection.PartialDependenceDisplay.from_estimator`. This allows for
  weighted averaging when aggregating for each value of the grid we are making
  the inspection on. The option is only available when ``method`` is set to
  ``brute``. :pr:`25209` and :pr:`26644` by :user:`Carlo Lemos <vitaliset>`.

- |API| :func:`inspection.partial_dependence` returns a :class:`utils.Bunch`
  with new key: ``grid_values``. The ``values`` key is deprecated in favor of
  ``grid_values`` and the ``values`` key will be removed in 1.5.
  :pr:`21809` and :pr:`25732` by `Thomas Fan`_.

:mod:`sklearn.kernel_approximation`
...................................

- |Fix| :class:`kernel_approximation.AdditiveChi2Sampler` is now stateless. The
  ``sample_interval_`` attribute is deprecated and will be removed in 1.5.
  :pr:`25190` by :user:`Vincent Maladière <Vincent-Maladiere>`.

:mod:`sklearn.linear_model`
...........................

- |Efficiency| Avoid data scaling when ``sample_weight=None`` and other
  unnecessary data copies and unexpected dense to sparse data conversion in
  :class:`linear_model.LinearRegression`.
  :pr:`26207` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDOneClassSVM`
  now preserve dtype for ``numpy.float32``.
  :pr:`25587` by :user:`Omar Salman <OmarManzoor>`.

- |Enhancement| The ``n_iter_`` attribute has been included in
  :class:`linear_model.ARDRegression` to expose the actual number of iterations
  required to reach the stopping criterion.
  :pr:`25697` by :user:`John Pangas <jpangas>`.

- |Fix| Use a more robust criterion to detect convergence of
  :class:`linear_model.LogisticRegression` with ``penalty="l1"`` and
  ``solver="liblinear"`` on linearly separable problems.
  :pr:`25214` by `Tom Dupre la Tour`_.

- |Fix| Fix a crash when calling ``fit`` on
  :class:`linear_model.LogisticRegression` with ``solver="newton-cholesky"``
  and ``max_iter=0``, which failed to inspect the state of the model prior to
  the first parameter update. :pr:`26653` by :user:`Olivier Grisel <ogrisel>`.

- |API| Deprecates ``n_iter`` in favor of ``max_iter`` in
  :class:`linear_model.BayesianRidge` and :class:`linear_model.ARDRegression`.
  ``n_iter`` will be removed in scikit-learn 1.5. This change makes those
  estimators consistent with the rest of estimators.
  :pr:`25697` by :user:`John Pangas <jpangas>`.

:mod:`sklearn.manifold`
.......................

- |Fix| :class:`manifold.Isomap` now correctly preserves the Pandas Index when
  ``set_config(transform_output="pandas")``. :pr:`26454` by `Thomas Fan`_.

:mod:`sklearn.metrics`
......................

- |Feature| Adds ``zero_division=np.nan`` to multiple classification metrics:
  :func:`metrics.precision_score`, :func:`metrics.recall_score`,
  :func:`metrics.f1_score`, :func:`metrics.fbeta_score`,
  :func:`metrics.precision_recall_fscore_support`,
  :func:`metrics.classification_report`. When ``zero_division=np.nan`` and
  there is a zero division, the metric is undefined and is excluded from
  averaging. When not used for averages, the value returned is ``np.nan``.
  :pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.

- |Feature| :func:`metrics.average_precision_score` now supports the multiclass
  case. :pr:`17388` by :user:`Geoffrey Bolmier <gbolmier>` and :pr:`24769` by
  :user:`Ashwin Mathur <awinml>`.

- |Efficiency| The computation of the expected mutual information in
  :func:`metrics.adjusted_mutual_info_score` is now faster when the number of
  unique labels is large and its memory usage is reduced in general.
  :pr:`25713` by :user:`Kshitij Mathur <Kshitij68>`,
  :user:`Guillaume Lemaitre <glemaitre>`, :user:`Omar Salman <OmarManzoor>` and
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :func:`metrics.silhouette_samples` now accepts a sparse matrix
  of pairwise distances between samples, or a feature array.
  :pr:`18723` by :user:`Sahil Gupta <sahilgupta2105>` and :pr:`24677` by
  :user:`Ashwin Mathur <awinml>`.

- |Enhancement| A new parameter ``drop_intermediate`` was added to
  :func:`metrics.precision_recall_curve`,
  :func:`metrics.PrecisionRecallDisplay.from_estimator` and
  :func:`metrics.PrecisionRecallDisplay.from_predictions`, which drops some
  suboptimal thresholds to create lighter precision-recall curves.
  :pr:`24668` by :user:`dberenbaum`.

- |Enhancement| :meth:`metrics.RocCurveDisplay.from_estimator` and
  :meth:`metrics.RocCurveDisplay.from_predictions` now accept two new keywords,
  ``plot_chance_level`` and ``chance_level_kw``, to plot the baseline chance
  level. This line is exposed in the ``chance_level_`` attribute.
  :pr:`25987` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Enhancement| :meth:`metrics.PrecisionRecallDisplay.from_estimator` and
  :meth:`metrics.PrecisionRecallDisplay.from_predictions` now accept two new
  keywords, ``plot_chance_level`` and ``chance_level_kw``, to plot the baseline
  chance level. This line is exposed in the ``chance_level_`` attribute.
  :pr:`26019` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Fix| :func:`metrics.pairwise.manhattan_distances` now supports readonly
  sparse datasets. :pr:`25432` by :user:`Julien Jerphanion <jjerphan>`.

- |Fix| Fixed :func:`metrics.classification_report` so that empty input will
  return ``np.nan``. Previously, "macro avg" and "weighted avg" would return
  e.g. ``f1-score: np.nan`` and ``f1-score: 0.0``, being inconsistent. Now,
  they both return ``np.nan``.
  :pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.

- |Fix| :func:`metrics.ndcg_score` now gives a meaningful error message for
  input of length 1.
  :pr:`25672` by :user:`Lene Preuss <lene>` and :user:`Wei-Chun Chu <wcchu>`.

- |Fix| :func:`metrics.log_loss` raises a warning if the values of the
  parameter ``y_pred`` are not normalized, instead of actually normalizing them
  in the metric. Starting from 1.5 this will raise an error.
  :pr:`25299` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| In :func:`metrics.roc_curve`, use the threshold value ``np.inf``
  instead of arbitrary ``max(y_score) + 1``. This threshold is associated with
  the ROC curve point ``tpr=0`` and ``fpr=0``.
  :pr:`26194` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| The ``'matching'`` metric has been removed when using SciPy>=1.9 to be
  consistent with ``scipy.spatial.distance``, which does not support
  ``'matching'`` anymore.
  :pr:`26264` by :user:`Barata T. Onggo <magnusbarata>`.

- |API| The ``eps`` parameter of the :func:`metrics.log_loss` has been
  deprecated and will be removed in 1.5.
  :pr:`25299` by :user:`Omar Salman <OmarManzoor>`.

:mod:`sklearn.gaussian_process`
...............................

- |Fix| :class:`gaussian_process.GaussianProcessRegressor` has a new argument
  ``n_targets``, which is used to decide the number of outputs when sampling
  from the prior distributions. :pr:`23099` by :user:`Zhehao Liu <MaxwellLZH>`.

:mod:`sklearn.mixture`
......................

- |Efficiency| :class:`mixture.GaussianMixture` is more efficient now and will
  bypass unnecessary initialization if the weights, means, and precisions are
  given by users. :pr:`26021` by :user:`Jiawei Zhang <jiawei-zhang-a>`.

:mod:`sklearn.model_selection`
..............................

- |MajorFeature| Added the class
  :class:`model_selection.ValidationCurveDisplay` that allows easy plotting of
  validation curves obtained by the function
  :func:`model_selection.validation_curve`.
  :pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The parameter ``log_scale`` in the class
  :class:`model_selection.LearningCurveDisplay` has been deprecated in 1.3 and
  will be removed in 1.5. The default scale can be overridden by setting it
  directly on the ``ax`` object and will be set automatically from the spacing
  of the data points otherwise.
  :pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :func:`model_selection.cross_validate` accepts a new parameter
  ``return_indices`` to return the train-test indices of each cv split.
  :pr:`25659` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.multioutput`
..........................

- |Fix| ``getattr`` on :meth:`multioutput.MultiOutputRegressor.partial_fit` and
  :meth:`multioutput.MultiOutputClassifier.partial_fit` now correctly raise an
  ``AttributeError`` if done before calling ``fit``.
  :pr:`26333` by `Adrin Jalali`_.

:mod:`sklearn.naive_bayes`
..........................

- |Fix| :class:`naive_bayes.GaussianNB` does not raise anymore a
  ``ZeroDivisionError`` when the provided ``sample_weight`` reduces the problem
  to a single class in ``fit``.
  :pr:`24140` by :user:`Jonathan Ohayon <Johayon>` and
  :user:`Chiara Marmo <cmarmo>`.

:mod:`sklearn.neighbors`
........................

- |Enhancement| The performance of
  :meth:`neighbors.KNeighborsClassifier.predict` and of
  :meth:`neighbors.KNeighborsClassifier.predict_proba` has been improved when
  ``n_neighbors`` is large and ``algorithm="brute"`` with non-Euclidean
  metrics. :pr:`24076` by :user:`Meekail Zain <micky774>`,
  :user:`Julien Jerphanion <jjerphan>`.

- |Fix| Remove support for ``KulsinskiDistance`` in
  :class:`neighbors.BallTree`. This dissimilarity is not a metric and cannot be
  supported by the BallTree.
  :pr:`25417` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The support for metrics other than ``euclidean`` and ``manhattan`` and
  for callables in :class:`neighbors.NearestCentroid` is deprecated and will be
  removed in version 1.5.
  :pr:`24083` by :user:`Valentin Laurent <Valentin-Laurent>`.

:mod:`sklearn.neural_network`
.............................

- |Fix| :class:`neural_network.MLPRegressor` and
  :class:`neural_network.MLPClassifier` report the right ``n_iter_`` when
  ``warm_start=True``. It corresponds to the number of iterations performed on
  the current call to ``fit`` instead of the total number of iterations
  performed since the initialization of the estimator.
  :pr:`25443` by :user:`Marvin Krawutschke <Marvvxi>`.

:mod:`sklearn.pipeline`
.......................

- |Feature| :class:`pipeline.FeatureUnion` can now use indexing notation (e.g.
  ``feature_union["scalar"]``) to access transformers by name.
  :pr:`25093` by `Thomas Fan`_.

- |Feature| :class:`pipeline.FeatureUnion` can now access the
  ``feature_names_in_`` attribute if the ``X`` value seen during ``fit`` has a
  ``columns`` attribute and all columns are strings, e.g. when ``X`` is a
  pandas DataFrame. :pr:`25220` by :user:`Ian Thompson <it176131>`.

- |Fix| :meth:`pipeline.Pipeline.fit_transform` now raises an
  ``AttributeError`` if the last step of the pipeline does not support
  ``fit_transform``. :pr:`26325` by `Adrin Jalali`_.

:mod:`sklearn.preprocessing`
............................

- |MajorFeature| Introduces :class:`preprocessing.TargetEncoder` which is a
  categorical encoding based on target mean conditioned on the value of the
  category. :pr:`25334` by `Thomas Fan`_.

- |Feature| :class:`preprocessing.OrdinalEncoder` now supports grouping
  infrequent categories into a single feature. Grouping infrequent categories
  is enabled by specifying how to select infrequent categories with
  ``min_frequency`` or ``max_categories``. :pr:`25677` by `Thomas Fan`_.

- |Enhancement| :class:`preprocessing.PolynomialFeatures` now calculates the
  number of expanded terms a priori when dealing with sparse ``csr`` matrices
  in order to optimize the choice of ``dtype`` for ``indices`` and ``indptr``.
  It can now output ``csr`` matrices with ``np.int32`` ``indices``/``indptr``
  components when there are few enough elements, and will automatically use
  ``np.int64`` for sufficiently large matrices.
  :pr:`20524` by :user:`niuk-a <niuk-a>` and :pr:`23731` by
  :user:`Meekail Zain <micky774>`.

- |Enhancement| A new parameter ``sparse_output`` was added to
  :class:`preprocessing.SplineTransformer`, available as of SciPy 1.8. If
  ``sparse_output=True``, :class:`preprocessing.SplineTransformer` returns a
  sparse CSR matrix. :pr:`24145` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| Adds a ``feature_name_combiner`` parameter to
  :class:`preprocessing.OneHotEncoder`. This specifies a custom callable to
  create feature names to be returned by
  :meth:`preprocessing.OneHotEncoder.get_feature_names_out`. The callable
  combines input arguments ``(input_feature, category)`` to a string.
  :pr:`22506` by :user:`Mario Kostelac <mariokostelac>`.

- |Enhancement| Added support for ``sample_weight`` in
  :class:`preprocessing.KBinsDiscretizer`. This allows specifying the parameter
  ``sample_weight`` for each sample to be used while fitting. The option is
  only available when ``strategy`` is set to ``quantile`` and ``kmeans``.
  :pr:`24935` by :user:`Seladus <seladus>`,
  :user:`Guillaume Lemaitre <glemaitre>` and
  :user:`Dea María Léon <deamarialeon>`,
  :pr:`25257` by :user:`Gleb Levitski <glevv>`.

- |Enhancement| Subsampling through the ``subsample`` parameter can now be used
  in :class:`preprocessing.KBinsDiscretizer` regardless of the strategy used.
  :pr:`26424` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| :class:`preprocessing.PowerTransformer` now correctly preserves the
  Pandas Index when ``set_config(transform_output="pandas")``.
  :pr:`26454` by `Thomas Fan`_.

- |Fix| :class:`preprocessing.PowerTransformer` now correctly raises an error
  when using ``method="box-cox"`` on data with a constant ``np.nan`` column.
  :pr:`26400` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Fix| :class:`preprocessing.PowerTransformer` with ``method="yeo-johnson"``
  now leaves constant features unchanged, instead of transforming with an
  arbitrary value for the ``lambdas_`` fitted parameter.
  :pr:`26566` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| The default value of the ``subsample`` parameter of
  :class:`preprocessing.KBinsDiscretizer` will change from ``None`` to
  ``200_000`` in version 1.5 when ``strategy="kmeans"`` or
  ``strategy="uniform"``.
  :pr:`26424` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.svm`
..................

- |API| The ``dual`` parameter now accepts the ``"auto"`` option for
  :class:`svm.LinearSVC` and :class:`svm.LinearSVR`.
  :pr:`26093` by :user:`Gleb Levitski <glevv>`.

:mod:`sklearn.tree`
...................

- |MajorFeature| :class:`tree.DecisionTreeRegressor` and
  :class:`tree.DecisionTreeClassifier` support missing values when
  ``splitter="best"`` and criterion is ``gini``, ``entropy``, or ``log_loss``
  for classification, or ``squared_error``, ``friedman_mse``, or ``poisson``
  for regression. :pr:`23595`, :pr:`26376` by `Thomas Fan`_.

- |Enhancement| Adds a ``class_names`` parameter to :func:`tree.export_text`.
  This allows specifying the parameter ``class_names`` for each target class in
  ascending numerical order.
  :pr:`25387` by :user:`William M <Akbeeh>` and
  :user:`crispinlogan <crispinlogan>`.

- |Fix| :func:`tree.export_graphviz` and :func:`tree.export_text` now accept
  ``feature_names`` and ``class_names`` as array-like rather than lists.
  :pr:`26289` by :user:`Yao Xiao <Charlie-XIAO>`.

:mod:`sklearn.utils`
....................

- |Fix| Fixes :func:`utils.check_array` to properly convert pandas extension
  arrays. :pr:`25813` and :pr:`26106` by `Thomas Fan`_.

- |Fix| :func:`utils.check_array` now supports pandas DataFrames with extension
  arrays and object dtypes by returning an ndarray with object dtype.
  :pr:`25814` by `Thomas Fan`_.

- |API| ``utils.estimator_checks.check_transformers_unfitted_stateless`` has
  been introduced to ensure stateless transformers don't raise
  ``NotFittedError`` during ``transform`` with no prior call to ``fit`` or
  ``fit_transform``.
  :pr:`25190` by :user:`Vincent Maladière <Vincent-Maladiere>`.

- |API| A ``FutureWarning`` is now raised when instantiating a class which
  inherits from a deprecated base class (i.e. decorated by
  :class:`utils.deprecated`) and which overrides the ``__init__`` method.
  :pr:`25733` by :user:`Brigitta Sipőcz <bsipocz>` and
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.semi_supervised`
..............................

- |Enhancement| :meth:`semi_supervised.LabelSpreading.fit` and
  :meth:`semi_supervised.LabelPropagation.fit` now accept sparse matrices.
  :pr:`19664` by :user:`Kaushik Amar Das <cozek>`.

Miscellaneous
.............

- |Enhancement| Replace obsolete exceptions ``EnvironmentError``, ``IOError``
  and ``WindowsError``.
  :pr:`26466` by :user:`Dimitri Papadopoulos ORfanos <DimitriPapadopoulos>`.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 1.2, including:

2357juan, Abhishek Singh Kushwah, Adam Handke, Adam Kania, Adam Li, adienes,
Admir Demiraj, adoublet, Adrin Jalali, A.H.Mansouri, Ahmedbgh, Ala-Na, Alex
Buzenet, AlexL, Ali H. El-Kassas, amay, Andrés Simon, André Pedersen, Andrew
Wang, Ankur Singh, annegnx, Ansam Zedan, Anthony22-dev, Artur Hermano, Arturo
Amor, as-90, ashah002, Ashish Dutt, Ashwin Mathur, AymericBasset, Azaria
Gebremichael, Barata Tripramudya Onggo, Benedek Harsanyi, Benjamin Bossan,
Bharat Raghunathan, Binesh Bannerjee, Boris Feld, Brendan Lu, Brevin Kunde,
cache-missing, Camille Troillard, Carla J, carlo, Carlo Lemos, c-git, Changyao
Chen, Chiara Marmo, Christian Lorentzen, Christian Veenhuis, Christine P. Chai,
crispinlogan, Da-Lan, DanGonite57, Dave Berenbaum, davidblnc, david-cortes,
Dayne, Dea María Léon, Denis, Dimitri Papadopoulos Orfanos, Dimitris Litsidis,
Dmitry Nesterov, Dominic Fox, Dominik Prodinger, Edern, Ekaterina Butyugina,
Elabonga Atuo, Emir, farhan khan, Felipe Siola, futurewarning, Gael Varoquaux,
genvalen, Gleb Levitski, Guillaume Lemaitre, gunesbayir, Haesun Park,
hujiahong726, i-aki-y, Ian Thompson, Ido M, Ily, Irene, Jack McIvor, jakirkham,
James Dean, JanFidor, Jarrod Millman, JB Mountford, Jérémie du Boisberranger,
Jessicakk0711, Jiawei Zhang, Joey Ortiz, JohnathanPi, John Pangas, Joshua Choo
Yun Keat, Joshua Hedlund, JuliaSchoepp, Julien Jerphanion, jygerardy, ka00ri,
Kaushik Amar Das, Kento Nozawa, Kian Eliasi, Kilian Kluge, Lene Preuss, Linus,
Logan Thomas, Loic Esteve, Louis Fouquet, Lucy Liu, Madhura Jayaratne, Marc
Torrellas Socastro, Maren Westermann, Mario Kostelac, Mark Harfouche, Marko
Toplak, Marvin Krawutschke, Masanori Kanazu, mathurinm, Matt Haberland, Max
Halford, maximeSaur, Maxwell Liu, m. bou, mdarii, Meekail Zain, Mikhail Iljin,
murezzda, Nawazish Alam, Nicola Fanelli, Nightwalkx, Nikolay Petrov, Nishu
Choudhary, NNLNR, npache, Olivier Grisel, Omar Salman, ouss1508, PAB, Pandata,
partev, Peter Piontek, Phil, pnucci, Pooja M, Pooja Subramaniam, precondition,
Quentin Barthélemy, Rafal Wojdyla, Raghuveer Bhat, Rahil Parikh, Ralf Gommers,
ram vikram singh, Rushil Desai, Sadra Barikbin, SANJAI_3, Sashka Warner, Scott
Gigante, Scott Gustafson, searchforpassion, Seoeun Hong, Shady el Gewily, Shiva
chauhan, Shogo Hida, Shreesha Kumar Bhat, sonnivs, Sortofamudkip, Stanislav
(Stanley) Modrak, Stefanie Senger, Steven Van Vaerenbergh, Tabea Kossen,
Théophile Baranger, Thijs van Weezel, Thomas A Caswell, Thomas Germer, Thomas
J. Fan, Tim Head, Tim P, Tom Dupré la Tour, tomiock, tspeng, Valentin Laurent,
Veghit, VIGNESH D, Vijeth Moudgalya, Vinayak Mehta, Vincent M, Vincent, violet,
Vyom Pathak, William M, windiana42, Xiao Yuan, Yao Xiao, Yaroslav Halchenko,
Yotam Avidar-Constantini, Yuchen Zhou, Yusuf Raji, zeeshan lone
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.20
============
.. warning::
Version 0.20 is the last version of scikit-learn to support Python 2.7 and Python 3.4.
Scikit-learn 0.21 will require Python 3.5 or higher.
.. include:: changelog_legend.inc
.. _changes_0_20_4:
Version 0.20.4
==============
**July 30, 2019**
This is a bug-fix release with some bug fixes applied to version 0.20.3.
Changelog
---------
The bundled version of joblib was upgraded from 0.13.0 to 0.13.2.
:mod:`sklearn.cluster`
..............................
- |Fix| Fixed a bug in :class:`cluster.KMeans` where KMeans++ initialisation
could rarely result in an IndexError. :issue:`11756` by `Joel Nothman`_.
:mod:`sklearn.compose`
.......................
- |Fix| Fixed an issue in :class:`compose.ColumnTransformer` where using
  DataFrames whose column order differs between ``fit`` and
  ``transform`` could lead to silently passing incorrect columns to the
  ``remainder`` transformer.
  :pr:`14237` by :user:`Andreas Schuderer <schuderer>`.
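As a sketch of the behaviour this fix protects (data and names are illustrative, not from the original report), the ``remainder`` transformer receives the columns not claimed by any named transformer:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 30.0]})
ct = ColumnTransformer([("scale", StandardScaler(), ["a"])],
                       remainder="passthrough")
out = ct.fit_transform(df)
# Column "b" reaches the remainder untouched; with the fix, a DataFrame whose
# column order changed between fit and transform can no longer silently route
# the wrong column into the remainder transformer.
print(out[:, 1])
```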
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed a bug in :class:`cross_decomposition.CCA` improving numerical
stability when `Y` is close to zero. :pr:`13903` by `Thomas Fan`_.
:mod:`sklearn.model_selection`
..............................
- |Fix| Fixed a bug where :class:`model_selection.StratifiedKFold`
shuffles each class's samples with the same ``random_state``,
making ``shuffle=True`` ineffective.
:issue:`13124` by :user:`Hanmin Qin <qinhanmin2014>`.
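For reference, stratified splitting with ``shuffle=True`` is expected to shuffle samples within each class while preserving the class balance of every fold; a small sanity check on illustrative data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((12, 1))
y = np.array([0, 1] * 6)

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
# each of the 3 test folds holds 4 samples: 2 of class 0 and 2 of class 1
fold_counts = [np.bincount(y[test]) for _, test in skf.split(X, y)]
print(fold_counts)
```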
:mod:`sklearn.neighbors`
........................
- |Fix| Fixed a bug in :class:`neighbors.KernelDensity` which could not be
restored from a pickle if ``sample_weight`` had been used.
:issue:`13772` by :user:`Aditya Vyas <aditya1702>`.
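The fixed behaviour can be sketched with a pickle round-trip (illustrative data only):

```python
import pickle
import numpy as np
from sklearn.neighbors import KernelDensity

X = np.array([[0.0], [1.0], [2.0]])
kde = KernelDensity(bandwidth=0.5).fit(X, sample_weight=[1.0, 2.0, 1.0])

# after the fix, an estimator fitted with sample_weight survives a pickle
# round-trip and scores identically
restored = pickle.loads(pickle.dumps(kde))
print(np.allclose(kde.score_samples(X), restored.score_samples(X)))
```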
.. _changes_0_20_3:
Version 0.20.3
==============
**March 1, 2019**
This is a bug-fix release with some minor documentation improvements and
enhancements to features released in 0.20.0.
Changelog
---------
:mod:`sklearn.cluster`
......................
- |Fix| Fixed a bug in :class:`cluster.KMeans` where computation was single
threaded when `n_jobs > 1` or `n_jobs = -1`.
:issue:`12949` by :user:`Prabakaran Kumaresshan <nixphix>`.
:mod:`sklearn.compose`
......................
- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` to handle
negative indexes in the columns list of the transformers.
:issue:`12946` by :user:`Pierre Tallotte <pierretallotte>`.
:mod:`sklearn.covariance`
.........................
- |Fix| Fixed a regression in :func:`covariance.graphical_lasso` so that
the case `n_features=2` is handled correctly. :issue:`13276` by
:user:`Aurélien Bellet <bellet>`.
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed a bug in :func:`decomposition.sparse_encode` where computation was single
threaded when `n_jobs > 1` or `n_jobs = -1`.
:issue:`13005` by :user:`Prabakaran Kumaresshan <nixphix>`.
:mod:`sklearn.datasets`
............................
- |Efficiency| :func:`sklearn.datasets.fetch_openml` now loads data by
streaming, avoiding high memory usage. :issue:`13312` by `Joris Van den
Bossche`_.
:mod:`sklearn.feature_extraction`
.................................
- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer` which
would result in the sparse feature matrix having conflicting `indptr` and
`indices` precisions under very large vocabularies. :issue:`11295` by
:user:`Gabriel Vacaliuc <gvacaliuc>`.
:mod:`sklearn.impute`
.....................
- |Fix| add support for non-numeric data in
:class:`sklearn.impute.MissingIndicator` which was not supported while
:class:`sklearn.impute.SimpleImputer` was supporting this for some
imputation strategies.
:issue:`13046` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.linear_model`
...........................
- |Fix| Fixed a bug in :class:`linear_model.MultiTaskElasticNet` and
:class:`linear_model.MultiTaskLasso` which were breaking when
``warm_start = True``. :issue:`12360` by :user:`Aakanksha Joshi <joaak>`.
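A minimal sketch of the repaired pattern (illustrative data; a second ``fit`` with ``warm_start=True`` reuses the previous solution instead of breaking):

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.RandomState(0)
X, Y = rng.rand(20, 3), rng.rand(20, 2)

est = MultiTaskLasso(alpha=0.1, warm_start=True)
est.fit(X, Y)
est.fit(X, Y)  # warm-started refit; no crash after the fix
print(est.coef_.shape)
```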
:mod:`sklearn.preprocessing`
............................
- |Fix| Fixed a bug in :class:`preprocessing.KBinsDiscretizer` where
``strategy='kmeans'`` fails with an error during transformation due to unsorted
bin edges. :issue:`13134` by :user:`Sandro Casagrande <SandroCasagrande>`.
- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where the
deprecation of ``categorical_features`` was handled incorrectly in
combination with ``handle_unknown='ignore'``.
:issue:`12881` by `Joris Van den Bossche`_.
- |Fix| Bins whose width are too small (i.e., <= 1e-8) are removed
with a warning in :class:`preprocessing.KBinsDiscretizer`.
:issue:`13165` by :user:`Hanmin Qin <qinhanmin2014>`.
:mod:`sklearn.svm`
..................
- |Fix| Fixed a bug in :class:`svm.SVC`, :class:`svm.NuSVC`, :class:`svm.SVR`,
:class:`svm.NuSVR` and :class:`svm.OneClassSVM` where the ``scale`` option
of parameter ``gamma`` is erroneously defined as
``1 / (n_features * X.std())``. It's now defined as
``1 / (n_features * X.var())``.
:issue:`13221` by :user:`Hanmin Qin <qinhanmin2014>`.
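The two definitions can be compared by hand; a small sketch with illustrative data:

```python
import numpy as np

X = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [2.0, 2.0]])
n_features = X.shape[1]

gamma_buggy = 1.0 / (n_features * X.std())  # pre-fix, erroneous definition
gamma_scale = 1.0 / (n_features * X.var())  # corrected definition
# X.var() over all entries is 2/3 here, so gamma_scale = 1 / (2 * 2/3) = 0.75
print(gamma_scale)
```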
Code and Documentation Contributors
-----------------------------------
With thanks to:
Adrin Jalali, Agamemnon Krasoulis, Albert Thomas, Andreas Mueller, Aurélien
Bellet, bertrandhaut, Bharat Raghunathan, Dowon, Emmanuel Arias, Fibinse
Xavier, Finn O'Shea, Gabriel Vacaliuc, Gael Varoquaux, Guillaume Lemaitre,
Hanmin Qin, joaak, Joel Nothman, Joris Van den Bossche, Jérémie Méhault, kms15,
Kossori Aruku, Lakshya KD, maikia, Manuel López-Ibáñez, Marco Gorelli,
MarcoGorelli, mferrari3, Mickaël Schoentgen, Nicolas Hug, pavlos kallis, Pierre
Glaser, pierretallotte, Prabakaran Kumaresshan, Reshama Shaikh, Rohit Kapoor,
Roman Yurchak, SandroCasagrande, Tashay Green, Thomas Fan, Vishaal Kapoor,
Zhuyi Xue, Zijie (ZJ) Poh
.. _changes_0_20_2:
Version 0.20.2
==============
**December 20, 2018**
This is a bug-fix release with some minor documentation improvements and
enhancements to features released in 0.20.0.
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- :mod:`sklearn.neighbors` when ``metric=='jaccard'`` (bug fix)
- use of ``'seuclidean'`` or ``'mahalanobis'`` metrics in some cases (bug fix)
Changelog
---------
:mod:`sklearn.compose`
......................
- |Fix| Fixed an issue in :func:`compose.make_column_transformer` which raises
unexpected error when columns is pandas Index or pandas Series.
:issue:`12704` by :user:`Hanmin Qin <qinhanmin2014>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fixed a bug in :func:`metrics.pairwise_distances` and
:func:`metrics.pairwise_distances_chunked` where parameters ``V`` of
``"seuclidean"`` and ``VI`` of ``"mahalanobis"`` metrics were computed after
the data was split into chunks instead of being pre-computed on whole data.
:issue:`12701` by :user:`Jeremie du Boisberranger <jeremiedbb>`.
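Passing ``V`` explicitly side-steps the chunking question entirely; a sketch (illustrative data) checking the result against the standardized Euclidean formula:

```python
import numpy as np
from sklearn.metrics import pairwise_distances

rng = np.random.RandomState(0)
X = rng.rand(5, 3)

# V should be the per-feature variance of the whole dataset; the fix ensures
# it is precomputed on the full data rather than per chunk.
V = X.var(axis=0, ddof=1)
d = pairwise_distances(X, metric="seuclidean", V=V)

# standardized Euclidean distance between the first two samples, by hand
manual = np.sqrt((((X[0] - X[1]) ** 2) / V).sum())
print(np.isclose(d[0, 1], manual))
```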
:mod:`sklearn.neighbors`
........................
- |Fix| Fixed `sklearn.neighbors.DistanceMetric` jaccard distance
function to return 0 when two all-zero vectors are compared.
:issue:`12685` by :user:`Thomas Fan <thomasjpfan>`.
:mod:`sklearn.utils`
....................
- |Fix| Calling :func:`utils.check_array` on `pandas.Series` with categorical
data, which raised an error in 0.20.0, now returns the expected output again.
:issue:`12699` by `Joris Van den Bossche`_.
Code and Documentation Contributors
-----------------------------------
With thanks to:
adanhawth, Adrin Jalali, Albert Thomas, Andreas Mueller, Dan Stine, Feda Curic,
Hanmin Qin, Jan S, jeremiedbb, Joel Nothman, Joris Van den Bossche,
josephsalmon, Katrin Leinweber, Loic Esteve, Muhammad Hassaan Rafique, Nicolas
Hug, Olivier Grisel, Paul Paczuski, Reshama Shaikh, Sam Waterbury, Shivam
Kotwalia, Thomas Fan
.. _changes_0_20_1:
Version 0.20.1
==============
**November 21, 2018**
This is a bug-fix release with some minor documentation improvements and
enhancements to features released in 0.20.0. Note that we also include some
API changes in this release, so you might get some extra warnings after
updating from 0.20.0 to 0.20.1.
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- :class:`decomposition.IncrementalPCA` (bug fix)
Changelog
---------
:mod:`sklearn.cluster`
......................
- |Efficiency| Made :class:`cluster.MeanShift` no longer attempt nested
  parallelism, as the overhead would hurt performance significantly when
  ``n_jobs > 1``.
:issue:`12159` by :user:`Olivier Grisel <ogrisel>`.
- |Fix| Fixed a bug in :class:`cluster.DBSCAN` with precomputed sparse neighbors
graph, which would add explicitly zeros on the diagonal even when already
present. :issue:`12105` by `Tom Dupre la Tour`_.
:mod:`sklearn.compose`
......................
- |Fix| Fixed an issue in :class:`compose.ColumnTransformer` when stacking
  columns with types not convertible to a numeric type.
:issue:`11912` by :user:`Adrin Jalali <adrinjalali>`.
- |API| :class:`compose.ColumnTransformer` now applies the ``sparse_threshold``
even if all transformation results are sparse. :issue:`12304` by `Andreas
Müller`_.
- |API| :func:`compose.make_column_transformer` now expects
``(transformer, columns)`` instead of ``(columns, transformer)`` to keep
consistent with :class:`compose.ColumnTransformer`.
:issue:`12339` by :user:`Adrin Jalali <adrinjalali>`.
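A minimal sketch of the new ``(transformer, columns)`` tuple order in
:func:`compose.make_column_transformer` (column indices here are purely
illustrative):

```python
# Transformer-first tuple order, matching compose.ColumnTransformer.
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

ct = make_column_transformer(
    (StandardScaler(), [0]),   # transformer first, then the columns it handles
    (OneHotEncoder(), [1]),
)
```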
:mod:`sklearn.datasets`
............................
- |Fix| :func:`datasets.fetch_openml` to correctly use the local cache.
:issue:`12246` by :user:`Jan N. van Rijn <janvanrijn>`.
- |Fix| :func:`datasets.fetch_openml` to correctly handle ignore attributes and
row id attributes. :issue:`12330` by :user:`Jan N. van Rijn <janvanrijn>`.
- |Fix| Fixed integer overflow in :func:`datasets.make_classification`
for values of ``n_informative`` parameter larger than 64.
:issue:`10811` by :user:`Roman Feldbauer <VarIr>`.
- |Fix| Fixed olivetti faces dataset ``DESCR`` attribute to point to the right
location in :func:`datasets.fetch_olivetti_faces`. :issue:`12441` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :func:`datasets.fetch_openml` to retry downloading when reading
from local cache fails. :issue:`12517` by :user:`Thomas Fan <thomasjpfan>`.
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed a regression in :class:`decomposition.IncrementalPCA` where
0.20.0 raised an error if the number of samples in the final batch for
fitting IncrementalPCA was smaller than n_components.
:issue:`12234` by :user:`Ming Li <minggli>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| Fixed a bug mostly affecting :class:`ensemble.RandomForestClassifier`
where ``class_weight='balanced_subsample'`` failed with more than 32 classes.
:issue:`12165` by `Joel Nothman`_.
- |Fix| Fixed a bug affecting :class:`ensemble.BaggingClassifier`,
:class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`,
where ``max_features`` was sometimes rounded down to zero.
:issue:`12388` by :user:`Connor Tann <Connossor>`.
:mod:`sklearn.feature_extraction`
..................................
- |Fix| Fixed a regression in v0.20.0 where
:func:`feature_extraction.text.CountVectorizer` and other text vectorizers
could error during stop words validation with custom preprocessors
or tokenizers. :issue:`12393` by `Roman Yurchak`_.
:mod:`sklearn.linear_model`
...........................
- |Fix| :class:`linear_model.SGDClassifier` and variants
with ``early_stopping=True`` would not use a consistent validation
split in the multiclass case and this would cause a crash when using
those estimators as part of parallel parameter search or cross-validation.
:issue:`12122` by :user:`Olivier Grisel <ogrisel>`.
- |Fix| Fixed a bug affecting :class:`linear_model.SGDClassifier` in the multiclass
case. Each one-versus-all step is run in a :class:`joblib.Parallel` call and
mutating a common parameter, causing a segmentation fault if called within a
backend using processes and not threads. We now use ``require=sharedmem``
at the :class:`joblib.Parallel` instance creation. :issue:`12518` by
:user:`Pierre Glaser <pierreglaser>` and :user:`Olivier Grisel <ogrisel>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fixed a bug in `metrics.pairwise.pairwise_distances_argmin_min`
which returned the square root of the distance when the metric parameter was
set to "euclidean". :issue:`12481` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in `metrics.pairwise.pairwise_distances_chunked`
which didn't ensure the diagonal is zero for euclidean distances.
:issue:`12612` by :user:`Andreas Müller <amueller>`.
- |API| The `metrics.calinski_harabaz_score` has been renamed to
:func:`metrics.calinski_harabasz_score` and will be removed in version 0.23.
:issue:`12211` by :user:`Lisa Thomas <LisaThomas9>`,
:user:`Mark Hannel <markhannel>` and :user:`Melissa Ferrari <mferrari3>`.
:mod:`sklearn.mixture`
........................
- |Fix| Ensure that the ``fit_predict`` method of
:class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`
always yield assignments consistent with ``fit`` followed by ``predict`` even
if the convergence criterion is too loose or not met. :issue:`12451`
by :user:`Olivier Grisel <ogrisel>`.
:mod:`sklearn.neighbors`
........................
- |Fix| force the parallelism backend to :code:`threading` for
:class:`neighbors.KDTree` and :class:`neighbors.BallTree` in Python 2.7 to
avoid pickling errors caused by the serialization of their methods.
:issue:`12171` by :user:`Thomas Moreau <tomMoral>`.
:mod:`sklearn.preprocessing`
.............................
- |Fix| Fixed bug in :class:`preprocessing.OrdinalEncoder` when passing
manually specified categories. :issue:`12365` by `Joris Van den Bossche`_.
- |Fix| Fixed bug in :class:`preprocessing.KBinsDiscretizer` where the
``transform`` method mutates the ``_encoder`` attribute. The ``transform``
method is now thread safe. :issue:`12514` by
:user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed a bug in :class:`preprocessing.PowerTransformer` where the
  Yeo-Johnson transform was incorrect for lambda parameters outside of
  `[0, 2]`. :issue:`12522` by :user:`Nicolas Hug <NicolasHug>`.
- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where transform
  failed when set to ignore unknown numpy strings of different lengths.
  :issue:`12471` by :user:`Gabriel Marzinotto <GMarzinotto>`.
- |API| The default value of the :code:`method` argument in
:func:`preprocessing.power_transform` will be changed from :code:`box-cox`
to :code:`yeo-johnson` to match :class:`preprocessing.PowerTransformer`
in version 0.23. A FutureWarning is raised when the default value is used.
:issue:`12317` by :user:`Eric Chang <chang>`.
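As a sketch of how to opt out of the deprecation path described above, passing
``method`` explicitly to :func:`preprocessing.power_transform` sidesteps the
FutureWarning about the default changing (the input data below is illustrative):

```python
# Explicit ``method`` avoids the FutureWarning about the default changing
# from 'box-cox' to 'yeo-johnson' in 0.23.
import numpy as np
from sklearn.preprocessing import power_transform

X = np.array([[1.0], [2.0], [3.0]])
Xt = power_transform(X, method="yeo-johnson")
```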
:mod:`sklearn.utils`
........................
- |Fix| Use float64 for mean accumulator to avoid floating point
precision issues in :class:`preprocessing.StandardScaler` and
:class:`decomposition.IncrementalPCA` when using float32 datasets.
:issue:`12338` by :user:`bauks <bauks>`.
- |Fix| Calling :func:`utils.check_array` on `pandas.Series`, which
raised an error in 0.20.0, now returns the expected output again.
  :issue:`12625` by `Andreas Müller`_.
Miscellaneous
.............
- |Fix| When using site joblib by setting the environment variable
`SKLEARN_SITE_JOBLIB`, added compatibility with joblib 0.11 in addition
to 0.12+. :issue:`12350` by `Joel Nothman`_ and `Roman Yurchak`_.
- |Fix| Make sure to avoid raising ``FutureWarning`` when calling
``np.vstack`` with numpy 1.16 and later (use list comprehensions
instead of generator expressions in many locations of the scikit-learn
code base). :issue:`12467` by :user:`Olivier Grisel <ogrisel>`.
- |API| Removed all mentions of ``sklearn.externals.joblib``, and deprecated
joblib methods exposed in ``sklearn.utils``, except for
:func:`utils.parallel_backend` and :func:`utils.register_parallel_backend`,
which allow users to configure parallel computation in scikit-learn.
  Other functionality is part of the
  `joblib <https://joblib.readthedocs.io/>`_ package and should be used
  directly, by installing it. The goal of this change is to prepare for
  unvendoring joblib in a future version of scikit-learn.
  :issue:`12345` by :user:`Thomas Moreau <tomMoral>`.
Code and Documentation Contributors
-----------------------------------
With thanks to:
^__^, Adrin Jalali, Andrea Navarrete, Andreas Mueller,
bauks, BenjaStudio, Cheuk Ting Ho, Connossor,
Corey Levinson, Dan Stine, daten-kieker, Denis Kataev,
Dillon Gardner, Dmitry Vukolov, Dougal J. Sutherland, Edward J Brown,
Eric Chang, Federico Caselli, Gabriel Marzinotto, Gael Varoquaux,
GauravAhlawat, Gustavo De Mari Pereira, Hanmin Qin, haroldfox,
JackLangerman, Jacopo Notarstefano, janvanrijn, jdethurens,
jeremiedbb, Joel Nothman, Joris Van den Bossche, Koen,
Kushal Chauhan, Lee Yi Jie Joel, Lily Xiong, mail-liam,
Mark Hannel, melsyt, Ming Li, Nicholas Smith,
Nicolas Hug, Nikolay Shebanov, Oleksandr Pavlyk, Olivier Grisel,
Peter Hausamann, Pierre Glaser, Pulkit Maloo, Quentin Batista,
Radostin Stoyanov, Ramil Nugmanov, Rebekah Kim, Reshama Shaikh,
Rohan Singh, Roman Feldbauer, Roman Yurchak, Roopam Sharma,
Sam Waterbury, Scott Lowe, Sebastian Raschka, Stephen Tierney,
SylvainLan, TakingItCasual, Thomas Fan, Thomas Moreau,
Tom Dupré la Tour, Tulio Casagrande, Utkarsh Upadhyay, Xing Han Lu,
Yaroslav Halchenko, Zach Miller
.. _changes_0_20:
Version 0.20.0
==============
**September 25, 2018**
This release packs in a mountain of bug fixes, features and enhancements for
the Scikit-learn library, and improvements to the documentation and examples.
Thanks to our contributors!
This release is dedicated to the memory of Raghav Rajagopalan.
Highlights
----------
We have tried to improve our support for common data-science use-cases
including missing values, categorical variables, heterogeneous data, and
features/targets with unusual distributions.
Missing values in features, represented by NaNs, are now accepted in
column-wise preprocessing such as scalers. Each feature is fitted disregarding
NaNs, and data containing NaNs can be transformed. The new :mod:`sklearn.impute`
module provides estimators for learning despite missing data.
:class:`~compose.ColumnTransformer` handles the case where different features
or columns of a pandas.DataFrame need different preprocessing.
String or pandas Categorical columns can now be encoded with
:class:`~preprocessing.OneHotEncoder` or
:class:`~preprocessing.OrdinalEncoder`.
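A hedged illustration of the new string-category support (the category labels
below are made up):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([["red"], ["green"], ["red"]], dtype=object)
enc = OneHotEncoder()        # categories are inferred from the strings
out = enc.fit_transform(X)   # sparse one-hot matrix, one column per category
```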
:class:`~compose.TransformedTargetRegressor` helps when the regression target
needs to be transformed to be modeled. :class:`~preprocessing.PowerTransformer`
and :class:`~preprocessing.KBinsDiscretizer` join
:class:`~preprocessing.QuantileTransformer` as non-linear transformations.
Beyond this, we have added :term:`sample_weight` support to several estimators
(including :class:`~cluster.KMeans`, :class:`~linear_model.BayesianRidge` and
:class:`~neighbors.KernelDensity`) and improved stopping criteria in others
(including :class:`~neural_network.MLPRegressor`,
:class:`~ensemble.GradientBoostingRegressor` and
:class:`~linear_model.SGDRegressor`).
This release is also the first to be accompanied by a :ref:`glossary` developed
by `Joel Nothman`_. The glossary is a reference resource to help users and
contributors become familiar with the terminology and conventions used in
Scikit-learn.
Sorry if your contribution didn't make it into the highlights. There's a lot
here...
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- :class:`cluster.MeanShift` (bug fix)
- :class:`decomposition.IncrementalPCA` in Python 2 (bug fix)
- :class:`decomposition.SparsePCA` (bug fix)
- :class:`ensemble.GradientBoostingClassifier` (bug fix affecting feature importances)
- :class:`isotonic.IsotonicRegression` (bug fix)
- :class:`linear_model.ARDRegression` (bug fix)
- :class:`linear_model.LogisticRegressionCV` (bug fix)
- :class:`linear_model.OrthogonalMatchingPursuit` (bug fix)
- :class:`linear_model.PassiveAggressiveClassifier` (bug fix)
- :class:`linear_model.PassiveAggressiveRegressor` (bug fix)
- :class:`linear_model.Perceptron` (bug fix)
- :class:`linear_model.SGDClassifier` (bug fix)
- :class:`linear_model.SGDRegressor` (bug fix)
- :func:`metrics.roc_auc_score` (bug fix)
- :func:`metrics.roc_curve` (bug fix)
- `neural_network.BaseMultilayerPerceptron` (bug fix)
- :class:`neural_network.MLPClassifier` (bug fix)
- :class:`neural_network.MLPRegressor` (bug fix)
- The v0.19.0 release notes failed to mention a backwards incompatibility with
:class:`model_selection.StratifiedKFold` when ``shuffle=True`` due to
:issue:`7823`.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)
Known Major Bugs
----------------
* :issue:`11924`: :class:`linear_model.LogisticRegressionCV` with
`solver='lbfgs'` and `multi_class='multinomial'` may be non-deterministic or
otherwise broken on macOS. This appears to be the case on Travis CI servers,
but has not been confirmed on personal MacBooks! This issue has been present
in previous releases.
* :issue:`9354`: :func:`metrics.pairwise.euclidean_distances` (which is used
several times throughout the library) gives results with poor precision,
which particularly affects its use with 32-bit float inputs. This became
more problematic in versions 0.18 and 0.19 when some algorithms were changed
to avoid casting 32-bit data into 64-bit.
Changelog
---------
Support for Python 3.3 has been officially dropped.
:mod:`sklearn.cluster`
......................
- |MajorFeature| :class:`cluster.AgglomerativeClustering` now supports Single
Linkage clustering via ``linkage='single'``. :issue:`9372` by :user:`Leland
McInnes <lmcinnes>` and :user:`Steve Astels <sastels>`.
- |Feature| :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now support
sample weights via new parameter ``sample_weight`` in ``fit`` function.
:issue:`10933` by :user:`Johannes Hansen <jnhansen>`.
- |Efficiency| :class:`cluster.KMeans`, :class:`cluster.MiniBatchKMeans` and
  :func:`cluster.k_means` with ``algorithm='full'`` now enforce
  row-major ordering, improving runtime.
:issue:`10471` by :user:`Gaurav Dhingra <gxyd>`.
- |Efficiency| :class:`cluster.DBSCAN` is now parallelized according to ``n_jobs``
regardless of ``algorithm``.
:issue:`8003` by :user:`Joël Billaud <recamshak>`.
- |Enhancement| :class:`cluster.KMeans` now gives a warning if the number of
distinct clusters found is smaller than ``n_clusters``. This may occur when
  the number of distinct points in the data set is actually smaller than the
  number of clusters one is looking for.
:issue:`10059` by :user:`Christian Braune <christianbraune79>`.
- |Fix| Fixed a bug where the ``fit`` method of
:class:`cluster.AffinityPropagation` stored cluster
centers as 3d array instead of 2d array in case of non-convergence. For the
same class, fixed undefined and arbitrary behavior in case of training data
where all samples had equal similarity.
:issue:`9612`. By :user:`Jonatan Samoocha <jsamoocha>`.
- |Fix| Fixed a bug in :func:`cluster.spectral_clustering` where the normalization of
the spectrum was using a division instead of a multiplication. :issue:`8129`
by :user:`Jan Margeta <jmargeta>`, :user:`Guillaume Lemaitre <glemaitre>`,
and :user:`Devansh D. <devanshdalal>`.
- |Fix| Fixed a bug in `cluster.k_means_elkan` where the returned
``iteration`` was 1 less than the correct value. Also added the missing
``n_iter_`` attribute in the docstring of :class:`cluster.KMeans`.
:issue:`11353` by :user:`Jeremie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in :func:`cluster.mean_shift` where the assigned labels
were not deterministic if there were multiple clusters with the same
intensities.
:issue:`11901` by :user:`Adrin Jalali <adrinjalali>`.
- |API| Deprecate ``pooling_func`` unused parameter in
:class:`cluster.AgglomerativeClustering`.
:issue:`9875` by :user:`Kumar Ashutosh <thechargedneutron>`.
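The ``sample_weight`` support added to :class:`cluster.KMeans` above can be
sketched as follows (toy one-dimensional data; weighting a point behaves like
duplicating it):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0], [0.0], [10.0]])
# Weight the lone point at 10.0 as if it appeared twice.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    X, sample_weight=[1, 1, 2])
```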
:mod:`sklearn.compose`
......................
- New module.
- |MajorFeature| Added :class:`compose.ColumnTransformer`, which allows
  applying different transformers to different columns of arrays or pandas
DataFrames. :issue:`9012` by `Andreas Müller`_ and `Joris Van den Bossche`_,
and :issue:`11315` by :user:`Thomas Fan <thomasjpfan>`.
- |MajorFeature| Added the :class:`compose.TransformedTargetRegressor` which
transforms the target y before fitting a regression model. The predictions
are mapped back to the original space via an inverse transform. :issue:`9041`
by `Andreas Müller`_ and :user:`Guillaume Lemaitre <glemaitre>`.
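The :class:`compose.TransformedTargetRegressor` workflow above can be sketched
with an exponential target that becomes linear in log-space (data and transform
choice are illustrative):

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

X = np.arange(1.0, 6.0).reshape(-1, 1)
y = np.exp(0.5 * X.ravel())            # exponential target, linear in log-space
reg = TransformedTargetRegressor(
    regressor=LinearRegression(), func=np.log, inverse_func=np.exp)
pred = reg.fit(X, y).predict(X)        # predictions mapped back via np.exp
```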
:mod:`sklearn.covariance`
.........................
- |Efficiency| Runtime improvements to :class:`covariance.GraphicalLasso`.
:issue:`9858` by :user:`Steven Brown <stevendbrown>`.
- |API| The `covariance.graph_lasso`,
`covariance.GraphLasso` and `covariance.GraphLassoCV` have been
renamed to :func:`covariance.graphical_lasso`,
:class:`covariance.GraphicalLasso` and :class:`covariance.GraphicalLassoCV`
respectively and will be removed in version 0.22.
  :issue:`9993` by :user:`Artiem Krinitsyn <artiemq>`.
:mod:`sklearn.datasets`
.......................
- |MajorFeature| Added :func:`datasets.fetch_openml` to fetch datasets from
`OpenML <https://openml.org>`_. OpenML is a free, open data sharing platform
and will be used instead of mldata as it provides better service availability.
:issue:`9908` by `Andreas Müller`_ and :user:`Jan N. van Rijn <janvanrijn>`.
- |Feature| In :func:`datasets.make_blobs`, one can now pass a list to the
``n_samples`` parameter to indicate the number of samples to generate per
cluster. :issue:`8617` by :user:`Maskani Filali Mohamed <maskani-moh>` and
:user:`Konstantinos Katrioplas <kkatrio>`.
- |Feature| Added a ``filename`` attribute to datasets in :mod:`sklearn.datasets`
  that have a CSV file.
:issue:`9101` by :user:`alex-33 <alex-33>`
and :user:`Maskani Filali Mohamed <maskani-moh>`.
- |Feature| A ``return_X_y`` parameter has been added to several dataset loaders.
:issue:`10774` by :user:`Chris Catalfo <ccatalfo>`.
- |Fix| Fixed a bug in `datasets.load_boston` which had a wrong data
point. :issue:`10795` by :user:`Takeshi Yoshizawa <tarcusx>`.
- |Fix| Fixed a bug in :func:`datasets.load_iris` which had two wrong data points.
:issue:`11082` by :user:`Sadhana Srinivasan <rotuna>`
and :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed a bug in :func:`datasets.fetch_kddcup99`, where data were not
properly shuffled. :issue:`9731` by `Nicolas Goix`_.
- |Fix| Fixed a bug in :func:`datasets.make_circles`, where no odd number of
data points could be generated. :issue:`10045` by :user:`Christian Braune
<christianbraune79>`.
- |API| Deprecated `sklearn.datasets.fetch_mldata` to be removed in
version 0.22. mldata.org is no longer operational. Until removal it will
remain possible to load cached datasets. :issue:`11466` by `Joel Nothman`_.
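The list form of ``n_samples`` in :func:`datasets.make_blobs` mentioned above
can be sketched as follows (per-cluster counts are illustrative):

```python
from sklearn.datasets import make_blobs

# Three clusters with 3, 5 and 2 samples respectively.
X, y = make_blobs(n_samples=[3, 5, 2], random_state=0)
```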
:mod:`sklearn.decomposition`
............................
- |Feature| :func:`decomposition.dict_learning` functions and models now
support positivity constraints. This applies to the dictionary and sparse
code. :issue:`6374` by :user:`John Kirkham <jakirkham>`.
- |Feature| |Fix| :class:`decomposition.SparsePCA` now exposes
  ``normalize_components``. When set to True, the train and test data are
  centered with the train mean, during the fit phase and the transform phase
  respectively. This fixes the behavior of SparsePCA. When set to False,
which is the default, the previous abnormal behaviour still holds. The False
value is for backward compatibility and should not be used. :issue:`11585`
by :user:`Ivan Panico <FollowKenny>`.
- |Efficiency| Efficiency improvements in :func:`decomposition.dict_learning`.
:issue:`11420` and others by :user:`John Kirkham <jakirkham>`.
- |Fix| Fix for uninformative error in :class:`decomposition.IncrementalPCA`:
now an error is raised if the number of components is larger than the
chosen batch size. The ``n_components=None`` case was adapted accordingly.
:issue:`6452`. By :user:`Wally Gauze <wallygauze>`.
- |Fix| Fixed a bug where the ``partial_fit`` method of
:class:`decomposition.IncrementalPCA` used integer division instead of float
division on Python 2.
:issue:`9492` by :user:`James Bourbeau <jrbourbeau>`.
- |Fix| In :class:`decomposition.PCA`, selecting an ``n_components`` parameter greater
than the number of samples now raises an error. Similarly, the
``n_components=None`` case now selects the minimum of ``n_samples`` and
``n_features``.
:issue:`8484` by :user:`Wally Gauze <wallygauze>`.
- |Fix| Fixed a bug in :class:`decomposition.PCA` where users would get an
  unexpected error with large datasets when ``n_components='mle'`` on Python 3
  versions.
:issue:`9886` by :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed an underflow in calculating KL-divergence for
:class:`decomposition.NMF` :issue:`10142` by `Tom Dupre la Tour`_.
- |Fix| Fixed a bug in :class:`decomposition.SparseCoder` when running OMP
sparse coding in parallel using read-only memory mapped datastructures.
:issue:`5956` by :user:`Vighnesh Birodkar <vighneshbirodkar>` and
:user:`Olivier Grisel <ogrisel>`.
:mod:`sklearn.discriminant_analysis`
....................................
- |Efficiency| Memory usage improvement for `_class_means` and
`_class_cov` in :mod:`sklearn.discriminant_analysis`. :issue:`10898` by
:user:`Nanxin Chen <bobchennan>`.
:mod:`sklearn.dummy`
....................
- |Feature| :class:`dummy.DummyRegressor` now has a ``return_std`` option in its
``predict`` method. The returned standard deviations will be zeros.
- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor` now
only require X to be an object with finite length or shape. :issue:`9832` by
:user:`Vrishank Bhardwaj <vrishank97>`.
- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`
can now be scored without supplying test samples.
:issue:`11951` by :user:`Rüdiger Busche <JarnoRFB>`.
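Scoring without test samples, as described in the last entry above, can be
sketched like this (the tiny label vector is made up):

```python
from sklearn.dummy import DummyClassifier

X, y = [[0]] * 4, [1, 1, 1, 0]
clf = DummyClassifier(strategy="most_frequent").fit(X, y)
acc = clf.score(None, y)   # no test samples needed; predictions ignore X
```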
:mod:`sklearn.ensemble`
.......................
- |Feature| :class:`ensemble.BaggingRegressor` and
:class:`ensemble.BaggingClassifier` can now be fit with missing/non-finite
values in X and/or multi-output Y to support wrapping pipelines that perform
their own imputation. :issue:`9707` by :user:`Jimmy Wan <jimmywan>`.
- |Feature| :class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` now support early stopping
via ``n_iter_no_change``, ``validation_fraction`` and ``tol``. :issue:`7071`
  by `Raghav RV`_.
- |Feature| Added ``named_estimators_`` parameter in
:class:`ensemble.VotingClassifier` to access fitted estimators.
:issue:`9157` by :user:`Herilalaina Rakotoarison <herilalaina>`.
- |Fix| Fixed a bug when fitting :class:`ensemble.GradientBoostingClassifier` or
:class:`ensemble.GradientBoostingRegressor` with ``warm_start=True`` which
previously raised a segmentation fault due to a non-conversion of CSC matrix
into CSR format expected by ``decision_function``. Similarly, Fortran-ordered
arrays are converted to C-ordered arrays in the dense case. :issue:`9991` by
:user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingRegressor`
and :class:`ensemble.GradientBoostingClassifier` to have
feature importances summed and then normalized, rather than normalizing on a
per-tree basis. The previous behavior over-weighted the Gini importance of
features that appear in later stages. This issue only affected feature
importances. :issue:`11176` by :user:`Gil Forsyth <gforsyth>`.
- |API| The default value of the ``n_estimators`` parameter of
:class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier`, :class:`ensemble.ExtraTreesRegressor`,
and :class:`ensemble.RandomTreesEmbedding` will change from 10 in version 0.20
to 100 in 0.22. A FutureWarning is raised when the default value is used.
:issue:`11542` by :user:`Anna Ayzenshtat <annaayzenshtat>`.
- |API| In classes derived from `ensemble.BaseBagging`, the attribute
  ``estimators_samples_`` now returns a list of arrays containing the indices
  selected for each bootstrap instead of a list of arrays containing the mask
  of the samples selected for each bootstrap. Indices allow repeating samples,
  which a mask cannot express.
  :issue:`9524` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fixed a bug in `ensemble.BaseBagging` where one could not
  deterministically reproduce the ``fit`` result using the object attributes
  when ``random_state`` is set. :issue:`9723` by :user:`Guillaume Lemaitre <glemaitre>`.
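The gradient-boosting early stopping added above (``n_iter_no_change``,
``validation_fraction``, ``tol``) can be sketched as follows (dataset size and
tolerances are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, random_state=0)
gbc = GradientBoostingClassifier(
    n_estimators=500, n_iter_no_change=5, validation_fraction=0.2,
    tol=1e-4, random_state=0).fit(X, y)
# gbc.n_estimators_ holds the number of stages actually fitted
```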
:mod:`sklearn.feature_extraction`
.................................
- |Feature| Enable the call to `get_feature_names` in unfitted
:class:`feature_extraction.text.CountVectorizer` initialized with a
vocabulary. :issue:`10908` by :user:`Mohamed Maskani <maskani-moh>`.
- |Enhancement| ``idf_`` can now be set on a
:class:`feature_extraction.text.TfidfTransformer`.
:issue:`10899` by :user:`Sergey Melderis <serega>`.
- |Fix| Fixed a bug in :func:`feature_extraction.image.extract_patches_2d` which
would throw an exception if ``max_patches`` was greater than or equal to the
number of all possible patches rather than simply returning the number of
  possible patches. :issue:`10101` by :user:`Varun Agrawal <varunagrawal>`.
- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer`,
:class:`feature_extraction.text.TfidfVectorizer`,
:class:`feature_extraction.text.HashingVectorizer` to support 64 bit sparse
array indexing necessary to process large datasets with more than 2·10⁹ tokens
(words or n-grams). :issue:`9147` by :user:`Claes-Fredrik Mannby <mannby>`
and `Roman Yurchak`_.
- |Fix| Fixed bug in :class:`feature_extraction.text.TfidfVectorizer` which
  was ignoring the parameter ``dtype``. In addition,
  :class:`feature_extraction.text.TfidfTransformer` now preserves ``dtype``
  for floating-point input and raises a warning if the requested ``dtype``
  is integer.
:issue:`10441` by :user:`Mayur Kulkarni <maykulkarni>` and
:user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.feature_selection`
................................
- |Feature| Added select K best features functionality to
:class:`feature_selection.SelectFromModel`.
:issue:`6689` by :user:`Nihar Sheth <nsheth12>` and
:user:`Quazi Rahman <qmaruf>`.
- |Feature| Added ``min_features_to_select`` parameter to
:class:`feature_selection.RFECV` to bound evaluated features counts.
:issue:`11293` by :user:`Brent Yi <brentyi>`.
- |Feature| :class:`feature_selection.RFECV`'s fit method now supports
:term:`groups`. :issue:`9656` by :user:`Adam Greenhall <adamgreenhall>`.
- |Fix| Fixed computation of ``n_features_to_compute`` for edge case with tied
CV scores in :class:`feature_selection.RFECV`.
:issue:`9222` by :user:`Nick Hoh <nickypie>`.
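The new ``min_features_to_select`` bound in :class:`feature_selection.RFECV`
can be sketched as follows (estimator choice and data are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
# Never eliminate below three features, regardless of CV scores.
selector = RFECV(LogisticRegression(max_iter=1000),
                 min_features_to_select=3, cv=3).fit(X, y)
```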
:mod:`sklearn.gaussian_process`
...............................
- |Efficiency| In :class:`gaussian_process.GaussianProcessRegressor`, the
  ``predict`` method is faster when using ``return_std=True``, in particular
  when called several times in a row. :issue:`9234` by :user:`andrewww <andrewww>`
  and :user:`Minghui Liu <minghui-liu>`.
:mod:`sklearn.impute`
.....................
- New module, adopting ``preprocessing.Imputer`` as
:class:`impute.SimpleImputer` with minor changes (see under preprocessing
below).
- |MajorFeature| Added :class:`impute.MissingIndicator` which generates a
binary indicator for missing values. :issue:`8075` by :user:`Maniteja Nandana
<maniteja123>` and :user:`Guillaume Lemaitre <glemaitre>`.
- |Feature| The :class:`impute.SimpleImputer` has a new strategy,
``'constant'``, to complete missing values with a fixed one, given by the
``fill_value`` parameter. This strategy supports numeric and non-numeric
data, and so does the ``'most_frequent'`` strategy now. :issue:`11211` by
:user:`Jeremie du Boisberranger <jeremiedbb>`.
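The new ``'constant'`` strategy of :class:`impute.SimpleImputer` can be
sketched as follows (fill value and data are illustrative):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan], [np.nan, 3.0]])
imp = SimpleImputer(strategy="constant", fill_value=0.0)
Xt = imp.fit_transform(X)   # NaNs replaced by the fixed fill_value
```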
:mod:`sklearn.isotonic`
.......................
- |Fix| Fixed a bug in :class:`isotonic.IsotonicRegression` which incorrectly
combined weights when fitting a model to data involving points with
identical X values.
  :issue:`9484` by :user:`Dallas Card <dallascard>`.
:mod:`sklearn.linear_model`
...........................
- |Feature| :class:`linear_model.SGDClassifier`,
:class:`linear_model.SGDRegressor`,
:class:`linear_model.PassiveAggressiveClassifier`,
:class:`linear_model.PassiveAggressiveRegressor` and
:class:`linear_model.Perceptron` now expose ``early_stopping``,
  ``validation_fraction`` and ``n_iter_no_change`` parameters, to stop
  optimization by monitoring the score on a validation set. A new learning rate
``"adaptive"`` strategy divides the learning rate by 5 each time
``n_iter_no_change`` consecutive epochs fail to improve the model.
:issue:`9043` by `Tom Dupre la Tour`_.
- |Feature| Add `sample_weight` parameter to the fit method of
:class:`linear_model.BayesianRidge` for weighted linear regression.
:issue:`10112` by :user:`Peter St. John <pstjohn>`.
- |Fix| Fixed a bug in `logistic.logistic_regression_path` to ensure
  that the returned coefficients are correct when ``multi_class='multinomial'``.
Previously, some of the coefficients would override each other, leading to
incorrect results in :class:`linear_model.LogisticRegressionCV`.
:issue:`11724` by :user:`Nicolas Hug <NicolasHug>`.
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` where when using
the parameter ``multi_class='multinomial'``, the ``predict_proba`` method was
returning incorrect probabilities in the case of binary outcomes.
:issue:`9939` by :user:`Roger Westover <rwolst>`.
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the
  ``score`` method always computed accuracy rather than the metric given by
  the ``scoring`` parameter.
:issue:`10998` by :user:`Thomas Fan <thomasjpfan>`.
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the
'ovr' strategy was always used to compute cross-validation scores in the
multiclass setting, even if ``'multinomial'`` was set.
:issue:`8720` by :user:`William de Vazelhes <wdevazelhes>`.
- |Fix| Fixed a bug in :class:`linear_model.OrthogonalMatchingPursuit` that was
broken when setting ``normalize=False``.
:issue:`10071` by `Alexandre Gramfort`_.
- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` which caused
incorrectly updated estimates for the standard deviation and the
coefficients. :issue:`10153` by :user:`Jörg Döpfert <jdoepfert>`.
- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` and
:class:`linear_model.BayesianRidge` which caused NaN predictions when fitted
with a constant target.
:issue:`10095` by :user:`Jörg Döpfert <jdoepfert>`.
- |Fix| Fixed a bug in :class:`linear_model.RidgeClassifierCV` where
the parameter ``store_cv_values`` was not implemented though
it was documented in ``cv_values`` as a way to set up the storage
of cross-validation values for different alphas. :issue:`10297` by
:user:`Mabel Villalba-Jiménez <mabelvj>`.
- |Fix| Fixed a bug in :class:`linear_model.ElasticNet` which caused the input
to be overridden when using parameter ``copy_X=True`` and
``check_input=False``. :issue:`10581` by :user:`Yacine Mazari <ymazari>`.
- |Fix| Fixed a bug in :class:`sklearn.linear_model.Lasso`
where the coefficient had wrong shape when ``fit_intercept=False``.
:issue:`10687` by :user:`Martin Hahn <martin-hahn>`.
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` where using
``multi_class='multinomial'`` with binary output was broken when
``warm_start=True``. :issue:`10836` by :user:`Aishwarya Srinivasan <aishgrt1>`.
- |Fix| Fixed a bug in :class:`linear_model.RidgeCV` where using integer
``alphas`` raised an error.
:issue:`10397` by :user:`Mabel Villalba-Jiménez <mabelvj>`.
- |Fix| Fixed condition triggering gap computation in
:class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` when working
with sparse matrices. :issue:`10992` by `Alexandre Gramfort`_.
- |Fix| Fixed a bug in :class:`linear_model.SGDClassifier`,
:class:`linear_model.SGDRegressor`,
:class:`linear_model.PassiveAggressiveClassifier`,
:class:`linear_model.PassiveAggressiveRegressor` and
:class:`linear_model.Perceptron`, where the stopping criterion was stopping
the algorithm before convergence. A parameter ``n_iter_no_change`` was added
and set by default to 5. Previous behavior is equivalent to setting the
parameter to 1. :issue:`9043` by `Tom Dupre la Tour`_.
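A minimal sketch of the new stopping criterion (the dataset and the ``tol``
value here are purely illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Training now stops only after the loss has failed to improve by at least
# `tol` for `n_iter_no_change` consecutive epochs (default 5).
clf = SGDClassifier(tol=1e-3, n_iter_no_change=5, random_state=0)
clf.fit(X, y)
print(clf.n_iter_)  # epochs actually run before the criterion triggered
```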
- |Fix| Fixed a bug where liblinear and libsvm-based estimators would segfault
if passed a scipy.sparse matrix with 64-bit indices. They now raise a
ValueError.
:issue:`11327` by :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.
- |API| The default values of the ``solver`` and ``multi_class`` parameters of
:class:`linear_model.LogisticRegression` will change respectively from
``'liblinear'`` and ``'ovr'`` in version 0.20 to ``'lbfgs'`` and
``'auto'`` in version 0.22. A FutureWarning is raised when the default
values are used. :issue:`11905` by `Tom Dupre la Tour`_ and `Joel Nothman`_.
- |API| Deprecate ``positive=True`` option in :class:`linear_model.Lars` as
the underlying implementation is broken. Use :class:`linear_model.Lasso`
instead. :issue:`9837` by `Alexandre Gramfort`_.
- |API| ``n_iter_`` may vary from previous releases in
:class:`linear_model.LogisticRegression` with ``solver='lbfgs'`` and
:class:`linear_model.HuberRegressor`. For Scipy <= 1.0.0, the optimizer could
perform more than the requested maximum number of iterations. Now both
estimators will report at most ``max_iter`` iterations even if more were
performed. :issue:`10723` by `Joel Nothman`_.
:mod:`sklearn.manifold`
.......................
- |Efficiency| Speed improvements for both 'exact' and 'barnes_hut' methods in
:class:`manifold.TSNE`. :issue:`10593` and :issue:`10610` by
`Tom Dupre la Tour`_.
- |Feature| Support sparse input in :meth:`manifold.Isomap.fit`.
:issue:`8554` by :user:`Leland McInnes <lmcinnes>`.
- |Feature| `manifold.t_sne.trustworthiness` accepts metrics other than
Euclidean. :issue:`9775` by :user:`William de Vazelhes <wdevazelhes>`.
- |Fix| Fixed a bug in :func:`manifold.spectral_embedding` where the
normalization of the spectrum was using a division instead of a
multiplication. :issue:`8129` by :user:`Jan Margeta <jmargeta>`,
:user:`Guillaume Lemaitre <glemaitre>`, and :user:`Devansh D.
<devanshdalal>`.
- |API| |Feature| Deprecate ``precomputed`` parameter in function
`manifold.t_sne.trustworthiness`. Instead, the new parameter ``metric``
should be used with any compatible metric including 'precomputed', in which
case the input matrix ``X`` should be a matrix of pairwise distances or
squared distances. :issue:`9775` by :user:`William de Vazelhes
<wdevazelhes>`.
:mod:`sklearn.metrics`
......................
- |MajorFeature| Added the :func:`metrics.davies_bouldin_score` metric for
evaluation of clustering models without a ground truth. :issue:`10827` by
:user:`Luis Osa <logc>`.
- |MajorFeature| Added the :func:`metrics.balanced_accuracy_score` metric and
a corresponding ``'balanced_accuracy'`` scorer for binary and multiclass
classification. :issue:`8066` by :user:`xyguo` and :user:`Aman Dalmia
<dalmia>`, and :issue:`10587` by `Joel Nothman`_.
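Balanced accuracy is the mean of per-class recall, which makes it robust to
class imbalance. A small illustrative example:

```python
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1]

# Per-class recall: class 0 -> 4/4, class 1 -> 1/2; mean = 0.75
score = balanced_accuracy_score(y_true, y_pred)
print(score)  # 0.75
```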
- |Feature| Partial AUC is available via ``max_fpr`` parameter in
:func:`metrics.roc_auc_score`. :issue:`3840` by
:user:`Alexander Niederbühl <Alexander-N>`.
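Sketch of the new ``max_fpr`` option (scores here are illustrative); the
partial AUC is restricted to the low-false-positive-rate region and
standardized by McClish correction:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# Standard AUC over the full FPR range
full_auc = roc_auc_score(y_true, y_score)
print(full_auc)  # 0.75

# Standardized partial AUC restricted to FPR <= 0.5
partial_auc = roc_auc_score(y_true, y_score, max_fpr=0.5)
print(partial_auc)
```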
- |Feature| A scorer based on :func:`metrics.brier_score_loss` is also
available. :issue:`9521` by :user:`Hanmin Qin <qinhanmin2014>`.
- |Feature| Added control over the normalization in
:func:`metrics.normalized_mutual_info_score` and
:func:`metrics.adjusted_mutual_info_score` via the ``average_method``
parameter. In version 0.22, the default normalizer for each will become
the *arithmetic* mean of the entropies of each clustering. :issue:`11124` by
:user:`Arya McCarthy <aryamccarthy>`.
- |Feature| Added ``output_dict`` parameter in :func:`metrics.classification_report`
to return classification statistics as dictionary.
:issue:`11160` by :user:`Dan Barkhorn <danielbarkhorn>`.
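With ``output_dict=True`` the report becomes a nested dictionary keyed by
class label (as a string), convenient for programmatic access:

```python
from sklearn.metrics import classification_report

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

report = classification_report(y_true, y_pred, output_dict=True)
# Per-class statistics are indexed by the stringified label
print(report["1"]["precision"])  # 1.0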
- |Feature| :func:`metrics.classification_report` now reports all applicable averages on
the given data, including micro, macro and weighted average as well as samples
average for multilabel data. :issue:`11679` by :user:`Alexander Pacha <apacha>`.
- |Feature| :func:`metrics.average_precision_score` now supports binary
``y_true`` other than ``{0, 1}`` or ``{-1, 1}`` through ``pos_label``
parameter. :issue:`9980` by :user:`Hanmin Qin <qinhanmin2014>`.
- |Feature| :func:`metrics.label_ranking_average_precision_score` now supports
``sample_weight``.
:issue:`10845` by :user:`Jose Perez-Parras Toledano <jopepato>`.
- |Feature| Add ``dense_output`` parameter to :func:`metrics.pairwise.linear_kernel`.
When False and both inputs are sparse, will return a sparse matrix.
:issue:`10999` by :user:`Taylor G Smith <tgsmith61591>`.
- |Efficiency| :func:`metrics.silhouette_score` and
:func:`metrics.silhouette_samples` are more memory efficient and run
faster. This avoids some reported freezes and MemoryErrors.
:issue:`11135` by `Joel Nothman`_.
- |Fix| Fixed a bug in :func:`metrics.precision_recall_fscore_support`
when truncated `range(n_labels)` is passed as value for `labels`.
:issue:`10377` by :user:`Gaurav Dhingra <gxyd>`.
- |Fix| Fixed a bug due to floating point error in
:func:`metrics.roc_auc_score` with non-integer sample weights. :issue:`9786`
by :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed a bug where :func:`metrics.roc_curve` sometimes starts on y-axis
instead of (0, 0), which is inconsistent with the document and other
implementations. Note that this will not influence the result from
:func:`metrics.roc_auc_score` :issue:`10093` by :user:`alexryndin
<alexryndin>` and :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed a bug to avoid integer overflow in
:func:`metrics.mutual_info_score` by casting the product to a 64-bit integer.
:issue:`9772` by :user:`Kumar Ashutosh <thechargedneutron>`.
- |Fix| Fixed a bug where :func:`metrics.average_precision_score` will sometimes return
``nan`` when ``sample_weight`` contains 0.
:issue:`9980` by :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed a bug in :func:`metrics.fowlkes_mallows_score` to avoid integer
overflow. The return value of `contingency_matrix` is cast to `int64`, and the
product of square roots is computed rather than the square root of the product.
:issue:`9515` by :user:`Alan Liddell <aliddell>` and
:user:`Manh Dao <manhdao>`.
- |API| Deprecate ``reorder`` parameter in :func:`metrics.auc` as it's no
longer required for :func:`metrics.roc_auc_score`. Moreover using
``reorder=True`` can hide bugs due to floating point error in the input.
:issue:`9851` by :user:`Hanmin Qin <qinhanmin2014>`.
- |API| In :func:`metrics.normalized_mutual_info_score` and
:func:`metrics.adjusted_mutual_info_score`, warn that
``average_method`` will have a new default value. In version 0.22, the
default normalizer for each will become the *arithmetic* mean of the
entropies of each clustering. Currently,
:func:`metrics.normalized_mutual_info_score` uses the default of
``average_method='geometric'``, and
:func:`metrics.adjusted_mutual_info_score` uses the default of
``average_method='max'`` to match their behaviors in version 0.19.
:issue:`11124` by :user:`Arya McCarthy <aryamccarthy>`.
- |API| The ``batch_size`` parameter to :func:`metrics.pairwise_distances_argmin_min`
and :func:`metrics.pairwise_distances_argmin` is deprecated to be removed in
v0.22. It no longer has any effect, as batch size is determined by global
``working_memory`` config. See :ref:`working_memory`. :issue:`10280` by `Joel
Nothman`_ and :user:`Aman Dalmia <dalmia>`.
:mod:`sklearn.mixture`
......................
- |Feature| Added function :term:`fit_predict` to :class:`mixture.GaussianMixture`
and :class:`mixture.BayesianGaussianMixture`, which is essentially equivalent to
calling :term:`fit` and :term:`predict`. :issue:`10336` by :user:`Shu Haoran
<haoranShu>` and :user:`Andrew Peng <Andrew-peng>`.
- |Fix| Fixed a bug in `mixture.BaseMixture` where the reported `n_iter_` was
missing an iteration. It affected :class:`mixture.GaussianMixture` and
:class:`mixture.BayesianGaussianMixture`. :issue:`10740` by :user:`Erich
Schubert <kno10>` and :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fixed a bug in `mixture.BaseMixture` and its subclasses
:class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`
where the ``lower_bound_`` was not the max lower bound across all
initializations (when ``n_init > 1``), but just the lower bound of the last
initialization. :issue:`10869` by :user:`Aurélien Géron <ageron>`.
:mod:`sklearn.model_selection`
..............................
- |Feature| Add `return_estimator` parameter in
:func:`model_selection.cross_validate` to return estimators fitted on each
split. :issue:`9686` by :user:`Aurélien Bellet <bellet>`.
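With ``return_estimator=True``, the result dictionary gains an ``"estimator"``
entry holding one fitted clone per split (the estimator and ``cv`` value below
are illustrative; ``max_iter`` is raised only to keep the example quiet):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)
cv_results = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=3,
                            return_estimator=True)

# One fitted estimator per CV split
print(len(cv_results["estimator"]))  # 3
```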
- |Feature| New ``refit_time_`` attribute will be stored in
:class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` if ``refit`` is set to ``True``.
This will allow measuring the complete time it takes to perform
hyperparameter optimization and refitting the best model on the whole
dataset. :issue:`11310` by :user:`Matthias Feurer <mfeurer>`.
- |Feature| Expose `error_score` parameter in
:func:`model_selection.cross_validate`,
:func:`model_selection.cross_val_score`,
:func:`model_selection.learning_curve` and
:func:`model_selection.validation_curve` to control the behavior triggered
when an error occurs in `model_selection._fit_and_score`.
:issue:`11576` by :user:`Samuel O. Ronsin <samronsin>`.
- |Feature| `BaseSearchCV` now has an experimental, private interface to
support customized parameter search strategies, through its ``_run_search``
method. See the implementations in :class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` and please provide feedback if
you use this. Note that we do not assure the stability of this API beyond
version 0.20. :issue:`9599` by `Joel Nothman`_
- |Enhancement| Add improved error message in
:func:`model_selection.cross_val_score` when multiple metrics are passed in
``scoring`` keyword. :issue:`11006` by :user:`Ming Li <minggli>`.
- |API| The default number of cross-validation folds ``cv`` and the default
number of splits ``n_splits`` in the :class:`model_selection.KFold`-like
splitters will change from 3 to 5 in 0.22 as 3-fold has a lot of variance.
:issue:`11557` by :user:`Alexandre Boucaud <aboucaud>`.
- |API| The default of ``iid`` parameter of :class:`model_selection.GridSearchCV`
and :class:`model_selection.RandomizedSearchCV` will change from ``True`` to
``False`` in version 0.22 to correspond to the standard definition of
cross-validation, and the parameter will be removed in version 0.24
altogether. This parameter is of greatest practical significance where the
sizes of different test sets in cross-validation were very unequal, i.e. in
group-based CV strategies. :issue:`9085` by :user:`Laurent Direr <ldirer>`
and `Andreas Müller`_.
- |API| The default value of the ``error_score`` parameter in
:class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` will change to ``np.NaN`` in
version 0.22. :issue:`10677` by :user:`Kirill Zhdanovich <Zhdanovich>`.
- |API| Changed ValueError exception raised in
:class:`model_selection.ParameterSampler` to a UserWarning for case where the
class is instantiated with a greater value of ``n_iter`` than the total space
of parameters in the parameter grid. ``n_iter`` now acts as an upper bound on
iterations. :issue:`10982` by :user:`Juliet Lawton <julietcl>`
- |API| Invalid input for :class:`model_selection.ParameterGrid` now
raises TypeError.
:issue:`10928` by :user:`Solutus Immensus <solutusimmensus>`
:mod:`sklearn.multioutput`
..........................
- |MajorFeature| Added :class:`multioutput.RegressorChain` for multi-target
regression. :issue:`9257` by :user:`Kumar Ashutosh <thechargedneutron>`.
:mod:`sklearn.naive_bayes`
..........................
- |MajorFeature| Added :class:`naive_bayes.ComplementNB`, which implements the
Complement Naive Bayes classifier described in Rennie et al. (2003).
:issue:`8190` by :user:`Michael A. Alcorn <airalcorn2>`.
- |Feature| Add `var_smoothing` parameter in :class:`naive_bayes.GaussianNB`
to give a precise control over variances calculation.
:issue:`9681` by :user:`Dmitry Mottl <Mottl>`.
- |Fix| Fixed a bug in :class:`naive_bayes.GaussianNB` which incorrectly
raised an error for a prior list which summed to 1.
:issue:`10005` by :user:`Gaurav Dhingra <gxyd>`.
- |Fix| Fixed a bug in :class:`naive_bayes.MultinomialNB` which did not accept
vector valued pseudocounts (alpha).
:issue:`10346` by :user:`Tobias Madsen <TobiasMadsen>`
:mod:`sklearn.neighbors`
........................
- |Efficiency| :class:`neighbors.RadiusNeighborsRegressor` and
:class:`neighbors.RadiusNeighborsClassifier` are now
parallelized according to ``n_jobs`` regardless of ``algorithm``.
:issue:`10887` by :user:`Joël Billaud <recamshak>`.
- |Efficiency| :mod:`sklearn.neighbors` query methods are now more
memory efficient when ``algorithm='brute'``.
:issue:`11136` by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.
- |Feature| Add ``sample_weight`` parameter to the fit method of
:class:`neighbors.KernelDensity` to enable weighting in kernel density
estimation.
:issue:`4394` by :user:`Samuel O. Ronsin <samronsin>`.
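A sketch of weighted kernel density estimation (data, weights and bandwidth
are illustrative): samples with larger weights pull the estimated density
toward them.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

X = np.array([[0.0], [1.0], [2.0]])
w = np.array([1.0, 1.0, 5.0])  # emphasize the sample at 2.0

kde = KernelDensity(bandwidth=0.5).fit(X, sample_weight=w)
scores = kde.score_samples(np.array([[0.0], [2.0]]))  # log-densities
print(scores[1] > scores[0])  # True: density is higher near the heavier sample
```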
- |Feature| Novelty detection with :class:`neighbors.LocalOutlierFactor`:
Add a ``novelty`` parameter to :class:`neighbors.LocalOutlierFactor`. When
``novelty`` is set to True, :class:`neighbors.LocalOutlierFactor` can then
be used for novelty detection, i.e. predict on new unseen data. Available
prediction methods are ``predict``, ``decision_function`` and
``score_samples``. By default, ``novelty`` is set to ``False``, and only
the ``fit_predict`` method is available.
By :user:`Albert Thomas <albertcthomas>`.
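A minimal sketch of novelty detection mode (training data and query points are
illustrative):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(42)
X_train = rng.normal(size=(100, 2))

# novelty=True enables predict / decision_function / score_samples on new data
lof = LocalOutlierFactor(novelty=True).fit(X_train)

X_new = np.array([[0.0, 0.0], [8.0, 8.0]])
pred = lof.predict(X_new)
print(pred)  # 1 = inlier, -1 = outlier
```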
- |Fix| Fixed a bug in :class:`neighbors.NearestNeighbors` where fitting a
NearestNeighbors model fails when a) the distance metric used is a
callable and b) the input to the NearestNeighbors model is sparse.
:issue:`9579` by :user:`Thomas Kober <tttthomasssss>`.
- |Fix| Fixed a bug so ``predict`` in
:class:`neighbors.RadiusNeighborsRegressor` can handle empty neighbor set
when using non-uniform weights. Also raises a new warning when no neighbors
are found for samples. :issue:`9655` by :user:`Andreas Bjerre-Nielsen
<abjer>`.
- |Fix| |Efficiency| Fixed a bug in ``KDTree`` construction; the fix also
results in faster construction and querying times.
:issue:`11556` by :user:`Jake VanderPlas <jakevdp>`
- |Fix| Fixed a bug in :class:`neighbors.KDTree` and :class:`neighbors.BallTree` where
pickled tree objects would change their type to the super class `BinaryTree`.
:issue:`11774` by :user:`Nicolas Hug <NicolasHug>`.
:mod:`sklearn.neural_network`
.............................
- |Feature| Add `n_iter_no_change` parameter in
`neural_network.BaseMultilayerPerceptron`,
:class:`neural_network.MLPRegressor`, and
:class:`neural_network.MLPClassifier` to give control over
maximum number of epochs to not meet ``tol`` improvement.
:issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`.
- |Fix| Fixed a bug in `neural_network.BaseMultilayerPerceptron`,
:class:`neural_network.MLPRegressor`, and
:class:`neural_network.MLPClassifier`: the number of non-improving epochs
tolerated before stopping, previously hardcoded to 2, is now controlled by
the new ``n_iter_no_change`` parameter and defaults to 10.
:issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`.
- |Fix| Fixed a bug in :class:`neural_network.MLPRegressor` where fitting
quit unexpectedly early due to local minima or fluctuations.
:issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`
:mod:`sklearn.pipeline`
.......................
- |Feature| The ``predict`` method of :class:`pipeline.Pipeline` now passes
keyword arguments on to the pipeline's last estimator, enabling, with caution,
the use of parameters such as ``return_std``.
:issue:`9304` by :user:`Breno Freitas <brenolf>`.
- |API| :class:`pipeline.FeatureUnion` now supports ``'drop'`` as a transformer
to drop features. :issue:`11144` by :user:`Thomas Fan <thomasjpfan>`.
:mod:`sklearn.preprocessing`
............................
- |MajorFeature| Expanded :class:`preprocessing.OneHotEncoder` to allow to
encode categorical string features as a numeric array using a one-hot (or
dummy) encoding scheme, and added :class:`preprocessing.OrdinalEncoder` to
convert to ordinal integers. Those two classes now handle encoding of all
feature types (also handles string-valued features) and derive the
categories based on the unique values in the features instead of the maximum
value in the features. :issue:`9151` and :issue:`10521` by :user:`Vighnesh
Birodkar <vighneshbirodkar>` and `Joris Van den Bossche`_.
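A small sketch of the new string-category handling (the colour data is
illustrative); both encoders derive the category set from the unique values
seen during ``fit``:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

X = np.array([["red"], ["green"], ["red"], ["blue"]])

ohe = OneHotEncoder().fit(X)
print(ohe.categories_)  # categories derived from the unique values, sorted
Xt = ohe.transform(X).toarray()
print(Xt)

oe = OrdinalEncoder().fit(X)
codes = oe.transform(X).ravel()
print(codes)  # [2. 1. 2. 0.]
```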
- |MajorFeature| Added :class:`preprocessing.KBinsDiscretizer` for turning
continuous features into categorical or one-hot encoded
features. :issue:`7668`, :issue:`9647`, :issue:`10195`,
:issue:`10192`, :issue:`11272`, :issue:`11467` and :issue:`11505`.
by :user:`Henry Lin <hlin117>`, `Hanmin Qin`_,
`Tom Dupre la Tour`_ and :user:`Giovanni Giuseppe Costa <ggc87>`.
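A minimal sketch of the discretizer (values, bin count and strategy are
illustrative); with ``strategy='uniform'`` the bin edges split the observed
range into equal-width intervals:

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

X = np.array([[-3.0], [-1.0], [0.5], [2.0], [10.0]])

# 3 equal-width bins over [-3, 10]; edges at roughly -3, 1.33, 5.67, 10
est = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform")
Xt = est.fit_transform(X)
print(Xt.ravel())  # [0. 0. 0. 1. 2.]
```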
- |MajorFeature| Added :class:`preprocessing.PowerTransformer`, which
implements the Yeo-Johnson and Box-Cox power transformations. Power
transformations try to find a set of feature-wise parametric transformations
to approximately map data to a Gaussian distribution centered at zero and
with unit variance. This is useful as a variance-stabilizing transformation
in situations where normality and homoscedasticity are desirable.
:issue:`10210` by :user:`Eric Chang <chang>` and :user:`Maniteja
Nandana <maniteja123>`, and :issue:`11520` by :user:`Nicolas Hug
<nicolashug>`.
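A sketch on synthetic right-skewed data (the distribution is illustrative;
Box-Cox requires strictly positive inputs, while Yeo-Johnson also accepts
zero and negative values). By default the output is also standardized to zero
mean and unit variance:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.RandomState(0)
X = rng.lognormal(size=(1000, 1))  # heavily right-skewed, strictly positive

pt = PowerTransformer(method="box-cox")
Xt = pt.fit_transform(X)
print(round(Xt.mean(), 6), round(Xt.std(), 6))  # ~0.0, ~1.0
```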
- |MajorFeature| NaN values are ignored and handled in the following
preprocessing methods:
:class:`preprocessing.MaxAbsScaler`,
:class:`preprocessing.MinMaxScaler`,
:class:`preprocessing.RobustScaler`,
:class:`preprocessing.StandardScaler`,
:class:`preprocessing.PowerTransformer`,
:class:`preprocessing.QuantileTransformer` classes and
:func:`preprocessing.maxabs_scale`,
:func:`preprocessing.minmax_scale`,
:func:`preprocessing.robust_scale`,
:func:`preprocessing.scale`,
:func:`preprocessing.power_transform`,
:func:`preprocessing.quantile_transform` functions respectively addressed in
issues :issue:`11011`, :issue:`11005`, :issue:`11308`, :issue:`11206`,
:issue:`11306`, and :issue:`10437`.
By :user:`Lucija Gregov <LucijaGregov>` and
:user:`Guillaume Lemaitre <glemaitre>`.
- |Feature| :class:`preprocessing.PolynomialFeatures` now supports sparse
input. :issue:`10452` by :user:`Aman Dalmia <dalmia>` and `Joel Nothman`_.
- |Feature| :class:`preprocessing.RobustScaler` and
:func:`preprocessing.robust_scale` can be fitted using sparse matrices.
:issue:`11308` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Feature| :class:`preprocessing.OneHotEncoder` now supports the
`get_feature_names` method to obtain the transformed feature names.
:issue:`10181` by :user:`Nirvan Anjirbag <Nirvan101>` and
`Joris Van den Bossche`_.
- |Feature| A parameter ``check_inverse`` was added to
:class:`preprocessing.FunctionTransformer` to ensure that ``func`` and
``inverse_func`` are the inverse of each other.
:issue:`9399` by :user:`Guillaume Lemaitre <glemaitre>`.
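A minimal sketch of the check (the function pair is illustrative): with
``check_inverse=True``, ``fit`` verifies on a subset of the data that
``inverse_func(func(X))`` round-trips back to ``X``, warning otherwise.

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# log1p and expm1 are exact inverses, so the check passes silently
ft = FunctionTransformer(func=np.log1p, inverse_func=np.expm1,
                         check_inverse=True, validate=True)
X = np.array([[0.0], [1.0], [10.0]])
Xt = ft.fit_transform(X)
X_back = ft.inverse_transform(Xt)
print(np.allclose(X, X_back))  # True
```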
- |Feature| The ``transform`` method of :class:`sklearn.preprocessing.MultiLabelBinarizer`
now ignores any unknown classes. A warning is raised listing the unknown
classes that were ignored.
:issue:`10913` by :user:`Rodrigo Agundez <rragundez>`.
- |Fix| Fixed bugs in :class:`preprocessing.LabelEncoder` which would
sometimes throw errors when ``transform`` or ``inverse_transform`` was called
with empty arrays. :issue:`10458` by :user:`Mayur Kulkarni <maykulkarni>`.
- |Fix| Fix ValueError in :class:`preprocessing.LabelEncoder` when using
``inverse_transform`` on unseen labels. :issue:`9816` by :user:`Charlie Newey
<newey01c>`.
- |Fix| Fix bug in :class:`preprocessing.OneHotEncoder` which discarded the
``dtype`` when returning a sparse matrix output.
:issue:`11042` by :user:`Daniel Morales <DanielMorales9>`.
- |Fix| Fix ``fit`` and ``partial_fit`` in
:class:`preprocessing.StandardScaler` in the rare case when ``with_mean=False``
and ``with_std=False`` which was crashing by calling ``fit`` more than once and
giving inconsistent results for ``mean_`` whether the input was a sparse or a
dense matrix. ``mean_`` will be set to ``None`` with both sparse and dense
inputs. ``n_samples_seen_`` will be also reported for both input types.
:issue:`11235` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| Deprecate ``n_values`` and ``categorical_features`` parameters and
``active_features_``, ``feature_indices_`` and ``n_values_`` attributes
of :class:`preprocessing.OneHotEncoder`. The ``n_values`` parameter can be
replaced with the new ``categories`` parameter, and the attributes with the
new ``categories_`` attribute. Selecting the categorical features with
the ``categorical_features`` parameter is now better supported using the
:class:`compose.ColumnTransformer`.
:issue:`10521` by `Joris Van den Bossche`_.
- |API| Deprecate `preprocessing.Imputer` and move
the corresponding module to :class:`impute.SimpleImputer`.
:issue:`9726` by :user:`Kumar Ashutosh
<thechargedneutron>`.
- |API| The ``axis`` parameter that was in
`preprocessing.Imputer` is no longer present in
:class:`impute.SimpleImputer`. The behavior is equivalent
to ``axis=0`` (impute along columns). Row-wise
imputation can be performed with FunctionTransformer
(e.g., ``FunctionTransformer(lambda X:
SimpleImputer().fit_transform(X.T).T)``). :issue:`10829`
by :user:`Guillaume Lemaitre <glemaitre>` and
:user:`Gilberto Olimpio <gilbertoolimpio>`.
- |API| The NaN marker for the missing values has been changed
between the `preprocessing.Imputer` and the
`impute.SimpleImputer`.
``missing_values='NaN'`` should now be
``missing_values=np.nan``. :issue:`11211` by
:user:`Jeremie du Boisberranger <jeremiedbb>`.
- |API| In :class:`preprocessing.FunctionTransformer`, the default of
``validate`` will change from ``True`` to ``False`` in 0.22.
:issue:`10655` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.svm`
..................
- |Fix| Fixed a bug in :class:`svm.SVC` where when the argument ``kernel`` is
unicode in Python2, the ``predict_proba`` method was raising an
unexpected TypeError given dense inputs.
:issue:`10412` by :user:`Jiongyan Zhang <qmick>`.
- |API| Deprecate ``random_state`` parameter in :class:`svm.OneClassSVM` as
the underlying implementation is not random.
:issue:`9497` by :user:`Albert Thomas <albertcthomas>`.
- |API| The default value of ``gamma`` parameter of :class:`svm.SVC`,
:class:`~svm.NuSVC`, :class:`~svm.SVR`, :class:`~svm.NuSVR`,
:class:`~svm.OneClassSVM` will change from ``'auto'`` to ``'scale'`` in
version 0.22 to account better for unscaled features. :issue:`8361` by
:user:`Gaurav Dhingra <gxyd>` and :user:`Ting Neo <neokt>`.
:mod:`sklearn.tree`
...................
- |Enhancement| Although private (and hence not assured API stability),
`tree._criterion.ClassificationCriterion` and
`tree._criterion.RegressionCriterion` may now be cimported and
extended. :issue:`10325` by :user:`Camil Staps <camilstaps>`.
- |Fix| Fixed a bug in `tree.BaseDecisionTree` with `splitter="best"`
where split threshold could become infinite when values in X were
near infinite. :issue:`10536` by :user:`Jonathan Ohayon <Johayon>`.
- |Fix| Fixed a bug in `tree.MAE` to ensure sample weights are being
used during the calculation of tree MAE impurity. Previous behaviour could
cause suboptimal splits to be chosen since the impurity calculation
considered all samples to be of equal weight importance.
:issue:`11464` by :user:`John Stott <JohnStott>`.
:mod:`sklearn.utils`
....................
- |Feature| :func:`utils.check_array` and :func:`utils.check_X_y` now have
``accept_large_sparse`` to control whether scipy.sparse matrices with 64-bit
indices should be rejected.
:issue:`11327` by :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.
- |Efficiency| |Fix| Avoid copying the data in :func:`utils.check_array` when
the input data is a memmap (and ``copy=False``). :issue:`10663` by
:user:`Arthur Mensch <arthurmensch>` and :user:`Loïc Estève <lesteve>`.
- |API| :func:`utils.check_array` yields a ``FutureWarning`` indicating
that arrays of bytes/strings will be interpreted as decimal numbers
beginning in version 0.22. :issue:`10229` by :user:`Ryan Lee <rtlee9>`
Multiple modules
................
- |Feature| |API| More consistent outlier detection API:
Add a ``score_samples`` method in :class:`svm.OneClassSVM`,
:class:`ensemble.IsolationForest`, :class:`neighbors.LocalOutlierFactor`,
:class:`covariance.EllipticEnvelope`. It gives access to the raw scoring
functions from the original papers. A new ``offset_`` parameter links the
``score_samples`` and ``decision_function`` methods.
The ``contamination`` parameter of the :class:`ensemble.IsolationForest` and
:class:`neighbors.LocalOutlierFactor` ``decision_function`` methods is used
to define this ``offset_`` such that outliers (resp. inliers) have negative
(resp. positive) ``decision_function`` values. By default, ``contamination``
is kept unchanged at 0.1 for a deprecation period. In 0.22, it will be set to "auto",
thus using method-specific score offsets.
In :class:`covariance.EllipticEnvelope` ``decision_function`` method, the
``raw_values`` parameter is deprecated as the shifted Mahalanobis distance
will be always returned in 0.22. :issue:`9015` by `Nicolas Goix`_.
- |Feature| |API| A ``behaviour`` parameter has been introduced in :class:`ensemble.IsolationForest`
to ensure backward compatibility.
In the old behaviour, the ``decision_function`` is independent of the ``contamination``
parameter. A threshold attribute depending on the ``contamination`` parameter is thus
used.
In the new behaviour the ``decision_function`` is dependent on the ``contamination``
parameter, in such a way that 0 becomes its natural threshold to detect outliers.
Setting ``behaviour`` to "old" is deprecated and will not be possible in
version 0.22. In addition, the ``behaviour`` parameter will be removed in 0.24.
:issue:`11553` by `Nicolas Goix`_.
- |API| Added convergence warning to :class:`svm.LinearSVC` and
:class:`linear_model.LogisticRegression` when ``verbose`` is set to 0.
:issue:`10881` by :user:`Alexandre Sevin <AlexandreSev>`.
- |API| Changed warning type from :class:`UserWarning` to
:class:`exceptions.ConvergenceWarning` for failing convergence in
`linear_model.logistic_regression_path`,
:class:`linear_model.RANSACRegressor`, :func:`linear_model.ridge_regression`,
:class:`gaussian_process.GaussianProcessRegressor`,
:class:`gaussian_process.GaussianProcessClassifier`,
:func:`decomposition.fastica`, :class:`cross_decomposition.PLSCanonical`,
:class:`cluster.AffinityPropagation`, and :class:`cluster.Birch`.
:issue:`10306` by :user:`Jonathan Siebert <jotasi>`.
Miscellaneous
.............
- |MajorFeature| A new configuration parameter, ``working_memory`` was added
to control memory consumption limits in chunked operations, such as the new
:func:`metrics.pairwise_distances_chunked`. See :ref:`working_memory`.
:issue:`10280` by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.
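A minimal sketch of the new knob (array size and the 64 MiB limit are
illustrative): chunked operations size their temporary blocks to stay within
the configured working memory.

```python
import numpy as np
import sklearn
from sklearn.metrics import pairwise_distances_chunked

X = np.random.RandomState(0).rand(500, 3)

# Limit temporary memory used by chunked pairwise operations to 64 MiB
with sklearn.config_context(working_memory=64):
    # Each yielded chunk covers a contiguous block of rows of the
    # full 500 x 500 distance matrix
    n_rows = sum(chunk.shape[0] for chunk in pairwise_distances_chunked(X))
print(n_rows)  # 500
```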
- |Feature| The version of :mod:`joblib` bundled with Scikit-learn is now 0.12.
This uses a new default multiprocessing implementation, named `loky
<https://github.com/tomMoral/loky>`_. While this may incur some memory and
communication overhead, it should provide greater cross-platform stability
than relying on Python standard library multiprocessing. :issue:`11741` by
the Joblib developers, especially :user:`Thomas Moreau <tomMoral>` and
`Olivier Grisel`_.
- |Feature| An environment variable to use the site joblib instead of the
vendored one was added (:ref:`environment_variable`). The main API of joblib
is now exposed in :mod:`sklearn.utils`.
:issue:`11166` by `Gael Varoquaux`_.
- |Feature| Add almost complete PyPy 3 support. Known unsupported
functionalities are :func:`datasets.load_svmlight_file`,
:class:`feature_extraction.FeatureHasher` and
:class:`feature_extraction.text.HashingVectorizer`. For running on PyPy,
PyPy3-v5.10+, Numpy 1.14.0+, and scipy 1.1.0+ are required.
:issue:`11010` by :user:`Ronan Lamy <rlamy>` and `Roman Yurchak`_.
- |Feature| A utility method :func:`sklearn.show_versions()` was added to
print out information relevant for debugging. It includes the user system,
the Python executable, the version of the main libraries and BLAS binding
information. :issue:`11596` by :user:`Alexandre Boucaud <aboucaud>`
- |Fix| Fixed a bug when setting parameters on meta-estimator, involving both
a wrapped estimator and its parameter. :issue:`9999` by :user:`Marcus Voss
<marcus-voss>` and `Joel Nothman`_.
- |Fix| Fixed a bug where calling :func:`sklearn.base.clone` was not thread
safe and could result in a "pop from empty list" error. :issue:`9569`
by `Andreas Müller`_.
- |API| The default value of ``n_jobs`` is changed from ``1`` to ``None`` in
all related functions and classes. ``n_jobs=None`` means ``unset``. It will
generally be interpreted as ``n_jobs=1``, unless the current
``joblib.Parallel`` backend context specifies otherwise (See
:term:`Glossary <n_jobs>` for additional information). Note that this change
happens immediately (i.e., without a deprecation cycle).
:issue:`11741` by `Olivier Grisel`_.
- |Fix| Fixed a bug in validation helpers where passing a Dask DataFrame results
in an error. :issue:`12462` by :user:`Zachariah Miller <zwmiller>`
Changes to estimator checks
---------------------------
These changes mostly affect library developers.
- Checks for transformers now apply if the estimator implements
:term:`transform`, regardless of whether it inherits from
:class:`sklearn.base.TransformerMixin`. :issue:`10474` by `Joel Nothman`_.
- Classifiers are now checked for consistency between :term:`decision_function`
and categorical predictions.
:issue:`10500` by :user:`Narine Kokhlikyan <NarineK>`.
- Allow tests in :func:`utils.estimator_checks.check_estimator` to test functions
that accept pairwise data.
:issue:`9701` by :user:`Kyle Johnson <gkjohns>`
- Allow :func:`utils.estimator_checks.check_estimator` to check that there are
no private settings apart from parameters during estimator initialization.
:issue:`9378` by :user:`Herilalaina Rakotoarison <herilalaina>`
- The set of checks in :func:`utils.estimator_checks.check_estimator` now includes a
``check_set_params`` test which checks that ``set_params`` is equivalent to
passing parameters in ``__init__`` and warns if it encounters parameter
validation. :issue:`7738` by :user:`Alvin Chiang <absolutelyNoWarranty>`.
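A sketch of running the common test suite, which includes this ``check_set_params`` test, on an estimator instance (recent scikit-learn versions expect an instance rather than a class; the estimator choice is illustrative):

```python
# Sketch: check_estimator runs the common checks and raises if any fail.
from sklearn.linear_model import LogisticRegression
from sklearn.utils.estimator_checks import check_estimator

est = LogisticRegression()
check_estimator(est)  # raises AssertionError on the first failing check

# set_params is equivalent to passing the parameter to __init__,
# which is essentially what check_set_params verifies:
assert est.set_params(C=0.5).get_params()["C"] == 0.5
assert LogisticRegression(C=0.5).get_params()["C"] == 0.5
```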
- Add invariance tests for clustering metrics. :issue:`8102` by :user:`Ankita
Sinha <anki08>` and :user:`Guillaume Lemaitre <glemaitre>`.
- Add ``check_methods_subset_invariance`` to
:func:`~utils.estimator_checks.check_estimator`, which checks that
estimator methods are invariant if applied to a data subset.
:issue:`10428` by :user:`Jonathan Ohayon <Johayon>`.
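The invariance being checked can be stated directly: applying a method to a subset of rows must give the same result as subsetting the full output. A sketch with an arbitrary estimator and subset:

```python
# Sketch of method subset invariance: predict on a row subset must equal
# the corresponding rows of the full prediction.
import numpy as np

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=50, random_state=0)
est = LogisticRegression().fit(X, y)

mask = np.arange(50) % 2 == 0          # an arbitrary subset of samples
subset_then_predict = est.predict(X[mask])
predict_then_subset = est.predict(X)[mask]

assert np.array_equal(subset_then_predict, predict_then_subset)
```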
- Add tests in :func:`utils.estimator_checks.check_estimator` to check that an
estimator can handle read-only memmap input data. :issue:`10663` by
:user:`Arthur Mensch <arthurmensch>` and :user:`Loïc Estève <lesteve>`.
- ``check_sample_weights_pandas_series`` now uses 8 rather than 6 samples
to accommodate the default number of clusters in :class:`cluster.KMeans`.
:issue:`10933` by :user:`Johannes Hansen <jnhansen>`.
- Estimators are now checked for whether ``sample_weight=None`` equates to
``sample_weight=np.ones(...)``.
:issue:`11558` by :user:`Sergul Aydore <sergulaydore>`.
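A sketch of the property now checked, with an arbitrary estimator: fitting with ``sample_weight=None`` must be equivalent to fitting with unit weights.

```python
# Sketch: sample_weight=None and sample_weight=np.ones(n_samples) must
# produce the same fitted model.
import numpy as np

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=30, n_features=4, random_state=0)

coef_none = Ridge(alpha=1.0).fit(X, y).coef_
coef_ones = Ridge(alpha=1.0).fit(X, y, sample_weight=np.ones(30)).coef_

assert np.allclose(coef_none, coef_ones)
```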
Code and Documentation Contributors
-----------------------------------
Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 0.19, including:
211217613, Aarshay Jain, absolutelyNoWarranty, Adam Greenhall, Adam Kleczewski,
Adam Richie-Halford, adelr, AdityaDaflapurkar, Adrin Jalali, Aidan Fitzgerald,
aishgrt1, Akash Shivram, Alan Liddell, Alan Yee, Albert Thomas, Alexander
Lenail, Alexander-N, Alexandre Boucaud, Alexandre Gramfort, Alexandre Sevin,
Alex Egg, Alvaro Perez-Diaz, Amanda, Aman Dalmia, Andreas Bjerre-Nielsen,
Andreas Mueller, Andrew Peng, Angus Williams, Aniruddha Dave, annaayzenshtat,
Anthony Gitter, Antonio Quinonez, Anubhav Marwaha, Arik Pamnani, Arthur Ozga,
Artiem K, Arunava, Arya McCarthy, Attractadore, Aurélien Bellet, Aurélien
Geron, Ayush Gupta, Balakumaran Manoharan, Bangda Sun, Barry Hart, Bastian
Venthur, Ben Lawson, Benn Roth, Breno Freitas, Brent Yi, brett koonce, Caio
Oliveira, Camil Staps, cclauss, Chady Kamar, Charlie Brummitt, Charlie Newey,
chris, Chris, Chris Catalfo, Chris Foster, Chris Holdgraf, Christian Braune,
Christian Hirsch, Christian Hogan, Christopher Jenness, Clement Joudet, cnx,
cwitte, Dallas Card, Dan Barkhorn, Daniel, Daniel Ferreira, Daniel Gomez,
Daniel Klevebring, Danielle Shwed, Daniel Mohns, Danil Baibak, Darius Morawiec,
David Beach, David Burns, David Kirkby, David Nicholson, David Pickup, Derek,
Didi Bar-Zev, diegodlh, Dillon Gardner, Dillon Niederhut, dilutedsauce,
dlovell, Dmitry Mottl, Dmitry Petrov, Dor Cohen, Douglas Duhaime, Ekaterina
Tuzova, Eric Chang, Eric Dean Sanchez, Erich Schubert, Eunji, Fang-Chieh Chou,
FarahSaeed, felix, Félix Raimundo, fenx, filipj8, FrankHui, Franz Wompner,
Freija Descamps, frsi, Gabriele Calvo, Gael Varoquaux, Gaurav Dhingra, Georgi
Peev, Gil Forsyth, Giovanni Giuseppe Costa, gkevinyen5418, goncalo-rodrigues,
Gryllos Prokopis, Guillaume Lemaitre, Guillaume "Vermeille" Sanchez, Gustavo De
Mari Pereira, hakaa1, Hanmin Qin, Henry Lin, Hong, Honghe, Hossein Pourbozorg,
Hristo, Hunan Rostomyan, iampat, Ivan PANICO, Jaewon Chung, Jake VanderPlas,
jakirkham, James Bourbeau, James Malcolm, Jamie Cox, Jan Koch, Jan Margeta, Jan
Schlüter, janvanrijn, Jason Wolosonovich, JC Liu, Jeb Bearer, jeremiedbb, Jimmy
Wan, Jinkun Wang, Jiongyan Zhang, jjabl, jkleint, Joan Massich, Joël Billaud,
Joel Nothman, Johannes Hansen, JohnStott, Jonatan Samoocha, Jonathan Ohayon,
Jörg Döpfert, Joris Van den Bossche, Jose Perez-Parras Toledano, josephsalmon,
jotasi, jschendel, Julian Kuhlmann, Julien Chaumond, julietcl, Justin Shenk,
Karl F, Kasper Primdal Lauritzen, Katrin Leinweber, Kirill, ksemb, Kuai Yu,
Kumar Ashutosh, Kyeongpil Kang, Kye Taylor, kyledrogo, Leland McInnes, Léo DS,
Liam Geron, Liutong Zhou, Lizao Li, lkjcalc, Loic Esteve, louib, Luciano Viola,
Lucija Gregov, Luis Osa, Luis Pedro Coelho, Luke M Craig, Luke Persola, Mabel,
Mabel Villalba, Maniteja Nandana, MarkIwanchyshyn, Mark Roth, Markus Müller,
MarsGuy, Martin Gubri, martin-hahn, martin-kokos, mathurinm, Matthias Feurer,
Max Copeland, Mayur Kulkarni, Meghann Agarwal, Melanie Goetz, Michael A.
Alcorn, Minghui Liu, Ming Li, Minh Le, Mohamed Ali Jamaoui, Mohamed Maskani,
Mohammad Shahebaz, Muayyad Alsadi, Nabarun Pal, Nagarjuna Kumar, Naoya Kanai,
Narendran Santhanam, NarineK, Nathaniel Saul, Nathan Suh, Nicholas Nadeau,
P.Eng., AVS, Nick Hoh, Nicolas Goix, Nicolas Hug, Nicolau Werneck,
nielsenmarkus11, Nihar Sheth, Nikita Titov, Nilesh Kevlani, Nirvan Anjirbag,
notmatthancock, nzw, Oleksandr Pavlyk, oliblum90, Oliver Rausch, Olivier
Grisel, Oren Milman, Osaid Rehman Nasir, pasbi, Patrick Fernandes, Patrick
Olden, Paul Paczuski, Pedro Morales, Peter, Peter St. John, pierreablin,
pietruh, Pinaki Nath Chowdhury, Piotr Szymański, Pradeep Reddy Raamana, Pravar
D Mahajan, pravarmahajan, QingYing Chen, Raghav RV, Rajendra arora,
RAKOTOARISON Herilalaina, Rameshwar Bhaskaran, RankyLau, Rasul Kerimov,
Reiichiro Nakano, Rob, Roman Kosobrodov, Roman Yurchak, Ronan Lamy, rragundez,
Rüdiger Busche, Ryan, Sachin Kelkar, Sagnik Bhattacharya, Sailesh Choyal, Sam
Radhakrishnan, Sam Steingold, Samuel Bell, Samuel O. Ronsin, Saqib Nizam
Shamsi, SATISH J, Saurabh Gupta, Scott Gigante, Sebastian Flennerhag, Sebastian
Raschka, Sebastien Dubois, Sébastien Lerique, Sebastin Santy, Sergey Feldman,
Sergey Melderis, Sergul Aydore, Shahebaz, Shalil Awaley, Shangwu Yao, Sharad
Vijalapuram, Sharan Yalburgi, shenhanc78, Shivam Rastogi, Shu Haoran, siftikha,
Sinclert Pérez, SolutusImmensus, Somya Anand, srajan paliwal, Sriharsha Hatwar,
Sri Krishna, Stefan van der Walt, Stephen McDowell, Steven Brown, syonekura,
Taehoon Lee, Takanori Hayashi, tarcusx, Taylor G Smith, theriley106, Thomas,
Thomas Fan, Thomas Heavey, Tobias Madsen, tobycheese, Tom Augspurger, Tom Dupré
la Tour, Tommy, Trevor Stephens, Trishnendu Ghorai, Tulio Casagrande,
twosigmajab, Umar Farouk Umar, Urvang Patel, Utkarsh Upadhyay, Vadim
Markovtsev, Varun Agrawal, Vathsala Achar, Vilhelm von Ehrenheim, Vinayak
Mehta, Vinit, Vinod Kumar L, Viraj Mavani, Viraj Navkal, Vivek Kumar, Vlad
Niculae, vqean3, Vrishank Bhardwaj, vufg, wallygauze, Warut Vijitbenjaronk,
wdevazelhes, Wenhao Zhang, Wes Barnett, Will, William de Vazelhes, Will
Rosenfeld, Xin Xiong, Yiming (Paul) Li, ymazari, Yufeng, Zach Griffith, Zé
Vinícius, Zhenqing Hu, Zhiqing Xiao, Zijie (ZJ) Poh | scikit-learn | include contributors rst currentmodule sklearn Version 0 20 warning Version 0 20 is the last version of scikit learn to support Python 2 7 and Python 3 4 Scikit learn 0 21 will require Python 3 5 or higher include changelog legend inc changes 0 20 4 Version 0 20 4 July 30 2019 This is a bug fix release with some bug fixes applied to version 0 20 3 Changelog The bundled version of joblib was upgraded from 0 13 0 to 0 13 2 mod sklearn cluster Fix Fixed a bug in class cluster KMeans where KMeans initialisation could rarely result in an IndexError issue 11756 by Joel Nothman mod sklearn compose Fix Fixed an issue in class compose ColumnTransformer where using DataFrames whose column order differs between func fit and func transform could lead to silently passing incorrect columns to the remainder transformer pr 14237 by Andreas Schuderer schuderer mod sklearn decomposition Fix Fixed a bug in class cross decomposition CCA improving numerical stability when Y is close to zero pr 13903 by Thomas Fan mod sklearn model selection Fix Fixed a bug where class model selection StratifiedKFold shuffles each class s samples with the same random state making shuffle True ineffective issue 13124 by user Hanmin Qin qinhanmin2014 mod sklearn neighbors Fix Fixed a bug in class neighbors KernelDensity which could not be restored from a pickle if sample weight had been used issue 13772 by user Aditya Vyas aditya1702 changes 0 20 3 Version 0 20 3 March 1 2019 This is a bug fix release with some minor documentation improvements and enhancements to features released in 0 20 0 Changelog mod sklearn cluster Fix Fixed a bug in class cluster KMeans where computation was single threaded when n jobs 1 or n jobs 1 issue 12949 by user Prabakaran Kumaresshan nixphix mod sklearn compose Fix Fixed a bug in class compose ColumnTransformer to handle negative indexes in the columns list of the transformers issue 12946 by user Pierre 
Tallotte pierretallotte mod sklearn covariance Fix Fixed a regression in func covariance graphical lasso so that the case n features 2 is handled correctly issue 13276 by user Aur lien Bellet bellet mod sklearn decomposition Fix Fixed a bug in func decomposition sparse encode where computation was single threaded when n jobs 1 or n jobs 1 issue 13005 by user Prabakaran Kumaresshan nixphix mod sklearn datasets Efficiency func sklearn datasets fetch openml now loads data by streaming avoiding high memory usage issue 13312 by Joris Van den Bossche mod sklearn feature extraction Fix Fixed a bug in class feature extraction text CountVectorizer which would result in the sparse feature matrix having conflicting indptr and indices precisions under very large vocabularies issue 11295 by user Gabriel Vacaliuc gvacaliuc mod sklearn impute Fix add support for non numeric data in class sklearn impute MissingIndicator which was not supported while class sklearn impute SimpleImputer was supporting this for some imputation strategies issue 13046 by user Guillaume Lemaitre glemaitre mod sklearn linear model Fix Fixed a bug in class linear model MultiTaskElasticNet and class linear model MultiTaskLasso which were breaking when warm start True issue 12360 by user Aakanksha Joshi joaak mod sklearn preprocessing Fix Fixed a bug in class preprocessing KBinsDiscretizer where strategy kmeans fails with an error during transformation due to unsorted bin edges issue 13134 by user Sandro Casagrande SandroCasagrande Fix Fixed a bug in class preprocessing OneHotEncoder where the deprecation of categorical features was handled incorrectly in combination with handle unknown ignore issue 12881 by Joris Van den Bossche Fix Bins whose width are too small i e 1e 8 are removed with a warning in class preprocessing KBinsDiscretizer issue 13165 by user Hanmin Qin qinhanmin2014 mod sklearn svm FIX Fixed a bug in class svm SVC class svm NuSVC class svm SVR class svm NuSVR and class svm OneClassSVM where 
the scale option of parameter gamma is erroneously defined as 1 n features X std It s now defined as 1 n features X var issue 13221 by user Hanmin Qin qinhanmin2014 Code and Documentation Contributors With thanks to Adrin Jalali Agamemnon Krasoulis Albert Thomas Andreas Mueller Aur lien Bellet bertrandhaut Bharat Raghunathan Dowon Emmanuel Arias Fibinse Xavier Finn O Shea Gabriel Vacaliuc Gael Varoquaux Guillaume Lemaitre Hanmin Qin joaak Joel Nothman Joris Van den Bossche J r mie M hault kms15 Kossori Aruku Lakshya KD maikia Manuel L pez Ib ez Marco Gorelli MarcoGorelli mferrari3 Micka l Schoentgen Nicolas Hug pavlos kallis Pierre Glaser pierretallotte Prabakaran Kumaresshan Reshama Shaikh Rohit Kapoor Roman Yurchak SandroCasagrande Tashay Green Thomas Fan Vishaal Kapoor Zhuyi Xue Zijie ZJ Poh changes 0 20 2 Version 0 20 2 December 20 2018 This is a bug fix release with some minor documentation improvements and enhancements to features released in 0 20 0 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures mod sklearn neighbors when metric jaccard bug fix use of seuclidean or mahalanobis metrics in some cases bug fix Changelog mod sklearn compose Fix Fixed an issue in func compose make column transformer which raises unexpected error when columns is pandas Index or pandas Series issue 12704 by user Hanmin Qin qinhanmin2014 mod sklearn metrics Fix Fixed a bug in func metrics pairwise distances and func metrics pairwise distances chunked where parameters V of seuclidean and VI of mahalanobis metrics were computed after the data was split into chunks instead of being pre computed on whole data issue 12701 by user Jeremie du Boisberranger jeremiedbb mod sklearn neighbors Fix Fixed sklearn neighbors DistanceMetric jaccard distance function to return 0 when 
two all zero vectors are compared issue 12685 by user Thomas Fan thomasjpfan mod sklearn utils Fix Calling func utils check array on pandas Series with categorical data which raised an error in 0 20 0 now returns the expected output again issue 12699 by Joris Van den Bossche Code and Documentation Contributors With thanks to adanhawth Adrin Jalali Albert Thomas Andreas Mueller Dan Stine Feda Curic Hanmin Qin Jan S jeremiedbb Joel Nothman Joris Van den Bossche josephsalmon Katrin Leinweber Loic Esteve Muhammad Hassaan Rafique Nicolas Hug Olivier Grisel Paul Paczuski Reshama Shaikh Sam Waterbury Shivam Kotwalia Thomas Fan changes 0 20 1 Version 0 20 1 November 21 2018 This is a bug fix release with some minor documentation improvements and enhancements to features released in 0 20 0 Note that we also include some API changes in this release so you might get some extra warnings after updating from 0 20 0 to 0 20 1 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures class decomposition IncrementalPCA bug fix Changelog mod sklearn cluster Efficiency make class cluster MeanShift no longer try to do nested parallelism as the overhead would hurt performance significantly when n jobs 1 issue 12159 by user Olivier Grisel ogrisel Fix Fixed a bug in class cluster DBSCAN with precomputed sparse neighbors graph which would add explicitly zeros on the diagonal even when already present issue 12105 by Tom Dupre la Tour mod sklearn compose Fix Fixed an issue in class compose ColumnTransformer when stacking columns with types not convertible to a numeric issue 11912 by user Adrin Jalali adrinjalali API class compose ColumnTransformer now applies the sparse threshold even if all transformation results are sparse issue 12304 by Andreas M ller API func compose make column 
transformer now expects transformer columns instead of columns transformer to keep consistent with class compose ColumnTransformer issue 12339 by user Adrin Jalali adrinjalali mod sklearn datasets Fix func datasets fetch openml to correctly use the local cache issue 12246 by user Jan N van Rijn janvanrijn Fix func datasets fetch openml to correctly handle ignore attributes and row id attributes issue 12330 by user Jan N van Rijn janvanrijn Fix Fixed integer overflow in func datasets make classification for values of n informative parameter larger than 64 issue 10811 by user Roman Feldbauer VarIr Fix Fixed olivetti faces dataset DESCR attribute to point to the right location in func datasets fetch olivetti faces issue 12441 by user J r mie du Boisberranger jeremiedbb Fix func datasets fetch openml to retry downloading when reading from local cache fails issue 12517 by user Thomas Fan thomasjpfan mod sklearn decomposition Fix Fixed a regression in class decomposition IncrementalPCA where 0 20 0 raised an error if the number of samples in the final batch for fitting IncrementalPCA was smaller than n components issue 12234 by user Ming Li minggli mod sklearn ensemble Fix Fixed a bug mostly affecting class ensemble RandomForestClassifier where class weight balanced subsample failed with more than 32 classes issue 12165 by Joel Nothman Fix Fixed a bug affecting class ensemble BaggingClassifier class ensemble BaggingRegressor and class ensemble IsolationForest where max features was sometimes rounded down to zero issue 12388 by user Connor Tann Connossor mod sklearn feature extraction Fix Fixed a regression in v0 20 0 where func feature extraction text CountVectorizer and other text vectorizers could error during stop words validation with custom preprocessors or tokenizers issue 12393 by Roman Yurchak mod sklearn linear model Fix class linear model SGDClassifier and variants with early stopping True would not use a consistent validation split in the multiclass case and 
this would cause a crash when using those estimators as part of parallel parameter search or cross validation issue 12122 by user Olivier Grisel ogrisel Fix Fixed a bug affecting class linear model SGDClassifier in the multiclass case Each one versus all step is run in a class joblib Parallel call and mutating a common parameter causing a segmentation fault if called within a backend using processes and not threads We now use require sharedmem at the class joblib Parallel instance creation issue 12518 by user Pierre Glaser pierreglaser and user Olivier Grisel ogrisel mod sklearn metrics Fix Fixed a bug in metrics pairwise pairwise distances argmin min which returned the square root of the distance when the metric parameter was set to euclidean issue 12481 by user J r mie du Boisberranger jeremiedbb Fix Fixed a bug in metrics pairwise pairwise distances chunked which didn t ensure the diagonal is zero for euclidean distances issue 12612 by user Andreas M ller amueller API The metrics calinski harabaz score has been renamed to func metrics calinski harabasz score and will be removed in version 0 23 issue 12211 by user Lisa Thomas LisaThomas9 user Mark Hannel markhannel and user Melissa Ferrari mferrari3 mod sklearn mixture Fix Ensure that the fit predict method of class mixture GaussianMixture and class mixture BayesianGaussianMixture always yield assignments consistent with fit followed by predict even if the convergence criterion is too loose or not met issue 12451 by user Olivier Grisel ogrisel mod sklearn neighbors Fix force the parallelism backend to code threading for class neighbors KDTree and class neighbors BallTree in Python 2 7 to avoid pickling errors caused by the serialization of their methods issue 12171 by user Thomas Moreau tomMoral mod sklearn preprocessing Fix Fixed bug in class preprocessing OrdinalEncoder when passing manually specified categories issue 12365 by Joris Van den Bossche Fix Fixed bug in class preprocessing KBinsDiscretizer where the 
transform method mutates the encoder attribute The transform method is now thread safe issue 12514 by user Hanmin Qin qinhanmin2014 Fix Fixed a bug in class preprocessing PowerTransformer where the Yeo Johnson transform was incorrect for lambda parameters outside of 0 2 issue 12522 by user Nicolas Hug NicolasHug Fix Fixed a bug in class preprocessing OneHotEncoder where transform failed when set to ignore unknown numpy strings of different lengths issue 12471 by user Gabriel Marzinotto GMarzinotto API The default value of the code method argument in func preprocessing power transform will be changed from code box cox to code yeo johnson to match class preprocessing PowerTransformer in version 0 23 A FutureWarning is raised when the default value is used issue 12317 by user Eric Chang chang mod sklearn utils Fix Use float64 for mean accumulator to avoid floating point precision issues in class preprocessing StandardScaler and class decomposition IncrementalPCA when using float32 datasets issue 12338 by user bauks bauks Fix Calling func utils check array on pandas Series which raised an error in 0 20 0 now returns the expected output again issue 12625 by Andreas M ller Miscellaneous Fix When using site joblib by setting the environment variable SKLEARN SITE JOBLIB added compatibility with joblib 0 11 in addition to 0 12 issue 12350 by Joel Nothman and Roman Yurchak Fix Make sure to avoid raising FutureWarning when calling np vstack with numpy 1 16 and later use list comprehensions instead of generator expressions in many locations of the scikit learn code base issue 12467 by user Olivier Grisel ogrisel API Removed all mentions of sklearn externals joblib and deprecated joblib methods exposed in sklearn utils except for func utils parallel backend and func utils register parallel backend which allow users to configure parallel computation in scikit learn Other functionalities are part of joblib https joblib readthedocs io package and should be used directly by 
installing it The goal of this change is to prepare for unvendoring joblib in future version of scikit learn issue 12345 by user Thomas Moreau tomMoral Code and Documentation Contributors With thanks to Adrin Jalali Andrea Navarrete Andreas Mueller bauks BenjaStudio Cheuk Ting Ho Connossor Corey Levinson Dan Stine daten kieker Denis Kataev Dillon Gardner Dmitry Vukolov Dougal J Sutherland Edward J Brown Eric Chang Federico Caselli Gabriel Marzinotto Gael Varoquaux GauravAhlawat Gustavo De Mari Pereira Hanmin Qin haroldfox JackLangerman Jacopo Notarstefano janvanrijn jdethurens jeremiedbb Joel Nothman Joris Van den Bossche Koen Kushal Chauhan Lee Yi Jie Joel Lily Xiong mail liam Mark Hannel melsyt Ming Li Nicholas Smith Nicolas Hug Nikolay Shebanov Oleksandr Pavlyk Olivier Grisel Peter Hausamann Pierre Glaser Pulkit Maloo Quentin Batista Radostin Stoyanov Ramil Nugmanov Rebekah Kim Reshama Shaikh Rohan Singh Roman Feldbauer Roman Yurchak Roopam Sharma Sam Waterbury Scott Lowe Sebastian Raschka Stephen Tierney SylvainLan TakingItCasual Thomas Fan Thomas Moreau Tom Dupr la Tour Tulio Casagrande Utkarsh Upadhyay Xing Han Lu Yaroslav Halchenko Zach Miller changes 0 20 Version 0 20 0 September 25 2018 This release packs in a mountain of bug fixes features and enhancements for the Scikit learn library and improvements to the documentation and examples Thanks to our contributors This release is dedicated to the memory of Raghav Rajagopalan Highlights We have tried to improve our support for common data science use cases including missing values categorical variables heterogeneous data and features targets with unusual distributions Missing values in features represented by NaNs are now accepted in column wise preprocessing such as scalers Each feature is fitted disregarding NaNs and data containing NaNs can be transformed The new mod sklearn impute module provides estimators for learning despite missing data class compose ColumnTransformer handles the case where different 
features or columns of a pandas DataFrame need different preprocessing String or pandas Categorical columns can now be encoded with class preprocessing OneHotEncoder or class preprocessing OrdinalEncoder class compose TransformedTargetRegressor helps when the regression target needs to be transformed to be modeled class preprocessing PowerTransformer and class preprocessing KBinsDiscretizer join class preprocessing QuantileTransformer as non linear transformations Beyond this we have added term sample weight support to several estimators including class cluster KMeans class linear model BayesianRidge and class neighbors KernelDensity and improved stopping criteria in others including class neural network MLPRegressor class ensemble GradientBoostingRegressor and class linear model SGDRegressor This release is also the first to be accompanied by a ref glossary developed by Joel Nothman The glossary is a reference resource to help users and contributors become familiar with the terminology and conventions used in Scikit learn Sorry if your contribution didn t make it into the highlights There s a lot here Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures class cluster MeanShift bug fix class decomposition IncrementalPCA in Python 2 bug fix class decomposition SparsePCA bug fix class ensemble GradientBoostingClassifier bug fix affecting feature importances class isotonic IsotonicRegression bug fix class linear model ARDRegression bug fix class linear model LogisticRegressionCV bug fix class linear model OrthogonalMatchingPursuit bug fix class linear model PassiveAggressiveClassifier bug fix class linear model PassiveAggressiveRegressor bug fix class linear model Perceptron bug fix class linear model SGDClassifier bug fix class linear model SGDRegressor bug 
fix class metrics roc auc score bug fix class metrics roc curve bug fix neural network BaseMultilayerPerceptron bug fix class neural network MLPClassifier bug fix class neural network MLPRegressor bug fix The v0 19 0 release notes failed to mention a backwards incompatibility with class model selection StratifiedKFold when shuffle True due to issue 7823 Details are listed in the changelog below While we are trying to better inform users by providing this information we cannot assure that this list is complete Known Major Bugs issue 11924 class linear model LogisticRegressionCV with solver lbfgs and multi class multinomial may be non deterministic or otherwise broken on macOS This appears to be the case on Travis CI servers but has not been confirmed on personal MacBooks This issue has been present in previous releases issue 9354 func metrics pairwise euclidean distances which is used several times throughout the library gives results with poor precision which particularly affects its use with 32 bit float inputs This became more problematic in versions 0 18 and 0 19 when some algorithms were changed to avoid casting 32 bit data into 64 bit Changelog Support for Python 3 3 has been officially dropped mod sklearn cluster MajorFeature class cluster AgglomerativeClustering now supports Single Linkage clustering via linkage single issue 9372 by user Leland McInnes lmcinnes and user Steve Astels sastels Feature class cluster KMeans and class cluster MiniBatchKMeans now support sample weights via new parameter sample weight in fit function issue 10933 by user Johannes Hansen jnhansen Efficiency class cluster KMeans class cluster MiniBatchKMeans and func cluster k means passed with algorithm full now enforces row major ordering improving runtime issue 10471 by user Gaurav Dhingra gxyd Efficiency class cluster DBSCAN now is parallelized according to n jobs regardless of algorithm issue 8003 by user Jo l Billaud recamshak Enhancement class cluster KMeans now gives a warning 
if the number of distinct clusters found is smaller than n clusters This may occur when the number of distinct points in the data set is actually smaller than the number of cluster one is looking for issue 10059 by user Christian Braune christianbraune79 Fix Fixed a bug where the fit method of class cluster AffinityPropagation stored cluster centers as 3d array instead of 2d array in case of non convergence For the same class fixed undefined and arbitrary behavior in case of training data where all samples had equal similarity issue 9612 By user Jonatan Samoocha jsamoocha Fix Fixed a bug in func cluster spectral clustering where the normalization of the spectrum was using a division instead of a multiplication issue 8129 by user Jan Margeta jmargeta user Guillaume Lemaitre glemaitre and user Devansh D devanshdalal Fix Fixed a bug in cluster k means elkan where the returned iteration was 1 less than the correct value Also added the missing n iter attribute in the docstring of class cluster KMeans issue 11353 by user Jeremie du Boisberranger jeremiedbb Fix Fixed a bug in func cluster mean shift where the assigned labels were not deterministic if there were multiple clusters with the same intensities issue 11901 by user Adrin Jalali adrinjalali API Deprecate pooling func unused parameter in class cluster AgglomerativeClustering issue 9875 by user Kumar Ashutosh thechargedneutron mod sklearn compose New module MajorFeature Added class compose ColumnTransformer which allows to apply different transformers to different columns of arrays or pandas DataFrames issue 9012 by Andreas M ller and Joris Van den Bossche and issue 11315 by user Thomas Fan thomasjpfan MajorFeature Added the class compose TransformedTargetRegressor which transforms the target y before fitting a regression model The predictions are mapped back to the original space via an inverse transform issue 9041 by Andreas M ller and user Guillaume Lemaitre glemaitre mod sklearn covariance Efficiency Runtime 
improvements to class covariance GraphicalLasso issue 9858 by user Steven Brown stevendbrown API The covariance graph lasso covariance GraphLasso and covariance GraphLassoCV have been renamed to func covariance graphical lasso class covariance GraphicalLasso and class covariance GraphicalLassoCV respectively and will be removed in version 0 22 issue 9993 by user Artiem Krinitsyn artiemq mod sklearn datasets MajorFeature Added func datasets fetch openml to fetch datasets from OpenML https openml org OpenML is a free open data sharing platform and will be used instead of mldata as it provides better service availability issue 9908 by Andreas M ller and user Jan N van Rijn janvanrijn Feature In func datasets make blobs one can now pass a list to the n samples parameter to indicate the number of samples to generate per cluster issue 8617 by user Maskani Filali Mohamed maskani moh and user Konstantinos Katrioplas kkatrio Feature Add filename attribute to mod sklearn datasets that have a CSV file issue 9101 by user alex 33 alex 33 and user Maskani Filali Mohamed maskani moh Feature return X y parameter has been added to several dataset loaders issue 10774 by user Chris Catalfo ccatalfo Fix Fixed a bug in datasets load boston which had a wrong data point issue 10795 by user Takeshi Yoshizawa tarcusx Fix Fixed a bug in func datasets load iris which had two wrong data points issue 11082 by user Sadhana Srinivasan rotuna and user Hanmin Qin qinhanmin2014 Fix Fixed a bug in func datasets fetch kddcup99 where data were not properly shuffled issue 9731 by Nicolas Goix Fix Fixed a bug in func datasets make circles where no odd number of data points could be generated issue 10045 by user Christian Braune christianbraune79 API Deprecated sklearn datasets fetch mldata to be removed in version 0 22 mldata org is no longer operational Until removal it will remain possible to load cached datasets issue 11466 by Joel Nothman mod sklearn decomposition Feature func decomposition dict 
- |Feature| :func:`decomposition.dict_learning` functions and models now
  support positivity constraints. This applies to the dictionary and sparse
  code. :issue:`6374` by :user:`John Kirkham <jakirkham>`.

- |Feature| |Fix| :class:`decomposition.SparsePCA` now exposes
  ``normalize_components``. When set to True, the train and test data are
  centered with the train mean respectively during the fit phase and the
  transform phase. This fixes the behavior of SparsePCA. When set to False
  (which is the default), the previous abnormal behaviour still holds. The
  False value is for backward compatibility and should not be used.
  :issue:`11585` by :user:`Ivan Panico <FollowKenny>`.

- |Efficiency| Efficiency improvements in :func:`decomposition.dict_learning`.
  :issue:`11420` and others by :user:`John Kirkham <jakirkham>`.

- |Fix| Fix for uninformative error in :class:`decomposition.IncrementalPCA`:
  now an error is raised if the number of components is larger than the chosen
  batch size. The ``n_components=None`` case was adapted accordingly.
  :issue:`6452` by :user:`Wally Gauze <wallygauze>`.

- |Fix| Fixed a bug where the ``partial_fit`` method of
  :class:`decomposition.IncrementalPCA` used integer division instead of float
  division on Python 2. :issue:`9492` by :user:`James Bourbeau <jrbourbeau>`.

- |Fix| In :class:`decomposition.PCA`, selecting an ``n_components`` parameter
  greater than the number of samples now raises an error. Similarly, the
  ``n_components=None`` case now selects the minimum of ``n_samples`` and
  ``n_features``. :issue:`8484` by :user:`Wally Gauze <wallygauze>`.

- |Fix| Fixed a bug in :class:`decomposition.PCA` where users will get
  unexpected errors with large datasets when ``n_components='mle'`` on
  Python 3 versions. :issue:`9886` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed an underflow in calculating KL-divergence for
  :class:`decomposition.NMF`. :issue:`10142` by `Tom Dupré la Tour`_.

- |Fix| Fixed a bug in :class:`decomposition.SparseCoder` when running OMP
  sparse coding in parallel using read-only memory-mapped datastructures.
  :issue:`5956` by :user:`Vighnesh Birodkar <vighneshbirodkar>` and
  :user:`Olivier Grisel <ogrisel>`.

:mod:`sklearn.discriminant_analysis`
....................................

- |Efficiency| Memory usage improvement for ``_class_means`` and
  ``_class_cov`` in :mod:`sklearn.discriminant_analysis`. :issue:`10898` by
  :user:`Nanxin Chen <bobchennan>`.

:mod:`sklearn.dummy`
....................

- |Feature| :class:`dummy.DummyRegressor` now has a ``return_std`` option in
  its ``predict`` method. The returned standard deviations will be zeros.

- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`
  now only require X to be an object with finite length or shape.
  :issue:`9832` by :user:`Vrishank Bhardwaj <vrishank97>`.

- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`
  can now be scored without supplying test samples. :issue:`11951` by
  :user:`Rüdiger Busche <JarnoRFB>`.

:mod:`sklearn.ensemble`
.......................

- |Feature| :class:`ensemble.BaggingRegressor` and
  :class:`ensemble.BaggingClassifier` can now be fit with missing/non-finite
  values in X and/or multi-output Y to support wrapping pipelines that perform
  their own imputation. :issue:`9707` by :user:`Jimmy Wan <jimmywan>`.

- |Feature| :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` now support early stopping via
  ``n_iter_no_change``, ``validation_fraction`` and ``tol``. :issue:`7071` by
  `Raghav RV`_.

- |Feature| Added a ``named_estimators_`` parameter in
  :class:`ensemble.VotingClassifier` to access fitted estimators.
  :issue:`9157` by :user:`Herilalaina Rakotoarison <herilalaina>`.

- |Fix| Fixed a bug when fitting :class:`ensemble.GradientBoostingClassifier`
  or :class:`ensemble.GradientBoostingRegressor` with ``warm_start=True``,
  which previously raised a segmentation fault due to a non-conversion of CSC
  matrix into CSR format expected by ``decision_function``. Similarly,
  Fortran-ordered arrays are converted to C-ordered arrays in the dense case.
  :issue:`9991` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingRegressor` and
  :class:`ensemble.GradientBoostingClassifier` to have feature importances
  summed and then normalized, rather than normalizing on a per-tree basis. The
  previous behavior over-weighted the Gini importance of features that appear
  in later stages. This issue only affected feature importances.
  :issue:`11176` by :user:`Gil Forsyth <gforsyth>`.
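The gradient-boosting early stopping described above can be sketched as follows (a minimal illustration on synthetic data; the specific parameter values are arbitrary):

```python
# Minimal sketch of early stopping in gradient boosting (added in 0.20):
# n_iter_no_change enables validation-based stopping; data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)
gb = GradientBoostingClassifier(n_estimators=500, n_iter_no_change=5,
                                validation_fraction=0.2, tol=1e-4,
                                random_state=0)
gb.fit(X, y)
# With early stopping, typically far fewer than 500 stages are built.
print(gb.n_estimators_)
```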
- |API| The default value of the ``n_estimators`` parameter of
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier`,
  :class:`ensemble.ExtraTreesRegressor`, and
  :class:`ensemble.RandomTreesEmbedding` will change from 10 in version 0.20
  to 100 in 0.22. A FutureWarning is raised when the default value is used.
  :issue:`11542` by :user:`Anna Ayzenshtat <annaayzenshtat>`.

- |API| Classes derived from ``ensemble.BaseBagging``: the attribute
  ``estimators_samples_`` will return a list of arrays containing the indices
  selected for each bootstrap, instead of a list of arrays containing the mask
  of the samples selected for each bootstrap. Indices allow repeating samples,
  while masks do not allow this functionality. :issue:`9524` by
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| ``ensemble.BaseBagging``, where one could not deterministically
  reproduce the ``fit`` result using the object attributes when
  ``random_state`` is set. :issue:`9723` by
  :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_extraction`
.................................

- |Feature| Enable the call to ``get_feature_names`` in an unfitted
  :class:`feature_extraction.text.CountVectorizer` initialized with a
  vocabulary. :issue:`10908` by :user:`Mohamed Maskani <maskani-moh>`.

- |Enhancement| ``idf_`` can now be set on a
  :class:`feature_extraction.text.TfidfTransformer`. :issue:`10899` by
  :user:`Sergey Melderis <serega>`.

- |Fix| Fixed a bug in :func:`feature_extraction.image.extract_patches_2d`
  which would throw an exception if ``max_patches`` was greater than or equal
  to the number of all possible patches, rather than simply returning the
  number of possible patches. :issue:`10101` by
  :user:`Varun Agrawal <varunagrawal>`.

- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer`,
  :class:`feature_extraction.text.TfidfVectorizer` and
  :class:`feature_extraction.text.HashingVectorizer` to support 64-bit sparse
  array indexing, necessary to process large datasets with more than 2·10⁹
  tokens (words or n-grams). :issue:`9147` by
  :user:`Claes-Fredrik Mannby <mannby>` and `Roman Yurchak`_.

- |Fix| Fixed a bug in :class:`feature_extraction.text.TfidfVectorizer` which
  was ignoring the parameter ``dtype``. In addition,
  :class:`feature_extraction.text.TfidfTransformer` will preserve ``dtype``
  for floating and raise a warning if the ``dtype`` requested is integer.
  :issue:`10441` by :user:`Mayur Kulkarni <maykulkarni>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_selection`
................................

- |Feature| Added select-K-best features functionality to
  :class:`feature_selection.SelectFromModel`. :issue:`6689` by
  :user:`Nihar Sheth <nsheth12>` and :user:`Quazi Rahman <qmaruf>`.

- |Feature| Added a ``min_features_to_select`` parameter to
  :class:`feature_selection.RFECV` to bound evaluated feature counts.
  :issue:`11293` by :user:`Brent Yi <brentyi>`.

- |Feature| :class:`feature_selection.RFECV`'s ``fit`` method now supports
  :term:`groups`. :issue:`9656` by :user:`Adam Greenhall <adamgreenhall>`.

- |Fix| Fixed the computation of ``n_features_to_compute`` for an edge case
  with tied CV scores in :class:`feature_selection.RFECV`. :issue:`9222` by
  :user:`Nick Hoh <nickypie>`.

:mod:`sklearn.gaussian_process`
...............................

- |Efficiency| In :class:`gaussian_process.GaussianProcessRegressor`, the
  ``predict`` method is faster when using ``return_std=True``, in particular
  more so when called several times in a row. :issue:`9234` by
  :user:`andrewww <andrewww>` and :user:`Minghui Liu <minghui-liu>`.

:mod:`sklearn.impute`
.....................

New module, adopting ``preprocessing.Imputer`` as
:class:`impute.SimpleImputer` with minor changes (see under preprocessing
below).

- |MajorFeature| Added :class:`impute.MissingIndicator`, which generates a
  binary indicator for missing values. :issue:`8075` by
  :user:`Maniteja Nandana <maniteja123>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| :class:`impute.SimpleImputer` has a new strategy, ``'constant'``,
  to complete missing values with a fixed one, given by the ``fill_value``
  parameter. This strategy supports numeric and non-numeric data, and so does
  the ``'most_frequent'`` strategy now. :issue:`11211` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.isotonic`
.......................

- |Fix| Fixed a bug in :class:`isotonic.IsotonicRegression` which incorrectly
  combined weights when fitting a model to data involving points with
  identical X values. :issue:`9484` by :user:`Dallas Card <dallascard>`.
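The new ``'constant'`` imputation strategy can be sketched as follows (a minimal illustration; the fill value -1 is arbitrary):

```python
# Minimal sketch of SimpleImputer's strategy="constant" (added in 0.20):
# missing entries are replaced by fill_value; the data here is illustrative.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan],
              [np.nan, 3.0]])
imp = SimpleImputer(strategy="constant", fill_value=-1)
print(imp.fit_transform(X))  # NaNs replaced by -1
```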
:mod:`sklearn.linear_model`
...........................

- |Feature| :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor`,
  :class:`linear_model.PassiveAggressiveClassifier`,
  :class:`linear_model.PassiveAggressiveRegressor` and
  :class:`linear_model.Perceptron` now expose ``early_stopping``,
  ``validation_fraction`` and ``n_iter_no_change`` parameters, to stop
  optimization by monitoring the score on a validation set. A new learning
  rate ``"adaptive"`` strategy divides the learning rate by 5 each time
  ``n_iter_no_change`` consecutive epochs fail to improve the model.
  :issue:`9043` by `Tom Dupré la Tour`_.

- |Feature| Add a ``sample_weight`` parameter to the ``fit`` method of
  :class:`linear_model.BayesianRidge` for weighted linear regression.
  :issue:`10112` by :user:`Peter St. John <pstjohn>`.

- |Fix| Fixed a bug in ``logistic.logistic_regression_path`` to ensure that
  the returned coefficients are correct when ``multiclass='multinomial'``.
  Previously, some of the coefficients would override each other, leading to
  incorrect results in :class:`linear_model.LogisticRegressionCV`.
  :issue:`11724` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` where, when
  using the parameter ``multi_class='multinomial'``, the ``predict_proba``
  method was returning incorrect probabilities in the case of binary outcomes.
  :issue:`9939` by :user:`Roger Westover <rwolst>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the
  ``score`` method always computed accuracy, not the metric given by the
  ``scoring`` parameter. :issue:`10998` by :user:`Thomas Fan <thomasjpfan>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the
  'ovr' strategy was always used to compute cross-validation scores in the
  multiclass setting, even if 'multinomial' was set. :issue:`8720` by
  :user:`William de Vazelhes <wdevazelhes>`.

- |Fix| Fixed a bug in :class:`linear_model.OrthogonalMatchingPursuit` that
  was broken when setting ``normalize=False``. :issue:`10071` by
  `Alexandre Gramfort`_.

- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` which caused
  incorrectly updated estimates for the standard deviation and the
  coefficients. :issue:`10153` by :user:`Jörg Döpfert <jdoepfert>`.

- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` and
  :class:`linear_model.BayesianRidge` which caused NaN predictions when fitted
  with a constant target. :issue:`10095` by :user:`Jörg Döpfert <jdoepfert>`.

- |Fix| Fixed a bug in :class:`linear_model.RidgeClassifierCV` where the
  parameter ``store_cv_values`` was not implemented, though it was documented
  in ``cv_values`` as a way to set up the storage of cross-validation values
  for different alphas. :issue:`10297` by
  :user:`Mabel Villalba Jiménez <mabelvj>`.

- |Fix| Fixed a bug in :class:`linear_model.ElasticNet` which caused the input
  to be overridden when using the parameters ``copy_X=True`` and
  ``check_input=False``. :issue:`10581` by :user:`Yacine Mazari <ymazari>`.

- |Fix| Fixed a bug in :class:`sklearn.linear_model.Lasso` where the
  coefficient had the wrong shape when ``fit_intercept=False``.
  :issue:`10687` by :user:`Martin Hahn <martin-hahn>`.

- |Fix| Fixed a bug in :func:`sklearn.linear_model.LogisticRegression` where
  ``multi_class='multinomial'`` with binary output and ``warm_start=True``
  was broken. :issue:`10836` by :user:`Aishwarya Srinivasan <aishgrt1>`.

- |Fix| Fixed a bug in :class:`linear_model.RidgeCV` where using integer
  ``alphas`` raised an error. :issue:`10397` by
  :user:`Mabel Villalba Jiménez <mabelvj>`.

- |Fix| Fixed the condition triggering gap computation in
  :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` when
  working with sparse matrices. :issue:`10992` by `Alexandre Gramfort`_.

- |Fix| Fixed a bug in :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor`,
  :class:`linear_model.PassiveAggressiveClassifier`,
  :class:`linear_model.PassiveAggressiveRegressor` and
  :class:`linear_model.Perceptron`, where the stopping criterion was stopping
  the algorithm before convergence. A parameter ``n_iter_no_change`` was
  added and set by default to 5. Previous behavior is equivalent to setting
  the parameter to 1. :issue:`9043` by `Tom Dupré la Tour`_.

- |Fix| Fixed a bug where liblinear and libsvm-based estimators would
  segfault if passed a scipy.sparse matrix with 64-bit indices. They now
  raise a ValueError. :issue:`11327` by
  :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.
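The new SGD validation-based stopping parameters can be sketched as follows (a minimal illustration on synthetic data; the parameter values are arbitrary):

```python
# Minimal sketch of early_stopping / validation_fraction / n_iter_no_change
# on SGD-style estimators (added in 0.20); data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=300, random_state=0)
clf = SGDClassifier(early_stopping=True, validation_fraction=0.2,
                    n_iter_no_change=3, max_iter=1000, tol=1e-3,
                    random_state=0)
clf.fit(X, y)
print(clf.n_iter_)  # epochs actually run before stopping
```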
- |API| The default values of the ``solver`` and ``multi_class`` parameters
  of :class:`linear_model.LogisticRegression` will change respectively from
  ``'liblinear'`` and ``'ovr'`` in version 0.20, to ``'lbfgs'`` and
  ``'auto'`` in version 0.22. A FutureWarning is raised when the default
  values are used. :issue:`11905` by `Tom Dupré la Tour`_ and
  `Joel Nothman`_.

- |API| Deprecate the ``positive=True`` option in :class:`linear_model.Lars`
  as the underlying implementation is broken. Use :class:`linear_model.Lasso`
  instead. :issue:`9837` by `Alexandre Gramfort`_.

- |API| ``n_iter_`` may vary from previous releases in
  :class:`linear_model.LogisticRegression` with ``solver='lbfgs'`` and
  :class:`linear_model.HuberRegressor`. For Scipy <= 1.0.0, the optimizer
  could perform more than the requested maximum number of iterations. Now
  both estimators will report at most ``max_iter`` iterations even if more
  were performed. :issue:`10723` by `Joel Nothman`_.

:mod:`sklearn.manifold`
.......................

- |Efficiency| Speed improvements for both 'exact' and 'barnes_hut' methods
  in :class:`manifold.TSNE`. :issue:`10593` and :issue:`10610` by
  `Tom Dupré la Tour`_.

- |Feature| Support sparse input in :meth:`manifold.Isomap.fit`.
  :issue:`8554` by :user:`Leland McInnes <lmcinnes>`.

- |Feature| ``manifold.t_sne.trustworthiness`` accepts metrics other than
  Euclidean. :issue:`9775` by :user:`William de Vazelhes <wdevazelhes>`.

- |Fix| Fixed a bug in :func:`manifold.spectral_embedding` where the
  normalization of the spectrum was using a division instead of a
  multiplication. :issue:`8129` by :user:`Jan Margeta <jmargeta>`,
  :user:`Guillaume Lemaitre <glemaitre>` and
  :user:`Devansh D. <devanshdalal>`.

- |API| |Feature| Deprecate the ``precomputed`` parameter in function
  ``manifold.t_sne.trustworthiness``. Instead, the new parameter ``metric``
  should be used with any compatible metric, including 'precomputed', in
  which case the input matrix X should be a matrix of pairwise distances or
  squared distances. :issue:`9775` by
  :user:`William de Vazelhes <wdevazelhes>`.

:mod:`sklearn.metrics`
......................

- |MajorFeature| Added the :func:`metrics.davies_bouldin_score` metric for
  evaluation of clustering models without a ground truth. :issue:`10827` by
  :user:`Luis Osa <logc>`.

- |MajorFeature| Added the :func:`metrics.balanced_accuracy_score` metric and
  a corresponding ``'balanced_accuracy'`` scorer for binary and multiclass
  classification. :issue:`8066` by :user:`xyguo` and
  :user:`Aman Dalmia <dalmia>`, and :issue:`10587` by `Joel Nothman`_.

- |Feature| Partial AUC is available via the ``max_fpr`` parameter in
  :func:`metrics.roc_auc_score`. :issue:`3840` by
  :user:`Alexander Niederbühl <Alexander-N>`.

- |Feature| A scorer based on :func:`metrics.brier_score_loss` is also
  available. :issue:`9521` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Feature| Added control over the normalization in
  :func:`metrics.normalized_mutual_info_score` and
  :func:`metrics.adjusted_mutual_info_score` via the ``average_method``
  parameter. In version 0.22, the default normalizer for each will become the
  arithmetic mean of the entropies of each clustering. :issue:`11124` by
  :user:`Arya McCarthy <aryamccarthy>`.

- |Feature| Added an ``output_dict`` parameter in
  :func:`metrics.classification_report` to return classification statistics
  as a dictionary. :issue:`11160` by :user:`Dan Barkhorn <danielbarkhorn>`.

- |Feature| :func:`metrics.classification_report` now reports all applicable
  averages on the given data, including micro, macro and weighted average as
  well as samples average for multilabel data. :issue:`11679` by
  :user:`Alexander Pacha <apacha>`.

- |Feature| :func:`metrics.average_precision_score` now supports binary
  ``y_true`` other than {0, 1} or {-1, 1} through the ``pos_label``
  parameter. :issue:`9980` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Feature| :func:`metrics.label_ranking_average_precision_score` now
  supports ``sample_weight``. :issue:`10845` by
  :user:`Jose Perez-Parras Toledano <jopepato>`.

- |Feature| Add a ``dense_output`` parameter to
  :func:`metrics.pairwise.linear_kernel`. When False and both inputs are
  sparse, it will return a sparse matrix. :issue:`10999` by
  :user:`Taylor G Smith <tgsmith61591>`.
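The balanced-accuracy metric introduced above can be sketched as follows (a minimal illustration; balanced accuracy is the macro-average of per-class recall, so an all-majority prediction scores 0.5 on a binary problem):

```python
# Minimal sketch of metrics.balanced_accuracy_score (added in 0.20).
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]  # always predicts the majority class
# recall(class 0) = 1.0, recall(class 1) = 0.0 -> mean = 0.5
print(balanced_accuracy_score(y_true, y_pred))  # 0.5
```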
- |Efficiency| :func:`metrics.silhouette_score` and
  :func:`metrics.silhouette_samples` are more memory efficient and run
  faster. This avoids some reported freezes and MemoryErrors.
  :issue:`11135` by `Joel Nothman`_.

- |Fix| Fixed a bug in :func:`metrics.precision_recall_fscore_support` when a
  truncated ``range(n_labels)`` is passed as the value for ``labels``.
  :issue:`10377` by :user:`Gaurav Dhingra <gxyd>`.

- |Fix| Fixed a bug due to floating point error in
  :func:`metrics.roc_auc_score` with non-integer sample weights.
  :issue:`9786` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug where :func:`metrics.roc_curve` sometimes starts on the
  y-axis instead of (0, 0), which is inconsistent with the document and other
  implementations. Note that this will not influence the result from
  :func:`metrics.roc_auc_score`. :issue:`10093` by
  :user:`alexryndin <alexryndin>` and :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug to avoid integer overflow: casted the product to a 64-bit
  integer in :func:`metrics.mutual_info_score`. :issue:`9772` by
  :user:`Kumar Ashutosh <thechargedneutron>`.

- |Fix| Fixed a bug where :func:`metrics.average_precision_score` would
  sometimes return ``nan`` when ``sample_weight`` contains 0. :issue:`9980`
  by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug in :func:`metrics.fowlkes_mallows_score` to avoid integer
  overflow: casted the return value of ``contingency_matrix`` to ``int64``
  and computed the product of square roots rather than the square root of the
  product. :issue:`9515` by :user:`Alan Liddell <aliddell>` and
  :user:`Manh Dao <manhdao>`.

- |API| Deprecate the ``reorder`` parameter in :func:`metrics.auc` as it's no
  longer required for :func:`metrics.roc_auc_score`. Moreover, using
  ``reorder=True`` can hide bugs due to floating point error in the input.
  :issue:`9851` by :user:`Hanmin Qin <qinhanmin2014>`.

- |API| In :func:`metrics.normalized_mutual_info_score` and
  :func:`metrics.adjusted_mutual_info_score`, warn that ``average_method``
  will have a new default value. In version 0.22, the default normalizer for
  each will become the arithmetic mean of the entropies of each clustering.
  Currently, :func:`metrics.normalized_mutual_info_score` uses the default of
  ``average_method='geometric'`` and
  :func:`metrics.adjusted_mutual_info_score` uses the default of
  ``average_method='max'``, to match their behaviors in version 0.19.
  :issue:`11124` by :user:`Arya McCarthy <aryamccarthy>`.

- |API| The ``batch_size`` parameter to
  :func:`metrics.pairwise_distances_argmin_min` and
  :func:`metrics.pairwise_distances_argmin` is deprecated, to be removed in
  v0.22. It no longer has any effect, as batch size is determined by the
  global working memory config. See :ref:`working_memory`. :issue:`10280` by
  `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.

:mod:`sklearn.mixture`
......................

- |Feature| Added the :term:`fit_predict` method to
  :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`, which is essentially equivalent
  to calling :term:`fit` and :term:`predict`. :issue:`10336` by
  :user:`Shu Haoran <haoranShu>` and :user:`Andrew Peng <Andrew-peng>`.

- |Fix| Fixed a bug in ``mixture.BaseMixture`` where the reported ``n_iter_``
  was missing an iteration. It affected :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`. :issue:`10740` by
  :user:`Erich Schubert <kno10>` and :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Fixed a bug in ``mixture.BaseMixture`` and its subclasses
  :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`, where the ``lower_bound_`` was
  not the max lower bound across all initializations (when ``n_init > 1``),
  but just the lower bound of the last initialization. :issue:`10869` by
  :user:`Aurélien Géron <ageron>`.

:mod:`sklearn.model_selection`
..............................

- |Feature| Add a ``return_estimator`` parameter in
  :func:`model_selection.cross_validate` to return the estimators fitted on
  each split. :issue:`9686` by :user:`Aurélien Bellet <bellet>`.

- |Feature| A new ``refit_time_`` attribute will be stored in
  :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` if ``refit`` is set to True.
  This will allow measuring the complete time it takes to perform
  hyperparameter optimization and refitting the best model on the whole
  dataset. :issue:`11310` by :user:`Matthias Feurer <mfeurer>`.

- |Feature| Expose the ``error_score`` parameter in
  :func:`model_selection.cross_validate`,
  :func:`model_selection.cross_val_score`,
  :func:`model_selection.learning_curve` and
  :func:`model_selection.validation_curve` to control the behavior triggered
  when an error occurs in ``model_selection._fit_and_score``. :issue:`11576`
  by :user:`Samuel O. Ronsin <samronsin>`.
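The new ``return_estimator`` option for ``cross_validate`` can be sketched as follows (a minimal illustration; the choice of iris data and logistic regression is arbitrary):

```python
# Minimal sketch: return_estimator=True (added in 0.20) makes cross_validate
# return the estimator fitted on each CV split.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)
res = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=3,
                     return_estimator=True)
print(len(res["estimator"]))  # one fitted model per split
```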
- |Feature| ``BaseSearchCV`` now has an experimental, private interface to
  support customized parameter search strategies, through its ``_run_search``
  method. See the implementations in :class:`model_selection.GridSearchCV`
  and :class:`model_selection.RandomizedSearchCV`, and please provide
  feedback if you use this. Note that we do not assure the stability of this
  API beyond version 0.20. :issue:`9599` by `Joel Nothman`_.

- |Enhancement| Add an improved error message in
  :func:`model_selection.cross_val_score` when multiple metrics are passed in
  the ``scoring`` keyword. :issue:`11006` by :user:`Ming Li <minggli>`.

- |API| The default number of cross-validation folds ``cv`` and the default
  number of splits ``n_splits`` in the :class:`model_selection.KFold`-like
  splitters will change from 3 to 5 in 0.22, as 3-fold has a lot of variance.
  :issue:`11557` by :user:`Alexandre Boucaud <aboucaud>`.

- |API| The default of the ``iid`` parameter of
  :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` will change from True to False
  in version 0.22, to correspond to the standard definition of
  cross-validation, and the parameter will be removed in version 0.24
  altogether. This parameter is of greatest practical significance where the
  sizes of different test sets in cross-validation were very unequal, i.e. in
  group-based CV strategies. :issue:`9085` by :user:`Laurent Direr <ldirer>`
  and `Andreas Müller`_.

- |API| The default value of the ``error_score`` parameter in
  :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` will change to ``np.nan`` in
  version 0.22. :issue:`10677` by :user:`Kirill Zhdanovich <Zhdanovich>`.

- |API| Changed the ValueError exception raised in
  :class:`model_selection.ParameterSampler` to a UserWarning, for the case
  where the class is instantiated with a greater value of ``n_iter`` than the
  total space of parameters in the parameter grid. ``n_iter`` now acts as an
  upper bound on iterations. :issue:`10982` by
  :user:`Juliet Lawton <julietcl>`.

- |API| Invalid input for :class:`model_selection.ParameterGrid` now raises a
  TypeError. :issue:`10928` by
  :user:`Solutus Immensus <solutusimmensus>`.

:mod:`sklearn.multioutput`
..........................

- |MajorFeature| Added :class:`multioutput.RegressorChain` for multi-target
  regression. :issue:`9257` by :user:`Kumar Ashutosh <thechargedneutron>`.

:mod:`sklearn.naive_bayes`
..........................

- |MajorFeature| Added :class:`naive_bayes.ComplementNB`, which implements
  the Complement Naive Bayes classifier described in Rennie et al. (2003).
  :issue:`8190` by :user:`Michael A. Alcorn <airalcorn2>`.

- |Feature| Add a ``var_smoothing`` parameter in
  :class:`naive_bayes.GaussianNB` to give precise control over variance
  calculation. :issue:`9681` by :user:`Dmitry Mottl <Mottl>`.

- |Fix| Fixed a bug in :class:`naive_bayes.GaussianNB` which incorrectly
  raised an error for a prior list which summed to 1. :issue:`10005` by
  :user:`Gaurav Dhingra <gxyd>`.

- |Fix| Fixed a bug in :class:`naive_bayes.MultinomialNB` which did not
  accept vector-valued pseudocounts (alpha). :issue:`10346` by
  :user:`Tobias Madsen <TobiasMadsen>`.

:mod:`sklearn.neighbors`
........................

- |Efficiency| :class:`neighbors.RadiusNeighborsRegressor` and
  :class:`neighbors.RadiusNeighborsClassifier` are now parallelized according
  to ``n_jobs`` regardless of ``algorithm``. :issue:`10887` by
  :user:`Joël Billaud <recamshak>`.

- |Efficiency| :mod:`sklearn.neighbors` query methods are now more memory
  efficient when ``algorithm='brute'``. :issue:`11136` by `Joel Nothman`_ and
  :user:`Aman Dalmia <dalmia>`.

- |Feature| Add a ``sample_weight`` parameter to the ``fit`` method of
  :class:`neighbors.KernelDensity` to enable weighting in kernel density
  estimation. :issue:`4394` by :user:`Samuel O. Ronsin <samronsin>`.

- |Feature| Novelty detection with :class:`neighbors.LocalOutlierFactor`:
  added a ``novelty`` parameter to :class:`neighbors.LocalOutlierFactor`.
  When ``novelty`` is set to True,
  :class:`neighbors.LocalOutlierFactor` can then be used for novelty
  detection, i.e. predict on new unseen data. Available prediction methods
  are ``predict``, ``decision_function`` and ``score_samples``. By default,
  ``novelty`` is set to False, and only the ``fit_predict`` method is
  available. By :user:`Albert Thomas <albertcthomas>`.
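The novelty-detection mode described above can be sketched as follows (a minimal illustration on synthetic data; the training distribution and query points are arbitrary):

```python
# Minimal sketch of LocalOutlierFactor with novelty=True (added in 0.20):
# predict/decision_function/score_samples become available on new data.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X_train = np.random.RandomState(0).normal(size=(100, 2))
lof = LocalOutlierFactor(n_neighbors=10, novelty=True).fit(X_train)

X_new = np.array([[0.0, 0.0],   # near the bulk of the training data
                  [8.0, 8.0]])  # far from it
print(lof.predict(X_new))  # +1 for inliers, -1 for outliers
```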
- |Fix| Fixed a bug in :class:`neighbors.NearestNeighbors` where fitting a
  NearestNeighbors model fails when a) the distance metric used is a callable
  and b) the input to the NearestNeighbors model is sparse. :issue:`9579` by
  :user:`Thomas Kober <tttthomasssss>`.

- |Fix| Fixed a bug so that ``predict`` in
  :class:`neighbors.RadiusNeighborsRegressor` can handle an empty neighbor
  set when using non-uniform weights. Also raises a new warning when no
  neighbors are found for samples. :issue:`9655` by
  :user:`Andreas Bjerre-Nielsen <abjer>`.

- |Fix| |Efficiency| Fixed a bug in KDTree construction that results in
  faster construction and querying times. :issue:`11556` by
  :user:`Jake VanderPlas <jakevdp>`.

- |Fix| Fixed a bug in :class:`neighbors.KDTree` and
  :class:`neighbors.BallTree` where pickled tree objects would change their
  type to the super class ``BinaryTree``. :issue:`11774` by
  :user:`Nicolas Hug <NicolasHug>`.

:mod:`sklearn.neural_network`
.............................

- |Feature| Add an ``n_iter_no_change`` parameter in
  ``neural_network.BaseMultilayerPerceptron``,
  :class:`neural_network.MLPRegressor` and
  :class:`neural_network.MLPClassifier` to give control over the maximum
  number of epochs without a ``tol`` improvement. :issue:`9456` by
  :user:`Nicholas Nadeau <nnadeau>`.

- |Fix| Fixed a bug in ``neural_network.BaseMultilayerPerceptron``,
  :class:`neural_network.MLPRegressor` and
  :class:`neural_network.MLPClassifier` with the new ``n_iter_no_change``
  parameter, now at 10 from the previously hardcoded 2. :issue:`9456` by
  :user:`Nicholas Nadeau <nnadeau>`.

- |Fix| Fixed a bug in :class:`neural_network.MLPRegressor` where fitting
  quit unexpectedly early due to local minima or fluctuations. :issue:`9456`
  by :user:`Nicholas Nadeau <nnadeau>`.

:mod:`sklearn.pipeline`
.......................

- |Feature| The ``predict`` method of :class:`pipeline.Pipeline` now passes
  keyword arguments on to the pipeline's last estimator, enabling the use of
  parameters such as ``return_std`` in a pipeline (with caution).
  :issue:`9304` by :user:`Breno Freitas <brenolf>`.

- |API| :class:`pipeline.FeatureUnion` now supports ``'drop'`` as a
  transformer to drop features. :issue:`11144` by
  :user:`Thomas Fan <thomasjpfan>`.

:mod:`sklearn.preprocessing`
............................

- |MajorFeature| Expanded :class:`preprocessing.OneHotEncoder` to allow
  encoding categorical string features as a numeric array using a one-hot (or
  dummy) encoding scheme, and added :class:`preprocessing.OrdinalEncoder` to
  convert to ordinal integers. Those two classes now handle encoding of all
  feature types (also handling string-valued features) and derive the
  categories based on the unique values in the features instead of the
  maximum value in the features. :issue:`9151` and :issue:`10521` by
  :user:`Vighnesh Birodkar <vighneshbirodkar>` and
  `Joris Van den Bossche`_.

- |MajorFeature| Added :class:`preprocessing.KBinsDiscretizer` for turning
  continuous features into categorical or one-hot encoded features.
  :issue:`7668`, :issue:`9647`, :issue:`10195`, :issue:`10192`,
  :issue:`11272`, :issue:`11467` and :issue:`11505` by
  :user:`Henry Lin <hlin117>`, `Hanmin Qin`_, `Tom Dupré la Tour`_ and
  :user:`Giovanni Giuseppe Costa <ggc87>`.

- |MajorFeature| Added :class:`preprocessing.PowerTransformer`, which
  implements the Yeo-Johnson and Box-Cox power transformations. Power
  transformations try to find a set of feature-wise parametric
  transformations to approximately map data to a Gaussian distribution
  centered at zero and with unit variance. This is useful as a
  variance-stabilizing transformation in situations where normality and
  homoscedasticity are desirable. :issue:`10210` by
  :user:`Eric Chang <chang>` and :user:`Maniteja Nandana <maniteja123>`, and
  :issue:`11520` by :user:`Nicolas Hug <nicolashug>`.

- |MajorFeature| NaN values are ignored and handled in the following
  preprocessing methods: :class:`preprocessing.MaxAbsScaler`,
  :class:`preprocessing.MinMaxScaler`, :class:`preprocessing.RobustScaler`,
  :class:`preprocessing.StandardScaler`,
  :class:`preprocessing.PowerTransformer`,
  :class:`preprocessing.QuantileTransformer` classes and
  :func:`preprocessing.maxabs_scale`, :func:`preprocessing.minmax_scale`,
  :func:`preprocessing.robust_scale`, :func:`preprocessing.scale`,
  :func:`preprocessing.power_transform`,
  :func:`preprocessing.quantile_transform` functions respectively, addressed
  in issues :issue:`11011`, :issue:`11005`, :issue:`11308`, :issue:`11206`,
  :issue:`11306` and :issue:`10437`. By
  :user:`Lucija Gregov <LucijaGregov>` and
  :user:`Guillaume Lemaitre <glemaitre>`.
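The string-feature encoding described above can be sketched as follows (a minimal illustration; the color values are arbitrary):

```python
# Minimal sketch of OneHotEncoder deriving categories from unique string
# values (expanded in 0.20), alongside OrdinalEncoder.
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

X = [["red"], ["green"], ["red"]]
ohe = OneHotEncoder().fit(X)
print(ohe.categories_)               # categories inferred from unique values
print(ohe.transform(X).toarray())    # one-hot columns in category order
print(OrdinalEncoder().fit_transform(X))  # same categories as integers
```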
- |Feature| :class:`preprocessing.PolynomialFeatures` now supports sparse
  input. :issue:`10452` by :user:`Aman Dalmia <dalmia>` and
  `Joel Nothman`_.

- |Feature| :class:`preprocessing.RobustScaler` and
  :func:`preprocessing.robust_scale` can be fitted using sparse matrices.
  :issue:`11308` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| :class:`preprocessing.OneHotEncoder` now supports the
  ``get_feature_names`` method to obtain the transformed feature names.
  :issue:`10181` by :user:`Nirvan Anjirbag <Nirvan101>` and
  `Joris Van den Bossche`_.

- |Feature| A parameter ``check_inverse`` was added to
  :class:`preprocessing.FunctionTransformer` to ensure that ``func`` and
  ``inverse_func`` are the inverse of each other. :issue:`9399` by
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| The ``transform`` method of
  :class:`sklearn.preprocessing.MultiLabelBinarizer` now ignores any unknown
  classes. A warning is raised stating the unknown classes found, which are
  ignored. :issue:`10913` by :user:`Rodrigo Agundez <rragundez>`.

- |Fix| Fixed bugs in :class:`preprocessing.LabelEncoder` which would
  sometimes throw errors when ``transform`` or ``inverse_transform`` was
  called with empty arrays. :issue:`10458` by
  :user:`Mayur Kulkarni <maykulkarni>`.

- |Fix| Fix a ValueError in :class:`preprocessing.LabelEncoder` when using
  ``inverse_transform`` on unseen labels. :issue:`9816` by
  :user:`Charlie Newey <newey01c>`.

- |Fix| Fix a bug in :class:`preprocessing.OneHotEncoder` which discarded the
  ``dtype`` when returning a sparse matrix output. :issue:`11042` by
  :user:`Daniel Morales <DanielMorales9>`.

- |Fix| Fix ``fit`` and ``partial_fit`` in
  :class:`preprocessing.StandardScaler` in the rare case when
  ``with_mean=False`` and ``with_std=False``, which was crashing when calling
  ``fit`` more than once and giving inconsistent results for ``mean_``
  whether the input was a sparse or a dense matrix. ``mean_`` will be set to
  None with both sparse and dense inputs. ``n_samples_seen_`` will be also
  reported for both input types. :issue:`11235` by
  :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Deprecate the ``n_values`` and ``categorical_features`` parameters
  and the ``active_features_``, ``feature_indices_`` and ``n_values_``
  attributes of :class:`preprocessing.OneHotEncoder`. The ``n_values``
  parameter can be replaced with the new ``categories`` parameter, and the
  attributes with the new ``categories_`` attribute. Selecting the
  categorical features with the ``categorical_features`` parameter is now
  better supported using the :class:`compose.ColumnTransformer`.
  :issue:`10521` by `Joris Van den Bossche`_.

- |API| Deprecate ``preprocessing.Imputer`` and move the corresponding module
  to :class:`impute.SimpleImputer`. :issue:`9726` by
  :user:`Kumar Ashutosh <thechargedneutron>`.

- |API| The ``axis`` parameter that was in ``preprocessing.Imputer`` is no
  longer present in :class:`impute.SimpleImputer`. The behavior is equivalent
  to ``axis=0`` (impute along columns). Row-wise imputation can be performed
  with FunctionTransformer, e.g.
  ``FunctionTransformer(lambda X: SimpleImputer().fit_transform(X.T).T)``.
  :issue:`10829` by :user:`Guillaume Lemaitre <glemaitre>` and
  :user:`Gilberto Olimpio <gilbertoolimpio>`.

- |API| The NaN marker for the missing values has been changed between
  ``preprocessing.Imputer`` and :class:`impute.SimpleImputer`:
  ``missing_values='NaN'`` should now be ``missing_values=np.nan``.
  :issue:`11211` by :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |API| In :class:`preprocessing.FunctionTransformer`, the default of
  ``validate`` will change from True to False in 0.22. :issue:`10655` by
  :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.svm`
..................

- |Fix| Fixed a bug in :class:`svm.SVC` where, when the argument ``kernel``
  is unicode in Python 2, the ``predict_proba`` method was raising an
  unexpected TypeError given dense inputs. :issue:`10412` by
  :user:`Jiongyan Zhang <qmick>`.

- |API| Deprecate the ``random_state`` parameter in :class:`svm.OneClassSVM`
  as the underlying implementation is not random. :issue:`9497` by
  :user:`Albert Thomas <albertcthomas>`.

- |API| The default value of the ``gamma`` parameter of :class:`svm.SVC`,
  :class:`svm.NuSVC`, :class:`svm.SVR`, :class:`svm.NuSVR` and
  :class:`svm.OneClassSVM` will change from ``'auto'`` to ``'scale'`` in
  version 0.22 to account better for unscaled features. :issue:`8361` by
  :user:`Gaurav Dhingra <gxyd>` and :user:`Ting Neo <neokt>`.
:mod:`sklearn.tree`
...................

- |Enhancement| Although private (and hence not assured API stability),
  ``tree._criterion.ClassificationCriterion`` and
  ``tree._criterion.RegressionCriterion`` may now be cimported and extended.
  :issue:`10325` by :user:`Camil Staps <camilstaps>`.

- |Fix| Fixed a bug in ``tree.BaseDecisionTree`` with ``splitter="best"``
  where the split threshold could become infinite when values in X were
  near-infinite. :issue:`10536` by :user:`Jonathan Ohayon <Johayon>`.

- |Fix| Fixed a bug in ``tree.MAE`` to ensure sample weights are being used
  during the calculation of tree MAE impurity. Previous behaviour could cause
  suboptimal splits to be chosen, since the impurity calculation considered
  all samples to be of equal weight importance. :issue:`11464` by
  :user:`John Stott <JohnStott>`.

:mod:`sklearn.utils`
....................

- |Feature| :func:`utils.check_array` and :func:`utils.check_X_y` now have an
  ``accept_large_sparse`` parameter to control whether scipy.sparse matrices
  with 64-bit indices should be rejected. :issue:`11327` by
  :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.

- |Efficiency| |Fix| Avoid copying the data in :func:`utils.check_array` when
  the input data is a memmap and ``copy=False``. :issue:`10663` by
  :user:`Arthur Mensch <arthurmensch>` and :user:`Loïc Estève <lesteve>`.

- |API| :func:`utils.check_array` yields a FutureWarning indicating that
  arrays of bytes/strings will be interpreted as decimal numbers beginning in
  version 0.22. :issue:`10229` by :user:`Ryan Lee <rtlee9>`.

Multiple modules
................

- |Feature| |API| More consistent outlier detection API: added a
  ``score_samples`` method in :class:`svm.OneClassSVM`,
  :class:`ensemble.IsolationForest`, :class:`neighbors.LocalOutlierFactor`
  and :class:`covariance.EllipticEnvelope`. It allows accessing the raw score
  functions from the original papers. A new ``offset_`` parameter allows
  linking the ``score_samples`` and ``decision_function`` methods. The
  ``contamination`` parameter of the :class:`ensemble.IsolationForest` and
  :class:`neighbors.LocalOutlierFactor` ``decision_function`` methods is used
  to define this ``offset_`` such that outliers (resp. inliers) have negative
  (resp. positive) ``decision_function`` values. By default,
  ``contamination`` is kept unchanged to 0.1 for a deprecation period. In
  0.22, it will be set to "auto", thus using method-specific score offsets.
  In the :class:`covariance.EllipticEnvelope` ``decision_function`` method,
  the ``raw_values`` parameter is deprecated, as the shifted Mahalanobis
  distance will always be returned in 0.22. :issue:`9015` by
  `Nicolas Goix`_.

- |Feature| |API| A ``behaviour`` parameter has been introduced in
  :class:`ensemble.IsolationForest` to ensure backward compatibility. In the
  old behaviour, the ``decision_function`` is independent of the
  ``contamination`` parameter. A ``threshold_`` attribute depending on the
  ``contamination`` parameter is thus used. In the new behaviour, the
  ``decision_function`` is dependent on the ``contamination`` parameter, in
  such a way that 0 becomes its natural threshold to detect outliers.
  Setting ``behaviour`` to "old" is deprecated and will not be possible in
  version 0.22. Besides, the ``behaviour`` parameter will be removed in 0.24.
  :issue:`11553` by `Nicolas Goix`_.

- |API| Added a convergence warning to :class:`svm.LinearSVC` and
  :class:`linear_model.LogisticRegression` when ``verbose`` is set to 0.
  :issue:`10881` by :user:`Alexandre Sevin <AlexandreSev>`.

- |API| Changed the warning type from :class:`UserWarning` to
  :class:`exceptions.ConvergenceWarning` for failing convergence in
  ``linear_model.logistic_regression_path``,
  :class:`linear_model.RANSACRegressor`,
  :func:`linear_model.ridge_regression`,
  :class:`gaussian_process.GaussianProcessRegressor`,
  :class:`gaussian_process.GaussianProcessClassifier`,
  :func:`decomposition.fastica`,
  :class:`cross_decomposition.PLSCanonical`,
  :class:`cluster.AffinityPropagation` and :class:`cluster.Birch`.
  :issue:`10306` by :user:`Jonathan Siebert <jotasi>`.

Miscellaneous
.............

- |MajorFeature| A new configuration parameter, ``working_memory``, was added
  to control memory consumption limits in chunked operations, such as the new
  :func:`metrics.pairwise_distances_chunked`. See :ref:`working_memory`.
  :issue:`10280` by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.

- |Feature| The version of :mod:`joblib` bundled with Scikit-learn is now
  0.12. This uses a new default multiprocessing implementation, named
  `loky <https://github.com/tomMoral/loky>`_. While this may incur some
  memory and communication overhead, it should provide greater cross-platform
  stability than relying on Python standard library multiprocessing.
  :issue:`11741` by the Joblib developers, especially
  :user:`Thomas Moreau <tomMoral>` and `Olivier Grisel`_.
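The relation between ``score_samples``, ``offset_`` and ``decision_function`` described above can be sketched as follows (a minimal illustration on synthetic data, using the current API where ``contamination`` already defaults to "auto"):

```python
# Minimal sketch of the unified outlier-detection API (added in 0.20):
# decision_function is the raw score_samples shifted by offset_.
import numpy as np
from sklearn.ensemble import IsolationForest

X = np.random.RandomState(0).normal(size=(100, 2))
iso = IsolationForest(random_state=0).fit(X)

raw = iso.score_samples(X)                   # raw scores from the paper
shifted = iso.decision_function(X)           # negative values flag outliers
assert np.allclose(shifted, raw - iso.offset_)
```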
uses a new default multiprocessing implementation named loky https github com tomMoral loky While this may incur some memory and communication overhead it should provide greater cross platform stability than relying on Python standard library multiprocessing issue 11741 by the Joblib developers especially user Thomas Moreau tomMoral and Olivier Grisel Feature An environment variable to use the site joblib instead of the vendored one was added ref environment variable The main API of joblib is now exposed in mod sklearn utils issue 11166 by Gael Varoquaux Feature Add almost complete PyPy 3 support Known unsupported functionalities are func datasets load svmlight file class feature extraction FeatureHasher and class feature extraction text HashingVectorizer For running on PyPy PyPy3 v5 10 Numpy 1 14 0 and scipy 1 1 0 are required issue 11010 by user Ronan Lamy rlamy and Roman Yurchak Feature A utility method func sklearn show versions was added to print out information relevant for debugging It includes the user system the Python executable the version of the main libraries and BLAS binding information issue 11596 by user Alexandre Boucaud aboucaud Fix Fixed a bug when setting parameters on meta estimator involving both a wrapped estimator and its parameter issue 9999 by user Marcus Voss marcus voss and Joel Nothman Fix Fixed a bug where calling func sklearn base clone was not thread safe and could result in a pop from empty list error issue 9569 by Andreas M ller API The default value of n jobs is changed from 1 to None in all related functions and classes n jobs None means unset It will generally be interpreted as n jobs 1 unless the current joblib Parallel backend context specifies otherwise See term Glossary n jobs for additional information Note that this change happens immediately i e without a deprecation cycle issue 11741 by Olivier Grisel Fix Fixed a bug in validation helpers where passing a Dask DataFrame results in an error issue 12462 by user Zachariah 
Miller zwmiller Changes to estimator checks These changes mostly affect library developers Checks for transformers now apply if the estimator implements term transform regardless of whether it inherits from class sklearn base TransformerMixin issue 10474 by Joel Nothman Classifiers are now checked for consistency between term decision function and categorical predictions issue 10500 by user Narine Kokhlikyan NarineK Allow tests in func utils estimator checks check estimator to test functions that accept pairwise data issue 9701 by user Kyle Johnson gkjohns Allow func utils estimator checks check estimator to check that there is no private settings apart from parameters during estimator initialization issue 9378 by user Herilalaina Rakotoarison herilalaina The set of checks in func utils estimator checks check estimator now includes a check set params test which checks that set params is equivalent to passing parameters in init and warns if it encounters parameter validation issue 7738 by user Alvin Chiang absolutelyNoWarranty Add invariance tests for clustering metrics issue 8102 by user Ankita Sinha anki08 and user Guillaume Lemaitre glemaitre Add check methods subset invariance to func utils estimator checks check estimator which checks that estimator methods are invariant if applied to a data subset issue 10428 by user Jonathan Ohayon Johayon Add tests in func utils estimator checks check estimator to check that an estimator can handle read only memmap input data issue 10663 by user Arthur Mensch arthurmensch and user Lo c Est ve lesteve check sample weights pandas series now uses 8 rather than 6 samples to accommodate for the default number of clusters in class cluster KMeans issue 10933 by user Johannes Hansen jnhansen Estimators are now checked for whether sample weight None equates to sample weight np ones issue 11558 by user Sergul Aydore sergulaydore Code and Documentation Contributors Thanks to everyone who has contributed to the maintenance and 
improvement of the project since version 0 19 including 211217613 Aarshay Jain absolutelyNoWarranty Adam Greenhall Adam Kleczewski Adam Richie Halford adelr AdityaDaflapurkar Adrin Jalali Aidan Fitzgerald aishgrt1 Akash Shivram Alan Liddell Alan Yee Albert Thomas Alexander Lenail Alexander N Alexandre Boucaud Alexandre Gramfort Alexandre Sevin Alex Egg Alvaro Perez Diaz Amanda Aman Dalmia Andreas Bjerre Nielsen Andreas Mueller Andrew Peng Angus Williams Aniruddha Dave annaayzenshtat Anthony Gitter Antonio Quinonez Anubhav Marwaha Arik Pamnani Arthur Ozga Artiem K Arunava Arya McCarthy Attractadore Aur lien Bellet Aur lien Geron Ayush Gupta Balakumaran Manoharan Bangda Sun Barry Hart Bastian Venthur Ben Lawson Benn Roth Breno Freitas Brent Yi brett koonce Caio Oliveira Camil Staps cclauss Chady Kamar Charlie Brummitt Charlie Newey chris Chris Chris Catalfo Chris Foster Chris Holdgraf Christian Braune Christian Hirsch Christian Hogan Christopher Jenness Clement Joudet cnx cwitte Dallas Card Dan Barkhorn Daniel Daniel Ferreira Daniel Gomez Daniel Klevebring Danielle Shwed Daniel Mohns Danil Baibak Darius Morawiec David Beach David Burns David Kirkby David Nicholson David Pickup Derek Didi Bar Zev diegodlh Dillon Gardner Dillon Niederhut dilutedsauce dlovell Dmitry Mottl Dmitry Petrov Dor Cohen Douglas Duhaime Ekaterina Tuzova Eric Chang Eric Dean Sanchez Erich Schubert Eunji Fang Chieh Chou FarahSaeed felix F lix Raimundo fenx filipj8 FrankHui Franz Wompner Freija Descamps frsi Gabriele Calvo Gael Varoquaux Gaurav Dhingra Georgi Peev Gil Forsyth Giovanni Giuseppe Costa gkevinyen5418 goncalo rodrigues Gryllos Prokopis Guillaume Lemaitre Guillaume Vermeille Sanchez Gustavo De Mari Pereira hakaa1 Hanmin Qin Henry Lin Hong Honghe Hossein Pourbozorg Hristo Hunan Rostomyan iampat Ivan PANICO Jaewon Chung Jake VanderPlas jakirkham James Bourbeau James Malcolm Jamie Cox Jan Koch Jan Margeta Jan Schl ter janvanrijn Jason Wolosonovich JC Liu Jeb Bearer jeremiedbb Jimmy Wan 
Jinkun Wang Jiongyan Zhang jjabl jkleint Joan Massich Jo l Billaud Joel Nothman Johannes Hansen JohnStott Jonatan Samoocha Jonathan Ohayon J rg D pfert Joris Van den Bossche Jose Perez Parras Toledano josephsalmon jotasi jschendel Julian Kuhlmann Julien Chaumond julietcl Justin Shenk Karl F Kasper Primdal Lauritzen Katrin Leinweber Kirill ksemb Kuai Yu Kumar Ashutosh Kyeongpil Kang Kye Taylor kyledrogo Leland McInnes L o DS Liam Geron Liutong Zhou Lizao Li lkjcalc Loic Esteve louib Luciano Viola Lucija Gregov Luis Osa Luis Pedro Coelho Luke M Craig Luke Persola Mabel Mabel Villalba Maniteja Nandana MarkIwanchyshyn Mark Roth Markus M ller MarsGuy Martin Gubri martin hahn martin kokos mathurinm Matthias Feurer Max Copeland Mayur Kulkarni Meghann Agarwal Melanie Goetz Michael A Alcorn Minghui Liu Ming Li Minh Le Mohamed Ali Jamaoui Mohamed Maskani Mohammad Shahebaz Muayyad Alsadi Nabarun Pal Nagarjuna Kumar Naoya Kanai Narendran Santhanam NarineK Nathaniel Saul Nathan Suh Nicholas Nadeau P Eng AVS Nick Hoh Nicolas Goix Nicolas Hug Nicolau Werneck nielsenmarkus11 Nihar Sheth Nikita Titov Nilesh Kevlani Nirvan Anjirbag notmatthancock nzw Oleksandr Pavlyk oliblum90 Oliver Rausch Olivier Grisel Oren Milman Osaid Rehman Nasir pasbi Patrick Fernandes Patrick Olden Paul Paczuski Pedro Morales Peter Peter St John pierreablin pietruh Pinaki Nath Chowdhury Piotr Szyma ski Pradeep Reddy Raamana Pravar D Mahajan pravarmahajan QingYing Chen Raghav RV Rajendra arora RAKOTOARISON Herilalaina Rameshwar Bhaskaran RankyLau Rasul Kerimov Reiichiro Nakano Rob Roman Kosobrodov Roman Yurchak Ronan Lamy rragundez R diger Busche Ryan Sachin Kelkar Sagnik Bhattacharya Sailesh Choyal Sam Radhakrishnan Sam Steingold Samuel Bell Samuel O Ronsin Saqib Nizam Shamsi SATISH J Saurabh Gupta Scott Gigante Sebastian Flennerhag Sebastian Raschka Sebastien Dubois S bastien Lerique Sebastin Santy Sergey Feldman Sergey Melderis Sergul Aydore Shahebaz Shalil Awaley Shangwu Yao Sharad Vijalapuram Sharan 
Yalburgi shenhanc78 Shivam Rastogi Shu Haoran siftikha Sinclert P rez SolutusImmensus Somya Anand srajan paliwal Sriharsha Hatwar Sri Krishna Stefan van der Walt Stephen McDowell Steven Brown syonekura Taehoon Lee Takanori Hayashi tarcusx Taylor G Smith theriley106 Thomas Thomas Fan Thomas Heavey Tobias Madsen tobycheese Tom Augspurger Tom Dupr la Tour Tommy Trevor Stephens Trishnendu Ghorai Tulio Casagrande twosigmajab Umar Farouk Umar Urvang Patel Utkarsh Upadhyay Vadim Markovtsev Varun Agrawal Vathsala Achar Vilhelm von Ehrenheim Vinayak Mehta Vinit Vinod Kumar L Viraj Mavani Viraj Navkal Vivek Kumar Vlad Niculae vqean3 Vrishank Bhardwaj vufg wallygauze Warut Vijitbenjaronk wdevazelhes Wenhao Zhang Wes Barnett Will William de Vazelhes Will Rosenfeld Xin Xiong Yiming Paul Li ymazari Yufeng Zach Griffith Z Vin cius Zhenqing Hu Zhiqing Xiao Zijie ZJ Poh |
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.21
============
.. include:: changelog_legend.inc
.. _changes_0_21_3:
Version 0.21.3
==============
**July 30, 2019**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- The v0.20.0 release notes failed to mention a backwards incompatibility in
:func:`metrics.make_scorer` when `needs_proba=True` and `y_true` is binary.
Now, the scorer function is supposed to accept a 1D `y_pred` (i.e.,
probability of the positive class, shape `(n_samples,)`), instead of a 2D
`y_pred` (i.e., shape `(n_samples, 2)`).
Changelog
---------
:mod:`sklearn.cluster`
......................
- |Fix| Fixed a bug in :class:`cluster.KMeans` where computation with
`init='random'` was single threaded for `n_jobs > 1` or `n_jobs = -1`.
:pr:`12955` by :user:`Prabakaran Kumaresshan <nixphix>`.
- |Fix| Fixed a bug in :class:`cluster.OPTICS` where users were unable to pass
float `min_samples` and `min_cluster_size`. :pr:`14496` by
:user:`Fabian Klopfer <someusername1>`
and :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed a bug in :class:`cluster.KMeans` where KMeans++ initialisation
could rarely result in an IndexError. :issue:`11756` by `Joel Nothman`_.
:mod:`sklearn.compose`
......................
- |Fix| Fixed an issue in :class:`compose.ColumnTransformer` where using
DataFrames whose column order differs between ``fit`` and
``transform`` could lead to silently passing incorrect columns to the
``remainder`` transformer.
:pr:`14237` by :user:`Andreas Schuderer <schuderer>`.
:mod:`sklearn.datasets`
.......................
- |Fix| :func:`datasets.fetch_california_housing`,
:func:`datasets.fetch_covtype`,
:func:`datasets.fetch_kddcup99`, :func:`datasets.fetch_olivetti_faces`,
:func:`datasets.fetch_rcv1`, and :func:`datasets.fetch_species_distributions`
try to persist the previous cache using the new ``joblib`` if the cached
data was persisted using the deprecated ``sklearn.externals.joblib``. This
behavior is set to be deprecated and removed in v0.23.
:pr:`14197` by `Adrin Jalali`_.
:mod:`sklearn.ensemble`
.......................
- |Fix| Fix zero division error in :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`.
:pr:`14024` by :user:`Nicolas Hug <NicolasHug>`.
:mod:`sklearn.impute`
.....................
- |Fix| Fixed a bug in :class:`impute.SimpleImputer` and
:class:`impute.IterativeImputer` so that no errors are thrown when there are
missing values in training data. :pr:`13974` by :user:`Frank Hoang <fhoang7>`.
:mod:`sklearn.inspection`
.........................
- |Fix| Fixed a bug in `inspection.plot_partial_dependence` where
``target`` parameter was not being taken into account for multiclass problems.
:pr:`14393` by :user:`Guillem G. Subies <guillemgsubies>`.
:mod:`sklearn.linear_model`
...........................
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where
``refit=False`` would fail depending on the ``'multi_class'`` and
``'penalty'`` parameters (regression introduced in 0.21). :pr:`14087` by
`Nicolas Hug`_.
- |Fix| Compatibility fix for :class:`linear_model.ARDRegression` and
Scipy>=1.3.0. Adapts to upstream changes to the default `pinvh` cutoff
threshold which otherwise results in poor accuracy in some cases.
:pr:`14067` by :user:`Tim Staley <timstaley>`.
:mod:`sklearn.neighbors`
........................
- |Fix| Fixed a bug in :class:`neighbors.NeighborhoodComponentsAnalysis` where
the validation of initial parameters ``n_components``, ``max_iter`` and
``tol`` required too strict types. :pr:`14092` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.tree`
...................
- |Fix| Fixed bug in :func:`tree.export_text` when the tree has one feature and
a single feature name is passed in. :pr:`14053` by `Thomas Fan`_.
- |Fix| Fixed an issue with :func:`tree.plot_tree` where it displayed
entropy calculations even for `gini` criterion in DecisionTreeClassifiers.
:pr:`13947` by :user:`Frank Hoang <fhoang7>`.
.. _changes_0_21_2:
Version 0.21.2
==============
**24 May 2019**
Changelog
---------
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed a bug in :class:`cross_decomposition.CCA` improving numerical
stability when `Y` is close to zero. :pr:`13903` by `Thomas Fan`_.
:mod:`sklearn.metrics`
......................
- |Fix| Fixed a bug in :func:`metrics.pairwise.euclidean_distances` where a
part of the distance matrix was left un-instantiated for sufficiently large
float32 datasets (regression introduced in 0.21). :pr:`13910` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.preprocessing`
............................
- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where the new
`drop` parameter was not reflected in `get_feature_names`. :pr:`13894`
by :user:`James Myatt <jamesmyatt>`.
`sklearn.utils.sparsefuncs`
...........................
- |Fix| Fixed a bug where `min_max_axis` would fail on 32-bit systems
for certain large inputs. This affects :class:`preprocessing.MaxAbsScaler`,
:func:`preprocessing.normalize` and :class:`preprocessing.LabelBinarizer`.
:pr:`13741` by :user:`Roddy MacSween <rlms>`.
.. _changes_0_21_1:
Version 0.21.1
==============
**17 May 2019**
This is a bug-fix release to primarily resolve some packaging issues in version
0.21.0. It also includes minor documentation improvements and some bug fixes.
Changelog
---------
:mod:`sklearn.inspection`
.........................
- |Fix| Fixed a bug in :func:`inspection.partial_dependence` to only check
classifier and not regressor for the multiclass-multioutput case.
:pr:`14309` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fixed a bug in :func:`metrics.pairwise_distances` where it would raise
``AttributeError`` for boolean metrics when ``X`` had a boolean dtype and
``Y == None``.
:issue:`13864` by :user:`Paresh Mathur <rick2047>`.
- |Fix| Fixed two bugs in :func:`metrics.pairwise_distances` when
``n_jobs > 1``. First it used to return a distance matrix with same dtype as
input, even for integer dtype. Then the diagonal was not zeros for euclidean
metric when ``Y`` is ``X``. :issue:`13877` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.neighbors`
........................
- |Fix| Fixed a bug in :class:`neighbors.KernelDensity` which could not be
restored from a pickle if ``sample_weight`` had been used.
:issue:`13772` by :user:`Aditya Vyas <aditya1702>`.
.. _changes_0_21:
Version 0.21.0
==============
**May 2019**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- :class:`discriminant_analysis.LinearDiscriminantAnalysis` for multiclass
classification. |Fix|
- :class:`discriminant_analysis.LinearDiscriminantAnalysis` with 'eigen'
solver. |Fix|
- :class:`linear_model.BayesianRidge` |Fix|
- Decision trees and derived ensembles when both `max_depth` and
`max_leaf_nodes` are set. |Fix|
- :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` with 'saga' solver. |Fix|
- :class:`ensemble.GradientBoostingClassifier` |Fix|
- :class:`sklearn.feature_extraction.text.HashingVectorizer`,
:class:`sklearn.feature_extraction.text.TfidfVectorizer`, and
:class:`sklearn.feature_extraction.text.CountVectorizer` |Fix|
- :class:`neural_network.MLPClassifier` |Fix|
- :func:`svm.SVC.decision_function` and
:func:`multiclass.OneVsOneClassifier.decision_function`. |Fix|
- :class:`linear_model.SGDClassifier` and any derived classifiers. |Fix|
- Any model using the `linear_model._sag.sag_solver` function with a `0`
seed, including :class:`linear_model.LogisticRegression`,
:class:`linear_model.LogisticRegressionCV`, :class:`linear_model.Ridge`,
and :class:`linear_model.RidgeCV` with 'sag' solver. |Fix|
- :class:`linear_model.RidgeCV` when using leave-one-out cross-validation
with sparse inputs. |Fix|
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)
Known Major Bugs
----------------
* The default `max_iter` for :class:`linear_model.LogisticRegression` is too
small for many solvers given the default `tol`. In particular, we
accidentally changed the default `max_iter` for the liblinear solver from
1000 to 100 iterations in :pr:`3591` released in version 0.16.
In a future release we hope to choose better default `max_iter` and `tol`
heuristically depending on the solver (see :pr:`13317`).
Changelog
---------
Support for Python 3.4 and below has been officially dropped.
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123456 is the *pull request* number, not the issue number.
:mod:`sklearn.base`
...................
- |API| The R2 score used when calling ``score`` on a regressor will use
``multioutput='uniform_average'`` from version 0.23 to keep consistent with
:func:`metrics.r2_score`. This will influence the ``score`` method of all
the multioutput regressors (except for
:class:`multioutput.MultiOutputRegressor`).
:pr:`13157` by :user:`Hanmin Qin <qinhanmin2014>`.
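  As an illustration (not part of the original entry; toy data), the
  ``'uniform_average'`` behaviour is simply the unweighted mean of the
  per-output R2 scores:

  ```python
  import numpy as np
  from sklearn.metrics import r2_score

  # two-output regression targets and predictions (toy data)
  y_true = np.array([[0.5, 1.0], [-1.0, 1.0], [7.0, -6.0]])
  y_pred = np.array([[0.0, 2.0], [-1.0, 2.0], [8.0, -5.0]])

  per_output = r2_score(y_true, y_pred, multioutput="raw_values")
  # 'uniform_average' is the unweighted mean of the per-output R2 scores
  avg = r2_score(y_true, y_pred, multioutput="uniform_average")
  assert np.isclose(avg, per_output.mean())
  ```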
:mod:`sklearn.calibration`
..........................
- |Enhancement| Added support to bin the data passed into
:func:`calibration.calibration_curve` by quantiles instead of uniformly
between 0 and 1.
:pr:`13086` by :user:`Scott Cole <srcole>`.
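  A minimal sketch of the quantile binning via the ``strategy`` parameter
  (synthetic scores, not from the original entry):

  ```python
  import numpy as np
  from sklearn.calibration import calibration_curve

  rng = np.random.RandomState(0)
  y_true = rng.randint(0, 2, size=200)   # binary labels
  y_prob = rng.rand(200)                 # predicted probabilities

  # strategy="quantile" puts the bin edges at quantiles of y_prob, so
  # each bin holds roughly the same number of samples
  prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=5,
                                           strategy="quantile")
  ```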
- |Enhancement| Allow n-dimensional arrays as input for
:class:`calibration.CalibratedClassifierCV`. :pr:`13485` by
:user:`William de Vazelhes <wdevazelhes>`.
:mod:`sklearn.cluster`
......................
- |MajorFeature| A new clustering algorithm: :class:`cluster.OPTICS`: an
algorithm related to :class:`cluster.DBSCAN`, that has hyperparameters easier
to set and that scales better, by :user:`Shane <espg>`,
`Adrin Jalali`_, :user:`Erich Schubert <kno10>`, `Hanmin Qin`_, and
:user:`Assia Benbihi <assiaben>`.
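  A minimal usage sketch on synthetic data (not part of the original
  entry):

  ```python
  import numpy as np
  from sklearn.cluster import OPTICS

  rng = np.random.RandomState(0)
  # two well-separated blobs
  X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 10])

  # unlike DBSCAN, no eps needs to be fixed up front; min_samples is the
  # main hyperparameter
  labels = OPTICS(min_samples=5).fit(X).labels_
  ```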
- |Fix| Fixed a bug where :class:`cluster.Birch` could occasionally raise an
AttributeError. :pr:`13651` by `Joel Nothman`_.
- |Fix| Fixed a bug in :class:`cluster.KMeans` where empty clusters weren't
correctly relocated when using sample weights. :pr:`13486` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| The ``n_components_`` attribute in :class:`cluster.AgglomerativeClustering`
and :class:`cluster.FeatureAgglomeration` has been renamed to
``n_connected_components_``.
:pr:`13427` by :user:`Stephane Couvreur <scouvreur>`.
- |Enhancement| :class:`cluster.AgglomerativeClustering` and
:class:`cluster.FeatureAgglomeration` now accept a ``distance_threshold``
parameter which can be used to find the clusters instead of ``n_clusters``.
:issue:`9069` by :user:`Vathsala Achar <VathsalaAchar>` and `Adrin Jalali`_.
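  A toy sketch of ``distance_threshold`` (synthetic 1-D data; not part of
  the original entry):

  ```python
  import numpy as np
  from sklearn.cluster import AgglomerativeClustering

  X = np.array([[0.0], [0.1], [5.0], [5.1], [10.0]])

  # with distance_threshold set, n_clusters must be None; the dendrogram
  # is cut wherever a merge would exceed the threshold
  model = AgglomerativeClustering(n_clusters=None,
                                  distance_threshold=1.0).fit(X)
  n_found = model.n_clusters_  # 3 clusters for this toy data
  ```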
:mod:`sklearn.compose`
......................
- |API| :class:`compose.ColumnTransformer` is no longer an experimental
feature. :pr:`13835` by :user:`Hanmin Qin <qinhanmin2014>`.
:mod:`sklearn.datasets`
.......................
- |Fix| Added support for 64-bit group IDs and pointers in SVMLight files.
:pr:`10727` by :user:`Bryan K Woods <bryan-woods>`.
- |Fix| :func:`datasets.load_sample_images` returns images with a deterministic
order. :pr:`13250` by :user:`Thomas Fan <thomasjpfan>`.
:mod:`sklearn.decomposition`
............................
- |Enhancement| :class:`decomposition.KernelPCA` now has deterministic output
(resolved sign ambiguity in eigenvalue decomposition of the kernel matrix).
:pr:`13241` by :user:`Aurélien Bellet <bellet>`.
- |Fix| Fixed a bug in :class:`decomposition.KernelPCA`, `fit().transform()`
now produces the correct output (the same as `fit_transform()`) in case
of non-removed zero eigenvalues (`remove_zero_eig=False`).
`fit_inverse_transform` was also accelerated by using the same trick as
`fit_transform` to compute the transform of `X`.
:pr:`12143` by :user:`Sylvain Marié <smarie>`
- |Fix| Fixed a bug in :class:`decomposition.NMF` where `init = 'nndsvd'`,
`init = 'nndsvda'`, and `init = 'nndsvdar'` are allowed when
`n_components < n_features` instead of
`n_components <= min(n_samples, n_features)`.
:pr:`11650` by :user:`Hossein Pourbozorg <hossein-pourbozorg>` and
:user:`Zijie (ZJ) Poh <zjpoh>`.
- |API| The default value of the :code:`init` argument in
:func:`decomposition.non_negative_factorization` will change from
:code:`random` to :code:`None` in version 0.23 to make it consistent with
:class:`decomposition.NMF`. A FutureWarning is raised when
the default value is used.
:pr:`12988` by :user:`Zijie (ZJ) Poh <zjpoh>`.
:mod:`sklearn.discriminant_analysis`
....................................
- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now
preserves ``float32`` and ``float64`` dtypes. :pr:`8769` and
:pr:`11000` by :user:`Thibault Sejourne <thibsej>`.
- |Fix| A ``ChangedBehaviourWarning`` is now raised when
:class:`discriminant_analysis.LinearDiscriminantAnalysis` is given as
parameter ``n_components > min(n_features, n_classes - 1)``, and
``n_components`` is changed to ``min(n_features, n_classes - 1)`` if so.
Previously the change was made, but silently. :pr:`11526` by
:user:`William de Vazelhes<wdevazelhes>`.
- |Fix| Fixed a bug in :class:`discriminant_analysis.LinearDiscriminantAnalysis`
where the predicted probabilities would be incorrectly computed in the
multiclass case. :pr:`6848`, by :user:`Agamemnon Krasoulis
<agamemnonc>` and :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fixed a bug in :class:`discriminant_analysis.LinearDiscriminantAnalysis`
where the predicted probabilities would be incorrectly computed with ``eigen``
solver. :pr:`11727`, by :user:`Agamemnon Krasoulis
<agamemnonc>`.
:mod:`sklearn.dummy`
....................
- |Fix| Fixed a bug in :class:`dummy.DummyClassifier` where the
``predict_proba`` method was returning int32 array instead of
float64 for the ``stratified`` strategy. :pr:`13266` by
:user:`Christos Aridas<chkoar>`.
- |Fix| Fixed a bug in :class:`dummy.DummyClassifier` where it was throwing a
dimension mismatch error in prediction time if a column vector ``y`` with
``shape=(n, 1)`` was given at ``fit`` time. :pr:`13545` by :user:`Nick
Sorros <nsorros>` and `Adrin Jalali`_.
:mod:`sklearn.ensemble`
.......................
- |MajorFeature| Add two new implementations of
gradient boosting trees: :class:`ensemble.HistGradientBoostingClassifier`
and :class:`ensemble.HistGradientBoostingRegressor`. The implementation of
these estimators is inspired by
`LightGBM <https://github.com/Microsoft/LightGBM>`_ and can be orders of
magnitude faster than :class:`ensemble.GradientBoostingRegressor` and
:class:`ensemble.GradientBoostingClassifier` when the number of samples is
larger than tens of thousands of samples. The API of these new estimators
is slightly different, and some of the features from
:class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` are not yet supported.
These new estimators are experimental, which means that their results or
their API might change without any deprecation cycle. To use them, you
need to explicitly import ``enable_hist_gradient_boosting``::
>>> # explicitly require this experimental feature
>>> from sklearn.experimental import enable_hist_gradient_boosting # noqa
>>> # now you can import normally from sklearn.ensemble
>>> from sklearn.ensemble import HistGradientBoostingClassifier
.. note::
Update: since version 1.0, these estimators are not experimental
anymore and you don't need to use `from sklearn.experimental import
enable_hist_gradient_boosting`.
:pr:`12807` by :user:`Nicolas Hug<NicolasHug>`.
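  A minimal fit on synthetic data (not part of the original entry; on
  0.21 the experimental import shown above is required first):

  ```python
  from sklearn.datasets import make_classification
  from sklearn.ensemble import HistGradientBoostingClassifier
  from sklearn.model_selection import train_test_split

  # on scikit-learn 0.21, first run:
  #   from sklearn.experimental import enable_hist_gradient_boosting  # noqa
  X, y = make_classification(n_samples=1000, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  clf = HistGradientBoostingClassifier().fit(X_train, y_train)
  score = clf.score(X_test, y_test)
  ```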
- |Feature| Add :class:`ensemble.VotingRegressor`
which provides an equivalent of :class:`ensemble.VotingClassifier`
for regression problems.
:pr:`12513` by :user:`Ramil Nugmanov <stsouko>` and
:user:`Mohamed Ali Jamaoui <mohamed-ali>`.
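  A minimal sketch combining two regressors (synthetic data, not from the
  original entry):

  ```python
  from sklearn.datasets import make_regression
  from sklearn.ensemble import RandomForestRegressor, VotingRegressor
  from sklearn.linear_model import LinearRegression

  X, y = make_regression(n_samples=200, n_features=4, random_state=0)

  # predictions of the fitted sub-estimators are averaged
  vr = VotingRegressor([
      ("lr", LinearRegression()),
      ("rf", RandomForestRegressor(n_estimators=10, random_state=0)),
  ]).fit(X, y)
  pred = vr.predict(X[:5])
  ```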
- |Efficiency| Make :class:`ensemble.IsolationForest` prefer threads over
processes when running with ``n_jobs > 1`` as the underlying decision tree
fit calls do release the GIL. This change reduces memory usage and
communication overhead. :pr:`12543` by :user:`Isaac Storch <istorch>`
and `Olivier Grisel`_.
- |Efficiency| Make :class:`ensemble.IsolationForest` more memory efficient
by avoiding keeping in memory each tree prediction. :pr:`13260` by
`Nicolas Goix`_.
- |Efficiency| :class:`ensemble.IsolationForest` now uses chunks of data at
prediction step, thus capping the memory usage. :pr:`13283` by
`Nicolas Goix`_.
- |Efficiency| :class:`sklearn.ensemble.GradientBoostingClassifier` and
:class:`sklearn.ensemble.GradientBoostingRegressor` now keep the
input ``y`` as ``float64`` to avoid it being copied internally by trees.
:pr:`13524` by `Adrin Jalali`_.
- |Enhancement| Minimized the validation of X in
:class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`
:pr:`13174` by :user:`Christos Aridas <chkoar>`.
- |Enhancement| :class:`ensemble.IsolationForest` now exposes ``warm_start``
parameter, allowing iterative addition of trees to an isolation
forest. :pr:`13496` by :user:`Peter Marko <petibear>`.
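  A sketch of iterative tree addition with ``warm_start`` (synthetic data,
  not from the original entry):

  ```python
  from sklearn.datasets import make_blobs
  from sklearn.ensemble import IsolationForest

  X, _ = make_blobs(n_samples=100, centers=1, random_state=0)

  iso = IsolationForest(n_estimators=50, warm_start=True,
                        random_state=0).fit(X)
  # grow the existing forest by 30 trees instead of refitting from scratch
  iso.set_params(n_estimators=80)
  iso.fit(X)
  n_trees = len(iso.estimators_)  # 80
  ```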
- |Fix| The values of ``feature_importances_`` in all random forest based
models (i.e.
:class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier`,
:class:`ensemble.ExtraTreesRegressor`,
:class:`ensemble.RandomTreesEmbedding`,
:class:`ensemble.GradientBoostingClassifier`, and
:class:`ensemble.GradientBoostingRegressor`) now:
- sum up to ``1``;
- exclude single-node trees from the feature importance calculation;
- are an array of all zeros when every tree consists of a single
(root) node.
:pr:`13636` and :pr:`13620` by `Adrin Jalali`_.
- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor`, which didn't support
scikit-learn estimators as the initial estimator. Also added support of
initial estimator which does not support sample weights. :pr:`12436` by
:user:`Jérémie du Boisberranger <jeremiedbb>` and :pr:`12983` by
:user:`Nicolas Hug<NicolasHug>`.
- |Fix| Fixed the output of the average path length computed in
:class:`ensemble.IsolationForest` when the input is either 0, 1 or 2.
:pr:`13251` by :user:`Albert Thomas <albertcthomas>`
and :user:`joshuakennethjones <joshuakennethjones>`.
- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where
the gradients would be incorrectly computed in multiclass classification
problems. :pr:`12715` by :user:`Nicolas Hug<NicolasHug>`.
- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where
validation sets for early stopping were not sampled with stratification.
:pr:`13164` by :user:`Nicolas Hug<NicolasHug>`.
- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where
the default initial prediction of a multiclass classifier would predict the
classes priors instead of the log of the priors. :pr:`12983` by
:user:`Nicolas Hug<NicolasHug>`.
- |Fix| Fixed a bug in :class:`ensemble.RandomForestClassifier` where the
``predict`` method would error for multiclass multioutput forests models
if any targets were strings. :pr:`12834` by :user:`Elizabeth Sander
<elsander>`.
- |Fix| Fixed a bug in `ensemble.gradient_boosting.LossFunction` and
`ensemble.gradient_boosting.LeastSquaresError` where the default
value of ``learning_rate`` in ``update_terminal_regions`` is not consistent
with the document and the caller functions. Note however that directly using
these loss functions is deprecated.
:pr:`6463` by :user:`movelikeriver <movelikeriver>`.
- |Fix| `ensemble.partial_dependence` (and consequently the new
version :func:`sklearn.inspection.partial_dependence`) now takes sample
weights into account for the partial dependence computation when the
gradient boosting model has been trained with sample weights.
:pr:`13193` by :user:`Samuel O. Ronsin <samronsin>`.
- |API| `ensemble.partial_dependence` and
`ensemble.plot_partial_dependence` are now deprecated in favor of
:func:`inspection.partial_dependence<sklearn.inspection.partial_dependence>`
and
`inspection.plot_partial_dependence<sklearn.inspection.plot_partial_dependence>`.
:pr:`12599` by :user:`Trevor Stephens<trevorstephens>` and
:user:`Nicolas Hug<NicolasHug>`.
- |Fix| :class:`ensemble.VotingClassifier` and
:class:`ensemble.VotingRegressor` were failing during ``fit`` if one
of the estimators was set to ``None`` and ``sample_weight`` was not ``None``.
:pr:`13779` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| :class:`ensemble.VotingClassifier` and
:class:`ensemble.VotingRegressor` accept ``'drop'`` to disable an estimator
in addition to ``None`` to be consistent with other estimators (i.e.,
:class:`pipeline.FeatureUnion` and :class:`compose.ColumnTransformer`).
:pr:`13780` by :user:`Guillaume Lemaitre <glemaitre>`.
`sklearn.externals`
...................
- |API| Deprecated `externals.six` since we have dropped support for
Python 2.7. :pr:`12916` by :user:`Hanmin Qin <qinhanmin2014>`.
:mod:`sklearn.feature_extraction`
.................................
- |Fix| If ``input='file'`` or ``input='filename'``, and a callable is given as
the ``analyzer``, :class:`sklearn.feature_extraction.text.HashingVectorizer`,
:class:`sklearn.feature_extraction.text.TfidfVectorizer`, and
:class:`sklearn.feature_extraction.text.CountVectorizer` now read the data
from the file(s) and then pass it to the given ``analyzer``, instead of
passing the file name(s) or the file object(s) to the analyzer.
:pr:`13641` by `Adrin Jalali`_.
:mod:`sklearn.impute`
.....................
- |MajorFeature| Added :class:`impute.IterativeImputer`, which is a strategy
for imputing missing values by modeling each feature with missing values as a
function of other features in a round-robin fashion. :pr:`8478` and
:pr:`12177` by :user:`Sergey Feldman <sergeyf>` and :user:`Ben Lawson
<benlawson>`.
The API of IterativeImputer is experimental and subject to change without any
deprecation cycle. To use them, you need to explicitly import
``enable_iterative_imputer``::
>>> from sklearn.experimental import enable_iterative_imputer # noqa
>>> # now you can import normally from sklearn.impute
>>> from sklearn.impute import IterativeImputer
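  A minimal round-trip on a toy array with missing values (not part of
  the original entry):

  ```python
  import numpy as np
  from sklearn.experimental import enable_iterative_imputer  # noqa
  from sklearn.impute import IterativeImputer

  X = np.array([[1.0, 2.0], [3.0, 6.0], [4.0, 8.0],
                [np.nan, 3.0], [7.0, np.nan]])

  # each feature with missing values is regressed on the others, round-robin
  X_filled = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)
  ```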
- |Feature| The :class:`impute.SimpleImputer` and
:class:`impute.IterativeImputer` have a new parameter ``'add_indicator'``,
which simply stacks a :class:`impute.MissingIndicator` transform into the
output of the imputer's transform. That allows a predictive estimator to
account for missingness. :pr:`12583`, :pr:`13601` by :user:`Danylo Baibak
<DanilBaibak>`.
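A minimal sketch of the ``add_indicator`` behaviour described in the entry above (the data values here are purely illustrative):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# add_indicator=True appends one binary column per feature that had
# missing values, after the imputed features themselves.
imp = SimpleImputer(strategy="mean", add_indicator=True)
Xt = imp.fit_transform(X)
```

The transformed array has four columns here: the two imputed features followed by two missingness indicators, so a downstream estimator can condition on which values were missing.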
- |Fix| In :class:`impute.MissingIndicator` avoid implicit densification by
raising an exception if the input is sparse and the ``missing_values``
parameter is set to 0. :pr:`13240` by :user:`Bartosz Telenczuk <btel>`.
- |Fix| Fixed two bugs in :class:`impute.MissingIndicator`. First, when
``X`` is sparse, all the non-zero non-missing values used to become
explicit False in the transformed data. Then, when
``features='missing-only'``, all features used to be kept if there were no
missing values at all. :pr:`13562` by :user:`Jérémie du Boisberranger
<jeremiedbb>`.
:mod:`sklearn.inspection`
.........................
(new subpackage)
- |Feature| Partial dependence plots
(`inspection.plot_partial_dependence`) are now supported for
any regressor or classifier (classifiers must provide a `predict_proba`
method). :pr:`12599` by :user:`Trevor Stephens <trevorstephens>` and
:user:`Nicolas Hug <NicolasHug>`.
:mod:`sklearn.isotonic`
.......................
- |Feature| Allow different dtypes (such as float32) in
:class:`isotonic.IsotonicRegression`.
:pr:`8769` by :user:`Vlad Niculae <vene>`
:mod:`sklearn.linear_model`
...........................
- |Enhancement| :class:`linear_model.Ridge` now preserves ``float32`` and
``float64`` dtypes. :issue:`8769` and :issue:`11000` by
:user:`Guillaume Lemaitre <glemaitre>`, and :user:`Joan Massich <massich>`
- |Feature| :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` now support Elastic-Net penalty,
with the 'saga' solver. :pr:`11646` by :user:`Nicolas Hug <NicolasHug>`.
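A minimal sketch of the Elastic-Net penalty mentioned above, on synthetic data (the dataset and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)

# Elastic-Net requires the 'saga' solver; l1_ratio balances the L1 and
# L2 terms (0 = pure L2, 1 = pure L1).
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, max_iter=10000)
clf.fit(X, y)
```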
- |Feature| Added :class:`linear_model.lars_path_gram`, which is
:class:`linear_model.lars_path` in the sufficient stats mode, allowing
users to compute :class:`linear_model.lars_path` without providing
``X`` and ``y``. :pr:`11699` by :user:`Kuai Yu <yukuairoy>`.
- |Efficiency| `linear_model.make_dataset` now preserves
``float32`` and ``float64`` dtypes, reducing memory consumption in stochastic
gradient, SAG and SAGA solvers.
:pr:`8769` and :pr:`11000` by
:user:`Nelle Varoquaux <NelleV>`, :user:`Arthur Imbert <Henley13>`,
:user:`Guillaume Lemaitre <glemaitre>`, and :user:`Joan Massich <massich>`
- |Enhancement| :class:`linear_model.LogisticRegression` now supports an
unregularized objective when ``penalty='none'`` is passed. This is
equivalent to setting ``C=np.inf`` with l2 regularization. Not supported
by the liblinear solver. :pr:`12860` by :user:`Nicolas Hug
<NicolasHug>`.
- |Enhancement| `sparse_cg` solver in :class:`linear_model.Ridge`
now supports fitting the intercept (i.e. ``fit_intercept=True``) when
inputs are sparse. :pr:`13336` by :user:`Bartosz Telenczuk <btel>`.
- |Enhancement| The coordinate descent solver used in `Lasso`, `ElasticNet`,
etc. now issues a `ConvergenceWarning` when it completes without meeting the
desired tolerance.
:pr:`11754` and :pr:`13397` by :user:`Brent Fagan <brentfagan>` and
:user:`Adrin Jalali <adrinjalali>`.
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` with 'saga' solver, where the
weights would not be correctly updated in some cases.
:pr:`11646` by `Tom Dupre la Tour`_.
- |Fix| Fixed the posterior mean, posterior covariance and returned
regularization parameters in :class:`linear_model.BayesianRidge`. The
posterior mean and the posterior covariance were not the ones computed
with the last update of the regularization parameters and the returned
regularization parameters were not the final ones. Also fixed the formula of
the log marginal likelihood used to compute the score when
`compute_score=True`. :pr:`12174` by
:user:`Albert Thomas <albertcthomas>`.
- |Fix| Fixed a bug in :class:`linear_model.LassoLarsIC`, where user input
``copy_X=False`` at instance creation would be overridden by default
parameter value ``copy_X=True`` in ``fit``.
:pr:`12972` by :user:`Lucio Fernandez-Arjona <luk-f-a>`
- |Fix| Fixed a bug in :class:`linear_model.LinearRegression` that
was not returning the same coefficients and intercepts with
``fit_intercept=True`` in the sparse and dense cases.
:pr:`13279` by `Alexandre Gramfort`_
- |Fix| Fixed a bug in :class:`linear_model.HuberRegressor` that was
broken when ``X`` was of dtype bool. :pr:`13328` by `Alexandre Gramfort`_.
- |Fix| Fixed a performance issue of ``saga`` and ``sag`` solvers when called
in a :class:`joblib.Parallel` setting with ``n_jobs > 1`` and
``backend="threading"``, causing them to perform worse than in the sequential
case. :pr:`13389` by :user:`Pierre Glaser <pierreglaser>`.
- |Fix| Fixed a bug in
`linear_model.stochastic_gradient.BaseSGDClassifier` that was not
deterministic when trained in a multi-class setting on several threads.
:pr:`13422` by :user:`Clément Doumouro <ClemDoum>`.
- |Fix| Fixed bug in :func:`linear_model.ridge_regression`,
:class:`linear_model.Ridge` and
:class:`linear_model.RidgeClassifier` that
caused unhandled exception for arguments ``return_intercept=True`` and
``solver=auto`` (default) or any other solver different from ``sag``.
:pr:`13363` by :user:`Bartosz Telenczuk <btel>`
- |Fix| :func:`linear_model.ridge_regression` will now raise an exception
if ``return_intercept=True`` and the solver is different from ``sag``.
Previously, only a warning was issued. :pr:`13363` by :user:`Bartosz Telenczuk <btel>`
- |Fix| :func:`linear_model.ridge_regression` will choose ``sparse_cg``
solver for sparse inputs when ``solver=auto`` and ``sample_weight``
is provided (previously `cholesky` solver was selected).
:pr:`13363` by :user:`Bartosz Telenczuk <btel>`
- |API| The use of :class:`linear_model.lars_path` with ``X=None``
while passing ``Gram`` is deprecated in version 0.21 and will be removed
in version 0.23. Use :class:`linear_model.lars_path_gram` instead.
:pr:`11699` by :user:`Kuai Yu <yukuairoy>`.
- |API| `linear_model.logistic_regression_path` is deprecated
in version 0.21 and will be removed in version 0.23.
:pr:`12821` by :user:`Nicolas Hug <NicolasHug>`.
- |Fix| :class:`linear_model.RidgeCV` with leave-one-out cross-validation
now correctly fits an intercept when ``fit_intercept=True`` and the design
matrix is sparse. :issue:`13350` by :user:`Jérôme Dockès <jeromedockes>`
:mod:`sklearn.manifold`
.......................
- |Efficiency| Make :func:`manifold.trustworthiness` use an inverted index
instead of an `np.where` lookup to find the rank of neighbors in the input
space. This improves efficiency in particular when computed with
lots of neighbors and/or small datasets.
:pr:`9907` by :user:`William de Vazelhes <wdevazelhes>`.
:mod:`sklearn.metrics`
......................
- |Feature| Added the :func:`metrics.max_error` metric and a corresponding
``'max_error'`` scorer for single output regression.
:pr:`12232` by :user:`Krishna Sangeeth <whiletruelearn>`.
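A minimal sketch of the new metric (the values are illustrative):

```python
from sklearn.metrics import max_error

y_true = [3.0, 2.0, 7.0, 1.0]
y_pred = [4.0, 2.0, 7.0, 1.5]

# max_error reports the worst-case residual rather than an average,
# which is useful when single large errors are unacceptable.
err = max_error(y_true, y_pred)
```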
- |Feature| Add :func:`metrics.multilabel_confusion_matrix`, which calculates a
confusion matrix with true positive, false positive, false negative and true
negative counts for each class. This facilitates the calculation of set-wise
metrics such as recall, specificity, fall out and miss rate.
:pr:`11179` by :user:`Shangwu Yao <ShangwuYao>` and `Joel Nothman`_.
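A minimal sketch of the per-class confusion matrices described above (the labels are illustrative):

```python
from sklearn.metrics import multilabel_confusion_matrix

y_true = [0, 1, 2, 2]
y_pred = [0, 2, 2, 1]

# One 2x2 matrix per class, laid out as [[tn, fp], [fn, tp]].
mcm = multilabel_confusion_matrix(y_true, y_pred)
```

From each per-class matrix you can then read off recall, specificity, fall out, and miss rate directly.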
- |Feature| :func:`metrics.jaccard_score` has been added to calculate the
Jaccard coefficient as an evaluation metric for binary, multilabel and
multiclass tasks, with an interface analogous to :func:`metrics.f1_score`.
:pr:`13151` by :user:`Gaurav Dhingra <gxyd>` and `Joel Nothman`_.
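A minimal sketch for the binary case (the labels are illustrative):

```python
from sklearn.metrics import jaccard_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

# For binary targets this is |intersection| / |union| of the positive
# class: here 1 true positive over (1 tp + 0 fp + 1 fn).
score = jaccard_score(y_true, y_pred)
```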
- |Feature| Added :func:`metrics.pairwise.haversine_distances` which can be
accessed with `metric='haversine'` through :func:`metrics.pairwise_distances`
and estimators. (Haversine distance was previously available for nearest
neighbors calculation.) :pr:`12568` by :user:`Wei Xue <xuewei4d>`,
:user:`Emmanuel Arias <eamanu>` and `Joel Nothman`_.
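A minimal sketch of the new function; haversine distance expects latitude/longitude in radians, and the two cities here are illustrative:

```python
import numpy as np
from sklearn.metrics.pairwise import haversine_distances

# Approximate (latitude, longitude) of Berlin and Paris, in radians.
berlin = np.radians([52.5200, 13.4050])
paris = np.radians([48.8566, 2.3522])

D = haversine_distances([berlin, paris])
km = D * 6371.0  # multiply by the Earth's radius to get kilometres
```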
- |Efficiency| Faster :func:`metrics.pairwise_distances` with `n_jobs`
> 1 by using a thread-based backend, instead of process-based backends.
:pr:`8216` by :user:`Pierre Glaser <pierreglaser>` and
:user:`Romuald Menuet <zanospi>`
- |Efficiency| The pairwise manhattan distances with sparse input now uses the
BLAS shipped with scipy instead of the bundled BLAS. :pr:`12732` by
:user:`Jérémie du Boisberranger <jeremiedbb>`
- |Enhancement| Use label `accuracy` instead of `micro-average` on
:func:`metrics.classification_report` to avoid confusion. `micro-average` is
only shown for multi-label or multi-class with a subset of classes because
it is otherwise identical to accuracy.
:pr:`12334` by :user:`Emmanuel Arias <eamanu>`,
`Joel Nothman`_ and `Andreas Müller`_
- |Enhancement| Added `beta` parameter to
:func:`metrics.homogeneity_completeness_v_measure` and
:func:`metrics.v_measure_score` to configure the
tradeoff between homogeneity and completeness.
:pr:`13607` by :user:`Stephane Couvreur <scouvreur>` and
:user:`Ivan Sanchez <ivsanro1>`.
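A minimal sketch of the ``beta`` parameter described above, on an over-split clustering (the labelings are illustrative):

```python
from sklearn.metrics import homogeneity_completeness_v_measure

labels_true = [0, 0, 1, 1]
labels_pred = [0, 0, 1, 2]  # over-split: homogeneous but incomplete

# beta < 1 weights homogeneity more strongly; beta > 1 favours
# completeness. beta=1 recovers the usual harmonic mean.
h, c, v = homogeneity_completeness_v_measure(labels_true, labels_pred,
                                             beta=0.5)
```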
- |Fix| The metric :func:`metrics.r2_score` is degenerate with a single sample
and now it returns NaN and raises :class:`exceptions.UndefinedMetricWarning`.
:pr:`12855` by :user:`Pawel Sendyk <psendyk>`.
- |Fix| Fixed a bug where :func:`metrics.brier_score_loss` will sometimes
return incorrect result when there's only one class in ``y_true``.
:pr:`13628` by :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Fixed a bug in :func:`metrics.label_ranking_average_precision_score`
where sample_weight wasn't taken into account for samples with degenerate
labels.
:pr:`13447` by :user:`Dan Ellis <dpwe>`.
- |API| The parameter ``labels`` in :func:`metrics.hamming_loss` is deprecated
in version 0.21 and will be removed in version 0.23. :pr:`10580` by
:user:`Reshama Shaikh <reshamas>` and :user:`Sandra Mitrovic <SandraMNE>`.
- |Fix| The function :func:`metrics.pairwise.euclidean_distances`, and
therefore several estimators with ``metric='euclidean'``, suffered from
numerical precision issues with ``float32`` features. Precision has been
increased at the cost of a small drop of performance. :pr:`13554` by
:user:`Celelibi` and :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| `metrics.jaccard_similarity_score` is deprecated in favour of
the more consistent :func:`metrics.jaccard_score`. The former behavior for
binary and multiclass targets is broken.
:pr:`13151` by `Joel Nothman`_.
:mod:`sklearn.mixture`
......................
- |Fix| Fixed a bug in `mixture.BaseMixture` and therefore on estimators
based on it, i.e. :class:`mixture.GaussianMixture` and
:class:`mixture.BayesianGaussianMixture`, where ``fit_predict`` and
``fit.predict`` were not equivalent. :pr:`13142` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.model_selection`
..............................
- |Feature| Classes :class:`~model_selection.GridSearchCV` and
:class:`~model_selection.RandomizedSearchCV` now allow for refit=callable
to add flexibility in identifying the best estimator.
See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_refit_callable.py`.
:pr:`11354` by :user:`Wenhao Zhang <[email protected]>`,
`Joel Nothman`_ and :user:`Adrin Jalali <adrinjalali>`.
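A minimal sketch of ``refit=callable``; the callable receives ``cv_results_`` and must return the index of the chosen candidate. The "simplest model within one standard deviation" rule below is a hypothetical example, not a built-in:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def pick_simplest(cv_results):
    # Hypothetical rule: smallest C whose mean test score is within one
    # standard deviation of the best mean score (candidates are listed
    # in order of increasing C below).
    means = np.asarray(cv_results["mean_test_score"])
    stds = np.asarray(cv_results["std_test_score"])
    best = means.argmax()
    ok = np.flatnonzero(means >= means[best] - stds[best])
    return int(ok[0])

X, y = make_classification(n_samples=120, random_state=0)
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0]},
                      refit=pick_simplest, cv=3)
search.fit(X, y)
```

Note that with a callable ``refit``, attributes tied to a scalar best score (such as ``best_score_``) are not available; ``best_index_`` and ``best_estimator_`` are.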
- |Enhancement| Classes :class:`~model_selection.GridSearchCV`,
:class:`~model_selection.RandomizedSearchCV`, and methods
:func:`~model_selection.cross_val_score`,
:func:`~model_selection.cross_val_predict`,
:func:`~model_selection.cross_validate`, now print train scores when
`return_train_scores` is True and `verbose` > 2. For
:func:`~model_selection.learning_curve`, and
:func:`~model_selection.validation_curve` only the latter is required.
:pr:`12613` and :pr:`12669` by :user:`Marc Torrellas <marctorrellas>`.
- |Enhancement| Some :term:`CV splitter` classes and
`model_selection.train_test_split` now raise ``ValueError`` when the
resulting training set is empty.
:pr:`12861` by :user:`Nicolas Hug <NicolasHug>`.
- |Fix| Fixed a bug where :class:`model_selection.StratifiedKFold`
shuffles each class's samples with the same ``random_state``,
making ``shuffle=True`` ineffective.
:pr:`13124` by :user:`Hanmin Qin <qinhanmin2014>`.
- |Fix| Added ability for :func:`model_selection.cross_val_predict` to handle
multi-label (and multioutput-multiclass) targets with ``predict_proba``-type
methods. :pr:`8773` by :user:`Stephen Hoover <stephen-hoover>`.
- |Fix| Fixed an issue in :func:`~model_selection.cross_val_predict` where
`method="predict_proba"` always returned `0.0` when one of the classes was
excluded in a cross-validation fold.
:pr:`13366` by :user:`Guillaume Fournier <gfournier>`
:mod:`sklearn.multiclass`
.........................
- |Fix| Fixed an issue in :func:`multiclass.OneVsOneClassifier.decision_function`
where the decision_function value of a given sample was different depending on
whether the decision_function was evaluated on the sample alone or on a batch
containing this same sample due to the scaling used in decision_function.
:pr:`10440` by :user:`Jonathan Ohayon <Johayon>`.
:mod:`sklearn.multioutput`
..........................
- |Fix| Fixed a bug in :class:`multioutput.MultiOutputClassifier` where the
`predict_proba` method incorrectly checked for `predict_proba` attribute in
the estimator object.
:pr:`12222` by :user:`Rebekah Kim <rebekahkim>`
:mod:`sklearn.neighbors`
........................
- |MajorFeature| Added :class:`neighbors.NeighborhoodComponentsAnalysis` for
metric learning, which implements the Neighborhood Components Analysis
algorithm. :pr:`10058` by :user:`William de Vazelhes <wdevazelhes>` and
:user:`John Chiotellis <johny-c>`.
- |API| Methods in :class:`neighbors.NearestNeighbors` :
:func:`~neighbors.NearestNeighbors.kneighbors`,
:func:`~neighbors.NearestNeighbors.radius_neighbors`,
:func:`~neighbors.NearestNeighbors.kneighbors_graph`,
:func:`~neighbors.NearestNeighbors.radius_neighbors_graph`
now raise ``NotFittedError``, rather than ``AttributeError``,
when called before ``fit`` :pr:`12279` by :user:`Krishna Sangeeth
<whiletruelearn>`.
:mod:`sklearn.neural_network`
.............................
- |Fix| Fixed a bug in :class:`neural_network.MLPClassifier` and
:class:`neural_network.MLPRegressor` where the option :code:`shuffle=False`
was being ignored. :pr:`12582` by :user:`Sam Waterbury <samwaterbury>`.
- |Fix| Fixed a bug in :class:`neural_network.MLPClassifier` where
validation sets for early stopping were not sampled with stratification. In
the multilabel case however, splits are still not stratified.
:pr:`13164` by :user:`Nicolas Hug <NicolasHug>`.
:mod:`sklearn.pipeline`
.......................
- |Feature| :class:`pipeline.Pipeline` can now use indexing notation (e.g.
``my_pipeline[0:-1]``) to extract a subsequence of steps as another Pipeline
instance. A Pipeline can also be indexed directly to extract a particular
step (e.g. ``my_pipeline['svc']``), rather than accessing ``named_steps``.
:pr:`2568` by `Joel Nothman`_.
- |Feature| Added optional parameter ``verbose`` in :class:`pipeline.Pipeline`,
:class:`compose.ColumnTransformer` and :class:`pipeline.FeatureUnion`
and corresponding ``make_`` helpers for showing progress and timing of
each step. :pr:`11364` by :user:`Baze Petrushev <petrushev>`,
:user:`Karan Desai <karandesai-96>`, `Joel Nothman`_, and
:user:`Thomas Fan <thomasjpfan>`.
- |Enhancement| :class:`pipeline.Pipeline` now supports using ``'passthrough'``
as a transformer, with the same effect as ``None``.
:pr:`11144` by :user:`Thomas Fan <thomasjpfan>`.
- |Enhancement| :class:`pipeline.Pipeline` implements ``__len__`` and
therefore ``len(pipeline)`` returns the number of steps in the pipeline.
:pr:`13439` by :user:`Lakshya KD <LakshKD>`.
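A minimal sketch of the new indexing and ``__len__`` support described above (the step names are illustrative):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC())])

sub = pipe[:-1]      # a new Pipeline containing every step but the last
clf = pipe["svc"]    # a single step, equivalent to pipe.named_steps["svc"]
n_steps = len(pipe)  # __len__ reports the number of steps
```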
:mod:`sklearn.preprocessing`
............................
- |Feature| :class:`preprocessing.OneHotEncoder` now supports dropping one
feature per category with a new drop parameter. :pr:`12908` by
:user:`Drew Johnston <drewmjohnston>`.
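A minimal sketch of the ``drop`` parameter described above (the categories are illustrative):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([["a"], ["b"], ["a"]])

# drop='first' removes the first category of each feature, avoiding the
# collinearity that a full one-hot expansion introduces.
enc = OneHotEncoder(drop="first")
Xt = enc.fit_transform(X).toarray()
```

With two categories this leaves a single column encoding ``"b"``, which is what linear models with an intercept usually want.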
- |Efficiency| :class:`preprocessing.OneHotEncoder` and
:class:`preprocessing.OrdinalEncoder` now handle pandas DataFrames more
efficiently. :pr:`13253` by :user:`maikia`.
- |Efficiency| Make :class:`preprocessing.MultiLabelBinarizer` cache class
mappings instead of calculating it every time on the fly.
:pr:`12116` by :user:`Ekaterina Krivich <kiote>` and `Joel Nothman`_.
- |Efficiency| :class:`preprocessing.PolynomialFeatures` now supports
compressed sparse row (CSR) matrices as input for degrees 2 and 3. This is
typically much faster than the dense case as it scales with matrix density
and expansion degree (on the order of density^degree), and is much, much
faster than the compressed sparse column (CSC) case.
:pr:`12197` by :user:`Andrew Nystrom <awnystrom>`.
- |Efficiency| Speed improvement in :class:`preprocessing.PolynomialFeatures`,
in the dense case. Also added a new parameter ``order`` which controls output
order for further speed performances. :pr:`12251` by `Tom Dupre la Tour`_.
- |Fix| Fixed the calculation overflow when using a float16 dtype with
:class:`preprocessing.StandardScaler`.
:pr:`13007` by :user:`Raffaello Baluyot <baluyotraf>`
- |Fix| Fixed a bug in :class:`preprocessing.QuantileTransformer` and
:func:`preprocessing.quantile_transform` to force n_quantiles to be at most
equal to n_samples. Values of n_quantiles larger than n_samples were either
useless or resulted in a wrong approximation of the cumulative distribution
function estimator. :pr:`13333` by :user:`Albert Thomas <albertcthomas>`.
- |API| The default value of `copy` in :func:`preprocessing.quantile_transform`
will change from False to True in 0.23 in order to make it more consistent
with the default `copy` values of other functions in
:mod:`sklearn.preprocessing` and prevent unexpected side effects by modifying
the value of `X` inplace.
:pr:`13459` by :user:`Hunter McGushion <HunterMcGushion>`.
:mod:`sklearn.svm`
..................
- |Fix| Fixed an issue in :func:`svm.SVC.decision_function` when
``decision_function_shape='ovr'``. The decision_function value of a given
sample was different depending on whether the decision_function was evaluated
on the sample alone or on a batch containing this same sample due to the
scaling used in decision_function.
:pr:`10440` by :user:`Jonathan Ohayon <Johayon>`.
:mod:`sklearn.tree`
...................
- |Feature| Decision Trees can now be plotted with matplotlib using
`tree.plot_tree` without relying on the ``dot`` library,
removing a hard-to-install dependency. :pr:`8508` by `Andreas Müller`_.
- |Feature| Decision Trees can now be exported in a human readable
textual format using :func:`tree.export_text`.
:pr:`6261` by :user:`Giuseppe Vettigli <JustGlowing>`.
- |Feature| ``get_n_leaves()`` and ``get_depth()`` have been added to
`tree.BaseDecisionTree` and consequently all estimators based
on it, including :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier`,
and :class:`tree.ExtraTreeRegressor`.
:pr:`12300` by :user:`Adrin Jalali <adrinjalali>`.
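A minimal sketch combining the text export and the new introspection helpers from the entries above, on the iris dataset (the feature names are shortened for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable text export, no `dot` dependency required.
report = export_text(clf, feature_names=["sepal length", "sepal width",
                                         "petal length", "petal width"])
depth, n_leaves = clf.get_depth(), clf.get_n_leaves()
```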
- |Fix| Trees and forests did not previously `predict` multi-output
classification targets with string labels, despite accepting them in `fit`.
:pr:`11458` by :user:`Mitar Milutinovic <mitar>`.
- |Fix| Fixed an issue with `tree.BaseDecisionTree`
and consequently all estimators based
on it, including :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier`,
and :class:`tree.ExtraTreeRegressor`, where they used to exceed the given
``max_depth`` by 1 while expanding the tree if ``max_leaf_nodes`` and
``max_depth`` were both specified by the user. Please note that this also
affects all ensemble methods using decision trees.
:pr:`12344` by :user:`Adrin Jalali <adrinjalali>`.
:mod:`sklearn.utils`
....................
- |Feature| :func:`utils.resample` now accepts a ``stratify`` parameter for
sampling according to class distributions. :pr:`13549` by :user:`Nicolas
Hug <NicolasHug>`.
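A minimal sketch of the new ``stratify`` parameter (the data are illustrative):

```python
from sklearn.utils import resample

X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 1, 1, 1]

# stratify=y keeps the class proportions of y in the resampled output:
# drawing 4 samples from a 50/50 split yields 2 of each class.
X_res, y_res = resample(X, y, n_samples=4, stratify=y, random_state=0)
```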
- |API| Deprecated ``warn_on_dtype`` parameter from :func:`utils.check_array`
and :func:`utils.check_X_y`. Added explicit warning for dtype conversion
in `check_pairwise_arrays` if the ``metric`` being passed is a
pairwise boolean metric.
:pr:`13382` by :user:`Prathmesh Savale <praths007>`.
Multiple modules
................
- |MajorFeature| The `__repr__()` method of all estimators (used when calling
`print(estimator)`) has been entirely re-written, building on Python's
pretty printing standard library. All parameters are printed by default,
but this can be altered with the ``print_changed_only`` option in
:func:`sklearn.set_config`. :pr:`11705` by :user:`Nicolas Hug
<NicolasHug>`.
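A minimal sketch of the ``print_changed_only`` option described above (note that in later releases its default value changed, so both settings are shown explicitly):

```python
import sklearn
from sklearn.linear_model import LogisticRegression

sklearn.set_config(print_changed_only=True)
short = repr(LogisticRegression(C=10.0))   # only non-default parameters

sklearn.set_config(print_changed_only=False)
full = repr(LogisticRegression(C=10.0))    # every parameter spelled out
```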
- |MajorFeature| Add estimators tags: these are annotations of estimators
that allow programmatic inspection of their capabilities, such as sparse
matrix support, supported output types and supported methods. Estimator
tags also determine the tests that are run on an estimator when
`check_estimator` is called. Read more in the :ref:`User Guide
<estimator_tags>`. :pr:`8022` by :user:`Andreas Müller <amueller>`.
- |Efficiency| Memory copies are avoided when casting arrays to a different
dtype in multiple estimators. :pr:`11973` by :user:`Roman Yurchak
<rth>`.
- |Fix| Fixed a bug in the implementation of the `our_rand_r`
helper function that was not behaving consistently across platforms.
:pr:`13422` by :user:`Madhura Parikh <jdnc>` and
:user:`Clément Doumouro <ClemDoum>`.
Miscellaneous
.............
- |Enhancement| Joblib is no longer vendored in scikit-learn, and becomes a
dependency. Minimal supported version is joblib 0.11, however using
version >= 0.13 is strongly recommended.
:pr:`13531` by :user:`Roman Yurchak <rth>`.
Changes to estimator checks
---------------------------
These changes mostly affect library developers.
- Add ``check_fit_idempotent`` to
:func:`~utils.estimator_checks.check_estimator`, which checks that
when `fit` is called twice with the same data, the output of
`predict`, `predict_proba`, `transform`, and `decision_function` does not
change. :pr:`12328` by :user:`Nicolas Hug <NicolasHug>`
- Many checks can now be disabled or configured with :ref:`estimator_tags`.
:pr:`8022` by :user:`Andreas Müller <amueller>`.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 0.20, including:
adanhawth, Aditya Vyas, Adrin Jalali, Agamemnon Krasoulis, Albert Thomas,
Alberto Torres, Alexandre Gramfort, amourav, Andrea Navarrete, Andreas Mueller,
Andrew Nystrom, assiaben, Aurélien Bellet, Bartosz Michałowski, Bartosz
Telenczuk, bauks, BenjaStudio, bertrandhaut, Bharat Raghunathan, brentfagan,
Bryan Woods, Cat Chenal, Cheuk Ting Ho, Chris Choe, Christos Aridas, Clément
Doumouro, Cole Smith, Connossor, Corey Levinson, Dan Ellis, Dan Stine, Danylo
Baibak, daten-kieker, Denis Kataev, Didi Bar-Zev, Dillon Gardner, Dmitry Mottl,
Dmitry Vukolov, Dougal J. Sutherland, Dowon, drewmjohnston, Dror Atariah,
Edward J Brown, Ekaterina Krivich, Elizabeth Sander, Emmanuel Arias, Eric
Chang, Eric Larson, Erich Schubert, esvhd, Falak, Feda Curic, Federico Caselli,
Frank Hoang, Fibinse Xavier, Finn O'Shea, Gabriel Marzinotto, Gabriel Vacaliuc,
Gabriele Calvo, Gael Varoquaux, GauravAhlawat, Giuseppe Vettigli, Greg Gandenberger,
Guillaume Fournier, Guillaume Lemaitre, Gustavo De Mari Pereira, Hanmin Qin,
haroldfox, hhu-luqi, Hunter McGushion, Ian Sanders, JackLangerman, Jacopo
Notarstefano, jakirkham, James Bourbeau, Jan Koch, Jan S, janvanrijn, Jarrod
Millman, jdethurens, jeremiedbb, JF, joaak, Joan Massich, Joel Nothman,
Jonathan Ohayon, Joris Van den Bossche, josephsalmon, Jérémie Méhault, Katrin
Leinweber, ken, kms15, Koen, Kossori Aruku, Krishna Sangeeth, Kuai Yu, Kulbear,
Kushal Chauhan, Kyle Jackson, Lakshya KD, Leandro Hermida, Lee Yi Jie Joel,
Lily Xiong, Lisa Sarah Thomas, Loic Esteve, louib, luk-f-a, maikia, mail-liam,
Manimaran, Manuel López-Ibáñez, Marc Torrellas, Marco Gaido, Marco Gorelli,
MarcoGorelli, marineLM, Mark Hannel, Martin Gubri, Masstran, mathurinm, Matthew
Roeschke, Max Copeland, melsyt, mferrari3, Mickaël Schoentgen, Ming Li, Mitar,
Mohammad Aftab, Mohammed AbdelAal, Mohammed Ibraheem, Muhammad Hassaan Rafique,
mwestt, Naoya Iijima, Nicholas Smith, Nicolas Goix, Nicolas Hug, Nikolay
Shebanov, Oleksandr Pavlyk, Oliver Rausch, Olivier Grisel, Orestis, Osman, Owen
Flanagan, Paul Paczuski, Pavel Soriano, pavlos kallis, Pawel Sendyk, peay,
Peter, Peter Cock, Peter Hausamann, Peter Marko, Pierre Glaser, pierretallotte,
Pim de Haan, Piotr Szymański, Prabakaran Kumaresshan, Pradeep Reddy Raamana,
Prathmesh Savale, Pulkit Maloo, Quentin Batista, Radostin Stoyanov, Raf
Baluyot, Rajdeep Dua, Ramil Nugmanov, Raúl García Calvo, Rebekah Kim, Reshama
Shaikh, Rohan Lekhwani, Rohan Singh, Rohan Varma, Rohit Kapoor, Roman
Feldbauer, Roman Yurchak, Romuald M, Roopam Sharma, Ryan, Rüdiger Busche, Sam
Waterbury, Samuel O. Ronsin, SandroCasagrande, Scott Cole, Scott Lowe,
Sebastian Raschka, Shangwu Yao, Shivam Kotwalia, Shiyu Duan, smarie, Sriharsha
Hatwar, Stephen Hoover, Stephen Tierney, Stéphane Couvreur, surgan12,
SylvainLan, TakingItCasual, Tashay Green, thibsej, Thomas Fan, Thomas J Fan,
Thomas Moreau, Tom Dupré la Tour, Tommy, Tulio Casagrande, Umar Farouk Umar,
Utkarsh Upadhyay, Vinayak Mehta, Vishaal Kapoor, Vivek Kumar, Vlad Niculae,
vqean3, Wenhao Zhang, William de Vazelhes, xhan, Xing Han Lu, xinyuliu12,
Yaroslav Halchenko, Zach Griffith, Zach Miller, Zayd Hammoudeh, Zhuyi Xue,
Zijie (ZJ) Poh, ^__^ | scikit-learn | include contributors rst currentmodule sklearn Version 0 21 include changelog legend inc changes 0 21 3 Version 0 21 3 July 30 2019 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures The v0 20 0 release notes failed to mention a backwards incompatibility in func metrics make scorer when needs proba True and y true is binary Now the scorer function is supposed to accept a 1D y pred i e probability of the positive class shape n samples instead of a 2D y pred i e shape n samples 2 Changelog mod sklearn cluster Fix Fixed a bug in class cluster KMeans where computation with init random was single threaded for n jobs 1 or n jobs 1 pr 12955 by user Prabakaran Kumaresshan nixphix Fix Fixed a bug in class cluster OPTICS where users were unable to pass float min samples and min cluster size pr 14496 by user Fabian Klopfer someusername1 and user Hanmin Qin qinhanmin2014 Fix Fixed a bug in class cluster KMeans where KMeans initialisation could rarely result in an IndexError issue 11756 by Joel Nothman mod sklearn compose Fix Fixed an issue in class compose ColumnTransformer where using DataFrames whose column order differs between func fit and func transform could lead to silently passing incorrect columns to the remainder transformer pr 14237 by Andreas Schuderer schuderer mod sklearn datasets Fix func datasets fetch california housing func datasets fetch covtype func datasets fetch kddcup99 func datasets fetch olivetti faces func datasets fetch rcv1 and func datasets fetch species distributions try to persist the previously cache using the new joblib if the cached data was persisted using the deprecated sklearn externals joblib This behavior is set to be deprecated and removed in v0 23 pr 14197 by Adrin Jalali mod sklearn 
ensemble Fix Fix zero division error in class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor pr 14024 by Nicolas Hug NicolasHug mod sklearn impute Fix Fixed a bug in class impute SimpleImputer and class impute IterativeImputer so that no errors are thrown when there are missing values in training data pr 13974 by Frank Hoang fhoang7 mod sklearn inspection Fix Fixed a bug in inspection plot partial dependence where target parameter was not being taken into account for multiclass problems pr 14393 by user Guillem G Subies guillemgsubies mod sklearn linear model Fix Fixed a bug in class linear model LogisticRegressionCV where refit False would fail depending on the multiclass and penalty parameters regression introduced in 0 21 pr 14087 by Nicolas Hug Fix Compatibility fix for class linear model ARDRegression and Scipy 1 3 0 Adapts to upstream changes to the default pinvh cutoff threshold which otherwise results in poor accuracy in some cases pr 14067 by user Tim Staley timstaley mod sklearn neighbors Fix Fixed a bug in class neighbors NeighborhoodComponentsAnalysis where the validation of initial parameters n components max iter and tol required too strict types pr 14092 by user J r mie du Boisberranger jeremiedbb mod sklearn tree Fix Fixed bug in func tree export text when the tree has one feature and a single feature name is passed in pr 14053 by Thomas Fan Fix Fixed an issue with func tree plot tree where it displayed entropy calculations even for gini criterion in DecisionTreeClassifiers pr 13947 by user Frank Hoang fhoang7 changes 0 21 2 Version 0 21 2 24 May 2019 Changelog mod sklearn decomposition Fix Fixed a bug in class cross decomposition CCA improving numerical stability when Y is close to zero pr 13903 by Thomas Fan mod sklearn metrics Fix Fixed a bug in func metrics pairwise euclidean distances where a part of the distance matrix was left un instanciated for sufficiently large float32 datasets regression 
introduced in 0 21 pr 13910 by user J r mie du Boisberranger jeremiedbb mod sklearn preprocessing Fix Fixed a bug in class preprocessing OneHotEncoder where the new drop parameter was not reflected in get feature names pr 13894 by user James Myatt jamesmyatt sklearn utils sparsefuncs Fix Fixed a bug where min max axis would fail on 32 bit systems for certain large inputs This affects class preprocessing MaxAbsScaler func preprocessing normalize and class preprocessing LabelBinarizer pr 13741 by user Roddy MacSween rlms changes 0 21 1 Version 0 21 1 17 May 2019 This is a bug fix release to primarily resolve some packaging issues in version 0 21 0 It also includes minor documentation improvements and some bug fixes Changelog mod sklearn inspection Fix Fixed a bug in func inspection partial dependence to only check classifier and not regressor for the multiclass multioutput case pr 14309 by user Guillaume Lemaitre glemaitre mod sklearn metrics Fix Fixed a bug in class metrics pairwise distances where it would raise AttributeError for boolean metrics when X had a boolean dtype and Y None issue 13864 by user Paresh Mathur rick2047 Fix Fixed two bugs in class metrics pairwise distances when n jobs 1 First it used to return a distance matrix with same dtype as input even for integer dtype Then the diagonal was not zeros for euclidean metric when Y is X issue 13877 by user J r mie du Boisberranger jeremiedbb mod sklearn neighbors Fix Fixed a bug in class neighbors KernelDensity which could not be restored from a pickle if sample weight had been used issue 13772 by user Aditya Vyas aditya1702 changes 0 21 Version 0 21 0 May 2019 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures class discriminant analysis LinearDiscriminantAnalysis for multiclass classification 
Fix class discriminant analysis LinearDiscriminantAnalysis with eigen solver Fix class linear model BayesianRidge Fix Decision trees and derived ensembles when both max depth and max leaf nodes are set Fix class linear model LogisticRegression and class linear model LogisticRegressionCV with saga solver Fix class ensemble GradientBoostingClassifier Fix class sklearn feature extraction text HashingVectorizer class sklearn feature extraction text TfidfVectorizer and class sklearn feature extraction text CountVectorizer Fix class neural network MLPClassifier Fix func svm SVC decision function and func multiclass OneVsOneClassifier decision function Fix class linear model SGDClassifier and any derived classifiers Fix Any model using the linear model sag sag solver function with a 0 seed including class linear model LogisticRegression class linear model LogisticRegressionCV class linear model Ridge and class linear model RidgeCV with sag solver Fix class linear model RidgeCV when using leave one out cross validation with sparse inputs Fix Details are listed in the changelog below While we are trying to better inform users by providing this information we cannot assure that this list is complete Known Major Bugs The default max iter for class linear model LogisticRegression is too small for many solvers given the default tol In particular we accidentally changed the default max iter for the liblinear solver from 1000 to 100 iterations in pr 3591 released in version 0 16 In a future release we hope to choose better default max iter and tol heuristically depending on the solver see pr 13317 Changelog Support for Python 3 4 and below has been officially dropped Entries should be grouped by module in alphabetic order and prefixed with one of the labels MajorFeature Feature Efficiency Enhancement Fix or API see whats new rst for descriptions Entries should be ordered by those labels e g Fix after Efficiency Changes not specific to a module should be listed under Multiple 
Modules or Miscellaneous Entries should end with pr 123456 by user Joe Bloggs joeongithub where 123456 is the pull request number not the issue number mod sklearn base API The R2 score used when calling score on a regressor will use multioutput uniform average from version 0 23 to keep consistent with func metrics r2 score This will influence the score method of all the multioutput regressors except for class multioutput MultiOutputRegressor pr 13157 by user Hanmin Qin qinhanmin2014 mod sklearn calibration Enhancement Added support to bin the data passed into class calibration calibration curve by quantiles instead of uniformly between 0 and 1 pr 13086 by user Scott Cole srcole Enhancement Allow n dimensional arrays as input for calibration CalibratedClassifierCV pr 13485 by user William de Vazelhes wdevazelhes mod sklearn cluster MajorFeature A new clustering algorithm class cluster OPTICS an algorithm related to class cluster DBSCAN that has hyperparameters easier to set and that scales better by user Shane espg Adrin Jalali user Erich Schubert kno10 Hanmin Qin and user Assia Benbihi assiaben Fix Fixed a bug where class cluster Birch could occasionally raise an AttributeError pr 13651 by Joel Nothman Fix Fixed a bug in class cluster KMeans where empty clusters weren t correctly relocated when using sample weights pr 13486 by user J r mie du Boisberranger jeremiedbb API The n components attribute in class cluster AgglomerativeClustering and class cluster FeatureAgglomeration has been renamed to n connected components pr 13427 by user Stephane Couvreur scouvreur Enhancement class cluster AgglomerativeClustering and class cluster FeatureAgglomeration now accept a distance threshold parameter which can be used to find the clusters instead of n clusters issue 9069 by user Vathsala Achar VathsalaAchar and Adrin Jalali mod sklearn compose API class compose ColumnTransformer is no longer an experimental feature pr 13835 by user Hanmin Qin qinhanmin2014 mod sklearn 
datasets Fix Added support for 64 bit group IDs and pointers in SVMLight files pr 10727 by user Bryan K Woods bryan woods Fix func datasets load sample images returns images with a deterministic order pr 13250 by user Thomas Fan thomasjpfan mod sklearn decomposition Enhancement class decomposition KernelPCA now has deterministic output resolved sign ambiguity in eigenvalue decomposition of the kernel matrix pr 13241 by user Aur lien Bellet bellet Fix Fixed a bug in class decomposition KernelPCA fit transform now produces the correct output the same as fit transform in case of non removed zero eigenvalues remove zero eig False fit inverse transform was also accelerated by using the same trick as fit transform to compute the transform of X pr 12143 by user Sylvain Mari smarie Fix Fixed a bug in class decomposition NMF where init nndsvd init nndsvda and init nndsvdar are allowed when n components n features instead of n components min n samples n features pr 11650 by user Hossein Pourbozorg hossein pourbozorg and user Zijie ZJ Poh zjpoh API The default value of the code init argument in func decomposition non negative factorization will change from code random to code None in version 0 23 to make it consistent with class decomposition NMF A FutureWarning is raised when the default value is used pr 12988 by user Zijie ZJ Poh zjpoh mod sklearn discriminant analysis Enhancement class discriminant analysis LinearDiscriminantAnalysis now preserves float32 and float64 dtypes pr 8769 and pr 11000 by user Thibault Sejourne thibsej Fix A ChangedBehaviourWarning is now raised when class discriminant analysis LinearDiscriminantAnalysis is given as parameter n components min n features n classes 1 and n components is changed to min n features n classes 1 if so Previously the change was made but silently pr 11526 by user William de Vazelhes wdevazelhes Fix Fixed a bug in class discriminant analysis LinearDiscriminantAnalysis where the predicted probabilities would be incorrectly 
computed in the multiclass case pr 6848 by user Agamemnon Krasoulis agamemnonc and Guillaume Lemaitre glemaitre Fix Fixed a bug in class discriminant analysis LinearDiscriminantAnalysis where the predicted probabilities would be incorrectly computed with eigen solver pr 11727 by user Agamemnon Krasoulis agamemnonc mod sklearn dummy Fix Fixed a bug in class dummy DummyClassifier where the predict proba method was returning int32 array instead of float64 for the stratified strategy pr 13266 by user Christos Aridas chkoar Fix Fixed a bug in class dummy DummyClassifier where it was throwing a dimension mismatch error in prediction time if a column vector y with shape n 1 was given at fit time pr 13545 by user Nick Sorros nsorros and Adrin Jalali mod sklearn ensemble MajorFeature Add two new implementations of gradient boosting trees class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor The implementation of these estimators is inspired by LightGBM https github com Microsoft LightGBM and can be orders of magnitude faster than class ensemble GradientBoostingRegressor and class ensemble GradientBoostingClassifier when the number of samples is larger than tens of thousands of samples The API of these new estimators is slightly different and some of the features from class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor are not yet supported These new estimators are experimental which means that their results or their API might change without any deprecation cycle To use them you need to explicitly import enable hist gradient boosting explicitly require this experimental feature from sklearn experimental import enable hist gradient boosting noqa now you can import normally from sklearn ensemble from sklearn ensemble import HistGradientBoostingClassifier note Update since version 1 0 these estimators are not experimental anymore and you don t need to use from sklearn experimental import enable hist 
gradient boosting pr 12807 by user Nicolas Hug NicolasHug Feature Add class ensemble VotingRegressor which provides an equivalent of class ensemble VotingClassifier for regression problems pr 12513 by user Ramil Nugmanov stsouko and user Mohamed Ali Jamaoui mohamed ali Efficiency Make class ensemble IsolationForest prefer threads over processes when running with n jobs 1 as the underlying decision tree fit calls do release the GIL This changes reduces memory usage and communication overhead pr 12543 by user Isaac Storch istorch and Olivier Grisel Efficiency Make class ensemble IsolationForest more memory efficient by avoiding keeping in memory each tree prediction pr 13260 by Nicolas Goix Efficiency class ensemble IsolationForest now uses chunks of data at prediction step thus capping the memory usage pr 13283 by Nicolas Goix Efficiency class sklearn ensemble GradientBoostingClassifier and class sklearn ensemble GradientBoostingRegressor now keep the input y as float64 to avoid it being copied internally by trees pr 13524 by Adrin Jalali Enhancement Minimized the validation of X in class ensemble AdaBoostClassifier and class ensemble AdaBoostRegressor pr 13174 by user Christos Aridas chkoar Enhancement class ensemble IsolationForest now exposes warm start parameter allowing iterative addition of trees to an isolation forest pr 13496 by user Peter Marko petibear Fix The values of feature importances in all random forest based models i e class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble ExtraTreesClassifier class ensemble ExtraTreesRegressor class ensemble RandomTreesEmbedding class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor now sum up to 1 all the single node trees in feature importance calculation are ignored in case all trees have only one single node i e a root node feature importances will be an array of all zeros pr 13636 and pr 13620 by Adrin Jalali Fix Fixed a bug in class ensemble 
GradientBoostingClassifier and class ensemble GradientBoostingRegressor which didn t support scikit learn estimators as the initial estimator Also added support of initial estimator which does not support sample weights pr 12436 by user J r mie du Boisberranger jeremiedbb and pr 12983 by user Nicolas Hug NicolasHug Fix Fixed the output of the average path length computed in class ensemble IsolationForest when the input is either 0 1 or 2 pr 13251 by user Albert Thomas albertcthomas and user joshuakennethjones joshuakennethjones Fix Fixed a bug in class ensemble GradientBoostingClassifier where the gradients would be incorrectly computed in multiclass classification problems pr 12715 by user Nicolas Hug NicolasHug Fix Fixed a bug in class ensemble GradientBoostingClassifier where validation sets for early stopping were not sampled with stratification pr 13164 by user Nicolas Hug NicolasHug Fix Fixed a bug in class ensemble GradientBoostingClassifier where the default initial prediction of a multiclass classifier would predict the classes priors instead of the log of the priors pr 12983 by user Nicolas Hug NicolasHug Fix Fixed a bug in class ensemble RandomForestClassifier where the predict method would error for multiclass multioutput forests models if any targets were strings pr 12834 by user Elizabeth Sander elsander Fix Fixed a bug in ensemble gradient boosting LossFunction and ensemble gradient boosting LeastSquaresError where the default value of learning rate in update terminal regions is not consistent with the document and the caller functions Note however that directly using these loss functions is deprecated pr 6463 by user movelikeriver movelikeriver Fix ensemble partial dependence and consequently the new version func sklearn inspection partial dependence now takes sample weights into account for the partial dependence computation when the gradient boosting model has been trained with sample weights pr 13193 by user Samuel O Ronsin samronsin API ensemble 
partial dependence and ensemble plot partial dependence are now deprecated in favor of func inspection partial dependence sklearn inspection partial dependence and inspection plot partial dependence sklearn inspection plot partial dependence pr 12599 by user Trevor Stephens trevorstephens and user Nicolas Hug NicolasHug Fix class ensemble VotingClassifier and class ensemble VotingRegressor were failing during fit in one of the estimators was set to None and sample weight was not None pr 13779 by user Guillaume Lemaitre glemaitre API class ensemble VotingClassifier and class ensemble VotingRegressor accept drop to disable an estimator in addition to None to be consistent with other estimators i e class pipeline FeatureUnion and class compose ColumnTransformer pr 13780 by user Guillaume Lemaitre glemaitre sklearn externals API Deprecated externals six since we have dropped support for Python 2 7 pr 12916 by user Hanmin Qin qinhanmin2014 mod sklearn feature extraction Fix If input file or input filename and a callable is given as the analyzer class sklearn feature extraction text HashingVectorizer class sklearn feature extraction text TfidfVectorizer and class sklearn feature extraction text CountVectorizer now read the data from the file s and then pass it to the given analyzer instead of passing the file name s or the file object s to the analyzer pr 13641 by Adrin Jalali mod sklearn impute MajorFeature Added class impute IterativeImputer which is a strategy for imputing missing values by modeling each feature with missing values as a function of other features in a round robin fashion pr 8478 and pr 12177 by user Sergey Feldman sergeyf and user Ben Lawson benlawson The API of IterativeImputer is experimental and subject to change without any deprecation cycle To use them you need to explicitly import enable iterative imputer from sklearn experimental import enable iterative imputer noqa now you can import normally from sklearn impute from sklearn impute import 
IterativeImputer Feature The class impute SimpleImputer and class impute IterativeImputer have a new parameter add indicator which simply stacks a class impute MissingIndicator transform into the output of the imputer s transform That allows a predictive estimator to account for missingness pr 12583 pr 13601 by user Danylo Baibak DanilBaibak Fix In class impute MissingIndicator avoid implicit densification by raising an exception if input is sparse add missing values property is set to 0 pr 13240 by user Bartosz Telenczuk btel Fix Fixed two bugs in class impute MissingIndicator First when X is sparse all the non zero non missing values used to become explicit False in the transformed data Then when features missing only all features used to be kept if there were no missing values at all pr 13562 by user J r mie du Boisberranger jeremiedbb mod sklearn inspection new subpackage Feature Partial dependence plots inspection plot partial dependence are now supported for any regressor or classifier provided that they have a predict proba method pr 12599 by user Trevor Stephens trevorstephens and user Nicolas Hug NicolasHug mod sklearn isotonic Feature Allow different dtypes such as float32 in class isotonic IsotonicRegression pr 8769 by user Vlad Niculae vene mod sklearn linear model Enhancement class linear model Ridge now preserves float32 and float64 dtypes issue 8769 and issue 11000 by user Guillaume Lemaitre glemaitre and user Joan Massich massich Feature class linear model LogisticRegression and class linear model LogisticRegressionCV now support Elastic Net penalty with the saga solver pr 11646 by user Nicolas Hug NicolasHug Feature Added class linear model lars path gram which is class linear model lars path in the sufficient stats mode allowing users to compute class linear model lars path without providing X and y pr 11699 by user Kuai Yu yukuairoy Efficiency linear model make dataset now preserves float32 and float64 dtypes reducing memory consumption in 
stochastic gradient SAG and SAGA solvers pr 8769 and pr 11000 by user Nelle Varoquaux NelleV user Arthur Imbert Henley13 user Guillaume Lemaitre glemaitre and user Joan Massich massich Enhancement class linear model LogisticRegression now supports an unregularized objective when penalty none is passed This is equivalent to setting C np inf with l2 regularization Not supported by the liblinear solver pr 12860 by user Nicolas Hug NicolasHug Enhancement sparse cg solver in class linear model Ridge now supports fitting the intercept i e fit intercept True when inputs are sparse pr 13336 by user Bartosz Telenczuk btel Enhancement The coordinate descent solver used in Lasso ElasticNet etc now issues a ConvergenceWarning when it completes without meeting the desired toleranbce pr 11754 and pr 13397 by user Brent Fagan brentfagan and user Adrin Jalali adrinjalali Fix Fixed a bug in class linear model LogisticRegression and class linear model LogisticRegressionCV with saga solver where the weights would not be correctly updated in some cases pr 11646 by Tom Dupre la Tour Fix Fixed the posterior mean posterior covariance and returned regularization parameters in class linear model BayesianRidge The posterior mean and the posterior covariance were not the ones computed with the last update of the regularization parameters and the returned regularization parameters were not the final ones Also fixed the formula of the log marginal likelihood used to compute the score when compute score True pr 12174 by user Albert Thomas albertcthomas Fix Fixed a bug in class linear model LassoLarsIC where user input copy X False at instance creation would be overridden by default parameter value copy X True in fit pr 12972 by user Lucio Fernandez Arjona luk f a Fix Fixed a bug in class linear model LinearRegression that was not returning the same coeffecients and intercepts with fit intercept True in sparse and dense case pr 13279 by Alexandre Gramfort Fix Fixed a bug in class linear model 
HuberRegressor that was broken when X was of dtype bool pr 13328 by Alexandre Gramfort Fix Fixed a performance issue of saga and sag solvers when called in a class joblib Parallel setting with n jobs 1 and backend threading causing them to perform worse than in the sequential case pr 13389 by user Pierre Glaser pierreglaser Fix Fixed a bug in linear model stochastic gradient BaseSGDClassifier that was not deterministic when trained in a multi class setting on several threads pr 13422 by user Cl ment Doumouro ClemDoum Fix Fixed bug in func linear model ridge regression class linear model Ridge and class linear model RidgeClassifier that caused unhandled exception for arguments return intercept True and solver auto default or any other solver different from sag pr 13363 by user Bartosz Telenczuk btel Fix func linear model ridge regression will now raise an exception if return intercept True and solver is different from sag Previously only warning was issued pr 13363 by user Bartosz Telenczuk btel Fix func linear model ridge regression will choose sparse cg solver for sparse inputs when solver auto and sample weight is provided previously cholesky solver was selected pr 13363 by user Bartosz Telenczuk btel API The use of class linear model lars path with X None while passing Gram is deprecated in version 0 21 and will be removed in version 0 23 Use class linear model lars path gram instead pr 11699 by user Kuai Yu yukuairoy API linear model logistic regression path is deprecated in version 0 21 and will be removed in version 0 23 pr 12821 by user Nicolas Hug NicolasHug Fix class linear model RidgeCV with leave one out cross validation now correctly fits an intercept when fit intercept True and the design matrix is sparse issue 13350 by user J r me Dock s jeromedockes mod sklearn manifold Efficiency Make func manifold trustworthiness use an inverted index instead of an np where lookup to find the rank of neighbors in the input space This improves efficiency in 
particular when computed with lots of neighbors and or small datasets pr 9907 by user William de Vazelhes wdevazelhes mod sklearn metrics Feature Added the func metrics max error metric and a corresponding max error scorer for single output regression pr 12232 by user Krishna Sangeeth whiletruelearn Feature Add func metrics multilabel confusion matrix which calculates a confusion matrix with true positive false positive false negative and true negative counts for each class This facilitates the calculation of set wise metrics such as recall specificity fall out and miss rate pr 11179 by user Shangwu Yao ShangwuYao and Joel Nothman Feature func metrics jaccard score has been added to calculate the Jaccard coefficient as an evaluation metric for binary multilabel and multiclass tasks with an interface analogous to func metrics f1 score pr 13151 by user Gaurav Dhingra gxyd and Joel Nothman Feature Added func metrics pairwise haversine distances which can be accessed with metric pairwise through func metrics pairwise distances and estimators Haversine distance was previously available for nearest neighbors calculation pr 12568 by user Wei Xue xuewei4d user Emmanuel Arias eamanu and Joel Nothman Efficiency Faster func metrics pairwise distances with n jobs 1 by using a thread based backend instead of process based backends pr 8216 by user Pierre Glaser pierreglaser and user Romuald Menuet zanospi Efficiency The pairwise manhattan distances with sparse input now uses the BLAS shipped with scipy instead of the bundled BLAS pr 12732 by user J r mie du Boisberranger jeremiedbb Enhancement Use label accuracy instead of micro average on func metrics classification report to avoid confusion micro average is only shown for multi label or multi class with a subset of classes because it is otherwise identical to accuracy pr 12334 by user Emmanuel Arias eamanu eamanu com Joel Nothman and Andreas M ller Enhancement Added beta parameter to func metrics homogeneity completeness v 
measure and func metrics v measure score to configure the tradeoff between homogeneity and completeness pr 13607 by user Stephane Couvreur scouvreur and and user Ivan Sanchez ivsanro1 Fix The metric func metrics r2 score is degenerate with a single sample and now it returns NaN and raises class exceptions UndefinedMetricWarning pr 12855 by user Pawel Sendyk psendyk Fix Fixed a bug where func metrics brier score loss will sometimes return incorrect result when there s only one class in y true pr 13628 by user Hanmin Qin qinhanmin2014 Fix Fixed a bug in func metrics label ranking average precision score where sample weight wasn t taken into account for samples with degenerate labels pr 13447 by user Dan Ellis dpwe API The parameter labels in func metrics hamming loss is deprecated in version 0 21 and will be removed in version 0 23 pr 10580 by user Reshama Shaikh reshamas and user Sandra Mitrovic SandraMNE Fix The function func metrics pairwise euclidean distances and therefore several estimators with metric euclidean suffered from numerical precision issues with float32 features Precision has been increased at the cost of a small drop of performance pr 13554 by user Celelibi and user J r mie du Boisberranger jeremiedbb API metrics jaccard similarity score is deprecated in favour of the more consistent func metrics jaccard score The former behavior for binary and multiclass targets is broken pr 13151 by Joel Nothman mod sklearn mixture Fix Fixed a bug in mixture BaseMixture and therefore on estimators based on it i e class mixture GaussianMixture and class mixture BayesianGaussianMixture where fit predict and fit predict were not equivalent pr 13142 by user J r mie du Boisberranger jeremiedbb mod sklearn model selection Feature Classes class model selection GridSearchCV and class model selection RandomizedSearchCV now allow for refit callable to add flexibility in identifying the best estimator See ref sphx glr auto examples model selection plot grid search refit 
callable py pr 11354 by user Wenhao Zhang wenhaoz ucla edu Joel Nothman and user Adrin Jalali adrinjalali Enhancement Classes class model selection GridSearchCV class model selection RandomizedSearchCV and methods func model selection cross val score func model selection cross val predict func model selection cross validate now print train scores when return train scores is True and verbose 2 For func model selection learning curve and func model selection validation curve only the latter is required pr 12613 and pr 12669 by user Marc Torrellas marctorrellas Enhancement Some term CV splitter classes and model selection train test split now raise ValueError when the resulting training set is empty pr 12861 by user Nicolas Hug NicolasHug Fix Fixed a bug where class model selection StratifiedKFold shuffles each class s samples with the same random state making shuffle True ineffective pr 13124 by user Hanmin Qin qinhanmin2014 Fix Added ability for func model selection cross val predict to handle multi label and multioutput multiclass targets with predict proba type methods pr 8773 by user Stephen Hoover stephen hoover Fix Fixed an issue in func model selection cross val predict where method predict proba returned always 0 0 when one of the classes was excluded in a cross validation fold pr 13366 by user Guillaume Fournier gfournier mod sklearn multiclass Fix Fixed an issue in func multiclass OneVsOneClassifier decision function where the decision function value of a given sample was different depending on whether the decision function was evaluated on the sample alone or on a batch containing this same sample due to the scaling used in decision function pr 10440 by user Jonathan Ohayon Johayon mod sklearn multioutput Fix Fixed a bug in class multioutput MultiOutputClassifier where the predict proba method incorrectly checked for predict proba attribute in the estimator object pr 12222 by user Rebekah Kim rebekahkim mod sklearn neighbors MajorFeature Added class 
neighbors NeighborhoodComponentsAnalysis for metric learning which implements the Neighborhood Components Analysis algorithm pr 10058 by user William de Vazelhes wdevazelhes and user John Chiotellis johny c API Methods in class neighbors NearestNeighbors func neighbors NearestNeighbors kneighbors func neighbors NearestNeighbors radius neighbors func neighbors NearestNeighbors kneighbors graph func neighbors NearestNeighbors radius neighbors graph now raise NotFittedError rather than AttributeError when called before fit pr 12279 by user Krishna Sangeeth whiletruelearn mod sklearn neural network Fix Fixed a bug in class neural network MLPClassifier and class neural network MLPRegressor where the option code shuffle False was being ignored pr 12582 by user Sam Waterbury samwaterbury Fix Fixed a bug in class neural network MLPClassifier where validation sets for early stopping were not sampled with stratification In the multilabel case however splits are still not stratified pr 13164 by user Nicolas Hug NicolasHug mod sklearn pipeline Feature class pipeline Pipeline can now use indexing notation e g my pipeline 0 1 to extract a subsequence of steps as another Pipeline instance A Pipeline can also be indexed directly to extract a particular step e g my pipeline svc rather than accessing named steps pr 2568 by Joel Nothman Feature Added optional parameter verbose in class pipeline Pipeline class compose ColumnTransformer and class pipeline FeatureUnion and corresponding make helpers for showing progress and timing of each step pr 11364 by user Baze Petrushev petrushev user Karan Desai karandesai 96 Joel Nothman and user Thomas Fan thomasjpfan Enhancement class pipeline Pipeline now supports using passthrough as a transformer with the same effect as None pr 11144 by user Thomas Fan thomasjpfan Enhancement class pipeline Pipeline implements len and therefore len pipeline returns the number of steps in the pipeline pr 13439 by user Lakshya KD LakshKD mod sklearn 
preprocessing Feature class preprocessing OneHotEncoder now supports dropping one feature per category with a new drop parameter pr 12908 by user Drew Johnston drewmjohnston Efficiency class preprocessing OneHotEncoder and class preprocessing OrdinalEncoder now handle pandas DataFrames more efficiently pr 13253 by user maikia Efficiency Make class preprocessing MultiLabelBinarizer cache class mappings instead of calculating it every time on the fly pr 12116 by user Ekaterina Krivich kiote and Joel Nothman Efficiency class preprocessing PolynomialFeatures now supports compressed sparse row CSR matrices as input for degrees 2 and 3 This is typically much faster than the dense case as it scales with matrix density and expansion degree on the order of density degree and is much much faster than the compressed sparse column CSC case pr 12197 by user Andrew Nystrom awnystrom Efficiency Speed improvement in class preprocessing PolynomialFeatures in the dense case Also added a new parameter order which controls output order for further speed performances pr 12251 by Tom Dupre la Tour Fix Fixed the calculation overflow when using a float16 dtype with class preprocessing StandardScaler pr 13007 by user Raffaello Baluyot baluyotraf Fix Fixed a bug in class preprocessing QuantileTransformer and func preprocessing quantile transform to force n quantiles to be at most equal to n samples Values of n quantiles larger than n samples were either useless or resulting in a wrong approximation of the cumulative distribution function estimator pr 13333 by user Albert Thomas albertcthomas API The default value of copy in func preprocessing quantile transform will change from False to True in 0 23 in order to make it more consistent with the default copy values of other functions in mod sklearn preprocessing and prevent unexpected side effects by modifying the value of X inplace pr 13459 by user Hunter McGushion HunterMcGushion mod sklearn svm Fix Fixed an issue in func svm SVC decision 
function when decision function shape ovr The decision function value of a given sample was different depending on whether the decision function was evaluated on the sample alone or on a batch containing this same sample due to the scaling used in decision function pr 10440 by user Jonathan Ohayon Johayon mod sklearn tree Feature Decision Trees can now be plotted with matplotlib using tree plot tree without relying on the dot library removing a hard to install dependency pr 8508 by Andreas M ller Feature Decision Trees can now be exported in a human readable textual format using func tree export text pr 6261 by Giuseppe Vettigli JustGlowing Feature get n leaves and get depth have been added to tree BaseDecisionTree and consequently all estimators based on it including class tree DecisionTreeClassifier class tree DecisionTreeRegressor class tree ExtraTreeClassifier and class tree ExtraTreeRegressor pr 12300 by user Adrin Jalali adrinjalali Fix Trees and forests did not previously predict multi output classification targets with string labels despite accepting them in fit pr 11458 by user Mitar Milutinovic mitar Fix Fixed an issue with tree BaseDecisionTree and consequently all estimators based on it including class tree DecisionTreeClassifier class tree DecisionTreeRegressor class tree ExtraTreeClassifier and class tree ExtraTreeRegressor where they used to exceed the given max depth by 1 while expanding the tree if max leaf nodes and max depth were both specified by the user Please note that this also affects all ensemble methods using decision trees pr 12344 by user Adrin Jalali adrinjalali mod sklearn utils Feature func utils resample now accepts a stratify parameter for sampling according to class distributions pr 13549 by user Nicolas Hug NicolasHug API Deprecated warn on dtype parameter from func utils check array and func utils check X y Added explicit warning for dtype conversion in check pairwise arrays if the metric being passed is a pairwise boolean 
metric. :pr:`13382` by :user:`Prathmesh Savale <praths007>`.
Multiple modules
................
- |MajorFeature| The `__repr__` method of all estimators (used when calling
`print(estimator)`) has been entirely re-written, building on Python's pretty
printing standard library. All parameters are printed by default, but this can
be altered with the `print_changed_only` option in :func:`sklearn.set_config`.
:pr:`11705` by :user:`Nicolas Hug <NicolasHug>`.
- |MajorFeature| Add estimator tags: these are annotations of estimators that
allow programmatic inspection of their capabilities, such as sparse matrix
support, supported output types and supported methods. Estimator tags also
determine the tests that are run on an estimator when `check_estimator` is
called. Read more in the :ref:`User Guide <estimator_tags>`.
:pr:`8022` by :user:`Andreas Müller <amueller>`.
- |Efficiency| Memory copies are avoided when casting arrays to a different
dtype in multiple estimators.
:pr:`11973` by :user:`Roman Yurchak <rth>`.
- |Fix| Fixed a bug in the implementation of the `our_rand_r` helper function
that was not behaving consistently across platforms.
:pr:`13422` by :user:`Madhura Parikh <jdnc>` and
:user:`Clément Doumouro <ClemDoum>`.
Miscellaneous
.............
- |Enhancement| Joblib is no longer vendored in scikit-learn, and becomes a
dependency. Minimal supported version is joblib 0.11, however using
version >= 0.13 is strongly recommended.
:pr:`13531` by :user:`Roman Yurchak <rth>`.
Changes to estimator checks
---------------------------
These changes mostly affect library developers.
- Add `check_fit_idempotent` to :func:`utils.estimator_checks.check_estimator`,
which checks that when `fit` is called twice with the same data, the output of
`predict`, `predict_proba`, `transform`, and `decision_function` does not
change. :pr:`12328` by :user:`Nicolas Hug <NicolasHug>`.
- Many checks can now be disabled or configured with
:ref:`estimator tags <estimator_tags>`.
:pr:`8022` by :user:`Andreas Müller <amueller>`.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.20, including:
adanhawth, Aditya Vyas, Adrin Jalali, Agamemnon Krasoulis, Albert Thomas,
Alberto Torres, Alexandre Gramfort, amourav, Andrea Navarrete, Andreas
Mueller, Andrew Nystrom, assiaben, Aurélien Bellet, Bartosz Michałowski,
Bartosz Telenczuk, bauks, BenjaStudio, bertrandhaut, Bharat Raghunathan,
brentfagan, Bryan Woods, Cat Chenal, Cheuk Ting Ho, Chris Choe, Christos
Aridas, Clément Doumouro, Cole Smith, Connossor, Corey Levinson, Dan Ellis,
Dan Stine, Danylo Baibak, daten-kieker, Denis Kataev, Didi Bar-Zev, Dillon
Gardner, Dmitry Mottl, Dmitry Vukolov, Dougal J Sutherland, Dowon,
drewmjohnston, Dror Atariah, Edward J Brown, Ekaterina Krivich, Elizabeth
Sander, Emmanuel Arias, Eric Chang, Eric Larson, Erich Schubert, esvhd,
Falak, Feda Curic, Federico Caselli, Frank Hoang, Fibinse Xavier, Finn
O'Shea, Gabriel Marzinotto, Gabriel Vacaliuc, Gabriele Calvo, Gael
Varoquaux, GauravAhlawat, Giuseppe Vettigli, Greg Gandenberger, Guillaume
Fournier, Guillaume Lemaitre, Gustavo De Mari Pereira, Hanmin Qin,
haroldfox, hhu-luqi, Hunter McGushion, Ian Sanders, JackLangerman, Jacopo
Notarstefano, jakirkham, James Bourbeau, Jan Koch, Jan S, janvanrijn,
Jarrod Millman, jdethurens, jeremiedbb, JF, joaak, Joan Massich, Joel
Nothman, Jonathan Ohayon, Joris Van den Bossche, josephsalmon, Jérémie
Méhault, Katrin Leinweber, ken, kms15, Koen, Kossori Aruku, Krishna
Sangeeth, Kuai Yu, Kulbear, Kushal Chauhan, Kyle Jackson, Lakshya KD,
Leandro Hermida, Lee Yi Jie Joel, Lily Xiong, Lisa Sarah Thomas, Loic
Esteve, louib, luk-f-a, maikia, mail-liam, Manimaran, Manuel López-Ibáñez,
Marc Torrellas, Marco Gaido, Marco Gorelli, MarcoGorelli, marineLM, Mark
Hannel, Martin Gubri, Masstran, mathurinm, Matthew Roeschke, Max Copeland,
melsyt, mferrari3, Mickaël Schoentgen, Ming Li, Mitar, Mohammad Aftab,
Mohammed AbdelAal, Mohammed Ibraheem, Muhammad Hassaan Rafique, mwestt,
Naoya Iijima, Nicholas Smith, Nicolas Goix, Nicolas Hug, Nikolay Shebanov,
Oleksandr Pavlyk, Oliver Rausch, Olivier Grisel, Orestis, Osman, Owen
Flanagan, Paul Paczuski, Pavel Soriano, pavlos kallis, Pawel Sendyk, peay,
Peter, Peter Cock, Peter Hausamann, Peter Marko, Pierre Glaser,
pierretallotte, Pim de Haan, Piotr Szymański, Prabakaran Kumaresshan,
Pradeep Reddy Raamana, Prathmesh Savale, Pulkit Maloo, Quentin Batista,
Radostin Stoyanov, Raf Baluyot, Rajdeep Dua, Ramil Nugmanov, Raúl García
Calvo, Rebekah Kim, Reshama Shaikh, Rohan Lekhwani, Rohan Singh, Rohan
Varma, Rohit Kapoor, Roman Feldbauer, Roman Yurchak, Romuald M, Roopam
Sharma, Ryan, Rüdiger Busche, Sam Waterbury, Samuel O. Ronsin,
SandroCasagrande, Scott Cole, Scott Lowe, Sebastian Raschka, Shangwu Yao,
Shivam Kotwalia, Shiyu Duan, smarie, Sriharsha Hatwar, Stephen Hoover,
Stephen Tierney, Stéphane Couvreur, surgan12, SylvainLan, TakingItCasual,
Tashay Green, thibsej, Thomas Fan, Thomas J Fan, Thomas Moreau, Tom Dupré
la Tour, Tommy, Tulio Casagrande, Umar Farouk Umar, Utkarsh Upadhyay,
Vinayak Mehta, Vishaal Kapoor, Vivek Kumar, Vlad Niculae, vqean3, Wenhao
Zhang, William de Vazelhes, xhan, Xing Han Lu, xinyuliu12, Yaroslav
Halchenko, Zach Griffith, Zach Miller, Zayd Hammoudeh, Zhuyi Xue,
Zijie (ZJ) Poh
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_1_2:
===========
Version 1.2
===========
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_2_0.py`.
.. include:: changelog_legend.inc
.. _changes_1_2_2:
Version 1.2.2
=============
**March 2023**
Changelog
---------
:mod:`sklearn.base`
...................
- |Fix| When `set_output(transform="pandas")`, :class:`base.TransformerMixin` maintains
the index if the :term:`transform` output is already a DataFrame. :pr:`25747` by
`Thomas Fan`_.
:mod:`sklearn.calibration`
..........................
- |Fix| A deprecation warning is raised when using the `base_estimator__` prefix to
set parameters of the estimator used in :class:`calibration.CalibratedClassifierCV`.
:pr:`25477` by :user:`Tim Head <betatim>`.
:mod:`sklearn.cluster`
......................
- |Fix| Fixed a bug in :class:`cluster.BisectingKMeans`, preventing `fit` to randomly
fail due to a permutation of the labels when running multiple inits.
:pr:`25563` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.compose`
......................
- |Fix| Fixes a bug in :class:`compose.ColumnTransformer` which now supports
empty selection of columns when `set_output(transform="pandas")`.
:pr:`25570` by `Thomas Fan`_.
:mod:`sklearn.ensemble`
.......................
- |Fix| A deprecation warning is raised when using the `base_estimator__` prefix
to set parameters of the estimator used in :class:`ensemble.AdaBoostClassifier`,
:class:`ensemble.AdaBoostRegressor`, :class:`ensemble.BaggingClassifier`,
and :class:`ensemble.BaggingRegressor`.
:pr:`25477` by :user:`Tim Head <betatim>`.
:mod:`sklearn.feature_selection`
................................
- |Fix| Fixed a regression where a negative `tol` would not be accepted any more by
:class:`feature_selection.SequentialFeatureSelector`.
:pr:`25664` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.inspection`
.........................
- |Fix| Raise a more informative error message in :func:`inspection.partial_dependence`
when dealing with mixed data type categories that cannot be sorted by
:func:`numpy.unique`. This problem usually happens when categories are `str` and
missing values are represented by `np.nan`.
:pr:`25774` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.isotonic`
.......................
- |Fix| Fixes a bug in :class:`isotonic.IsotonicRegression` where
:meth:`isotonic.IsotonicRegression.predict` would return a pandas DataFrame
when the global configuration sets `transform_output="pandas"`.
:pr:`25500` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.preprocessing`
............................
- |Fix| `preprocessing.OneHotEncoder.drop_idx_` now properly
references the dropped category in the `categories_` attribute
when there are infrequent categories. :pr:`25589` by `Thomas Fan`_.
- |Fix| :class:`preprocessing.OrdinalEncoder` now correctly supports
`encoded_missing_value` or `unknown_value` set to the cardinality of the
categories when there are missing values in the training data. :pr:`25704` by `Thomas Fan`_.
:mod:`sklearn.tree`
...................
- |Fix| Fixed a regression in :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and
:class:`tree.ExtraTreeRegressor` where an error was no longer raised in version
1.2 when `min_samples_split=1`.
:pr:`25744` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.utils`
....................
- |Fix| Fixes a bug in :func:`utils.check_array` which now correctly performs
non-finite validation with the Array API specification. :pr:`25619` by
`Thomas Fan`_.
- |Fix| :func:`utils.multiclass.type_of_target` can identify pandas
nullable data types as classification targets. :pr:`25638` by `Thomas Fan`_.
.. _changes_1_2_1:
Version 1.2.1
=============
**January 2023**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Fix| The fitted components in
:class:`decomposition.MiniBatchDictionaryLearning` might differ. The online
updates of the sufficient statistics now properly take the sizes of the
batches into account.
:pr:`25354` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| The `categories_` attribute of :class:`preprocessing.OneHotEncoder` now
always contains an array of `object`s when using predefined categories that
are strings. Predefined categories encoded as bytes will no longer work
with `X` encoded as strings. :pr:`25174` by :user:`Tim Head <betatim>`.
Changes impacting all modules
-----------------------------
- |Fix| Support `pandas.Int64` dtyped `y` for classifiers and regressors.
:pr:`25089` by :user:`Tim Head <betatim>`.
- |Fix| Remove spurious warnings for estimators internally using neighbors search methods.
:pr:`25129` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| Fix a bug where the current configuration was ignored in estimators using
`n_jobs > 1`. This bug was triggered for tasks dispatched by the auxiliary
thread of `joblib` as :func:`sklearn.get_config` used to access an empty thread
local configuration instead of the configuration visible from the thread where
`joblib.Parallel` was first called.
:pr:`25363` by :user:`Guillaume Lemaitre <glemaitre>`.
Changelog
---------
:mod:`sklearn.base`
...................
- |Fix| Fix a regression in `BaseEstimator.__getstate__` that would prevent
certain estimators from being pickled when using Python 3.11. :pr:`25188` by
:user:`Benjamin Bossan <BenjaminBossan>`.
- |Fix| Inheriting from :class:`base.TransformerMixin` will only wrap the `transform`
method if the class defines `transform` itself. :pr:`25295` by `Thomas Fan`_.
:mod:`sklearn.datasets`
.......................
- |Fix| Fixes an inconsistency in :func:`datasets.fetch_openml` between liac-arff
and pandas parser when a leading space is introduced after the delimiter.
The ARFF specification requires ignoring the leading space.
:pr:`25312` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fixes a bug in :func:`datasets.fetch_openml` when using `parser="pandas"`
where single quote and backslash escape characters were not properly handled.
:pr:`25511` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed a bug in :class:`decomposition.MiniBatchDictionaryLearning` where the
online updates of the sufficient statistics were not correct when calling
`partial_fit` on batches of different sizes.
:pr:`25354` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :class:`decomposition.DictionaryLearning` better supports readonly NumPy
arrays. In particular, it better supports large datasets which are memory-mapped
when it is used with coordinate descent algorithms (i.e. when `fit_algorithm='cd'`).
:pr:`25172` by :user:`Julien Jerphanion <jjerphan>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| :class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier`
and :class:`ensemble.ExtraTreesRegressor` now support sparse readonly datasets.
:pr:`25341` by :user:`Julien Jerphanion <jjerphan>`.
:mod:`sklearn.feature_extraction`
.................................
- |Fix| :class:`feature_extraction.FeatureHasher` raises an informative error
when the input is a list of strings. :pr:`25094` by `Thomas Fan`_.
:mod:`sklearn.linear_model`
...........................
- |Fix| Fix a regression in :class:`linear_model.SGDClassifier` and
:class:`linear_model.SGDRegressor` that makes them unusable with the
`verbose` parameter set to a value greater than 0.
:pr:`25250` by :user:`Jérémie Du Boisberranger <jeremiedbb>`.
:mod:`sklearn.manifold`
.......................
- |Fix| :class:`manifold.TSNE` now works correctly when the output type is
set to pandas. :pr:`25370` by :user:`Tim Head <betatim>`.
:mod:`sklearn.model_selection`
..............................
- |Fix| In :func:`model_selection.cross_validate` with multimetric scoring,
when some scorers fail, the non-failing scorers now return proper
scores instead of `error_score` values.
:pr:`23101` by :user:`András Simon <simonandras>` and `Thomas Fan`_.
:mod:`sklearn.neural_network`
.............................
- |Fix| :class:`neural_network.MLPClassifier` and :class:`neural_network.MLPRegressor`
no longer raise warnings when fitting data with feature names.
:pr:`24873` by :user:`Tim Head <betatim>`.
- |Fix| Improves error message in :class:`neural_network.MLPClassifier` and
:class:`neural_network.MLPRegressor`, when `early_stopping=True` and
`partial_fit` is called. :pr:`25694` by `Thomas Fan`_.
:mod:`sklearn.preprocessing`
............................
- |Fix| :meth:`preprocessing.FunctionTransformer.inverse_transform` correctly
supports DataFrames that are all numerical when `check_inverse=True`.
:pr:`25274` by `Thomas Fan`_.
- |Fix| :meth:`preprocessing.SplineTransformer.get_feature_names_out` correctly
returns feature names when `extrapolation="periodic"`. :pr:`25296` by
`Thomas Fan`_.
:mod:`sklearn.tree`
...................
- |Fix| :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,
:class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor`
now support sparse readonly datasets.
:pr:`25341` by :user:`Julien Jerphanion <jjerphan>`.
:mod:`sklearn.utils`
....................
- |Fix| Restore :func:`utils.check_array`'s behaviour for pandas Series of type
boolean. The type is maintained, instead of converting to `float64`.
:pr:`25147` by :user:`Tim Head <betatim>`.
- |API| `utils.fixes.delayed` is deprecated in 1.2.1 and will be removed
in 1.5. Instead, import :func:`utils.parallel.delayed` and use it in
conjunction with the newly introduced :func:`utils.parallel.Parallel`
to ensure proper propagation of the scikit-learn configuration to
the workers.
:pr:`25363` by :user:`Guillaume Lemaitre <glemaitre>`.
.. _changes_1_2:
Version 1.2.0
=============
**December 2022**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Enhancement| The default `eigen_tol` for :class:`cluster.SpectralClustering`,
:class:`manifold.SpectralEmbedding`, :func:`cluster.spectral_clustering`,
and :func:`manifold.spectral_embedding` is now `None` when using the `'amg'`
or `'lobpcg'` solvers. This change improves numerical stability of the
solver, but may result in a different model.
- |Enhancement| :class:`linear_model.GammaRegressor`,
:class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`
can reach higher precision with the lbfgs solver, in particular when `tol` is set
to a tiny value. Moreover, `verbose` is now properly propagated to L-BFGS-B.
:pr:`23619` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| The default value for `eps` in :func:`metrics.log_loss` has changed
from `1e-15` to `"auto"`. `"auto"` sets `eps` to `np.finfo(y_pred.dtype).eps`.
:pr:`24354` by :user:`Safiuddin Khaja <Safikh>` and :user:`gsiisg <gsiisg>`.
- |Fix| Make sign of `components_` deterministic in :class:`decomposition.SparsePCA`.
:pr:`23935` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| The `components_` signs in :class:`decomposition.FastICA` might differ.
It is now consistent and deterministic with all SVD solvers.
:pr:`22527` by :user:`Meekail Zain <micky774>` and `Thomas Fan`_.
- |Fix| The condition for early stopping has been changed in
`linear_model._sgd_fast._plain_sgd`, which is used by
:class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier`. The old
condition did not distinguish between the
training and validation sets and had the effect of overscaling the error tolerance.
This has been fixed in :pr:`23798` by :user:`Harsh Agrawal <Harsh14901>`.
- |Fix| For :class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` ranks corresponding to nan
scores will all be set to the maximum possible rank.
:pr:`24543` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| The default value of `tol` was changed from `1e-3` to `1e-4` for
:func:`linear_model.ridge_regression`, :class:`linear_model.Ridge` and
:class:`linear_model.RidgeClassifier`.
:pr:`24465` by :user:`Christian Lorentzen <lorentzenchr>`.
Changes impacting all modules
-----------------------------
- |MajorFeature| The `set_output` API has been adopted by all transformers.
Meta-estimators that contain transformers such as :class:`pipeline.Pipeline`
or :class:`compose.ColumnTransformer` also define a `set_output`.
For details, see
`SLEP018 <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep018/proposal.html>`__.
:pr:`23734` and :pr:`24699` by `Thomas Fan`_.
- |Efficiency| Low-level routines for reductions on pairwise distances
for dense float32 datasets have been refactored. The following functions
and estimators now benefit from improved performances in terms of hardware
scalability and speed-ups:
- :func:`sklearn.metrics.pairwise_distances_argmin`
- :func:`sklearn.metrics.pairwise_distances_argmin_min`
- :class:`sklearn.cluster.AffinityPropagation`
- :class:`sklearn.cluster.Birch`
- :class:`sklearn.cluster.MeanShift`
- :class:`sklearn.cluster.OPTICS`
- :class:`sklearn.cluster.SpectralClustering`
- :func:`sklearn.feature_selection.mutual_info_regression`
- :class:`sklearn.neighbors.KNeighborsClassifier`
- :class:`sklearn.neighbors.KNeighborsRegressor`
- :class:`sklearn.neighbors.RadiusNeighborsClassifier`
- :class:`sklearn.neighbors.RadiusNeighborsRegressor`
- :class:`sklearn.neighbors.LocalOutlierFactor`
- :class:`sklearn.neighbors.NearestNeighbors`
- :class:`sklearn.manifold.Isomap`
- :class:`sklearn.manifold.LocallyLinearEmbedding`
- :class:`sklearn.manifold.TSNE`
- :func:`sklearn.manifold.trustworthiness`
- :class:`sklearn.semi_supervised.LabelPropagation`
- :class:`sklearn.semi_supervised.LabelSpreading`
For instance :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` and
:meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors`
can respectively be up to ×20 and ×5 faster than previously on a laptop.
Moreover, implementations of those two algorithms are now suitable
for machines with many cores, making them usable for datasets consisting
of millions of samples.
:pr:`23865` by :user:`Julien Jerphanion <jjerphan>`.
- |Enhancement| Finiteness checks (detection of NaN and infinite values) in all
estimators are now significantly more efficient for float32 data by leveraging
NumPy's SIMD optimized primitives.
:pr:`23446` by :user:`Meekail Zain <micky774>`.
- |Enhancement| Finiteness checks (detection of NaN and infinite values) in all
estimators are now faster by utilizing a more efficient stop-on-first
second-pass algorithm.
:pr:`23197` by :user:`Meekail Zain <micky774>`.
- |Enhancement| Support for combinations of dense and sparse dataset pairs
for all distance metrics and for float32 and float64 datasets has been added
or has seen its performance improved for the following estimators:
- :func:`sklearn.metrics.pairwise_distances_argmin`
- :func:`sklearn.metrics.pairwise_distances_argmin_min`
- :class:`sklearn.cluster.AffinityPropagation`
- :class:`sklearn.cluster.Birch`
- :class:`sklearn.cluster.SpectralClustering`
- :class:`sklearn.neighbors.KNeighborsClassifier`
- :class:`sklearn.neighbors.KNeighborsRegressor`
- :class:`sklearn.neighbors.RadiusNeighborsClassifier`
- :class:`sklearn.neighbors.RadiusNeighborsRegressor`
- :class:`sklearn.neighbors.LocalOutlierFactor`
- :class:`sklearn.neighbors.NearestNeighbors`
- :class:`sklearn.manifold.Isomap`
- :class:`sklearn.manifold.TSNE`
- :func:`sklearn.manifold.trustworthiness`
:pr:`23604` and :pr:`23585` by :user:`Julien Jerphanion <jjerphan>`,
:user:`Olivier Grisel <ogrisel>`, and `Thomas Fan`_,
:pr:`24556` by :user:`Vincent Maladière <Vincent-Maladiere>`.
- |Fix| Systematically check the sha256 digest of dataset tarballs used in code
examples in the documentation.
:pr:`24617` by :user:`Olivier Grisel <ogrisel>` and `Thomas Fan`_. Thanks to
`Sim4n6 <https://huntr.dev/users/sim4n6>`_ for the report.
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123456 is the *pull request* number, not the issue number.
:mod:`sklearn.base`
...................
- |Enhancement| Introduces :class:`base.ClassNamePrefixFeaturesOutMixin` and
:class:`base.OneToOneFeatureMixin` mixins that define
:term:`get_feature_names_out` for common transformer use cases.
:pr:`24688` by `Thomas Fan`_.
:mod:`sklearn.calibration`
..........................
- |API| Rename `base_estimator` to `estimator` in
:class:`calibration.CalibratedClassifierCV` to improve readability and consistency.
The parameter `base_estimator` is deprecated and will be removed in 1.4.
:pr:`22054` by :user:`Kevin Roice <kevroi>`.
:mod:`sklearn.cluster`
......................
- |Efficiency| :class:`cluster.KMeans` with `algorithm="lloyd"` is now faster
and uses less memory. :pr:`24264` by
:user:`Vincent Maladiere <Vincent-Maladiere>`.
- |Enhancement| The `predict` and `fit_predict` methods of :class:`cluster.OPTICS` now
accept sparse input data. :pr:`14736` by :user:`Hunt Zhan <huntzhan>`,
:pr:`20802` by :user:`Brandon Pokorny <Clickedbigfoot>`,
and :pr:`22965` by :user:`Meekail Zain <micky774>`.
- |Enhancement| :class:`cluster.Birch` now preserves dtype for `numpy.float32`
inputs. :pr:`22968` by :user:`Meekail Zain <micky774>`.
- |Enhancement| :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans`
now accept a new `'auto'` option for `n_init` which changes the number of
random initializations to one when using `init='k-means++'` for efficiency.
This begins deprecation for the default values of `n_init` in the two classes
and both will have their defaults changed to `n_init='auto'` in 1.4.
:pr:`23038` by :user:`Meekail Zain <micky774>`.
- |Enhancement| :class:`cluster.SpectralClustering` and
:func:`cluster.spectral_clustering` now propagates the `eigen_tol` parameter
to all choices of `eigen_solver`. Includes a new option `eigen_tol="auto"`
and begins deprecation to change the default from `eigen_tol=0` to
`eigen_tol="auto"` in version 1.3.
:pr:`23210` by :user:`Meekail Zain <micky774>`.
- |Fix| :class:`cluster.KMeans` now supports readonly attributes when predicting.
:pr:`24258` by `Thomas Fan`_.
- |API| The `affinity` attribute is now deprecated for
:class:`cluster.AgglomerativeClustering` and will be renamed to `metric` in v1.4.
:pr:`23470` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.datasets`
.......................
- |Enhancement| Introduce the new parameter `parser` in
:func:`datasets.fetch_openml`. `parser="pandas"` makes it possible to use the very
CPU- and memory-efficient `pandas.read_csv` parser to load dense ARFF
formatted dataset files. It is possible to pass `parser="liac-arff"`
to use the old LIAC parser.
When `parser="auto"`, dense datasets are loaded with "pandas" and sparse
datasets are loaded with "liac-arff".
The default is currently `parser="liac-arff"`; it will change to `parser="auto"`
in version 1.4.
:pr:`21938` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :func:`datasets.dump_svmlight_file` is now accelerated with a
Cython implementation, providing 2-4x speedups.
:pr:`23127` by :user:`Meekail Zain <micky774>`.
- |Enhancement| Path-like objects, such as those created with pathlib are now
allowed as paths in :func:`datasets.load_svmlight_file` and
:func:`datasets.load_svmlight_files`.
:pr:`19075` by :user:`Carlos Ramos Carreño <vnmabus>`.
- |Fix| Make sure that :func:`datasets.fetch_lfw_people` and
:func:`datasets.fetch_lfw_pairs` internally crops images based on the
`slice_` parameter.
:pr:`24951` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.decomposition`
............................
- |Efficiency| :func:`decomposition.FastICA.fit` has been optimized with respect
to its memory footprint and runtime.
:pr:`22268` by :user:`MohamedBsh <Bsh>`.
- |Enhancement| :class:`decomposition.SparsePCA` and
:class:`decomposition.MiniBatchSparsePCA` now implement an `inverse_transform`
method.
:pr:`23905` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :class:`decomposition.FastICA` now allows the user to select
how whitening is performed through the new `whiten_solver` parameter, which
supports `svd` and `eigh`. `whiten_solver` defaults to `svd` although `eigh`
may be faster and more memory efficient in cases where
`num_features > num_samples`.
:pr:`11860` by :user:`Pierre Ablin <pierreablin>`,
:pr:`22527` by :user:`Meekail Zain <micky774>` and `Thomas Fan`_.
- |Enhancement| :class:`decomposition.LatentDirichletAllocation` now preserves dtype
for `numpy.float32` input. :pr:`24528` by :user:`Takeshi Oura <takoika>` and
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Make sign of `components_` deterministic in :class:`decomposition.SparsePCA`.
:pr:`23935` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| The `n_iter` parameter of :class:`decomposition.MiniBatchSparsePCA` is
deprecated and replaced by the parameters `max_iter`, `tol`, and
`max_no_improvement` to be consistent with
:class:`decomposition.MiniBatchDictionaryLearning`. `n_iter` will be removed
in version 1.3. :pr:`23726` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| The `n_features_` attribute of
:class:`decomposition.PCA` is deprecated in favor of
`n_features_in_` and will be removed in 1.4. :pr:`24421` by
:user:`Kshitij Mathur <Kshitij68>`.
:mod:`sklearn.discriminant_analysis`
....................................
- |MajorFeature| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now
supports the `Array API <https://data-apis.org/array-api/latest/>`_ for
`solver="svd"`. Array API support is considered experimental and might evolve
without being subjected to our usual rolling deprecation cycle policy. See
:ref:`array_api` for more details. :pr:`22554` by `Thomas Fan`_.
- |Fix| Validate parameters only in `fit` and not in `__init__`
for :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.
:pr:`24218` by :user:`Stefanie Molin <stefmolin>`.
:mod:`sklearn.ensemble`
.......................
- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` now support
interaction constraints via the argument `interaction_cst` of their
constructors.
:pr:`21020` by :user:`Christian Lorentzen <lorentzenchr>`.
Using interaction constraints also makes fitting faster.
:pr:`24856` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Feature| Adds `class_weight` to :class:`ensemble.HistGradientBoostingClassifier`.
:pr:`22014` by `Thomas Fan`_.
- |Efficiency| Improve runtime performance of :class:`ensemble.IsolationForest`
by avoiding data copies. :pr:`23252` by :user:`Zhehao Liu <MaxwellLZH>`.
- |Enhancement| :class:`ensemble.StackingClassifier` now accepts any kind of
base estimator.
:pr:`24538` by :user:`Guillem G Subies <GuillemGSubies>`.
- |Enhancement| Make it possible to pass the `categorical_features` parameter
of :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` as feature names.
:pr:`24889` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| :class:`ensemble.StackingClassifier` now supports
multilabel-indicator target
:pr:`24146` by :user:`Nicolas Peretti <nicoperetti>`,
:user:`Nestor Navarro <nestornav>`, :user:`Nati Tomattis <natitomattis>`,
and :user:`Vincent Maladiere <Vincent-Maladiere>`.
- |Enhancement| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` now accept their
`monotonic_cst` parameter to be passed as a dictionary in addition
to the previously supported array-like format.
Such a dictionary has feature names as keys and one of `-1`, `0`, `1`
as values to specify a monotonicity constraint for each feature.
:pr:`24855` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| Interaction constraints for
:class:`ensemble.HistGradientBoostingClassifier`
and :class:`ensemble.HistGradientBoostingRegressor` can now be specified
as strings for two common cases: "no_interactions" and "pairwise" interactions.
:pr:`24849` by :user:`Tim Head <betatim>`.
- |Fix| Fixed the issue where :class:`ensemble.AdaBoostClassifier` outputs
NaN in feature importance when fitted with very small sample weight.
:pr:`20415` by :user:`Zhehao Liu <MaxwellLZH>`.
- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` no longer error when predicting
on categories encoded as negative values and instead consider them a member
of the "missing category". :pr:`24283` by `Thomas Fan`_.
- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`, with `verbose>=1`, print detailed
timing information on computing histograms and finding best splits. The time spent in
the root node was previously missing and is now included in the printed information.
:pr:`24894` by :user:`Christian Lorentzen <lorentzenchr>`.
- |API| Rename the constructor parameter `base_estimator` to `estimator` in
the following classes:
:class:`ensemble.BaggingClassifier`,
:class:`ensemble.BaggingRegressor`,
:class:`ensemble.AdaBoostClassifier`,
:class:`ensemble.AdaBoostRegressor`.
`base_estimator` is deprecated in 1.2 and will be removed in 1.4.
:pr:`23819` by :user:`Adrian Trujillo <trujillo9616>` and
:user:`Edoardo Abati <EdAbati>`.
- |API| Rename the fitted attribute `base_estimator_` to `estimator_` in
the following classes:
:class:`ensemble.BaggingClassifier`,
:class:`ensemble.BaggingRegressor`,
:class:`ensemble.AdaBoostClassifier`,
:class:`ensemble.AdaBoostRegressor`,
:class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier`,
:class:`ensemble.ExtraTreesRegressor`,
:class:`ensemble.RandomTreesEmbedding`,
:class:`ensemble.IsolationForest`.
`base_estimator_` is deprecated in 1.2 and will be removed in 1.4.
:pr:`23819` by :user:`Adrian Trujillo <trujillo9616>` and
:user:`Edoardo Abati <EdAbati>`.
:mod:`sklearn.feature_selection`
................................
- |Fix| Fix a bug in :func:`feature_selection.mutual_info_regression` and
:func:`feature_selection.mutual_info_classif`, so that the continuous features
in `X` are scaled to unit variance independently of whether the target `y`
is continuous or discrete.
:pr:`24747` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.gaussian_process`
...............................
- |Fix| Fix :class:`gaussian_process.kernels.Matern` gradient computation with
`nu=0.5` for PyPy (and possibly other non CPython interpreters). :pr:`24245`
by :user:`Loïc Estève <lesteve>`.
- |Fix| The `fit` method of :class:`gaussian_process.GaussianProcessRegressor`
no longer modifies the input `X` when a custom kernel is used whose `diag`
method returns part of the input `X`. :pr:`24405`
by :user:`Omar Salman <OmarManzoor>`.
:mod:`sklearn.impute`
.....................
- |Enhancement| Added `keep_empty_features` parameter to
:class:`impute.SimpleImputer`, :class:`impute.KNNImputer` and
:class:`impute.IterativeImputer`, preventing removal of features
containing only missing values when transforming.
:pr:`16695` by :user:`Vitor Santa Rosa <vitorsrg>`.
:mod:`sklearn.inspection`
.........................
- |MajorFeature| Extended :func:`inspection.partial_dependence` and
:class:`inspection.PartialDependenceDisplay` to handle categorical features.
:pr:`18298` by :user:`Madhura Jayaratne <madhuracj>` and
:user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :class:`inspection.DecisionBoundaryDisplay` now raises error if input
data is not 2-dimensional.
:pr:`25077` by :user:`Arturo Amor <ArturoAmorQ>`.
:mod:`sklearn.kernel_approximation`
...................................
- |Enhancement| :class:`kernel_approximation.RBFSampler` now preserves
dtype for `numpy.float32` inputs. :pr:`24317` by :user:`Tim Head <betatim>`.
- |Enhancement| :class:`kernel_approximation.SkewedChi2Sampler` now preserves
dtype for `numpy.float32` inputs. :pr:`24350` by :user:`Rahil Parikh <rprkh>`.
- |Enhancement| :class:`kernel_approximation.RBFSampler` now accepts
`'scale'` option for parameter `gamma`.
:pr:`24755` by :user:`Gleb Levitski <GLevV>`.
:mod:`sklearn.linear_model`
...........................
- |Enhancement| :class:`linear_model.LogisticRegression`,
:class:`linear_model.LogisticRegressionCV`, :class:`linear_model.GammaRegressor`,
:class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor` got
a new solver `solver="newton-cholesky"`. This is a 2nd order (Newton) optimisation
routine that uses a Cholesky decomposition of the Hessian matrix.
When `n_samples >> n_features`, the `"newton-cholesky"` solver has been observed to
converge both faster and to a higher precision solution than the `"lbfgs"` solver on
problems with one-hot encoded categorical variables with some rare categorical
levels.
:pr:`24637` and :pr:`24767` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| :class:`linear_model.GammaRegressor`,
:class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`
can reach higher precision with the lbfgs solver, in particular when `tol` is set
to a tiny value. Moreover, `verbose` is now properly propagated to L-BFGS-B.
:pr:`23619` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Fix| :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor` will
raise an error when all the validation samples have zero sample weight.
:pr:`23275` by :user:`Zhehao Liu <MaxwellLZH>`.
- |Fix| :class:`linear_model.SGDOneClassSVM` no longer performs parameter
validation in the constructor. All validation is now handled in `fit()` and
`partial_fit()`.
:pr:`24433` by :user:`Yogendrasingh <iofall>`, :user:`Arisa Y. <arisayosh>`
and :user:`Tim Head <betatim>`.
- |Fix| Fix average loss calculation when early stopping is enabled in
:class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier`.
Also updated the condition for early stopping accordingly.
:pr:`23798` by :user:`Harsh Agrawal <Harsh14901>`.
- |API| The default value for the `solver` parameter in
:class:`linear_model.QuantileRegressor` will change from `"interior-point"`
to `"highs"` in version 1.4.
:pr:`23637` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| String option `"none"` is deprecated for `penalty` argument
in :class:`linear_model.LogisticRegression`, and will be removed in version 1.4.
Use `None` instead. :pr:`23877` by :user:`Zhehao Liu <MaxwellLZH>`.
- |API| The default value of `tol` was changed from `1e-3` to `1e-4` for
:func:`linear_model.ridge_regression`, :class:`linear_model.Ridge` and
:class:`linear_model.RidgeClassifier`.
:pr:`24465` by :user:`Christian Lorentzen <lorentzenchr>`.
:mod:`sklearn.manifold`
.......................
- |Feature| Adds option to use the normalized stress in :class:`manifold.MDS`. This is
enabled by setting the new `normalized_stress` parameter to `True`.
:pr:`10168` by :user:`Łukasz Borchmann <Borchmann>`,
:pr:`12285` by :user:`Matthias Miltenberger <mattmilten>`,
:pr:`13042` by :user:`Matthieu Parizy <matthieu-pa>`,
:pr:`18094` by :user:`Roth E Conrad <rotheconrad>` and
:pr:`22562` by :user:`Meekail Zain <micky774>`.
- |Enhancement| Adds `eigen_tol` parameter to
:class:`manifold.SpectralEmbedding`. Both :func:`manifold.spectral_embedding`
and :class:`manifold.SpectralEmbedding` now propagate `eigen_tol` to all
choices of `eigen_solver`. Includes a new option `eigen_tol="auto"`
and begins deprecation to change the default from `eigen_tol=0` to
`eigen_tol="auto"` in version 1.3.
:pr:`23210` by :user:`Meekail Zain <micky774>`.
- |Enhancement| :class:`manifold.Isomap` now preserves
dtype for `np.float32` inputs. :pr:`24714` by :user:`Rahil Parikh <rprkh>`.
- |API| Added an `"auto"` option to the `normalized_stress` argument in
:class:`manifold.MDS` and :func:`manifold.smacof`. Note that
`normalized_stress` is only valid for non-metric MDS, therefore the `"auto"`
option enables `normalized_stress` when `metric=False` and disables it when
`metric=True`. `"auto"` will become the default value for `normalized_stress`
in version 1.4.
:pr:`23834` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.metrics`
......................
- |Feature| :func:`metrics.ConfusionMatrixDisplay.from_estimator`,
:func:`metrics.ConfusionMatrixDisplay.from_predictions`, and
:meth:`metrics.ConfusionMatrixDisplay.plot` accept a `text_kw` parameter which is
passed to matplotlib's `text` function. :pr:`24051` by `Thomas Fan`_.
- |Feature| :func:`metrics.class_likelihood_ratios` is added to compute the positive and
negative likelihood ratios derived from the confusion matrix
of a binary classification problem. :pr:`22518` by
:user:`Arturo Amor <ArturoAmorQ>`.
- |Feature| Add :class:`metrics.PredictionErrorDisplay` to plot residuals vs
predicted and actual vs predicted to qualitatively assess the behavior of a
regressor. The display can be created with the class methods
:func:`metrics.PredictionErrorDisplay.from_estimator` and
:func:`metrics.PredictionErrorDisplay.from_predictions`. :pr:`18020` by
:user:`Guillaume Lemaitre <glemaitre>`.
- |Feature| :func:`metrics.roc_auc_score` now supports micro-averaging
(`average="micro"`) for the One-vs-Rest multiclass case (`multi_class="ovr"`).
:pr:`24338` by :user:`Arturo Amor <ArturoAmorQ>`.
- |Enhancement| Adds an `"auto"` option to `eps` in :func:`metrics.log_loss`.
This option will automatically set the `eps` value depending on the data
type of `y_pred`. In addition, the default value of `eps` is changed from
`1e-15` to the new `"auto"` option.
:pr:`24354` by :user:`Safiuddin Khaja <Safikh>` and :user:`gsiisg <gsiisg>`.
- |Fix| Allows `csr_matrix` as input for the parameter `y_true` of
the :func:`metrics.label_ranking_average_precision_score` metric.
:pr:`23442` by :user:`Sean Atukorala <ShehanAT>`.
- |Fix| :func:`metrics.ndcg_score` will now trigger a warning when the `y_true`
value contains a negative value. Users may still use negative values, but the
result may not be between 0 and 1. Starting in v1.4, passing in negative
values for `y_true` will raise an error.
:pr:`22710` by :user:`Conroy Trinh <trinhcon>` and
:pr:`23461` by :user:`Meekail Zain <micky774>`.
- |Fix| :func:`metrics.log_loss` with `eps=0` now returns a correct value of 0 or
`np.inf` instead of `nan` for predictions at the boundaries (0 or 1). It also accepts
integer input.
:pr:`24365` by :user:`Christian Lorentzen <lorentzenchr>`.
- |API| The parameter `sum_over_features` of
:func:`metrics.pairwise.manhattan_distances` is deprecated and will be removed in 1.4.
:pr:`24630` by :user:`Rushil Desai <rusdes>`.
:mod:`sklearn.model_selection`
..............................
- |Feature| Added the class :class:`model_selection.LearningCurveDisplay`
that allows easy plotting of learning curves obtained by the function
:func:`model_selection.learning_curve`.
:pr:`24084` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| For all `SearchCV` classes and scipy >= 1.10, rank corresponding to a
nan score is correctly set to the maximum possible rank, rather than
`np.iinfo(np.int32).min`. :pr:`24141` by :user:`Loïc Estève <lesteve>`.
- |Fix| In both :class:`model_selection.HalvingGridSearchCV` and
:class:`model_selection.HalvingRandomSearchCV` parameter
combinations with a NaN score now share the lowest rank.
:pr:`24539` by :user:`Tim Head <betatim>`.
- |Fix| For :class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` ranks corresponding to nan
scores will all be set to the maximum possible rank.
:pr:`24543` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.multioutput`
..........................
- |Feature| Added boolean `verbose` flag to classes:
:class:`multioutput.ClassifierChain` and :class:`multioutput.RegressorChain`.
:pr:`23977` by :user:`Eric Fiegel <efiegel>`,
:user:`Chiara Marmo <cmarmo>`,
:user:`Lucy Liu <lucyleeow>`, and
:user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.naive_bayes`
..........................
- |Feature| Add a `predict_joint_log_proba` method to all naive Bayes classifiers.
:pr:`23683` by :user:`Andrey Melnik <avm19>`.
- |Enhancement| A new parameter `force_alpha` was added to
:class:`naive_bayes.BernoulliNB`, :class:`naive_bayes.ComplementNB`,
:class:`naive_bayes.CategoricalNB`, and :class:`naive_bayes.MultinomialNB`,
allowing users to set the `alpha` parameter to a very small number, greater than or
equal to 0, which was previously changed automatically to `1e-10`.
:pr:`16747` by :user:`arka204`,
:pr:`18805` by :user:`hongshaoyang`,
:pr:`22269` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.neighbors`
........................
- |Feature| Adds new function :func:`neighbors.sort_graph_by_row_values` to
sort a CSR sparse graph such that each row is stored with increasing values.
This is useful to improve efficiency when using precomputed sparse distance
matrices in a variety of estimators and avoid an `EfficiencyWarning`.
:pr:`23139` by `Tom Dupre la Tour`_.
- |Efficiency| :class:`neighbors.NearestCentroid` is faster and requires
less memory as it better leverages CPUs' caches to compute predictions.
:pr:`24645` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| The `bandwidth` parameter of :class:`neighbors.KernelDensity` can now
be set using Scott's or Silverman's estimation method.
:pr:`10468` by :user:`Ruben <icfly2>` and :pr:`22993` by
:user:`Jovan Stojanovic <jovan-stojanovic>`.
- |Enhancement| `neighbors.NeighborsBase` now accepts
Minkowski semi-metric (i.e. when :math:`0 < p < 1` for
`metric="minkowski"`) for `algorithm="auto"` or `algorithm="brute"`.
:pr:`24750` by :user:`Rudresh Veerkhare <RudreshVeerkhare>`.
- |Fix| :class:`neighbors.NearestCentroid` now raises an informative error message at fit-time
instead of failing with a low-level error message at predict-time.
:pr:`23874` by :user:`Juan Gomez <2357juan>`.
- |Fix| Set `n_jobs=None` by default (instead of `1`) for
:class:`neighbors.KNeighborsTransformer` and
:class:`neighbors.RadiusNeighborsTransformer`.
:pr:`24075` by :user:`Valentin Laurent <Valentin-Laurent>`.
- |Enhancement| :class:`neighbors.LocalOutlierFactor` now preserves
dtype for `numpy.float32` inputs.
:pr:`22665` by :user:`Julien Jerphanion <jjerphan>`.
:mod:`sklearn.neural_network`
.............................
- |Fix| :class:`neural_network.MLPClassifier` and
:class:`neural_network.MLPRegressor` always expose the parameters `best_loss_`,
`validation_scores_`, and `best_validation_score_`. `best_loss_` is set to
`None` when `early_stopping=True`, while `validation_scores_` and
`best_validation_score_` are set to `None` when `early_stopping=False`.
:pr:`24683` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.pipeline`
.......................
- |Enhancement| :meth:`pipeline.FeatureUnion.get_feature_names_out` can now
be used when one of the transformers in the :class:`pipeline.FeatureUnion` is
`"passthrough"`. :pr:`24058` by :user:`Diederik Perdok <diederikwp>`
- |Enhancement| The :class:`pipeline.FeatureUnion` class now has a `named_transformers`
attribute for accessing transformers by name.
:pr:`20331` by :user:`Christopher Flynn <crflynn>`.
:mod:`sklearn.preprocessing`
............................
- |Enhancement| :class:`preprocessing.FunctionTransformer` will always try to set
`n_features_in_` and `feature_names_in_` regardless of the `validate` parameter.
:pr:`23993` by `Thomas Fan`_.
- |Fix| :class:`preprocessing.LabelEncoder` correctly encodes NaNs in `transform`.
:pr:`22629` by `Thomas Fan`_.
- |API| The `sparse` parameter of :class:`preprocessing.OneHotEncoder`
is now deprecated and will be removed in version 1.4. Use `sparse_output` instead.
:pr:`24412` by :user:`Rushil Desai <rusdes>`.
:mod:`sklearn.svm`
..................
- |API| The `class_weight_` attribute is now deprecated for
:class:`svm.NuSVR`, :class:`svm.SVR`, :class:`svm.OneClassSVM`.
:pr:`22898` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.tree`
...................
- |Enhancement| :func:`tree.plot_tree`, :func:`tree.export_graphviz` now uses
a lower case `x[i]` to represent feature `i`. :pr:`23480` by `Thomas Fan`_.
:mod:`sklearn.utils`
....................
- |Feature| A new module exposes development tools to discover estimators (i.e.
:func:`utils.discovery.all_estimators`), displays (i.e.
:func:`utils.discovery.all_displays`) and functions (i.e.
:func:`utils.discovery.all_functions`) in scikit-learn.
:pr:`21469` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :func:`utils.extmath.randomized_svd` now accepts an argument,
`lapack_svd_driver`, to specify the LAPACK driver used in the internal
deterministic SVD of the randomized SVD algorithm.
:pr:`20617` by :user:`Srinath Kailasa <skailasa>`.
- |Enhancement| :func:`utils.validation.column_or_1d` now accepts a `dtype`
parameter to specify `y`'s dtype. :pr:`22629` by `Thomas Fan`_.
- |Enhancement| `utils.extmath.cartesian` now accepts arrays with different
`dtype` and will cast the output to the most permissive `dtype`.
:pr:`25067` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :func:`utils.multiclass.type_of_target` now properly handles sparse matrices.
:pr:`14862` by :user:`Léonard Binet <leonardbinet>`.
- |Fix| HTML representation no longer errors when an estimator class is a value in
`get_params`. :pr:`24512` by `Thomas Fan`_.
- |Fix| :func:`utils.estimator_checks.check_estimator` now takes into account
the `requires_positive_X` tag correctly. :pr:`24667` by `Thomas Fan`_.
- |Fix| :func:`utils.check_array` now supports Pandas Series with `pd.NA`
by raising a better error message or returning a compatible `ndarray`.
:pr:`25080` by `Thomas Fan`_.
- |API| The extra keyword parameters of :func:`utils.extmath.density` are deprecated
and will be removed in 1.4.
:pr:`24523` by :user:`Mia Bajic <clytaemnestra>`.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 1.1, including:
2357juan, 3lLobo, Adam J. Stewart, Adam Kania, Adam Li, Aditya Anulekh, Admir
Demiraj, adoublet, Adrin Jalali, Ahmedbgh, Aiko, Akshita Prasanth, Ala-Na,
Alessandro Miola, Alex, Alexandr, Alexandre Perez-Lebel, Alex Buzenet, Ali H.
El-Kassas, aman kumar, Amit Bera, András Simon, Andreas Grivas, Andreas
Mueller, Andrew Wang, angela-maennel, Aniket Shirsat, Anthony22-dev, Antony
Lee, anupam, Apostolos Tsetoglou, Aravindh R, Artur Hermano, Arturo Amor,
as-90, ashah002, Ashwin Mathur, avm19, Azaria Gebremichael, b0rxington, Badr
MOUFAD, Bardiya Ak, Bartłomiej Gońda, BdeGraaff, Benjamin Bossan, Benjamin
Carter, berkecanrizai, Bernd Fritzke, Bhoomika, Biswaroop Mitra, Brandon TH
Chen, Brett Cannon, Bsh, cache-missing, carlo, Carlos Ramos Carreño, ceh,
chalulu, Changyao Chen, Charles Zablit, Chiara Marmo, Christian Lorentzen,
Christian Ritter, Christian Veenhuis, christianwaldmann, Christine P. Chai,
Claudio Salvatore Arcidiacono, Clément Verrier, crispinlogan, Da-Lan,
DanGonite57, Daniela Fernandes, DanielGaerber, darioka, Darren Nguyen,
davidblnc, david-cortes, David Gilbertson, David Poznik, Dayne, Dea María
Léon, Denis, Dev Khant, Dhanshree Arora, Diadochokinetic, diederikwp, Dimitri
Papadopoulos Orfanos, Dimitris Litsidis, drewhogg, Duarte OC, Dwight Lindquist,
Eden Brekke, Edern, Edoardo Abati, Eleanore Denies, EliaSchiavon, Emir,
ErmolaevPA, Fabrizio Damicelli, fcharras, Felipe Siola, Flynn,
francesco-tuveri, Franck Charras, ftorres16, Gael Varoquaux, Geevarghese
George, genvalen, GeorgiaMayDay, Gianr Lazz, Gleb Levitski, Glòria Macià
Muñoz, Guillaume Lemaitre, Guillem García Subies, Guitared, gunesbayir,
Haesun Park, Hansin Ahuja, Hao Chun Chang, Harsh Agrawal, harshit5674,
hasan-yaman, henrymooresc, Henry Sorsky, Hristo Vrigazov, htsedebenham, humahn,
i-aki-y, Ian Thompson, Ido M, Iglesys, Iliya Zhechev, Irene, ivanllt, Ivan
Sedykh, Jack McIvor, jakirkham, JanFidor, Jason G, Jérémie du Boisberranger,
Jiten Sidhpura, jkarolczak, João David, JohnathanPi, John Koumentis, John P,
John Pangas, johnthagen, Jordan Fleming, Joshua Choo Yun Keat, Jovan
Stojanovic, Juan Carlos Alfaro Jiménez, juanfe88, Juan Felipe Arias,
JuliaSchoepp, Julien Jerphanion, jygerardy, ka00ri, Kanishk Sachdev, Kanissh,
Kaushik Amar Das, Kendall, Kenneth Prabakaran, Kento Nozawa, kernc, Kevin
Roice, Kian Eliasi, Kilian Kluge, Kilian Lieret, Kirandevraj, Kraig, krishna
kumar, krishna vamsi, Kshitij Kapadni, Kshitij Mathur, Lauren Burke, Léonard
Binet, lingyi1110, Lisa Casino, Logan Thomas, Loic Esteve, Luciano Mantovani,
Lucy Liu, Maascha, Madhura Jayaratne, madinak, Maksym, Malte S. Kurz, Mansi
Agrawal, Marco Edward Gorelli, Marco Wurps, Maren Westermann, Maria Telenczuk,
Mario Kostelac, martin-kokos, Marvin Krawutschke, Masanori Kanazu, mathurinm,
Matt Haberland, mauroantonioserrano, Max Halford, Maxi Marufo, maximeSaur,
Maxim Smolskiy, Maxwell, m. bou, Meekail Zain, Mehgarg, mehmetcanakbay, Mia
Bajić, Michael Flaks, Michael Hornstein, Michel de Ruiter, Michelle Paradis,
Mikhail Iljin, Misa Ogura, Moritz Wilksch, mrastgoo, Naipawat Poolsawat, Naoise
Holohan, Nass, Nathan Jacobi, Nawazish Alam, Nguyễn Văn Diễn, Nicola
Fanelli, Nihal Thukarama Rao, Nikita Jare, nima10khodaveisi, Nima Sarajpoor,
nitinramvelraj, NNLNR, npache, Nwanna-Joseph, Nymark Kho, o-holman, Olivier
Grisel, Olle Lukowski, Omar Hassoun, Omar Salman, osman tamer, ouss1508,
Oyindamola Olatunji, PAB, Pandata, partev, Paulo Sergio Soares, Petar
Mlinarić, Peter Jansson, Peter Steinbach, Philipp Jung, Piet Brömmel, Pooja
M, Pooja Subramaniam, priyam kakati, puhuk, Rachel Freeland, Rachit Keerti Das,
Rafal Wojdyla, Raghuveer Bhat, Rahil Parikh, Ralf Gommers, ram vikram singh,
Ravi Makhija, Rehan Guha, Reshama Shaikh, Richard Klima, Rob Crockett, Robert
Hommes, Robert Juergens, Robin Lenz, Rocco Meli, Roman4oo, Ross Barnowski,
Rowan Mankoo, Rudresh Veerkhare, Rushil Desai, Sabri Monaf Sabri, Safikh,
Safiuddin Khaja, Salahuddin, Sam Adam Day, Sandra Yojana Meneses, Sandro
Ephrem, Sangam, SangamSwadik, SANJAI_3, SarahRemus, Sashka Warner, SavkoMax,
Scott Gigante, Scott Gustafson, Sean Atukorala, sec65, SELEE, seljaks, Shady el
Gewily, Shane, shellyfung, Shinsuke Mori, Shiva chauhan, Shoaib Khan, Shogo
Hida, Shrankhla Srivastava, Shuangchi He, Simon, sonnivs, Sortofamudkip,
Srinath Kailasa, Stanislav (Stanley) Modrak, Stefanie Molin, stellalin7,
Stéphane Collot, Steven Van Vaerenbergh, Steve Schmerler, Sven Stehle, Tabea
Kossen, TheDevPanda, the-syd-sre, Thijs van Weezel, Thomas Bonald, Thomas
Germer, Thomas J. Fan, Ti-Ion, Tim Head, Timofei Kornev, toastedyeast, Tobias
Pitters, Tom Dupré la Tour, tomiock, Tom Mathews, Tom McTiernan, tspeng, Tyler
Egashira, Valentin Laurent, Varun Jain, Vera Komeyer, Vicente Reyes-Puerta,
Vinayak Mehta, Vincent M, Vishal, Vyom Pathak, wattai, wchathura, WEN Hao,
William M, x110, Xiao Yuan, Xunius, yanhong-zhao-ef, Yusuf Raji, Z Adil Khwaja,
zeeshan lone
decomposition SparsePCA and class decomposition MiniBatchSparsePCA now implements an inverse transform function pr 23905 by user Guillaume Lemaitre glemaitre Enhancement class decomposition FastICA now allows the user to select how whitening is performed through the new whiten solver parameter which supports svd and eigh whiten solver defaults to svd although eigh may be faster and more memory efficient in cases where num features num samples pr 11860 by user Pierre Ablin pierreablin pr 22527 by user Meekail Zain micky774 and Thomas Fan Enhancement class decomposition LatentDirichletAllocation now preserves dtype for numpy float32 input pr 24528 by user Takeshi Oura takoika and user J r mie du Boisberranger jeremiedbb Fix Make sign of components deterministic in class decomposition SparsePCA pr 23935 by user Guillaume Lemaitre glemaitre API The n iter parameter of class decomposition MiniBatchSparsePCA is deprecated and replaced by the parameters max iter tol and max no improvement to be consistent with class decomposition MiniBatchDictionaryLearning n iter will be removed in version 1 3 pr 23726 by user Guillaume Lemaitre glemaitre API The n features attribute of class decomposition PCA is deprecated in favor of n features in and will be removed in 1 4 pr 24421 by user Kshitij Mathur Kshitij68 mod sklearn discriminant analysis MajorFeature class discriminant analysis LinearDiscriminantAnalysis now supports the Array API https data apis org array api latest for solver svd Array API support is considered experimental and might evolve without being subjected to our usual rolling deprecation cycle policy See ref array api for more details pr 22554 by Thomas Fan Fix Validate parameters only in fit and not in init for class discriminant analysis QuadraticDiscriminantAnalysis pr 24218 by user Stefanie Molin stefmolin mod sklearn ensemble MajorFeature class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor now support interaction 
constraints via the argument interaction cst of their constructors pr 21020 by user Christian Lorentzen lorentzenchr Using interaction constraints also makes fitting faster pr 24856 by user Christian Lorentzen lorentzenchr Feature Adds class weight to class ensemble HistGradientBoostingClassifier pr 22014 by Thomas Fan Efficiency Improve runtime performance of class ensemble IsolationForest by avoiding data copies pr 23252 by user Zhehao Liu MaxwellLZH Enhancement class ensemble StackingClassifier now accepts any kind of base estimator pr 24538 by user Guillem G Subies GuillemGSubies Enhancement Make it possible to pass the categorical features parameter of class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor as feature names pr 24889 by user Olivier Grisel ogrisel Enhancement class ensemble StackingClassifier now supports multilabel indicator target pr 24146 by user Nicolas Peretti nicoperetti user Nestor Navarro nestornav user Nati Tomattis natitomattis and user Vincent Maladiere Vincent Maladiere Enhancement class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingClassifier now accept their monotonic cst parameter to be passed as a dictionary in addition to the previously supported array like format Such dictionary have feature names as keys and one of 1 0 1 as value to specify monotonicity constraints for each feature pr 24855 by user Olivier Grisel ogrisel Enhancement Interaction constraints for class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor can now be specified as strings for two common cases no interactions and pairwise interactions pr 24849 by user Tim Head betatim Fix Fixed the issue where class ensemble AdaBoostClassifier outputs NaN in feature importance when fitted with very small sample weight pr 20415 by user Zhehao Liu MaxwellLZH Fix class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor no 
longer error when predicting on categories encoded as negative values and instead consider them a member of the missing category pr 24283 by Thomas Fan Fix class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor with verbose 1 print detailed timing information on computing histograms and finding best splits The time spent in the root node was previously missing and is now included in the printed information pr 24894 by user Christian Lorentzen lorentzenchr API Rename the constructor parameter base estimator to estimator in the following classes class ensemble BaggingClassifier class ensemble BaggingRegressor class ensemble AdaBoostClassifier class ensemble AdaBoostRegressor base estimator is deprecated in 1 2 and will be removed in 1 4 pr 23819 by user Adrian Trujillo trujillo9616 and user Edoardo Abati EdAbati API Rename the fitted attribute base estimator to estimator in the following classes class ensemble BaggingClassifier class ensemble BaggingRegressor class ensemble AdaBoostClassifier class ensemble AdaBoostRegressor class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble ExtraTreesClassifier class ensemble ExtraTreesRegressor class ensemble RandomTreesEmbedding class ensemble IsolationForest base estimator is deprecated in 1 2 and will be removed in 1 4 pr 23819 by user Adrian Trujillo trujillo9616 and user Edoardo Abati EdAbati mod sklearn feature selection Fix Fix a bug in func feature selection mutual info regression and func feature selection mutual info classif where the continuous features in X should be scaled to a unit variance independently if the target y is continuous or discrete pr 24747 by user Guillaume Lemaitre glemaitre mod sklearn gaussian process Fix Fix class gaussian process kernels Matern gradient computation with nu 0 5 for PyPy and possibly other non CPython interpreters pr 24245 by user Lo c Est ve lesteve Fix The fit method of class gaussian process 
GaussianProcessRegressor will not modify the input X in case a custom kernel is used with a diag method that returns part of the input X pr 24405 by user Omar Salman OmarManzoor mod sklearn impute Enhancement Added keep empty features parameter to class impute SimpleImputer class impute KNNImputer and class impute IterativeImputer preventing removal of features containing only missing values when transforming pr 16695 by user Vitor Santa Rosa vitorsrg mod sklearn inspection MajorFeature Extended func inspection partial dependence and class inspection PartialDependenceDisplay to handle categorical features pr 18298 by user Madhura Jayaratne madhuracj and user Guillaume Lemaitre glemaitre Fix class inspection DecisionBoundaryDisplay now raises error if input data is not 2 dimensional pr 25077 by user Arturo Amor ArturoAmorQ mod sklearn kernel approximation Enhancement class kernel approximation RBFSampler now preserves dtype for numpy float32 inputs pr 24317 by Tim Head betatim Enhancement class kernel approximation SkewedChi2Sampler now preserves dtype for numpy float32 inputs pr 24350 by user Rahil Parikh rprkh Enhancement class kernel approximation RBFSampler now accepts scale option for parameter gamma pr 24755 by user Gleb Levitski GLevV mod sklearn linear model Enhancement class linear model LogisticRegression class linear model LogisticRegressionCV class linear model GammaRegressor class linear model PoissonRegressor and class linear model TweedieRegressor got a new solver solver newton cholesky This is a 2nd order Newton optimisation routine that uses a Cholesky decomposition of the hessian matrix When n samples n features the newton cholesky solver has been observed to converge both faster and to a higher precision solution than the lbfgs solver on problems with one hot encoded categorical variables with some rare categorical levels pr 24637 and pr 24767 by user Christian Lorentzen lorentzenchr Enhancement class linear model GammaRegressor class linear model 
PoissonRegressor and class linear model TweedieRegressor can reach higher precision with the lbfgs solver in particular when tol is set to a tiny value Moreover verbose is now properly propagated to L BFGS B pr 23619 by user Christian Lorentzen lorentzenchr Fix class linear model SGDClassifier and class linear model SGDRegressor will raise an error when all the validation samples have zero sample weight pr 23275 by Zhehao Liu MaxwellLZH Fix class linear model SGDOneClassSVM no longer performs parameter validation in the constructor All validation is now handled in fit and partial fit pr 24433 by user Yogendrasingh iofall user Arisa Y arisayosh and user Tim Head betatim Fix Fix average loss calculation when early stopping is enabled in class linear model SGDRegressor and class linear model SGDClassifier Also updated the condition for early stopping accordingly pr 23798 by user Harsh Agrawal Harsh14901 API The default value for the solver parameter in class linear model QuantileRegressor will change from interior point to highs in version 1 4 pr 23637 by user Guillaume Lemaitre glemaitre API String option none is deprecated for penalty argument in class linear model LogisticRegression and will be removed in version 1 4 Use None instead pr 23877 by user Zhehao Liu MaxwellLZH API The default value of tol was changed from 1e 3 to 1e 4 for func linear model ridge regression class linear model Ridge and class linear model RidgeClassifier pr 24465 by user Christian Lorentzen lorentzenchr mod sklearn manifold Feature Adds option to use the normalized stress in class manifold MDS This is enabled by setting the new normalize parameter to True pr 10168 by user ukasz Borchmann Borchmann pr 12285 by user Matthias Miltenberger mattmilten pr 13042 by user Matthieu Parizy matthieu pa pr 18094 by user Roth E Conrad rotheconrad and pr 22562 by user Meekail Zain micky774 Enhancement Adds eigen tol parameter to class manifold SpectralEmbedding Both func manifold spectral embedding and 
class manifold SpectralEmbedding now propagate eigen tol to all choices of eigen solver Includes a new option eigen tol auto and begins deprecation to change the default from eigen tol 0 to eigen tol auto in version 1 3 pr 23210 by user Meekail Zain micky774 Enhancement class manifold Isomap now preserves dtype for np float32 inputs pr 24714 by user Rahil Parikh rprkh API Added an auto option to the normalized stress argument in class manifold MDS and func manifold smacof Note that normalized stress is only valid for non metric MDS therefore the auto option enables normalized stress when metric False and disables it when metric True auto will become the default value for normalized stress in version 1 4 pr 23834 by user Meekail Zain micky774 mod sklearn metrics Feature func metrics ConfusionMatrixDisplay from estimator func metrics ConfusionMatrixDisplay from predictions and meth metrics ConfusionMatrixDisplay plot accepts a text kw parameter which is passed to matplotlib s text function pr 24051 by Thomas Fan Feature func metrics class likelihood ratios is added to compute the positive and negative likelihood ratios derived from the confusion matrix of a binary classification problem pr 22518 by user Arturo Amor ArturoAmorQ Feature Add class metrics PredictionErrorDisplay to plot residuals vs predicted and actual vs predicted to qualitatively assess the behavior of a regressor The display can be created with the class methods func metrics PredictionErrorDisplay from estimator and func metrics PredictionErrorDisplay from predictions pr 18020 by user Guillaume Lemaitre glemaitre Feature func metrics roc auc score now supports micro averaging average micro for the One vs Rest multiclass case multi class ovr pr 24338 by user Arturo Amor ArturoAmorQ Enhancement Adds an auto option to eps in func metrics log loss This option will automatically set the eps value depending on the data type of y pred In addition the default value of eps is changed from 1e 15 to the new 
auto option pr 24354 by user Safiuddin Khaja Safikh and user gsiisg gsiisg Fix Allows csr matrix as input for parameter y true of the func metrics label ranking average precision score metric pr 23442 by user Sean Atukorala ShehanAT Fix func metrics ndcg score will now trigger a warning when the y true value contains a negative value Users may still use negative values but the result may not be between 0 and 1 Starting in v1 4 passing in negative values for y true will raise an error pr 22710 by user Conroy Trinh trinhcon and pr 23461 by user Meekail Zain micky774 Fix func metrics log loss with eps 0 now returns a correct value of 0 or np inf instead of nan for predictions at the boundaries 0 or 1 It also accepts integer input pr 24365 by user Christian Lorentzen lorentzenchr API The parameter sum over features of func metrics pairwise manhattan distances is deprecated and will be removed in 1 4 pr 24630 by user Rushil Desai rusdes mod sklearn model selection Feature Added the class class model selection LearningCurveDisplay that allows to make easy plotting of learning curves obtained by the function func model selection learning curve pr 24084 by user Guillaume Lemaitre glemaitre Fix For all SearchCV classes and scipy 1 10 rank corresponding to a nan score is correctly set to the maximum possible rank rather than np iinfo np int32 min pr 24141 by user Lo c Est ve lesteve Fix In both class model selection HalvingGridSearchCV and class model selection HalvingRandomSearchCV parameter combinations with a NaN score now share the lowest rank pr 24539 by user Tim Head betatim Fix For class model selection GridSearchCV and class model selection RandomizedSearchCV ranks corresponding to nan scores will all be set to the maximum possible rank pr 24543 by user Guillaume Lemaitre glemaitre mod sklearn multioutput Feature Added boolean verbose flag to classes class multioutput ClassifierChain and class multioutput RegressorChain pr 23977 by user Eric Fiegel efiegel user 
Chiara Marmo cmarmo user Lucy Liu lucyleeow and user Guillaume Lemaitre glemaitre mod sklearn naive bayes Feature Add methods predict joint log proba to all naive Bayes classifiers pr 23683 by user Andrey Melnik avm19 Enhancement A new parameter force alpha was added to class naive bayes BernoulliNB class naive bayes ComplementNB class naive bayes CategoricalNB and class naive bayes MultinomialNB allowing user to set parameter alpha to a very small number greater or equal 0 which was earlier automatically changed to 1e 10 instead pr 16747 by user arka204 pr 18805 by user hongshaoyang pr 22269 by user Meekail Zain micky774 mod sklearn neighbors Feature Adds new function func neighbors sort graph by row values to sort a CSR sparse graph such that each row is stored with increasing values This is useful to improve efficiency when using precomputed sparse distance matrices in a variety of estimators and avoid an EfficiencyWarning pr 23139 by Tom Dupre la Tour Efficiency class neighbors NearestCentroid is faster and requires less memory as it better leverages CPUs caches to compute predictions pr 24645 by user Olivier Grisel ogrisel Enhancement class neighbors KernelDensity bandwidth parameter now accepts definition using Scott s and Silverman s estimation methods pr 10468 by user Ruben icfly2 and pr 22993 by user Jovan Stojanovic jovan stojanovic Enhancement neighbors NeighborsBase now accepts Minkowski semi metric i e when math 0 p 1 for metric minkowski for algorithm auto or algorithm brute pr 24750 by user Rudresh Veerkhare RudreshVeerkhare Fix class neighbors NearestCentroid now raises an informative error message at fit time instead of failing with a low level error message at predict time pr 23874 by user Juan Gomez 2357juan Fix Set n jobs None by default instead of 1 for class neighbors KNeighborsTransformer and class neighbors RadiusNeighborsTransformer pr 24075 by user Valentin Laurent Valentin Laurent Enhancement class neighbors LocalOutlierFactor now 
preserves dtype for numpy float32 inputs pr 22665 by user Julien Jerphanion jjerphan mod sklearn neural network Fix class neural network MLPClassifier and class neural network MLPRegressor always expose the parameters best loss validation scores and best validation score best loss is set to None when early stopping True while validation scores and best validation score are set to None when early stopping False pr 24683 by user Guillaume Lemaitre glemaitre mod sklearn pipeline Enhancement meth pipeline FeatureUnion get feature names out can now be used when one of the transformers in the class pipeline FeatureUnion is passthrough pr 24058 by user Diederik Perdok diederikwp Enhancement The class pipeline FeatureUnion class now has a named transformers attribute for accessing transformers by name pr 20331 by user Christopher Flynn crflynn mod sklearn preprocessing Enhancement class preprocessing FunctionTransformer will always try to set n features in and feature names in regardless of the validate parameter pr 23993 by Thomas Fan Fix class preprocessing LabelEncoder correctly encodes NaNs in transform pr 22629 by Thomas Fan API The sparse parameter of class preprocessing OneHotEncoder is now deprecated and will be removed in version 1 4 Use sparse output instead pr 24412 by user Rushil Desai rusdes mod sklearn svm API The class weight attribute is now deprecated for class svm NuSVR class svm SVR class svm OneClassSVM pr 22898 by user Meekail Zain micky774 mod sklearn tree Enhancement func tree plot tree func tree export graphviz now uses a lower case x i to represent feature i pr 23480 by Thomas Fan mod sklearn utils Feature A new module exposes development tools to discover estimators i e func utils discovery all estimators displays i e func utils discovery all displays and functions i e func utils discovery all functions in scikit learn pr 21469 by user Guillaume Lemaitre glemaitre Enhancement func utils extmath randomized svd now accepts an argument lapack svd 
driver to specify the lapack driver used in the internal deterministic SVD used by the randomized SVD algorithm pr 20617 by user Srinath Kailasa skailasa Enhancement func utils validation column or 1d now accepts a dtype parameter to specific y s dtype pr 22629 by Thomas Fan Enhancement utils extmath cartesian now accepts arrays with different dtype and will cast the output to the most permissive dtype pr 25067 by user Guillaume Lemaitre glemaitre Fix func utils multiclass type of target now properly handles sparse matrices pr 14862 by user L onard Binet leonardbinet Fix HTML representation no longer errors when an estimator class is a value in get params pr 24512 by Thomas Fan Fix func utils estimator checks check estimator now takes into account the requires positive X tag correctly pr 24667 by Thomas Fan Fix func utils check array now supports Pandas Series with pd NA by raising a better error message or returning a compatible ndarray pr 25080 by Thomas Fan API The extra keyword parameters of func utils extmath density are deprecated and will be removed in 1 4 pr 24523 by user Mia Bajic clytaemnestra rubric Code and documentation contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1 1 including 2357juan 3lLobo Adam J Stewart Adam Kania Adam Li Aditya Anulekh Admir Demiraj adoublet Adrin Jalali Ahmedbgh Aiko Akshita Prasanth Ala Na Alessandro Miola Alex Alexandr Alexandre Perez Lebel Alex Buzenet Ali H El Kassas aman kumar Amit Bera Andr s Simon Andreas Grivas Andreas Mueller Andrew Wang angela maennel Aniket Shirsat Anthony22 dev Antony Lee anupam Apostolos Tsetoglou Aravindh R Artur Hermano Arturo Amor as 90 ashah002 Ashwin Mathur avm19 Azaria Gebremichael b0rxington Badr MOUFAD Bardiya Ak Bart omiej Go da BdeGraaff Benjamin Bossan Benjamin Carter berkecanrizai Bernd Fritzke Bhoomika Biswaroop Mitra Brandon TH Chen Brett Cannon Bsh cache missing carlo Carlos Ramos Carre o ceh chalulu Changyao Chen 
Charles Zablit Chiara Marmo Christian Lorentzen Christian Ritter Christian Veenhuis christianwaldmann Christine P Chai Claudio Salvatore Arcidiacono Cl ment Verrier crispinlogan Da Lan DanGonite57 Daniela Fernandes DanielGaerber darioka Darren Nguyen davidblnc david cortes David Gilbertson David Poznik Dayne Dea Mar a L on Denis Dev Khant Dhanshree Arora Diadochokinetic diederikwp Dimitri Papadopoulos Orfanos Dimitris Litsidis drewhogg Duarte OC Dwight Lindquist Eden Brekke Edern Edoardo Abati Eleanore Denies EliaSchiavon Emir ErmolaevPA Fabrizio Damicelli fcharras Felipe Siola Flynn francesco tuveri Franck Charras ftorres16 Gael Varoquaux Geevarghese George genvalen GeorgiaMayDay Gianr Lazz Gleb Levitski Gl ria Maci Mu oz Guillaume Lemaitre Guillem Garc a Subies Guitared gunesbayir Haesun Park Hansin Ahuja Hao Chun Chang Harsh Agrawal harshit5674 hasan yaman henrymooresc Henry Sorsky Hristo Vrigazov htsedebenham humahn i aki y Ian Thompson Ido M Iglesys Iliya Zhechev Irene ivanllt Ivan Sedykh Jack McIvor jakirkham JanFidor Jason G J r mie du Boisberranger Jiten Sidhpura jkarolczak Jo o David JohnathanPi John Koumentis John P John Pangas johnthagen Jordan Fleming Joshua Choo Yun Keat Jovan Stojanovic Juan Carlos Alfaro Jim nez juanfe88 Juan Felipe Arias JuliaSchoepp Julien Jerphanion jygerardy ka00ri Kanishk Sachdev Kanissh Kaushik Amar Das Kendall Kenneth Prabakaran Kento Nozawa kernc Kevin Roice Kian Eliasi Kilian Kluge Kilian Lieret Kirandevraj Kraig krishna kumar krishna vamsi Kshitij Kapadni Kshitij Mathur Lauren Burke L onard Binet lingyi1110 Lisa Casino Logan Thomas Loic Esteve Luciano Mantovani Lucy Liu Maascha Madhura Jayaratne madinak Maksym Malte S Kurz Mansi Agrawal Marco Edward Gorelli Marco Wurps Maren Westermann Maria Telenczuk Mario Kostelac martin kokos Marvin Krawutschke Masanori Kanazu mathurinm Matt Haberland mauroantonioserrano Max Halford Maxi Marufo maximeSaur Maxim Smolskiy Maxwell m bou Meekail Zain Mehgarg mehmetcanakbay Mia Baji Michael 
Flaks Michael Hornstein Michel de Ruiter Michelle Paradis Mikhail Iljin Misa Ogura Moritz Wilksch mrastgoo Naipawat Poolsawat Naoise Holohan Nass Nathan Jacobi Nawazish Alam Nguy n V n Di n Nicola Fanelli Nihal Thukarama Rao Nikita Jare nima10khodaveisi Nima Sarajpoor nitinramvelraj NNLNR npache Nwanna Joseph Nymark Kho o holman Olivier Grisel Olle Lukowski Omar Hassoun Omar Salman osman tamer ouss1508 Oyindamola Olatunji PAB Pandata partev Paulo Sergio Soares Petar Mlinari Peter Jansson Peter Steinbach Philipp Jung Piet Br mmel Pooja M Pooja Subramaniam priyam kakati puhuk Rachel Freeland Rachit Keerti Das Rafal Wojdyla Raghuveer Bhat Rahil Parikh Ralf Gommers ram vikram singh Ravi Makhija Rehan Guha Reshama Shaikh Richard Klima Rob Crockett Robert Hommes Robert Juergens Robin Lenz Rocco Meli Roman4oo Ross Barnowski Rowan Mankoo Rudresh Veerkhare Rushil Desai Sabri Monaf Sabri Safikh Safiuddin Khaja Salahuddin Sam Adam Day Sandra Yojana Meneses Sandro Ephrem Sangam SangamSwadik SANJAI 3 SarahRemus Sashka Warner SavkoMax Scott Gigante Scott Gustafson Sean Atukorala sec65 SELEE seljaks Shady el Gewily Shane shellyfung Shinsuke Mori Shiva chauhan Shoaib Khan Shogo Hida Shrankhla Srivastava Shuangchi He Simon sonnivs Sortofamudkip Srinath Kailasa Stanislav Stanley Modrak Stefanie Molin stellalin7 St phane Collot Steven Van Vaerenbergh Steve Schmerler Sven Stehle Tabea Kossen TheDevPanda the syd sre Thijs van Weezel Thomas Bonald Thomas Germer Thomas J Fan Ti Ion Tim Head Timofei Kornev toastedyeast Tobias Pitters Tom Dupr la Tour tomiock Tom Mathews Tom McTiernan tspeng Tyler Egashira Valentin Laurent Varun Jain Vera Komeyer Vicente Reyes Puerta Vinayak Mehta Vincent M Vishal Vyom Pathak wattai wchathura WEN Hao William M x110 Xiao Yuan Xunius yanhong zhao ef Yusuf Raji Z Adil Khwaja zeeshan lone |
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_1_0:
===========
Version 1.0
===========
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_0_0.py`.
.. include:: changelog_legend.inc
.. _changes_1_0_2:
Version 1.0.2
=============
**December 2021**
- |Fix| :class:`cluster.Birch`,
:class:`feature_selection.RFECV`, :class:`ensemble.RandomForestRegressor`,
:class:`ensemble.RandomForestClassifier`,
:class:`ensemble.GradientBoostingRegressor`, and
:class:`ensemble.GradientBoostingClassifier` no longer raise a warning when
fitted on a pandas DataFrame. :pr:`21578` by `Thomas Fan`_.
Changelog
---------
:mod:`sklearn.cluster`
......................
- |Fix| Fixed an infinite loop in :class:`cluster.SpectralClustering` by
moving an iteration counter from try to except.
:pr:`21271` by :user:`Tyler Martin <martintb>`.
:mod:`sklearn.datasets`
.......................
- |Fix| :func:`datasets.fetch_openml` is now thread safe. Data is first
downloaded to a temporary subfolder and then renamed.
:pr:`21833` by :user:`Siavash Rezazadeh <siavrez>`.
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed the constraint on the objective function of
:class:`decomposition.DictionaryLearning`,
:class:`decomposition.MiniBatchDictionaryLearning`, :class:`decomposition.SparsePCA`
and :class:`decomposition.MiniBatchSparsePCA` to be convex and match the referenced
article. :pr:`19210` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| :class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier`, :class:`ensemble.ExtraTreesRegressor`,
and :class:`ensemble.RandomTreesEmbedding` now raise a ``ValueError`` when
``bootstrap=False`` and ``max_samples`` is not ``None``.
:pr:`21295` by :user:`Haoyin Xu <PSSF23>`.
- |Fix| Solve a bug in :class:`ensemble.GradientBoostingClassifier` where the
exponential loss was computing the positive gradient instead of the
negative one.
:pr:`22050` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.feature_selection`
................................
- |Fix| Fixed :class:`feature_selection.SelectFromModel` by improving support
for base estimators that do not set `feature_names_in_`. :pr:`21991` by
`Thomas Fan`_.
:mod:`sklearn.linear_model`
...........................
- |Fix| Fix a bug in :class:`linear_model.RidgeClassifierCV` where the method
`predict` was performing an `argmax` on the scores obtained from
`decision_function` instead of returning the multilabel indicator matrix.
:pr:`19869` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :class:`linear_model.LassoLarsIC` now correctly computes AIC
and BIC. An error is now raised when `n_features > n_samples` and
when the noise variance is not provided.
:pr:`21481` by :user:`Guillaume Lemaitre <glemaitre>` and
:user:`Andrés Babino <ababino>`.
:mod:`sklearn.manifold`
.......................
- |Fix| Fixed an unnecessary error when fitting :class:`manifold.Isomap` with a
precomputed dense distance matrix where the neighbors graph has multiple
disconnected components. :pr:`21915` by `Tom Dupre la Tour`_.
:mod:`sklearn.metrics`
......................
- |Fix| All :class:`sklearn.metrics.DistanceMetric` subclasses now correctly support
read-only buffer attributes.
This fixes a regression introduced in 1.0.0 with respect to 0.24.2.
:pr:`21694` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| `sklearn.metrics.MinkowskiDistance` now accepts a weight
parameter that makes it possible to write code that behaves consistently both
with scipy 1.8 and earlier versions. In turn, this means that all
neighbors-based estimators (except those that use `algorithm="kd_tree"`) now
accept a weight parameter with `metric="minkowski"` to yield results that
are always consistent with `scipy.spatial.distance.cdist`.
:pr:`21741` by :user:`Olivier Grisel <ogrisel>`.
:mod:`sklearn.multiclass`
.........................
- |Fix| :meth:`multiclass.OneVsRestClassifier.predict_proba` does not error when
fitted on constant integer targets. :pr:`21871` by `Thomas Fan`_.
:mod:`sklearn.neighbors`
........................
- |Fix| :class:`neighbors.KDTree` and :class:`neighbors.BallTree` correctly
support read-only buffer attributes. :pr:`21845` by `Thomas Fan`_.
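A minimal sketch of what this enables (not part of the original changelog; assumes a scikit-learn release including this fix), e.g. querying a tree built on a non-writeable or memory-mapped array:

```python
# Sketch: KDTree now accepts read-only input buffers.
import numpy as np
from sklearn.neighbors import KDTree

X = np.random.RandomState(0).rand(10, 2)
X.setflags(write=False)  # simulate a read-only (e.g. memory-mapped) buffer

tree = KDTree(X)
dist, ind = tree.query(X[:1], k=3)
print(ind.shape)  # (1, 3)
```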
:mod:`sklearn.preprocessing`
............................
- |Fix| Fixes compatibility bug with NumPy 1.22 in :class:`preprocessing.OneHotEncoder`.
:pr:`21517` by `Thomas Fan`_.
:mod:`sklearn.tree`
...................
- |Fix| Prevents :func:`tree.plot_tree` from drawing out of the boundary of
the figure. :pr:`21917` by `Thomas Fan`_.
- |Fix| Support loading pickles of decision tree models when the pickle has
been generated on a platform with a different bitness. A typical example is
to train and pickle the model on 64 bit machine and load the model on a 32
bit machine for prediction. :pr:`21552` by :user:`Loïc Estève <lesteve>`.
:mod:`sklearn.utils`
....................
- |Fix| :func:`utils.estimator_html_repr` now escapes all the estimator
descriptions in the generated HTML. :pr:`21493` by
:user:`Aurélien Geron <ageron>`.
.. _changes_1_0_1:
Version 1.0.1
=============
**October 2021**
Fixed models
------------
- |Fix| Non-fit methods in the following classes do not raise a UserWarning
when fitted on DataFrames with valid feature names:
:class:`covariance.EllipticEnvelope`, :class:`ensemble.IsolationForest`,
:class:`ensemble.AdaBoostClassifier`, :class:`neighbors.KNeighborsClassifier`,
:class:`neighbors.KNeighborsRegressor`,
:class:`neighbors.RadiusNeighborsClassifier`,
:class:`neighbors.RadiusNeighborsRegressor`. :pr:`21199` by `Thomas Fan`_.
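A small sketch of the fixed behavior (not part of the original changelog; assumes a scikit-learn release including this fix): predicting on a DataFrame with the same columns used during fit no longer emits a spurious feature-name `UserWarning`.

```python
# Sketch: turn UserWarnings into errors to verify none is raised on predict.
import warnings
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier

df = pd.DataFrame({"height": [1.0, 2.0, 3.0, 4.0],
                   "weight": [4.0, 3.0, 2.0, 1.0]})
target = [0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=1).fit(df, target)
with warnings.catch_warnings():
    warnings.simplefilter("error", UserWarning)  # fail loudly on any warning
    preds = model.predict(df)
print(preds)
```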
:mod:`sklearn.calibration`
..........................
- |Fix| Fixed :class:`calibration.CalibratedClassifierCV` to take into account
`sample_weight` when computing the base estimator prediction when
`ensemble=False`.
:pr:`20638` by :user:`Julien Bohné <JulienB-78>`.
- |Fix| Fixed a bug in :class:`calibration.CalibratedClassifierCV` with
`method="sigmoid"` that was ignoring the `sample_weight` when computing the
Bayesian priors.
:pr:`21179` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.cluster`
......................
- |Fix| Fixed a bug in :class:`cluster.KMeans`, ensuring reproducibility and equivalence
between sparse and dense input. :pr:`21195`
by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| Fixed a bug that could produce a segfault in rare cases for
:class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`.
:pr:`21130` by :user:`Christian Lorentzen <lorentzenchr>`.
:mod:`sklearn.gaussian_process`
...............................
- |Fix| Compute `y_std` properly with multi-target in
:class:`sklearn.gaussian_process.GaussianProcessRegressor`, allowing
proper normalization in the multi-target setting.
:pr:`20761` by :user:`Patrick de C. T. R. Ferreira <patrickctrf>`.
:mod:`sklearn.feature_extraction`
.................................
- |Efficiency| Fixed an efficiency regression introduced in version 1.0.0 in the
`transform` method of :class:`feature_extraction.text.CountVectorizer` which no
longer checks for uppercase characters in the provided vocabulary. :pr:`21251`
by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer` and
:class:`feature_extraction.text.TfidfVectorizer` by raising an
error when `min_df` or `max_df` are floating-point numbers greater than 1.
:pr:`20752` by :user:`Alek Lefebvre <AlekLefebvre>`.
:mod:`sklearn.linear_model`
...........................
- |Fix| Improves stability of :class:`linear_model.LassoLars` for different
versions of openblas. :pr:`21340` by `Thomas Fan`_.
- |Fix| :class:`linear_model.LogisticRegression` now raises a better error
message when the solver does not support sparse matrices with int64 indices.
:pr:`21093` by `Tom Dupre la Tour`_.
:mod:`sklearn.neighbors`
........................
- |Fix| :class:`neighbors.KNeighborsClassifier`,
:class:`neighbors.KNeighborsRegressor`,
:class:`neighbors.RadiusNeighborsClassifier`,
:class:`neighbors.RadiusNeighborsRegressor` with `metric="precomputed"` raises
an error for `bsr` and `dok` sparse matrices in methods: `fit`, `kneighbors`
and `radius_neighbors`, due to handling of explicit zeros in `bsr` and `dok`
:term:`sparse graph` formats. :pr:`21199` by `Thomas Fan`_.
:mod:`sklearn.pipeline`
.......................
- |Fix| :meth:`pipeline.Pipeline.get_feature_names_out` correctly passes feature
names out from one step of a pipeline to the next. :pr:`21351` by
`Thomas Fan`_.
:mod:`sklearn.svm`
..................
- |Fix| :class:`svm.SVC` and :class:`svm.SVR` check for an inconsistency
in its internal representation and raise an error instead of segfaulting.
This fix also resolves
`CVE-2020-28975 <https://nvd.nist.gov/vuln/detail/CVE-2020-28975>`__.
:pr:`21336` by `Thomas Fan`_.
:mod:`sklearn.utils`
....................
- |Enhancement| `utils.validation._check_sample_weight` can perform a
non-negativity check on the sample weights. It can be turned on
using the ``only_non_negative`` bool parameter.
Estimators that check for non-negative weights are updated:
:class:`linear_model.LinearRegression` (here the previous
error message was misleading),
:class:`ensemble.AdaBoostClassifier`,
:class:`ensemble.AdaBoostRegressor`,
:class:`neighbors.KernelDensity`.
:pr:`20880` by :user:`Guillaume Lemaitre <glemaitre>`
and :user:`András Simon <simonandras>`.
- |Fix| Solve a bug in ``sklearn.utils.metaestimators.if_delegate_has_method``
where the underlying check for an attribute did not work with NumPy arrays.
:pr:`21145` by :user:`Zahlii <Zahlii>`.
Miscellaneous
.............
- |Fix| Fitting an estimator on a dataset that has no feature names, when the
estimator was previously fitted on a dataset with feature names, no longer keeps
the old feature names stored in the `feature_names_in_` attribute. :pr:`21389` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
.. _changes_1_0:
Version 1.0.0
=============
**September 2021**
Minimal dependencies
--------------------
Version 1.0.0 of scikit-learn requires Python 3.7+, NumPy 1.14.6+ and
SciPy 1.1.0+. The optional minimal dependency is Matplotlib 2.2.2+.
Enforcing keyword-only arguments
--------------------------------
In an effort to promote clear and non-ambiguous use of the library, most
constructor and function parameters must now be passed as keyword arguments
(i.e. using the `param=value` syntax) instead of positional. If a keyword-only
parameter is used as positional, a `TypeError` is now raised.
:issue:`15005` :pr:`20002` by `Joel Nothman`_, `Adrin Jalali`_, `Thomas Fan`_,
`Nicolas Hug`_, and `Tom Dupre la Tour`_. See `SLEP009
<https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep009/proposal.html>`_
for more details.
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Fix| :class:`manifold.TSNE` now avoids numerical underflow issues during
affinity matrix computation.
- |Fix| :class:`manifold.Isomap` now connects disconnected components of the
neighbors graph along some minimum distance pairs, instead of changing
all infinite distances to zero.
- |Fix| The splitting criterion of :class:`tree.DecisionTreeClassifier` and
:class:`tree.DecisionTreeRegressor` can be impacted by a fix in the handling
of rounding errors. Previously some extra spurious splits could occur.
- |Fix| :func:`model_selection.train_test_split` with a `stratify` parameter
and :class:`model_selection.StratifiedShuffleSplit` may lead to slightly
different results.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123456 is the *pull request* number, not the issue number.
- |API| The option for using the squared error via ``loss`` and
``criterion`` parameters was made more consistent. The preferred way is by
setting the value to `"squared_error"`. Old option names are still valid,
produce the same models, but are deprecated and will be removed in version
1.2.
:pr:`19310` by :user:`Christian Lorentzen <lorentzenchr>`.
- For :class:`ensemble.ExtraTreesRegressor`, `criterion="mse"` is deprecated,
use `"squared_error"` instead which is now the default.
- For :class:`ensemble.GradientBoostingRegressor`, `loss="ls"` is deprecated,
use `"squared_error"` instead which is now the default.
- For :class:`ensemble.RandomForestRegressor`, `criterion="mse"` is deprecated,
use `"squared_error"` instead which is now the default.
- For :class:`ensemble.HistGradientBoostingRegressor`, `loss="least_squares"`
is deprecated, use `"squared_error"` instead which is now the default.
- For :class:`linear_model.RANSACRegressor`, `loss="squared_loss"` is
deprecated, use `"squared_error"` instead.
- For :class:`linear_model.SGDRegressor`, `loss="squared_loss"` is
deprecated, use `"squared_error"` instead which is now the default.
- For :class:`tree.DecisionTreeRegressor`, `criterion="mse"` is deprecated,
use `"squared_error"` instead which is now the default.
- For :class:`tree.ExtraTreeRegressor`, `criterion="mse"` is deprecated,
use `"squared_error"` instead which is now the default.
- |API| The option for using the absolute error via ``loss`` and
``criterion`` parameters was made more consistent. The preferred way is by
setting the value to `"absolute_error"`. Old option names are still valid,
produce the same models, but are deprecated and will be removed in version
1.2.
:pr:`19733` by :user:`Christian Lorentzen <lorentzenchr>`.
- For :class:`ensemble.ExtraTreesRegressor`, `criterion="mae"` is deprecated,
use `"absolute_error"` instead.
- For :class:`ensemble.GradientBoostingRegressor`, `loss="lad"` is deprecated,
use `"absolute_error"` instead.
- For :class:`ensemble.RandomForestRegressor`, `criterion="mae"` is deprecated,
use `"absolute_error"` instead.
- For :class:`ensemble.HistGradientBoostingRegressor`,
`loss="least_absolute_deviation"` is deprecated, use `"absolute_error"`
instead.
- For :class:`linear_model.RANSACRegressor`, `loss="absolute_loss"` is
deprecated, use `"absolute_error"` instead which is now the default.
- For :class:`tree.DecisionTreeRegressor`, `criterion="mae"` is deprecated,
use `"absolute_error"` instead.
- For :class:`tree.ExtraTreeRegressor`, `criterion="mae"` is deprecated,
use `"absolute_error"` instead.
- |API| `np.matrix` usage is deprecated in 1.0 and will raise a `TypeError` in
1.2. :pr:`20165` by `Thomas Fan`_.
- |API| :term:`get_feature_names_out` has been added to the transformer API
to get the names of the output features. `get_feature_names` has in
turn been deprecated. :pr:`18444` by `Thomas Fan`_.
- |API| All estimators store `feature_names_in_` when fitted on pandas DataFrames.
These feature names are compared to names seen in non-`fit` methods, e.g.
`transform` and will raise a `FutureWarning` if they are not consistent.
These ``FutureWarning`` s will become ``ValueError`` s in 1.2. :pr:`18010` by
`Thomas Fan`_.
:mod:`sklearn.base`
...................
- |Fix| :func:`config_context` is now threadsafe. :pr:`18736` by `Thomas Fan`_.
:mod:`sklearn.calibration`
..........................
- |Feature| :func:`calibration.CalibrationDisplay` added to plot
calibration curves. :pr:`17443` by :user:`Lucy Liu <lucyleeow>`.
- |Fix| The ``predict`` and ``predict_proba`` methods of
:class:`calibration.CalibratedClassifierCV` can now properly be used on
prefitted pipelines. :pr:`19641` by :user:`Alek Lefebvre <AlekLefebvre>`.
- |Fix| Fixed an error when using a :class:`ensemble.VotingClassifier`
as `base_estimator` in :class:`calibration.CalibratedClassifierCV`.
:pr:`20087` by :user:`Clément Fauchereau <clement-f>`.
:mod:`sklearn.cluster`
......................
- |Efficiency| The ``"k-means++"`` initialization of :class:`cluster.KMeans`
and :class:`cluster.MiniBatchKMeans` is now faster, especially in multicore
settings. :pr:`19002` by :user:`Jon Crall <Erotemic>` and :user:`Jérémie du
Boisberranger <jeremiedbb>`.
- |Efficiency| :class:`cluster.KMeans` with `algorithm='elkan'` is now faster
in multicore settings. :pr:`19052` by
:user:`Yusuke Nagasaka <YusukeNagasaka>`.
- |Efficiency| :class:`cluster.MiniBatchKMeans` is now faster in multicore
settings. :pr:`17622` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Efficiency| :class:`cluster.OPTICS` can now cache the output of the
computation of the tree, using the `memory` parameter. :pr:`19024` by
:user:`Frankie Robertson <frankier>`.
- |Enhancement| The `predict` and `fit_predict` methods of
:class:`cluster.AffinityPropagation` now accept sparse input data.
:pr:`20117` by :user:`Venkatachalam Natchiappan <venkyyuvy>`
- |Fix| Fixed a bug in :class:`cluster.MiniBatchKMeans` where the sample
weights were partially ignored when the input is sparse. :pr:`17622` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Improved convergence detection based on center change in
:class:`cluster.MiniBatchKMeans` which was almost never achievable.
:pr:`17622` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :class:`cluster.AgglomerativeClustering` now supports readonly
memory-mapped datasets.
:pr:`19883` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| :class:`cluster.AgglomerativeClustering` correctly connects components
when connectivity and affinity are both precomputed and the number
of connected components is greater than 1. :pr:`20597` by
`Thomas Fan`_.
- |Fix| :class:`cluster.FeatureAgglomeration` does not accept a ``**params`` kwarg in
the ``fit`` function anymore, resulting in a more concise error message. :pr:`20899`
by :user:`Adam Li <adam2392>`.
- |Fix| Fixed a bug in :class:`cluster.KMeans`, ensuring reproducibility and equivalence
between sparse and dense input. :pr:`20200`
by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| :class:`cluster.Birch` attributes, `fit_` and `partial_fit_`, are
deprecated and will be removed in 1.2. :pr:`19297` by `Thomas Fan`_.
- |API| the default value for the `batch_size` parameter of
:class:`cluster.MiniBatchKMeans` was changed from 100 to 1024 due to
efficiency reasons. The `n_iter_` attribute of
:class:`cluster.MiniBatchKMeans` now reports the number of started epochs and
the `n_steps_` attribute reports the number of mini batches processed.
:pr:`17622` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| :func:`cluster.spectral_clustering` raises an improved error when passed
a `np.matrix`. :pr:`20560` by `Thomas Fan`_.
:mod:`sklearn.compose`
......................
- |Enhancement| :class:`compose.ColumnTransformer` now records the output
of each transformer in `output_indices_`. :pr:`18393` by
:user:`Luca Bittarello <lbittarello>`.
- |Enhancement| :class:`compose.ColumnTransformer` now allows DataFrame input to
have its columns appear in a changed order in `transform`. Further, columns that
are dropped will not be required in transform, and additional columns will be
ignored if `remainder='drop'`. :pr:`19263` by `Thomas Fan`_.
- |Enhancement| Adds `**predict_params` keyword argument to
:meth:`compose.TransformedTargetRegressor.predict` that passes keyword
argument to the regressor.
:pr:`19244` by :user:`Ricardo <ricardojnf>`.
- |Fix| `compose.ColumnTransformer.get_feature_names` supports
non-string feature names returned by any of its transformers. However, note
that ``get_feature_names`` is deprecated, use ``get_feature_names_out``
instead. :pr:`18459` by :user:`Albert Villanova del Moral <albertvillanova>`
and :user:`Alonso Silva Allende <alonsosilvaallende>`.
- |Fix| :class:`compose.TransformedTargetRegressor` now takes nD targets with
an adequate transformer.
:pr:`18898` by :user:`Oras Phongpanagnam <panangam>`.
- |API| Adds `verbose_feature_names_out` to :class:`compose.ColumnTransformer`.
This flag controls the prefixing of feature names out in
:term:`get_feature_names_out`. :pr:`18444` and :pr:`21080` by `Thomas Fan`_.
:mod:`sklearn.covariance`
.........................
- |Fix| Adds arrays check to :func:`covariance.ledoit_wolf` and
:func:`covariance.ledoit_wolf_shrinkage`. :pr:`20416` by :user:`Hugo Defois
<defoishugo>`.
- |API| Deprecates the following keys in `cv_results_`: `'mean_score'`,
`'std_score'`, and `'split(k)_score'` in favor of `'mean_test_score'`,
`'std_test_score'`, and `'split(k)_test_score'`. :pr:`20583` by `Thomas Fan`_.
:mod:`sklearn.datasets`
.......................
- |Enhancement| :func:`datasets.fetch_openml` now supports categories with
missing values when returning a pandas dataframe. :pr:`19365` by
`Thomas Fan`_ and :user:`Amanda Dsouza <amy12xx>` and
:user:`EL-ATEIF Sara <elateifsara>`.
- |Enhancement| :func:`datasets.fetch_kddcup99` raises a better message
when the cached file is invalid. :pr:`19669` by `Thomas Fan`_.
- |Enhancement| Replace usages of ``__file__`` related to resource file I/O
with ``importlib.resources`` to avoid the assumption that these resource
files (e.g. ``iris.csv``) already exist on a filesystem, and by extension
to enable compatibility with tools such as ``PyOxidizer``.
:pr:`20297` by :user:`Jack Liu <jackzyliu>`.
- |Fix| Shorten data file names in the openml tests to better support
installing on Windows and its default 260 character limit on file names.
:pr:`20209` by `Thomas Fan`_.
- |Fix| :func:`datasets.fetch_kddcup99` returns dataframes when
`return_X_y=True` and `as_frame=True`. :pr:`19011` by `Thomas Fan`_.
- |API| Deprecates `datasets.load_boston` in 1.0 and it will be removed
in 1.2. Alternative code snippets to load similar datasets are provided.
Please refer to the docstring of the function for details.
:pr:`20729` by `Guillaume Lemaitre`_.
:mod:`sklearn.decomposition`
............................
- |Enhancement| added a new approximate solver (randomized SVD, available with
`eigen_solver='randomized'`) to :class:`decomposition.KernelPCA`. This
significantly accelerates computation when the number of samples is much
larger than the desired number of components.
:pr:`12069` by :user:`Sylvain Marié <smarie>`.
- |Fix| Fixes incorrect multiple data-conversion warnings when clustering
boolean data. :pr:`19046` by :user:`Surya Prakash <jdsurya>`.
- |Fix| Fixed :func:`decomposition.dict_learning`, used by
:class:`decomposition.DictionaryLearning`, to ensure determinism of the
output. Achieved by flipping signs of the SVD output which is used to
initialize the code. :pr:`18433` by :user:`Bruno Charron <brcharron>`.
- |Fix| Fixed a bug in :class:`decomposition.MiniBatchDictionaryLearning`,
:class:`decomposition.MiniBatchSparsePCA` and
:func:`decomposition.dict_learning_online` where the update of the dictionary
was incorrect. :pr:`19198` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in :class:`decomposition.DictionaryLearning`,
:class:`decomposition.SparsePCA`,
:class:`decomposition.MiniBatchDictionaryLearning`,
:class:`decomposition.MiniBatchSparsePCA`,
:func:`decomposition.dict_learning` and
:func:`decomposition.dict_learning_online` where the restart of unused atoms
during the dictionary update was not working as expected. :pr:`19198` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| In :class:`decomposition.DictionaryLearning`,
:class:`decomposition.MiniBatchDictionaryLearning`,
:func:`decomposition.dict_learning` and
:func:`decomposition.dict_learning_online`, `transform_alpha` will be equal
to `alpha` instead of 1.0 by default starting from version 1.2 :pr:`19159` by
:user:`Benoît Malézieux <bmalezieux>`.
- |API| Rename variable names in :class:`decomposition.KernelPCA` to improve
readability. `lambdas_` and `alphas_` are renamed to `eigenvalues_`
and `eigenvectors_`, respectively. `lambdas_` and `alphas_` are
deprecated and will be removed in 1.2.
:pr:`19908` by :user:`Kei Ishikawa <kstoneriv3>`.
- |API| The `alpha` and `regularization` parameters of :class:`decomposition.NMF` and
:func:`decomposition.non_negative_factorization` are deprecated and will be removed
in 1.2. Use the new parameters `alpha_W` and `alpha_H` instead. :pr:`20512` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.dummy`
....................
- |API| Attribute `n_features_in_` in :class:`dummy.DummyClassifier` and
:class:`dummy.DummyRegressor` is deprecated and will be removed in 1.2.
:pr:`20960` by `Thomas Fan`_.
:mod:`sklearn.ensemble`
.......................
- |Enhancement| :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
:class:`~sklearn.ensemble.HistGradientBoostingRegressor` take cgroups quotas
into account when deciding the number of threads used by OpenMP. This
avoids performance problems caused by over-subscription when using those
classes in a docker container for instance. :pr:`20477`
by `Thomas Fan`_.
- |Enhancement| :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
:class:`~sklearn.ensemble.HistGradientBoostingRegressor` are no longer
experimental. They are now considered stable and are subject to the same
deprecation cycles as all other estimators. :pr:`19799` by `Nicolas Hug`_.
- |Enhancement| Improve the HTML rendering of the
:class:`ensemble.StackingClassifier` and :class:`ensemble.StackingRegressor`.
:pr:`19564` by `Thomas Fan`_.
- |Enhancement| Added Poisson criterion to
:class:`ensemble.RandomForestRegressor`. :pr:`19836` by :user:`Brian Sun
<bsun94>`.
- |Fix| Do not allow computing the out-of-bag (OOB) score in
:class:`ensemble.RandomForestClassifier` and
:class:`ensemble.ExtraTreesClassifier` with multiclass-multioutput target
since scikit-learn does not provide any metric supporting this type of
target. Additional private refactoring was performed.
:pr:`19162` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Improve numerical precision for weights boosting in
:class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`
to avoid underflows.
:pr:`10096` by :user:`Fenil Suchak <fenilsuchak>`.
- |Fix| Fixed the range of the argument ``max_samples`` to be ``(0.0, 1.0]``
in :class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`, where `max_samples=1.0` is
interpreted as using all `n_samples` for bootstrapping. :pr:`20159` by
:user:`murata-yu`.
- |Fix| Fixed a bug in :class:`ensemble.AdaBoostClassifier` and
:class:`ensemble.AdaBoostRegressor` where the `sample_weight` parameter
got overwritten during `fit`.
:pr:`20534` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| Removes `tol=None` option in
:class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`. Please use `tol=0` for
the same behavior. :pr:`19296` by `Thomas Fan`_.
:mod:`sklearn.feature_extraction`
.................................
- |Fix| Fixed a bug in :class:`feature_extraction.text.HashingVectorizer`
where some input strings would result in negative indices in the transformed
data. :pr:`19035` by :user:`Liu Yu <ly648499246>`.
- |Fix| Fixed a bug in :class:`feature_extraction.DictVectorizer` by raising an
error with unsupported value type.
:pr:`19520` by :user:`Jeff Zhao <kamiyaa>`.
- |Fix| Fixed a bug in :func:`feature_extraction.image.img_to_graph`
and :func:`feature_extraction.image.grid_to_graph` where singleton connected
components were not handled properly, resulting in a wrong vertex indexing.
:pr:`18964` by `Bertrand Thirion`_.
- |Fix| Raise a warning in :class:`feature_extraction.text.CountVectorizer`
with `lowercase=True` when there are vocabulary entries with uppercase
characters to avoid silent misses in the resulting feature vectors.
:pr:`19401` by :user:`Zito Relova <zitorelova>`
:mod:`sklearn.feature_selection`
................................
- |Feature| :func:`feature_selection.r_regression` computes Pearson's R
correlation coefficients between the features and the target.
:pr:`17169` by :user:`Dmytro Lituiev <DSLituiev>`
and :user:`Julien Jerphanion <jjerphan>`.
- |Enhancement| :func:`feature_selection.RFE.fit` accepts additional estimator
parameters that are passed directly to the estimator's `fit` method.
:pr:`20380` by :user:`Iván Pulido <ijpulidos>`, :user:`Felipe Bidu <fbidu>`,
:user:`Gil Rutter <g-rutter>`, and :user:`Adrin Jalali <adrinjalali>`.
- |Fix| Fix a bug in :func:`isotonic.isotonic_regression` where the
`sample_weight` passed by a user was overwritten during ``fit``.
:pr:`20515` by :user:`Carsten Allefeld <allefeld>`.
- |Fix| Change :func:`feature_selection.SequentialFeatureSelector` to
allow for unsupervised modelling, so that `fit` no longer requires
`y` validation and accepts `y=None`.
:pr:`19568` by :user:`Shyam Desai <ShyamDesai>`.
- |API| Raises an error in :class:`feature_selection.VarianceThreshold`
when the variance threshold is negative.
:pr:`20207` by :user:`Tomohiro Endo <europeanplaice>`
- |API| Deprecates `grid_scores_` in favor of split scores in `cv_results_` in
:class:`feature_selection.RFECV`. `grid_scores_` will be removed in
version 1.2.
:pr:`20161` by :user:`Shuhei Kayawari <wowry>` and :user:`arka204`.
:mod:`sklearn.inspection`
.........................
- |Enhancement| Add `max_samples` parameter in
:func:`inspection.permutation_importance`. It enables drawing a subset of the
samples to compute the permutation importance. This is useful to keep the
method tractable when evaluating feature importance on large datasets.
:pr:`20431` by :user:`Oliver Pfaffel <o1iv3r>`.
- |Enhancement| Add kwargs to format ICE and PD lines separately in partial
dependence plots `inspection.plot_partial_dependence` and
:meth:`inspection.PartialDependenceDisplay.plot`. :pr:`19428` by :user:`Mehdi
Hamoumi <mhham>`.
- |Fix| Allow multiple scorers input to
:func:`inspection.permutation_importance`. :pr:`19411` by :user:`Simona
Maggio <simonamaggio>`.
- |API| :class:`inspection.PartialDependenceDisplay` exposes a class method:
:func:`~inspection.PartialDependenceDisplay.from_estimator`.
`inspection.plot_partial_dependence` is deprecated in favor of the
class method and will be removed in 1.2. :pr:`20959` by `Thomas Fan`_.
:mod:`sklearn.kernel_approximation`
...................................
- |Fix| Fix a bug in :class:`kernel_approximation.Nystroem`
where the attribute `component_indices_` did not correspond to the subset of
sample indices used to generate the approximated kernel. :pr:`20554` by
:user:`Xiangyin Kong <kxytim>`.
:mod:`sklearn.linear_model`
...........................
- |MajorFeature| Added :class:`linear_model.QuantileRegressor` which implements
linear quantile regression with L1 penalty.
:pr:`9978` by :user:`David Dale <avidale>` and
:user:`Christian Lorentzen <lorentzenchr>`.
- |Feature| The new :class:`linear_model.SGDOneClassSVM` provides an SGD
implementation of the linear One-Class SVM. Combined with kernel
approximation techniques, this implementation approximates the solution of
a kernelized One Class SVM while benefiting from a linear
complexity in the number of samples.
:pr:`10027` by :user:`Albert Thomas <albertcthomas>`.
- |Feature| Added `sample_weight` parameter to
:class:`linear_model.LassoCV` and :class:`linear_model.ElasticNetCV`.
:pr:`16449` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Feature| Added new solver `lbfgs` (available with `solver="lbfgs"`)
and `positive` argument to :class:`linear_model.Ridge`. When `positive` is
set to `True`, forces the coefficients to be positive (only supported by
`lbfgs`). :pr:`20231` by :user:`Toshihiro Nakae <tnakae>`.
- |Efficiency| The implementation of :class:`linear_model.LogisticRegression`
has been optimised for dense matrices when using `solver='newton-cg'` and
`multi_class!='multinomial'`.
:pr:`19571` by :user:`Julien Jerphanion <jjerphan>`.
- |Enhancement| `fit` method preserves dtype for numpy.float32 in
:class:`linear_model.Lars`, :class:`linear_model.LassoLars`,
:class:`linear_model.LarsCV` and
:class:`linear_model.LassoLarsCV`. :pr:`20155` by :user:`Takeshi Oura
<takoika>`.
- |Enhancement| Validate user-supplied gram matrix passed to linear models
via the `precompute` argument. :pr:`19004` by :user:`Adam Midvidy <amidvidy>`.
- |Fix| :meth:`linear_model.ElasticNet.fit` no longer modifies `sample_weight`
in place. :pr:`19055` by `Thomas Fan`_.
- |Fix| :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` now
report a `dual_gap_` that corresponds to their objective. :pr:`19172`
by :user:`Mathurin Massias <mathurinm>`
- |Fix| `sample_weight` are now fully taken into account in linear models
when `normalize=True` for both feature centering and feature
scaling.
:pr:`19426` by :user:`Alexandre Gramfort <agramfort>` and
:user:`Maria Telenczuk <maikia>`.
- |Fix| Points with residuals equal to ``residual_threshold`` are now considered
as inliers for :class:`linear_model.RANSACRegressor`. This allows fitting
a model perfectly on some datasets when `residual_threshold=0`.
:pr:`19499` by :user:`Gregory Strubel <gregorystrubel>`.
- |Fix| Sample weight invariance for :class:`linear_model.Ridge` was fixed in
:pr:`19616` by :user:`Oliver Grisel <ogrisel>` and :user:`Christian Lorentzen
<lorentzenchr>`.
- |Fix| The dictionary `params` in :func:`linear_model.enet_path` and
:func:`linear_model.lasso_path` should only contain parameters of the
coordinate descent solver. Otherwise, an error will be raised.
:pr:`19391` by :user:`Shao Yang Hong <hongshaoyang>`.
- |API| Raise a warning in :class:`linear_model.RANSACRegressor` that from
version 1.2, `min_samples` needs to be set explicitly for models other than
:class:`linear_model.LinearRegression`. :pr:`19390` by :user:`Shao Yang Hong
<hongshaoyang>`.
- |API| The parameter ``normalize`` of :class:`linear_model.LinearRegression`
is deprecated and will be removed in 1.2. Motivation for this deprecation: the
``normalize`` parameter had no effect if ``fit_intercept`` was set
to False and was therefore deemed confusing. The behavior of the deprecated
``LinearModel(normalize=True)`` can be reproduced with a
:class:`~sklearn.pipeline.Pipeline` with ``LinearModel`` (where
``LinearModel`` is :class:`~linear_model.LinearRegression`,
:class:`~linear_model.Ridge`, :class:`~linear_model.RidgeClassifier`,
:class:`~linear_model.RidgeCV` or :class:`~linear_model.RidgeClassifierCV`)
as follows: ``make_pipeline(StandardScaler(with_mean=False),
LinearModel())``. The ``normalize`` parameter in
:class:`~linear_model.LinearRegression` was deprecated in :pr:`17743` by
:user:`Maria Telenczuk <maikia>` and :user:`Alexandre Gramfort <agramfort>`.
Same for :class:`~linear_model.Ridge`,
:class:`~linear_model.RidgeClassifier`, :class:`~linear_model.RidgeCV`, and
:class:`~linear_model.RidgeClassifierCV`, in: :pr:`17772` by :user:`Maria
Telenczuk <maikia>` and :user:`Alexandre Gramfort <agramfort>`. Same for
:class:`~linear_model.BayesianRidge`, :class:`~linear_model.ARDRegression`
in: :pr:`17746` by :user:`Maria Telenczuk <maikia>`. Same for
:class:`~linear_model.Lasso`, :class:`~linear_model.LassoCV`,
:class:`~linear_model.ElasticNet`, :class:`~linear_model.ElasticNetCV`,
:class:`~linear_model.MultiTaskLasso`,
:class:`~linear_model.MultiTaskLassoCV`,
:class:`~linear_model.MultiTaskElasticNet`,
:class:`~linear_model.MultiTaskElasticNetCV`, in: :pr:`17785` by :user:`Maria
Telenczuk <maikia>` and :user:`Alexandre Gramfort <agramfort>`.
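The documented replacement can be sketched as follows (assuming scikit-learn >= 1.0; the toy data is made up for illustration):

```python
# Sketch of the pipeline that reproduces the deprecated normalize=True,
# assuming scikit-learn >= 1.0.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 35.0]])
y = np.array([1.0, 2.0, 3.0])

# StandardScaler(with_mean=False) followed by the linear model reproduces
# the behavior of LinearModel(normalize=True).
model = make_pipeline(StandardScaler(with_mean=False), LinearRegression())
pred = model.fit(X, y).predict(X)
```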
- |API| The ``normalize`` parameter of
:class:`~linear_model.OrthogonalMatchingPursuit` and
:class:`~linear_model.OrthogonalMatchingPursuitCV` will default to False in
1.2 and will be removed in 1.4. :pr:`17750` by :user:`Maria Telenczuk
<maikia>` and :user:`Alexandre Gramfort <agramfort>`. Same for
:class:`~linear_model.Lars` :class:`~linear_model.LarsCV`
:class:`~linear_model.LassoLars` :class:`~linear_model.LassoLarsCV`
:class:`~linear_model.LassoLarsIC`, in :pr:`17769` by :user:`Maria Telenczuk
<maikia>` and :user:`Alexandre Gramfort <agramfort>`.
- |API| Keyword validation has moved from `__init__` and `set_params` to `fit`
for the following estimators conforming to scikit-learn's conventions:
:class:`~linear_model.SGDClassifier`,
:class:`~linear_model.SGDRegressor`,
:class:`~linear_model.SGDOneClassSVM`,
:class:`~linear_model.PassiveAggressiveClassifier`, and
:class:`~linear_model.PassiveAggressiveRegressor`.
:pr:`20683` by `Guillaume Lemaitre`_.
:mod:`sklearn.manifold`
.......................
- |Enhancement| Implement `'auto'` heuristic for the `learning_rate` in
:class:`manifold.TSNE`. It will become default in 1.2. The default
initialization will change to `pca` in 1.2. PCA initialization will
be scaled to have standard deviation 1e-4 in 1.2.
:pr:`19491` by :user:`Dmitry Kobak <dkobak>`.
- |Fix| Change numerical precision to prevent underflow issues
during affinity matrix computation for :class:`manifold.TSNE`.
:pr:`19472` by :user:`Dmitry Kobak <dkobak>`.
- |Fix| :class:`manifold.Isomap` now uses `scipy.sparse.csgraph.shortest_path`
to compute the graph shortest path. It also connects disconnected components
of the neighbors graph along some minimum distance pairs, instead of changing
all infinite distances to zero. :pr:`20531` by `Roman Yurchak`_ and `Tom
Dupre la Tour`_.
- |Fix| Decrease the numerical default tolerance in the lobpcg call
in :func:`manifold.spectral_embedding` to prevent numerical instability.
:pr:`21194` by :user:`Andrew Knyazev <lobpcg>`.
:mod:`sklearn.metrics`
......................
- |Feature| :func:`metrics.mean_pinball_loss` exposes the pinball loss for
quantile regression. :pr:`19415` by :user:`Xavier Dupré <sdpython>`
and :user:`Oliver Grisel <ogrisel>`.
- |Feature| :func:`metrics.d2_tweedie_score` calculates the :math:`D^2`
  regression score for Tweedie deviances with power parameter ``power``. This
  is a generalization of `r2_score` and can be interpreted as the percentage
  of Tweedie deviance explained.
:pr:`17036` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Feature| :func:`metrics.mean_squared_log_error` now supports
`squared=False`.
:pr:`20326` by :user:`Uttam kumar <helper-uttam>`.
- |Efficiency| Improved speed of :func:`metrics.confusion_matrix` when labels
are integral.
:pr:`9843` by :user:`Jon Crall <Erotemic>`.
- |Enhancement| :func:`metrics.hinge_loss` now raises an error when
  ``pred_decision`` is 1d for a multiclass classification problem, or when the
  ``pred_decision`` parameter is inconsistent with the ``labels`` parameter.
  :pr:`19643` by :user:`Pierre Attard <PierreAttard>`.
- |Fix| :meth:`metrics.ConfusionMatrixDisplay.plot` uses the correct max
for colormap. :pr:`19784` by `Thomas Fan`_.
- |Fix| Samples with zero `sample_weight` values do not affect the results
from :func:`metrics.det_curve`, :func:`metrics.precision_recall_curve`
and :func:`metrics.roc_curve`.
:pr:`18328` by :user:`Albert Villanova del Moral <albertvillanova>` and
:user:`Alonso Silva Allende <alonsosilvaallende>`.
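With this fix, zero-weighted samples behave exactly as if they were absent. A small sketch with made-up scores, comparing the weighted curve against simply dropping the zero-weight samples:

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

y = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.7])
w = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])  # last two samples ignored

# Weighted curve with zero weights vs. curve on the kept samples only.
fpr_w, tpr_w, _ = roc_curve(y, scores, sample_weight=w)
fpr_d, tpr_d, _ = roc_curve(y[:4], scores[:4])
print(np.isclose(auc(fpr_w, tpr_w), auc(fpr_d, tpr_d)))  # True
```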
- |Fix| Avoid overflow in :func:`metrics.adjusted_rand_score` with
  large amounts of data. :pr:`20312` by :user:`Divyanshu Deoli
  <divyanshudeoli>`.
- |API| :class:`metrics.ConfusionMatrixDisplay` exposes two class methods
  :func:`~metrics.ConfusionMatrixDisplay.from_estimator` and
  :func:`~metrics.ConfusionMatrixDisplay.from_predictions`, allowing creation
  of a confusion matrix plot from an estimator or from predictions.
  `metrics.plot_confusion_matrix` is deprecated in favor of these two
  class methods and will be removed in 1.2.
  :pr:`18543` by `Guillaume Lemaitre`_.
- |API| :class:`metrics.PrecisionRecallDisplay` exposes two class methods
  :func:`~metrics.PrecisionRecallDisplay.from_estimator` and
  :func:`~metrics.PrecisionRecallDisplay.from_predictions`, allowing creation
  of a precision-recall curve from an estimator or from predictions.
  `metrics.plot_precision_recall_curve` is deprecated in favor of these
  two class methods and will be removed in 1.2.
  :pr:`20552` by `Guillaume Lemaitre`_.
- |API| :class:`metrics.DetCurveDisplay` exposes two class methods
  :func:`~metrics.DetCurveDisplay.from_estimator` and
  :func:`~metrics.DetCurveDisplay.from_predictions`, allowing creation
  of a DET curve from an estimator or from predictions.
  `metrics.plot_det_curve` is deprecated in favor of these two
  class methods and will be removed in 1.2.
  :pr:`19278` by `Guillaume Lemaitre`_.
:mod:`sklearn.mixture`
......................
- |Fix| Ensure that the best parameters are set appropriately
  in the case of divergence for :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`.
:pr:`20030` by :user:`Tingshan Liu <tliu68>` and
:user:`Benjamin Pedigo <bdpedigo>`.
:mod:`sklearn.model_selection`
..............................
- |Feature| Added :class:`model_selection.StratifiedGroupKFold`, which combines
  :class:`model_selection.StratifiedKFold` and
  :class:`model_selection.GroupKFold`, providing the ability to split data
  while preserving the class distribution in each split and keeping each
  group within a single split.
:pr:`18649` by :user:`Leandro Hermida <hermidalc>` and
:user:`Rodion Martynov <marrodion>`.
- |Enhancement| Warn only once in the main process for per-split fit failures
  in cross-validation. :pr:`20619` by :user:`Loïc Estève <lesteve>`.
- |Enhancement| The `model_selection.BaseShuffleSplit` base class is
now public. :pr:`20056` by :user:`pabloduque0`.
- |Fix| Avoid premature overflow in :func:`model_selection.train_test_split`.
:pr:`20904` by :user:`Tomasz Jakubek <t-jakubek>`.
:mod:`sklearn.naive_bayes`
..........................
- |Fix| The `fit` and `partial_fit` methods of the discrete naive Bayes
classifiers (:class:`naive_bayes.BernoulliNB`,
:class:`naive_bayes.CategoricalNB`, :class:`naive_bayes.ComplementNB`,
and :class:`naive_bayes.MultinomialNB`) now correctly handle the degenerate
case of a single class in the training set.
:pr:`18925` by :user:`David Poznik <dpoznik>`.
- |API| The attribute ``sigma_`` is now deprecated in
:class:`naive_bayes.GaussianNB` and will be removed in 1.2.
Use ``var_`` instead.
:pr:`18842` by :user:`Hong Shao Yang <hongshaoyang>`.
:mod:`sklearn.neighbors`
........................
- |Enhancement| The construction of :class:`neighbors.KDTree` and
  :class:`neighbors.BallTree` has been improved: the worst-case time
  complexity is reduced from :math:`\mathcal{O}(n^2)` to :math:`\mathcal{O}(n)`.
  :pr:`19473` by :user:`jiefangxuanyan <jiefangxuanyan>` and
  :user:`Julien Jerphanion <jjerphan>`.
- |Fix| `neighbors.DistanceMetric` subclasses now support readonly
  memory-mapped datasets. :pr:`19883` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| :class:`neighbors.NearestNeighbors`, :class:`neighbors.KNeighborsClassifier`,
  :class:`neighbors.RadiusNeighborsClassifier`, :class:`neighbors.KNeighborsRegressor`
  and :class:`neighbors.RadiusNeighborsRegressor` no longer validate `weights`
  in `__init__` and validate `weights` in `fit` instead. :pr:`20072` by
  :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.
- |API| The parameter `kwargs` of :class:`neighbors.RadiusNeighborsClassifier` is
deprecated and will be removed in 1.2.
:pr:`20842` by :user:`Juan Martín Loyola <jmloyola>`.
:mod:`sklearn.neural_network`
.............................
- |Fix| :class:`neural_network.MLPClassifier` and
:class:`neural_network.MLPRegressor` now correctly support continued training
when loading from a pickled file. :pr:`19631` by `Thomas Fan`_.
:mod:`sklearn.pipeline`
.......................
- |API| The `predict_proba` and `predict_log_proba` methods of the
:class:`pipeline.Pipeline` now support passing prediction kwargs to the final
estimator. :pr:`19790` by :user:`Christopher Flynn <crflynn>`.
:mod:`sklearn.preprocessing`
............................
- |Feature| The new :class:`preprocessing.SplineTransformer` is a feature
preprocessing tool for the generation of B-splines, parametrized by the
polynomial ``degree`` of the splines, number of knots ``n_knots`` and knot
positioning strategy ``knots``.
:pr:`18368` by :user:`Christian Lorentzen <lorentzenchr>`.
:class:`preprocessing.SplineTransformer` also supports periodic
splines via the ``extrapolation`` argument.
:pr:`19483` by :user:`Malte Londschien <mlondschien>`.
:class:`preprocessing.SplineTransformer` supports sample weights for
knot position strategy ``"quantile"``.
:pr:`20526` by :user:`Malte Londschien <mlondschien>`.
- |Feature| :class:`preprocessing.OrdinalEncoder` supports passing through
missing values by default. :pr:`19069` by `Thomas Fan`_.
- |Feature| :class:`preprocessing.OneHotEncoder` now supports
`handle_unknown='ignore'` and dropping categories. :pr:`19041` by
`Thomas Fan`_.
- |Feature| :class:`preprocessing.PolynomialFeatures` now supports passing
a tuple to `degree`, i.e. `degree=(min_degree, max_degree)`.
:pr:`20250` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Efficiency| :class:`preprocessing.StandardScaler` is faster and more memory
efficient. :pr:`20652` by `Thomas Fan`_.
- |Efficiency| Changed the ``algorithm`` argument of the :class:`cluster.KMeans`
  used internally by :class:`preprocessing.KBinsDiscretizer` from ``auto`` to
  ``full``. :pr:`19934` by :user:`Gleb Levitskiy <GLevV>`.
- |Efficiency| The implementation of `fit` for
:class:`preprocessing.PolynomialFeatures` transformer is now faster. This is
especially noticeable on large sparse input. :pr:`19734` by :user:`Fred
Robinson <frrad>`.
- |Fix| The :func:`preprocessing.StandardScaler.inverse_transform` method
  now raises an error when the input data is 1D. :pr:`19752` by :user:`Zhehao Liu
  <Max1993Liu>`.
- |Fix| :func:`preprocessing.scale`, :class:`preprocessing.StandardScaler`
and similar scalers detect near-constant features to avoid scaling them to
very large values. This problem happens in particular when using a scaler on
sparse data with a constant column with sample weights, in which case
centering is typically disabled. :pr:`19527` by :user:`Oliver Grisel
<ogrisel>` and :user:`Maria Telenczuk <maikia>` and :pr:`19788` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :meth:`preprocessing.StandardScaler.inverse_transform` now
correctly handles integer dtypes. :pr:`19356` by :user:`makoeppel`.
- |Fix| :meth:`preprocessing.OrdinalEncoder.inverse_transform` does not
  support sparse matrices and now raises the appropriate error message.
  :pr:`19879` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| The `fit` method of :class:`preprocessing.OrdinalEncoder` no longer
  raises an error when `handle_unknown='ignore'` and unknown categories are
  given to `fit`.
  :pr:`19906` by :user:`Zhehao Liu <MaxwellLZH>`.
- |Fix| Fix a regression in :class:`preprocessing.OrdinalEncoder` where large
  Python numeric values would raise an error due to overflow when cast to a C
  type (`np.float64` or `np.int64`).
:pr:`20727` by `Guillaume Lemaitre`_.
- |Fix| :class:`preprocessing.FunctionTransformer` does not set `n_features_in_`
based on the input to `inverse_transform`. :pr:`20961` by `Thomas Fan`_.
- |API| The `n_input_features_` attribute of
:class:`preprocessing.PolynomialFeatures` is deprecated in favor of
`n_features_in_` and will be removed in 1.2. :pr:`20240` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.svm`
...................
- |API| The parameter `**params` of :func:`svm.OneClassSVM.fit` is
deprecated and will be removed in 1.2.
:pr:`20843` by :user:`Juan Martín Loyola <jmloyola>`.
:mod:`sklearn.tree`
...................
- |Enhancement| Add `fontname` argument in :func:`tree.export_graphviz`
for non-English characters. :pr:`18959` by :user:`Zero <Zeroto521>`
and :user:`wstates <wstates>`.
- |Fix| Improves compatibility of :func:`tree.plot_tree` with high DPI screens.
:pr:`20023` by `Thomas Fan`_.
- |Fix| Fixed a bug in :class:`tree.DecisionTreeClassifier` and
  :class:`tree.DecisionTreeRegressor` where a node could be split even though
  it should not have been, due to incorrect handling of rounding errors.
  :pr:`19336` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |API| The `n_features_` attribute of :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and
:class:`tree.ExtraTreeRegressor` is deprecated in favor of `n_features_in_`
and will be removed in 1.2. :pr:`20272` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.utils`
....................
- |Enhancement| Deprecated the default value `random_state=0` in
  :func:`~sklearn.utils.extmath.randomized_svd`. Starting in 1.2,
  the default value of `random_state` will be `None`.
  :pr:`19459` by :user:`Cindy Bezuidenhout <cinbez>` and
  :user:`Clifford Akai-Nettey <cliffordEmmanuel>`.
- |Enhancement| Added helper decorator :func:`utils.metaestimators.available_if`
  to provide flexibility in meta-estimators, making methods available or
  unavailable based on estimator state in a more readable way.
:pr:`19948` by `Joel Nothman`_.
- |Enhancement| :func:`utils.validation.check_is_fitted` now uses
``__sklearn_is_fitted__`` if available, instead of checking for attributes
ending with an underscore. This also makes :class:`pipeline.Pipeline` and
:class:`preprocessing.FunctionTransformer` pass
``check_is_fitted(estimator)``. :pr:`20657` by `Adrin Jalali`_.
- |Fix| Fixed a bug in :func:`utils.sparsefuncs.mean_variance_axis` where the
precision of the computed variance was very poor when the real variance is
exactly zero. :pr:`19766` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| The docstrings of properties that are decorated with
:func:`utils.deprecated` are now properly wrapped. :pr:`20385` by `Thomas
Fan`_.
- |Fix| `utils.stats._weighted_percentile` now correctly ignores
  zero-weighted observations smaller than the smallest observation with
  positive weight for ``percentile=0``. Affected classes are
  :class:`dummy.DummyRegressor` for ``quantile=0`` and
  `ensemble.HuberLossFunction` and `ensemble.QuantileLossFunction`
  for ``alpha=0``. :pr:`20528` by :user:`Malte Londschien <mlondschien>`.
- |Fix| :func:`utils._safe_indexing` explicitly takes a dataframe copy when
  integer indices are provided, to avoid raising a warning from pandas. This
warning was previously raised in resampling utilities and functions using
those utilities (e.g. :func:`model_selection.train_test_split`,
:func:`model_selection.cross_validate`,
:func:`model_selection.cross_val_score`,
:func:`model_selection.cross_val_predict`).
:pr:`20673` by :user:`Joris Van den Bossche <jorisvandenbossche>`.
- |Fix| Fix a regression in `utils.is_scalar_nan` where large Python
numbers would raise an error due to overflow in C types (`np.float64` or
`np.int64`).
:pr:`20727` by `Guillaume Lemaitre`_.
- |Fix| Support for `np.matrix` is deprecated in
:func:`~sklearn.utils.check_array` in 1.0 and will raise a `TypeError` in
1.2. :pr:`20165` by `Thomas Fan`_.
- |API| `utils._testing.assert_warns` and `utils._testing.assert_warns_message`
  are deprecated in 1.0 and will be removed in 1.2. Use the `pytest.warns`
  context manager instead. Note that these functions were neither documented
  nor part of the public API. :pr:`20521` by :user:`Olivier Grisel <ogrisel>`.
- |API| Fixed several bugs in `utils.graph.graph_shortest_path`, which is
now deprecated. Use `scipy.sparse.csgraph.shortest_path` instead. :pr:`20531`
by `Tom Dupre la Tour`_.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.24, including:
Abdulelah S. Al Mesfer, Abhinav Gupta, Adam J. Stewart, Adam Li, Adam Midvidy,
Adrian Garcia Badaracco, Adrian Sadłocha, Adrin Jalali, Agamemnon Krasoulis,
Alberto Rubiales, Albert Thomas, Albert Villanova del Moral, Alek Lefebvre,
Alessia Marcolini, Alexandr Fonari, Alihan Zihna, Aline Ribeiro de Almeida,
Amanda, Amanda Dsouza, Amol Deshmukh, Ana Pessoa, Anavelyz, Andreas Mueller,
Andrew Delong, Ashish, Ashvith Shetty, Atsushi Nukariya, Aurélien Geron, Avi
Gupta, Ayush Singh, baam, BaptBillard, Benjamin Pedigo, Bertrand Thirion,
Bharat Raghunathan, bmalezieux, Brian Rice, Brian Sun, Bruno Charron, Bryan
Chen, bumblebee, caherrera-meli, Carsten Allefeld, CeeThinwa, Chiara Marmo,
chrissobel, Christian Lorentzen, Christopher Yeh, Chuliang Xiao, Clément
Fauchereau, cliffordEmmanuel, Conner Shen, Connor Tann, David Dale, David Katz,
David Poznik, Dimitri Papadopoulos Orfanos, Divyanshu Deoli, dmallia17,
Dmitry Kobak, DS_anas, Eduardo Jardim, EdwinWenink, EL-ATEIF Sara, Eleni
Markou, EricEllwanger, Eric Fiegel, Erich Schubert, Ezri-Mudde, Fatos Morina,
Felipe Rodrigues, Felix Hafner, Fenil Suchak, flyingdutchman23, Flynn, Fortune
Uwha, Francois Berenger, Frankie Robertson, Frans Larsson, Frederick Robinson,
frellwan, Gabriel S Vicente, Gael Varoquaux, genvalen, Geoffrey Thomas,
geroldcsendes, Gleb Levitskiy, Glen, Glòria Macià Muñoz, gregorystrubel,
groceryheist, Guillaume Lemaitre, guiweber, Haidar Almubarak, Hans Moritz
Günther, Haoyin Xu, Harris Mirza, Harry Wei, Harutaka Kawamura, Hassan
Alsawadi, Helder Geovane Gomes de Lima, Hugo DEFOIS, Igor Ilic, Ikko Ashimine,
Isaack Mungui, Ishaan Bhat, Ishan Mishra, Iván Pulido, iwhalvic, J Alexander,
Jack Liu, James Alan Preiss, James Budarz, James Lamb, Jannik, Jeff Zhao,
Jennifer Maldonado, Jérémie du Boisberranger, Jesse Lima, Jianzhu Guo, jnboehm,
Joel Nothman, JohanWork, John Paton, Jonathan Schneider, Jon Crall, Jon Haitz
Legarreta Gorroño, Joris Van den Bossche, José Manuel Nápoles Duarte, Juan
Carlos Alfaro Jiménez, Juan Martin Loyola, Julien Jerphanion, Julio Batista
Silva, julyrashchenko, JVM, Kadatatlu Kishore, Karen Palacio, Kei Ishikawa,
kmatt10, kobaski, Kot271828, Kunj, KurumeYuta, kxytim, lacrosse91, LalliAcqua,
Laveen Bagai, Leonardo Rocco, Leonardo Uieda, Leopoldo Corona, Loic Esteve,
LSturtew, Luca Bittarello, Luccas Quadros, Lucy Jiménez, Lucy Liu, ly648499246,
Mabu Manaileng, Manimaran, makoeppel, Marco Gorelli, Maren Westermann,
Mariangela, Maria Telenczuk, marielaraj, Martin Hirzel, Mateo Noreña, Mathieu
Blondel, Mathis Batoul, mathurinm, Matthew Calcote, Maxime Prieur, Maxwell,
Mehdi Hamoumi, Mehmet Ali Özer, Miao Cai, Michal Karbownik, michalkrawczyk,
Mitzi, mlondschien, Mohamed Haseeb, Mohamed Khoualed, Muhammad Jarir Kanji,
murata-yu, Nadim Kawwa, Nanshan Li, naozin555, Nate Parsons, Neal Fultz, Nic
Annau, Nicolas Hug, Nicolas Miller, Nico Stefani, Nigel Bosch, Nikita Titov,
Nodar Okroshiashvili, Norbert Preining, novaya, Ogbonna Chibuike Stephen,
OGordon100, Oliver Pfaffel, Olivier Grisel, Oras Phongpanangam, Pablo Duque,
Pablo Ibieta-Jimenez, Patric Lacouth, Paulo S. Costa, Paweł Olszewski, Peter
Dye, PierreAttard, Pierre-Yves Le Borgne, PranayAnchuri, Prince Canuma,
putschblos, qdeffense, RamyaNP, ranjanikrishnan, Ray Bell, Rene Jean Corneille,
Reshama Shaikh, ricardojnf, RichardScottOZ, Rodion Martynov, Rohan Paul, Roman
Lutz, Roman Yurchak, Samuel Brice, Sandy Khosasi, Sean Benhur J, Sebastian
Flores, Sebastian Pölsterl, Shao Yang Hong, shinehide, shinnar, shivamgargsya,
Shooter23, Shuhei Kayawari, Shyam Desai, simonamaggio, Sina Tootoonian,
solosilence, Steven Kolawole, Steve Stagg, Surya Prakash, swpease, Sylvain
Marié, Takeshi Oura, Terence Honles, TFiFiE, Thomas A Caswell, Thomas J. Fan,
Tim Gates, TimotheeMathieu, Timothy Wolodzko, Tim Vink, t-jakubek, t-kusanagi,
tliu68, Tobias Uhmann, tom1092, Tomás Moreyra, Tomás Ronald Hughes, Tom
Dupré la Tour, Tommaso Di Noto, Tomohiro Endo, TONY GEORGE, Toshihiro NAKAE,
tsuga, Uttam kumar, vadim-ushtanit, Vangelis Gkiastas, Venkatachalam N, Vilém
Zouhar, Vinicius Rios Fuck, Vlasovets, waijean, Whidou, xavier dupré,
xiaoyuchai, Yasmeen Alsaedy, yoch, Yosuke KOBAYASHI, Yu Feng, YusukeNagasaka,
yzhenman, Zero, ZeyuSun, ZhaoweiWang, Zito, Zito Relova | scikit-learn | include contributors rst currentmodule sklearn release notes 1 0 Version 1 0 For a short description of the main highlights of the release please refer to ref sphx glr auto examples release highlights plot release highlights 1 0 0 py include changelog legend inc changes 1 0 2 Version 1 0 2 December 2021 Fix class cluster Birch class feature selection RFECV class ensemble RandomForestRegressor class ensemble RandomForestClassifier class ensemble GradientBoostingRegressor and class ensemble GradientBoostingClassifier do not raise warning when fitted on a pandas DataFrame anymore pr 21578 by Thomas Fan Changelog mod sklearn cluster Fix Fixed an infinite loop in func cluster SpectralClustering by moving an iteration counter from try to except pr 21271 by user Tyler Martin martintb mod sklearn datasets Fix func datasets fetch openml is now thread safe Data is first downloaded to a temporary subfolder and then renamed pr 21833 by user Siavash Rezazadeh siavrez mod sklearn decomposition Fix Fixed the constraint on the objective function of class decomposition DictionaryLearning class decomposition MiniBatchDictionaryLearning class decomposition SparsePCA and class decomposition MiniBatchSparsePCA to be convex and match the referenced article pr 19210 by user J r mie du Boisberranger jeremiedbb mod sklearn ensemble Fix class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble ExtraTreesClassifier class ensemble ExtraTreesRegressor and class ensemble RandomTreesEmbedding now raise a ValueError when bootstrap False and max samples is not None pr 21295 user Haoyin Xu PSSF23 Fix Solve a bug in class ensemble GradientBoostingClassifier where the exponential loss was computing the positive gradient instead of the negative one pr 22050 by user Guillaume Lemaitre glemaitre mod sklearn feature selection Fix Fixed class feature selection SelectFromModel by improving support for 
base estimators that do not set feature names in pr 21991 by Thomas Fan mod sklearn impute Fix Fix a bug in class linear model RidgeClassifierCV where the method predict was performing an argmax on the scores obtained from decision function instead of returning the multilabel indicator matrix pr 19869 by user Guillaume Lemaitre glemaitre mod sklearn linear model Fix class linear model LassoLarsIC now correctly computes AIC and BIC An error is now raised when n features n samples and when the noise variance is not provided pr 21481 by user Guillaume Lemaitre glemaitre and user Andr s Babino ababino mod sklearn manifold Fix Fixed an unnecessary error when fitting class manifold Isomap with a precomputed dense distance matrix where the neighbors graph has multiple disconnected components pr 21915 by Tom Dupre la Tour mod sklearn metrics Fix All class sklearn metrics DistanceMetric subclasses now correctly support read only buffer attributes This fixes a regression introduced in 1 0 0 with respect to 0 24 2 pr 21694 by user Julien Jerphanion jjerphan Fix All sklearn metrics MinkowskiDistance now accepts a weight parameter that makes it possible to write code that behaves consistently both with scipy 1 8 and earlier versions In turns this means that all neighbors based estimators except those that use algorithm kd tree now accept a weight parameter with metric minknowski to yield results that are always consistent with scipy spatial distance cdist pr 21741 by user Olivier Grisel ogrisel mod sklearn multiclass Fix meth multiclass OneVsRestClassifier predict proba does not error when fitted on constant integer targets pr 21871 by Thomas Fan mod sklearn neighbors Fix class neighbors KDTree and class neighbors BallTree correctly supports read only buffer attributes pr 21845 by Thomas Fan mod sklearn preprocessing Fix Fixes compatibility bug with NumPy 1 22 in class preprocessing OneHotEncoder pr 21517 by Thomas Fan mod sklearn tree Fix Prevents func tree plot tree from 
drawing out of the boundary of the figure pr 21917 by Thomas Fan Fix Support loading pickles of decision tree models when the pickle has been generated on a platform with a different bitness A typical example is to train and pickle the model on 64 bit machine and load the model on a 32 bit machine for prediction pr 21552 by user Lo c Est ve lesteve mod sklearn utils Fix func utils estimator html repr now escapes all the estimator descriptions in the generated HTML pr 21493 by user Aur lien Geron ageron changes 1 0 1 Version 1 0 1 October 2021 Fixed models Fix Non fit methods in the following classes do not raise a UserWarning when fitted on DataFrames with valid feature names class covariance EllipticEnvelope class ensemble IsolationForest class ensemble AdaBoostClassifier class neighbors KNeighborsClassifier class neighbors KNeighborsRegressor class neighbors RadiusNeighborsClassifier class neighbors RadiusNeighborsRegressor pr 21199 by Thomas Fan mod sklearn calibration Fix Fixed class calibration CalibratedClassifierCV to take into account sample weight when computing the base estimator prediction when ensemble False pr 20638 by user Julien Bohn JulienB 78 Fix Fixed a bug in class calibration CalibratedClassifierCV with method sigmoid that was ignoring the sample weight when computing the the Bayesian priors pr 21179 by user Guillaume Lemaitre glemaitre mod sklearn cluster Fix Fixed a bug in class cluster KMeans ensuring reproducibility and equivalence between sparse and dense input pr 21195 by user J r mie du Boisberranger jeremiedbb mod sklearn ensemble Fix Fixed a bug that could produce a segfault in rare cases for class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor pr 21130 user Christian Lorentzen lorentzenchr mod sklearn gaussian process Fix Compute y std properly with multi target in class sklearn gaussian process GaussianProcessRegressor allowing proper normalization in multi target scene pr 20761 by user 
Patrick de C T R Ferreira patrickctrf mod sklearn feature extraction Efficiency Fixed an efficiency regression introduced in version 1 0 0 in the transform method of class feature extraction text CountVectorizer which no longer checks for uppercase characters in the provided vocabulary pr 21251 by user J r mie du Boisberranger jeremiedbb Fix Fixed a bug in class feature extraction text CountVectorizer and class feature extraction text TfidfVectorizer by raising an error when min idf or max idf are floating point numbers greater than 1 pr 20752 by user Alek Lefebvre AlekLefebvre mod sklearn linear model Fix Improves stability of class linear model LassoLars for different versions of openblas pr 21340 by Thomas Fan Fix class linear model LogisticRegression now raises a better error message when the solver does not support sparse matrices with int64 indices pr 21093 by Tom Dupre la Tour mod sklearn neighbors Fix class neighbors KNeighborsClassifier class neighbors KNeighborsRegressor class neighbors RadiusNeighborsClassifier class neighbors RadiusNeighborsRegressor with metric precomputed raises an error for bsr and dok sparse matrices in methods fit kneighbors and radius neighbors due to handling of explicit zeros in bsr and dok term sparse graph formats pr 21199 by Thomas Fan mod sklearn pipeline Fix meth pipeline Pipeline get feature names out correctly passes feature names out from one step of a pipeline to the next pr 21351 by Thomas Fan mod sklearn svm Fix class svm SVC and class svm SVR check for an inconsistency in its internal representation and raise an error instead of segfaulting This fix also resolves CVE 2020 28975 https nvd nist gov vuln detail CVE 2020 28975 pr 21336 by Thomas Fan mod sklearn utils Enhancement utils validation check sample weight can perform a non negativity check on the sample weights It can be turned on using the only non negative bool parameter Estimators that check for non negative weights are updated func linear model 
LinearRegression here the previous error message was misleading func ensemble AdaBoostClassifier func ensemble AdaBoostRegressor func neighbors KernelDensity pr 20880 by user Guillaume Lemaitre glemaitre and user Andr s Simon simonandras Fix Solve a bug in sklearn utils metaestimators if delegate has method where the underlying check for an attribute did not work with NumPy arrays pr 21145 by user Zahlii Zahlii Miscellaneous Fix Fitting an estimator on a dataset that has no feature names that was previously fitted on a dataset with feature names no longer keeps the old feature names stored in the feature names in attribute pr 21389 by user J r mie du Boisberranger jeremiedbb changes 1 0 Version 1 0 0 September 2021 Minimal dependencies Version 1 0 0 of scikit learn requires python 3 7 numpy 1 14 6 and scipy 1 1 0 Optional minimal dependency is matplotlib 2 2 2 Enforcing keyword only arguments In an effort to promote clear and non ambiguous use of the library most constructor and function parameters must now be passed as keyword arguments i e using the param value syntax instead of positional If a keyword only parameter is used as positional a TypeError is now raised issue 15005 pr 20002 by Joel Nothman Adrin Jalali Thomas Fan Nicolas Hug and Tom Dupre la Tour See SLEP009 https scikit learn enhancement proposals readthedocs io en latest slep009 proposal html for more details Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Fix class manifold TSNE now avoids numerical underflow issues during affinity matrix computation Fix class manifold Isomap now connects disconnected components of the neighbors graph along some minimum distance pairs instead of changing every infinite distances to zero Fix The splitting criterion of class tree DecisionTreeClassifier 
and class tree DecisionTreeRegressor can be impacted by a fix in the handling of rounding errors Previously some extra spurious splits could occur Fix func model selection train test split with a stratify parameter and class model selection StratifiedShuffleSplit may lead to slightly different results Details are listed in the changelog below While we are trying to better inform users by providing this information we cannot assure that this list is complete Changelog Entries should be grouped by module in alphabetic order and prefixed with one of the labels MajorFeature Feature Efficiency Enhancement Fix or API see whats new rst for descriptions Entries should be ordered by those labels e g Fix after Efficiency Changes not specific to a module should be listed under Multiple Modules or Miscellaneous Entries should end with pr 123456 by user Joe Bloggs joeongithub where 123456 is the pull request number not the issue number API The option for using the squared error via loss and criterion parameters was made more consistent The preferred way is by setting the value to squared error Old option names are still valid produce the same models but are deprecated and will be removed in version 1 2 pr 19310 by user Christian Lorentzen lorentzenchr For class ensemble ExtraTreesRegressor criterion mse is deprecated use squared error instead which is now the default For class ensemble GradientBoostingRegressor loss ls is deprecated use squared error instead which is now the default For class ensemble RandomForestRegressor criterion mse is deprecated use squared error instead which is now the default For class ensemble HistGradientBoostingRegressor loss least squares is deprecated use squared error instead which is now the default For class linear model RANSACRegressor loss squared loss is deprecated use squared error instead For class linear model SGDRegressor loss squared loss is deprecated use squared error instead which is now the default For class tree 
DecisionTreeRegressor criterion mse is deprecated use squared error instead which is now the default For class tree ExtraTreeRegressor criterion mse is deprecated use squared error instead which is now the default API The option for using the absolute error via loss and criterion parameters was made more consistent The preferred way is by setting the value to absolute error Old option names are still valid produce the same models but are deprecated and will be removed in version 1 2 pr 19733 by user Christian Lorentzen lorentzenchr For class ensemble ExtraTreesRegressor criterion mae is deprecated use absolute error instead For class ensemble GradientBoostingRegressor loss lad is deprecated use absolute error instead For class ensemble RandomForestRegressor criterion mae is deprecated use absolute error instead For class ensemble HistGradientBoostingRegressor loss least absolute deviation is deprecated use absolute error instead For class linear model RANSACRegressor loss absolute loss is deprecated use absolute error instead which is now the default For class tree DecisionTreeRegressor criterion mae is deprecated use absolute error instead For class tree ExtraTreeRegressor criterion mae is deprecated use absolute error instead API np matrix usage is deprecated in 1 0 and will raise a TypeError in 1 2 pr 20165 by Thomas Fan API term get feature names out has been added to the transformer API to get the names of the output features get feature names has in turn been deprecated pr 18444 by Thomas Fan API All estimators store feature names in when fitted on pandas Dataframes These feature names are compared to names seen in non fit methods e g transform and will raise a FutureWarning if they are not consistent These FutureWarning s will become ValueError s in 1 2 pr 18010 by Thomas Fan mod sklearn base Fix func config context is now threadsafe pr 18736 by Thomas Fan mod sklearn calibration Feature func calibration CalibrationDisplay added to plot calibration curves pr 
17443 by user Lucy Liu lucyleeow Fix The predict and predict proba methods of class calibration CalibratedClassifierCV can now properly be used on prefitted pipelines pr 19641 by user Alek Lefebvre AlekLefebvre Fix Fixed an error when using a class ensemble VotingClassifier as base estimator in class calibration CalibratedClassifierCV pr 20087 by user Cl ment Fauchereau clement f mod sklearn cluster Efficiency The k means initialization of class cluster KMeans and class cluster MiniBatchKMeans is now faster especially in multicore settings pr 19002 by user Jon Crall Erotemic and user J r mie du Boisberranger jeremiedbb Efficiency class cluster KMeans with algorithm elkan is now faster in multicore settings pr 19052 by user Yusuke Nagasaka YusukeNagasaka Efficiency class cluster MiniBatchKMeans is now faster in multicore settings pr 17622 by user J r mie du Boisberranger jeremiedbb Efficiency class cluster OPTICS can now cache the output of the computation of the tree using the memory parameter pr 19024 by user Frankie Robertson frankier Enhancement The predict and fit predict methods of class cluster AffinityPropagation now accept sparse data type for input data pr 20117 by user Venkatachalam Natchiappan venkyyuvy Fix Fixed a bug in class cluster MiniBatchKMeans where the sample weights were partially ignored when the input is sparse pr 17622 by user J r mie du Boisberranger jeremiedbb Fix Improved convergence detection based on center change in class cluster MiniBatchKMeans which was almost never achievable pr 17622 by user J r mie du Boisberranger jeremiedbb FIX class cluster AgglomerativeClustering now supports readonly memory mapped datasets pr 19883 by user Julien Jerphanion jjerphan Fix class cluster AgglomerativeClustering correctly connects components when connectivity and affinity are both precomputed and the number of connected components is greater than 1 pr 20597 by Thomas Fan Fix class cluster FeatureAgglomeration does not accept a params kwarg in the 
  `fit` function anymore, resulting in a more concise error message.
  :pr:`20899` by :user:`Adam Li <adam2392>`.

- |Fix| Fixed a bug in :class:`cluster.KMeans`, ensuring reproducibility and
  equivalence between sparse and dense input. :pr:`20200` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| :class:`cluster.Birch` attributes, `fit_` and `partial_fit_`, are
  deprecated and will be removed in 1.2. :pr:`19297` by `Thomas Fan`_.

- |API| The default value for the `batch_size` parameter of
  :class:`cluster.MiniBatchKMeans` was changed from 100 to 1024 due to
  efficiency reasons. The `n_iter_` attribute of
  :class:`cluster.MiniBatchKMeans` now reports the number of started epochs
  and the `n_steps_` attribute reports the number of mini batches processed.
  :pr:`17622` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| :func:`cluster.spectral_clustering` raises an improved error when
  passed a `np.matrix`. :pr:`20560` by `Thomas Fan`_.

:mod:`sklearn.compose`
......................

- |Enhancement| :class:`compose.ColumnTransformer` now records the output of
  each transformer in `output_indices_`. :pr:`18393` by
  :user:`Luca Bittarello <lbittarello>`.

- |Enhancement| :class:`compose.ColumnTransformer` now allows DataFrame input
  to have its columns appear in a changed order in `transform`. Further,
  columns that are dropped will not be required in transform, and additional
  columns will be ignored if `remainder='drop'`. :pr:`19263` by
  `Thomas Fan`_.

- |Enhancement| Adds `**predict_params` keyword argument to
  :meth:`compose.TransformedTargetRegressor.predict` that passes keyword
  argument to the regressor. :pr:`19244` by :user:`Ricardo <ricardojnf>`.

- |Fix| `compose.ColumnTransformer.get_feature_names` supports non-string
  feature names returned by any of its transformers. However, note that
  `get_feature_names` is deprecated, use `get_feature_names_out` instead.
  :pr:`18459` by :user:`Albert Villanova del Moral <albertvillanova>` and
  :user:`Alonso Silva Allende <alonsosilvaallende>`.

- |Fix| :class:`compose.TransformedTargetRegressor` now takes nD targets
  with an adequate transformer. :pr:`18898` by
  :user:`Oras Phongpanagnam <panangam>`.

- |API| Adds `verbose_feature_names_out` to :class:`compose.ColumnTransformer`.
  This flag controls the prefixing of feature names out in
  :term:`get_feature_names_out`. :pr:`18444` and :pr:`21080` by
  `Thomas Fan`_.

:mod:`sklearn.covariance`
.........................

- |Fix| Adds arrays check to :func:`covariance.ledoit_wolf` and
  :func:`covariance.ledoit_wolf_shrinkage`. :pr:`20416` by
  :user:`Hugo Defois <defoishugo>`.

- |API| Deprecates the following keys in `cv_results_`: `'mean_score'`,
  `'std_score'`, and `'split(k)_score'` in favor of `'mean_test_score'`,
  `'std_test_score'`, and `'split(k)_test_score'`. :pr:`20583` by
  `Thomas Fan`_.

:mod:`sklearn.datasets`
.......................

- |Enhancement| :func:`datasets.fetch_openml` now supports categories with
  missing values when returning a pandas dataframe. :pr:`19365` by
  `Thomas Fan`_ and :user:`Amanda Dsouza <amy12xx>` and
  :user:`EL-ATEIF Sara <elateifsara>`.

- |Enhancement| :func:`datasets.fetch_kddcup99` raises a better message when
  the cached file is invalid. :pr:`19669` by `Thomas Fan`_.

- |Enhancement| Replace usages of ``__file__`` related to resource file I/O
  with ``importlib.resources`` to avoid the assumption that these resource
  files (e.g. `iris.csv`) already exist on a filesystem, and by extension to
  enable compatibility with tools such as ``PyOxidizer``. :pr:`20297` by
  :user:`Jack Liu <jackzyliu>`.

- |Fix| Shorten data file names in the openml tests to better support
  installing on Windows and its default 260 character limit on file names.
  :pr:`20209` by `Thomas Fan`_.

- |Fix| :func:`datasets.fetch_kddcup99` returns dataframes when
  `return_X_y=True` and `as_frame=True`. :pr:`19011` by `Thomas Fan`_.

- |API| Deprecates `datasets.load_boston` in 1.0 and it will be removed in
  1.2. Alternative code snippets to load similar datasets are provided.
  Please report to the docstring of the function for details. :pr:`20729` by
  `Guillaume Lemaitre`_.

:mod:`sklearn.decomposition`
............................

- |Enhancement| Added a new approximate solver (randomized SVD, available
  with `eigen_solver='randomized'`) to :class:`decomposition.KernelPCA`.
  This significantly accelerates computation when the number of samples is
  much larger than the desired number of components. :pr:`12069` by
  :user:`Sylvain Marié <smarie>`.

- |Fix| Fixes incorrect multiple data-conversion warnings when clustering
  boolean data. :pr:`19046` by
  :user:`Surya Prakash <jdsurya>`.

- |Fix| Fixed :func:`decomposition.dict_learning`, used by
  :class:`decomposition.DictionaryLearning`, to ensure determinism of the
  output. Achieved by flipping signs of the SVD output which is used to
  initialize the code. :pr:`18433` by :user:`Bruno Charron <brcharron>`.

- |Fix| Fixed a bug in :class:`decomposition.MiniBatchDictionaryLearning`,
  :class:`decomposition.MiniBatchSparsePCA` and
  :func:`decomposition.dict_learning_online` where the update of the
  dictionary was incorrect. :pr:`19198` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| Fixed a bug in :class:`decomposition.DictionaryLearning`,
  :class:`decomposition.SparsePCA`,
  :class:`decomposition.MiniBatchDictionaryLearning`,
  :class:`decomposition.MiniBatchSparsePCA`,
  :func:`decomposition.dict_learning` and
  :func:`decomposition.dict_learning_online` where the restart of unused
  atoms during the dictionary update was not working as expected.
  :pr:`19198` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| In :class:`decomposition.DictionaryLearning`,
  :class:`decomposition.MiniBatchDictionaryLearning`,
  :func:`decomposition.dict_learning` and
  :func:`decomposition.dict_learning_online`, `transform_alpha` will be
  equal to `alpha` instead of 1.0 by default starting from version 1.2.
  :pr:`19159` by :user:`Benoît Malézieux <bmalezieux>`.

- |API| Rename variable names in :class:`decomposition.KernelPCA` to improve
  readability. `lambdas_` and `alphas_` are renamed to `eigenvalues_` and
  `eigenvectors_`, respectively. `lambdas_` and `alphas_` are deprecated and
  will be removed in 1.2. :pr:`19908` by :user:`Kei Ishikawa <kstoneriv3>`.

- |API| The `alpha` and `regularization` parameters of
  :class:`decomposition.NMF` and
  :func:`decomposition.non_negative_factorization` are deprecated and will
  be removed in 1.2. Use the new parameters `alpha_W` and `alpha_H` instead.
  :pr:`20512` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.dummy`
....................

- |API| Attribute `n_features_in_` in :class:`dummy.DummyRegressor` and
  :class:`dummy.DummyRegressor` is deprecated and will be removed in 1.2.
  :pr:`20960` by `Thomas Fan`_.

:mod:`sklearn.ensemble`
.......................

- |Enhancement|
  :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
  :class:`~sklearn.ensemble.HistGradientBoostingRegressor` take cgroups
  quotas into account when deciding the number of threads used by OpenMP.
  This avoids performance problems caused by over-subscription when using
  those classes in a docker container for instance. :pr:`20477` by
  `Thomas Fan`_.

- |Enhancement| :class:`~sklearn.ensemble.HistGradientBoostingClassifier`
  and :class:`~sklearn.ensemble.HistGradientBoostingRegressor` are no longer
  experimental. They are now considered stable and are subject to the same
  deprecation cycles as all other estimators. :pr:`19799` by `Nicolas Hug`_.

- |Enhancement| Improve the HTML rendering of the
  :class:`ensemble.StackingClassifier` and
  :class:`ensemble.StackingRegressor`. :pr:`19564` by `Thomas Fan`_.

- |Enhancement| Added Poisson criterion to
  :class:`ensemble.RandomForestRegressor`. :pr:`19836` by
  :user:`Brian Sun <bsun94>`.

- |Fix| Do not allow to compute out-of-bag (OOB) score in
  :class:`ensemble.RandomForestClassifier` and
  :class:`ensemble.ExtraTreesClassifier` with multiclass-multioutput target
  since scikit-learn does not provide any metric supporting this type of
  target. Additional private refactoring was performed. :pr:`19162` by
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Improve numerical precision for weights boosting in
  :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` to avoid underflows. :pr:`10096` by
  :user:`Fenil Suchak <fenilsuchak>`.

- |Fix| Fixed the range of the argument `max_samples` to be `(0.0, 1.0]` in
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`, where `max_samples=1.0` is
  interpreted as using all `n_samples` for bootstrapping. :pr:`20159` by
  :user:`murata-yu`.

- |Fix| Fixed a bug in :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` where the `sample_weight` parameter
  got overwritten during `fit`. :pr:`20534` by
  :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Removes `tol=None` option in
  :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor`. Please use `tol=0` for
  the same behavior. :pr:`19296` by `Thomas Fan`_.

:mod:`sklearn.feature_extraction`
.................................
- |Fix| Fixed a bug in :class:`feature_extraction.text.HashingVectorizer`
  where some input strings would result in negative indices in the
  transformed data. :pr:`19035` by :user:`Liu Yu <ly648499246>`.

- |Fix| Fixed a bug in :class:`feature_extraction.DictVectorizer` by
  raising an error with unsupported value type. :pr:`19520` by
  :user:`Jeff Zhao <kamiyaa>`.

- |Fix| Fixed a bug in :func:`feature_extraction.image.img_to_graph` and
  :func:`feature_extraction.image.grid_to_graph` where singleton connected
  components were not handled properly, resulting in a wrong vertex
  indexing. :pr:`18964` by `Bertrand Thirion`_.

- |Fix| Raise a warning in :class:`feature_extraction.text.CountVectorizer`
  with `lowercase=True` when there are vocabulary entries with uppercase
  characters to avoid silent misses in the resulting feature vectors.
  :pr:`19401` by :user:`Zito Relova <zitorelova>`.

:mod:`sklearn.feature_selection`
................................

- |Feature| :func:`feature_selection.r_regression` computes Pearson's R
  correlation coefficients between the features and the target. :pr:`17169`
  by :user:`Dmytro Lituiev <DSLituiev>` and
  :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| :func:`feature_selection.RFE.fit` accepts additional
  estimator parameters that are passed directly to the estimator's `fit`
  method. :pr:`20380` by :user:`Iván Pulido <ijpulidos>`,
  :user:`Felipe Bidu <fbidu>`, :user:`Gil Rutter <g-rutter>`, and
  :user:`Adrin Jalali <adrinjalali>`.

- |Fix| Fix a bug in :func:`isotonic.isotonic_regression` where the
  `sample_weight` passed by a user were overwritten during `fit`.
  :pr:`20515` by :user:`Carsten Allefeld <allefeld>`.

- |Fix| Change :func:`feature_selection.SequentialFeatureSelector` to allow
  for unsupervised modelling so that the `fit` signature need not do any
  `y` validation and allow for `y=None`. :pr:`19568` by
  :user:`Shyam Desai <ShyamDesai>`.

- |API| Raises an error in :class:`feature_selection.VarianceThreshold`
  when the variance threshold is negative. :pr:`20207` by
  :user:`Tomohiro Endo <europeanplaice>`.

- |API| Deprecates `grid_scores_` in favor of split scores in `cv_results_`
  in :class:`feature_selection.RFECV`. `grid_scores_` will be removed in
  version 1.2.
  :pr:`20161` by :user:`Shuhei Kayawari <wowry>` and :user:`arka204`.

:mod:`sklearn.inspection`
.........................

- |Enhancement| Add `max_samples` parameter in
  :func:`inspection.permutation_importance`. It enables to draw a subset of
  the samples to compute the permutation importance. This is useful to keep
  the method tractable when evaluating feature importance on large
  datasets. :pr:`20431` by :user:`Oliver Pfaffel <o1iv3r>`.

- |Enhancement| Add kwargs to format ICE and PD lines separately in partial
  dependence plots `inspection.plot_partial_dependence` and
  :meth:`inspection.PartialDependenceDisplay.plot`. :pr:`19428` by
  :user:`Mehdi Hamoumi <mhham>`.

- |Fix| Allow multiple scorers input to
  :func:`inspection.permutation_importance`. :pr:`19411` by
  :user:`Simona Maggio <simonamaggio>`.

- |API| :class:`inspection.PartialDependenceDisplay` exposes a class method:
  :func:`inspection.PartialDependenceDisplay.from_estimator`.
  `inspection.plot_partial_dependence` is deprecated in favor of the class
  method and will be removed in 1.2. :pr:`20959` by `Thomas Fan`_.

:mod:`sklearn.kernel_approximation`
...................................

- |Fix| Fix a bug in :class:`kernel_approximation.Nystroem` where the
  attribute `component_indices_` did not correspond to the subset of sample
  indices used to generate the approximated kernel. :pr:`20554` by
  :user:`Xiangyin Kong <kxytim>`.

:mod:`sklearn.linear_model`
...........................

- |MajorFeature| Added :class:`linear_model.QuantileRegressor` which
  implements linear quantile regression with L1 penalty. :pr:`9978` by
  :user:`David Dale <avidale>` and
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| The new :class:`linear_model.SGDOneClassSVM` provides an SGD
  implementation of the linear One-Class SVM. Combined with kernel
  approximation techniques, this implementation approximates the solution
  of a kernelized One-Class SVM while benefitting from a linear complexity
  in the number of samples. :pr:`10027` by
  :user:`Albert Thomas <albertcthomas>`.

- |Feature| Added `sample_weight` parameter to
  :class:`linear_model.LassoCV` and :class:`linear_model.ElasticNetCV`.
  :pr:`16449` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| Added new solver `lbfgs` (available with `solver="lbfgs"`) and
  `positive` argument to :class:`linear_model.Ridge`. When `positive` is
  set to `True`, forces the coefficients to be positive (only supported by
  `lbfgs`). :pr:`20231` by :user:`Toshihiro Nakae <tnakae>`.

- |Efficiency| The implementation of
  :class:`linear_model.LogisticRegression` has been optimised for dense
  matrices when using `solver='newton-cg'` and `multi_class='multinomial'`.
  :pr:`19571` by :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| `fit` method preserves dtype for `numpy.float32` in
  :class:`linear_model.Lars`, :class:`linear_model.LassoLars`,
  :class:`linear_model.LassoLarsIC`, :class:`linear_model.LarsCV` and
  :class:`linear_model.LassoLarsCV`. :pr:`20155` by
  :user:`Takeshi Oura <takoika>`.

- |Enhancement| Validate user-supplied gram matrix passed to linear models
  via the `precompute` argument. :pr:`19004` by
  :user:`Adam Midvidy <amidvidy>`.

- |Fix| :meth:`linear_model.ElasticNet.fit` no longer modifies
  `sample_weight` in place. :pr:`19055` by `Thomas Fan`_.

- |Fix| :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` no
  longer have a `dual_gap_` not corresponding to their objective.
  :pr:`19172` by :user:`Mathurin Massias <mathurinm>`.

- |Fix| `sample_weight` are now fully taken into account in linear models
  when `normalize=True` for both feature centering and feature scaling.
  :pr:`19426` by :user:`Alexandre Gramfort <agramfort>` and
  :user:`Maria Telenczuk <maikia>`.

- |Fix| Points with residuals equal to `residual_threshold` are now
  considered as inliers for :class:`linear_model.RANSACRegressor`. This
  allows fitting a model perfectly on some datasets when
  `residual_threshold=0`. :pr:`19499` by
  :user:`Gregory Strubel <gregorystrubel>`.

- |Fix| Sample weight invariance for :class:`linear_model.Ridge` was fixed
  in :pr:`19616` by :user:`Olivier Grisel <ogrisel>` and
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Fix| The dictionary `params` in :func:`linear_model.enet_path` and
  :func:`linear_model.lasso_path` should only contain parameter of the
  coordinate descent solver. Otherwise, an error will be raised.
  :pr:`19391` by :user:`Shao Yang Hong <hongshaoyang>`.

- |API| Raise a warning in :class:`linear_model.RANSACRegressor` that from
  version 1.2, `min_samples` need to be set
  explicitly for models other than :class:`linear_model.LinearRegression`.
  :pr:`19390` by :user:`Shao Yang Hong <hongshaoyang>`.

- |API| The parameter `normalize` of :class:`linear_model.LinearRegression`
  is deprecated and will be removed in 1.2. Motivation for this
  deprecation: the `normalize` parameter did not take any effect if
  `fit_intercept` was set to `False` and therefore was deemed confusing.
  The behavior of the deprecated `LinearModel(normalize=True)` can be
  reproduced with a :class:`~sklearn.pipeline.Pipeline` with `LinearModel`
  (where `LinearModel` is :class:`linear_model.LinearRegression`,
  :class:`linear_model.Ridge`, :class:`linear_model.RidgeClassifier`,
  :class:`linear_model.RidgeCV` or :class:`linear_model.RidgeClassifierCV`)
  as follows: `make_pipeline(StandardScaler(with_mean=False),
  LinearModel())`.
  The `normalize` parameter in :class:`linear_model.LinearRegression` was
  deprecated in :pr:`17743` by :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`.
  Same for :class:`linear_model.Ridge`,
  :class:`linear_model.RidgeClassifier`, :class:`linear_model.RidgeCV`, and
  :class:`linear_model.RidgeClassifierCV`, in :pr:`17772` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`.
  Same for :class:`linear_model.BayesianRidge`,
  :class:`linear_model.ARDRegression` in :pr:`17746` by
  :user:`Maria Telenczuk <maikia>`.
  Same for :class:`linear_model.Lasso`, :class:`linear_model.LassoCV`,
  :class:`linear_model.ElasticNet`, :class:`linear_model.ElasticNetCV`,
  :class:`linear_model.MultiTaskLasso`,
  :class:`linear_model.MultiTaskLassoCV`,
  :class:`linear_model.MultiTaskElasticNet`,
  :class:`linear_model.MultiTaskElasticNetCV`, in :pr:`17785` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`.

- |API| The `normalize` parameter of
  :class:`linear_model.OrthogonalMatchingPursuit` and
  :class:`linear_model.OrthogonalMatchingPursuitCV` will default to `False`
  in 1.2 and will be removed in 1.4. :pr:`17750` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`.
  Same for :class:`linear_model.Lars`, :class:`linear_model.LarsCV`,
  :class:`linear_model.LassoLars`, :class:`linear_model.LassoLarsCV`, and
  :class:`linear_model.LassoLarsIC`, in :pr:`17769` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`.

- |API| Keyword validation has moved from `__init__` and `set_params` to
  `fit` for the following estimators conforming to scikit-learn's
  conventions: :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor`, :class:`linear_model.SGDOneClassSVM`,
  :class:`linear_model.PassiveAggressiveClassifier`, and
  :class:`linear_model.PassiveAggressiveRegressor`. :pr:`20683` by
  `Guillaume Lemaitre`_.

:mod:`sklearn.manifold`
.......................

- |Enhancement| Implement `'auto'` heuristic for the `learning_rate` in
  :class:`manifold.TSNE`. It will become default in 1.2. The default
  initialization will change to `pca` in 1.2. PCA initialization will be
  scaled to have standard deviation 1e-4 in 1.2. :pr:`19491` by
  :user:`Dmitry Kobak <dkobak>`.

- |Fix| Change numerical precision to prevent underflow issues during
  affinity matrix computation for :class:`manifold.TSNE`. :pr:`19472` by
  :user:`Dmitry Kobak <dkobak>`.

- |Fix| :class:`manifold.Isomap` now uses
  `scipy.sparse.csgraph.shortest_path` to compute the graph shortest path.
  It also connects disconnected components of the neighbors graph along
  some minimum distance pairs, instead of changing every infinite distances
  to zero. :pr:`20531` by `Roman Yurchak`_ and `Tom Dupre la Tour`_.

- |Fix| Decrease the numerical default tolerance in the lobpcg call in
  :func:`manifold.spectral_embedding` to prevent numerical instability.
  :pr:`21194` by :user:`Andrew Knyazev <lobpcg>`.

:mod:`sklearn.metrics`
......................

- |Feature| :func:`metrics.mean_pinball_loss` exposes the pinball loss for
  quantile regression. :pr:`19415` by :user:`Xavier Dupré <sdpython>` and
  :user:`Olivier Grisel <ogrisel>`.

- |Feature| :func:`metrics.d2_tweedie_score` calculates the :math:`D^2`
  regression score for Tweedie deviances with power parameter `power`. This
  is a generalization of the `r2_score` and can be interpreted as
  percentage of Tweedie deviance explained. :pr:`17036` by
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| :func:`metrics.mean_squared_log_error` now supports
  `squared=False`. :pr:`20326` by :user:`Uttam kumar <helper-uttam>`.

- |Efficiency| Improved speed of
  :func:`metrics.confusion_matrix` when labels are integral. :pr:`9843` by
  :user:`Jon Crall <Erotemic>`.

- |Enhancement| A fix to raise an error in :func:`metrics.hinge_loss` when
  `pred_decision` is 1d whereas it is a multiclass classification or when
  `pred_decision` parameter is not consistent with the `labels` parameter.
  :pr:`19643` by :user:`Pierre Attard <PierreAttard>`.

- |Fix| :meth:`metrics.ConfusionMatrixDisplay.plot` uses the correct max
  for colormap. :pr:`19784` by `Thomas Fan`_.

- |Fix| Samples with zero `sample_weight` values do not affect the results
  from :func:`metrics.det_curve`, :func:`metrics.precision_recall_curve`
  and :func:`metrics.roc_curve`. :pr:`18328` by
  :user:`Albert Villanova del Moral <albertvillanova>` and
  :user:`Alonso Silva Allende <alonsosilvaallende>`.

- |Fix| avoid overflow in :func:`metrics.adjusted_rand_score` with large
  amount of data. :pr:`20312` by :user:`Divyanshu Deoli <divyanshudeoli>`.

- |API| :class:`metrics.ConfusionMatrixDisplay` exposes two class methods
  :func:`metrics.ConfusionMatrixDisplay.from_estimator` and
  :func:`metrics.ConfusionMatrixDisplay.from_predictions` allowing to
  create a confusion matrix plot using an estimator or the predictions.
  `metrics.plot_confusion_matrix` is deprecated in favor of these two class
  methods and will be removed in 1.2. :pr:`18543` by
  `Guillaume Lemaitre`_.

- |API| :class:`metrics.PrecisionRecallDisplay` exposes two class methods
  :func:`metrics.PrecisionRecallDisplay.from_estimator` and
  :func:`metrics.PrecisionRecallDisplay.from_predictions` allowing to
  create a precision-recall curve using an estimator or the predictions.
  `metrics.plot_precision_recall_curve` is deprecated in favor of these two
  class methods and will be removed in 1.2. :pr:`20552` by
  `Guillaume Lemaitre`_.

- |API| :class:`metrics.DetCurveDisplay` exposes two class methods
  :func:`metrics.DetCurveDisplay.from_estimator` and
  :func:`metrics.DetCurveDisplay.from_predictions` allowing to create a
  confusion matrix plot using an estimator or the predictions.
  `metrics.plot_det_curve` is deprecated in favor of these two class
  methods and will be removed in 1.2. :pr:`19278` by
  `Guillaume Lemaitre`_.

:mod:`sklearn.mixture`
......................
- |Fix| Ensure that the best parameters are set appropriately in the case
  of divergency for :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`. :pr:`20030` by
  :user:`Tingshan Liu <tliu68>` and :user:`Benjamin Pedigo <bdpedigo>`.

:mod:`sklearn.model_selection`
..........................

- |Feature| added :class:`model_selection.StratifiedGroupKFold`, that
  combines :class:`model_selection.StratifiedKFold` and
  :class:`model_selection.GroupKFold`, providing an ability to split data
  preserving the distribution of classes in each split while keeping each
  group within a single split. :pr:`18649` by
  :user:`Leandro Hermida <hermidalc>` and
  :user:`Rodion Martynov <marrodion>`.

- |Enhancement| warn only once in the main process for per-split fit
  failures in cross-validation. :pr:`20619` by
  :user:`Loïc Estève <lesteve>`.

- |Enhancement| The `model_selection.BaseShuffleSplit` base class is now
  public. :pr:`20056` by :user:`pabloduque0`.

- |Fix| Avoid premature overflow in
  :func:`model_selection.train_test_split`. :pr:`20904` by
  :user:`Tomasz Jakubek <t-jakubek>`.

:mod:`sklearn.naive_bayes`
..........................

- |Fix| The `fit` and `partial_fit` methods of the discrete naive Bayes
  classifiers (:class:`naive_bayes.BernoulliNB`,
  :class:`naive_bayes.CategoricalNB`, :class:`naive_bayes.ComplementNB`,
  and :class:`naive_bayes.MultinomialNB`) now correctly handle the
  degenerate case of a single class in the training set. :pr:`18925` by
  :user:`David Poznik <dpoznik>`.

- |API| The attribute `sigma_` is now deprecated in
  :class:`naive_bayes.GaussianNB` and will be removed in 1.2. Use `var_`
  instead. :pr:`18842` by :user:`Hong Shao Yang <hongshaoyang>`.

:mod:`sklearn.neighbors`
........................

- |Enhancement| The creation of :class:`neighbors.KDTree` and
  :class:`neighbors.BallTree` has been improved for their worst-cases time
  complexity from :math:`\mathcal{O}(n^2)` to :math:`\mathcal{O}(n)`.
  :pr:`19473` by :user:`jiefangxuanyan <jiefangxuanyan>` and
  :user:`Julien Jerphanion <jjerphan>`.

- |Fix| `neighbors.DistanceMetric` subclasses now support readonly
  memory-mapped datasets. :pr:`19883` by
  :user:`Julien Jerphanion <jjerphan>`.

- |Fix| :class:`neighbors.NearestNeighbors`,
  :class:`neighbors.KNeighborsClassifier`,
  :class:`neighbors.RadiusNeighborsClassifier`,
  :class:`neighbors.KNeighborsRegressor` and
  :class:`neighbors.RadiusNeighborsRegressor` do not validate `weights` in
  `__init__` and validates `weights` in `fit` instead. :pr:`20072` by
  :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.

- |API| The parameter `kwargs` of
  :class:`neighbors.RadiusNeighborsClassifier` is deprecated and will be
  removed in 1.2. :pr:`20842` by :user:`Juan Martín Loyola <jmloyola>`.

:mod:`sklearn.neural_network`
.............................

- |Fix| :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor` now correctly support continued
  training when loading from a pickled file. :pr:`19631` by `Thomas Fan`_.

:mod:`sklearn.pipeline`
.......................

- |API| The `predict_proba` and `predict_log_proba` methods of the
  :class:`pipeline.Pipeline` now support passing prediction kwargs to the
  final estimator. :pr:`19790` by :user:`Christopher Flynn <crflynn>`.

:mod:`sklearn.preprocessing`
............................

- |Feature| The new :class:`preprocessing.SplineTransformer` is a feature
  preprocessing tool for the generation of B-splines, parametrized by the
  polynomial `degree` of the splines, number of knots `n_knots` and knot
  positioning strategy `knots`. :pr:`18368` by
  :user:`Christian Lorentzen <lorentzenchr>`.
  :class:`preprocessing.SplineTransformer` also supports periodic splines
  via the `extrapolation` argument. :pr:`19483` by
  :user:`Malte Londschien <mlondschien>`.
  :class:`preprocessing.SplineTransformer` supports sample weights for knot
  position strategy `"quantile"`. :pr:`20526` by
  :user:`Malte Londschien <mlondschien>`.

- |Feature| :class:`preprocessing.OrdinalEncoder` supports passing through
  missing values by default. :pr:`19069` by `Thomas Fan`_.

- |Feature| :class:`preprocessing.OneHotEncoder` now supports
  `handle_unknown='ignore'` and dropping categories. :pr:`19041` by
  `Thomas Fan`_.

- |Feature| :class:`preprocessing.PolynomialFeatures` now supports passing
  a tuple to `degree`, i.e. `degree=(min_degree, max_degree)`. :pr:`20250`
  by :user:`Christian Lorentzen <lorentzenchr>`.

- |Efficiency| :class:`preprocessing.StandardScaler` is faster and more
  memory efficient. :pr:`20652` by `Thomas Fan`_.

- |Efficiency| Changed `algorithm` argument for :class:`cluster.KMeans` in
  :class:`preprocessing.KBinsDiscretizer` from `'auto'` to `'full'`.
  :pr:`19934` by :user:`Gleb Levitskiy <GLevV>`.

- |Efficiency| The implementation of `fit` for
  :class:`preprocessing.PolynomialFeatures` transformer is now faster. This
  is especially noticeable on large sparse input. :pr:`19734` by
  :user:`Fred Robinson <frrad>`.

- |Fix| The :func:`preprocessing.StandardScaler.inverse_transform` method
  now raises error when the input data is 1D. :pr:`19752` by
  :user:`Zhehao Liu <Max1993Liu>`.

- |Fix| :func:`preprocessing.scale`, :class:`preprocessing.StandardScaler`
  and similar scalers detect near-constant features to avoid scaling them
  to very large values. This problem happens in particular when using a
  scaler on sparse data with a constant column with sample weights, in
  which case centering is typically disabled. :pr:`19527` by
  :user:`Olivier Grisel <ogrisel>` and :user:`Maria Telenczuk <maikia>` and
  :pr:`19788` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| :meth:`preprocessing.StandardScaler.inverse_transform` now
  correctly handles integer dtypes. :pr:`19356` by :user:`makoeppel`.

- |Fix| :meth:`preprocessing.OrdinalEncoder.inverse_transform` is not
  supporting sparse matrix and raises the appropriate error message.
  :pr:`19879` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| The `fit` method of :class:`preprocessing.OrdinalEncoder` will not
  raise error when `handle_unknown='ignore'` and unknown categories are
  given to `fit`. :pr:`19906` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Fix| Fix a regression in :class:`preprocessing.OrdinalEncoder` where
  large Python numeric would raise an error due to overflow when casted to
  C type (`np.float64` or `np.int64`). :pr:`20727` by
  `Guillaume Lemaitre`_.

- |Fix| :class:`preprocessing.FunctionTransformer` does not set
  `n_features_in_` based on the input to `inverse_transform`. :pr:`20961`
  by `Thomas Fan`_.

- |API| The `n_input_features_` attribute of
  :class:`preprocessing.PolynomialFeatures` is deprecated in favor of
  `n_features_in_` and will be removed in 1.2. :pr:`20240` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.svm`
..................

- |API| The parameter `**params` of :func:`svm.OneClassSVM.fit` is
  deprecated and will be removed in 1.2. :pr:`20843` by
  :user:`Juan Martín Loyola <jmloyola>`.

:mod:`sklearn.tree`
...................

- |Enhancement| Add `fontname` argument in :func:`tree.export_graphviz`
  for non-English characters. :pr:`18959` by :user:`Zero <Zeroto521>` and
  :user:`wstates <wstates>`.

- |Fix| Improves compatibility of :func:`tree.plot_tree` with high DPI
  screens. :pr:`20023` by `Thomas Fan`_.

- |Fix| Fixed a bug in :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor` where a node could be split whereas
  it should not have been due to incorrect handling of rounding errors.
  :pr:`19336` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| The `n_features_` attribute of
  :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier`
  and :class:`tree.ExtraTreeRegressor` is deprecated in favor of
  `n_features_in_` and will be removed in 1.2. :pr:`20272` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.utils`
....................

- |Enhancement| Deprecated the default value of the `random_state=0` in
  :func:`~sklearn.utils.extmath.randomized_svd`. Starting in 1.2, the
  default value of `random_state` will be set to `None`. :pr:`19459` by
  :user:`Cindy Bezuidenhout <cinbez>` and
  :user:`Clifford Akai-Nettey <cliffordEmmanuel>`.

- |Enhancement| Added helper decorator
  :func:`utils.metaestimators.available_if` to provide flexibility in
  metaestimators making methods available or unavailable on the basis of
  state, in a more readable way. :pr:`19948` by `Joel Nothman`_.

- |Enhancement| :func:`utils.validation.check_is_fitted` now uses
  `__sklearn_is_fitted__` if available, instead of checking for attributes
  ending with an underscore. This also makes :class:`pipeline.Pipeline` and
  :class:`preprocessing.FunctionTransformer` pass
  `check_is_fitted(estimator)`. :pr:`20657` by `Adrin Jalali`_.

- |Fix| Fixed a bug in :func:`utils.sparsefuncs.mean_variance_axis` where
  the precision of the computed variance was very poor when the real
  variance is exactly zero. :pr:`19766` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| The docstrings of properties that are decorated with
  :func:`utils.deprecated` are now properly wrapped. :pr:`20385` by
  `Thomas Fan`_.

- |Fix| `utils.stats._weighted_percentile` now
  correctly ignores zero-weighted observations smaller than the smallest
  observation with positive weight for `percentile=0`. Affected classes
  are :class:`dummy.DummyRegressor` for `quantile=0` and
  `ensemble.HuberLossFunction` for `alpha=0`. :pr:`20528` by
  :user:`Malte Londschien <mlondschien>`.

- |Fix| :func:`utils._safe_indexing` explicitly takes a dataframe copy
  when integer indices are provided, avoiding to raise a warning from
  Pandas. This warning was previously raised in resampling utilities and
  functions using those utilities (e.g.
  :func:`model_selection.train_test_split`,
  :func:`model_selection.cross_validate`,
  :func:`model_selection.cross_val_score`,
  :func:`model_selection.cross_val_predict`). :pr:`20673` by
  :user:`Joris Van den Bossche <jorisvandenbossche>`.

- |Fix| Fix a regression in `utils.is_scalar_nan` where large Python
  numbers would raise an error due to overflow in C types (`np.float64`
  or `np.int64`). :pr:`20727` by `Guillaume Lemaitre`_.

- |Fix| Support for `np.matrix` is deprecated in
  :func:`~sklearn.utils.check_array` in 1.0 and will raise a `TypeError`
  in 1.2. :pr:`20165` by `Thomas Fan`_.

- |API| `utils.testing.assert_warns` and
  `utils.testing.assert_warns_message` are deprecated in 1.0 and will be
  removed in 1.2. Use the `pytest.warns` context manager instead. Note
  that these functions were not documented and part of the public API.
  :pr:`20521` by :user:`Olivier Grisel <ogrisel>`.

- |API| Fixed several bugs in `utils.graph.graph_shortest_path`, which is
  now deprecated. Use `scipy.sparse.csgraph.shortest_path` instead.
  :pr:`20531` by `Tom Dupre la Tour`_.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement
of the project since version 0.24, including:

Abdulelah S. Al-Mesfer, Abhinav Gupta, Adam J. Stewart, Adam Li, Adam
Midvidy, Adrian Garcia Badaracco, Adrian Sadłocha, Adrin Jalali, Agamemnon
Krasoulis, Alberto Rubiales, Albert Thomas, Albert Villanova del Moral,
Alek Lefebvre, Alessia Marcolini, Alexandr Fonari, Alihan Zihna, Aline
Ribeiro de Almeida, Amanda, Amanda Dsouza, Amol Deshmukh, Ana Pessoa,
Anavelyz,
Andreas Mueller, Andrew Delong, Ashish, Ashvith Shetty, Atsushi Nukariya,
Aurélien Geron, Avi Gupta, Ayush Singh, baam, BaptBillard, Benjamin
Pedigo, Bertrand Thirion, Bharat Raghunathan, bmalezieux, Brian Rice,
Brian Sun, Bruno Charron, Bryan Chen, bumblebee, caherrera-meli, Carsten
Allefeld, CeeThinwa, Chiara Marmo, chrissobel, Christian Lorentzen,
Christopher Yeh, Chuliang Xiao, Clément Fauchereau, cliffordEmmanuel,
Conner Shen, Connor Tann, David Dale, David Katz, David Poznik, Dimitri
Papadopoulos Orfanos, Divyanshu Deoli, dmallia17, Dmitry Kobak, DS_anas,
Eduardo Jardim, EdwinWenink, EL-ATEIF Sara, Eleni Markou, EricEllwanger,
Eric Fiegel, Erich Schubert, Ezri Mudde, Fatos Morina, Felipe Rodrigues,
Felix Hafner, Fenil Suchak, flyingdutchman23, Flynn, Fortune Uwha,
Francois Berenger, Frankie Robertson, Frans Larsson, Frederick Robinson,
frellwan, Gabriel S Vicente, Gael Varoquaux, genvalen, Geoffrey Thomas,
geroldcsendes, Gleb Levitskiy, Glen, Glòria Macià Muñoz, gregorystrubel,
groceryheist, Guillaume Lemaitre, guiweber, Haidar Almubarak, Hans Moritz
Günther, Haoyin Xu, Harris Mirza, Harry Wei, Harutaka Kawamura, Hassan
Alsawadi, Helder Geovane Gomes de Lima, Hugo DEFOIS, Igor Ilic, Ikko
Ashimine, Isaack Mungui, Ishaan Bhat, Ishan Mishra, Iván Pulido, iwhalvic,
J Alexander, Jack Liu, James Alan Preiss, James Budarz, James Lamb,
Jannik, Jeff Zhao, Jennifer Maldonado, Jérémie du Boisberranger, Jesse
Lima, Jianzhu Guo, jnboehm, Joel Nothman, JohanWork, John Paton, Jonathan
Schneider, Jon Crall, Jon Haitz Legarreta Gorroño, Joris Van den Bossche,
José Manuel Nápoles Duarte, Juan Carlos Alfaro Jiménez, Juan Martin
Loyola, Julien Jerphanion, Julio Batista Silva, julyrashchenko, JVM,
Kadatatlu Kishore, Karen Palacio, Kei Ishikawa, kmatt10, kobaski,
Kot271828, Kunj, KurumeYuta, kxytim, lacrosse91, LalliAcqua, Laveen Bagai,
Leonardo Rocco, Leonardo Uieda, Leopoldo Corona, Loic Esteve, LSturtew,
Luca Bittarello, Luccas Quadros, Lucy Jiménez, Lucy Liu, ly648499246,
Mabu Manaileng, Manimaran, makoeppel, Marco Gorelli, Maren Westermann,
Mariangela, Maria Telenczuk, marielaraj, Martin Hirzel, Mateo Noreña,
Mathieu
Blondel, Mathis Batoul, mathurinm, Matthew Calcote, Maxime Prieur,
Maxwell, Mehdi Hamoumi, Mehmet Ali Özer, Miao Cai, Michal Karbownik,
michalkrawczyk, Mitzi, mlondschien, Mohamed Haseeb, Mohamed Khoualed,
Muhammad Jarir Kanji, murata-yu, Nadim Kawwa, Nanshan Li, naozin555,
Nate Parsons, Neal Fultz, Nic Annau, Nicolas Hug, Nicolas Miller, Nico
Stefani, Nigel Bosch, Nikita Titov, Nodar Okroshiashvili, Norbert
Preining, novaya, Ogbonna Chibuike Stephen, OGordon100, Oliver Pfaffel,
Olivier Grisel, Oras Phongpanangam, Pablo Duque, Pablo Ibieta-Jimenez,
Patric Lacouth, Paulo S. Costa, Paweł Olszewski, Peter Dye, PierreAttard,
Pierre-Yves Le Borgne, PranayAnchuri, Prince Canuma, putschblos,
qdeffense, RamyaNP, ranjanikrishnan, Ray Bell, Rene Jean Corneille,
Reshama Shaikh, ricardojnf, RichardScottOZ, Rodion Martynov, Rohan Paul,
Roman Lutz, Roman Yurchak, Samuel Brice, Sandy Khosasi, Sean Benhur J,
Sebastian Flores, Sebastian Pölsterl, Shao Yang Hong, shinehide, shinnar,
shivamgargsya, Shooter23, Shuhei Kayawari, Shyam Desai, simonamaggio,
Sina Tootoonian, solosilence, Steven Kolawole, Steve Stagg, Surya
Prakash, swpease, Sylvain Marié, Takeshi Oura, Terence Honles, TFiFiE,
Thomas A Caswell, Thomas J. Fan, Tim Gates, TimotheeMathieu, Timothy
Wolodzko, Tim Vink, t-jakubek, t-kusanagi, tliu68, Tobias Uhmann,
tom1092, Tomás Moreyra, Tomás Ronald Hughes, Tom Dupré la Tour, Tommaso
Di Noto, Tomohiro Endo, TONY GEORGE, Toshihiro NAKAE, tsuga, Uttam kumar,
vadim-ushtanit, Vangelis Gkiastas, Venkatachalam N, Vilém Zouhar,
Vinicius Rios Fuck, Vlasovets, waijean, Whidou, xavier dupré, xiaoyuchai,
Yasmeen Alsaedy, yoch, Yosuke KOBAYASHI, Yu Feng, YusukeNagasaka,
yzhenman, Zero, ZeyuSun, ZhaoweiWang, Zito, Zito Relova.
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_0_23:
============
Version 0.23
============
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_0_23_0.py`.
.. include:: changelog_legend.inc
.. _changes_0_23_2:
Version 0.23.2
==============
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Fix| ``inertia_`` attribute of :class:`cluster.KMeans` and
:class:`cluster.MiniBatchKMeans`.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot guarantee that this list is complete.)
Changelog
---------
:mod:`sklearn.cluster`
......................
- |Fix| Fixed a bug in :class:`cluster.KMeans` where rounding errors could
  prevent convergence from being declared when `tol=0`. :pr:`17959` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in :class:`cluster.KMeans` and
:class:`cluster.MiniBatchKMeans` where the reported inertia was incorrectly
weighted by the sample weights. :pr:`17848` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in :class:`cluster.MeanShift` with `bin_seeding=True`. When
the estimated bandwidth is 0, the behavior is equivalent to
`bin_seeding=False`.
:pr:`17742` by :user:`Jeremie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in :class:`cluster.AffinityPropagation` that gave
  incorrect clusters when the array dtype was float32.
  :pr:`17995` by :user:`Thomaz Santana <Wikilicious>` and
  :user:`Amanda Dsouza <amy12xx>`.
:mod:`sklearn.decomposition`
............................
- |Fix| Fixed a bug in
:func:`decomposition.MiniBatchDictionaryLearning.partial_fit` which should
update the dictionary by iterating only once over a mini-batch.
:pr:`17433` by :user:`Chiara Marmo <cmarmo>`.
- |Fix| Avoid overflows on Windows in
:func:`decomposition.IncrementalPCA.partial_fit` for large ``batch_size`` and
``n_samples`` values.
:pr:`17985` by :user:`Alan Butler <aldee153>` and
:user:`Amanda Dsouza <amy12xx>`.
:mod:`sklearn.ensemble`
.......................
- |Fix| Fixed a bug in `ensemble.MultinomialDeviance` where the
  average of logloss was incorrectly calculated as the sum of logloss.
  :pr:`17694` by :user:`Markus Rempfler <rempfler>` and
  :user:`Tsutomu Kusanagi <t-kusanagi2>`.
- |Fix| Fixes :class:`ensemble.StackingClassifier` and
:class:`ensemble.StackingRegressor` compatibility with estimators that
do not define `n_features_in_`. :pr:`17357` by `Thomas Fan`_.
:mod:`sklearn.feature_extraction`
.................................
- |Fix| Fixes bug in :class:`feature_extraction.text.CountVectorizer` where
sample order invariance was broken when `max_features` was set and features
had the same count. :pr:`18016` by `Thomas Fan`_, `Roman Yurchak`_, and
`Joel Nothman`_.
:mod:`sklearn.linear_model`
...........................
- |Fix| :func:`linear_model.lars_path` does not overwrite `X` when
`X_copy=True` and `Gram='auto'`. :pr:`17914` by `Thomas Fan`_.
:mod:`sklearn.metrics`
......................
- |Fix| Fixed a bug where :func:`metrics.pairwise_distances` would raise an
  error if ``metric='seuclidean'`` and ``X`` is not type ``np.float64``.
  :pr:`15730` by :user:`Forrest Koch <ForrestCKoch>`.
- |Fix| Fixed a bug in :func:`metrics.mean_squared_error` where the
average of multiple RMSE values was incorrectly calculated as the root of the
average of multiple MSE values.
:pr:`17309` by :user:`Swier Heeres <swierh>`.
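The distinction is easy to see with plain arithmetic; the numbers below are illustrative only, not taken from the report:

```python
import math

mse_per_output = [4.0, 16.0]  # per-output mean squared errors

# Buggy behaviour: root of the averaged MSEs
buggy = math.sqrt(sum(mse_per_output) / len(mse_per_output))

# Fixed behaviour: average of the per-output RMSEs
fixed = sum(math.sqrt(m) for m in mse_per_output) / len(mse_per_output)

print(buggy, fixed)  # the two disagree whenever the per-output MSEs differ
```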
:mod:`sklearn.pipeline`
.......................
- |Fix| :class:`pipeline.FeatureUnion` raises a deprecation warning when
`None` is included in `transformer_list`. :pr:`17360` by `Thomas Fan`_.
:mod:`sklearn.utils`
....................
- |Fix| Fix :func:`utils.estimator_checks.check_estimator` so that all test
cases support the `binary_only` estimator tag.
:pr:`17812` by :user:`Bruno Charron <brcharron>`.
.. _changes_0_23_1:
Version 0.23.1
==============
**May 18 2020**
Changelog
---------
:mod:`sklearn.cluster`
......................
- |Efficiency| :class:`cluster.KMeans` efficiency has been improved for very
  small datasets. In particular, it no longer spawns idle threads.
  :pr:`17210` and :pr:`17235` by :user:`Jeremie du Boisberranger <jeremiedbb>`.
- |Fix| Fixed a bug in :class:`cluster.KMeans` where the sample weights
provided by the user were modified in place. :pr:`17204` by
:user:`Jeremie du Boisberranger <jeremiedbb>`.
Miscellaneous
.............
- |Fix| Fixed a bug in the `repr` of third-party estimators that use a
  `**kwargs` parameter in their constructor, when `changed_only` is True,
  which is now the default. :pr:`17205` by `Nicolas Hug`_.
.. _changes_0_23:
Version 0.23.0
==============
**May 12 2020**
Enforcing keyword-only arguments
--------------------------------
In an effort to promote clear and non-ambiguous use of the library, most
constructor and function parameters are now expected to be passed as keyword
arguments (i.e. using the `param=value` syntax) instead of positional. To
ease the transition, a `FutureWarning` is raised if a keyword-only parameter
is used as positional. In version 1.0 (renaming of 0.25), these parameters
will be strictly keyword-only, and a `TypeError` will be raised.
:issue:`15005` by `Joel Nothman`_, `Adrin Jalali`_, `Thomas Fan`_, and
`Nicolas Hug`_. See `SLEP009
<https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep009/proposal.html>`_
for more details.
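The end state after the deprecation period can be sketched with Python's own keyword-only syntax: parameters declared after a bare `*` must be passed by name. The `make_classifier` function below is a hypothetical stand-in, not a scikit-learn API.

```python
def make_classifier(X, *, alpha=1.0, max_iter=100):
    """Parameters after the bare ``*`` are keyword-only."""
    return {"alpha": alpha, "max_iter": max_iter}

# Keyword usage is accepted:
params = make_classifier("data", alpha=0.5)

# Passing a keyword-only parameter positionally raises TypeError,
# which is what scikit-learn 1.0 does once the FutureWarning period ends.
try:
    make_classifier("data", 0.5)
    error = ""
except TypeError as exc:
    error = str(exc)
```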
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Fix| :class:`ensemble.BaggingClassifier`, :class:`ensemble.BaggingRegressor`,
and :class:`ensemble.IsolationForest`.
- |Fix| :class:`cluster.KMeans` with ``algorithm="elkan"`` and
``algorithm="full"``.
- |Fix| :class:`cluster.Birch`
- |Fix| `compose.ColumnTransformer.get_feature_names`
- |Fix| :func:`compose.ColumnTransformer.fit`
- |Fix| :func:`datasets.make_multilabel_classification`
- |Fix| :class:`decomposition.PCA` with `n_components='mle'`
- |Enhancement| :class:`decomposition.NMF` and
:func:`decomposition.non_negative_factorization` with float32 dtype input.
- |Fix| :func:`decomposition.KernelPCA.inverse_transform`
- |API| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`
- |Fix| ``estimator_samples_`` in :class:`ensemble.BaggingClassifier`,
:class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`
- |Fix| :class:`ensemble.StackingClassifier` and
:class:`ensemble.StackingRegressor` with `sample_weight`
- |Fix| :class:`gaussian_process.GaussianProcessRegressor`
- |Fix| :class:`linear_model.RANSACRegressor` with ``sample_weight``.
- |Fix| :class:`linear_model.RidgeClassifierCV`
- |Fix| :func:`metrics.mean_squared_error` with `squared` and
`multioutput='raw_values'`.
- |Fix| :func:`metrics.mutual_info_score` with negative scores.
- |Fix| :func:`metrics.confusion_matrix` with zero length `y_true` and `y_pred`
- |Fix| :class:`neural_network.MLPClassifier`
- |Fix| :class:`preprocessing.StandardScaler` with `partial_fit` and sparse
input.
- |Fix| :class:`preprocessing.Normalizer` with norm='max'
- |Fix| Any model using the `svm.libsvm` or the `svm.liblinear` solver,
including :class:`svm.LinearSVC`, :class:`svm.LinearSVR`,
:class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`,
:class:`svm.SVC`, :class:`svm.SVR`, :class:`linear_model.LogisticRegression`.
- |Fix| :class:`tree.DecisionTreeClassifier`, :class:`tree.ExtraTreeClassifier` and
:class:`ensemble.GradientBoostingClassifier` as well as ``predict`` method of
:class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeRegressor`, and
:class:`ensemble.GradientBoostingRegressor` and read-only float32 input in
``predict``, ``decision_path`` and ``predict_proba``.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot guarantee that this list is complete.)
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123456 is the *pull request* number, not the issue number.
:mod:`sklearn.cluster`
......................
- |Efficiency| The :class:`cluster.Birch` implementation of the predict method
  avoids a high memory footprint by calculating the distance matrix using
  a chunked scheme.
:pr:`16149` by :user:`Jeremie du Boisberranger <jeremiedbb>` and
:user:`Alex Shacked <alexshacked>`.
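The idea behind the chunked scheme can be sketched in pure Python (`nearest_center_chunked` is a hypothetical helper, not the library's implementation): only one small slice of the distance matrix is materialized at a time.

```python
def nearest_center_chunked(X, centers, chunk_size=2):
    """Assign each row of X to its nearest center, one chunk at a time."""
    labels = []
    for start in range(0, len(X), chunk_size):
        for row in X[start:start + chunk_size]:
            # distances from this row to every center (small and transient)
            d2 = [sum((a - b) ** 2 for a, b in zip(row, c)) for c in centers]
            labels.append(d2.index(min(d2)))
    return labels

X = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]
centers = [(0.0, 0.0), (5.0, 5.0)]
print(nearest_center_chunked(X, centers))
```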
- |Efficiency| |MajorFeature| The critical parts of :class:`cluster.KMeans`
have a more optimized implementation. Parallelism is now over the data
instead of over initializations allowing better scalability. :pr:`11950` by
:user:`Jeremie du Boisberranger <jeremiedbb>`.
- |Enhancement| :class:`cluster.KMeans` now supports sparse data when
  `algorithm="elkan"`. :pr:`11950` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.
- |Enhancement| :class:`cluster.AgglomerativeClustering` has a faster and more
memory efficient implementation of single linkage clustering.
:pr:`11514` by :user:`Leland McInnes <lmcinnes>`.
- |Fix| :class:`cluster.KMeans` with ``algorithm="elkan"`` now converges with
``tol=0`` as with the default ``algorithm="full"``. :pr:`16075` by
:user:`Erich Schubert <kno10>`.
- |Fix| Fixed a bug in :class:`cluster.Birch` where the `n_clusters` parameter
could not have a `np.int64` type. :pr:`16484`
by :user:`Jeremie du Boisberranger <jeremiedbb>`.
- |Fix| :class:`cluster.AgglomerativeClustering` now raises a specific error
  when the distance matrix is not square and `affinity='precomputed'`.
  :pr:`16257` by :user:`Simona Maggio <simonamaggio>`.
- |API| The ``n_jobs`` parameter of :class:`cluster.KMeans`,
:class:`cluster.SpectralCoclustering` and
:class:`cluster.SpectralBiclustering` is deprecated. They now use OpenMP
based parallelism. For more details on how to control the number of threads,
please refer to our :ref:`parallelism` notes. :pr:`11950` by
:user:`Jeremie du Boisberranger <jeremiedbb>`.
- |API| The ``precompute_distances`` parameter of :class:`cluster.KMeans` is
deprecated. It has no effect. :pr:`11950` by
:user:`Jeremie du Boisberranger <jeremiedbb>`.
- |API| The ``random_state`` parameter has been added to
:class:`cluster.AffinityPropagation`. :pr:`16801` by :user:`rcwoolston`
and :user:`Chiara Marmo <cmarmo>`.
:mod:`sklearn.compose`
......................
- |Efficiency| :class:`compose.ColumnTransformer` is now faster when working
  with dataframes and strings are used to specify subsets of the data for
  transformers. :pr:`16431` by `Thomas Fan`_.
- |Enhancement| :class:`compose.ColumnTransformer` method ``get_feature_names``
now supports `'passthrough'` columns, with the feature name being either
the column name for a dataframe, or `'xi'` for column index `i`.
:pr:`14048` by :user:`Lewis Ball <lrjball>`.
- |Fix| :class:`compose.ColumnTransformer` method ``get_feature_names`` now
returns correct results when one of the transformer steps applies on an
empty list of columns :pr:`15963` by `Roman Yurchak`_.
- |Fix| :func:`compose.ColumnTransformer.fit` will error when selecting
a column name that is not unique in the dataframe. :pr:`16431` by
`Thomas Fan`_.
:mod:`sklearn.datasets`
.......................
- |Efficiency| :func:`datasets.fetch_openml` has reduced memory usage because
it no longer stores the full dataset text stream in memory. :pr:`16084` by
`Joel Nothman`_.
- |Feature| :func:`datasets.fetch_california_housing` now supports
heterogeneous data using pandas by setting `as_frame=True`. :pr:`15950`
by :user:`Stephanie Andrews <gitsteph>` and
:user:`Reshama Shaikh <reshamas>`.
- |Feature| embedded dataset loaders :func:`datasets.load_breast_cancer`,
:func:`datasets.load_diabetes`, :func:`datasets.load_digits`,
:func:`datasets.load_iris`, :func:`datasets.load_linnerud` and
:func:`datasets.load_wine` now support loading as a pandas ``DataFrame`` by
setting `as_frame=True`. :pr:`15980` by :user:`wconnell` and
:user:`Reshama Shaikh <reshamas>`.
- |Enhancement| Added ``return_centers`` parameter in
:func:`datasets.make_blobs`, which can be used to return
centers for each cluster.
:pr:`15709` by :user:`shivamgargsya` and
:user:`Venkatachalam N <venkyyuvy>`.
- |Enhancement| Functions :func:`datasets.make_circles` and
:func:`datasets.make_moons` now accept two-element tuple.
:pr:`15707` by :user:`Maciej J Mikulski <mjmikulski>`.
- |Fix| :func:`datasets.make_multilabel_classification` now raises a
  `ValueError` when `n_classes < 1` or `length < 1`.
  :pr:`16006` by :user:`Rushabh Vasani <rushabh-v>`.
- |API| The `StreamHandler` was removed from `sklearn.logger` to avoid
double logging of messages in common cases where a handler is attached
to the root logger, and to follow the Python logging documentation
recommendation for libraries to leave the log message handling to
users and application code. :pr:`16451` by :user:`Christoph Deil <cdeil>`.
:mod:`sklearn.decomposition`
............................
- |Enhancement| :class:`decomposition.NMF` and
:func:`decomposition.non_negative_factorization` now preserves float32 dtype.
:pr:`16280` by :user:`Jeremie du Boisberranger <jeremiedbb>`.
- |Enhancement| :func:`decomposition.TruncatedSVD.transform` is now faster
  when given sparse ``csc`` matrices. :pr:`16837` by :user:`wornbb`.
- |Fix| :class:`decomposition.PCA` with a float `n_components` parameter will
  now exclusively choose the components whose cumulative explained variance
  exceeds `n_components`.
  :pr:`15669` by :user:`Krishna Chaitanya <krishnachaitanya9>`.
- |Fix| :class:`decomposition.PCA` with `n_components='mle'` now correctly
handles small eigenvalues, and does not infer 0 as the correct number of
components. :pr:`16224` by :user:`Lisa Schwetlick <lschwetlick>`, and
:user:`Gelavizh Ahmadi <gelavizh1>` and :user:`Marija Vlajic Wheeler
<marijavlajic>` and :pr:`16841` by `Nicolas Hug`_.
- |Fix| :class:`decomposition.KernelPCA` method ``inverse_transform`` now
applies the correct inverse transform to the transformed data. :pr:`16655`
by :user:`Lewis Ball <lrjball>`.
- |Fix| Fixed bug that was causing :class:`decomposition.KernelPCA` to sometimes
raise `invalid value encountered in multiply` during `fit`.
:pr:`16718` by :user:`Gui Miotto <gui-miotto>`.
- |Feature| Added `n_components_` attribute to :class:`decomposition.SparsePCA`
and :class:`decomposition.MiniBatchSparsePCA`. :pr:`16981` by
:user:`Mateusz Górski <Reksbril>`.
:mod:`sklearn.ensemble`
.......................
- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` now support
:term:`sample_weight`. :pr:`14696` by `Adrin Jalali`_ and `Nicolas Hug`_.
- |Feature| Early stopping in
:class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` is now determined with a
new `early_stopping` parameter instead of `n_iter_no_change`. Default value
is 'auto', which enables early stopping if there are at least 10,000
samples in the training set. :pr:`14516` by :user:`Johann Faouzi
<johannfaouzi>`.
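The `'auto'` rule reduces to a simple predicate on the training-set size; `enable_early_stopping` below is a sketch of the documented behavior, not the estimator's internal code.

```python
def enable_early_stopping(early_stopping, n_samples):
    # 'auto' turns early stopping on only for sufficiently large datasets
    if early_stopping == "auto":
        return n_samples >= 10_000
    return bool(early_stopping)

print(enable_early_stopping("auto", 500))     # small training set: disabled
print(enable_early_stopping("auto", 50_000))  # large training set: enabled
```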
- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` now support monotonic
constraints, useful when features are supposed to have a positive/negative
effect on the target. :pr:`15582` by `Nicolas Hug`_.
- |API| Added boolean `verbose` flag to classes:
:class:`ensemble.VotingClassifier` and :class:`ensemble.VotingRegressor`.
:pr:`16069` by :user:`Sam Bail <spbail>`,
:user:`Hanna Bruce MacDonald <hannahbrucemacdonald>`,
:user:`Reshama Shaikh <reshamas>`, and
:user:`Chiara Marmo <cmarmo>`.
- |Fix| Fixed a bug in :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` that would not respect the
`max_leaf_nodes` parameter if the criteria was reached at the same time as
the `max_depth` criteria. :pr:`16183` by `Nicolas Hug`_.
- |API| Changed the convention for the `max_depth` parameter of
:class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`. The depth now corresponds to
the number of edges to go from the root to the deepest leaf.
Stumps (trees with one split) are now allowed.
:pr:`16182` by :user:`Santhosh B <santhoshbala18>`
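Under the new convention the depth counts edges from the root to the deepest leaf, so a stump (a single split) has depth 1. A sketch with a minimal hypothetical tree type, not the estimator's internal structure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def depth(node: Node) -> int:
    """Number of edges from ``node`` down to its deepest leaf."""
    children = [c for c in (node.left, node.right) if c is not None]
    if not children:
        return 0  # a leaf contributes no edges
    return 1 + max(depth(c) for c in children)

stump = Node(left=Node(), right=Node())  # one split: allowed with max_depth=1
print(depth(stump))
```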
- |Fix| Fixed a bug in :class:`ensemble.BaggingClassifier`,
:class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`
where the attribute `estimators_samples_` did not generate the proper indices
used during `fit`.
:pr:`16437` by :user:`Jin-Hwan CHO <chofchof>`.
- |Fix| Fixed a bug in :class:`ensemble.StackingClassifier` and
:class:`ensemble.StackingRegressor` where the `sample_weight`
argument was not being passed to `cross_val_predict` when
evaluating the base estimators on cross-validation folds
to obtain the input to the meta estimator.
:pr:`16539` by :user:`Bill DeRose <wderose>`.
- |Feature| Added additional option `loss="poisson"` to
:class:`ensemble.HistGradientBoostingRegressor`, which adds Poisson deviance
with log-link useful for modeling count data.
:pr:`16692` by :user:`Christian Lorentzen <lorentzenchr>`
- |Fix| Fixed a bug where :class:`ensemble.HistGradientBoostingRegressor` and
:class:`ensemble.HistGradientBoostingClassifier` would fail with multiple
calls to fit when `warm_start=True`, `early_stopping=True`, and there is no
validation set. :pr:`16663` by `Thomas Fan`_.
:mod:`sklearn.feature_extraction`
.................................
- |Efficiency| :class:`feature_extraction.text.CountVectorizer` now sorts
  features after pruning them by document frequency. This improves performance
  for datasets with large vocabularies combined with ``min_df`` or ``max_df``.
:pr:`15834` by :user:`Santiago M. Mola <smola>`.
:mod:`sklearn.feature_selection`
................................
- |Enhancement| Added support for multioutput data in
:class:`feature_selection.RFE` and :class:`feature_selection.RFECV`.
:pr:`16103` by :user:`Divyaprabha M <divyaprabha123>`.
- |API| Adds :class:`feature_selection.SelectorMixin` back to public API.
:pr:`16132` by :user:`trimeta`.
:mod:`sklearn.gaussian_process`
...............................
- |Enhancement| :func:`gaussian_process.kernels.Matern` returns the RBF kernel when ``nu=np.inf``.
:pr:`15503` by :user:`Sam Dixon <sam-dixon>`.
- |Fix| Fixed bug in :class:`gaussian_process.GaussianProcessRegressor` that
caused predicted standard deviations to only be between 0 and 1 when
WhiteKernel is not used. :pr:`15782`
by :user:`plgreenLIRU`.
:mod:`sklearn.impute`
.....................
- |Enhancement| :class:`impute.IterativeImputer` accepts both scalar and array-like inputs for
``max_value`` and ``min_value``. Array-like inputs allow a different max and min to be specified
for each feature. :pr:`16403` by :user:`Narendra Mukherjee <narendramukherjee>`.
- |Enhancement| :class:`impute.SimpleImputer`, :class:`impute.KNNImputer`, and
  :class:`impute.IterativeImputer` accept pandas' nullable integer dtype with
  missing values. :pr:`16508` by `Thomas Fan`_.
:mod:`sklearn.inspection`
.........................
- |Feature| :func:`inspection.partial_dependence` and
`inspection.plot_partial_dependence` now support the fast 'recursion'
method for :class:`ensemble.RandomForestRegressor` and
:class:`tree.DecisionTreeRegressor`. :pr:`15864` by
`Nicolas Hug`_.
:mod:`sklearn.linear_model`
...........................
- |MajorFeature| Added generalized linear models (GLM) with non normal error
distributions, including :class:`linear_model.PoissonRegressor`,
:class:`linear_model.GammaRegressor` and :class:`linear_model.TweedieRegressor`
which use Poisson, Gamma and Tweedie distributions respectively.
:pr:`14300` by :user:`Christian Lorentzen <lorentzenchr>`, `Roman Yurchak`_,
and `Olivier Grisel`_.
- |MajorFeature| Support of `sample_weight` in
:class:`linear_model.ElasticNet` and :class:`linear_model.Lasso` for dense
feature matrix `X`. :pr:`15436` by :user:`Christian Lorentzen
<lorentzenchr>`.
- |Efficiency| :class:`linear_model.RidgeCV` and
  :class:`linear_model.RidgeClassifierCV` no longer allocate a
  potentially large array to store dual coefficients for all hyperparameters
  during their `fit`, nor an array to store all error or LOO predictions unless
  `store_cv_values` is `True`.
  :pr:`15652` by :user:`Jérôme Dockès <jeromedockes>`.
- |Enhancement| :class:`linear_model.LassoLars` and
:class:`linear_model.Lars` now support a `jitter` parameter that adds
random noise to the target. This might help with stability in some edge
cases. :pr:`15179` by :user:`angelaambroz`.
- |Fix| Fixed a bug where if a `sample_weight` parameter was passed to the fit
method of :class:`linear_model.RANSACRegressor`, it would not be passed to
the wrapped `base_estimator` during the fitting of the final model.
:pr:`15773` by :user:`Jeremy Alexandre <J-A16>`.
- |Fix| Add `best_score_` attribute to :class:`linear_model.RidgeCV` and
:class:`linear_model.RidgeClassifierCV`.
:pr:`15655` by :user:`Jérôme Dockès <jeromedockes>`.
- |Fix| Fixed a bug in :class:`linear_model.RidgeClassifierCV` when a
  specific scoring strategy was passed. Previously, the internal estimator
  output scores instead of predictions.
  :pr:`14848` by :user:`Venkatachalam N <venkyyuvy>`.
- |Fix| :class:`linear_model.LogisticRegression` will now avoid an unnecessary
  iteration when `solver='newton-cg'` by checking whether the maximum of
  `absgrad` is less than or equal to `tol`, instead of strictly less than, in
  `utils.optimize._newton_cg`.
  :pr:`16266` by :user:`Rushabh Vasani <rushabh-v>`.
- |API| Deprecated public attributes `standard_coef_`, `standard_intercept_`,
`average_coef_`, and `average_intercept_` in
:class:`linear_model.SGDClassifier`,
:class:`linear_model.SGDRegressor`,
:class:`linear_model.PassiveAggressiveClassifier`,
:class:`linear_model.PassiveAggressiveRegressor`.
:pr:`16261` by :user:`Carlos Brandt <chbrandt>`.
- |Fix| |Efficiency| :class:`linear_model.ARDRegression` is more stable and
much faster when `n_samples > n_features`. It can now scale to hundreds of
thousands of samples. The stability fix might imply changes in the number
of non-zero coefficients and in the predicted output. :pr:`16849` by
`Nicolas Hug`_.
- |Fix| Fixed a bug in :class:`linear_model.ElasticNetCV`,
:class:`linear_model.MultiTaskElasticNetCV`, :class:`linear_model.LassoCV`
and :class:`linear_model.MultiTaskLassoCV` where fitting would fail when
using joblib loky backend. :pr:`14264` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Efficiency| Speed up :class:`linear_model.MultiTaskLasso`,
  :class:`linear_model.MultiTaskLassoCV`, :class:`linear_model.MultiTaskElasticNet`,
  :class:`linear_model.MultiTaskElasticNetCV` by avoiding slower
  BLAS Level 2 calls on small arrays.
  :pr:`17021` by :user:`Alex Gramfort <agramfort>` and
  :user:`Mathurin Massias <mathurinm>`.
:mod:`sklearn.metrics`
......................
- |Enhancement| :func:`metrics.pairwise_distances_chunked` now allows
its ``reduce_func`` to not have a return value, enabling in-place operations.
:pr:`16397` by `Joel Nothman`_.
- |Fix| Fixed a bug in :func:`metrics.mean_squared_error` to not ignore
argument `squared` when argument `multioutput='raw_values'`.
:pr:`16323` by :user:`Rushabh Vasani <rushabh-v>`
- |Fix| Fixed a bug in :func:`metrics.mutual_info_score` where negative
scores could be returned. :pr:`16362` by `Thomas Fan`_.
- |Fix| Fixed a bug in :func:`metrics.confusion_matrix` that would raise
an error when `y_true` and `y_pred` were length zero and `labels` was
not `None`. In addition, we raise an error when an empty list is given to
the `labels` parameter.
:pr:`16442` by :user:`Kyle Parsons <parsons-kyle-89>`.
- |API| Changed the formatting of values in
:meth:`metrics.ConfusionMatrixDisplay.plot` and
`metrics.plot_confusion_matrix` to pick the shorter format (either '2g'
or 'd'). :pr:`16159` by :user:`Rick Mackenbach <Rick-Mackenbach>` and
`Thomas Fan`_.
- |API| From version 0.25, :func:`metrics.pairwise_distances` will no
longer automatically compute the ``VI`` parameter for Mahalanobis distance
and the ``V`` parameter for seuclidean distance if ``Y`` is passed. The user
will be expected to compute this parameter on the training data of their
choice and pass it to `pairwise_distances`. :pr:`16993` by `Joel Nothman`_.
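After the deprecation, users compute the per-feature variances on training data of their choice and pass them explicitly. A stdlib sketch of the standardized Euclidean distance itself (the hypothetical `seuclidean` helper here mirrors `scipy.spatial.distance.seuclidean`):

```python
import math

def variances(rows):
    """Per-feature population variance of the training data (the ``V`` parameter)."""
    n = len(rows)
    out = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        mean = sum(col) / n
        out.append(sum((x - mean) ** 2 for x in col) / n)
    return out

def seuclidean(u, v, V):
    return math.sqrt(sum((a - b) ** 2 / var for a, b, var in zip(u, v, V)))

train = [(0.0, 0.0), (2.0, 4.0)]
V = variances(train)  # computed once, on the training data
print(V, seuclidean((0.0, 0.0), (2.0, 4.0), V))
```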
:mod:`sklearn.model_selection`
..............................
- |Enhancement| :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` now yield stack trace information
  in fit-failure warning messages, in addition to the previously emitted
  exception type and details.
  :pr:`15622` by :user:`Gregory Morse <GregoryMorse>`.
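The general pattern of attaching a traceback to a warning can be sketched with the stdlib (`fit_and_warn` is a hypothetical wrapper, not the search classes' code):

```python
import traceback
import warnings

def fit_and_warn(fit):
    """Run ``fit`` and warn with the full traceback if it fails."""
    try:
        return fit()
    except Exception:
        warnings.warn("Estimator fit failed. Traceback:\n" + traceback.format_exc())
        return None

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    fit_and_warn(lambda: 1 / 0)

message = str(caught[0].message)
print("ZeroDivisionError" in message)
```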
- |Fix| :func:`model_selection.cross_val_predict` supports
`method="predict_proba"` when `y=None`. :pr:`15918` by
:user:`Luca Kubin <lkubin>`.
- |Fix| `model_selection.fit_grid_point` is deprecated in 0.23 and will
  be removed in 0.25. :pr:`16401` by
  :user:`Arie Pratama Sutiono <ariepratama>`.
:mod:`sklearn.multioutput`
..........................
- |Feature| :func:`multioutput.MultiOutputRegressor.fit` and
  :func:`multioutput.MultiOutputClassifier.fit` can now accept `fit_params`
  to pass to the `estimator.fit` method of each step. :issue:`15953`
  :pr:`15959` by :user:`Ke Huang <huangk10>`.
- |Enhancement| :class:`multioutput.RegressorChain` now supports `fit_params`
for `base_estimator` during `fit`.
:pr:`16111` by :user:`Venkatachalam N <venkyyuvy>`.
:mod:`sklearn.naive_bayes`
.............................
- |Fix| A correctly formatted error message is shown in
:class:`naive_bayes.CategoricalNB` when the number of features in the input
differs between `predict` and `fit`.
:pr:`16090` by :user:`Madhura Jayaratne <madhuracj>`.
:mod:`sklearn.neural_network`
.............................
- |Efficiency| :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor` have a reduced memory footprint when
  using stochastic solvers, `'sgd'` or `'adam'`, and `shuffle=True`.
  :pr:`14075` by :user:`meyer89`.
- |Fix| Increases the numerical stability of the logistic loss function in
:class:`neural_network.MLPClassifier` by clipping the probabilities.
:pr:`16117` by `Thomas Fan`_.
:mod:`sklearn.inspection`
.........................
- |Enhancement| :class:`inspection.PartialDependenceDisplay` now exposes the
deciles lines as attributes so they can be hidden or customized. :pr:`15785`
by `Nicolas Hug`_
:mod:`sklearn.preprocessing`
............................
- |Feature| argument `drop` of :class:`preprocessing.OneHotEncoder`
will now accept value 'if_binary' and will drop the first category of
each feature with two categories. :pr:`16245`
by :user:`Rushabh Vasani <rushabh-v>`.
- |Enhancement| :class:`preprocessing.OneHotEncoder`'s `drop_idx_` ndarray
can now contain `None`, where `drop_idx_[i] = None` means that no category
is dropped for index `i`. :pr:`16585` by :user:`Chiara Marmo <cmarmo>`.
- |Enhancement| :class:`preprocessing.MaxAbsScaler`,
:class:`preprocessing.MinMaxScaler`, :class:`preprocessing.StandardScaler`,
:class:`preprocessing.PowerTransformer`,
:class:`preprocessing.QuantileTransformer`,
:class:`preprocessing.RobustScaler` now supports pandas' nullable integer
dtype with missing values. :pr:`16508` by `Thomas Fan`_.
- |Efficiency| :class:`preprocessing.OneHotEncoder` is now faster at
transforming. :pr:`15762` by `Thomas Fan`_.
- |Fix| Fix a bug in :class:`preprocessing.StandardScaler` which was incorrectly
computing statistics when calling `partial_fit` on sparse inputs.
:pr:`16466` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fix a bug in :class:`preprocessing.Normalizer` with norm='max',
which was not taking the absolute value of the maximum values before
normalizing the vectors. :pr:`16632` by
:user:`Maura Pintor <Maupin1991>` and :user:`Battista Biggio <bbiggio>`.
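The fix matters whenever the largest-magnitude entry is negative; a minimal sketch (`max_normalize` is a hypothetical stand-in for the estimator's max-norm branch):

```python
def max_normalize(vector):
    """Scale so the largest *absolute* value has magnitude 1."""
    scale = max(abs(x) for x in vector)  # the fix: take absolute values first
    return [x / scale for x in vector]

# Before the fix the plain maximum (1.0) was used as the scale,
# leaving the vector un-normalized.
print(max_normalize([1.0, -4.0]))
```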
:mod:`sklearn.semi_supervised`
..............................
- |Fix| :class:`semi_supervised.LabelSpreading` and
:class:`semi_supervised.LabelPropagation` avoids divide by zero warnings
when normalizing `label_distributions_`. :pr:`15946` by :user:`ngshya`.
:mod:`sklearn.svm`
..................
- |Fix| |Efficiency| Improved ``libsvm`` and ``liblinear`` random number
generators used to randomly select coordinates in the coordinate descent
  algorithms. Platform-dependent C ``rand()`` was used, which is only able to
  generate numbers up to ``32767`` on the Windows platform (see this `blog
  post <https://codeforces.com/blog/entry/61587>`_) and also has poor
randomization power as suggested by `this presentation
<https://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful>`_.
  It was replaced with C++11 ``mt19937``, a Mersenne Twister that correctly
  generates 31-bit/63-bit random numbers on all platforms. In addition, the
crude "modulo" postprocessor used to get a random number in a bounded
interval was replaced by the tweaked Lemire method as suggested by `this blog
post <http://www.pcg-random.org/posts/bounded-rands.html>`_.
Any model using the `svm.libsvm` or the `svm.liblinear` solver,
including :class:`svm.LinearSVC`, :class:`svm.LinearSVR`,
:class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`,
:class:`svm.SVC`, :class:`svm.SVR`, :class:`linear_model.LogisticRegression`,
is affected. In particular users can expect a better convergence when the
number of samples (LibSVM) or the number of features (LibLinear) is large.
:pr:`13511` by :user:`Sylvain Marié <smarie>`.
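Lemire's bounded-rand method replaces the biased ``rand() % n`` with a multiply-and-shift plus a cheap rejection test. A Python transcription of the 32-bit algorithm (illustrative only; the shipped code is C++):

```python
import random

def bounded_rand(rng, n, bits=32):
    """Unbiased integer in [0, n) drawn from a ``bits``-wide uniform ``rng``."""
    mask = (1 << bits) - 1
    x = rng()
    m = x * n
    low = m & mask
    if low < n:
        # reject the tiny sliver of values that would bias the result
        threshold = ((1 << bits) - n) % n
        while low < threshold:
            x = rng()
            m = x * n
            low = m & mask
    return m >> bits

random.seed(0)
samples = [bounded_rand(lambda: random.getrandbits(32), 10) for _ in range(1000)]
print(min(samples), max(samples))
```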
- |Fix| Fix use of custom kernels not taking float entries, such as string
  kernels, in :class:`svm.SVC` and :class:`svm.SVR`. Note that custom kernels
  are now expected to validate their input where they previously received
  valid numeric arrays.
  :pr:`11296` by `Alexandre Gramfort`_ and :user:`Georgi Peev <georgipeev>`.
- |API| :class:`svm.SVR` and :class:`svm.OneClassSVM` attributes, `probA_` and
`probB_`, are now deprecated as they were not useful. :pr:`15558` by
`Thomas Fan`_.
:mod:`sklearn.tree`
...................
- |Fix| :func:`tree.plot_tree` `rotate` parameter was unused and has been
deprecated.
:pr:`15806` by :user:`Chiara Marmo <cmarmo>`.
- |Fix| Fix support of read-only float32 array input in ``predict``,
``decision_path`` and ``predict_proba`` methods of
:class:`tree.DecisionTreeClassifier`, :class:`tree.ExtraTreeClassifier` and
:class:`ensemble.GradientBoostingClassifier` as well as ``predict`` method of
:class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeRegressor`, and
:class:`ensemble.GradientBoostingRegressor`.
:pr:`16331` by :user:`Alexandre Batisse <batalex>`.
:mod:`sklearn.utils`
....................
- |MajorFeature| Estimators can now be displayed with a rich html
representation. This can be enabled in Jupyter notebooks by setting
`display='diagram'` in :func:`~sklearn.set_config`. The raw html can be
returned by using :func:`utils.estimator_html_repr`.
:pr:`14180` by `Thomas Fan`_.
- |Enhancement| improve error message in :func:`utils.validation.column_or_1d`.
:pr:`15926` by :user:`Loïc Estève <lesteve>`.
- |Enhancement| add warning in :func:`utils.check_array` for
pandas sparse DataFrame.
:pr:`16021` by :user:`Rushabh Vasani <rushabh-v>`.
- |Enhancement| :func:`utils.check_array` now constructs a sparse
matrix from a pandas DataFrame that contains only `SparseArray` columns.
:pr:`16728` by `Thomas Fan`_.
- |Enhancement| :func:`utils.check_array` supports pandas'
nullable integer dtype with missing values when `force_all_finite` is set to
`False` or `'allow-nan'` in which case the data is converted to floating
point values where `pd.NA` values are replaced by `np.nan`. As a consequence,
all :mod:`sklearn.preprocessing` transformers that accept numeric inputs with
missing values represented as `np.nan` now also accept being directly fed
pandas dataframes with `pd.Int*` or `pd.Uint*` typed columns that use `pd.NA`
as a missing value marker. :pr:`16508` by `Thomas Fan`_.
- |API| Passing classes to :func:`utils.estimator_checks.check_estimator` and
:func:`utils.estimator_checks.parametrize_with_checks` is now deprecated,
and support for classes will be removed in 0.24. Pass instances instead.
:pr:`17032` by `Nicolas Hug`_.
- |API| The private utility `_safe_tags` in `utils.estimator_checks` was
removed, hence all tags should be obtained through `estimator._get_tags()`.
Note that Mixins like `RegressorMixin` must come *before* base classes
in the MRO for `_get_tags()` to work properly.
:pr:`16950` by `Nicolas Hug`_.
- |Fix| `utils.all_estimators` now only returns public estimators.
:pr:`15380` by `Thomas Fan`_.
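The rich HTML representation added in :pr:`14180` can be sketched as follows;
outside a notebook, :func:`utils.estimator_html_repr` returns the raw HTML
string directly (the pipeline here is an arbitrary example):

```python
from sklearn import set_config
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import estimator_html_repr

# Enable the diagram display used when rendering in Jupyter notebooks.
set_config(display="diagram")

pipe = make_pipeline(StandardScaler(), LogisticRegression())
# Obtain the raw HTML representation as a string.
html = estimator_html_repr(pipe)
```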
Miscellaneous
.............
- |MajorFeature| Adds an HTML representation of estimators to be shown in
a Jupyter notebook or JupyterLab. This visualization is activated by setting the
`display` option in :func:`sklearn.set_config`. :pr:`14180` by
`Thomas Fan`_.
- |Enhancement| ``scikit-learn`` now works with ``mypy`` without errors.
:pr:`16726` by `Roman Yurchak`_.
- |API| Most estimators now expose an `n_features_in_` attribute. This
attribute is equal to the number of features passed to the `fit` method.
See `SLEP010
<https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep010/proposal.html>`_
for details. :pr:`16112` by `Nicolas Hug`_.
- |API| Estimators now have a `requires_y` tag, which is False by default
except for estimators that inherit from `~sklearn.base.RegressorMixin` or
`~sklearn.base.ClassifierMixin`. This tag is used to ensure that a proper
error message is raised when y was expected but None was passed.
:pr:`16622` by `Nicolas Hug`_.
- |API| The default setting `print_changed_only` has been changed from False
to True. This means that the `repr` of estimators is now more concise and
only shows the parameters whose default value has been changed when
printing an estimator. You can restore the previous behaviour by using
`sklearn.set_config(print_changed_only=False)`. Also, note that it is
always possible to quickly inspect the parameters of any estimator using
`est.get_params(deep=False)`. :pr:`17061` by `Nicolas Hug`_.
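A short sketch of the `n_features_in_` attribute and the new
`print_changed_only` default described above, using an arbitrary `Ridge` fit
on toy data:

```python
import numpy as np
from sklearn import set_config
from sklearn.linear_model import Ridge

set_config(print_changed_only=True)  # the new default in 0.23

X = np.arange(12, dtype=float).reshape(4, 3)
y = np.array([0.0, 1.0, 2.0, 3.0])
est = Ridge(alpha=0.5).fit(X, y)

# n_features_in_ records the number of features passed to fit (SLEP010).
n = est.n_features_in_  # 3

# The concise repr lists only parameters changed from their defaults.
r = repr(est)  # alpha appears; unchanged parameters do not
```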
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 0.22, including:
Abbie Popa, Adrin Jalali, Aleksandra Kocot, Alexandre Batisse, Alexandre
Gramfort, Alex Henrie, Alex Itkes, Alex Liang, alexshacked, Alonso Silva
Allende, Ana Casado, Andreas Mueller, Angela Ambroz, Ankit810, Arie Pratama
Sutiono, Arunav Konwar, Baptiste Maingret, Benjamin Beier Liu, bernie gray,
Bharathi Srinivasan, Bharat Raghunathan, Bibhash Chandra Mitra, Brian Wignall,
brigi, Brigitta Sipőcz, Carlos H Brandt, CastaChick, castor, cgsavard, Chiara
Marmo, Chris Gregory, Christian Kastner, Christian Lorentzen, Corrie
Bartelheimer, Daniël van Gelder, Daphne, David Breuer, david-cortes, dbauer9,
Divyaprabha M, Edward Qian, Ekaterina Borovikova, ELNS, Emily Taylor, Erich
Schubert, Eric Leung, Evgeni Chasnovski, Fabiana, Facundo Ferrín, Fan,
Franziska Boenisch, Gael Varoquaux, Gaurav Sharma, Geoffrey Bolmier, Georgi
Peev, gholdman1, Gonthier Nicolas, Gregory Morse, Gregory R. Lee, Guillaume
Lemaitre, Gui Miotto, Hailey Nguyen, Hanmin Qin, Hao Chun Chang, HaoYin, Hélion
du Mas des Bourboux, Himanshu Garg, Hirofumi Suzuki, huangk10, Hugo van
Kemenade, Hye Sung Jung, indecisiveuser, inderjeet, J-A16, Jérémie du
Boisberranger, Jin-Hwan CHO, JJmistry, Joel Nothman, Johann Faouzi, Jon Haitz
Legarreta Gorroño, Juan Carlos Alfaro Jiménez, judithabk6, jumon, Kathryn
Poole, Katrina Ni, Kesshi Jordan, Kevin Loftis, Kevin Markham,
krishnachaitanya9, Lam Gia Thuan, Leland McInnes, Lisa Schwetlick, lkubin, Loic
Esteve, lopusz, lrjball, lucgiffon, lucyleeow, Lucy Liu, Lukas Kemkes, Maciej J
Mikulski, Madhura Jayaratne, Magda Zielinska, maikia, Mandy Gu, Manimaran,
Manish Aradwad, Maren Westermann, Maria, Mariana Meireles, Marie Douriez,
Marielle, Mateusz Górski, mathurinm, Matt Hall, Maura Pintor, mc4229, meyer89,
m.fab, Michael Shoemaker, Michał Słapek, Mina Naghshhnejad, mo, Mohamed
Maskani, Mojca Bertoncelj, narendramukherjee, ngshya, Nicholas Won, Nicolas
Hug, nicolasservel, Niklas, @nkish, Noa Tamir, Oleksandr Pavlyk, olicairns,
Oliver Urs Lenz, Olivier Grisel, parsons-kyle-89, Paula, Pete Green, Pierre
Delanoue, pspachtholz, Pulkit Mehta, Qizhi Jiang, Quang Nguyen, rachelcjordan,
raduspaimoc, Reshama Shaikh, Riccardo Folloni, Rick Mackenbach, Ritchie Ng,
Roman Feldbauer, Roman Yurchak, Rory Hartong-Redden, Rüdiger Busche, Rushabh
Vasani, Sambhav Kothari, Samesh Lakhotia, Samuel Duan, SanthoshBala18, Santiago
M. Mola, Sarat Addepalli, scibol, Sebastian Kießling, SergioDSR, Sergul Aydore,
Shiki-H, shivamgargsya, SHUBH CHATTERJEE, Siddharth Gupta, simonamaggio,
smarie, Snowhite, stareh, Stephen Blystone, Stephen Marsh, Sunmi Yoon,
SylvainLan, talgatomarov, tamirlan1, th0rwas, theoptips, Thomas J Fan, Thomas
Li, Thomas Schmitt, Tim Nonner, Tim Vink, Tiphaine Viard, Tirth Patel, Titus
Christian, Tom Dupré la Tour, trimeta, Vachan D A, Vandana Iyer, Venkatachalam
N, waelbenamara, wconnell, wderose, wenliwyan, Windber, wornbb, Yu-Hang "Maxin"
Tang | scikit-learn | include contributors rst currentmodule sklearn release notes 0 23 Version 0 23 For a short description of the main highlights of the release please refer to ref sphx glr auto examples release highlights plot release highlights 0 23 0 py include changelog legend inc changes 0 23 2 Version 0 23 2 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Fix inertia attribute of class cluster KMeans and class cluster MiniBatchKMeans Details are listed in the changelog below While we are trying to better inform users by providing this information we cannot assure that this list is complete Changelog mod sklearn cluster Fix Fixed a bug in class cluster KMeans where rounding errors could prevent convergence to be declared when tol 0 pr 17959 by user J r mie du Boisberranger jeremiedbb Fix Fixed a bug in class cluster KMeans and class cluster MiniBatchKMeans where the reported inertia was incorrectly weighted by the sample weights pr 17848 by user J r mie du Boisberranger jeremiedbb Fix Fixed a bug in class cluster MeanShift with bin seeding True When the estimated bandwidth is 0 the behavior is equivalent to bin seeding False pr 17742 by user Jeremie du Boisberranger jeremiedbb Fix Fixed a bug in class cluster AffinityPropagation that gives incorrect clusters when the array dtype is float32 pr 17995 by user Thomaz Santana Wikilicious and user Amanda Dsouza amy12xx mod sklearn decomposition Fix Fixed a bug in func decomposition MiniBatchDictionaryLearning partial fit which should update the dictionary by iterating only once over a mini batch pr 17433 by user Chiara Marmo cmarmo Fix Avoid overflows on Windows in func decomposition IncrementalPCA partial fit for large batch size and n samples values pr 17985 by user Alan Butler aldee153 and user 
Amanda Dsouza amy12xx mod sklearn ensemble Fix Fixed bug in ensemble MultinomialDeviance where the average of logloss was incorrectly calculated as sum of logloss pr 17694 by user Markus Rempfler rempfler and user Tsutomu Kusanagi t kusanagi2 Fix Fixes class ensemble StackingClassifier and class ensemble StackingRegressor compatibility with estimators that do not define n features in pr 17357 by Thomas Fan mod sklearn feature extraction Fix Fixes bug in class feature extraction text CountVectorizer where sample order invariance was broken when max features was set and features had the same count pr 18016 by Thomas Fan Roman Yurchak and Joel Nothman mod sklearn linear model Fix func linear model lars path does not overwrite X when X copy True and Gram auto pr 17914 by Thomas Fan mod sklearn manifold Fix Fixed a bug where func metrics pairwise distances would raise an error if metric seuclidean and X is not type np float64 pr 15730 by user Forrest Koch ForrestCKoch mod sklearn metrics Fix Fixed a bug in func metrics mean squared error where the average of multiple RMSE values was incorrectly calculated as the root of the average of multiple MSE values pr 17309 by user Swier Heeres swierh mod sklearn pipeline Fix class pipeline FeatureUnion raises a deprecation warning when None is included in transformer list pr 17360 by Thomas Fan mod sklearn utils Fix Fix func utils estimator checks check estimator so that all test cases support the binary only estimator tag pr 17812 by user Bruno Charron brcharron changes 0 23 1 Version 0 23 1 May 18 2020 Changelog mod sklearn cluster Efficiency class cluster KMeans efficiency has been improved for very small datasets In particular it cannot spawn idle threads any more pr 17210 and pr 17235 by user Jeremie du Boisberranger jeremiedbb Fix Fixed a bug in class cluster KMeans where the sample weights provided by the user were modified in place pr 17204 by user Jeremie du Boisberranger jeremiedbb Miscellaneous Fix Fixed a bug in the 
repr of third party estimators that use a kwargs parameter in their constructor when changed only is True which is now the default pr 17205 by Nicolas Hug changes 0 23 Version 0 23 0 May 12 2020 Enforcing keyword only arguments In an effort to promote clear and non ambiguous use of the library most constructor and function parameters are now expected to be passed as keyword arguments i e using the param value syntax instead of positional To ease the transition a FutureWarning is raised if a keyword only parameter is used as positional In version 1 0 renaming of 0 25 these parameters will be strictly keyword only and a TypeError will be raised issue 15005 by Joel Nothman Adrin Jalali Thomas Fan and Nicolas Hug See SLEP009 https scikit learn enhancement proposals readthedocs io en latest slep009 proposal html for more details Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Fix class ensemble BaggingClassifier class ensemble BaggingRegressor and class ensemble IsolationForest Fix class cluster KMeans with algorithm elkan and algorithm full Fix class cluster Birch Fix compose ColumnTransformer get feature names Fix func compose ColumnTransformer fit Fix func datasets make multilabel classification Fix class decomposition PCA with n components mle Enhancement class decomposition NMF and func decomposition non negative factorization with float32 dtype input Fix func decomposition KernelPCA inverse transform API class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor Fix estimator samples in class ensemble BaggingClassifier class ensemble BaggingRegressor and class ensemble IsolationForest Fix class ensemble StackingClassifier and class ensemble StackingRegressor with sample weight Fix class gaussian process 
GaussianProcessRegressor Fix class linear model RANSACRegressor with sample weight Fix class linear model RidgeClassifierCV Fix func metrics mean squared error with squared and multioutput raw values Fix func metrics mutual info score with negative scores Fix func metrics confusion matrix with zero length y true and y pred Fix class neural network MLPClassifier Fix class preprocessing StandardScaler with partial fit and sparse input Fix class preprocessing Normalizer with norm max Fix Any model using the svm libsvm or the svm liblinear solver including class svm LinearSVC class svm LinearSVR class svm NuSVC class svm NuSVR class svm OneClassSVM class svm SVC class svm SVR class linear model LogisticRegression Fix class tree DecisionTreeClassifier class tree ExtraTreeClassifier and class ensemble GradientBoostingClassifier as well as predict method of class tree DecisionTreeRegressor class tree ExtraTreeRegressor and class ensemble GradientBoostingRegressor and read only float32 input in predict decision path and predict proba Details are listed in the changelog below While we are trying to better inform users by providing this information we cannot assure that this list is complete Changelog Entries should be grouped by module in alphabetic order and prefixed with one of the labels MajorFeature Feature Efficiency Enhancement Fix or API see whats new rst for descriptions Entries should be ordered by those labels e g Fix after Efficiency Changes not specific to a module should be listed under Multiple Modules or Miscellaneous Entries should end with pr 123456 by user Joe Bloggs joeongithub where 123456 is the pull request number not the issue number mod sklearn cluster Efficiency class cluster Birch implementation of the predict method avoids high memory footprint by calculating the distances matrix using a chunked scheme pr 16149 by user Jeremie du Boisberranger jeremiedbb and user Alex Shacked alexshacked Efficiency MajorFeature The critical parts of class cluster 
KMeans have a more optimized implementation Parallelism is now over the data instead of over initializations allowing better scalability pr 11950 by user Jeremie du Boisberranger jeremiedbb Enhancement class cluster KMeans now supports sparse data when solver elkan pr 11950 by user Jeremie du Boisberranger jeremiedbb Enhancement class cluster AgglomerativeClustering has a faster and more memory efficient implementation of single linkage clustering pr 11514 by user Leland McInnes lmcinnes Fix class cluster KMeans with algorithm elkan now converges with tol 0 as with the default algorithm full pr 16075 by user Erich Schubert kno10 Fix Fixed a bug in class cluster Birch where the n clusters parameter could not have a np int64 type pr 16484 by user Jeremie du Boisberranger jeremiedbb Fix class cluster AgglomerativeClustering add specific error when distance matrix is not square and affinity precomputed pr 16257 by user Simona Maggio simonamaggio API The n jobs parameter of class cluster KMeans class cluster SpectralCoclustering and class cluster SpectralBiclustering is deprecated They now use OpenMP based parallelism For more details on how to control the number of threads please refer to our ref parallelism notes pr 11950 by user Jeremie du Boisberranger jeremiedbb API The precompute distances parameter of class cluster KMeans is deprecated It has no effect pr 11950 by user Jeremie du Boisberranger jeremiedbb API The random state parameter has been added to class cluster AffinityPropagation pr 16801 by user rcwoolston and user Chiara Marmo cmarmo mod sklearn compose Efficiency class compose ColumnTransformer is now faster when working with dataframes and strings are used to specific subsets of data for transformers pr 16431 by Thomas Fan Enhancement class compose ColumnTransformer method get feature names now supports passthrough columns with the feature name being either the column name for a dataframe or xi for column index i pr 14048 by user Lewis Ball lrjball Fix 
class compose ColumnTransformer method get feature names now returns correct results when one of the transformer steps applies on an empty list of columns pr 15963 by Roman Yurchak Fix func compose ColumnTransformer fit will error when selecting a column name that is not unique in the dataframe pr 16431 by Thomas Fan mod sklearn datasets Efficiency func datasets fetch openml has reduced memory usage because it no longer stores the full dataset text stream in memory pr 16084 by Joel Nothman Feature func datasets fetch california housing now supports heterogeneous data using pandas by setting as frame True pr 15950 by user Stephanie Andrews gitsteph and user Reshama Shaikh reshamas Feature embedded dataset loaders func datasets load breast cancer func datasets load diabetes func datasets load digits func datasets load iris func datasets load linnerud and func datasets load wine now support loading as a pandas DataFrame by setting as frame True pr 15980 by user wconnell and user Reshama Shaikh reshamas Enhancement Added return centers parameter in func datasets make blobs which can be used to return centers for each cluster pr 15709 by user shivamgargsya and user Venkatachalam N venkyyuvy Enhancement Functions func datasets make circles and func datasets make moons now accept two element tuple pr 15707 by user Maciej J Mikulski mjmikulski Fix func datasets make multilabel classification now generates ValueError for arguments n classes 1 OR length 1 pr 16006 by user Rushabh Vasani rushabh v API The StreamHandler was removed from sklearn logger to avoid double logging of messages in common cases where a handler is attached to the root logger and to follow the Python logging documentation recommendation for libraries to leave the log message handling to users and application code pr 16451 by user Christoph Deil cdeil mod sklearn decomposition Enhancement class decomposition NMF and func decomposition non negative factorization now preserves float32 dtype pr 16280 by user 
Jeremie du Boisberranger jeremiedbb Enhancement func decomposition TruncatedSVD transform is now faster on given sparse csc matrices pr 16837 by user wornbb Fix class decomposition PCA with a float n components parameter will exclusively choose the components that explain the variance greater than n components pr 15669 by user Krishna Chaitanya krishnachaitanya9 Fix class decomposition PCA with n components mle now correctly handles small eigenvalues and does not infer 0 as the correct number of components pr 16224 by user Lisa Schwetlick lschwetlick and user Gelavizh Ahmadi gelavizh1 and user Marija Vlajic Wheeler marijavlajic and pr 16841 by Nicolas Hug Fix class decomposition KernelPCA method inverse transform now applies the correct inverse transform to the transformed data pr 16655 by user Lewis Ball lrjball Fix Fixed bug that was causing class decomposition KernelPCA to sometimes raise invalid value encountered in multiply during fit pr 16718 by user Gui Miotto gui miotto Feature Added n components attribute to class decomposition SparsePCA and class decomposition MiniBatchSparsePCA pr 16981 by user Mateusz G rski Reksbril mod sklearn ensemble MajorFeature class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor now support term sample weight pr 14696 by Adrin Jalali and Nicolas Hug Feature Early stopping in class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor is now determined with a new early stopping parameter instead of n iter no change Default value is auto which enables early stopping if there are at least 10 000 samples in the training set pr 14516 by user Johann Faouzi johannfaouzi MajorFeature class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor now support monotonic constraints useful when features are supposed to have a positive negative effect on the target pr 15582 by Nicolas Hug API Added boolean verbose flag to classes class 
ensemble VotingClassifier and class ensemble VotingRegressor pr 16069 by user Sam Bail spbail user Hanna Bruce MacDonald hannahbrucemacdonald user Reshama Shaikh reshamas and user Chiara Marmo cmarmo API Fixed a bug in class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor that would not respect the max leaf nodes parameter if the criteria was reached at the same time as the max depth criteria pr 16183 by Nicolas Hug Fix Changed the convention for max depth parameter of class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor The depth now corresponds to the number of edges to go from the root to the deepest leaf Stumps trees with one split are now allowed pr 16182 by user Santhosh B santhoshbala18 Fix Fixed a bug in class ensemble BaggingClassifier class ensemble BaggingRegressor and class ensemble IsolationForest where the attribute estimators samples did not generate the proper indices used during fit pr 16437 by user Jin Hwan CHO chofchof Fix Fixed a bug in class ensemble StackingClassifier and class ensemble StackingRegressor where the sample weight argument was not being passed to cross val predict when evaluating the base estimators on cross validation folds to obtain the input to the meta estimator pr 16539 by user Bill DeRose wderose Feature Added additional option loss poisson to class ensemble HistGradientBoostingRegressor which adds Poisson deviance with log link useful for modeling count data pr 16692 by user Christian Lorentzen lorentzenchr Fix Fixed a bug where class ensemble HistGradientBoostingRegressor and class ensemble HistGradientBoostingClassifier would fail with multiple calls to fit when warm start True early stopping True and there is no validation set pr 16663 by Thomas Fan mod sklearn feature extraction Efficiency class feature extraction text CountVectorizer now sorts features after pruning them by document frequency This improves performances for datasets with 
large vocabularies combined with min df or max df pr 15834 by user Santiago M Mola smola mod sklearn feature selection Enhancement Added support for multioutput data in class feature selection RFE and class feature selection RFECV pr 16103 by user Divyaprabha M divyaprabha123 API Adds class feature selection SelectorMixin back to public API pr 16132 by user trimeta mod sklearn gaussian process Enhancement func gaussian process kernels Matern returns the RBF kernel when nu np inf pr 15503 by user Sam Dixon sam dixon Fix Fixed bug in class gaussian process GaussianProcessRegressor that caused predicted standard deviations to only be between 0 and 1 when WhiteKernel is not used pr 15782 by user plgreenLIRU mod sklearn impute Enhancement class impute IterativeImputer accepts both scalar and array like inputs for max value and min value Array like inputs allow a different max and min to be specified for each feature pr 16403 by user Narendra Mukherjee narendramukherjee Enhancement class impute SimpleImputer class impute KNNImputer and class impute IterativeImputer accepts pandas nullable integer dtype with missing values pr 16508 by Thomas Fan mod sklearn inspection Feature func inspection partial dependence and inspection plot partial dependence now support the fast recursion method for class ensemble RandomForestRegressor and class tree DecisionTreeRegressor pr 15864 by Nicolas Hug mod sklearn linear model MajorFeature Added generalized linear models GLM with non normal error distributions including class linear model PoissonRegressor class linear model GammaRegressor and class linear model TweedieRegressor which use Poisson Gamma and Tweedie distributions respectively pr 14300 by user Christian Lorentzen lorentzenchr Roman Yurchak and Olivier Grisel MajorFeature Support of sample weight in class linear model ElasticNet and class linear model Lasso for dense feature matrix X pr 15436 by user Christian Lorentzen lorentzenchr Efficiency class linear model RidgeCV and 
class linear model RidgeClassifierCV now does not allocate a potentially large array to store dual coefficients for all hyperparameters during its fit nor an array to store all error or LOO predictions unless store cv values is True pr 15652 by user J r me Dock s jeromedockes Enhancement class linear model LassoLars and class linear model Lars now support a jitter parameter that adds random noise to the target This might help with stability in some edge cases pr 15179 by user angelaambroz Fix Fixed a bug where if a sample weight parameter was passed to the fit method of class linear model RANSACRegressor it would not be passed to the wrapped base estimator during the fitting of the final model pr 15773 by user Jeremy Alexandre J A16 Fix Add best score attribute to class linear model RidgeCV and class linear model RidgeClassifierCV pr 15655 by user J r me Dock s jeromedockes Fix Fixed a bug in class linear model RidgeClassifierCV to pass a specific scoring strategy Before the internal estimator outputs score instead of predictions pr 14848 by user Venkatachalam N venkyyuvy Fix class linear model LogisticRegression will now avoid an unnecessary iteration when solver newton cg by checking for inferior or equal instead of strictly inferior for maximum of absgrad and tol in utils optimize newton cg pr 16266 by user Rushabh Vasani rushabh v API Deprecated public attributes standard coef standard intercept average coef and average intercept in class linear model SGDClassifier class linear model SGDRegressor class linear model PassiveAggressiveClassifier class linear model PassiveAggressiveRegressor pr 16261 by user Carlos Brandt chbrandt Fix Efficiency class linear model ARDRegression is more stable and much faster when n samples n features It can now scale to hundreds of thousands of samples The stability fix might imply changes in the number of non zero coefficients and in the predicted output pr 16849 by Nicolas Hug Fix Fixed a bug in class linear model ElasticNetCV 
class linear model MultiTaskElasticNetCV class linear model LassoCV and class linear model MultiTaskLassoCV where fitting would fail when using joblib loky backend pr 14264 by user J r mie du Boisberranger jeremiedbb Efficiency Speed up class linear model MultiTaskLasso class linear model MultiTaskLassoCV class linear model MultiTaskElasticNet class linear model MultiTaskElasticNetCV by avoiding slower BLAS Level 2 calls on small arrays pr 17021 by user Alex Gramfort agramfort and user Mathurin Massias mathurinm mod sklearn metrics Enhancement func metrics pairwise distances chunked now allows its reduce func to not have a return value enabling in place operations pr 16397 by Joel Nothman Fix Fixed a bug in func metrics mean squared error to not ignore argument squared when argument multioutput raw values pr 16323 by user Rushabh Vasani rushabh v Fix Fixed a bug in func metrics mutual info score where negative scores could be returned pr 16362 by Thomas Fan Fix Fixed a bug in func metrics confusion matrix that would raise an error when y true and y pred were length zero and labels was not None In addition we raise an error when an empty list is given to the labels parameter pr 16442 by user Kyle Parsons parsons kyle 89 API Changed the formatting of values in meth metrics ConfusionMatrixDisplay plot and metrics plot confusion matrix to pick the shorter format either 2g or d pr 16159 by user Rick Mackenbach Rick Mackenbach and Thomas Fan API From version 0 25 func metrics pairwise distances will no longer automatically compute the VI parameter for Mahalanobis distance and the V parameter for seuclidean distance if Y is passed The user will be expected to compute this parameter on the training data of their choice and pass it to pairwise distances pr 16993 by Joel Nothman mod sklearn model selection Enhancement class model selection GridSearchCV and class model selection RandomizedSearchCV yields stack trace information in fit failed warning messages in addition to 
previously emitted type and details pr 15622 by user Gregory Morse GregoryMorse Fix func model selection cross val predict supports method predict proba when y None pr 15918 by user Luca Kubin lkubin Fix model selection fit grid point is deprecated in 0 23 and will be removed in 0 25 pr 16401 by user Arie Pratama Sutiono ariepratama mod sklearn multioutput Feature func multioutput MultiOutputRegressor fit and func multioutput MultiOutputClassifier fit now can accept fit params to pass to the estimator fit method of each step issue 15953 pr 15959 by user Ke Huang huangk10 Enhancement class multioutput RegressorChain now supports fit params for base estimator during fit pr 16111 by user Venkatachalam N venkyyuvy mod sklearn naive bayes Fix A correctly formatted error message is shown in class naive bayes CategoricalNB when the number of features in the input differs between predict and fit pr 16090 by user Madhura Jayaratne madhuracj mod sklearn neural network Efficiency class neural network MLPClassifier and class neural network MLPRegressor has reduced memory footprint when using stochastic solvers sgd or adam and shuffle True pr 14075 by user meyer89 Fix Increases the numerical stability of the logistic loss function in class neural network MLPClassifier by clipping the probabilities pr 16117 by Thomas Fan mod sklearn inspection Enhancement class inspection PartialDependenceDisplay now exposes the deciles lines as attributes so they can be hidden or customized pr 15785 by Nicolas Hug mod sklearn preprocessing Feature argument drop of class preprocessing OneHotEncoder will now accept value if binary and will drop the first category of each feature with two categories pr 16245 by user Rushabh Vasani rushabh v Enhancement class preprocessing OneHotEncoder s drop idx ndarray can now contain None where drop idx i None means that no category is dropped for index i pr 16585 by user Chiara Marmo cmarmo Enhancement class preprocessing MaxAbsScaler class preprocessing 
MinMaxScaler class preprocessing StandardScaler class preprocessing PowerTransformer class preprocessing QuantileTransformer class preprocessing RobustScaler now supports pandas nullable integer dtype with missing values pr 16508 by Thomas Fan Efficiency class preprocessing OneHotEncoder is now faster at transforming pr 15762 by Thomas Fan Fix Fix a bug in class preprocessing StandardScaler which was incorrectly computing statistics when calling partial fit on sparse inputs pr 16466 by user Guillaume Lemaitre glemaitre Fix Fix a bug in class preprocessing Normalizer with norm max which was not taking the absolute value of the maximum values before normalizing the vectors pr 16632 by user Maura Pintor Maupin1991 and user Battista Biggio bbiggio mod sklearn semi supervised Fix class semi supervised LabelSpreading and class semi supervised LabelPropagation avoids divide by zero warnings when normalizing label distributions pr 15946 by user ngshya mod sklearn svm Fix Efficiency Improved libsvm and liblinear random number generators used to randomly select coordinates in the coordinate descent algorithms Platform dependent C rand was used which is only able to generate numbers up to 32767 on windows platform see this blog post https codeforces com blog entry 61587 and also has poor randomization power as suggested by this presentation https channel9 msdn com Events GoingNative 2013 rand Considered Harmful It was replaced with C 11 mt19937 a Mersenne Twister that correctly generates 31bits 63bits random numbers on all platforms In addition the crude modulo postprocessor used to get a random number in a bounded interval was replaced by the tweaked Lemire method as suggested by this blog post http www pcg random org posts bounded rands html Any model using the svm libsvm or the svm liblinear solver including class svm LinearSVC class svm LinearSVR class svm NuSVC class svm NuSVR class svm OneClassSVM class svm SVC class svm SVR class linear model LogisticRegression is 
..
    This file maps contributor names to their URLs. It should mostly be used
    for core contributors, and occasionally for contributors who do not want
    their github page to be their URL target. Historically it was used to
    hyperlink all contributors' names, and ``:user:`` should now be preferred.
    It also defines other ReST substitutions.
.. role:: raw-html(raw)
:format: html
.. role:: raw-latex(raw)
:format: latex
.. |MajorFeature| replace:: :raw-html:`<span class="badge text-bg-success">Major Feature</span>` :raw-latex:`{\small\sc [Major Feature]}`
.. |Feature| replace:: :raw-html:`<span class="badge text-bg-success">Feature</span>` :raw-latex:`{\small\sc [Feature]}`
.. |Efficiency| replace:: :raw-html:`<span class="badge text-bg-info">Efficiency</span>` :raw-latex:`{\small\sc [Efficiency]}`
.. |Enhancement| replace:: :raw-html:`<span class="badge text-bg-info">Enhancement</span>` :raw-latex:`{\small\sc [Enhancement]}`
.. |Fix| replace:: :raw-html:`<span class="badge text-bg-danger">Fix</span>` :raw-latex:`{\small\sc [Fix]}`
.. |API| replace:: :raw-html:`<span class="badge text-bg-warning">API Change</span>` :raw-latex:`{\small\sc [API Change]}`
.. _Olivier Grisel: https://twitter.com/ogrisel
.. _Gael Varoquaux: http://gael-varoquaux.info
.. _Alexandre Gramfort: http://alexandre.gramfort.net
.. _Fabian Pedregosa: http://fa.bianp.net
.. _Mathieu Blondel: http://www.mblondel.org
.. _James Bergstra: http://www-etud.iro.umontreal.ca/~bergstrj/
.. _liblinear: https://www.csie.ntu.edu.tw/~cjlin/liblinear/
.. _Yaroslav Halchenko: http://www.onerussian.com/
.. _Vlad Niculae: https://vene.ro/
.. _Edouard Duchesnay: https://duchesnay.github.io/
.. _Peter Prettenhofer: https://sites.google.com/site/peterprettenhofer/
.. _Alexandre Passos: http://atpassos.me
.. _Nicolas Pinto: https://twitter.com/npinto
.. _Bertrand Thirion: https://team.inria.fr/parietal/bertrand-thirions-page
.. _Andreas Müller: https://amueller.github.io/
.. _Matthieu Perrot: http://brainvisa.info/biblio/lnao/en/Author/PERROT-M.html
.. _Jake Vanderplas: https://staff.washington.edu/jakevdp/
.. _Gilles Louppe: http://www.montefiore.ulg.ac.be/~glouppe/
.. _INRIA: https://www.inria.fr/
.. _Parietal Team: http://parietal.saclay.inria.fr/
.. _David Warde-Farley: http://www-etud.iro.umontreal.ca/~wardefar/
.. _Brian Holt: http://personal.ee.surrey.ac.uk/Personal/B.Holt
.. _Satrajit Ghosh: https://www.mit.edu/~satra/
.. _Robert Layton: https://twitter.com/robertlayton
.. _Scott White: https://twitter.com/scottblanc
.. _David Marek: https://davidmarek.cz/
.. _Christian Osendorfer: https://osdf.github.io
.. _Arnaud Joly: http://www.ajoly.org
.. _Rob Zinkov: https://www.zinkov.com/
.. _Joel Nothman: https://joelnothman.com/
.. _Nicolas Trésegnie: https://github.com/NicolasTr
.. _Kemal Eren: http://www.kemaleren.com
.. _Yann Dauphin: https://ynd.github.io/
.. _Yannick Schwartz: https://team.inria.fr/parietal/schwarty/
.. _Kyle Kastner: https://kastnerkyle.github.io/
.. _Daniel Nouri: http://danielnouri.org
.. _Manoj Kumar: https://manojbits.wordpress.com
.. _Luis Pedro Coelho: http://luispedro.org
.. _Fares Hedyati: http://www.eecs.berkeley.edu/~fareshed
.. _Antony Lee: https://www.ocf.berkeley.edu/~antonyl/
.. _Martin Billinger: https://tnsre.embs.org/author/martinbillinger/
.. _Matteo Visconti di Oleggio Castello: http://www.mvdoc.me
.. _Trevor Stephens: http://trevorstephens.com/
.. _Jan Hendrik Metzen: https://jmetzen.github.io/
.. _Will Dawson: http://www.dawsonresearch.com
.. _Andrew Tulloch: https://tullo.ch/
.. _Hanna Wallach: https://dirichlet.net/
.. _Yan Yi: http://seowyanyi.org
.. _Hervé Bredin: https://herve.niderb.fr/
.. _Eric Martin: http://www.ericmart.in
.. _Nicolas Goix: https://ngoix.github.io/
.. _Sebastian Raschka: https://sebastianraschka.com/
.. _Brian McFee: https://bmcfee.github.io
.. _Valentin Stolbunov: http://www.vstolbunov.com
.. _Jaques Grobler: https://github.com/jaquesgrobler
.. _Lars Buitinck: https://github.com/larsmans
.. _Loic Esteve: https://github.com/lesteve
.. _Noel Dawe: https://github.com/ndawe
.. _Raghav RV: https://github.com/raghavrv
.. _Tom Dupre la Tour: https://github.com/TomDLT
.. _Nelle Varoquaux: https://github.com/nellev
.. _Bing Tian Dai: https://github.com/btdai
.. _Dylan Werner-Meier: https://github.com/unautre
.. _Alyssa Batula: https://github.com/abatula
.. _Srivatsan Ramesh: https://github.com/srivatsan-ramesh
.. _Ron Weiss: https://www.ee.columbia.edu/~ronw/
.. _Kathleen Chen: https://github.com/kchen17
.. _Vincent Pham: https://github.com/vincentpham1991
.. _Denis Engemann: http://denis-engemann.de
.. _Anish Shah: https://github.com/AnishShah
.. _Neeraj Gangwar: http://neerajgangwar.in
.. _Arthur Mensch: https://amensch.fr
.. _Joris Van den Bossche: https://github.com/jorisvandenbossche
.. _Roman Yurchak: https://github.com/rth
.. _Hanmin Qin: https://github.com/qinhanmin2014
.. _Adrin Jalali: https://github.com/adrinjalali
.. _Thomas Fan: https://github.com/thomasjpfan
.. _Nicolas Hug: https://github.com/NicolasHug
.. _Guillaume Lemaitre: https://github.com/glemaitre
.. _Tim Head: https://betatim.github.io/
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_0_22:
============
Version 0.22
============
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_0_22_0.py`.
.. include:: changelog_legend.inc
.. _changes_0_22_2:
Version 0.22.2.post1
====================
**March 3 2020**
The 0.22.2.post1 release includes a packaging fix for the source distribution
but the content of the packages is otherwise identical to the content of the
wheels with the 0.22.2 version (without the .post1 suffix). Both contain the
following changes.
Changelog
---------
:mod:`sklearn.impute`
.....................
- |Efficiency| Reduce :func:`impute.KNNImputer` asymptotic memory usage by
chunking pairwise distance computation.
:pr:`16397` by `Joel Nothman`_.
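The memory improvement is transparent to callers; a minimal usage sketch of :class:`impute.KNNImputer` (toy values, not from the PR):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy matrix with one missing entry; distances are computed on the observed
# coordinates (nan-aware Euclidean), now in memory-bounded chunks.
X = np.array([[1.0, 2.0],
              [3.0, np.nan],
              [5.0, 6.0]])
X_filled = KNNImputer(n_neighbors=2).fit_transform(X)
# The NaN is replaced by the mean of its 2 nearest neighbors' values: (2 + 6) / 2
```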
:mod:`sklearn.metrics`
......................
- |Fix| Fixed a bug in `metrics.plot_roc_curve` where the name of the
estimator was passed to the :class:`metrics.RocCurveDisplay` instead of the
`name` parameter. This resulted in a different plot when calling
:meth:`metrics.RocCurveDisplay.plot` subsequent times.
:pr:`16500` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Fixed a bug in `metrics.plot_precision_recall_curve` where the name
of the estimator was passed to the :class:`metrics.PrecisionRecallDisplay`
instead of the `name` parameter. This resulted in a different plot when
calling :meth:`metrics.PrecisionRecallDisplay.plot` subsequent times.
:pr:`16505` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.neighbors`
........................
- |Fix| Fix a bug which converted a list of arrays into a 2-D object
array instead of a 1-D array containing NumPy arrays. This bug
was affecting :meth:`neighbors.NearestNeighbors.radius_neighbors`.
:pr:`16076` by :user:`Guillaume Lemaitre <glemaitre>` and
:user:`Alex Shacked <alexshacked>`.
.. _changes_0_22_1:
Version 0.22.1
==============
**January 2 2020**
This is a bug-fix release to primarily resolve some packaging issues in version
0.22.0. It also includes minor documentation improvements and some bug fixes.
Changelog
---------
:mod:`sklearn.cluster`
......................
- |Fix| :class:`cluster.KMeans` with ``algorithm="elkan"`` now uses the same
stopping criterion as with the default ``algorithm="full"``. :pr:`15930` by
:user:`inder128`.
:mod:`sklearn.inspection`
.........................
- |Fix| :func:`inspection.permutation_importance` will return the same
`importances` when a `random_state` is given for both `n_jobs=1` or
`n_jobs>1` both with shared memory backends (thread-safety) and
isolated memory, process-based backends.
Also avoid casting the data as object dtype and avoid read-only error
on large dataframes with `n_jobs>1` as reported in :issue:`15810`.
Follow-up of :pr:`15898` by :user:`Shivam Gargsya <shivamgargsya>`.
:pr:`15933` by :user:`Guillaume Lemaitre <glemaitre>` and `Olivier Grisel`_.
- |Fix| `inspection.plot_partial_dependence` and
:meth:`inspection.PartialDependenceDisplay.plot` now consistently check
the number of axes passed in. :pr:`15760` by `Thomas Fan`_.
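The `n_jobs` reproducibility fix above can be checked directly; a sketch (any fitted estimator works):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=100, n_features=4, random_state=0)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# With a fixed random_state the importances no longer depend on n_jobs
r1 = permutation_importance(clf, X, y, n_repeats=3, random_state=0, n_jobs=1)
r2 = permutation_importance(clf, X, y, n_repeats=3, random_state=0, n_jobs=2)
assert np.allclose(r1.importances_mean, r2.importances_mean)
```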
:mod:`sklearn.metrics`
......................
- |Fix| `metrics.plot_confusion_matrix` now raises an error when `normalize`
is invalid. Previously, it ran fine with no normalization.
:pr:`15888` by `Hanmin Qin`_.
- |Fix| `metrics.plot_confusion_matrix` now colors the label color
correctly to maximize contrast with its background. :pr:`15936` by
`Thomas Fan`_ and :user:`DizietAsahi`.
- |Fix| :func:`metrics.classification_report` no longer ignores the
value of the ``zero_division`` keyword argument. :pr:`15879`
by :user:`Bibhash Chandra Mitra <Bibyutatsu>`.
- |Fix| Fixed a bug in `metrics.plot_confusion_matrix` to correctly
pass the `values_format` parameter to the :class:`metrics.ConfusionMatrixDisplay`
plot() call. :pr:`15937` by :user:`Stephen Blystone <blynotes>`.
:mod:`sklearn.model_selection`
..............................
- |Fix| :class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` accept scalar values provided in
`fit_params`. The change in 0.22 broke backward compatibility.
:pr:`15863` by :user:`Adrin Jalali <adrinjalali>` and
:user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.naive_bayes`
..........................
- |Fix| Removed `abstractmethod` decorator for the method `_check_X` in
`naive_bayes.BaseNB` that could break downstream projects inheriting
from this deprecated public base class. :pr:`15996` by
:user:`Brigitta Sipőcz <bsipocz>`.
:mod:`sklearn.preprocessing`
............................
- |Fix| :class:`preprocessing.QuantileTransformer` now guarantees the
`quantiles_` attribute to be completely sorted in non-decreasing manner.
:pr:`15751` by :user:`Tirth Patel <tirthasheshpatel>`.
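The sortedness guarantee can be verified directly on the fitted attribute; an illustrative sketch:

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.RandomState(0)
X = rng.lognormal(size=(100, 2))
qt = QuantileTransformer(n_quantiles=50, random_state=0).fit(X)

# quantiles_ has shape (n_quantiles, n_features); each column is non-decreasing
assert qt.quantiles_.shape == (50, 2)
assert np.all(np.diff(qt.quantiles_, axis=0) >= 0)
```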
:mod:`sklearn.semi_supervised`
..............................
- |Fix| :class:`semi_supervised.LabelPropagation` and
:class:`semi_supervised.LabelSpreading` now allow callable kernel function to
return sparse weight matrix.
:pr:`15868` by :user:`Niklas Smedemark-Margulies <nik-sm>`.
:mod:`sklearn.utils`
....................
- |Fix| :func:`utils.check_array` now correctly converts pandas DataFrame with
boolean columns to floats. :pr:`15797` by `Thomas Fan`_.
- |Fix| :func:`utils.validation.check_is_fitted` accepts back an explicit ``attributes``
argument to check for specific attributes as explicit markers of a fitted
estimator. When no explicit ``attributes`` are provided, only the attributes
that end with an underscore and do not start with a double underscore are used
as "fitted" markers. The ``all_or_any`` argument is also no longer
deprecated. This change is made to restore some backward compatibility with
the behavior of this utility in version 0.21. :pr:`15947` by `Thomas Fan`_.
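A sketch of the restored ``attributes`` argument of :func:`utils.validation.check_is_fitted`:

```python
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LinearRegression
from sklearn.utils.validation import check_is_fitted

est = LinearRegression()
try:
    # explicit markers of a fitted estimator can be passed again
    check_is_fitted(est, attributes=["coef_", "intercept_"])
    fitted_before = True
except NotFittedError:
    fitted_before = False

est.fit([[0.0], [1.0]], [0.0, 1.0])
check_is_fitted(est, attributes=["coef_", "intercept_"])  # passes silently now
```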
.. _changes_0_22:
Version 0.22.0
==============
**December 3 2019**
Website update
--------------
`Our website <https://scikit-learn.org/>`_ was revamped and given a fresh
new look. :pr:`14849` by `Thomas Fan`_.
Clear definition of the public API
----------------------------------
Scikit-learn has a public API, and a private API.
We do our best not to break the public API, and to only introduce
backward-compatible changes that do not require any user action. However, in
cases where that's not possible, any change to the public API is subject to
a deprecation cycle of two minor versions. The private API isn't publicly
documented and isn't subject to any deprecation cycle, so users should not
rely on its stability.
A function or object is public if it is documented in the `API Reference
<https://scikit-learn.org/dev/modules/classes.html>`_ and if it can be
imported with an import path without leading underscores. For example
``sklearn.pipeline.make_pipeline`` is public, while
`sklearn.pipeline._name_estimators` is private.
``sklearn.ensemble._gb.BaseEnsemble`` is private too because the whole `_gb`
module is private.
Up to 0.22, some tools were de facto public (no leading underscore), while
they should have been private in the first place. In version 0.22, these
tools have been made properly private, and the public API space has been
cleaned. In addition, importing from most sub-modules is now deprecated: you
should for example use ``from sklearn.cluster import Birch`` instead of
``from sklearn.cluster.birch import Birch`` (in practice, ``birch.py`` has
been moved to ``_birch.py``).
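For example, in 0.22 the supported and deprecated import spellings look like this:

```python
# Supported, public import path:
from sklearn.cluster import Birch

# Deprecated in 0.22 (the module was moved to the private sklearn.cluster._birch):
#   from sklearn.cluster.birch import Birch   # raised a deprecation warning
model = Birch(n_clusters=2)
```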
.. note::
All the tools in the public API should be documented in the `API
Reference <https://scikit-learn.org/dev/modules/classes.html>`_. If you
find a public tool (without leading underscore) that isn't in the API
reference, that means it should either be private or documented. Please
let us know by opening an issue!
This work was tracked in `issue 9250
<https://github.com/scikit-learn/scikit-learn/issues/9250>`_ and `issue
12927 <https://github.com/scikit-learn/scikit-learn/issues/12927>`_.
Deprecations: using ``FutureWarning`` from now on
-------------------------------------------------
When deprecating a feature, previous versions of scikit-learn used to raise
a ``DeprecationWarning``. Since the ``DeprecationWarnings`` aren't shown by
default by Python, scikit-learn needed to resort to a custom warning filter
to always show the warnings. That filter would sometimes interfere
with users' custom warning filters.
Starting from version 0.22, scikit-learn will show ``FutureWarnings`` for
deprecations, `as recommended by the Python documentation
<https://docs.python.org/3/library/exceptions.html#FutureWarning>`_.
``FutureWarnings`` are always shown by default by Python, so the custom
filter has been removed and scikit-learn no longer interferes with user
filters. :pr:`15080` by `Nicolas Hug`_.
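Since these are plain Python warnings, the standard :mod:`warnings` machinery applies; a generic sketch (not tied to any particular scikit-learn API):

```python
import warnings

# FutureWarning is shown by default; users can still silence or escalate it
# with ordinary warning filters.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", FutureWarning)
    warnings.warn("this parameter is deprecated", FutureWarning)

assert len(caught) == 1
assert caught[0].category is FutureWarning
```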
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- :class:`cluster.KMeans` when `n_jobs=1`. |Fix|
- :class:`decomposition.SparseCoder`,
:class:`decomposition.DictionaryLearning`, and
:class:`decomposition.MiniBatchDictionaryLearning` |Fix|
- :class:`decomposition.SparseCoder` with `algorithm='lasso_lars'` |Fix|
- :class:`decomposition.SparsePCA` where `normalize_components` has no effect
due to deprecation.
- :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` |Fix|, |Feature|,
|Enhancement|.
- :class:`impute.IterativeImputer` when `X` has features with no missing
values. |Feature|
- :class:`linear_model.Ridge` when `X` is sparse. |Fix|
- :class:`model_selection.StratifiedKFold` and any use of `cv=int` with a
classifier. |Fix|
- :class:`cross_decomposition.CCA` when using scipy >= 1.3 |Fix|
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123456 is the *pull request* number, not the issue number.
:mod:`sklearn.base`
...................
- |API| From version 0.24 :meth:`base.BaseEstimator.get_params` will raise an
AttributeError rather than return None for parameters that are in the
estimator's constructor but not stored as attributes on the instance.
:pr:`14464` by `Joel Nothman`_.
:mod:`sklearn.calibration`
..........................
- |Fix| Fixed a bug that made :class:`calibration.CalibratedClassifierCV` fail when
given a `sample_weight` parameter of type `list` (in the case where
`sample_weight` is not supported by the wrapped estimator). :pr:`13575`
by :user:`William de Vazelhes <wdevazelhes>`.
:mod:`sklearn.cluster`
......................
- |Feature| :class:`cluster.SpectralClustering` now accepts precomputed sparse
neighbors graph as input. :issue:`10482` by `Tom Dupre la Tour`_ and
:user:`Kumar Ashutosh <thechargedneutron>`.
- |Enhancement| :class:`cluster.SpectralClustering` now accepts a ``n_components``
parameter. This parameter extends `SpectralClustering` class functionality to
match :meth:`cluster.spectral_clustering`.
:pr:`13726` by :user:`Shuzhe Xiao <fdas3213>`.
- |Fix| Fixed a bug where :class:`cluster.KMeans` produced inconsistent results
between `n_jobs=1` and `n_jobs>1` due to the handling of the random state.
:pr:`9288` by :user:`Bryan Yang <bryanyang0528>`.
- |Fix| Fixed a bug where `elkan` algorithm in :class:`cluster.KMeans` was
producing Segmentation Fault on large arrays due to integer index overflow.
:pr:`15057` by :user:`Vladimir Korolev <balodja>`.
- |Fix| :class:`~cluster.MeanShift` now accepts a :term:`max_iter` parameter
(default 300) instead of always using a fixed limit of 300. It also now
exposes an ``n_iter_`` attribute indicating the maximum number of iterations
performed on each seed.
on each seed. :pr:`15120` by `Adrin Jalali`_.
- |Fix| :class:`cluster.AgglomerativeClustering` and
:class:`cluster.FeatureAgglomeration` now raise an error if
`affinity='cosine'` and `X` has samples that are all-zeros. :pr:`7943` by
:user:`mthorrell`.
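A sketch of the new `max_iter` / ``n_iter_`` pair on :class:`~cluster.MeanShift` (toy data):

```python
import numpy as np
from sklearn.cluster import MeanShift

X = np.array([[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [5.1, 4.9]])
ms = MeanShift(max_iter=50).fit(X)

# n_iter_ reports the maximum number of iterations performed on any seed
assert 0 < ms.n_iter_ <= 50
assert len(ms.labels_) == 4
```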
:mod:`sklearn.compose`
......................
- |Feature| Adds :func:`compose.make_column_selector` which is used with
:class:`compose.ColumnTransformer` to select DataFrame columns on the basis
of name and dtype. :pr:`12303` by `Thomas Fan`_.
- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` which failed to
select the proper columns when using a boolean list, with NumPy older than
1.12.
:pr:`14510` by `Guillaume Lemaitre`_.
- |Fix| Fixed a bug in :class:`compose.TransformedTargetRegressor` which did not
pass `**fit_params` to the underlying regressor.
:pr:`14890` by :user:`Miguel Cabrera <mfcabrera>`.
- |Fix| The :class:`compose.ColumnTransformer` now requires the number of
features to be consistent between `fit` and `transform`. A `FutureWarning`
is raised now, and this will raise an error in 0.24. If the number of
features isn't consistent and negative indexing is used, an error is
raised. :pr:`14544` by `Adrin Jalali`_.
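The new :func:`compose.make_column_selector` mentioned above can be sketched as follows (illustrative data):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({"age": [20.0, 30.0, 40.0], "city": ["a", "b", "a"]})

# Select DataFrame columns by dtype instead of listing them by name
ct = ColumnTransformer([
    ("num", StandardScaler(), make_column_selector(dtype_include=np.number)),
    ("cat", OneHotEncoder(), make_column_selector(dtype_include=object)),
])
Xt = ct.fit_transform(df)  # 1 scaled column + 2 one-hot columns
```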
:mod:`sklearn.cross_decomposition`
..................................
- |Feature| :class:`cross_decomposition.PLSCanonical` and
:class:`cross_decomposition.PLSRegression` have a new function
``inverse_transform`` to transform data to the original space.
:pr:`15304` by :user:`Jaime Ferrando Huertas <jiwidi>`.
- |Enhancement| :class:`decomposition.KernelPCA` now properly checks the
eigenvalues found by the solver for numerical or conditioning issues. This
ensures consistency of results across solvers (different choices for
``eigen_solver``), including approximate solvers such as ``'randomized'`` and
``'lobpcg'`` (see :issue:`12068`).
:pr:`12145` by :user:`Sylvain Marié <smarie>`
- |Fix| Fixed a bug where :class:`cross_decomposition.PLSCanonical` and
:class:`cross_decomposition.PLSRegression` were raising an error when fitted
with a target matrix `Y` in which the first column was constant.
:issue:`13609` by :user:`Camila Williamson <camilaagw>`.
- |Fix| :class:`cross_decomposition.CCA` now produces the same results with
scipy 1.3 and previous scipy versions. :pr:`15661` by `Thomas Fan`_.
:mod:`sklearn.datasets`
.......................
- |Feature| :func:`datasets.fetch_openml` now supports heterogeneous data using
pandas by setting `as_frame=True`. :pr:`13902` by `Thomas Fan`_.
- |Feature| :func:`datasets.fetch_openml` now includes the `target_names` in
the returned Bunch. :pr:`15160` by `Thomas Fan`_.
- |Enhancement| The parameter `return_X_y` was added to
:func:`datasets.fetch_20newsgroups` and :func:`datasets.fetch_olivetti_faces`.
:pr:`14259` by :user:`Sourav Singh <souravsingh>`.
- |Enhancement| :func:`datasets.make_classification` now accepts array-like
`weights` parameter, i.e. list or numpy.array, instead of list only.
:pr:`14764` by :user:`Cat Chenal <CatChenal>`.
- |Enhancement| The parameter `normalize` was added to
:func:`datasets.fetch_20newsgroups_vectorized`.
:pr:`14740` by :user:`Stéphan Tulkens <stephantul>`.
- |Fix| Fixed a bug in :func:`datasets.fetch_openml`, which failed to load
an OpenML dataset that contains an ignored feature.
:pr:`14623` by :user:`Sarra Habchi <HabchiSarra>`.
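The array-like ``weights`` enhancement above, sketched:

```python
import numpy as np
from sklearn.datasets import make_classification

# weights may now be any array-like (e.g. a NumPy array), not only a list
X, y = make_classification(n_samples=200, weights=np.array([0.9, 0.1]),
                           random_state=0)
# roughly 90% of samples belong to class 0
```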
:mod:`sklearn.decomposition`
............................
- |Efficiency| :class:`decomposition.NMF` with `solver="mu"` fitted on sparse input
matrices now uses batching to avoid briefly allocating an array with size
(#non-zero elements, n_components). :pr:`15257` by :user:`Mart Willocx <Maocx>`.
- |Enhancement| :func:`decomposition.dict_learning` and
:func:`decomposition.dict_learning_online` now accept `method_max_iter` and
pass it to :meth:`decomposition.sparse_encode`.
:issue:`12650` by `Adrin Jalali`_.
- |Enhancement| :class:`decomposition.SparseCoder`,
:class:`decomposition.DictionaryLearning`, and
:class:`decomposition.MiniBatchDictionaryLearning` now take a
`transform_max_iter` parameter and pass it to either
:func:`decomposition.dict_learning()` or
:func:`decomposition.sparse_encode()`. :issue:`12650` by `Adrin Jalali`_.
- |Enhancement| :class:`decomposition.IncrementalPCA` now accepts sparse
matrices as input, converting them to dense in batches thereby avoiding the
need to store the entire dense matrix at once.
:pr:`13960` by :user:`Scott Gigante <scottgigante>`.
- |Fix| :func:`decomposition.sparse_encode()` now passes the `max_iter` to the
underlying :class:`linear_model.LassoLars` when `algorithm='lasso_lars'`.
:issue:`12650` by `Adrin Jalali`_.
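The sparse-input enhancement for :class:`decomposition.IncrementalPCA` above, sketched on random data:

```python
import numpy as np
from scipy import sparse
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = sparse.random(100, 10, density=0.5, random_state=rng, format="csr")

# fit accepts the sparse matrix directly, densifying one batch at a time
ipca = IncrementalPCA(n_components=3, batch_size=20).fit(X)
assert ipca.components_.shape == (3, 10)
```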
:mod:`sklearn.dummy`
....................
- |Fix| :class:`dummy.DummyClassifier` now handles checking the existence
of the provided constant in multioutput cases.
:pr:`14908` by :user:`Martina G. Vilas <martinagvilas>`.
- |API| The default value of the `strategy` parameter in
:class:`dummy.DummyClassifier` will change from `'stratified'` in version
0.22 to `'prior'` in 0.24. A FutureWarning is raised when the default value
is used. :pr:`15382` by `Thomas Fan`_.
- |API| The ``outputs_2d_`` attribute is deprecated in
:class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`. It is
  equivalent to ``n_outputs > 1``. :pr:`14933` by `Nicolas Hug`_.
:mod:`sklearn.ensemble`
.......................
- |MajorFeature| Added :class:`ensemble.StackingClassifier` and
:class:`ensemble.StackingRegressor` to stack predictors using a final
classifier or regressor. :pr:`11047` by :user:`Guillaume Lemaitre
<glemaitre>` and :user:`Caio Oliveira <caioaao>` and :pr:`15138` by
  :user:`Jon Cusick <jcusick13>`.
- |MajorFeature| Many improvements were made to
:class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`:
- |Feature| Estimators now natively support dense data with missing
values both for training and predicting. They also support infinite
values. :pr:`13911` and :pr:`14406` by `Nicolas Hug`_, `Adrin Jalali`_
and `Olivier Grisel`_.
- |Feature| Estimators now have an additional `warm_start` parameter that
enables warm starting. :pr:`14012` by :user:`Johann Faouzi <johannfaouzi>`.
- |Feature| :func:`inspection.partial_dependence` and
`inspection.plot_partial_dependence` now support the fast 'recursion'
method for both estimators. :pr:`13769` by `Nicolas Hug`_.
- |Enhancement| for :class:`ensemble.HistGradientBoostingClassifier` the
training loss or score is now monitored on a class-wise stratified
subsample to preserve the class balance of the original training set.
:pr:`14194` by :user:`Johann Faouzi <johannfaouzi>`.
- |Enhancement| :class:`ensemble.HistGradientBoostingRegressor` now supports
the 'least_absolute_deviation' loss. :pr:`13896` by `Nicolas Hug`_.
- |Fix| Estimators now bin the training and validation data separately to
avoid any data leak. :pr:`13933` by `Nicolas Hug`_.
- |Fix| Fixed a bug where early stopping would break with string targets.
:pr:`14710` by `Guillaume Lemaitre`_.
- |Fix| :class:`ensemble.HistGradientBoostingClassifier` now raises an error
if ``categorical_crossentropy`` loss is given for a binary classification
problem. :pr:`14869` by `Adrin Jalali`_.
Note that pickles from 0.21 will not work in 0.22.
- |Enhancement| Addition of ``max_samples`` argument allows limiting
size of bootstrap samples to be less than size of dataset. Added to
:class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier`,
:class:`ensemble.ExtraTreesRegressor`. :pr:`14682` by
:user:`Matt Hancock <notmatthancock>` and
:pr:`5963` by :user:`Pablo Duboue <DrDub>`.
- |Fix| :func:`ensemble.VotingClassifier.predict_proba` will no longer be
present when `voting='hard'`. :pr:`14287` by `Thomas Fan`_.
- |Fix| The `named_estimators_` attribute in :class:`ensemble.VotingClassifier`
and :class:`ensemble.VotingRegressor` now correctly maps to dropped estimators.
Previously, the `named_estimators_` mapping was incorrect whenever one of the
estimators was dropped. :pr:`15375` by `Thomas Fan`_.
- |Fix| :func:`utils.estimator_checks.check_estimator` is now run by default on
  both :class:`ensemble.VotingClassifier` and :class:`ensemble.VotingRegressor`.
  This resolves shape-consistency issues during `predict` that occurred when the
  underlying estimators did not output arrays of consistent dimensions. Note
  that this should be replaced by refactoring the common tests in the future.
  :pr:`14305` by `Guillaume Lemaitre`_.
- |Fix| :class:`ensemble.AdaBoostClassifier` computes probabilities based on
the decision function as in the literature. Thus, `predict` and
`predict_proba` give consistent results.
:pr:`14114` by `Guillaume Lemaitre`_.
- |Fix| Stacking and Voting estimators now ensure that their underlying
estimators are either all classifiers or all regressors.
:class:`ensemble.StackingClassifier`, :class:`ensemble.StackingRegressor`,
and :class:`ensemble.VotingClassifier` and :class:`ensemble.VotingRegressor`
now raise consistent error messages.
:pr:`15084` by `Guillaume Lemaitre`_.
- |Fix| The loss in :class:`ensemble.AdaBoostRegressor` is now normalized by
  the maximum of the samples with non-null weights only.
  :pr:`14294` by `Guillaume Lemaitre`_.
- |API| ``presort`` is now deprecated in
:class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor`, and the parameter has no effect.
Users are recommended to use :class:`ensemble.HistGradientBoostingClassifier`
and :class:`ensemble.HistGradientBoostingRegressor` instead.
:pr:`14907` by `Adrin Jalali`_.
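For illustration, a minimal sketch of the new stacking API mentioned above (the base estimators chosen here are arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The base predictors' cross-validated predictions become the input
# features of the final estimator.
clf = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(acc)
```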
:mod:`sklearn.feature_extraction`
.................................
- |Enhancement| A warning will now be raised if a parameter choice means
that another parameter will be unused on calling the fit() method for
:class:`feature_extraction.text.HashingVectorizer`,
:class:`feature_extraction.text.CountVectorizer` and
:class:`feature_extraction.text.TfidfVectorizer`.
:pr:`14602` by :user:`Gaurav Chawla <getgaurav2>`.
- |Fix| Functions created by ``build_preprocessor`` and ``build_analyzer`` of
`feature_extraction.text.VectorizerMixin` can now be pickled.
:pr:`14430` by :user:`Dillon Niederhut <deniederhut>`.
- |Fix| `feature_extraction.text.strip_accents_unicode` now correctly
removes accents from strings that are in NFKD normalized form. :pr:`15100` by
:user:`Daniel Grady <DGrady>`.
- |Fix| Fixed a bug that caused :class:`feature_extraction.DictVectorizer` to raise
an `OverflowError` during the `transform` operation when producing a `scipy.sparse`
matrix on large input data. :pr:`15463` by :user:`Norvan Sahiner <norvan>`.
- |API| Deprecated unused `copy` param for
:meth:`feature_extraction.text.TfidfVectorizer.transform` it will be
removed in v0.24. :pr:`14520` by
:user:`Guillem G. Subies <guillemgsubies>`.
:mod:`sklearn.feature_selection`
................................
- |Enhancement| Updated the following :mod:`sklearn.feature_selection`
estimators to allow NaN/Inf values in ``transform`` and ``fit``:
:class:`feature_selection.RFE`, :class:`feature_selection.RFECV`,
:class:`feature_selection.SelectFromModel`,
and :class:`feature_selection.VarianceThreshold`. Note that if the underlying
estimator of the feature selector does not allow NaN/Inf then it will still
error, but the feature selectors themselves no longer enforce this
restriction unnecessarily. :issue:`11635` by :user:`Alec Peters <adpeters>`.
- |Fix| Fixed a bug where :class:`feature_selection.VarianceThreshold` with
`threshold=0` did not remove constant features due to numerical instability,
by using range rather than variance in this case.
:pr:`13704` by :user:`Roddy MacSween <rlms>`.
:mod:`sklearn.gaussian_process`
...............................
- |Feature| Gaussian process models on structured data: :class:`gaussian_process.GaussianProcessRegressor`
and :class:`gaussian_process.GaussianProcessClassifier` can now accept a list
of generic objects (e.g. strings, trees, graphs, etc.) as the ``X`` argument
to their training/prediction methods.
A user-defined kernel should be provided for computing the kernel matrix among
the generic objects, and should inherit from `gaussian_process.kernels.GenericKernelMixin`
to notify the GPR/GPC model that it handles non-vectorial samples.
:pr:`15557` by :user:`Yu-Hang Tang <yhtang>`.
- |Efficiency| :func:`gaussian_process.GaussianProcessClassifier.log_marginal_likelihood`
and :func:`gaussian_process.GaussianProcessRegressor.log_marginal_likelihood` now
  accept a ``clone_kernel=True`` keyword argument. When set to ``False``,
  the kernel attribute is modified in place, which may result in a performance
  improvement.
:pr:`14378` by :user:`Masashi Shibata <c-bata>`.
- |API| From version 0.24 :meth:`gaussian_process.kernels.Kernel.get_params` will raise an
``AttributeError`` rather than return ``None`` for parameters that are in the
estimator's constructor but not stored as attributes on the instance.
:pr:`14464` by `Joel Nothman`_.
:mod:`sklearn.impute`
.....................
- |MajorFeature| Added :class:`impute.KNNImputer`, to impute missing values using
k-Nearest Neighbors. :issue:`12852` by :user:`Ashim Bhattarai <ashimb9>` and
`Thomas Fan`_ and :pr:`15010` by `Guillaume Lemaitre`_.
- |Feature| :class:`impute.IterativeImputer` has new `skip_compute` flag that
is False by default, which, when True, will skip computation on features that
have no missing values during the fit phase. :issue:`13773` by
:user:`Sergey Feldman <sergeyf>`.
- |Efficiency| :meth:`impute.MissingIndicator.fit_transform` now avoids repeated
  computation of the masked matrix. :pr:`14356` by :user:`Harsh Soni <harsh020>`.
- |Fix| :class:`impute.IterativeImputer` now works when there is only one feature.
By :user:`Sergey Feldman <sergeyf>`.
- |Fix| Fixed a bug in :class:`impute.IterativeImputer` where features were
  imputed in the reverse of the desired order when ``imputation_order`` was
  either ``"ascending"`` or ``"descending"``. :pr:`15393` by
:user:`Venkatachalam N <venkyyuvy>`.
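For illustration, a minimal sketch of the new :class:`impute.KNNImputer` (the toy matrix below follows the pattern used in the scikit-learn docs):

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0], [3.0, 4.0], [np.nan, 6.0], [8.0, 8.0]])

# Each missing entry is replaced by the mean of that feature over the
# n_neighbors nearest rows, using a nan-aware euclidean distance.
imputer = KNNImputer(n_neighbors=2)
X_imputed = imputer.fit_transform(X)
print(X_imputed)  # the NaN becomes (3.0 + 8.0) / 2 = 5.5
```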
:mod:`sklearn.inspection`
.........................
- |MajorFeature| :func:`inspection.permutation_importance` has been added to
measure the importance of each feature in an arbitrary trained model with
respect to a given scoring function. :issue:`13146` by `Thomas Fan`_.
- |Feature| :func:`inspection.partial_dependence` and
`inspection.plot_partial_dependence` now support the fast 'recursion'
method for :class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor`. :pr:`13769` by
`Nicolas Hug`_.
- |Enhancement| `inspection.plot_partial_dependence` has been extended to
now support the new visualization API described in the :ref:`User Guide
<visualizations>`. :pr:`14646` by `Thomas Fan`_.
- |Enhancement| :func:`inspection.partial_dependence` accepts pandas DataFrame
and :class:`pipeline.Pipeline` containing :class:`compose.ColumnTransformer`.
In addition `inspection.plot_partial_dependence` will use the column
names by default when a dataframe is passed.
:pr:`14028` and :pr:`15429` by `Guillaume Lemaitre`_.
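For illustration, a minimal sketch of the new :func:`inspection.permutation_importance` (the model and dataset are arbitrary choices):

```python
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# A feature's importance is the drop in the score when that feature's
# column is randomly shuffled, averaged over n_repeats shuffles.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)
```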
:mod:`sklearn.kernel_approximation`
...................................
- |Fix| Fixed a bug where :class:`kernel_approximation.Nystroem` raised a
`KeyError` when using `kernel="precomputed"`.
:pr:`14706` by :user:`Venkatachalam N <venkyyuvy>`.
:mod:`sklearn.linear_model`
...........................
- |Efficiency| The 'liblinear' logistic regression solver is now faster and
requires less memory.
:pr:`14108`, :pr:`14170`, :pr:`14296` by :user:`Alex Henrie <alexhenrie>`.
- |Enhancement| :class:`linear_model.BayesianRidge` now accepts hyperparameters
``alpha_init`` and ``lambda_init`` which can be used to set the initial value
of the maximization procedure in :term:`fit`.
:pr:`13618` by :user:`Yoshihiro Uchida <c56pony>`.
- |Fix| :class:`linear_model.Ridge` now correctly fits an intercept when `X` is
sparse, `solver="auto"` and `fit_intercept=True`, because the default solver
in this configuration has changed to `sparse_cg`, which can fit an intercept
with sparse data. :pr:`13995` by :user:`Jérôme Dockès <jeromedockes>`.
- |Fix| :class:`linear_model.Ridge` with `solver='sag'` now accepts F-ordered
and non-contiguous arrays and makes a conversion instead of failing.
:pr:`14458` by `Guillaume Lemaitre`_.
- |Fix| :class:`linear_model.LassoCV` no longer forces ``precompute=False``
when fitting the final model. :pr:`14591` by `Andreas Müller`_.
- |Fix| :class:`linear_model.RidgeCV` and :class:`linear_model.RidgeClassifierCV`
now correctly scores when `cv=None`.
:pr:`14864` by :user:`Venkatachalam N <venkyyuvy>`.
- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the
``scores_``, ``n_iter_`` and ``coefs_paths_`` attribute would have a wrong
ordering with ``penalty='elastic-net'``. :pr:`15044` by `Nicolas Hug`_
- |Fix| :class:`linear_model.MultiTaskLassoCV` and
:class:`linear_model.MultiTaskElasticNetCV` with X of dtype int
and `fit_intercept=True`.
:pr:`15086` by :user:`Alex Gramfort <agramfort>`.
- |Fix| The liblinear solver now supports ``sample_weight``.
:pr:`15038` by `Guillaume Lemaitre`_.
:mod:`sklearn.manifold`
.......................
- |Feature| :class:`manifold.Isomap`, :class:`manifold.TSNE`, and
:class:`manifold.SpectralEmbedding` now accept precomputed sparse
neighbors graph as input. :issue:`10482` by `Tom Dupre la Tour`_ and
:user:`Kumar Ashutosh <thechargedneutron>`.
- |Feature| Exposed the ``n_jobs`` parameter in :class:`manifold.TSNE` for
multi-core calculation of the neighbors graph. This parameter has no
impact when ``metric="precomputed"`` or (``metric="euclidean"`` and
``method="exact"``). :issue:`15082` by `Roman Yurchak`_.
- |Efficiency| Improved efficiency of :class:`manifold.TSNE` when
``method="barnes-hut"`` by computing the gradient in parallel.
:pr:`13213` by :user:`Thomas Moreau <tommoral>`
- |Fix| Fixed a bug where :func:`manifold.spectral_embedding` (and therefore
:class:`manifold.SpectralEmbedding` and :class:`cluster.SpectralClustering`)
computed wrong eigenvalues with ``eigen_solver='amg'`` when
``n_samples < 5 * n_components``. :pr:`14647` by `Andreas Müller`_.
- |Fix| Fixed a bug in :func:`manifold.spectral_embedding` used in
:class:`manifold.SpectralEmbedding` and :class:`cluster.SpectralClustering`
where ``eigen_solver="amg"`` would sometimes result in a LinAlgError.
  :issue:`13393` by :user:`Andrew Knyazev <lobpcg>` and
  :pr:`13707` by :user:`Scott White <whitews>`.
- |API| Deprecate ``training_data_`` unused attribute in
:class:`manifold.Isomap`. :issue:`10482` by `Tom Dupre la Tour`_.
:mod:`sklearn.metrics`
......................
- |MajorFeature| `metrics.plot_roc_curve` has been added to plot roc
curves. This function introduces the visualization API described in
the :ref:`User Guide <visualizations>`. :pr:`14357` by `Thomas Fan`_.
- |Feature| Added a new parameter ``zero_division`` to multiple classification
metrics: :func:`metrics.precision_score`, :func:`metrics.recall_score`,
:func:`metrics.f1_score`, :func:`metrics.fbeta_score`,
:func:`metrics.precision_recall_fscore_support`,
  :func:`metrics.classification_report`. This allows setting the value returned
  for ill-defined metrics.
:pr:`14900` by :user:`Marc Torrellas Socastro <marctorrellas>`.
- |Feature| Added the :func:`metrics.pairwise.nan_euclidean_distances` metric,
which calculates euclidean distances in the presence of missing values.
:issue:`12852` by :user:`Ashim Bhattarai <ashimb9>` and `Thomas Fan`_.
- |Feature| New ranking metrics :func:`metrics.ndcg_score` and
:func:`metrics.dcg_score` have been added to compute Discounted Cumulative
Gain and Normalized Discounted Cumulative Gain. :pr:`9951` by :user:`Jérôme
Dockès <jeromedockes>`.
- |Feature| `metrics.plot_precision_recall_curve` has been added to plot
precision recall curves. :pr:`14936` by `Thomas Fan`_.
- |Feature| `metrics.plot_confusion_matrix` has been added to plot
confusion matrices. :pr:`15083` by `Thomas Fan`_.
- |Feature| Added multiclass support to :func:`metrics.roc_auc_score` with
corresponding scorers `'roc_auc_ovr'`, `'roc_auc_ovo'`,
`'roc_auc_ovr_weighted'`, and `'roc_auc_ovo_weighted'`.
:pr:`12789` and :pr:`15274` by
:user:`Kathy Chen <kathyxchen>`, :user:`Mohamed Maskani <maskani-moh>`, and
`Thomas Fan`_.
- |Feature| Added :func:`metrics.mean_tweedie_deviance` measuring the
  Tweedie deviance for a given ``power`` parameter. Also added mean Poisson
  deviance :func:`metrics.mean_poisson_deviance` and mean Gamma deviance
  :func:`metrics.mean_gamma_deviance`, which are special cases of the Tweedie
  deviance for ``power=1`` and ``power=2`` respectively.
:pr:`13938` by :user:`Christian Lorentzen <lorentzenchr>` and
`Roman Yurchak`_.
- |Efficiency| Improved performance of
:func:`metrics.pairwise.manhattan_distances` in the case of sparse matrices.
  :pr:`15049` by :user:`Paolo Toccaceli <ptocca>`.
- |Enhancement| The parameter ``beta`` in :func:`metrics.fbeta_score` is
  updated to accept zero and `float('+inf')` as values.
:pr:`13231` by :user:`Dong-hee Na <corona10>`.
- |Enhancement| Added parameter ``squared`` in :func:`metrics.mean_squared_error`
to return root mean squared error.
:pr:`13467` by :user:`Urvang Patel <urvang96>`.
- |Enhancement| Allow computing averaged metrics in the case of no true positives.
:pr:`14595` by `Andreas Müller`_.
- |Enhancement| Multilabel metrics now support lists of lists as input.
  :pr:`14865` by :user:`Srivatsan Ramesh <srivatsan-ramesh>`,
:user:`Herilalaina Rakotoarison <herilalaina>`,
:user:`Léonard Binet <leonardbinet>`.
- |Enhancement| :func:`metrics.median_absolute_error` now supports
``multioutput`` parameter.
:pr:`14732` by :user:`Agamemnon Krasoulis <agamemnonc>`.
- |Enhancement| 'roc_auc_ovr_weighted' and 'roc_auc_ovo_weighted' can now be
used as the :term:`scoring` parameter of model-selection tools.
:pr:`14417` by `Thomas Fan`_.
- |Enhancement| :func:`metrics.confusion_matrix` accepts a parameter
  `normalize`, allowing the confusion matrix to be normalized over the rows,
  the columns, or the whole matrix.
  :pr:`15625` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| Raise a ValueError in :func:`metrics.silhouette_score` when a
precomputed distance matrix contains non-zero diagonal entries.
:pr:`12258` by :user:`Stephen Tierney <sjtrny>`.
- |API| ``scoring="neg_brier_score"`` should be used instead of
``scoring="brier_score_loss"`` which is now deprecated.
:pr:`14898` by :user:`Stefan Matcovici <stefan-matcovici>`.
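For illustration, a minimal sketch of two of the new metrics parameters above, ``zero_division`` and ``normalize`` (the toy labels are arbitrary):

```python
from sklearn.metrics import confusion_matrix, precision_score

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]  # no positive predictions: precision is ill-defined

# zero_division sets the value returned (and silences the warning) when
# the metric's denominator is zero.
p = precision_score(y_true, y_pred, zero_division=0)
print(p)  # 0.0

# normalize='true' divides each row by the number of samples in that class.
cm = confusion_matrix(y_true, y_pred, normalize="true")
print(cm)
```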
:mod:`sklearn.model_selection`
..............................
- |Efficiency| Improved performance of multimetric scoring in
:func:`model_selection.cross_validate`,
:class:`model_selection.GridSearchCV`, and
:class:`model_selection.RandomizedSearchCV`. :pr:`14593` by `Thomas Fan`_.
- |Enhancement| :class:`model_selection.learning_curve` now accepts parameter
``return_times`` which can be used to retrieve computation times in order to
plot model scalability (see learning_curve example).
:pr:`13938` by :user:`Hadrien Reboul <H4dr1en>`.
- |Enhancement| :class:`model_selection.RandomizedSearchCV` now accepts lists
of parameter distributions. :pr:`14549` by `Andreas Müller`_.
- |Fix| Reimplemented :class:`model_selection.StratifiedKFold` to fix an issue
where one test set could be `n_classes` larger than another. Test sets should
now be near-equally sized. :pr:`14704` by `Joel Nothman`_.
- |Fix| The `cv_results_` attribute of :class:`model_selection.GridSearchCV`
and :class:`model_selection.RandomizedSearchCV` now only contains unfitted
estimators. This potentially saves a lot of memory since the state of the
  estimators isn't stored. :pr:`15096` by `Andreas Müller`_.
- |API| :class:`model_selection.KFold` and
:class:`model_selection.StratifiedKFold` now raise a warning if
`random_state` is set but `shuffle` is False. This will raise an error in
0.24.
:mod:`sklearn.multioutput`
..........................
- |Fix| :class:`multioutput.MultiOutputClassifier` now has attribute
``classes_``. :pr:`14629` by :user:`Agamemnon Krasoulis <agamemnonc>`.
- |Fix| :class:`multioutput.MultiOutputClassifier` now has `predict_proba`
as property and can be checked with `hasattr`.
  :issue:`15488` and :pr:`15490` by :user:`Rebekah Kim <rebekahkim>`.
:mod:`sklearn.naive_bayes`
...............................
- |MajorFeature| Added :class:`naive_bayes.CategoricalNB` that implements the
Categorical Naive Bayes classifier.
:pr:`12569` by :user:`Tim Bicker <timbicker>` and
:user:`Florian Wilhelm <FlorianWilhelm>`.
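For illustration, a minimal sketch of the new :class:`naive_bayes.CategoricalNB` (the random integer features below are arbitrary; each column is a categorical feature encoded as non-negative integers):

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.RandomState(0)
X = rng.randint(0, 4, size=(100, 3))  # 3 categorical features with 4 levels
y = rng.randint(0, 2, size=100)

# Per-feature categorical likelihoods are estimated independently,
# in the usual naive Bayes fashion.
clf = CategoricalNB().fit(X, y)
preds = clf.predict(X[:5])
print(preds)
```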
:mod:`sklearn.neighbors`
........................
- |MajorFeature| Added :class:`neighbors.KNeighborsTransformer` and
:class:`neighbors.RadiusNeighborsTransformer`, which transform input dataset
into a sparse neighbors graph. They give finer control on nearest neighbors
computations and enable easy pipeline caching for multiple use.
:issue:`10482` by `Tom Dupre la Tour`_.
- |Feature| :class:`neighbors.KNeighborsClassifier`,
:class:`neighbors.KNeighborsRegressor`,
:class:`neighbors.RadiusNeighborsClassifier`,
:class:`neighbors.RadiusNeighborsRegressor`, and
:class:`neighbors.LocalOutlierFactor` now accept precomputed sparse
neighbors graph as input. :issue:`10482` by `Tom Dupre la Tour`_ and
:user:`Kumar Ashutosh <thechargedneutron>`.
- |Feature| :class:`neighbors.RadiusNeighborsClassifier` now supports
predicting probabilities by using `predict_proba` and supports more
outlier_label options: 'most_frequent', or different outlier_labels
for multi-outputs.
:pr:`9597` by :user:`Wenbo Zhao <webber26232>`.
- |Efficiency| Efficiency improvements for
:func:`neighbors.RadiusNeighborsClassifier.predict`.
:pr:`9597` by :user:`Wenbo Zhao <webber26232>`.
- |Fix| :class:`neighbors.KNeighborsRegressor` now throws error when
`metric='precomputed'` and fit on non-square data. :pr:`14336` by
:user:`Gregory Dexter <gdex1>`.
:mod:`sklearn.neural_network`
.............................
- |Feature| Add `max_fun` parameter in
`neural_network.BaseMultilayerPerceptron`,
:class:`neural_network.MLPRegressor`, and
  :class:`neural_network.MLPClassifier` to give control over the maximum
  number of loss function calls, so that optimization stops even when the
  ``tol`` improvement criterion is not met.
  :issue:`9274` by :user:`Daniel Perry <daniel-perry>`.
:mod:`sklearn.pipeline`
.......................
- |Enhancement| :class:`pipeline.Pipeline` now supports :term:`score_samples` if
the final estimator does.
:pr:`13806` by :user:`Anaël Beaugnon <ab-anssi>`.
- |Fix| The `fit` in :class:`~pipeline.FeatureUnion` now accepts `fit_params`
to pass to the underlying transformers. :pr:`15119` by `Adrin Jalali`_.
- |API| `None` as a transformer is now deprecated in
:class:`pipeline.FeatureUnion`. Please use `'drop'` instead. :pr:`15053` by
`Thomas Fan`_.
:mod:`sklearn.preprocessing`
............................
- |Efficiency| :class:`preprocessing.PolynomialFeatures` is now faster when
the input data is dense. :pr:`13290` by :user:`Xavier Dupré <sdpython>`.
- |Enhancement| Avoid unnecessary data copy when fitting preprocessors
:class:`preprocessing.StandardScaler`, :class:`preprocessing.MinMaxScaler`,
:class:`preprocessing.MaxAbsScaler`, :class:`preprocessing.RobustScaler`
and :class:`preprocessing.QuantileTransformer` which results in a slight
performance improvement. :pr:`13987` by `Roman Yurchak`_.
- |Fix| :class:`preprocessing.KernelCenterer` now throws an error when fit on
  non-square data.
  :pr:`14336` by :user:`Gregory Dexter <gdex1>`.
:mod:`sklearn.model_selection`
..............................
- |Fix| :class:`model_selection.GridSearchCV` and
  `model_selection.RandomizedSearchCV` now support the
`_pairwise` property, which prevents an error during cross-validation
for estimators with pairwise inputs (such as
:class:`neighbors.KNeighborsClassifier` when :term:`metric` is set to
'precomputed').
:pr:`13925` by :user:`Isaac S. Robson <isrobson>` and :pr:`15524` by
:user:`Xun Tang <xun-tang>`.
:mod:`sklearn.svm`
..................
- |Enhancement| :class:`svm.SVC` and :class:`svm.NuSVC` now accept a
``break_ties`` parameter. This parameter results in :term:`predict` breaking
the ties according to the confidence values of :term:`decision_function`, if
``decision_function_shape='ovr'``, and the number of target classes > 2.
:pr:`12557` by `Adrin Jalali`_.
- |Enhancement| SVM estimators now throw a more specific error when
`kernel='precomputed'` and fit on non-square data.
:pr:`14336` by :user:`Gregory Dexter <gdex1>`.
- |Fix| :class:`svm.SVC`, :class:`svm.SVR`, :class:`svm.NuSVR` and
  :class:`svm.OneClassSVM` previously generated an invalid model when fit()
  received negative or zero values for the ``sample_weight`` parameter. This
  behavior occurred only in some border scenarios. In these cases, fit() now
  fails with an exception.
  :pr:`14286` by :user:`Alex Shacked <alexshacked>`.
- |Fix| The `n_support_` attribute of :class:`svm.SVR` and
  :class:`svm.OneClassSVM` was previously uninitialized and had size 2. It now
  has size 1 with the correct value. :pr:`15099` by `Nicolas Hug`_.
- |Fix| fixed a bug in `BaseLibSVM._sparse_fit` where n_SV=0 raised a
ZeroDivisionError. :pr:`14894` by :user:`Danna Naser <danna-naser>`.
- |Fix| The liblinear solver now supports ``sample_weight``.
:pr:`15038` by `Guillaume Lemaitre`_.
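For illustration, a minimal sketch of the new ``break_ties`` parameter described above (dataset choice is arbitrary; the parameter requires ``decision_function_shape='ovr'`` and more than two classes):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# With break_ties=True, predict resolves voting ties between classes
# using the confidence values of decision_function.
clf = SVC(decision_function_shape="ovr", break_ties=True, random_state=0)
clf.fit(X, y)
preds = clf.predict(X[:3])
print(preds)
```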
:mod:`sklearn.tree`
...................
- |Feature| Adds minimal cost complexity pruning, controlled by ``ccp_alpha``,
to :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,
:class:`tree.ExtraTreeClassifier`, :class:`tree.ExtraTreeRegressor`,
:class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
:class:`ensemble.ExtraTreesClassifier`,
:class:`ensemble.ExtraTreesRegressor`,
:class:`ensemble.GradientBoostingClassifier`,
and :class:`ensemble.GradientBoostingRegressor`.
:pr:`12887` by `Thomas Fan`_.
- |API| ``presort`` is now deprecated in
:class:`tree.DecisionTreeClassifier` and
:class:`tree.DecisionTreeRegressor`, and the parameter has no effect.
:pr:`14907` by `Adrin Jalali`_.
- |API| The ``classes_`` and ``n_classes_`` attributes of
:class:`tree.DecisionTreeRegressor` are now deprecated. :pr:`15028` by
:user:`Mei Guan <meiguan>`, `Nicolas Hug`_, and `Adrin Jalali`_.
:mod:`sklearn.utils`
....................
- |Feature| :func:`~utils.estimator_checks.check_estimator` can now generate
checks by setting `generate_only=True`. Previously, running
:func:`~utils.estimator_checks.check_estimator` will stop when the first
check fails. With `generate_only=True`, all checks can run independently and
report the ones that are failing. Read more in
:ref:`rolling_your_own_estimator`. :pr:`14381` by `Thomas Fan`_.
- |Feature| Added a pytest specific decorator,
:func:`~utils.estimator_checks.parametrize_with_checks`, to parametrize
estimator checks for a list of estimators. :pr:`14381` by `Thomas Fan`_.
- |Feature| A new random variable, `utils.fixes.loguniform` implements a
log-uniform random variable (e.g., for use in RandomizedSearchCV).
For example, the outcomes ``1``, ``10`` and ``100`` are all equally likely
for ``loguniform(1, 100)``. See :issue:`11232` by
:user:`Scott Sievert <stsievert>` and :user:`Nathaniel Saul <sauln>`,
  and `SciPy PR 10815 <https://github.com/scipy/scipy/pull/10815>`_.
- |Enhancement| `utils.safe_indexing` (now deprecated) accepts an
``axis`` parameter to index array-like across rows and columns. The column
indexing can be done on NumPy array, SciPy sparse matrix, and Pandas
DataFrame. An additional refactoring was done. :pr:`14035` and :pr:`14475`
by `Guillaume Lemaitre`_.
- |Enhancement| :func:`utils.extmath.safe_sparse_dot` works between 3D+ ndarray
and sparse matrix.
:pr:`14538` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :func:`utils.check_array` is now raising an error instead of casting
NaN to integer.
:pr:`14872` by `Roman Yurchak`_.
- |Fix| :func:`utils.check_array` will now correctly detect numeric dtypes in
pandas dataframes, fixing a bug where ``float32`` was upcast to ``float64``
unnecessarily. :pr:`15094` by `Andreas Müller`_.
- |API| The following utils have been deprecated and are now private:
- ``choose_check_classifiers_labels``
- ``enforce_estimator_tags_y``
- ``mocking.MockDataFrame``
- ``mocking.CheckingClassifier``
- ``optimize.newton_cg``
- ``random.random_choice_csc``
- ``utils.choose_check_classifiers_labels``
- ``utils.enforce_estimator_tags_y``
- ``utils.optimize.newton_cg``
- ``utils.random.random_choice_csc``
- ``utils.safe_indexing``
- ``utils.mocking``
- ``utils.fast_dict``
- ``utils.seq_dataset``
- ``utils.weight_vector``
- ``utils.fixes.parallel_helper`` (removed)
- All of ``utils.testing`` except for ``all_estimators`` which is now in
``utils``.
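For illustration, a minimal sketch of the log-uniform random variable mentioned above; the same distribution was upstreamed to SciPy, so this sketch uses `scipy.stats.loguniform` rather than the (since-deprecated) sklearn fix:

```python
from scipy.stats import loguniform

# Draws are spread evenly across decades: roughly as many samples fall
# in [1, 10) as in [10, 100].
rv = loguniform(1, 100)
samples = rv.rvs(size=1000, random_state=0)
print(samples.min(), samples.max())
```

Such a variable can be passed directly as a parameter distribution to `RandomizedSearchCV`.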
:mod:`sklearn.isotonic`
..................................
- |Fix| Fixed a bug where :class:`isotonic.IsotonicRegression.fit` raised error
when `X.dtype == 'float32'` and `X.dtype != y.dtype`.
:pr:`14902` by :user:`Lucas <lostcoaster>`.
Miscellaneous
.............
- |Fix| Ported `lobpcg` from SciPy, which implements some bug fixes that are
  only available in SciPy 1.3+.
  :pr:`13609` and :pr:`14971` by `Guillaume Lemaitre`_.
- |API| Scikit-learn now converts any input data structure implementing a
duck array to a numpy array (using ``__array__``) to ensure consistent
behavior instead of relying on ``__array_function__`` (see `NEP 18
<https://numpy.org/neps/nep-0018-array-function-protocol.html>`_).
:pr:`14702` by `Andreas Müller`_.
- |API| Replace manual checks with ``check_is_fitted``. Errors thrown when
  using non-fitted estimators are now more uniform.
:pr:`13013` by :user:`Agamemnon Krasoulis <agamemnonc>`.
Changes to estimator checks
---------------------------
These changes mostly affect library developers.
- Estimators are now expected to raise a ``NotFittedError`` if ``predict`` or
``transform`` is called before ``fit``; previously an ``AttributeError`` or
``ValueError`` was acceptable.
  :pr:`13013` by :user:`Agamemnon Krasoulis <agamemnonc>`.
- Binary only classifiers are now supported in estimator checks.
Such classifiers need to have the `binary_only=True` estimator tag.
:pr:`13875` by `Trevor Stephens`_.
- Estimators are expected to convert input data (``X``, ``y``,
``sample_weights``) to :class:`numpy.ndarray` and never call
``__array_function__`` on the original datatype that is passed (see `NEP 18
<https://numpy.org/neps/nep-0018-array-function-protocol.html>`_).
:pr:`14702` by `Andreas Müller`_.
- `requires_positive_X` estimator tag (for models that require
X to be non-negative) is now used by :meth:`utils.estimator_checks.check_estimator`
to make sure a proper error message is raised if X contains some negative entries.
:pr:`14680` by :user:`Alex Gramfort <agramfort>`.
- Added check that pairwise estimators raise error on non-square data
:pr:`14336` by :user:`Gregory Dexter <gdex1>`.
- Added two common multioutput estimator tests
`utils.estimator_checks.check_classifier_multioutput` and
`utils.estimator_checks.check_regressor_multioutput`.
:pr:`13392` by :user:`Rok Mihevc <rok>`.
- |Fix| Added ``check_transformer_data_not_an_array`` to the checks where it
  was missing.
- |Fix| The estimators tags resolution now follows the regular MRO. They used
to be overridable only once. :pr:`14884` by `Andreas Müller`_.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 0.21, including:
Aaron Alphonsus, Abbie Popa, Abdur-Rahmaan Janhangeer, abenbihi, Abhinav Sagar,
Abhishek Jana, Abraham K. Lagat, Adam J. Stewart, Aditya Vyas, Adrin Jalali,
Agamemnon Krasoulis, Alec Peters, Alessandro Surace, Alexandre de Siqueira,
Alexandre Gramfort, alexgoryainov, Alex Henrie, Alex Itkes, alexshacked, Allen
Akinkunle, Anaël Beaugnon, Anders Kaseorg, Andrea Maldonado, Andrea Navarrete,
Andreas Mueller, Andreas Schuderer, Andrew Nystrom, Angela Ambroz, Anisha
Keshavan, Ankit Jha, Antonio Gutierrez, Anuja Kelkar, Archana Alva,
arnaudstiegler, arpanchowdhry, ashimb9, Ayomide Bamidele, Baran Buluttekin,
barrycg, Bharat Raghunathan, Bill Mill, Biswadip Mandal, blackd0t, Brian G.
Barkley, Brian Wignall, Bryan Yang, c56pony, camilaagw, cartman_nabana,
catajara, Cat Chenal, Cathy, cgsavard, Charles Vesteghem, Chiara Marmo, Chris
Gregory, Christian Lorentzen, Christos Aridas, Dakota Grusak, Daniel Grady,
Daniel Perry, Danna Naser, DatenBergwerk, David Dormagen, deeplook, Dillon
Niederhut, Dong-hee Na, Dougal J. Sutherland, DrGFreeman, Dylan Cashman,
edvardlindelof, Eric Larson, Eric Ndirangu, Eunseop Jeong, Fanny,
federicopisanu, Felix Divo, flaviomorelli, FranciDona, Franco M. Luque, Frank
Hoang, Frederic Haase, g0g0gadget, Gabriel Altay, Gabriel do Vale Rios, Gael
Varoquaux, ganevgv, gdex1, getgaurav2, Gideon Sonoiya, Gordon Chen, gpapadok,
Greg Mogavero, Grzegorz Szpak, Guillaume Lemaitre, Guillem García Subies,
H4dr1en, hadshirt, Hailey Nguyen, Hanmin Qin, Hannah Bruce Macdonald, Harsh
Mahajan, Harsh Soni, Honglu Zhang, Hossein Pourbozorg, Ian Sanders, Ingrid
Spielman, J-A16, jaehong park, Jaime Ferrando Huertas, James Hill, James Myatt,
Jay, jeremiedbb, Jérémie du Boisberranger, jeromedockes, Jesper Dramsch, Joan
Massich, Joanna Zhang, Joel Nothman, Johann Faouzi, Jonathan Rahn, Jon Cusick,
Jose Ortiz, Kanika Sabharwal, Katarina Slama, kellycarmody, Kennedy Kang'ethe,
Kensuke Arai, Kesshi Jordan, Kevad, Kevin Loftis, Kevin Winata, Kevin Yu-Sheng
Li, Kirill Dolmatov, Kirthi Shankar Sivamani, krishna katyal, Lakshmi Krishnan,
Lakshya KD, LalliAcqua, lbfin, Leland McInnes, Léonard Binet, Loic Esteve,
loopyme, lostcoaster, Louis Huynh, lrjball, Luca Ionescu, Lutz Roeder,
MaggieChege, Maithreyi Venkatesh, Maltimore, Maocx, Marc Torrellas, Marie
Douriez, Markus, Markus Frey, Martina G. Vilas, Martin Oywa, Martin Thoma,
Masashi SHIBATA, Maxwell Aladago, mbillingr, m-clare, Meghann Agarwal, m.fab,
Micah Smith, miguelbarao, Miguel Cabrera, Mina Naghshhnejad, Ming Li, motmoti,
mschaffenroth, mthorrell, Natasha Borders, nezar-a, Nicolas Hug, Nidhin
Pattaniyil, Nikita Titov, Nishan Singh Mann, Nitya Mandyam, norvan,
notmatthancock, novaya, nxorable, Oleg Stikhin, Oleksandr Pavlyk, Olivier
Grisel, Omar Saleem, Owen Flanagan, panpiort8, Paolo, Paolo Toccaceli, Paresh
Mathur, Paula, Peng Yu, Peter Marko, pierretallotte, poorna-kumar, pspachtholz,
qdeffense, Rajat Garg, Raphaël Bournhonesque, Ray, Ray Bell, Rebekah Kim, Reza
Gharibi, Richard Payne, Richard W, rlms, Robert Juergens, Rok Mihevc, Roman
Feldbauer, Roman Yurchak, R Sanjabi, RuchitaGarde, Ruth Waithera, Sackey, Sam
Dixon, Samesh Lakhotia, Samuel Taylor, Sarra Habchi, Scott Gigante, Scott
Sievert, Scott White, Sebastian Pölsterl, Sergey Feldman, SeWook Oh, she-dares,
Shreya V, Shubham Mehta, Shuzhe Xiao, SimonCW, smarie, smujjiga, Sönke
Behrends, Soumirai, Sourav Singh, stefan-matcovici, steinfurt, Stéphane
Couvreur, Stephan Tulkens, Stephen Cowley, Stephen Tierney, SylvainLan,
th0rwas, theoptips, theotheo, Thierno Ibrahima DIOP, Thomas Edwards, Thomas J
Fan, Thomas Moreau, Thomas Schmitt, Tilen Kusterle, Tim Bicker, Timsaur, Tim
Staley, Tirth Patel, Tola A, Tom Augspurger, Tom Dupré la Tour, topisan, Trevor
Stephens, ttang131, Urvang Patel, Vathsala Achar, veerlosar, Venkatachalam N,
Victor Luzgin, Vincent Jeanselme, Vincent Lostanlen, Vladimir Korolev,
vnherdeiro, Wenbo Zhao, Wendy Hu, willdarnell, William de Vazelhes,
wolframalpha, xavier dupré, xcjason, x-martian, xsat, xun-tang, Yinglr,
yokasre, Yu-Hang "Maxin" Tang, Yulia Zamriy, Zhao Feng
15119 by Adrin Jalali API None as a transformer is now deprecated in class pipeline FeatureUnion Please use drop instead pr 15053 by Thomas Fan mod sklearn preprocessing Efficiency class preprocessing PolynomialFeatures is now faster when the input data is dense pr 13290 by user Xavier Dupr sdpython Enhancement Avoid unnecessary data copy when fitting preprocessors class preprocessing StandardScaler class preprocessing MinMaxScaler class preprocessing MaxAbsScaler class preprocessing RobustScaler and class preprocessing QuantileTransformer which results in a slight performance improvement pr 13987 by Roman Yurchak Fix KernelCenterer now throws error when fit on non square class preprocessing KernelCenterer pr 14336 by user Gregory Dexter gdex1 mod sklearn model selection Fix class model selection GridSearchCV and model selection RandomizedSearchCV now supports the pairwise property which prevents an error during cross validation for estimators with pairwise inputs such as class neighbors KNeighborsClassifier when term metric is set to precomputed pr 13925 by user Isaac S Robson isrobson and pr 15524 by user Xun Tang xun tang mod sklearn svm Enhancement class svm SVC and class svm NuSVC now accept a break ties parameter This parameter results in term predict breaking the ties according to the confidence values of term decision function if decision function shape ovr and the number of target classes 2 pr 12557 by Adrin Jalali Enhancement SVM estimators now throw a more specific error when kernel precomputed and fit on non square data pr 14336 by user Gregory Dexter gdex1 Fix class svm SVC class svm SVR class svm NuSVR and class svm OneClassSVM when received values negative or zero for parameter sample weight in method fit generated an invalid model This behavior occurred only in some border scenarios Now in these cases fit will fail with an Exception pr 14286 by user Alex Shacked alexshacked Fix The n support attribute of class svm SVR and class svm OneClassSVM was 
previously non initialized and had size 2 It has now size 1 with the correct value pr 15099 by Nicolas Hug Fix fixed a bug in BaseLibSVM sparse fit where n SV 0 raised a ZeroDivisionError pr 14894 by user Danna Naser danna naser Fix The liblinear solver now supports sample weight pr 15038 by Guillaume Lemaitre mod sklearn tree Feature Adds minimal cost complexity pruning controlled by ccp alpha to class tree DecisionTreeClassifier class tree DecisionTreeRegressor class tree ExtraTreeClassifier class tree ExtraTreeRegressor class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble ExtraTreesClassifier class ensemble ExtraTreesRegressor class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor pr 12887 by Thomas Fan API presort is now deprecated in class tree DecisionTreeClassifier and class tree DecisionTreeRegressor and the parameter has no effect pr 14907 by Adrin Jalali API The classes and n classes attributes of class tree DecisionTreeRegressor are now deprecated pr 15028 by user Mei Guan meiguan Nicolas Hug and Adrin Jalali mod sklearn utils Feature func utils estimator checks check estimator can now generate checks by setting generate only True Previously running func utils estimator checks check estimator will stop when the first check fails With generate only True all checks can run independently and report the ones that are failing Read more in ref rolling your own estimator pr 14381 by Thomas Fan Feature Added a pytest specific decorator func utils estimator checks parametrize with checks to parametrize estimator checks for a list of estimators pr 14381 by Thomas Fan Feature A new random variable utils fixes loguniform implements a log uniform random variable e g for use in RandomizedSearchCV For example the outcomes 1 10 and 100 are all equally likely for loguniform 1 100 See issue 11232 by user Scott Sievert stsievert and user Nathaniel Saul sauln and SciPy PR 10815 https github com scipy scipy 
pull 10815 Enhancement utils safe indexing now deprecated accepts an axis parameter to index array like across rows and columns The column indexing can be done on NumPy array SciPy sparse matrix and Pandas DataFrame An additional refactoring was done pr 14035 and pr 14475 by Guillaume Lemaitre Enhancement func utils extmath safe sparse dot works between 3D ndarray and sparse matrix pr 14538 by user J r mie du Boisberranger jeremiedbb Fix func utils check array is now raising an error instead of casting NaN to integer pr 14872 by Roman Yurchak Fix func utils check array will now correctly detect numeric dtypes in pandas dataframes fixing a bug where float32 was upcast to float64 unnecessarily pr 15094 by Andreas M ller API The following utils have been deprecated and are now private choose check classifiers labels enforce estimator tags y mocking MockDataFrame mocking CheckingClassifier optimize newton cg random random choice csc utils choose check classifiers labels utils enforce estimator tags y utils optimize newton cg utils random random choice csc utils safe indexing utils mocking utils fast dict utils seq dataset utils weight vector utils fixes parallel helper removed All of utils testing except for all estimators which is now in utils mod sklearn isotonic Fix Fixed a bug where class isotonic IsotonicRegression fit raised error when X dtype float32 and X dtype y dtype pr 14902 by user Lucas lostcoaster Miscellaneous Fix Port lobpcg from SciPy which implement some bug fixes but only available in 1 3 pr 13609 and pr 14971 by Guillaume Lemaitre API Scikit learn now converts any input data structure implementing a duck array to a numpy array using array to ensure consistent behavior instead of relying on array function see NEP 18 https numpy org neps nep 0018 array function protocol html pr 14702 by Andreas M ller API Replace manual checks with check is fitted Errors thrown when using a non fitted estimators are now more uniform pr 13013 by user Agamemnon 
Krasoulis agamemnonc Changes to estimator checks These changes mostly affect library developers Estimators are now expected to raise a NotFittedError if predict or transform is called before fit previously an AttributeError or ValueError was acceptable pr 13013 by by user Agamemnon Krasoulis agamemnonc Binary only classifiers are now supported in estimator checks Such classifiers need to have the binary only True estimator tag pr 13875 by Trevor Stephens Estimators are expected to convert input data X y sample weights to class numpy ndarray and never call array function on the original datatype that is passed see NEP 18 https numpy org neps nep 0018 array function protocol html pr 14702 by Andreas M ller requires positive X estimator tag for models that require X to be non negative is now used by meth utils estimator checks check estimator to make sure a proper error message is raised if X contains some negative entries pr 14680 by user Alex Gramfort agramfort Added check that pairwise estimators raise error on non square data pr 14336 by user Gregory Dexter gdex1 Added two common multioutput estimator tests utils estimator checks check classifier multioutput and utils estimator checks check regressor multioutput pr 13392 by user Rok Mihevc rok Fix Added check transformer data not an array to checks where missing Fix The estimators tags resolution now follows the regular MRO They used to be overridable only once pr 14884 by Andreas M ller rubric Code and documentation contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0 21 including Aaron Alphonsus Abbie Popa Abdur Rahmaan Janhangeer abenbihi Abhinav Sagar Abhishek Jana Abraham K Lagat Adam J Stewart Aditya Vyas Adrin Jalali Agamemnon Krasoulis Alec Peters Alessandro Surace Alexandre de Siqueira Alexandre Gramfort alexgoryainov Alex Henrie Alex Itkes alexshacked Allen Akinkunle Ana l Beaugnon Anders Kaseorg Andrea Maldonado Andrea Navarrete Andreas 
Mueller Andreas Schuderer Andrew Nystrom Angela Ambroz Anisha Keshavan Ankit Jha Antonio Gutierrez Anuja Kelkar Archana Alva arnaudstiegler arpanchowdhry ashimb9 Ayomide Bamidele Baran Buluttekin barrycg Bharat Raghunathan Bill Mill Biswadip Mandal blackd0t Brian G Barkley Brian Wignall Bryan Yang c56pony camilaagw cartman nabana catajara Cat Chenal Cathy cgsavard Charles Vesteghem Chiara Marmo Chris Gregory Christian Lorentzen Christos Aridas Dakota Grusak Daniel Grady Daniel Perry Danna Naser DatenBergwerk David Dormagen deeplook Dillon Niederhut Dong hee Na Dougal J Sutherland DrGFreeman Dylan Cashman edvardlindelof Eric Larson Eric Ndirangu Eunseop Jeong Fanny federicopisanu Felix Divo flaviomorelli FranciDona Franco M Luque Frank Hoang Frederic Haase g0g0gadget Gabriel Altay Gabriel do Vale Rios Gael Varoquaux ganevgv gdex1 getgaurav2 Gideon Sonoiya Gordon Chen gpapadok Greg Mogavero Grzegorz Szpak Guillaume Lemaitre Guillem Garc a Subies H4dr1en hadshirt Hailey Nguyen Hanmin Qin Hannah Bruce Macdonald Harsh Mahajan Harsh Soni Honglu Zhang Hossein Pourbozorg Ian Sanders Ingrid Spielman J A16 jaehong park Jaime Ferrando Huertas James Hill James Myatt Jay jeremiedbb J r mie du Boisberranger jeromedockes Jesper Dramsch Joan Massich Joanna Zhang Joel Nothman Johann Faouzi Jonathan Rahn Jon Cusick Jose Ortiz Kanika Sabharwal Katarina Slama kellycarmody Kennedy Kang ethe Kensuke Arai Kesshi Jordan Kevad Kevin Loftis Kevin Winata Kevin Yu Sheng Li Kirill Dolmatov Kirthi Shankar Sivamani krishna katyal Lakshmi Krishnan Lakshya KD LalliAcqua lbfin Leland McInnes L onard Binet Loic Esteve loopyme lostcoaster Louis Huynh lrjball Luca Ionescu Lutz Roeder MaggieChege Maithreyi Venkatesh Maltimore Maocx Marc Torrellas Marie Douriez Markus Markus Frey Martina G Vilas Martin Oywa Martin Thoma Masashi SHIBATA Maxwell Aladago mbillingr m clare Meghann Agarwal m fab Micah Smith miguelbarao Miguel Cabrera Mina Naghshhnejad Ming Li motmoti mschaffenroth mthorrell Natasha Borders 
nezar a Nicolas Hug Nidhin Pattaniyil Nikita Titov Nishan Singh Mann Nitya Mandyam norvan notmatthancock novaya nxorable Oleg Stikhin Oleksandr Pavlyk Olivier Grisel Omar Saleem Owen Flanagan panpiort8 Paolo Paolo Toccaceli Paresh Mathur Paula Peng Yu Peter Marko pierretallotte poorna kumar pspachtholz qdeffense Rajat Garg Rapha l Bournhonesque Ray Ray Bell Rebekah Kim Reza Gharibi Richard Payne Richard W rlms Robert Juergens Rok Mihevc Roman Feldbauer Roman Yurchak R Sanjabi RuchitaGarde Ruth Waithera Sackey Sam Dixon Samesh Lakhotia Samuel Taylor Sarra Habchi Scott Gigante Scott Sievert Scott White Sebastian P lsterl Sergey Feldman SeWook Oh she dares Shreya V Shubham Mehta Shuzhe Xiao SimonCW smarie smujjiga S nke Behrends Soumirai Sourav Singh stefan matcovici steinfurt St phane Couvreur Stephan Tulkens Stephen Cowley Stephen Tierney SylvainLan th0rwas theoptips theotheo Thierno Ibrahima DIOP Thomas Edwards Thomas J Fan Thomas Moreau Thomas Schmitt Tilen Kusterle Tim Bicker Timsaur Tim Staley Tirth Patel Tola A Tom Augspurger Tom Dupr la Tour topisan Trevor Stephens ttang131 Urvang Patel Vathsala Achar veerlosar Venkatachalam N Victor Luzgin Vincent Jeanselme Vincent Lostanlen Vladimir Korolev vnherdeiro Wenbo Zhao Wendy Hu willdarnell William de Vazelhes wolframalpha xavier dupr xcjason x martian xsat xun tang Yinglr yokasre Yu Hang Maxin Tang Yulia Zamriy Zhao Feng |
.. include:: _contributors.rst
.. currentmodule:: sklearn
.. _release_notes_1_1:
===========
Version 1.1
===========
For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_1_0.py`.
.. include:: changelog_legend.inc
.. _changes_1_1_3:
Version 1.1.3
=============
**October 2022**
This bugfix release only includes fixes for compatibility with the latest
SciPy release >= 1.9.2. Notable changes include:
- |Fix| Include `msvcp140.dll` in the scikit-learn wheels since it has been
removed in the latest SciPy wheels.
:pr:`24631` by :user:`Chiara Marmo <cmarmo>`.
- |Enhancement| Create wheels for Python 3.11.
:pr:`24446` by :user:`Chiara Marmo <cmarmo>`.
Other bug fixes will be available in the next 1.2 release, which will be
released in the coming weeks.
Note that support for 32-bit Python on Windows has been dropped in this release. This
is due to the fact that SciPy 1.9.2 also dropped the support for that platform.
Windows users are advised to install the 64-bit version of Python instead.
.. _changes_1_1_2:
Version 1.1.2
=============
**August 2022**
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Fix| :class:`manifold.TSNE` now throws a `ValueError` when fit with
`perplexity>=n_samples` to ensure mathematical correctness of the algorithm.
:pr:`10805` by :user:`Mathias Andersen <MrMathias>` and
:pr:`23471` by :user:`Meekail Zain <micky774>`.
Changelog
---------
- |Fix| A default HTML representation is shown for meta-estimators with invalid
parameters. :pr:`24015` by `Thomas Fan`_.
- |Fix| Add support for F-contiguous arrays for estimators and functions whose back-end
have been changed in 1.1.
:pr:`23990` by :user:`Julien Jerphanion <jjerphan>`.
- |Fix| Wheels are now available for MacOS 10.9 and greater. :pr:`23833` by
`Thomas Fan`_.
:mod:`sklearn.base`
...................
- |Fix| The `get_params` method of the :class:`base.BaseEstimator` class now supports
estimators with `type`-type params that have the `get_params` method.
:pr:`24017` by :user:`Henry Sorsky <hsorsky>`.
:mod:`sklearn.cluster`
......................
- |Fix| Fixed a bug in :class:`cluster.Birch` that could trigger an error when splitting
a node if there are duplicates in the dataset.
:pr:`23395` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.feature_selection`
................................
- |Fix| :class:`feature_selection.SelectFromModel` defaults to selection
threshold 1e-5 when the estimator is either :class:`linear_model.ElasticNet`
or :class:`linear_model.ElasticNetCV` with `l1_ratio` equals 1 or
:class:`linear_model.LassoCV`.
:pr:`23636` by :user:`Hao Chun Chang <haochunchang>`.
:mod:`sklearn.impute`
.....................
- |Fix| :class:`impute.SimpleImputer` uses the dtype seen in `fit` for
`transform` when the dtype is object. :pr:`22063` by `Thomas Fan`_.
:mod:`sklearn.linear_model`
...........................
- |Fix| Use dtype-aware tolerances for the validation of gram matrices (passed by users
or precomputed). :pr:`22059` by :user:`Malte S. Kurz <MalteKurz>`.
- |Fix| Fixed an error in :class:`linear_model.LogisticRegression` with
`solver="newton-cg"`, `fit_intercept=True`, and a single feature. :pr:`23608`
by `Tom Dupre la Tour`_.
:mod:`sklearn.manifold`
.......................
- |Fix| :class:`manifold.TSNE` now throws a `ValueError` when fit with
`perplexity>=n_samples` to ensure mathematical correctness of the algorithm.
:pr:`10805` by :user:`Mathias Andersen <MrMathias>` and
:pr:`23471` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fixed error message of :func:`metrics.coverage_error` for 1D array input.
  :pr:`23548` by :user:`Hao Chun Chang <haochunchang>`.
:mod:`sklearn.preprocessing`
............................
- |Fix| :meth:`preprocessing.OrdinalEncoder.inverse_transform` correctly handles
use cases where `unknown_value` or `encoded_missing_value` is `nan`. :pr:`24087`
by `Thomas Fan`_.
:mod:`sklearn.tree`
...................
- |Fix| Fixed invalid memory access bug during fit in
:class:`tree.DecisionTreeRegressor` and :class:`tree.DecisionTreeClassifier`.
:pr:`23273` by `Thomas Fan`_.
.. _changes_1_1_1:
Version 1.1.1
=============
**May 2022**
Changelog
---------
- |Enhancement| The error message is improved when importing
:class:`model_selection.HalvingGridSearchCV`,
:class:`model_selection.HalvingRandomSearchCV`, or
:class:`impute.IterativeImputer` without importing the experimental flag.
:pr:`23194` by `Thomas Fan`_.
- |Enhancement| Added an extension in doc/conf.py to automatically generate
the list of estimators that handle NaN values.
:pr:`23198` by :user:`Lise Kleiber <lisekleiber>`, :user:`Zhehao Liu <MaxwellLZH>`
and :user:`Chiara Marmo <cmarmo>`.
:mod:`sklearn.datasets`
.......................
- |Fix| Avoid timeouts in :func:`datasets.fetch_openml` by not passing a
`timeout` argument, :pr:`23358` by :user:`Loïc Estève <lesteve>`.
:mod:`sklearn.decomposition`
............................
- |Fix| Avoid spurious warning in :class:`decomposition.IncrementalPCA` when
`n_samples == n_components`. :pr:`23264` by :user:`Lucy Liu <lucyleeow>`.
:mod:`sklearn.feature_selection`
................................
- |Fix| The `partial_fit` method of :class:`feature_selection.SelectFromModel`
now conducts validation for `max_features` and `feature_names_in` parameters.
:pr:`23299` by :user:`Long Bao <lorentzbao>`.
:mod:`sklearn.metrics`
......................
- |Fix| Fixes :func:`metrics.precision_recall_curve` to compute precision-recall at 100%
recall. The Precision-Recall curve now displays the last point corresponding to a
classifier that always predicts the positive class: recall=100% and
precision=class balance.
:pr:`23214` by :user:`Stéphane Collot <stephanecollot>` and :user:`Max Baak <mbaak>`.
:mod:`sklearn.preprocessing`
............................
- |Fix| :class:`preprocessing.PolynomialFeatures` with ``degree`` equal to 0
will raise error when ``include_bias`` is set to False, and outputs a single
constant array when ``include_bias`` is set to True.
:pr:`23370` by :user:`Zhehao Liu <MaxwellLZH>`.
:mod:`sklearn.tree`
...................
- |Fix| Fixes performance regression with low cardinality features for
:class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor`,
:class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
:class:`ensemble.GradientBoostingClassifier`, and
:class:`ensemble.GradientBoostingRegressor`.
:pr:`23410` by :user:`Loïc Estève <lesteve>`.
:mod:`sklearn.utils`
....................
- |Fix| :func:`utils.class_weight.compute_sample_weight` now works with sparse `y`.
:pr:`23115` by :user:`kernc <kernc>`.
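For reference, a minimal sketch of what `compute_sample_weight` computes; the fix above extends the same computation to a sparse `y` (dense `y` shown here):

```python
from sklearn.utils.class_weight import compute_sample_weight

# "balanced" weights are n_samples / (n_classes * bincount(y)):
# here class 0 appears twice and class 1 once among 3 samples.
weights = compute_sample_weight("balanced", [0, 0, 1])
print(weights)  # [0.75 0.75 1.5 ]
```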
.. _changes_1_1:
Version 1.1.0
=============
**May 2022**
Minimal dependencies
--------------------
Version 1.1.0 of scikit-learn requires python 3.8+, numpy 1.17.3+ and
scipy 1.3.2+. Optional minimal dependency is matplotlib 3.1.2+.
Changed models
--------------
The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- |Efficiency| :class:`cluster.KMeans` now defaults to ``algorithm="lloyd"``
instead of ``algorithm="auto"``, which was equivalent to
``algorithm="elkan"``. Lloyd's algorithm and Elkan's algorithm converge to the
same solution, up to numerical rounding errors, but in general Lloyd's
algorithm uses much less memory, and it is often faster.
- |Efficiency| Fitting :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor`,
:class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
:class:`ensemble.GradientBoostingClassifier`, and
:class:`ensemble.GradientBoostingRegressor` is on average 15% faster than in
previous versions thanks to a new sort algorithm to find the best split.
Models might be different because of a different handling of splits
with tied criterion values: both the old and the new sorting algorithm
are unstable sorting algorithms. :pr:`22868` by `Thomas Fan`_.
- |Fix| The eigenvectors initialization for :class:`cluster.SpectralClustering`
and :class:`manifold.SpectralEmbedding` now samples from a Gaussian when
using the `'amg'` or `'lobpcg'` solver. This change improves numerical
stability of the solver, but may result in a different model.
- |Fix| :func:`feature_selection.f_regression` and
  :func:`feature_selection.r_regression` will now return a finite score by
  default instead of `np.nan` and `np.inf` for some corner cases. You can use
  `force_finite=False` if you really want to get non-finite values and keep
  the old behavior.
- |Fix| Pandas DataFrames with all non-string columns such as a MultiIndex no
  longer warn when passed into an estimator. Estimators will continue to
  ignore the column names in DataFrames with non-string columns. For
  `feature_names_in_` to be defined, columns must be all strings. :pr:`22410` by
  `Thomas Fan`_.
- |Fix| :class:`preprocessing.KBinsDiscretizer` changed handling of bin edges
slightly, which might result in a different encoding with the same data.
- |Fix| :func:`calibration.calibration_curve` changed handling of bin
edges slightly, which might result in a different output curve given the same
data.
- |Fix| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now uses
the correct variance-scaling coefficient which may result in different model
behavior.
- |Fix| :meth:`feature_selection.SelectFromModel.fit` and
:meth:`feature_selection.SelectFromModel.partial_fit` can now be called with
`prefit=True`. `estimators_` will be a deep copy of `estimator` when
`prefit=True`. :pr:`23271` by :user:`Guillaume Lemaitre <glemaitre>`.
Changelog
---------
..
Entries should be grouped by module (in alphabetic order) and prefixed with
one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
|Fix| or |API| (see whats_new.rst for descriptions).
Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
Changes not specific to a module should be listed under *Multiple Modules*
or *Miscellaneous*.
Entries should end with:
:pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
where 123456 is the *pull request* number, not the issue number.
- |Efficiency| Low-level routines for reductions on pairwise distances
for dense float64 datasets have been refactored. The following functions
and estimators now benefit from improved performances in terms of hardware
scalability and speed-ups:
- :func:`sklearn.metrics.pairwise_distances_argmin`
- :func:`sklearn.metrics.pairwise_distances_argmin_min`
- :class:`sklearn.cluster.AffinityPropagation`
- :class:`sklearn.cluster.Birch`
- :class:`sklearn.cluster.MeanShift`
- :class:`sklearn.cluster.OPTICS`
- :class:`sklearn.cluster.SpectralClustering`
- :func:`sklearn.feature_selection.mutual_info_regression`
- :class:`sklearn.neighbors.KNeighborsClassifier`
- :class:`sklearn.neighbors.KNeighborsRegressor`
- :class:`sklearn.neighbors.RadiusNeighborsClassifier`
- :class:`sklearn.neighbors.RadiusNeighborsRegressor`
- :class:`sklearn.neighbors.LocalOutlierFactor`
- :class:`sklearn.neighbors.NearestNeighbors`
- :class:`sklearn.manifold.Isomap`
- :class:`sklearn.manifold.LocallyLinearEmbedding`
- :class:`sklearn.manifold.TSNE`
- :func:`sklearn.manifold.trustworthiness`
- :class:`sklearn.semi_supervised.LabelPropagation`
- :class:`sklearn.semi_supervised.LabelSpreading`
  For instance :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` and
  :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors`
  can respectively be up to ×20 and ×5 faster than previously on a laptop.
  Moreover, implementations of those two algorithms are now suitable
  for machines with many cores, making them usable for datasets consisting
  of millions of samples.
:pr:`21987`, :pr:`22064`, :pr:`22065`, :pr:`22288` and :pr:`22320`
by :user:`Julien Jerphanion <jjerphan>`.
- |Enhancement| All scikit-learn models now generate a more informative
error message when some input contains unexpected `NaN` or infinite values.
In particular the message contains the input name ("X", "y" or
"sample_weight") and if an unexpected `NaN` value is found in `X`, the error
message suggests potential solutions.
:pr:`21219` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| All scikit-learn models now generate a more informative
error message when setting invalid hyper-parameters with `set_params`.
:pr:`21542` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| Removes random unique identifiers in the HTML representation.
With this change, jupyter notebooks are reproducible as long as the cells are
run in the same order. :pr:`23098` by `Thomas Fan`_.
- |Fix| Estimators with `non_deterministic` tag set to `True` will skip both
`check_methods_sample_order_invariance` and `check_methods_subset_invariance` tests.
:pr:`22318` by :user:`Zhehao Liu <MaxwellLZH>`.
- |API| The option for using the log loss, aka binomial or multinomial deviance, via
the `loss` parameters was made more consistent. The preferred way is by
setting the value to `"log_loss"`. Old option names are still valid and
produce the same models, but are deprecated and will be removed in version
1.3.
- For :class:`ensemble.GradientBoostingClassifier`, the `loss` parameter name
"deviance" is deprecated in favor of the new name "log_loss", which is now the
default.
:pr:`23036` by :user:`Christian Lorentzen <lorentzenchr>`.
- For :class:`ensemble.HistGradientBoostingClassifier`, the `loss` parameter names
"auto", "binary_crossentropy" and "categorical_crossentropy" are deprecated in
favor of the new name "log_loss", which is now the default.
:pr:`23040` by :user:`Christian Lorentzen <lorentzenchr>`.
- For :class:`linear_model.SGDClassifier`, the `loss` parameter name
"log" is deprecated in favor of the new name "log_loss".
:pr:`23046` by :user:`Christian Lorentzen <lorentzenchr>`.
- |API| Rich html representation of estimators is now enabled by default in Jupyter
notebooks. It can be deactivated by setting `display='text'` in
:func:`sklearn.set_config`.
:pr:`22856` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
:mod:`sklearn.calibration`
..........................
- |Enhancement| :func:`calibration.calibration_curve` accepts a parameter
`pos_label` to specify the positive class label.
:pr:`21032` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :meth:`calibration.CalibratedClassifierCV.fit` now supports passing
`fit_params`, which are routed to the `base_estimator`.
:pr:`18170` by :user:`Benjamin Bossan <BenjaminBossan>`.
- |Enhancement| :class:`calibration.CalibrationDisplay` accepts a parameter `pos_label`
to add this information to the plot.
:pr:`21038` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :func:`calibration.calibration_curve` handles bin edges more consistently now.
:pr:`14975` by `Andreas Müller`_ and :pr:`22526` by :user:`Meekail Zain <micky774>`.
- |API| :func:`calibration.calibration_curve`'s `normalize` parameter is
now deprecated and will be removed in version 1.3. It is recommended that
a proper probability (i.e. a classifier's :term:`predict_proba` positive
class) is used for `y_prob`.
:pr:`23095` by :user:`Jordan Silke <jsilke>`.
:mod:`sklearn.cluster`
......................
- |MajorFeature| Added :class:`cluster.BisectingKMeans`, introducing the
  Bisecting K-Means algorithm.
:pr:`20031` by :user:`Michal Krawczyk <michalkrawczyk>`,
:user:`Tom Dupre la Tour <TomDLT>`
and :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Enhancement| :class:`cluster.SpectralClustering` and
:func:`cluster.spectral_clustering` now include the new `'cluster_qr'` method that
clusters samples in the embedding space as an alternative to the existing `'kmeans'`
and `'discrete'` methods. See :func:`cluster.spectral_clustering` for more details.
:pr:`21148` by :user:`Andrew Knyazev <lobpcg>`.
- |Enhancement| Adds :term:`get_feature_names_out` to :class:`cluster.Birch`,
:class:`cluster.FeatureAgglomeration`, :class:`cluster.KMeans`,
:class:`cluster.MiniBatchKMeans`. :pr:`22255` by `Thomas Fan`_.
- |Enhancement| :class:`cluster.SpectralClustering` now raises consistent
error messages when passed invalid values for `n_clusters`, `n_init`,
`gamma`, `n_neighbors`, `eigen_tol` or `degree`.
:pr:`21881` by :user:`Hugo Vassard <hvassard>`.
- |Enhancement| :class:`cluster.AffinityPropagation` now returns cluster
centers and labels if they exist, even if the model has not fully converged.
When returning these potentially-degenerate cluster centers and labels, a new
warning message is shown. If no cluster centers were constructed,
then the cluster centers remain an empty list with labels set to
`-1` and the original warning message is shown.
:pr:`22217` by :user:`Meekail Zain <micky774>`.
- |Efficiency| In :class:`cluster.KMeans`, the default ``algorithm`` is now
``"lloyd"`` which is the full classical EM-style algorithm. Both ``"auto"``
and ``"full"`` are deprecated and will be removed in version 1.3. They are
now aliases for ``"lloyd"``. The previous default was ``"auto"``, which relied
  on Elkan's algorithm. Lloyd's algorithm uses less memory than Elkan's, is
  faster on many datasets, and its results are identical, hence the change.
:pr:`21735` by :user:`Aurélien Geron <ageron>`.
- |Fix| :class:`cluster.KMeans`'s `init` parameter now properly supports
array-like input and NumPy string scalars. :pr:`22154` by `Thomas Fan`_.
:mod:`sklearn.compose`
......................
- |Fix| :class:`compose.ColumnTransformer` now removes validation errors from
`__init__` and `set_params` methods.
:pr:`22537` by :user:`iofall <iofall>` and :user:`Arisa Y. <arisayosh>`.
- |Fix| :term:`get_feature_names_out` functionality in
:class:`compose.ColumnTransformer` was broken when columns were specified
using `slice`. This is fixed in :pr:`22775` and :pr:`22913` by
:user:`randomgeek78 <randomgeek78>`.
:mod:`sklearn.covariance`
.........................
- |Fix| :class:`covariance.GraphicalLassoCV` now accepts NumPy array for the
parameter `alphas`.
:pr:`22493` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.cross_decomposition`
..................................
- |Enhancement| The `inverse_transform` method of
:class:`cross_decomposition.PLSRegression`, :class:`cross_decomposition.PLSCanonical`
and :class:`cross_decomposition.CCA` now allows reconstruction of a `X` target when
a `Y` parameter is given. :pr:`19680` by
:user:`Robin Thibaut <robinthibaut>`.
- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the
:mod:`~sklearn.cross_decomposition` module:
:class:`cross_decomposition.CCA`,
:class:`cross_decomposition.PLSSVD`,
:class:`cross_decomposition.PLSRegression`,
and :class:`cross_decomposition.PLSCanonical`. :pr:`22119` by `Thomas Fan`_.
- |Fix| The shape of the :term:`coef_` attribute of :class:`cross_decomposition.CCA`,
:class:`cross_decomposition.PLSCanonical` and
:class:`cross_decomposition.PLSRegression` will change in version 1.3, from
`(n_features, n_targets)` to `(n_targets, n_features)`, to be consistent
  with other linear models and to make it work with interfaces expecting a
specific shape for `coef_` (e.g. :class:`feature_selection.RFE`).
:pr:`22016` by :user:`Guillaume Lemaitre <glemaitre>`.
- |API| add the fitted attribute `intercept_` to
:class:`cross_decomposition.PLSCanonical`,
:class:`cross_decomposition.PLSRegression`, and
:class:`cross_decomposition.CCA`. The method `predict` is indeed equivalent to
`Y = X @ coef_ + intercept_`.
:pr:`22015` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.datasets`
.......................
- |Feature| :func:`datasets.load_files` now accepts an ignore list and
an allow list based on file extensions.
:pr:`19747` by :user:`Tony Attalla <tonyattalla>` and :pr:`22498` by
:user:`Meekail Zain <micky774>`.
- |Enhancement| :func:`datasets.make_swiss_roll` now supports the optional argument
  `hole`; when set to `True`, it returns the swiss-hole dataset. :pr:`21482` by
:user:`Sebastian Pujalte <pujaltes>`.
- |Enhancement| :func:`datasets.make_blobs` no longer copies data during the generation
  process and therefore uses less memory.
:pr:`22412` by :user:`Zhehao Liu <MaxwellLZH>`.
- |Enhancement| :func:`datasets.load_diabetes` now accepts the parameter
``scaled``, to allow loading unscaled data. The scaled version of this
dataset is now computed from the unscaled data, and can produce slightly
  different results than in previous versions (within a 1e-4 absolute
tolerance).
:pr:`16605` by :user:`Mandy Gu <happilyeverafter95>`.
- |Enhancement| :func:`datasets.fetch_openml` now has two optional arguments
`n_retries` and `delay`. By default, :func:`datasets.fetch_openml` will retry
3 times in case of a network failure with a delay between each try.
:pr:`21901` by :user:`Rileran <rileran>`.
- |Fix| :func:`datasets.fetch_covtype` is now concurrent-safe: data is downloaded
to a temporary directory before being moved to the data directory.
:pr:`23113` by :user:`Ilion Beyst <iasoon>`.
- |API| :func:`datasets.make_sparse_coded_signal` now accepts a parameter
`data_transposed` to explicitly specify the shape of matrix `X`. The default
behavior `True` is to return a transposed matrix `X` corresponding to a
`(n_features, n_samples)` shape. The default value will change to `False` in
version 1.3. :pr:`21425` by :user:`Gabriel Stefanini Vicente <g4brielvs>`.
:mod:`sklearn.decomposition`
............................
- |MajorFeature| Added a new estimator :class:`decomposition.MiniBatchNMF`. It is a
faster but less accurate version of non-negative matrix factorization, better suited
for large datasets. :pr:`16948` by :user:`Chiara Marmo <cmarmo>`,
:user:`Patricio Cerda <pcerda>` and :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Enhancement| :func:`decomposition.dict_learning`,
:func:`decomposition.dict_learning_online`
and :func:`decomposition.sparse_encode` preserve dtype for `numpy.float32`.
:class:`decomposition.DictionaryLearning`,
:class:`decomposition.MiniBatchDictionaryLearning`
and :class:`decomposition.SparseCoder` preserve dtype for `numpy.float32`.
:pr:`22002` by :user:`Takeshi Oura <takoika>`.
- |Enhancement| :class:`decomposition.PCA` exposes a parameter `n_oversamples` to tune
:func:`utils.extmath.randomized_svd` and get accurate results when the number of
features is large.
:pr:`21109` by :user:`Smile <x-shadow-man>`.
- |Enhancement| The :class:`decomposition.MiniBatchDictionaryLearning` and
:func:`decomposition.dict_learning_online` have been refactored and now have a
stopping criterion based on a small change of the dictionary or objective function,
controlled by the new `max_iter`, `tol` and `max_no_improvement` parameters. In
addition, some of their parameters and attributes are deprecated.
- the `n_iter` parameter of both is deprecated. Use `max_iter` instead.
- the `iter_offset`, `return_inner_stats`, `inner_stats` and `return_n_iter`
parameters of :func:`decomposition.dict_learning_online` serve internal purpose
and are deprecated.
- the `inner_stats_`, `iter_offset_` and `random_state_` attributes of
:class:`decomposition.MiniBatchDictionaryLearning` serve internal purpose and are
deprecated.
- the default value of the `batch_size` parameter of both will change from 3 to 256
in version 1.3.
:pr:`18975` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Enhancement| :class:`decomposition.SparsePCA` and :class:`decomposition.MiniBatchSparsePCA`
preserve dtype for `numpy.float32`.
:pr:`22111` by :user:`Takeshi Oura <takoika>`.
- |Enhancement| :class:`decomposition.TruncatedSVD` now allows
`n_components == n_features`, if `algorithm='randomized'`.
:pr:`22181` by :user:`Zach Deane-Mayer <zachmayer>`.
- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the
:mod:`~sklearn.decomposition` module:
:class:`decomposition.DictionaryLearning`,
:class:`decomposition.FactorAnalysis`,
:class:`decomposition.FastICA`,
:class:`decomposition.IncrementalPCA`,
:class:`decomposition.KernelPCA`,
:class:`decomposition.LatentDirichletAllocation`,
:class:`decomposition.MiniBatchDictionaryLearning`,
:class:`decomposition.MiniBatchSparsePCA`,
:class:`decomposition.NMF`,
:class:`decomposition.PCA`,
:class:`decomposition.SparsePCA`,
and :class:`decomposition.TruncatedSVD`. :pr:`21334` by
`Thomas Fan`_.
- |Enhancement| :class:`decomposition.TruncatedSVD` exposes the parameter
`n_oversamples` and `power_iteration_normalizer` to tune
:func:`utils.extmath.randomized_svd` and get accurate results when the number
of features is large, the rank of the matrix is high, or other features of
the matrix make low rank approximation difficult.
:pr:`21705` by :user:`Jay S. Stanley III <stanleyjs>`.
- |Enhancement| :class:`decomposition.PCA` exposes the parameter
`power_iteration_normalizer` to tune :func:`utils.extmath.randomized_svd` and
get more accurate results when low rank approximation is difficult.
:pr:`21705` by :user:`Jay S. Stanley III <stanleyjs>`.
- |Fix| :class:`decomposition.FastICA` now validates input parameters in `fit`
instead of `__init__`.
:pr:`21432` by :user:`Hannah Bohle <hhnnhh>` and
:user:`Maren Westermann <marenwestermann>`.
- |Fix| :class:`decomposition.FastICA` now accepts `np.float32` data without
silent upcasting. The dtype is preserved by `fit` and `fit_transform` and the
main fitted attributes use a dtype of the same precision as the training
data. :pr:`22806` by :user:`Jihane Bennis <JihaneBennis>` and
:user:`Olivier Grisel <ogrisel>`.
- |Fix| :class:`decomposition.FactorAnalysis` now validates input parameters
in `fit` instead of `__init__`.
:pr:`21713` by :user:`Haya <HayaAlmutairi>` and :user:`Krum Arnaudov <krumeto>`.
- |Fix| :class:`decomposition.KernelPCA` now validates input parameters in
`fit` instead of `__init__`.
:pr:`21567` by :user:`Maggie Chege <MaggieChege>`.
- |Fix| :class:`decomposition.PCA` and :class:`decomposition.IncrementalPCA`
more safely calculate precision using the inverse of the covariance matrix
if `self.noise_variance_` is zero.
:pr:`22300` by :user:`Meekail Zain <micky774>` and :pr:`15948` by :user:`sysuresh`.
- |Fix| Greatly reduced peak memory usage in :class:`decomposition.PCA` when
calling `fit` or `fit_transform`.
:pr:`22553` by :user:`Meekail Zain <micky774>`.
- |API| :class:`decomposition.FastICA` now supports unit variance for whitening.
The default value of its `whiten` argument will change from `True`
(which behaves like `'arbitrary-variance'`) to `'unit-variance'` in version 1.3.
:pr:`19490` by :user:`Facundo Ferrin <fferrin>` and
:user:`Julien Jerphanion <jjerphan>`.
:mod:`sklearn.discriminant_analysis`
....................................
- |Enhancement| Adds :term:`get_feature_names_out` to
:class:`discriminant_analysis.LinearDiscriminantAnalysis`. :pr:`22120` by
`Thomas Fan`_.
- |Fix| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now uses
the correct variance-scaling coefficient which may result in different model
behavior. :pr:`15984` by :user:`Okon Samuel <OkonSamuel>` and :pr:`22696` by
:user:`Meekail Zain <micky774>`.
:mod:`sklearn.dummy`
....................
- |Fix| :class:`dummy.DummyRegressor` no longer overrides the `constant`
parameter during `fit`. :pr:`22486` by `Thomas Fan`_.
:mod:`sklearn.ensemble`
.......................
- |MajorFeature| Added additional option `loss="quantile"` to
:class:`ensemble.HistGradientBoostingRegressor` for modelling quantiles.
The quantile level can be specified with the new parameter `quantile`.
:pr:`21800` and :pr:`20567` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Efficiency| `fit` of :class:`ensemble.GradientBoostingClassifier`
and :class:`ensemble.GradientBoostingRegressor` now calls :func:`utils.check_array`
with parameter `force_all_finite=False` for non initial warm-start runs as it has
already been checked before.
:pr:`22159` by :user:`Geoffrey Paris <Geoffrey-Paris>`.
- |Enhancement| :class:`ensemble.HistGradientBoostingClassifier` is faster,
for binary and in particular for multiclass problems thanks to the new private loss
function module.
:pr:`20811`, :pr:`20567` and :pr:`21814` by
:user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| Adds support for using pre-fit models with `cv="prefit"`
in :class:`ensemble.StackingClassifier` and :class:`ensemble.StackingRegressor`.
:pr:`16748` by :user:`Siqi He <siqi-he>` and :pr:`22215` by
:user:`Meekail Zain <micky774>`.
- |Enhancement| :class:`ensemble.RandomForestClassifier` and
:class:`ensemble.ExtraTreesClassifier` have the new `criterion="log_loss"`, which is
equivalent to `criterion="entropy"`.
:pr:`23047` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| Adds :term:`get_feature_names_out` to
:class:`ensemble.VotingClassifier`, :class:`ensemble.VotingRegressor`,
:class:`ensemble.StackingClassifier`, and
:class:`ensemble.StackingRegressor`. :pr:`22695` and :pr:`22697` by `Thomas Fan`_.
- |Enhancement| :class:`ensemble.RandomTreesEmbedding` now has an informative
:term:`get_feature_names_out` function that includes both tree index and leaf index in
the output feature names.
:pr:`21762` by :user:`Zhehao Liu <MaxwellLZH>` and `Thomas Fan`_.
- |Efficiency| Fitting a :class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier`,
:class:`ensemble.ExtraTreesRegressor`, and :class:`ensemble.RandomTreesEmbedding`
is now faster in a multiprocessing setting, especially for subsequent fits with
`warm_start` enabled.
:pr:`22106` by :user:`Pieter Gijsbers <PGijsbers>`.
- |Fix| Change the parameter `validation_fraction` in
:class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` so that an error is raised if anything
other than a float is passed in as an argument.
:pr:`21632` by :user:`Genesis Valencia <genvalen>`.
- |Fix| Removed a potential source of CPU oversubscription in
:class:`ensemble.HistGradientBoostingClassifier` and
:class:`ensemble.HistGradientBoostingRegressor` when CPU resource usage is limited,
for instance using cgroups quota in a docker container. :pr:`22566` by
:user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` no longer warn when
  fitting on a pandas DataFrame with a non-default `scoring` parameter and
  early stopping enabled. :pr:`22908` by `Thomas Fan`_.
- |Fix| Fixes HTML repr for :class:`ensemble.StackingClassifier` and
:class:`ensemble.StackingRegressor`. :pr:`23097` by `Thomas Fan`_.
- |API| The attribute `loss_` of :class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor` has been deprecated and will be removed
in version 1.3.
:pr:`23079` by :user:`Christian Lorentzen <lorentzenchr>`.
- |API| Changed the default of `max_features` to 1.0 for
:class:`ensemble.RandomForestRegressor` and to `"sqrt"` for
:class:`ensemble.RandomForestClassifier`. Note that these give the same fit
results as before, but are much easier to understand. The old default value
`"auto"` has been deprecated and will be removed in version 1.3. The same
changes are also applied for :class:`ensemble.ExtraTreesRegressor` and
:class:`ensemble.ExtraTreesClassifier`.
:pr:`20803` by :user:`Brian Sun <bsun94>`.
- |Efficiency| Improve runtime performance of :class:`ensemble.IsolationForest`
by skipping repetitive input checks. :pr:`23149` by :user:`Zhehao Liu <MaxwellLZH>`.
:mod:`sklearn.feature_extraction`
.................................
- |Feature| :class:`feature_extraction.FeatureHasher` now supports PyPy.
:pr:`23023` by `Thomas Fan`_.
- |Fix| :class:`feature_extraction.FeatureHasher` now validates input parameters
in `transform` instead of `__init__`. :pr:`21573` by
:user:`Hannah Bohle <hhnnhh>` and :user:`Maren Westermann <marenwestermann>`.
- |Fix| :class:`feature_extraction.text.TfidfVectorizer` no longer creates
  a :class:`feature_extraction.text.TfidfTransformer` in `__init__`, as required
  by our API.
:pr:`21832` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.feature_selection`
................................
- |Feature| Added auto mode to :class:`feature_selection.SequentialFeatureSelector`.
If the argument `n_features_to_select` is `'auto'`, select features until the score
improvement does not exceed the argument `tol`. The default value of
`n_features_to_select` changed from `None` to `'warn'` in 1.1 and will become
`'auto'` in 1.3. `None` and `'warn'` will be removed in 1.3. :pr:`20145` by
:user:`murata-yu <murata-yu>`.
- |Feature| Added the ability to pass callables to the `max_features` parameter
of :class:`feature_selection.SelectFromModel`. Also introduced new attribute
`max_features_` which is inferred from `max_features` and the data during
`fit`. If `max_features` is an integer, then `max_features_ = max_features`.
If `max_features` is a callable, then `max_features_ = max_features(X)`.
:pr:`22356` by :user:`Meekail Zain <micky774>`.
- |Enhancement| :class:`feature_selection.GenericUnivariateSelect` preserves
float32 dtype. :pr:`18482` by :user:`Thierry Gameiro <titigmr>`
and :user:`Daniel Kharsa <aflatoune>` and :pr:`22370` by
:user:`Meekail Zain <micky774>`.
- |Enhancement| Add a parameter `force_finite` to
:func:`feature_selection.f_regression` and
  :func:`feature_selection.r_regression`. This parameter allows forcing the
  output to be finite when a feature or the target is constant, or when the
  feature and target are perfectly correlated (only for the F-statistic).
:pr:`17819` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.
- |Efficiency| Improve runtime performance of :func:`feature_selection.chi2`
with boolean arrays. :pr:`22235` by `Thomas Fan`_.
- |Efficiency| Reduced memory usage of :func:`feature_selection.chi2`.
:pr:`21837` by :user:`Louis Wagner <lrwagner>`.
:mod:`sklearn.gaussian_process`
...............................
- |Fix| `predict` and `sample_y` methods of
:class:`gaussian_process.GaussianProcessRegressor` now return
arrays of the correct shape in single-target and multi-target cases, and for
both `normalize_y=False` and `normalize_y=True`.
:pr:`22199` by :user:`Guillaume Lemaitre <glemaitre>`,
:user:`Aidar Shakerimoff <AidarShakerimoff>` and
:user:`Tenavi Nakamura-Zimmerer <Tenavi>`.
- |Fix| :class:`gaussian_process.GaussianProcessClassifier` raises
a more informative error if `CompoundKernel` is passed via `kernel`.
:pr:`22223` by :user:`MarcoM <marcozzxx810>`.
:mod:`sklearn.impute`
.....................
- |Enhancement| :class:`impute.SimpleImputer` now warns with feature names when
  features are skipped because they lack any observed values in the training set.
:pr:`21617` by :user:`Christian Ritter <chritter>`.
- |Enhancement| Added support for `pd.NA` in :class:`impute.SimpleImputer`.
:pr:`21114` by :user:`Ying Xiong <yxiong>`.
- |Enhancement| Adds :term:`get_feature_names_out` to
:class:`impute.SimpleImputer`, :class:`impute.KNNImputer`,
:class:`impute.IterativeImputer`, and :class:`impute.MissingIndicator`.
:pr:`21078` by `Thomas Fan`_.
- |API| The `verbose` parameter was deprecated for :class:`impute.SimpleImputer`.
A warning will always be raised upon the removal of empty columns.
:pr:`21448` by :user:`Oleh Kozynets <OlehKSS>` and
:user:`Christian Ritter <chritter>`.
:mod:`sklearn.inspection`
.........................
- |Feature| Add a display to plot the decision boundary of a classifier by
using the method :func:`inspection.DecisionBoundaryDisplay.from_estimator`.
:pr:`16061` by `Thomas Fan`_.
- |Enhancement| In
:meth:`inspection.PartialDependenceDisplay.from_estimator`, allow
`kind` to accept a list of strings to specify which type of
plot to draw for each feature interaction.
:pr:`19438` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :meth:`inspection.PartialDependenceDisplay.from_estimator`,
:meth:`inspection.PartialDependenceDisplay.plot`, and
`inspection.plot_partial_dependence` now support plotting centered
Individual Conditional Expectation (cICE) and centered PDP curves controlled
by setting the parameter `centered`.
:pr:`18310` by :user:`Johannes Elfner <JoElfner>` and
:user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.isotonic`
.......................
- |Enhancement| Adds :term:`get_feature_names_out` to
:class:`isotonic.IsotonicRegression`.
:pr:`22249` by `Thomas Fan`_.
:mod:`sklearn.kernel_approximation`
...................................
- |Enhancement| Adds :term:`get_feature_names_out` to
  :class:`kernel_approximation.AdditiveChi2Sampler`,
:class:`kernel_approximation.Nystroem`,
:class:`kernel_approximation.PolynomialCountSketch`,
:class:`kernel_approximation.RBFSampler`, and
:class:`kernel_approximation.SkewedChi2Sampler`.
:pr:`22137` and :pr:`22694` by `Thomas Fan`_.
:mod:`sklearn.linear_model`
...........................
- |Feature| :class:`linear_model.ElasticNet`, :class:`linear_model.ElasticNetCV`,
:class:`linear_model.Lasso` and :class:`linear_model.LassoCV` support `sample_weight`
for sparse input `X`.
:pr:`22808` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Feature| :class:`linear_model.Ridge` with `solver="lsqr"` now supports fitting
  sparse input with `fit_intercept=True`.
:pr:`22950` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| :class:`linear_model.QuantileRegressor` now supports sparse input
  for the `highs`-based solvers.
:pr:`21086` by :user:`Venkatachalam Natchiappan <venkyyuvy>`.
In addition, those solvers now use the CSC matrix right from the
beginning which speeds up fitting.
:pr:`22206` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| :class:`linear_model.LogisticRegression` is faster for
  ``solver="lbfgs"`` and ``solver="newton-cg"``, for binary and in particular for
multiclass problems thanks to the new private loss function module. In the multiclass
case, the memory consumption has also been reduced for these solvers as the target is
now label encoded (mapped to integers) instead of label binarized (one-hot encoded).
The more classes, the larger the benefit.
:pr:`21808`, :pr:`20567` and :pr:`21814` by
:user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| :class:`linear_model.GammaRegressor`,
:class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`
  are faster for ``solver="lbfgs"``.
:pr:`22548`, :pr:`21808` and :pr:`20567` by
:user:`Christian Lorentzen <lorentzenchr>`.
- |Enhancement| Rename parameter `base_estimator` to `estimator` in
:class:`linear_model.RANSACRegressor` to improve readability and consistency.
`base_estimator` is deprecated and will be removed in 1.3.
:pr:`22062` by :user:`Adrian Trujillo <trujillo9616>`.
- |Enhancement| :class:`linear_model.ElasticNet` and other linear model classes
  using coordinate descent now show error
messages when non-finite parameter weights are produced. :pr:`22148`
by :user:`Christian Ritter <chritter>` and :user:`Norbert Preining <norbusan>`.
- |Enhancement| :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso`
now raise consistent error messages when passed invalid values for `l1_ratio`,
`alpha`, `max_iter` and `tol`.
:pr:`22240` by :user:`Arturo Amor <ArturoAmorQ>`.
- |Enhancement| :class:`linear_model.BayesianRidge` and
:class:`linear_model.ARDRegression` now preserve float32 dtype. :pr:`9087` by
:user:`Arthur Imbert <Henley13>` and :pr:`22525` by :user:`Meekail Zain <micky774>`.
- |Enhancement| :class:`linear_model.RidgeClassifier` now supports
  multilabel classification.
:pr:`19689` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :class:`linear_model.RidgeCV` and
:class:`linear_model.RidgeClassifierCV` now raise consistent error message
when passed invalid values for `alphas`.
:pr:`21606` by :user:`Arturo Amor <ArturoAmorQ>`.
- |Enhancement| :class:`linear_model.Ridge` and :class:`linear_model.RidgeClassifier`
now raise consistent error message when passed invalid values for `alpha`,
`max_iter` and `tol`.
:pr:`21341` by :user:`Arturo Amor <ArturoAmorQ>`.
- |Enhancement| :func:`linear_model.orthogonal_mp_gram` preserves dtype for
`numpy.float32`.
:pr:`22002` by :user:`Takeshi Oura <takoika>`.
- |Fix| :class:`linear_model.LassoLarsIC` now correctly computes AIC
and BIC. An error is now raised when `n_features > n_samples` and
when the noise variance is not provided.
:pr:`21481` by :user:`Guillaume Lemaitre <glemaitre>` and
:user:`Andrés Babino <ababino>`.
- |Fix| :class:`linear_model.TheilSenRegressor` now validates input parameter
``max_subpopulation`` in `fit` instead of `__init__`.
:pr:`21767` by :user:`Maren Westermann <marenwestermann>`.
- |Fix| :class:`linear_model.ElasticNetCV` now produces correct
warning when `l1_ratio=0`.
:pr:`21724` by :user:`Yar Khine Phyo <yarkhinephyo>`.
- |Fix| :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` now set the `n_iter_` attribute
with a shape that respects the docstring and that is consistent with the shape
obtained when using the other solvers in the one-vs-rest setting. Previously,
it would record only the maximum of the number of iterations for each binary
sub-problem while now all of them are recorded. :pr:`21998` by
:user:`Olivier Grisel <ogrisel>`.
- |Fix| The property `family` of :class:`linear_model.TweedieRegressor` is not
validated in `__init__` anymore. Instead, this (private) property is deprecated in
:class:`linear_model.GammaRegressor`, :class:`linear_model.PoissonRegressor` and
:class:`linear_model.TweedieRegressor`, and will be removed in 1.3.
:pr:`22548` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Fix| The `coef_` and `intercept_` attributes of
:class:`linear_model.LinearRegression` are now correctly computed in the presence of
sample weights when the input is sparse.
:pr:`22891` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| The `coef_` and `intercept_` attributes of :class:`linear_model.Ridge` with
`solver="sparse_cg"` and `solver="lbfgs"` are now correctly computed in the presence
of sample weights when the input is sparse.
:pr:`22899` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier` now
  compute the validation error correctly when early stopping is enabled.
:pr:`23256` by :user:`Zhehao Liu <MaxwellLZH>`.
- |API| :class:`linear_model.LassoLarsIC` now exposes `noise_variance` as
a parameter in order to provide an estimate of the noise variance.
This is particularly relevant when `n_features > n_samples` and the
estimator of the noise variance cannot be computed.
:pr:`21481` by :user:`Guillaume Lemaitre <glemaitre>`.
:mod:`sklearn.manifold`
.......................
- |Feature| :class:`manifold.Isomap` now supports radius-based
neighbors via the `radius` argument.
:pr:`19794` by :user:`Zhehao Liu <MaxwellLZH>`.
- |Enhancement| :func:`manifold.spectral_embedding` and
:class:`manifold.SpectralEmbedding` supports `np.float32` dtype and will
preserve this dtype.
:pr:`21534` by :user:`Andrew Knyazev <lobpcg>`.
- |Enhancement| Adds :term:`get_feature_names_out` to :class:`manifold.Isomap`
and :class:`manifold.LocallyLinearEmbedding`. :pr:`22254` by `Thomas Fan`_.
- |Enhancement| Added `metric_params` to the :class:`manifold.TSNE` constructor for
  passing additional parameters to the distance metric used in optimization.
:pr:`21805` by :user:`Jeanne Dionisi <jeannedionisi>` and :pr:`22685` by
:user:`Meekail Zain <micky774>`.
- |Enhancement| :func:`manifold.trustworthiness` raises an error if
  `n_neighbors >= n_samples / 2` to ensure the function is well defined.
:pr:`18832` by :user:`Hong Shao Yang <hongshaoyang>` and :pr:`23033` by
:user:`Meekail Zain <micky774>`.
- |Fix| :func:`manifold.spectral_embedding` now uses Gaussian instead of
  uniform-on-[0, 1] random initial approximations to the eigenvectors in the
  `lobpcg` and `amg` eigen-solvers to improve their numerical stability.
:pr:`21565` by :user:`Andrew Knyazev <lobpcg>`.
:mod:`sklearn.metrics`
......................
- |Feature| :func:`metrics.r2_score` and :func:`metrics.explained_variance_score` have a
new `force_finite` parameter. Setting this parameter to `False` will return the
actual non-finite score in case of perfect predictions or constant `y_true`,
instead of the finite approximation (`1.0` and `0.0` respectively) currently
returned by default. :pr:`17266` by :user:`Sylvain Marié <smarie>`.
- |Feature| :func:`metrics.d2_pinball_score` and :func:`metrics.d2_absolute_error_score`
calculate the :math:`D^2` regression score for the pinball loss and the
absolute error respectively. :func:`metrics.d2_absolute_error_score` is a special case
of :func:`metrics.d2_pinball_score` with a fixed quantile parameter `alpha=0.5`
for ease of use and discovery. The :math:`D^2` scores are generalizations
of the `r2_score` and can be interpreted as the fraction of deviance explained.
:pr:`22118` by :user:`Ohad Michel <ohadmich>`.
- |Enhancement| :func:`metrics.top_k_accuracy_score` raises an improved error
message when `y_true` is binary and `y_score` is 2d. :pr:`22284` by `Thomas Fan`_.
- |Enhancement| :func:`metrics.roc_auc_score` now supports ``average=None``
  in the multiclass case when ``multi_class='ovr'``, which will return the score
per class. :pr:`19158` by :user:`Nicki Skafte <SkafteNicki>`.
- |Enhancement| Adds `im_kw` parameter to
  :meth:`metrics.ConfusionMatrixDisplay.from_estimator`,
:meth:`metrics.ConfusionMatrixDisplay.from_predictions`, and
:meth:`metrics.ConfusionMatrixDisplay.plot`. The `im_kw` parameter is passed
to the `matplotlib.pyplot.imshow` call when plotting the confusion matrix.
:pr:`20753` by `Thomas Fan`_.
- |Fix| :func:`metrics.silhouette_score` now supports integer input for precomputed
distances. :pr:`22108` by `Thomas Fan`_.
- |Fix| Fixed a bug in :func:`metrics.normalized_mutual_info_score` which could return
unbounded values. :pr:`22635` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
- |Fix| Fixes :func:`metrics.precision_recall_curve` and
:func:`metrics.average_precision_score` when true labels are all negative.
:pr:`19085` by :user:`Varun Agrawal <varunagrawal>`.
- |API| `metrics.SCORERS` is now deprecated and will be removed in 1.3. Please
use :func:`metrics.get_scorer_names` to retrieve the names of all available
scorers. :pr:`22866` by `Adrin Jalali`_.
- |API| Parameters ``sample_weight`` and ``multioutput`` of
:func:`metrics.mean_absolute_percentage_error` are now keyword-only, in accordance
with `SLEP009 <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep009/proposal.html>`_.
A deprecation cycle was introduced.
:pr:`21576` by :user:`Paul-Emile Dugnat <pedugnat>`.
- |API| The `"wminkowski"` metric of :class:`metrics.DistanceMetric` is deprecated
and will be removed in version 1.3. Instead the existing `"minkowski"` metric now takes
in an optional `w` parameter for weights. This deprecation aims at remaining consistent
with SciPy 1.8 convention. :pr:`21873` by :user:`Yar Khine Phyo <yarkhinephyo>`.
- |API| :class:`metrics.DistanceMetric` has been moved from
:mod:`sklearn.neighbors` to :mod:`sklearn.metrics`.
Using `neighbors.DistanceMetric` for imports is still valid for
backward compatibility, but this alias will be removed in 1.3.
:pr:`21177` by :user:`Julien Jerphanion <jjerphan>`.
:mod:`sklearn.mixture`
......................
- |Enhancement| :class:`mixture.GaussianMixture` and
:class:`mixture.BayesianGaussianMixture` can now be initialized using
k-means++ and random data points. :pr:`20408` by
:user:`Gordon Walsh <g-walsh>`, :user:`Alberto Ceballos<alceballosa>`
and :user:`Andres Rios<ariosramirez>`.
- |Fix| Fixed a bug in :class:`mixture.GaussianMixture` so that `precisions_cholesky_`
  is correctly initialized from a user-provided `precisions_init` by taking
  its square root.
:pr:`22058` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Fix| :class:`mixture.GaussianMixture` now normalizes `weights_` more safely,
preventing rounding errors when calling :meth:`mixture.GaussianMixture.sample` with
`n_components=1`.
:pr:`23034` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.model_selection`
..............................
- |Enhancement| It is now possible to pass `scoring="matthews_corrcoef"` to all
model selection tools with a `scoring` argument to use the Matthews
correlation coefficient (MCC).
:pr:`22203` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| An error is now raised during cross-validation when the fits for all
  the splits fail. Similarly, an error is raised during grid-search when the fits for
  all the models and all the splits fail.
:pr:`21026` by :user:`Loïc Estève <lesteve>`.
- |Fix| :class:`model_selection.GridSearchCV`,
:class:`model_selection.HalvingGridSearchCV`
now validate input parameters in `fit` instead of `__init__`.
:pr:`21880` by :user:`Mrinal Tyagi <MrinalTyagi>`.
- |Fix| :func:`model_selection.learning_curve` now supports `partial_fit`
with regressors. :pr:`22982` by `Thomas Fan`_.
:mod:`sklearn.multiclass`
.........................
- |Enhancement| :class:`multiclass.OneVsRestClassifier` now supports a `verbose`
parameter so progress on fitting can be seen.
:pr:`22508` by :user:`Chris Combs <combscCode>`.
- |Fix| :meth:`multiclass.OneVsOneClassifier.predict` returns correct predictions when
the inner classifier only has a :term:`predict_proba`. :pr:`22604` by `Thomas Fan`_.
:mod:`sklearn.neighbors`
........................
- |Enhancement| Adds :term:`get_feature_names_out` to
:class:`neighbors.RadiusNeighborsTransformer`,
:class:`neighbors.KNeighborsTransformer`
and :class:`neighbors.NeighborhoodComponentsAnalysis`.
:pr:`22212` by :user:`Meekail Zain <micky774>`.
- |Fix| :class:`neighbors.KernelDensity` now validates input parameters in `fit`
instead of `__init__`. :pr:`21430` by :user:`Desislava Vasileva <DessyVV>` and
:user:`Lucy Jimenez <LucyJimenez>`.
- |Fix| :meth:`neighbors.KNeighborsRegressor.predict` now works properly when
  given an array-like input if `KNeighborsRegressor` is first constructed with a
  callable passed to the `weights` parameter. :pr:`22687` by
  :user:`Meekail Zain <micky774>`.
:mod:`sklearn.neural_network`
.............................
- |Enhancement| :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor` show error
  messages when optimizers produce non-finite parameter weights. :pr:`22150`
  by :user:`Christian Ritter <chritter>` and :user:`Norbert Preining <norbusan>`.
- |Enhancement| Adds :term:`get_feature_names_out` to
:class:`neural_network.BernoulliRBM`. :pr:`22248` by `Thomas Fan`_.
:mod:`sklearn.pipeline`
.......................
- |Enhancement| Added support for "passthrough" in :class:`pipeline.FeatureUnion`.
Setting a transformer to "passthrough" will pass the features unchanged.
:pr:`20860` by :user:`Shubhraneel Pal <shubhraneel>`.
- |Fix| :class:`pipeline.Pipeline` now does not validate hyper-parameters in
`__init__` but in `.fit()`.
:pr:`21888` by :user:`iofall <iofall>` and :user:`Arisa Y. <arisayosh>`.
- |Fix| :class:`pipeline.FeatureUnion` does not validate hyper-parameters in
`__init__`. Validation is now handled in `.fit()` and `.fit_transform()`.
:pr:`21954` by :user:`iofall <iofall>` and :user:`Arisa Y. <arisayosh>`.
- |Fix| Defines `__sklearn_is_fitted__` in :class:`pipeline.FeatureUnion` to
return correct result with :func:`utils.validation.check_is_fitted`.
:pr:`22953` by :user:`randomgeek78 <randomgeek78>`.
:mod:`sklearn.preprocessing`
............................
- |Feature| :class:`preprocessing.OneHotEncoder` now supports grouping
infrequent categories into a single feature. Grouping infrequent categories
is enabled by specifying how to select infrequent categories with
`min_frequency` or `max_categories`. :pr:`16018` by `Thomas Fan`_.
- |Enhancement| Adds a `subsample` parameter to :class:`preprocessing.KBinsDiscretizer`.
This allows specifying a maximum number of samples to be used while fitting
the model. The option is only available when `strategy` is set to `quantile`.
:pr:`21445` by :user:`Felipe Bidu <fbidu>` and :user:`Amanda Dsouza <amy12xx>`.
- |Enhancement| Adds `encoded_missing_value` to :class:`preprocessing.OrdinalEncoder`
to configure the encoded value for missing data. :pr:`21988` by `Thomas Fan`_.
- |Enhancement| Added the `get_feature_names_out` method and a new parameter
`feature_names_out` to :class:`preprocessing.FunctionTransformer`. You can set
`feature_names_out` to 'one-to-one' to use the input features names as the
output feature names, or you can set it to a callable that returns the output
feature names. This is especially useful when the transformer changes the
  number of features. If `feature_names_out` is None (which is the default),
  then `get_feature_names_out` is not defined.
:pr:`21569` by :user:`Aurélien Geron <ageron>`.
- |Enhancement| Adds :term:`get_feature_names_out` to
:class:`preprocessing.Normalizer`,
:class:`preprocessing.KernelCenterer`,
:class:`preprocessing.OrdinalEncoder`, and
:class:`preprocessing.Binarizer`. :pr:`21079` by `Thomas Fan`_.
- |Fix| :class:`preprocessing.PowerTransformer` with `method='yeo-johnson'`
better supports significantly non-Gaussian data when searching for an optimal
lambda. :pr:`20653` by `Thomas Fan`_.
- |Fix| :class:`preprocessing.LabelBinarizer` now validates input parameters in
`fit` instead of `__init__`.
:pr:`21434` by :user:`Krum Arnaudov <krumeto>`.
- |Fix| :class:`preprocessing.FunctionTransformer` with `check_inverse=True`
  now provides an informative error message when input has mixed dtypes. :pr:`19916` by
  :user:`Zhehao Liu <MaxwellLZH>`.
- |Fix| :class:`preprocessing.KBinsDiscretizer` handles bin edges more consistently now.
:pr:`14975` by `Andreas Müller`_ and :pr:`22526` by :user:`Meekail Zain <micky774>`.
- |Fix| Adds :meth:`preprocessing.KBinsDiscretizer.get_feature_names_out` support when
`encode="ordinal"`. :pr:`22735` by `Thomas Fan`_.
:mod:`sklearn.random_projection`
................................
- |Enhancement| Adds an `inverse_transform` method and a `compute_inverse_transform`
parameter to :class:`random_projection.GaussianRandomProjection` and
:class:`random_projection.SparseRandomProjection`. When the parameter is set
to True, the pseudo-inverse of the components is computed during `fit` and stored as
`inverse_components_`. :pr:`21701` by :user:`Aurélien Geron <ageron>`.
- |Enhancement| :class:`random_projection.SparseRandomProjection` and
  :class:`random_projection.GaussianRandomProjection` now preserve dtype for
  `numpy.float32`. :pr:`22114` by :user:`Takeshi Oura <takoika>`.
- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the
:mod:`sklearn.random_projection` module:
:class:`random_projection.GaussianRandomProjection` and
:class:`random_projection.SparseRandomProjection`. :pr:`21330` by
:user:`Loïc Estève <lesteve>`.
:mod:`sklearn.svm`
..................
- |Enhancement| :class:`svm.OneClassSVM`, :class:`svm.NuSVC`,
:class:`svm.NuSVR`, :class:`svm.SVC` and :class:`svm.SVR` now expose
`n_iter_`, the number of iterations of the libsvm optimization routine.
:pr:`21408` by :user:`Juan Martín Loyola <jmloyola>`.
- |Enhancement| :class:`svm.SVR`, :class:`svm.SVC`, :class:`svm.NuSVR`,
  :class:`svm.OneClassSVM`, :class:`svm.NuSVC` now raise an error
  when the dual-gap estimation produces non-finite parameter weights.
  :pr:`22149` by :user:`Christian Ritter <chritter>` and
  :user:`Norbert Preining <norbusan>`.
- |Fix| :class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.SVC`,
:class:`svm.SVR`, :class:`svm.OneClassSVM` now validate input
parameters in `fit` instead of `__init__`.
  :pr:`21436` by :user:`Haidar Almubarak <Haidar13>`.
:mod:`sklearn.tree`
...................
- |Enhancement| :class:`tree.DecisionTreeClassifier` and
:class:`tree.ExtraTreeClassifier` have the new `criterion="log_loss"`, which is
equivalent to `criterion="entropy"`.
:pr:`23047` by :user:`Christian Lorentzen <lorentzenchr>`.
- |Fix| Fix a bug in the Poisson splitting criterion for
:class:`tree.DecisionTreeRegressor`.
:pr:`22191` by :user:`Christian Lorentzen <lorentzenchr>`.
- |API| Changed the default value of `max_features` to 1.0 for
:class:`tree.ExtraTreeRegressor` and to `"sqrt"` for
:class:`tree.ExtraTreeClassifier`, which will not change the fit result. The original
default value `"auto"` has been deprecated and will be removed in version 1.3.
Setting `max_features` to `"auto"` is also deprecated
for :class:`tree.DecisionTreeClassifier` and :class:`tree.DecisionTreeRegressor`.
:pr:`22476` by :user:`Zhehao Liu <MaxwellLZH>`.
:mod:`sklearn.utils`
....................
- |Enhancement| :func:`utils.check_array` and
:func:`utils.multiclass.type_of_target` now accept an `input_name` parameter to make
the error message more informative when passed invalid input data (e.g. with NaN or
infinite values).
:pr:`21219` by :user:`Olivier Grisel <ogrisel>`.
- |Enhancement| :func:`utils.check_array` returns a float
ndarray with `np.nan` when passed a `Float32` or `Float64` pandas extension
array with `pd.NA`. :pr:`21278` by `Thomas Fan`_.
- |Enhancement| :func:`utils.estimator_html_repr` shows a more helpful error
  message when running in a Jupyter notebook that is not trusted. :pr:`21316`
  by `Thomas Fan`_.
- |Enhancement| :func:`utils.estimator_html_repr` displays an arrow on the top
left corner of the HTML representation to show how the elements are
clickable. :pr:`21298` by `Thomas Fan`_.
- |Enhancement| :func:`utils.check_array` with `dtype=None` returns numeric
  arrays when passed in a pandas DataFrame with mixed dtypes. `dtype="numeric"`
  will also better infer the dtype when the DataFrame has mixed dtypes.
  :pr:`22237` by `Thomas Fan`_.
- |Enhancement| :func:`utils.check_scalar` now has better messages
when displaying the type. :pr:`22218` by `Thomas Fan`_.
- |Fix| Changes the error message of the `ValidationError` raised by
:func:`utils.check_X_y` when y is None so that it is compatible
with the `check_requires_y_none` estimator check. :pr:`22578` by
:user:`Claudio Salvatore Arcidiacono <ClaudioSalvatoreArcidiacono>`.
- |Fix| :func:`utils.class_weight.compute_class_weight` now only requires that
all classes in `y` have a weight in `class_weight`. An error is still raised
when a class is present in `y` but not in `class_weight`. :pr:`22595` by
`Thomas Fan`_.
- |Fix| :func:`utils.estimator_html_repr` has an improved visualization for nested
meta-estimators. :pr:`21310` by `Thomas Fan`_.
- |Fix| :func:`utils.check_scalar` raises an error when
`include_boundaries={"left", "right"}` and the boundaries are not set.
:pr:`22027` by :user:`Marie Lanternier <mlant>`.
- |Fix| :func:`utils.metaestimators.available_if` correctly returns a bound
  method that can be pickled. :pr:`23077` by `Thomas Fan`_.
- |API| :func:`utils.estimator_checks.check_estimator`'s argument is now called
`estimator` (previous name was `Estimator`). :pr:`22188` by
:user:`Mathurin Massias <mathurinm>`.
- |API| ``utils.metaestimators.if_delegate_has_method`` is deprecated and will be
removed in version 1.3. Use :func:`utils.metaestimators.available_if` instead.
:pr:`22830` by :user:`Jérémie du Boisberranger <jeremiedbb>`.
.. rubric:: Code and documentation contributors
Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 1.0, including:
2357juan, Abhishek Gupta, adamgonzo, Adam Li, adijohar, Aditya Kumawat, Aditya
Raghuwanshi, Aditya Singh, Adrian Trujillo Duron, Adrin Jalali, ahmadjubair33,
AJ Druck, aj-white, Alan Peixinho, Alberto Mario Ceballos-Arroyo, Alek
Lefebvre, Alex, Alexandr, Alexandre Gramfort, alexanmv, almeidayoel, Amanda
Dsouza, Aman Sharma, Amar pratap singh, Amit, amrcode, András Simon, Andreas
Grivas, Andreas Mueller, Andrew Knyazev, Andriy, Angus L'Herrou, Ankit Sharma,
Anne Ducout, Arisa, Arth, arthurmello, Arturo Amor, ArturoAmor, Atharva Patil,
aufarkari, Aurélien Geron, avm19, Ayan Bag, baam, Bardiya Ak, Behrouz B,
Ben3940, Benjamin Bossan, Bharat Raghunathan, Bijil Subhash, bmreiniger,
Brandon Truth, Brenden Kadota, Brian Sun, cdrig, Chalmer Lowe, Chiara Marmo,
Chitteti Srinath Reddy, Chloe-Agathe Azencott, Christian Lorentzen, Christian
Ritter, christopherlim98, Christoph T. Weidemann, Christos Aridas, Claudio
Salvatore Arcidiacono, combscCode, Daniela Fernandes, darioka, Darren Nguyen,
Dave Eargle, David Gilbertson, David Poznik, Dea María Léon, Dennis Osei,
DessyVV, Dev514, Dimitri Papadopoulos Orfanos, Diwakar Gupta, Dr. Felix M.
Riese, drskd, Emiko Sano, Emmanouil Gionanidis, EricEllwanger, Erich Schubert,
Eric Larson, Eric Ndirangu, ErmolaevPA, Estefania Barreto-Ojeda, eyast, Fatima
GASMI, Federico Luna, Felix Glushchenkov, fkaren27, Fortune Uwha, FPGAwesome,
francoisgoupil, Frans Larsson, ftorres16, Gabor Berei, Gabor Kertesz, Gabriel
Stefanini Vicente, Gabriel S Vicente, Gael Varoquaux, GAURAV CHOUDHARY,
Gauthier I, genvalen, Geoffrey-Paris, Giancarlo Pablo, glennfrutiz, gpapadok,
Guillaume Lemaitre, Guillermo Tomás Fernández Martín, Gustavo Oliveira, Haidar
Almubarak, Hannah Bohle, Hansin Ahuja, Haoyin Xu, Haya, Helder Geovane Gomes de
Lima, henrymooresc, Hideaki Imamura, Himanshu Kumar, Hind-M, hmasdev, hvassard,
i-aki-y, iasoon, Inclusive Coding Bot, Ingela, iofall, Ishan Kumar, Jack Liu,
Jake Cowton, jalexand3r, J Alexander, Jauhar, Jaya Surya Kommireddy, Jay
Stanley, Jeff Hale, je-kr, JElfner, Jenny Vo, Jérémie du Boisberranger, Jihane,
Jirka Borovec, Joel Nothman, Jon Haitz Legarreta Gorroño, Jordan Silke, Jorge
Ciprián, Jorge Loayza, Joseph Chazalon, Joseph Schwartz-Messing, Jovan
Stojanovic, JSchuerz, Juan Carlos Alfaro Jiménez, Juan Martin Loyola, Julien
Jerphanion, katotten, Kaushik Roy Chowdhury, Ken4git, Kenneth Prabakaran,
kernc, Kevin Doucet, KimAYoung, Koushik Joshi, Kranthi Sedamaki, krishna kumar,
krumetoft, lesnee, Lisa Casino, Logan Thomas, Loic Esteve, Louis Wagner,
LucieClair, Lucy Liu, Luiz Eduardo Amaral, Magali, MaggieChege, Mai,
mandjevant, Mandy Gu, Manimaran, MarcoM, Marco Wurps, Maren Westermann, Maria
Boerner, MarieS-WiMLDS, Martel Corentin, martin-kokos, mathurinm, Matías,
matjansen, Matteo Francia, Maxwell, Meekail Zain, Megabyte, Mehrdad
Moradizadeh, melemo2, Michael I Chen, michalkrawczyk, Micky774, milana2,
millawell, Ming-Yang Ho, Mitzi, miwojc, Mizuki, mlant, Mohamed Haseeb, Mohit
Sharma, Moonkyung94, mpoemsl, MrinalTyagi, Mr. Leu, msabatier, murata-yu, N,
Nadirhan Şahin, Naipawat Poolsawat, NartayXD, nastegiano, nathansquan,
nat-salt, Nicki Skafte Detlefsen, Nicolas Hug, Niket Jain, Nikhil Suresh,
Nikita Titov, Nikolay Kondratyev, Ohad Michel, Oleksandr Husak, Olivier Grisel,
partev, Patrick Ferreira, Paul, pelennor, PierreAttard, Piet Brömmel, Pieter
Gijsbers, Pinky, poloso, Pramod Anantharam, puhuk, Purna Chandra Mansingh,
QuadV, Rahil Parikh, Randall Boyes, randomgeek78, Raz Hoshia, Reshama Shaikh,
Ricardo Ferreira, Richard Taylor, Rileran, Rishabh, Robin Thibaut, Rocco Meli,
Roman Feldbauer, Roman Yurchak, Ross Barnowski, rsnegrin, Sachin Yadav,
sakinaOuisrani, Sam Adam Day, Sanjay Marreddi, Sebastian Pujalte, SEELE, SELEE,
Seyedsaman (Sam) Emami, ShanDeng123, Shao Yang Hong, sharmadharmpal,
shaymerNaturalint, Shuangchi He, Shubhraneel Pal, siavrez, slishak, Smile,
spikebh, sply88, Srinath Kailasa, Stéphane Collot, Sultan Orazbayev, Sumit
Saha, Sven Eschlbeck, Sven Stehle, Swapnil Jha, Sylvain Marié, Takeshi Oura,
Tamires Santana, Tenavi, teunpe, Theis Ferré Hjortkjær, Thiruvenkadam, Thomas
J. Fan, t-jakubek, toastedyeast, Tom Dupré la Tour, Tom McTiernan, TONY GEORGE,
Tyler Martin, Tyler Reddy, Udit Gupta, Ugo Marchand, Varun Agrawal,
Venkatachalam N, Vera Komeyer, victoirelouis, Vikas Vishwakarma, Vikrant
khedkar, Vladimir Chernyy, Vladimir Kim, WeijiaDu, Xiao Yuan, Yar Khine Phyo,
Ying Xiong, yiyangq, Yosshi999, Yuki Koyama, Zach Deane-Mayer, Zeel B Patel,
zempleni, zhenfisher, 赵丰 (Zhao Feng) | scikit-learn | include contributors rst currentmodule sklearn release notes 1 1 Version 1 1 For a short description of the main highlights of the release please refer to ref sphx glr auto examples release highlights plot release highlights 1 1 0 py include changelog legend inc changes 1 1 3 Version 1 1 3 October 2022 This bugfix release only includes fixes for compatibility with the latest SciPy release 1 9 2 Notable changes include Fix Include msvcp140 dll in the scikit learn wheels since it has been removed in the latest SciPy wheels pr 24631 by user Chiara Marmo cmarmo Enhancement Create wheels for Python 3 11 pr 24446 by user Chiara Marmo cmarmo Other bug fixes will be available in the next 1 2 release which will be released in the coming weeks Note that support for 32 bit Python on Windows has been dropped in this release This is due to the fact that SciPy 1 9 2 also dropped the support for that platform Windows users are advised to install the 64 bit version of Python instead changes 1 1 2 Version 1 1 2 August 2022 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Fix class manifold TSNE now throws a ValueError when fit with perplexity n samples to ensure mathematical correctness of the algorithm pr 10805 by user Mathias Andersen MrMathias and pr 23471 by user Meekail Zain micky774 Changelog Fix A default HTML representation is shown for meta estimators with invalid parameters pr 24015 by Thomas Fan Fix Add support for F contiguous arrays for estimators and functions whose back end have been changed in 1 1 pr 23990 by user Julien Jerphanion jjerphan Fix Wheels are now available for MacOS 10 9 and greater pr 23833 by Thomas Fan mod sklearn base Fix The get params method of the class base BaseEstimator class 
now supports estimators with type type params that have the get params method pr 24017 by user Henry Sorsky hsorsky mod sklearn cluster Fix Fixed a bug in class cluster Birch that could trigger an error when splitting a node if there are duplicates in the dataset pr 23395 by user J r mie du Boisberranger jeremiedbb mod sklearn feature selection Fix class feature selection SelectFromModel defaults to selection threshold 1e 5 when the estimator is either class linear model ElasticNet or class linear model ElasticNetCV with l1 ratio equals 1 or class linear model LassoCV pr 23636 by user Hao Chun Chang haochunchang mod sklearn impute Fix class impute SimpleImputer uses the dtype seen in fit for transform when the dtype is object pr 22063 by Thomas Fan mod sklearn linear model Fix Use dtype aware tolerances for the validation of gram matrices passed by users or precomputed pr 22059 by user Malte S Kurz MalteKurz Fix Fixed an error in class linear model LogisticRegression with solver newton cg fit intercept True and a single feature pr 23608 by Tom Dupre la Tour mod sklearn manifold Fix class manifold TSNE now throws a ValueError when fit with perplexity n samples to ensure mathematical correctness of the algorithm pr 10805 by user Mathias Andersen MrMathias and pr 23471 by user Meekail Zain micky774 mod sklearn metrics Fix Fixed error message of class metrics coverage error for 1D array input pr 23548 by user Hao Chun Chang haochunchang mod sklearn preprocessing Fix meth preprocessing OrdinalEncoder inverse transform correctly handles use cases where unknown value or encoded missing value is nan pr 24087 by Thomas Fan mod sklearn tree Fix Fixed invalid memory access bug during fit in class tree DecisionTreeRegressor and class tree DecisionTreeClassifier pr 23273 by Thomas Fan changes 1 1 1 Version 1 1 1 May 2022 Changelog Enhancement The error message is improved when importing class model selection HalvingGridSearchCV class model selection HalvingRandomSearchCV or 
class impute IterativeImputer without importing the experimental flag pr 23194 by Thomas Fan Enhancement Added an extension in doc conf py to automatically generate the list of estimators that handle NaN values pr 23198 by user Lise Kleiber lisekleiber user Zhehao Liu MaxwellLZH and user Chiara Marmo cmarmo mod sklearn datasets Fix Avoid timeouts in func datasets fetch openml by not passing a timeout argument pr 23358 by user Lo c Est ve lesteve mod sklearn decomposition Fix Avoid spurious warning in class decomposition IncrementalPCA when n samples n components pr 23264 by user Lucy Liu lucyleeow mod sklearn feature selection Fix The partial fit method of class feature selection SelectFromModel now conducts validation for max features and feature names in parameters pr 23299 by user Long Bao lorentzbao mod sklearn metrics Fix Fixes func metrics precision recall curve to compute precision recall at 100 recall The Precision Recall curve now displays the last point corresponding to a classifier that always predicts the positive class recall 100 and precision class balance pr 23214 by user St phane Collot stephanecollot and user Max Baak mbaak mod sklearn preprocessing Fix class preprocessing PolynomialFeatures with degree equal to 0 will raise error when include bias is set to False and outputs a single constant array when include bias is set to True pr 23370 by user Zhehao Liu MaxwellLZH mod sklearn tree Fix Fixes performance regression with low cardinality features for class tree DecisionTreeClassifier class tree DecisionTreeRegressor class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor pr 23410 by user Lo c Est ve lesteve mod sklearn utils Fix func utils class weight compute sample weight now works with sparse y pr 23115 by user kernc kernc changes 1 1 Version 1 1 0 May 2022 Minimal dependencies Version 1 1 0 of scikit learn requires python 3 8 numpy 1 17 3 
and scipy 1 3 2 Optional minimal dependency is matplotlib 3 1 2 Changed models The following estimators and functions when fit with the same data and parameters may produce different models from the previous version This often occurs due to changes in the modelling logic bug fixes or enhancements or in random sampling procedures Efficiency class cluster KMeans now defaults to algorithm lloyd instead of algorithm auto which was equivalent to algorithm elkan Lloyd s algorithm and Elkan s algorithm converge to the same solution up to numerical rounding errors but in general Lloyd s algorithm uses much less memory and it is often faster Efficiency Fitting class tree DecisionTreeClassifier class tree DecisionTreeRegressor class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor is on average 15 faster than in previous versions thanks to a new sort algorithm to find the best split Models might be different because of a different handling of splits with tied criterion values both the old and the new sorting algorithm are unstable sorting algorithms pr 22868 by Thomas Fan Fix The eigenvectors initialization for class cluster SpectralClustering and class manifold SpectralEmbedding now samples from a Gaussian when using the amg or lobpcg solver This change improves numerical stability of the solver but may result in a different model Fix func feature selection f regression and func feature selection r regression will now returned finite score by default instead of np nan and np inf for some corner case You can use force finite False if you really want to get non finite values and keep the old behavior Fix Panda s DataFrames with all non string columns such as a MultiIndex no longer warns when passed into an Estimator Estimators will continue to ignore the column names in DataFrames with non string columns For feature names in to be defined columns must be all strings pr 
22410 by Thomas Fan Fix class preprocessing KBinsDiscretizer changed handling of bin edges slightly which might result in a different encoding with the same data Fix func calibration calibration curve changed handling of bin edges slightly which might result in a different output curve given the same data Fix class discriminant analysis LinearDiscriminantAnalysis now uses the correct variance scaling coefficient which may result in different model behavior Fix meth feature selection SelectFromModel fit and meth feature selection SelectFromModel partial fit can now be called with prefit True estimators will be a deep copy of estimator when prefit True pr 23271 by user Guillaume Lemaitre glemaitre Changelog Entries should be grouped by module in alphabetic order and prefixed with one of the labels MajorFeature Feature Efficiency Enhancement Fix or API see whats new rst for descriptions Entries should be ordered by those labels e g Fix after Efficiency Changes not specific to a module should be listed under Multiple Modules or Miscellaneous Entries should end with pr 123456 by user Joe Bloggs joeongithub where 123456 is the pull request number not the issue number Efficiency Low level routines for reductions on pairwise distances for dense float64 datasets have been refactored The following functions and estimators now benefit from improved performances in terms of hardware scalability and speed ups func sklearn metrics pairwise distances argmin func sklearn metrics pairwise distances argmin min class sklearn cluster AffinityPropagation class sklearn cluster Birch class sklearn cluster MeanShift class sklearn cluster OPTICS class sklearn cluster SpectralClustering func sklearn feature selection mutual info regression class sklearn neighbors KNeighborsClassifier class sklearn neighbors KNeighborsRegressor class sklearn neighbors RadiusNeighborsClassifier class sklearn neighbors RadiusNeighborsRegressor class sklearn neighbors LocalOutlierFactor class sklearn neighbors 
NearestNeighbors class sklearn manifold Isomap class sklearn manifold LocallyLinearEmbedding class sklearn manifold TSNE func sklearn manifold trustworthiness class sklearn semi supervised LabelPropagation class sklearn semi supervised LabelSpreading For instance class sklearn neighbors NearestNeighbors kneighbors and class sklearn neighbors NearestNeighbors radius neighbors can respectively be up to 20 and 5 faster than previously on a laptop Moreover implementations of those two algorithms are now suitable for machine with many cores making them usable for datasets consisting of millions of samples pr 21987 pr 22064 pr 22065 pr 22288 and pr 22320 by user Julien Jerphanion jjerphan Enhancement All scikit learn models now generate a more informative error message when some input contains unexpected NaN or infinite values In particular the message contains the input name X y or sample weight and if an unexpected NaN value is found in X the error message suggests potential solutions pr 21219 by user Olivier Grisel ogrisel Enhancement All scikit learn models now generate a more informative error message when setting invalid hyper parameters with set params pr 21542 by user Olivier Grisel ogrisel Enhancement Removes random unique identifiers in the HTML representation With this change jupyter notebooks are reproducible as long as the cells are run in the same order pr 23098 by Thomas Fan Fix Estimators with non deterministic tag set to True will skip both check methods sample order invariance and check methods subset invariance tests pr 22318 by user Zhehao Liu MaxwellLZH API The option for using the log loss aka binomial or multinomial deviance via the loss parameters was made more consistent The preferred way is by setting the value to log loss Old option names are still valid and produce the same models but are deprecated and will be removed in version 1 3 For class ensemble GradientBoostingClassifier the loss parameter name deviance is deprecated in favor of the 
new name log loss which is now the default pr 23036 by user Christian Lorentzen lorentzenchr For class ensemble HistGradientBoostingClassifier the loss parameter names auto binary crossentropy and categorical crossentropy are deprecated in favor of the new name log loss which is now the default pr 23040 by user Christian Lorentzen lorentzenchr For class linear model SGDClassifier the loss parameter name log is deprecated in favor of the new name log loss pr 23046 by user Christian Lorentzen lorentzenchr API Rich html representation of estimators is now enabled by default in Jupyter notebooks It can be deactivated by setting display text in func sklearn set config pr 22856 by user J r mie du Boisberranger jeremiedbb mod sklearn calibration Enhancement func calibration calibration curve accepts a parameter pos label to specify the positive class label pr 21032 by user Guillaume Lemaitre glemaitre Enhancement meth calibration CalibratedClassifierCV fit now supports passing fit params which are routed to the base estimator pr 18170 by user Benjamin Bossan BenjaminBossan Enhancement class calibration CalibrationDisplay accepts a parameter pos label to add this information to the plot pr 21038 by user Guillaume Lemaitre glemaitre Fix func calibration calibration curve handles bin edges more consistently now pr 14975 by Andreas M ller and pr 22526 by user Meekail Zain micky774 API func calibration calibration curve s normalize parameter is now deprecated and will be removed in version 1 3 It is recommended that a proper probability i e a classifier s term predict proba positive class is used for y prob pr 23095 by user Jordan Silke jsilke mod sklearn cluster MajorFeature class cluster BisectingKMeans introducing Bisecting K Means algorithm pr 20031 by user Michal Krawczyk michalkrawczyk user Tom Dupre la Tour TomDLT and user J r mie du Boisberranger jeremiedbb Enhancement class cluster SpectralClustering and func cluster spectral clustering now include the new cluster qr 
method that clusters samples in the embedding space as an alternative to the existing kmeans and discrete methods See func cluster spectral clustering for more details pr 21148 by user Andrew Knyazev lobpcg Enhancement Adds term get feature names out to class cluster Birch class cluster FeatureAgglomeration class cluster KMeans class cluster MiniBatchKMeans pr 22255 by Thomas Fan Enhancement class cluster SpectralClustering now raises consistent error messages when passed invalid values for n clusters n init gamma n neighbors eigen tol or degree pr 21881 by user Hugo Vassard hvassard Enhancement class cluster AffinityPropagation now returns cluster centers and labels if they exist even if the model has not fully converged When returning these potentially degenerate cluster centers and labels a new warning message is shown If no cluster centers were constructed then the cluster centers remain an empty list with labels set to 1 and the original warning message is shown pr 22217 by user Meekail Zain micky774 Efficiency In class cluster KMeans the default algorithm is now lloyd which is the full classical EM style algorithm Both auto and full are deprecated and will be removed in version 1 3 They are now aliases for lloyd The previous default was auto which relied on Elkan s algorithm Lloyd s algorithm uses less memory than Elkan s it is faster on many datasets and its results are identical hence the change pr 21735 by user Aur lien Geron ageron Fix class cluster KMeans s init parameter now properly supports array like input and NumPy string scalars pr 22154 by Thomas Fan mod sklearn compose Fix class compose ColumnTransformer now removes validation errors from init and set params methods pr 22537 by user iofall iofall and user Arisa Y arisayosh Fix term get feature names out functionality in class compose ColumnTransformer was broken when columns were specified using slice This is fixed in pr 22775 and pr 22913 by user randomgeek78 randomgeek78 mod sklearn covariance 
Fix class covariance GraphicalLassoCV now accepts NumPy array for the parameter alphas pr 22493 by user Guillaume Lemaitre glemaitre mod sklearn cross decomposition Enhancement the inverse transform method of class cross decomposition PLSRegression class cross decomposition PLSCanonical and class cross decomposition CCA now allows reconstruction of a X target when a Y parameter is given pr 19680 by user Robin Thibaut robinthibaut Enhancement Adds term get feature names out to all transformers in the mod sklearn cross decomposition module class cross decomposition CCA class cross decomposition PLSSVD class cross decomposition PLSRegression and class cross decomposition PLSCanonical pr 22119 by Thomas Fan Fix The shape of the term coef attribute of class cross decomposition CCA class cross decomposition PLSCanonical and class cross decomposition PLSRegression will change in version 1 3 from n features n targets to n targets n features to be consistent with other linear models and to make it work with interface expecting a specific shape for coef e g class feature selection RFE pr 22016 by user Guillaume Lemaitre glemaitre API add the fitted attribute intercept to class cross decomposition PLSCanonical class cross decomposition PLSRegression and class cross decomposition CCA The method predict is indeed equivalent to Y X coef intercept pr 22015 by user Guillaume Lemaitre glemaitre mod sklearn datasets Feature func datasets load files now accepts a ignore list and an allow list based on file extensions pr 19747 by user Tony Attalla tonyattalla and pr 22498 by user Meekail Zain micky774 Enhancement func datasets make swiss roll now supports the optional argument hole when set to True it returns the swiss hole dataset pr 21482 by user Sebastian Pujalte pujaltes Enhancement func datasets make blobs no longer copies data during the generation process therefore uses less memory pr 22412 by user Zhehao Liu MaxwellLZH Enhancement func datasets load diabetes now accepts the 
parameter scaled to allow loading unscaled data. The scaled version of this dataset is now computed from the unscaled data and can produce slightly different results than in previous versions (within a 1e-4 absolute tolerance). pr 16605 by user Mandy Gu happilyeverafter95. Enhancement func datasets fetch openml now has two optional arguments n retries and delay. By default, func datasets fetch openml will retry 3 times in case of a network failure, with a delay between each try. pr 21901 by user Rileran rileran. Fix func datasets fetch covtype is now concurrent-safe: data is downloaded to a temporary directory before being moved to the data directory. pr 23113 by user Ilion Beyst iasoon. API func datasets make sparse coded signal now accepts a parameter data transposed to explicitly specify the shape of matrix X. The default behavior True is to return a transposed matrix X corresponding to a (n features, n samples) shape. The default value will change to False in version 1 3. pr 21425 by user Gabriel Stefanini Vicente g4brielvs. mod sklearn decomposition. MajorFeature Added a new estimator class decomposition MiniBatchNMF. It is a faster but less accurate version of non-negative matrix factorization, better suited for large datasets. pr 16948 by user Chiara Marmo cmarmo, user Patricio Cerda pcerda and user Jérémie du Boisberranger jeremiedbb. Enhancement func decomposition dict learning, func decomposition dict learning online and func decomposition sparse encode preserve dtype for numpy float32. class decomposition DictionaryLearning, class decomposition MiniBatchDictionaryLearning and class decomposition SparseCoder preserve dtype for numpy float32. pr 22002 by user Takeshi Oura takoika. Enhancement class decomposition PCA exposes a parameter n oversamples to tune func utils extmath randomized svd and get accurate results when the number of features is large. pr 21109 by user Smile x shadow man. Enhancement The class decomposition MiniBatchDictionaryLearning and func decomposition dict learning
online have been refactored and now have a stopping criterion based on a small change of the dictionary or objective function controlled by the new max iter tol and max no improvement parameters In addition some of their parameters and attributes are deprecated the n iter parameter of both is deprecated Use max iter instead the iter offset return inner stats inner stats and return n iter parameters of func decomposition dict learning online serve internal purpose and are deprecated the inner stats iter offset and random state attributes of class decomposition MiniBatchDictionaryLearning serve internal purpose and are deprecated the default value of the batch size parameter of both will change from 3 to 256 in version 1 3 pr 18975 by user J r mie du Boisberranger jeremiedbb Enhancement class decomposition SparsePCA and class decomposition MiniBatchSparsePCA preserve dtype for numpy float32 pr 22111 by user Takeshi Oura takoika Enhancement class decomposition TruncatedSVD now allows n components n features if algorithm randomized pr 22181 by user Zach Deane Mayer zachmayer Enhancement Adds term get feature names out to all transformers in the mod sklearn decomposition module class decomposition DictionaryLearning class decomposition FactorAnalysis class decomposition FastICA class decomposition IncrementalPCA class decomposition KernelPCA class decomposition LatentDirichletAllocation class decomposition MiniBatchDictionaryLearning class decomposition MiniBatchSparsePCA class decomposition NMF class decomposition PCA class decomposition SparsePCA and class decomposition TruncatedSVD pr 21334 by Thomas Fan Enhancement class decomposition TruncatedSVD exposes the parameter n oversamples and power iteration normalizer to tune func utils extmath randomized svd and get accurate results when the number of features is large the rank of the matrix is high or other features of the matrix make low rank approximation difficult pr 21705 by user Jay S Stanley III stanleyjs 
Enhancement class decomposition PCA exposes the parameter power iteration normalizer to tune func utils extmath randomized svd and get more accurate results when low rank approximation is difficult pr 21705 by user Jay S Stanley III stanleyjs Fix class decomposition FastICA now validates input parameters in fit instead of init pr 21432 by user Hannah Bohle hhnnhh and user Maren Westermann marenwestermann Fix class decomposition FastICA now accepts np float32 data without silent upcasting The dtype is preserved by fit and fit transform and the main fitted attributes use a dtype of the same precision as the training data pr 22806 by user Jihane Bennis JihaneBennis and user Olivier Grisel ogrisel Fix class decomposition FactorAnalysis now validates input parameters in fit instead of init pr 21713 by user Haya HayaAlmutairi and user Krum Arnaudov krumeto Fix class decomposition KernelPCA now validates input parameters in fit instead of init pr 21567 by user Maggie Chege MaggieChege Fix class decomposition PCA and class decomposition IncrementalPCA more safely calculate precision using the inverse of the covariance matrix if self noise variance is zero pr 22300 by user Meekail Zain micky774 and pr 15948 by user sysuresh Fix Greatly reduced peak memory usage in class decomposition PCA when calling fit or fit transform pr 22553 by user Meekail Zain micky774 API func decomposition FastICA now supports unit variance for whitening The default value of its whiten argument will change from True which behaves like arbitrary variance to unit variance in version 1 3 pr 19490 by user Facundo Ferrin fferrin and user Julien Jerphanion jjerphan mod sklearn discriminant analysis Enhancement Adds term get feature names out to class discriminant analysis LinearDiscriminantAnalysis pr 22120 by Thomas Fan Fix class discriminant analysis LinearDiscriminantAnalysis now uses the correct variance scaling coefficient which may result in different model behavior pr 15984 by user Okon Samuel 
OkonSamuel and pr 22696 by user Meekail Zain micky774 mod sklearn dummy Fix class dummy DummyRegressor no longer overrides the constant parameter during fit pr 22486 by Thomas Fan mod sklearn ensemble MajorFeature Added additional option loss quantile to class ensemble HistGradientBoostingRegressor for modelling quantiles The quantile level can be specified with the new parameter quantile pr 21800 and pr 20567 by user Christian Lorentzen lorentzenchr Efficiency fit of class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor now calls func utils check array with parameter force all finite False for non initial warm start runs as it has already been checked before pr 22159 by user Geoffrey Paris Geoffrey Paris Enhancement class ensemble HistGradientBoostingClassifier is faster for binary and in particular for multiclass problems thanks to the new private loss function module pr 20811 pr 20567 and pr 21814 by user Christian Lorentzen lorentzenchr Enhancement Adds support to use pre fit models with cv prefit in class ensemble StackingClassifier and class ensemble StackingRegressor pr 16748 by user Siqi He siqi he and pr 22215 by user Meekail Zain micky774 Enhancement class ensemble RandomForestClassifier and class ensemble ExtraTreesClassifier have the new criterion log loss which is equivalent to criterion entropy pr 23047 by user Christian Lorentzen lorentzenchr Enhancement Adds term get feature names out to class ensemble VotingClassifier class ensemble VotingRegressor class ensemble StackingClassifier and class ensemble StackingRegressor pr 22695 and pr 22697 by Thomas Fan Enhancement class ensemble RandomTreesEmbedding now has an informative term get feature names out function that includes both tree index and leaf index in the output feature names pr 21762 by user Zhehao Liu MaxwellLZH and Thomas Fan Efficiency Fitting a class ensemble RandomForestClassifier class ensemble RandomForestRegressor class ensemble ExtraTreesClassifier 
class ensemble ExtraTreesRegressor and class ensemble RandomTreesEmbedding is now faster in a multiprocessing setting especially for subsequent fits with warm start enabled pr 22106 by user Pieter Gijsbers PGijsbers Fix Change the parameter validation fraction in class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor so that an error is raised if anything other than a float is passed in as an argument pr 21632 by user Genesis Valencia genvalen Fix Removed a potential source of CPU oversubscription in class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor when CPU resource usage is limited for instance using cgroups quota in a docker container pr 22566 by user J r mie du Boisberranger jeremiedbb Fix class ensemble HistGradientBoostingClassifier and class ensemble HistGradientBoostingRegressor no longer warns when fitting on a pandas DataFrame with a non default scoring parameter and early stopping enabled pr 22908 by Thomas Fan Fix Fixes HTML repr for class ensemble StackingClassifier and class ensemble StackingRegressor pr 23097 by Thomas Fan API The attribute loss of class ensemble GradientBoostingClassifier and class ensemble GradientBoostingRegressor has been deprecated and will be removed in version 1 3 pr 23079 by user Christian Lorentzen lorentzenchr API Changed the default of max features to 1 0 for class ensemble RandomForestRegressor and to sqrt for class ensemble RandomForestClassifier Note that these give the same fit results as before but are much easier to understand The old default value auto has been deprecated and will be removed in version 1 3 The same changes are also applied for class ensemble ExtraTreesRegressor and class ensemble ExtraTreesClassifier pr 20803 by user Brian Sun bsun94 Efficiency Improve runtime performance of class ensemble IsolationForest by skipping repetitive input checks pr 23149 by user Zhehao Liu MaxwellLZH mod sklearn feature extraction Feature class 
feature extraction FeatureHasher now supports PyPy. pr 23023 by Thomas Fan. Fix class feature extraction FeatureHasher now validates input parameters in transform instead of init. pr 21573 by user Hannah Bohle hhnnhh and user Maren Westermann marenwestermann. Fix class feature extraction text TfidfVectorizer now does not create a class feature extraction text TfidfTransformer at init, as required by our API. pr 21832 by user Guillaume Lemaitre glemaitre. mod sklearn feature selection. Feature Added auto mode to class feature selection SequentialFeatureSelector. If the argument n features to select is auto, select features until the score improvement does not exceed the argument tol. The default value of n features to select changed from None to warn in 1 1 and will become auto in 1 3. None and warn will be removed in 1 3. pr 20145 by user murata yu murata yu. Feature Added the ability to pass callables to the max features parameter of class feature selection SelectFromModel. Also introduced the new attribute max_features_, which is inferred from max features and the data during fit. If max features is an integer, then max_features_ = max_features. If max features is a callable, then max_features_ = max_features(X). pr 22356 by user Meekail Zain micky774. Enhancement class feature selection GenericUnivariateSelect preserves float32 dtype. pr 18482 by user Thierry Gameiro titigmr and user Daniel Kharsa aflatoune, and pr 22370 by user Meekail Zain micky774. Enhancement Add a parameter force finite to func feature selection f regression and func feature selection r regression. This parameter allows forcing the output to be finite when a feature or the target is constant, or when the feature and target are perfectly correlated (only for the F statistic). pr 17819 by user Juan Carlos Alfaro Jiménez alfaro96. Efficiency Improve runtime performance of func feature selection chi2 with boolean arrays. pr 22235 by Thomas Fan. Efficiency Reduced memory usage of func feature selection chi2. pr 21837 by user
Louis Wagner lrwagner. mod sklearn gaussian process. Fix predict and sample y methods of class gaussian process GaussianProcessRegressor now return arrays of the correct shape in single target and multi target cases, for both normalize y False and normalize y True. pr 22199 by user Guillaume Lemaitre glemaitre, user Aidar Shakerimoff AidarShakerimoff and user Tenavi Nakamura Zimmerer Tenavi. Fix class gaussian process GaussianProcessClassifier raises a more informative error if CompoundKernel is passed via kernel. pr 22223 by user MarcoM marcozzxx810. mod sklearn impute. Enhancement class impute SimpleImputer now warns with feature names when features are skipped due to the lack of any observed values in the training set. pr 21617 by user Christian Ritter chritter. Enhancement Added support for pd NA in class impute SimpleImputer. pr 21114 by user Ying Xiong yxiong. Enhancement Adds term get feature names out to class impute SimpleImputer, class impute KNNImputer, class impute IterativeImputer and class impute MissingIndicator. pr 21078 by Thomas Fan. API The verbose parameter was deprecated for class impute SimpleImputer. A warning will always be raised upon the removal of empty columns. pr 21448 by user Oleh Kozynets OlehKSS and user Christian Ritter chritter. mod sklearn inspection. Feature Add a display to plot the decision boundary of a classifier by using the method func inspection DecisionBoundaryDisplay from estimator. pr 16061 by Thomas Fan. Enhancement In meth inspection PartialDependenceDisplay from estimator, allow kind to accept a list of strings to specify which type of plot to draw for each feature interaction. pr 19438 by user Guillaume Lemaitre glemaitre. Enhancement meth inspection PartialDependenceDisplay from estimator, meth inspection PartialDependenceDisplay plot and inspection plot partial dependence now support plotting centered Individual Conditional Expectation (cICE) and centered PDP curves, controlled by setting the parameter centered. pr 18310 by user
Johannes Elfner JoElfner and user Guillaume Lemaitre glemaitre. mod sklearn isotonic. Enhancement Adds term get feature names out to class isotonic IsotonicRegression. pr 22249 by Thomas Fan. mod sklearn kernel approximation. Enhancement Adds term get feature names out to class kernel approximation AdditiveChi2Sampler, class kernel approximation Nystroem, class kernel approximation PolynomialCountSketch, class kernel approximation RBFSampler and class kernel approximation SkewedChi2Sampler. pr 22137 and pr 22694 by Thomas Fan. mod sklearn linear model. Feature class linear model ElasticNet, class linear model ElasticNetCV, class linear model Lasso and class linear model LassoCV support sample weight for sparse input X. pr 22808 by user Christian Lorentzen lorentzenchr. Feature class linear model Ridge with solver lsqr now supports fitting sparse input with fit intercept True. pr 22950 by user Christian Lorentzen lorentzenchr. Enhancement class linear model QuantileRegressor supports sparse input for the highs based solvers. pr 21086 by user Venkatachalam Natchiappan venkyyuvy. In addition, those solvers now use the CSC matrix right from the beginning, which speeds up fitting. pr 22206 by user Christian Lorentzen lorentzenchr. Enhancement class linear model LogisticRegression is faster for solvers lbfgs and newton cg, for binary and in particular for multiclass problems, thanks to the new private loss function module. In the multiclass case, the memory consumption has also been reduced for these solvers, as the target is now label encoded (mapped to integers) instead of label binarized (one-hot encoded). The more classes, the larger the benefit. pr 21808, pr 20567 and pr 21814 by user Christian Lorentzen lorentzenchr. Enhancement class linear model GammaRegressor, class linear model PoissonRegressor and class linear model TweedieRegressor are faster for solver lbfgs. pr 22548, pr 21808 and pr 20567 by user Christian Lorentzen lorentzenchr. Enhancement Rename parameter base estimator to estimator in
class linear model RANSACRegressor to improve readability and consistency. base estimator is deprecated and will be removed in 1 3. pr 22062 by user Adrian Trujillo trujillo9616. Enhancement func linear model ElasticNet and other linear model classes using coordinate descent show error messages when non-finite parameter weights are produced. pr 22148 by user Christian Ritter chritter and user Norbert Preining norbusan. Enhancement class linear model ElasticNet and class linear model Lasso now raise consistent error messages when passed invalid values for l1 ratio, alpha, max iter and tol. pr 22240 by user Arturo Amor ArturoAmorQ. Enhancement class linear model BayesianRidge and class linear model ARDRegression now preserve float32 dtype. pr 9087 by user Arthur Imbert Henley13 and pr 22525 by user Meekail Zain micky774. Enhancement class linear model RidgeClassifier now supports multilabel classification. pr 19689 by user Guillaume Lemaitre glemaitre. Enhancement class linear model RidgeCV and class linear model RidgeClassifierCV now raise consistent error messages when passed invalid values for alphas. pr 21606 by user Arturo Amor ArturoAmorQ. Enhancement class linear model Ridge and class linear model RidgeClassifier now raise consistent error messages when passed invalid values for alpha, max iter and tol. pr 21341 by user Arturo Amor ArturoAmorQ. Enhancement func linear model orthogonal mp gram preserves dtype for numpy float32. pr 22002 by user Takeshi Oura takoika. Fix class linear model LassoLarsIC now correctly computes AIC and BIC. An error is now raised when n features > n samples and when the noise variance is not provided. pr 21481 by user Guillaume Lemaitre glemaitre and user Andrés Babino ababino. Fix class linear model TheilSenRegressor now validates input parameter max subpopulation in fit instead of init. pr 21767 by user Maren Westermann marenwestermann. Fix class linear model ElasticNetCV now produces a correct warning when l1 ratio = 0. pr 21724 by user Yar Khine Phyo
yarkhinephyo Fix class linear model LogisticRegression and class linear model LogisticRegressionCV now set the n iter attribute with a shape that respects the docstring and that is consistent with the shape obtained when using the other solvers in the one vs rest setting Previously it would record only the maximum of the number of iterations for each binary sub problem while now all of them are recorded pr 21998 by user Olivier Grisel ogrisel Fix The property family of class linear model TweedieRegressor is not validated in init anymore Instead this private property is deprecated in class linear model GammaRegressor class linear model PoissonRegressor and class linear model TweedieRegressor and will be removed in 1 3 pr 22548 by user Christian Lorentzen lorentzenchr Fix The coef and intercept attributes of class linear model LinearRegression are now correctly computed in the presence of sample weights when the input is sparse pr 22891 by user J r mie du Boisberranger jeremiedbb Fix The coef and intercept attributes of class linear model Ridge with solver sparse cg and solver lbfgs are now correctly computed in the presence of sample weights when the input is sparse pr 22899 by user J r mie du Boisberranger jeremiedbb Fix class linear model SGDRegressor and class linear model SGDClassifier now computes the validation error correctly when early stopping is enabled pr 23256 by user Zhehao Liu MaxwellLZH API class linear model LassoLarsIC now exposes noise variance as a parameter in order to provide an estimate of the noise variance This is particularly relevant when n features n samples and the estimator of the noise variance cannot be computed pr 21481 by user Guillaume Lemaitre glemaitre mod sklearn manifold Feature class manifold Isomap now supports radius based neighbors via the radius argument pr 19794 by user Zhehao Liu MaxwellLZH Enhancement func manifold spectral embedding and class manifold SpectralEmbedding supports np float32 dtype and will preserve this 
dtype. pr 21534 by user Andrew Knyazev lobpcg. Enhancement Adds term get feature names out to class manifold Isomap and class manifold LocallyLinearEmbedding. pr 22254 by Thomas Fan. Enhancement Added metric params to the class manifold TSNE constructor for additional parameters of the distance metric to use in optimization. pr 21805 by user Jeanne Dionisi jeannedionisi and pr 22685 by user Meekail Zain micky774. Enhancement func manifold trustworthiness raises an error if n_neighbors >= n_samples / 2, to ensure correct support for the function. pr 18832 by user Hong Shao Yang hongshaoyang and pr 23033 by user Meekail Zain micky774. Fix func manifold spectral embedding now uses Gaussian instead of the previous uniform-on-[0, 1] random initial approximations to eigenvectors in eigen solvers lobpcg and amg, to improve their numerical stability. pr 21565 by user Andrew Knyazev lobpcg. mod sklearn metrics. Feature func metrics r2 score and func metrics explained variance score have a new force finite parameter. Setting this parameter to False will return the actual non-finite score in case of perfect predictions or constant y true, instead of the finite approximation (1.0 and 0.0 respectively) currently returned by default. pr 17266 by user Sylvain Marié smarie. Feature func metrics d2 pinball score and func metrics d2 absolute error score calculate the D^2 regression score for the pinball loss and the absolute error respectively. func metrics d2 absolute error score is a special case of func metrics d2 pinball score with a fixed quantile parameter alpha = 0.5, for ease of use and discovery. The D^2 scores are generalizations of the r2 score and can be interpreted as the fraction of deviance explained. pr 22118 by user Ohad Michel ohadmich. Enhancement func metrics top k accuracy score raises an improved error message when y true is binary and y score is 2d. pr 22284 by Thomas Fan. Enhancement func metrics roc auc score now supports average None in the multiclass case when multiclass ovr, which will
return the score per class pr 19158 by user Nicki Skafte SkafteNicki Enhancement Adds im kw parameter to meth metrics ConfusionMatrixDisplay from estimator meth metrics ConfusionMatrixDisplay from predictions and meth metrics ConfusionMatrixDisplay plot The im kw parameter is passed to the matplotlib pyplot imshow call when plotting the confusion matrix pr 20753 by Thomas Fan Fix func metrics silhouette score now supports integer input for precomputed distances pr 22108 by Thomas Fan Fix Fixed a bug in func metrics normalized mutual info score which could return unbounded values pr 22635 by user J r mie du Boisberranger jeremiedbb Fix Fixes func metrics precision recall curve and func metrics average precision score when true labels are all negative pr 19085 by user Varun Agrawal varunagrawal API metrics SCORERS is now deprecated and will be removed in 1 3 Please use func metrics get scorer names to retrieve the names of all available scorers pr 22866 by Adrin Jalali API Parameters sample weight and multioutput of func metrics mean absolute percentage error are now keyword only in accordance with SLEP009 https scikit learn enhancement proposals readthedocs io en latest slep009 proposal html A deprecation cycle was introduced pr 21576 by user Paul Emile Dugnat pedugnat API The wminkowski metric of class metrics DistanceMetric is deprecated and will be removed in version 1 3 Instead the existing minkowski metric now takes in an optional w parameter for weights This deprecation aims at remaining consistent with SciPy 1 8 convention pr 21873 by user Yar Khine Phyo yarkhinephyo API class metrics DistanceMetric has been moved from mod sklearn neighbors to mod sklearn metrics Using neighbors DistanceMetric for imports is still valid for backward compatibility but this alias will be removed in 1 3 pr 21177 by user Julien Jerphanion jjerphan mod sklearn mixture Enhancement class mixture GaussianMixture and class mixture BayesianGaussianMixture can now be initialized using k 
means and random data points. pr 20408 by user Gordon Walsh g walsh, user Alberto Ceballos alceballosa and user Andres Rios ariosramirez. Fix Fixed a bug to correctly initialize precisions cholesky in class mixture GaussianMixture when providing precisions init, by taking its square root. pr 22058 by user Guillaume Lemaitre glemaitre. Fix class mixture GaussianMixture now normalizes weights more safely, preventing rounding errors when calling meth mixture GaussianMixture sample with n components = 1. pr 23034 by user Meekail Zain micky774. mod sklearn model selection. Enhancement It is now possible to pass scoring matthews corrcoef to all model selection tools with a scoring argument, to use the Matthews correlation coefficient (MCC). pr 22203 by user Olivier Grisel ogrisel. Enhancement Raise an error during cross validation when the fits for all the splits failed. Similarly, raise an error during grid search when the fits for all the models and all the splits failed. pr 21026 by user Loïc Estève lesteve. Fix class model selection GridSearchCV and class model selection HalvingGridSearchCV now validate input parameters in fit instead of init. pr 21880 by user Mrinal Tyagi MrinalTyagi. Fix func model selection learning curve now supports partial fit with regressors. pr 22982 by Thomas Fan. mod sklearn multiclass. Enhancement class multiclass OneVsRestClassifier now supports a verbose parameter, so progress on fitting can be seen. pr 22508 by user Chris Combs combscCode. Fix meth multiclass OneVsOneClassifier predict returns correct predictions when the inner classifier only has a term predict proba. pr 22604 by Thomas Fan. mod sklearn neighbors. Enhancement Adds term get feature names out to class neighbors RadiusNeighborsTransformer, class neighbors KNeighborsTransformer and class neighbors NeighborhoodComponentsAnalysis. pr 22212 by user Meekail Zain micky774. Fix class neighbors KernelDensity now validates input parameters in fit instead of init. pr 21430 by user Desislava Vasileva DessyVV and user Lucy
Jimenez LucyJimenez Fix func neighbors KNeighborsRegressor predict now works properly when given an array like input if KNeighborsRegressor is first constructed with a callable passed to the weights parameter pr 22687 by user Meekail Zain micky774 mod sklearn neural network Enhancement func neural network MLPClassifier and func neural network MLPRegressor show error messages when optimizers produce non finite parameter weights pr 22150 by user Christian Ritter chritter and user Norbert Preining norbusan Enhancement Adds term get feature names out to class neural network BernoulliRBM pr 22248 by Thomas Fan mod sklearn pipeline Enhancement Added support for passthrough in class pipeline FeatureUnion Setting a transformer to passthrough will pass the features unchanged pr 20860 by user Shubhraneel Pal shubhraneel Fix class pipeline Pipeline now does not validate hyper parameters in init but in fit pr 21888 by user iofall iofall and user Arisa Y arisayosh Fix class pipeline FeatureUnion does not validate hyper parameters in init Validation is now handled in fit and fit transform pr 21954 by user iofall iofall and user Arisa Y arisayosh Fix Defines sklearn is fitted in class pipeline FeatureUnion to return correct result with func utils validation check is fitted pr 22953 by user randomgeek78 randomgeek78 mod sklearn preprocessing Feature class preprocessing OneHotEncoder now supports grouping infrequent categories into a single feature Grouping infrequent categories is enabled by specifying how to select infrequent categories with min frequency or max categories pr 16018 by Thomas Fan Enhancement Adds a subsample parameter to class preprocessing KBinsDiscretizer This allows specifying a maximum number of samples to be used while fitting the model The option is only available when strategy is set to quantile pr 21445 by user Felipe Bidu fbidu and user Amanda Dsouza amy12xx Enhancement Adds encoded missing value to class preprocessing OrdinalEncoder to configure the 
encoded value for missing data. pr 21988 by Thomas Fan. Enhancement Added the get feature names out method and a new parameter feature names out to class preprocessing FunctionTransformer. You can set feature names out to one-to-one to use the input feature names as the output feature names, or you can set it to a callable that returns the output feature names. This is especially useful when the transformer changes the number of features. If feature names out is None, which is the default, then get feature names out is not defined. pr 21569 by user Aurélien Geron ageron. Enhancement Adds term get feature names out to class preprocessing Normalizer, class preprocessing KernelCenterer, class preprocessing OrdinalEncoder and class preprocessing Binarizer. pr 21079 by Thomas Fan. Fix class preprocessing PowerTransformer with method yeo johnson better supports significantly non-Gaussian data when searching for an optimal lambda. pr 20653 by Thomas Fan. Fix class preprocessing LabelBinarizer now validates input parameters in fit instead of init. pr 21434 by user Krum Arnaudov krumeto. Fix class preprocessing FunctionTransformer with check inverse True now provides an informative error message when input has mixed dtypes. pr 19916 by user Zhehao Liu MaxwellLZH. Fix class preprocessing KBinsDiscretizer handles bin edges more consistently now. pr 14975 by Andreas Müller and pr 22526 by user Meekail Zain micky774. Fix Adds meth preprocessing KBinsDiscretizer get feature names out support when encode ordinal. pr 22735 by Thomas Fan. mod sklearn random projection. Enhancement Adds an inverse transform method and a compute inverse transform parameter to class random projection GaussianRandomProjection and class random projection SparseRandomProjection. When the parameter is set to True, the pseudo-inverse of the components is computed during fit and stored as inverse components. pr 21701 by user Aurélien Geron ageron. Enhancement class random projection SparseRandomProjection and class random projection
GaussianRandomProjection preserves dtype for numpy float32 pr 22114 by user Takeshi Oura takoika Enhancement Adds term get feature names out to all transformers in the mod sklearn random projection module class random projection GaussianRandomProjection and class random projection SparseRandomProjection pr 21330 by user Lo c Est ve lesteve mod sklearn svm Enhancement class svm OneClassSVM class svm NuSVC class svm NuSVR class svm SVC and class svm SVR now expose n iter the number of iterations of the libsvm optimization routine pr 21408 by user Juan Mart n Loyola jmloyola Enhancement func svm SVR func svm SVC func svm NuSVR func svm OneClassSVM func svm NuSVC now raise an error when the dual gap estimation produce non finite parameter weights pr 22149 by user Christian Ritter chritter and user Norbert Preining norbusan Fix class svm NuSVC class svm NuSVR class svm SVC class svm SVR class svm OneClassSVM now validate input parameters in fit instead of init pr 21436 by user Haidar Almubarak Haidar13 mod sklearn tree Enhancement class tree DecisionTreeClassifier and class tree ExtraTreeClassifier have the new criterion log loss which is equivalent to criterion entropy pr 23047 by user Christian Lorentzen lorentzenchr Fix Fix a bug in the Poisson splitting criterion for class tree DecisionTreeRegressor pr 22191 by user Christian Lorentzen lorentzenchr API Changed the default value of max features to 1 0 for class tree ExtraTreeRegressor and to sqrt for class tree ExtraTreeClassifier which will not change the fit result The original default value auto has been deprecated and will be removed in version 1 3 Setting max features to auto is also deprecated for class tree DecisionTreeClassifier and class tree DecisionTreeRegressor pr 22476 by user Zhehao Liu MaxwellLZH mod sklearn utils Enhancement func utils check array and func utils multiclass type of target now accept an input name parameter to make the error message more informative when passed invalid input data e g 
with NaN or infinite values. pr 21219 by user Olivier Grisel ogrisel. Enhancement func utils check array returns a float ndarray with np nan when passed a Float32 or Float64 pandas extension array with pd NA. pr 21278 by Thomas Fan. Enhancement func utils estimator html repr shows a more helpful error message when running in a Jupyter notebook that is not trusted. pr 21316 by Thomas Fan. Enhancement func utils estimator html repr displays an arrow on the top left corner of the HTML representation to show how the elements are clickable. pr 21298 by Thomas Fan. Enhancement func utils check array with dtype None returns numeric arrays when passed a pandas DataFrame with mixed dtypes. dtype numeric will also better infer the dtype when the DataFrame has mixed dtypes. pr 22237 by Thomas Fan. Enhancement func utils check scalar now has better messages when displaying the type. pr 22218 by Thomas Fan. Fix Changes the error message of the ValidationError raised by func utils check X y when y is None, so that it is compatible with the check requires y none estimator check. pr 22578 by user Claudio Salvatore Arcidiacono ClaudioSalvatoreArcidiacono. Fix func utils class weight compute class weight now only requires that all classes in y have a weight in class weight. An error is still raised when a class is present in y but not in class weight. pr 22595 by Thomas Fan. Fix func utils estimator html repr has an improved visualization for nested meta estimators. pr 21310 by Thomas Fan. Fix func utils check scalar raises an error when include boundaries is left or right and the boundaries are not set. pr 22027 by user Marie Lanternier mlant. Fix func utils metaestimators available if correctly returns a bound method that can be pickled. pr 23077 by Thomas Fan. API func utils estimator checks check estimator's argument is now called estimator (previous name was Estimator). pr 22188 by user Mathurin Massias mathurinm. API utils metaestimators if delegate has method is deprecated and will be removed in
version 1 3 Use func utils metaestimators available if instead pr 22830 by user J r mie du Boisberranger jeremiedbb rubric Code and documentation contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1 0 including 2357juan Abhishek Gupta adamgonzo Adam Li adijohar Aditya Kumawat Aditya Raghuwanshi Aditya Singh Adrian Trujillo Duron Adrin Jalali ahmadjubair33 AJ Druck aj white Alan Peixinho Alberto Mario Ceballos Arroyo Alek Lefebvre Alex Alexandr Alexandre Gramfort alexanmv almeidayoel Amanda Dsouza Aman Sharma Amar pratap singh Amit amrcode Andr s Simon Andreas Grivas Andreas Mueller Andrew Knyazev Andriy Angus L Herrou Ankit Sharma Anne Ducout Arisa Arth arthurmello Arturo Amor ArturoAmor Atharva Patil aufarkari Aur lien Geron avm19 Ayan Bag baam Bardiya Ak Behrouz B Ben3940 Benjamin Bossan Bharat Raghunathan Bijil Subhash bmreiniger Brandon Truth Brenden Kadota Brian Sun cdrig Chalmer Lowe Chiara Marmo Chitteti Srinath Reddy Chloe Agathe Azencott Christian Lorentzen Christian Ritter christopherlim98 Christoph T Weidemann Christos Aridas Claudio Salvatore Arcidiacono combscCode Daniela Fernandes darioka Darren Nguyen Dave Eargle David Gilbertson David Poznik Dea Mar a L on Dennis Osei DessyVV Dev514 Dimitri Papadopoulos Orfanos Diwakar Gupta Dr Felix M Riese drskd Emiko Sano Emmanouil Gionanidis EricEllwanger Erich Schubert Eric Larson Eric Ndirangu ErmolaevPA Estefania Barreto Ojeda eyast Fatima GASMI Federico Luna Felix Glushchenkov fkaren27 Fortune Uwha FPGAwesome francoisgoupil Frans Larsson ftorres16 Gabor Berei Gabor Kertesz Gabriel Stefanini Vicente Gabriel S Vicente Gael Varoquaux GAURAV CHOUDHARY Gauthier I genvalen Geoffrey Paris Giancarlo Pablo glennfrutiz gpapadok Guillaume Lemaitre Guillermo Tom s Fern ndez Mart n Gustavo Oliveira Haidar Almubarak Hannah Bohle Hansin Ahuja Haoyin Xu Haya Helder Geovane Gomes de Lima henrymooresc Hideaki Imamura Himanshu Kumar Hind M hmasdev hvassard i aki 
y iasoon Inclusive Coding Bot Ingela iofall Ishan Kumar Jack Liu Jake Cowton jalexand3r J Alexander Jauhar Jaya Surya Kommireddy Jay Stanley Jeff Hale je kr JElfner Jenny Vo J r mie du Boisberranger Jihane Jirka Borovec Joel Nothman Jon Haitz Legarreta Gorro o Jordan Silke Jorge Cipri n Jorge Loayza Joseph Chazalon Joseph Schwartz Messing Jovan Stojanovic JSchuerz Juan Carlos Alfaro Jim nez Juan Martin Loyola Julien Jerphanion katotten Kaushik Roy Chowdhury Ken4git Kenneth Prabakaran kernc Kevin Doucet KimAYoung Koushik Joshi Kranthi Sedamaki krishna kumar krumetoft lesnee Lisa Casino Logan Thomas Loic Esteve Louis Wagner LucieClair Lucy Liu Luiz Eduardo Amaral Magali MaggieChege Mai mandjevant Mandy Gu Manimaran MarcoM Marco Wurps Maren Westermann Maria Boerner MarieS WiMLDS Martel Corentin martin kokos mathurinm Mat as matjansen Matteo Francia Maxwell Meekail Zain Megabyte Mehrdad Moradizadeh melemo2 Michael I Chen michalkrawczyk Micky774 milana2 millawell Ming Yang Ho Mitzi miwojc Mizuki mlant Mohamed Haseeb Mohit Sharma Moonkyung94 mpoemsl MrinalTyagi Mr Leu msabatier murata yu N Nadirhan ahin Naipawat Poolsawat NartayXD nastegiano nathansquan nat salt Nicki Skafte Detlefsen Nicolas Hug Niket Jain Nikhil Suresh Nikita Titov Nikolay Kondratyev Ohad Michel Oleksandr Husak Olivier Grisel partev Patrick Ferreira Paul pelennor PierreAttard Piet Br mmel Pieter Gijsbers Pinky poloso Pramod Anantharam puhuk Purna Chandra Mansingh QuadV Rahil Parikh Randall Boyes randomgeek78 Raz Hoshia Reshama Shaikh Ricardo Ferreira Richard Taylor Rileran Rishabh Robin Thibaut Rocco Meli Roman Feldbauer Roman Yurchak Ross Barnowski rsnegrin Sachin Yadav sakinaOuisrani Sam Adam Day Sanjay Marreddi Sebastian Pujalte SEELE SELEE Seyedsaman Sam Emami ShanDeng123 Shao Yang Hong sharmadharmpal shaymerNaturalint Shuangchi He Shubhraneel Pal siavrez slishak Smile spikebh sply88 Srinath Kailasa St phane Collot Sultan Orazbayev Sumit Saha Sven Eschlbeck Sven Stehle Swapnil Jha Sylvain Mari 
Takeshi Oura Tamires Santana Tenavi teunpe Theis Ferr Hjortkj r Thiruvenkadam Thomas J Fan t jakubek toastedyeast Tom Dupr la Tour Tom McTiernan TONY GEORGE Tyler Martin Tyler Reddy Udit Gupta Ugo Marchand Varun Agrawal Venkatachalam N Vera Komeyer victoirelouis Vikas Vishwakarma Vikrant khedkar Vladimir Chernyy Vladimir Kim WeijiaDu Xiao Yuan Yar Khine Phyo Ying Xiong yiyangq Yosshi999 Yuki Koyama Zach Deane Mayer Zeel B Patel zempleni zhenfisher Zhao Feng |
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.13
============
.. _changes_0_13_1:
Version 0.13.1
==============
**February 23, 2013**
The 0.13.1 release only fixes some bugs and does not add any new functionality.
Changelog
---------
- Fixed a testing error caused by the function `cross_validation.train_test_split` being
interpreted as a test by `Yaroslav Halchenko`_.
- Fixed a bug in the reassignment of small clusters in the :class:`cluster.MiniBatchKMeans`
by `Gael Varoquaux`_.
- Fixed default value of ``gamma`` in :class:`decomposition.KernelPCA` by `Lars Buitinck`_.
- Updated joblib to ``0.7.0d`` by `Gael Varoquaux`_.
- Fixed scaling of the deviance in :class:`ensemble.GradientBoostingClassifier` by `Peter Prettenhofer`_.
- Better tie-breaking in :class:`multiclass.OneVsOneClassifier` by `Andreas Müller`_.
- Other small improvements to tests and documentation.
People
------
List of contributors for release 0.13.1 by number of commits.
* 16 `Lars Buitinck`_
* 12 `Andreas Müller`_
* 8 `Gael Varoquaux`_
* 5 Robert Marchman
* 3 `Peter Prettenhofer`_
* 2 Hrishikesh Huilgolkar
* 1 Bastiaan van den Berg
* 1 Diego Molla
* 1 `Gilles Louppe`_
* 1 `Mathieu Blondel`_
* 1 `Nelle Varoquaux`_
* 1 Rafael Cunha de Almeida
* 1 Rolando Espinoza La fuente
* 1 `Vlad Niculae`_
* 1 `Yaroslav Halchenko`_
.. _changes_0_13:
Version 0.13
============
**January 21, 2013**
New Estimator Classes
---------------------
- :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`, two
data-independent predictors by `Mathieu Blondel`_. Useful to sanity-check
your estimators. See :ref:`dummy_estimators` in the user guide.
Multioutput support added by `Arnaud Joly`_.
- :class:`decomposition.FactorAnalysis`, a transformer implementing the
classical factor analysis, by `Christian Osendorfer`_ and `Alexandre
Gramfort`_. See :ref:`FA` in the user guide.
- :class:`feature_extraction.FeatureHasher`, a transformer implementing the
"hashing trick" for fast, low-memory feature extraction from string fields
by `Lars Buitinck`_ and :class:`feature_extraction.text.HashingVectorizer`
for text documents by `Olivier Grisel`_. See :ref:`feature_hashing` and
:ref:`hashing_vectorizer` for the documentation and sample usage.
- :class:`pipeline.FeatureUnion`, a transformer that concatenates
results of several other transformers by `Andreas Müller`_. See
:ref:`feature_union` in the user guide.
- :class:`random_projection.GaussianRandomProjection`,
:class:`random_projection.SparseRandomProjection` and the function
:func:`random_projection.johnson_lindenstrauss_min_dim`. The first two are
transformers implementing Gaussian and sparse random projection matrices
by `Olivier Grisel`_ and `Arnaud Joly`_.
See :ref:`random_projection` in the user guide.
- :class:`kernel_approximation.Nystroem`, a transformer for approximating
arbitrary kernels by `Andreas Müller`_. See
:ref:`nystroem_kernel_approx` in the user guide.
- :class:`preprocessing.OneHotEncoder`, a transformer that computes binary
encodings of categorical features by `Andreas Müller`_. See
:ref:`preprocessing_categorical_features` in the user guide.
- :class:`linear_model.PassiveAggressiveClassifier` and
:class:`linear_model.PassiveAggressiveRegressor`, predictors implementing
an efficient stochastic optimization for linear models by `Rob Zinkov`_ and
`Mathieu Blondel`_. See :ref:`passive_aggressive` in the user
guide.
- :class:`ensemble.RandomTreesEmbedding`, a transformer for creating high-dimensional
sparse representations using ensembles of totally random trees by `Andreas Müller`_.
See :ref:`random_trees_embedding` in the user guide.
- :class:`manifold.SpectralEmbedding` and function
:func:`manifold.spectral_embedding`, implementing the "laplacian
eigenmaps" transformation for non-linear dimensionality reduction by Wei
Li. See :ref:`spectral_embedding` in the user guide.
- :class:`isotonic.IsotonicRegression` by `Fabian Pedregosa`_, `Alexandre Gramfort`_
  and `Nelle Varoquaux`_.
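The Johnson-Lindenstrauss bound behind
:func:`random_projection.johnson_lindenstrauss_min_dim` can be sketched in a
few lines. This is a hedged illustration of the documented bound, not
scikit-learn's implementation; the helper name ``jl_min_dim`` is invented for
the example:

```python
import math

def jl_min_dim(n_samples: int, eps: float) -> int:
    """Minimum number of random-projection components that preserves
    pairwise distances within a (1 +/- eps) factor, per the
    Johnson-Lindenstrauss lemma (the bound the function above is
    documented to use)."""
    denominator = eps ** 2 / 2 - eps ** 3 / 3
    return math.ceil(4 * math.log(n_samples) / denominator)

# More samples, or a tighter distortion eps, both require more components.
print(jl_min_dim(1_000, 0.1))
print(jl_min_dim(1_000, 0.5))
```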
Changelog
---------
- :func:`metrics.zero_one_loss` (formerly ``metrics.zero_one``) now has
option for normalized output that reports the fraction of
misclassifications, rather than the raw number of misclassifications. By
Kyle Beauchamp.
- :class:`tree.DecisionTreeClassifier` and all derived ensemble models now
support sample weighting, by `Noel Dawe`_ and `Gilles Louppe`_.
- Speed improvements when using bootstrap samples in forests of randomized
  trees, by `Peter Prettenhofer`_ and `Gilles Louppe`_.
- Partial dependence plots for :ref:`gradient_boosting` in
`ensemble.partial_dependence.partial_dependence` by `Peter
Prettenhofer`_. See :ref:`sphx_glr_auto_examples_inspection_plot_partial_dependence.py` for an
example.
- The table of contents on the website has now been made expandable by
`Jaques Grobler`_.
- :class:`feature_selection.SelectPercentile` now breaks ties
deterministically instead of returning all equally ranked features.
- :class:`feature_selection.SelectKBest` and
:class:`feature_selection.SelectPercentile` are more numerically stable
since they use scores, rather than p-values, to rank results. This means
that they might sometimes select different features than they did
previously.
- Ridge regression and ridge classification fitting with ``sparse_cg`` solver
no longer has quadratic memory complexity, by `Lars Buitinck`_ and
`Fabian Pedregosa`_.
- Ridge regression and ridge classification now support a new fast solver
called ``lsqr``, by `Mathieu Blondel`_.
- Speed up of :func:`metrics.precision_recall_curve` by Conrad Lee.
- Added support for reading/writing svmlight files with pairwise
preference attribute (qid in svmlight file format) in
:func:`datasets.dump_svmlight_file` and
:func:`datasets.load_svmlight_file` by `Fabian Pedregosa`_.
- Faster and more robust :func:`metrics.confusion_matrix` and
:ref:`clustering_evaluation` by Wei Li.
- `cross_validation.cross_val_score` now works with precomputed kernels
and affinity matrices, by `Andreas Müller`_.
- LARS algorithm made more numerically stable with heuristics to drop
  regressors that are too correlated and to stop the path when
  numerical noise becomes predominant, by `Gael Varoquaux`_.
- New kernel `metrics.chi2_kernel` by `Andreas Müller`_, often used
in computer vision applications.
- Fix of longstanding bug in :class:`naive_bayes.BernoulliNB` fixed by
Shaun Jackman.
- Implemented ``predict_proba`` in :class:`multiclass.OneVsRestClassifier`,
by Andrew Winterman.
- Improve consistency in gradient boosting: estimators
:class:`ensemble.GradientBoostingRegressor` and
:class:`ensemble.GradientBoostingClassifier` use the estimator
:class:`tree.DecisionTreeRegressor` instead of the
`tree._tree.Tree` data structure by `Arnaud Joly`_.
- Fixed a floating point exception in the :ref:`decision trees <tree>`
module, by Seberg.
- Fixed :func:`metrics.roc_curve` failing when ``y_true`` has only one class,
  by Wei Li.
- Add the :func:`metrics.mean_absolute_error` function which computes the
mean absolute error. The :func:`metrics.mean_squared_error`,
:func:`metrics.mean_absolute_error` and
:func:`metrics.r2_score` metrics support multioutput by `Arnaud Joly`_.
- Fixed ``class_weight`` support in :class:`svm.LinearSVC` and
  :class:`linear_model.LogisticRegression` by `Andreas Müller`_. The meaning
  of ``class_weight`` was reversed, as erroneously a higher weight meant fewer
  positives of a given class in earlier releases.
- Improve narrative documentation and consistency in
:mod:`sklearn.metrics` for regression and classification metrics
by `Arnaud Joly`_.
- Fixed a bug in :class:`sklearn.svm.SVC` when using csr-matrices with
unsorted indices by Xinfan Meng and `Andreas Müller`_.
- :class:`cluster.MiniBatchKMeans`: Add random reassignment of cluster centers
  with few observations attached to them, by `Gael Varoquaux`_.
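The chi-squared kernel added above can be illustrated with a small standalone
sketch of its documented formula,
``k(x, y) = exp(-gamma * sum((x - y)**2 / (x + y)))``. The helper name is
invented for the example; this is not the library's vectorized implementation:

```python
import math

def chi2_kernel_value(x, y, gamma=1.0):
    """Exponential chi-squared kernel between two non-negative feature
    vectors (e.g. histograms), following the formula documented for
    ``metrics.pairwise.chi2_kernel``. Terms with x_i + y_i == 0
    contribute nothing."""
    s = 0.0
    for xi, yi in zip(x, y):
        if xi + yi > 0:
            s += (xi - yi) ** 2 / (xi + yi)
    return math.exp(-gamma * s)

# A histogram compared with itself has kernel value 1.0; the value
# decays towards 0 as the histograms diverge.
print(chi2_kernel_value([0.2, 0.8], [0.2, 0.8]))  # 1.0
print(chi2_kernel_value([0.2, 0.8], [0.9, 0.1]))
```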
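The multioutput support in :func:`metrics.mean_absolute_error` can be
sketched as follows. This is an illustration of per-output MAE with uniform
averaging over outputs (the default aggregation), not the library code; the
helper name is made up:

```python
def mean_absolute_error_multioutput(y_true, y_pred):
    """Per-output mean absolute error for a multioutput target, plus the
    uniform average over outputs."""
    n_samples = len(y_true)
    n_outputs = len(y_true[0])
    per_output = [
        sum(abs(y_true[i][k] - y_pred[i][k]) for i in range(n_samples)) / n_samples
        for k in range(n_outputs)
    ]
    return per_output, sum(per_output) / n_outputs

per_output, averaged = mean_absolute_error_multioutput(
    [[0.5, 1.0], [-1.0, 1.0], [7.0, -6.0]],
    [[0.0, 2.0], [-1.0, 2.0], [8.0, -5.0]],
)
print(per_output, averaged)  # [0.5, 1.0] 0.75
```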
API changes summary
-------------------
- Renamed all occurrences of ``n_atoms`` to ``n_components`` for consistency.
This applies to :class:`decomposition.DictionaryLearning`,
:class:`decomposition.MiniBatchDictionaryLearning`,
:func:`decomposition.dict_learning`, :func:`decomposition.dict_learning_online`.
- Renamed all occurrences of ``max_iters`` to ``max_iter`` for consistency.
This applies to `semi_supervised.LabelPropagation` and
`semi_supervised.label_propagation.LabelSpreading`.
- Renamed all occurrences of ``learn_rate`` to ``learning_rate`` for
consistency in `ensemble.BaseGradientBoosting` and
:class:`ensemble.GradientBoostingRegressor`.
- The module ``sklearn.linear_model.sparse`` is gone. Sparse matrix support
was already integrated into the "regular" linear models.
- `sklearn.metrics.mean_square_error`, which incorrectly returned the
accumulated error, was removed. Use :func:`metrics.mean_squared_error` instead.
- Passing ``class_weight`` parameters to ``fit`` methods is no longer
supported. Pass them to estimator constructors instead.
- GMMs no longer have ``decode`` and ``rvs`` methods. Use the ``score``,
``predict`` or ``sample`` methods instead.
- The ``solver`` fit option in Ridge regression and classification is now
deprecated and will be removed in v0.14. Use the constructor option
instead.
- `feature_extraction.text.DictVectorizer` now returns sparse
matrices in the CSR format, instead of COO.
- Renamed ``k`` in `cross_validation.KFold` and
`cross_validation.StratifiedKFold` to ``n_folds``, renamed
``n_bootstraps`` to ``n_iter`` in ``cross_validation.Bootstrap``.
- Renamed all occurrences of ``n_iterations`` to ``n_iter`` for consistency.
This applies to `cross_validation.ShuffleSplit`,
`cross_validation.StratifiedShuffleSplit`,
:func:`utils.extmath.randomized_range_finder` and
:func:`utils.extmath.randomized_svd`.
- Replaced ``rho`` in :class:`linear_model.ElasticNet` and
:class:`linear_model.SGDClassifier` by ``l1_ratio``. The ``rho`` parameter
had different meanings; ``l1_ratio`` was introduced to avoid confusion.
It has the same meaning as previously ``rho`` in
:class:`linear_model.ElasticNet` and ``(1-rho)`` in
:class:`linear_model.SGDClassifier`.
- :class:`linear_model.LassoLars` and :class:`linear_model.Lars` now
store a list of paths in the case of multiple targets, rather than
an array of paths.
- The attribute ``gmm`` of `hmm.GMMHMM` was renamed to ``gmm_``
  to adhere more strictly to the API.
- `cluster.spectral_embedding` was moved to
:func:`manifold.spectral_embedding`.
- Renamed ``eig_tol`` in :func:`manifold.spectral_embedding`,
:class:`cluster.SpectralClustering` to ``eigen_tol``, renamed ``mode``
to ``eigen_solver``.
- ``classes_`` and ``n_classes_`` attributes of
:class:`tree.DecisionTreeClassifier` and all derived ensemble models are
now flat in case of single output problems and nested in case of
multi-output problems.
- The ``estimators_`` attribute of
:class:`ensemble.GradientBoostingRegressor` and
:class:`ensemble.GradientBoostingClassifier` is now an
array of :class:`tree.DecisionTreeRegressor`.
- Renamed ``chunk_size`` to ``batch_size`` in
:class:`decomposition.MiniBatchDictionaryLearning` and
:class:`decomposition.MiniBatchSparsePCA` for consistency.
- :class:`svm.SVC` and :class:`svm.NuSVC` now provide a ``classes_``
attribute and support arbitrary dtypes for labels ``y``.
Also, the dtype returned by ``predict`` now reflects the dtype of
``y`` during ``fit`` (used to be ``np.float``).
- Changed default test_size in `cross_validation.train_test_split`
to None, added possibility to infer ``test_size`` from ``train_size`` in
`cross_validation.ShuffleSplit` and
`cross_validation.StratifiedShuffleSplit`.
- Renamed function `sklearn.metrics.zero_one` to
`sklearn.metrics.zero_one_loss`. Be aware that the default behavior
in `sklearn.metrics.zero_one_loss` is different from
`sklearn.metrics.zero_one`: ``normalize=False`` is changed to
``normalize=True``.
- Renamed function `metrics.zero_one_score` to
:func:`metrics.accuracy_score`.
- :func:`datasets.make_circles` now has the same number of inner and outer points.
- In the Naive Bayes classifiers, the ``class_prior`` parameter was moved
from ``fit`` to ``__init__``.
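The ``l1_ratio`` parameterization introduced above can be written out
explicitly. The sketch below follows the penalty as documented for
:class:`linear_model.ElasticNet`; the helper name is invented for
illustration:

```python
def elastic_net_penalty(w, alpha=1.0, l1_ratio=0.5):
    """Elastic-net regularization term parameterized by ``l1_ratio``
    (the replacement for ``rho``):
        alpha * l1_ratio * ||w||_1
        + 0.5 * alpha * (1 - l1_ratio) * ||w||_2**2
    l1_ratio=1 gives a pure L1 (lasso) penalty, l1_ratio=0 pure ridge."""
    l1 = sum(abs(wi) for wi in w)
    l2_sq = sum(wi * wi for wi in w)
    return alpha * l1_ratio * l1 + 0.5 * alpha * (1 - l1_ratio) * l2_sq

w = [1.0, -2.0]
print(elastic_net_penalty(w, alpha=1.0, l1_ratio=1.0))  # pure L1: 3.0
print(elastic_net_penalty(w, alpha=1.0, l1_ratio=0.0))  # pure ridge: 2.5
```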
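The renamed loss and its changed ``normalize`` default can be illustrated
with a minimal standalone sketch. This mirrors the documented semantics, not
the library implementation:

```python
def zero_one_loss(y_true, y_pred, normalize=True):
    """Misclassification measure mirroring the renamed
    ``metrics.zero_one_loss``: with normalize=True (the new default) it
    returns the fraction of misclassified samples; with normalize=False
    it returns the raw count (the old ``metrics.zero_one`` behaviour)."""
    errors = sum(t != p for t, p in zip(y_true, y_pred))
    return errors / len(y_true) if normalize else errors

y_true = [1, 2, 3, 4]
y_pred = [2, 2, 3, 4]
print(zero_one_loss(y_true, y_pred))                   # 0.25
print(zero_one_loss(y_true, y_pred, normalize=False))  # 1
```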
People
------
List of contributors for release 0.13 by number of commits.
* 364 `Andreas Müller`_
* 143 `Arnaud Joly`_
* 137 `Peter Prettenhofer`_
* 131 `Gael Varoquaux`_
* 117 `Mathieu Blondel`_
* 108 `Lars Buitinck`_
* 106 Wei Li
* 101 `Olivier Grisel`_
* 65 `Vlad Niculae`_
* 54 `Gilles Louppe`_
* 40 `Jaques Grobler`_
* 38 `Alexandre Gramfort`_
* 30 `Rob Zinkov`_
* 19 Aymeric Masurelle
* 18 Andrew Winterman
* 17 `Fabian Pedregosa`_
* 17 Nelle Varoquaux
* 16 `Christian Osendorfer`_
* 14 `Daniel Nouri`_
* 13 :user:`Virgile Fritsch <VirgileFritsch>`
* 13 syhw
* 12 `Satrajit Ghosh`_
* 10 Corey Lynch
* 10 Kyle Beauchamp
* 9 Brian Cheung
* 9 Immanuel Bayer
* 9 mr.Shu
* 8 Conrad Lee
* 8 `James Bergstra`_
* 7 Tadej Janež
* 6 Brian Cajes
* 6 `Jake Vanderplas`_
* 6 Michael
* 6 Noel Dawe
* 6 Tiago Nunes
* 6 cow
* 5 Anze
* 5 Shiqiao Du
* 4 Christian Jauvin
* 4 Jacques Kvam
* 4 Richard T. Guy
* 4 `Robert Layton`_
* 3 Alexandre Abraham
* 3 Doug Coleman
* 3 Scott Dickerson
* 2 ApproximateIdentity
* 2 John Benediktsson
* 2 Mark Veronda
* 2 Matti Lyra
* 2 Mikhail Korobov
* 2 Xinfan Meng
* 1 Alejandro Weinstein
* 1 `Alexandre Passos`_
* 1 Christoph Deil
* 1 Eugene Nizhibitsky
* 1 Kenneth C. Arnold
* 1 Luis Pedro Coelho
* 1 Miroslav Batchkarov
* 1 Pavel
* 1 Sebastian Berg
* 1 Shaun Jackman
* 1 Subhodeep Moitra
* 1 bob
* 1 dengemann
* 1 emanuele
* 1 x006
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.15
============
.. _changes_0_15_2:
Version 0.15.2
==============
**September 4, 2014**
Bug fixes
---------
- Fixed handling of the ``p`` parameter of the Minkowski distance that was
previously ignored in nearest neighbors models. By :user:`Nikolay
Mayorov <nmayorov>`.
- Fixed duplicated alphas in :class:`linear_model.LassoLars` with early
stopping on 32 bit Python. By `Olivier Grisel`_ and `Fabian Pedregosa`_.
- Fixed the build under Windows when scikit-learn is built with MSVC while
NumPy is built with MinGW. By `Olivier Grisel`_ and :user:`Federico
Vaggi <FedericoV>`.
- Fixed an array index overflow bug in the coordinate descent solver. By
`Gael Varoquaux`_.
- Better handling of numpy 1.9 deprecation warnings. By `Gael Varoquaux`_.
- Removed unnecessary data copy in :class:`cluster.KMeans`.
By `Gael Varoquaux`_.
- Explicitly close open files to avoid ``ResourceWarnings`` under Python 3.
By Calvin Giles.
- The ``transform`` of :class:`discriminant_analysis.LinearDiscriminantAnalysis`
now projects the input on the most discriminant directions. By Martin Billinger.
- Fixed potential overflow in ``_tree.safe_realloc`` by `Lars Buitinck`_.
- Performance optimization in :class:`isotonic.IsotonicRegression`.
By Robert Bradshaw.
- ``nose`` is no longer a runtime dependency to import ``sklearn``; it is only
  needed for running the tests. By `Joel Nothman`_.
- Many documentation and website fixes by `Joel Nothman`_, `Lars Buitinck`_,
  :user:`Matt Pico <MattpSoftware>`, and others.
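The ``p`` parameter fixed above controls the Minkowski metric used by
nearest-neighbors models. A minimal standalone sketch of the distance (not
scikit-learn's implementation; the helper name is made up):

```python
def minkowski_distance(x, y, p=2):
    """Minkowski distance with parameter ``p``:
    p=1 is Manhattan distance, p=2 Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = [0.0, 0.0], [3.0, 4.0]
print(minkowski_distance(x, y, p=1))  # 7.0 (Manhattan)
print(minkowski_distance(x, y, p=2))  # 5.0 (Euclidean)
```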
.. _changes_0_15_1:
Version 0.15.1
==============
**August 1, 2014**
Bug fixes
---------
- Made `cross_validation.cross_val_score` use
`cross_validation.KFold` instead of
`cross_validation.StratifiedKFold` on multi-output classification
problems. By :user:`Nikolay Mayorov <nmayorov>`.
- Support unseen labels in :class:`preprocessing.LabelBinarizer` to restore
the default behavior of 0.14.1 for backward compatibility. By
:user:`Hamzeh Alsalhi <hamsal>`.
- Fixed the :class:`cluster.KMeans` stopping criterion that prevented early
convergence detection. By Edward Raff and `Gael Varoquaux`_.
- Fixed the behavior of :class:`multiclass.OneVsOneClassifier`
in case of ties at the per-class vote level by computing the correct
per-class sum of prediction scores. By `Andreas Müller`_.
- Made `cross_validation.cross_val_score` and
`grid_search.GridSearchCV` accept Python lists as input data.
This is especially useful for cross-validation and model selection of
text processing pipelines. By `Andreas Müller`_.
- Fixed data input checks of most estimators to accept input data that
  implements the NumPy ``__array__`` protocol. This is the case for
  ``pandas.Series`` and ``pandas.DataFrame`` in recent versions of
pandas. By `Gael Varoquaux`_.
- Fixed a regression for :class:`linear_model.SGDClassifier` with
``class_weight="auto"`` on data with non-contiguous labels. By
`Olivier Grisel`_.
.. _changes_0_15:
Version 0.15
============
**July 15, 2014**
Highlights
-----------
- Many speed and memory improvements all across the code
- Huge speed and memory improvements to random forests (and extra
  trees) that also benefit more from parallel computing.
- Incremental fit to :class:`BernoulliRBM <neural_network.BernoulliRBM>`
- Added :class:`cluster.AgglomerativeClustering` for hierarchical
agglomerative clustering with average linkage, complete linkage and
ward strategies.
- Added :class:`linear_model.RANSACRegressor` for robust regression
models.
- Added dimensionality reduction with :class:`manifold.TSNE` which can be
used to visualize high-dimensional data.
Changelog
---------
New features
............
- Added :class:`ensemble.BaggingClassifier` and
:class:`ensemble.BaggingRegressor` meta-estimators for ensembling
any kind of base estimator. See the :ref:`Bagging <bagging>` section of
the user guide for details and examples. By `Gilles Louppe`_.
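As an illustrative sketch of these meta-estimators (toy dataset and parameter values chosen here, not taken from the changelog):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy dataset; any base estimator could be bagged the same way.
X, y = make_classification(n_samples=200, random_state=0)

# Ten trees, each fit on a bootstrap sample of the training data.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                            random_state=0).fit(X, y)
print(len(bagging.estimators_))  # one fitted tree per ensemble member → 10
```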
- New unsupervised feature selection algorithm
:class:`feature_selection.VarianceThreshold`, by `Lars Buitinck`_.
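A minimal sketch of the new selector (toy matrix chosen for illustration): with the default threshold of 0.0, features that are constant across samples are dropped.

```python
from sklearn.feature_selection import VarianceThreshold

# The middle column is constant, so it is removed.
X = [[0, 2, 1],
     [1, 2, 0],
     [0, 2, 1]]
X_reduced = VarianceThreshold().fit_transform(X)
print(X_reduced.shape)  # → (3, 2)
```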
- Added :class:`linear_model.RANSACRegressor` meta-estimator for the robust
fitting of regression models. By :user:`Johannes Schönberger <ahojnnes>`.
- Added :class:`cluster.AgglomerativeClustering` for hierarchical
agglomerative clustering with average linkage, complete linkage and
ward strategies, by `Nelle Varoquaux`_ and `Gael Varoquaux`_.
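For illustration, a minimal use of the new estimator on two well-separated toy blobs (data and parameters are arbitrary choices, not from the changelog):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Two compact groups; average linkage merges within each group first.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])
labels = AgglomerativeClustering(n_clusters=2,
                                 linkage="average").fit_predict(X)
print(labels)  # first three points share one label, last three the other
```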
- Shorthand constructors :func:`pipeline.make_pipeline` and
:func:`pipeline.make_union` were added by `Lars Buitinck`_.
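A quick sketch of the shorthand constructor (estimators chosen here only as an example): step names are derived automatically from the lowercased class names.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), LogisticRegression())
print([name for name, _ in pipe.steps])
# → ['standardscaler', 'logisticregression']
```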
- Shuffle option for `cross_validation.StratifiedKFold`.
By :user:`Jeffrey Blackburne <jblackburne>`.
- Incremental learning (``partial_fit``) for Gaussian Naive Bayes by
Imran Haque.
- Added ``partial_fit`` to :class:`BernoulliRBM
<neural_network.BernoulliRBM>`
By :user:`Danny Sullivan <dsullivan7>`.
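As a sketch of incremental learning with Gaussian Naive Bayes (toy data chosen here): the full set of classes must be declared on the first ``partial_fit`` call, and later batches update the running statistics.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

clf = GaussianNB()
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])

# First batch: declare all classes up front.
clf.partial_fit(X[:2], y[:2], classes=[0, 1])
# Second batch: statistics are updated incrementally.
clf.partial_fit(X[2:], y[2:])
print(clf.predict([[-1.5], [1.5]]))  # → [0 1]
```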
- Added `learning_curve` utility to
chart performance with respect to training size. See
:ref:`sphx_glr_auto_examples_model_selection_plot_learning_curve.py`. By Alexander Fabisch.
- Add ``positive`` option in :class:`LassoCV <linear_model.LassoCV>` and
:class:`ElasticNetCV <linear_model.ElasticNetCV>`.
By Brian Wignall and `Alexandre Gramfort`_.
- Added :class:`linear_model.MultiTaskElasticNetCV` and
:class:`linear_model.MultiTaskLassoCV`. By `Manoj Kumar`_.
- Added :class:`manifold.TSNE`. By Alexander Fabisch.
Enhancements
............
- Add sparse input support to :class:`ensemble.AdaBoostClassifier` and
:class:`ensemble.AdaBoostRegressor` meta-estimators.
By :user:`Hamzeh Alsalhi <hamsal>`.
- Memory improvements of decision trees, by `Arnaud Joly`_.
- Decision trees can now be built in best-first manner by using ``max_leaf_nodes``
  as the stopping criterion. Refactored the tree code to use either a
stack or a priority queue for tree building.
By `Peter Prettenhofer`_ and `Gilles Louppe`_.
- Decision trees can now be fitted on fortran- and c-style arrays, and
  non-contiguous arrays without the need to make a copy.
If the input array has a different dtype than ``np.float32``, a fortran-
style copy will be made since fortran-style memory layout has speed
advantages. By `Peter Prettenhofer`_ and `Gilles Louppe`_.
- Speed improvement of regression trees by optimizing the
  computation of the mean square error criterion. This led
  to speed improvements of the tree, forest and gradient boosting tree
  modules. By `Arnaud Joly`_.
- The ``img_to_graph`` and ``grid_to_graph`` functions in
:mod:`sklearn.feature_extraction.image` now return ``np.ndarray``
instead of ``np.matrix`` when ``return_as=np.ndarray``. See the
Notes section for more information on compatibility.
- Changed the internal storage of decision trees to use a struct array.
This fixed some small bugs, while improving code and providing a small
speed gain. By `Joel Nothman`_.
- Reduce memory usage and overhead when fitting and predicting with forests
  of randomized trees in parallel with ``n_jobs != 1`` by leveraging the new
  threading backend of joblib 0.8 and releasing the GIL in the tree fitting
  Cython code. By `Olivier Grisel`_ and `Gilles Louppe`_.
- Speed improvement of the `sklearn.ensemble.gradient_boosting` module.
By `Gilles Louppe`_ and `Peter Prettenhofer`_.
- Various enhancements to the `sklearn.ensemble.gradient_boosting`
module: a ``warm_start`` argument to fit additional trees,
a ``max_leaf_nodes`` argument to fit GBM style trees,
a ``monitor`` fit argument to inspect the estimator during training, and
refactoring of the verbose code. By `Peter Prettenhofer`_.
- Faster `sklearn.ensemble.ExtraTrees` by caching feature values.
By `Arnaud Joly`_.
- Faster depth-based tree building algorithms such as decision tree,
  random forest, extra trees or gradient tree boosting (with depth-based
  growing strategy) by avoiding splitting on features found to be constant
  in the sample subset. By `Arnaud Joly`_.
- Add ``min_weight_fraction_leaf`` pre-pruning parameter to tree-based
methods: the minimum weighted fraction of the input samples required to be
at a leaf node. By `Noel Dawe`_.
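A sketch of the new pre-pruning parameter (toy data; the 0.1 value is an arbitrary choice): requiring each leaf to hold at least 10% of the (weighted) samples caps how finely the tree can partition the data.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=0)

# Each leaf must contain at least 10% of the total sample weight,
# so the fitted tree can have at most 10 leaves here.
tree = DecisionTreeClassifier(min_weight_fraction_leaf=0.1,
                              random_state=0).fit(X, y)
print(tree.get_n_leaves())
```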
- Added :func:`metrics.pairwise_distances_argmin_min`, by Philippe Gervais.
- Added predict method to :class:`cluster.AffinityPropagation` and
:class:`cluster.MeanShift`, by `Mathieu Blondel`_.
- Vector and matrix multiplications have been optimised throughout the
library by `Denis Engemann`_, and `Alexandre Gramfort`_.
In particular, they should take less memory with older NumPy versions
(prior to 1.7.2).
- Precision-recall and ROC examples now use ``train_test_split``, and have more
  explanation of why these metrics are useful. By `Kyle Kastner`_.
- The training algorithm for :class:`decomposition.NMF` is faster for
sparse matrices and has much lower memory complexity, meaning it will
scale up gracefully to large datasets. By `Lars Buitinck`_.
- Added ``svd_method`` option with default value "randomized" to
  :class:`decomposition.FactorAnalysis` to save memory and
  significantly speed up computation by `Denis Engemann`_ and
  `Alexandre Gramfort`_.
- Changed `cross_validation.StratifiedKFold` to try and
preserve as much of the original ordering of samples as possible so as
not to hide overfitting on datasets with a non-negligible level of
samples dependency.
By `Daniel Nouri`_ and `Olivier Grisel`_.
- Add multi-output support to :class:`gaussian_process.GaussianProcessRegressor`
by John Novak.
- Support for precomputed distance matrices in nearest neighbor estimators
by `Robert Layton`_ and `Joel Nothman`_.
- Norm computations optimized for NumPy 1.6 and later versions by
`Lars Buitinck`_. In particular, the k-means algorithm no longer
needs a temporary data structure the size of its input.
- :class:`dummy.DummyClassifier` can now be used to predict a constant
output value. By `Manoj Kumar`_.
- :class:`dummy.DummyRegressor` now has a ``strategy`` parameter which allows
  it to predict the mean, the median of the training set or a constant
output value. By :user:`Maheshakya Wijewardena <maheshakya>`.
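A minimal sketch of both baseline estimators (toy data chosen for illustration):

```python
from sklearn.dummy import DummyClassifier, DummyRegressor

X = [[0], [1], [2], [3]]

# Always predict the fixed class given by `constant`.
clf = DummyClassifier(strategy="constant", constant=1).fit(X, [0, 1, 1, 0])
print(clf.predict([[5]]))  # → [1]

# Always predict the median of the training targets.
reg = DummyRegressor(strategy="median").fit(X, [1.0, 2.0, 3.0, 10.0])
print(reg.predict([[5]]))  # → [2.5]
```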
- Multi-label classification output in multilabel indicator format
is now supported by :func:`metrics.roc_auc_score` and
:func:`metrics.average_precision_score` by `Arnaud Joly`_.
- Significant performance improvements (more than 100x speedup for
large problems) in :class:`isotonic.IsotonicRegression` by
`Andrew Tulloch`_.
- Speed and memory usage improvements to the SGD algorithm for linear
models: it now uses threads, not separate processes, when ``n_jobs>1``.
By `Lars Buitinck`_.
- Grid search and cross validation allow NaNs in the input arrays so that
preprocessors such as `preprocessing.Imputer` can be trained within the cross
validation loop, avoiding potentially skewed results.
- Ridge regression can now deal with sample weights in feature space
  (previously only in sample space). By :user:`Michael Eickenberg <eickenberg>`.
Both solutions are provided by the Cholesky solver.
- Several classification and regression metrics now support weighted
samples with the new ``sample_weight`` argument:
:func:`metrics.accuracy_score`,
:func:`metrics.zero_one_loss`,
:func:`metrics.precision_score`,
:func:`metrics.average_precision_score`,
:func:`metrics.f1_score`,
:func:`metrics.fbeta_score`,
:func:`metrics.recall_score`,
:func:`metrics.roc_auc_score`,
:func:`metrics.explained_variance_score`,
:func:`metrics.mean_squared_error`,
:func:`metrics.mean_absolute_error`,
:func:`metrics.r2_score`.
By `Noel Dawe`_.
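For illustration, the effect of the new argument on one of these metrics (toy labels and weights chosen here):

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

# Unweighted: 3 of 4 predictions are correct.
print(accuracy_score(y_true, y_pred))  # → 0.75

# Tripling the weight of the misclassified third sample lowers the
# weighted accuracy to (1 + 1 + 0 + 1) / 6 = 0.5.
print(accuracy_score(y_true, y_pred, sample_weight=[1, 1, 3, 1]))  # → 0.5
```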
- Speed up of the sample generator
:func:`datasets.make_multilabel_classification`. By `Joel Nothman`_.
Documentation improvements
...........................
- The Working With Text Data tutorial
  has now been worked into the main documentation's tutorial section.
  Includes exercises and skeletons for tutorial presentation.
  Original tutorial created by several authors including
  `Olivier Grisel`_, Lars Buitinck and many others.
  Tutorial integration into the scikit-learn documentation
  by `Jaques Grobler`_.
- Added :ref:`Computational Performance <computational_performance>`
documentation. Discussion and examples of prediction latency / throughput
and different factors that have influence over speed. Additional tips for
building faster models and choosing a relevant compromise between speed
and predictive power.
By :user:`Eustache Diemert <oddskool>`.
Bug fixes
.........
- Fixed bug in :class:`decomposition.MiniBatchDictionaryLearning`:
  ``partial_fit`` was not working properly.
- Fixed bug in `linear_model.stochastic_gradient`:
  ``l1_ratio`` was used as ``(1.0 - l1_ratio)``.
- Fixed bug in :class:`multiclass.OneVsOneClassifier` with string
  labels.
- Fixed a bug in :class:`LassoCV <linear_model.LassoCV>` and
:class:`ElasticNetCV <linear_model.ElasticNetCV>`: they would not
pre-compute the Gram matrix with ``precompute=True`` or
``precompute="auto"`` and ``n_samples > n_features``. By `Manoj Kumar`_.
- Fixed incorrect estimation of the degrees of freedom in
:func:`feature_selection.f_regression` when variates are not centered.
By :user:`Virgile Fritsch <VirgileFritsch>`.
- Fixed a race condition in parallel processing with
``pre_dispatch != "all"`` (for instance, in ``cross_val_score``).
By `Olivier Grisel`_.
- Raise error in :class:`cluster.FeatureAgglomeration` and
`cluster.WardAgglomeration` when no samples are given,
rather than returning meaningless clustering.
- Fixed bug in `gradient_boosting.GradientBoostingRegressor` with
``loss='huber'``: ``gamma`` might have not been initialized.
- Fixed feature importances as computed with a forest of randomized trees
when fit with ``sample_weight != None`` and/or with ``bootstrap=True``.
By `Gilles Louppe`_.
API changes summary
-------------------
- `sklearn.hmm` is deprecated. Its removal is planned
for the 0.17 release.
- Use of `covariance.EllipticEnvelop` has now been removed after
deprecation.
Please use :class:`covariance.EllipticEnvelope` instead.
- `cluster.Ward` is deprecated. Use
:class:`cluster.AgglomerativeClustering` instead.
- `cluster.WardClustering` is deprecated. Use
  :class:`cluster.AgglomerativeClustering` instead.
- `cross_validation.Bootstrap` is deprecated.
`cross_validation.KFold` or
`cross_validation.ShuffleSplit` are recommended instead.
- Direct support for the sequence of sequences (or list of lists) multilabel
format is deprecated. To convert to and from the supported binary
indicator matrix format, use
:class:`preprocessing.MultiLabelBinarizer`.
By `Joel Nothman`_.
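The conversion described above can be sketched as follows (toy labels chosen here): ``fit_transform`` turns a sequence of label sets into the binary indicator matrix, and ``inverse_transform`` round-trips it back.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Sequence-of-sequences multilabel data, including an unlabeled sample.
y = [["sports"], ["news", "sports"], []]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y)
print(mlb.classes_)  # → ['news' 'sports']
print(Y)
# [[0 1]
#  [1 1]
#  [0 0]]
print(mlb.inverse_transform(Y))  # back to tuples of labels
```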
- Add score method to :class:`decomposition.PCA` following the model of
probabilistic PCA and deprecate
`ProbabilisticPCA` model whose
score implementation is not correct. The computation now also exploits the
matrix inversion lemma for faster computation. By `Alexandre Gramfort`_.
- The score method of :class:`decomposition.FactorAnalysis`
  now returns the average log-likelihood of the samples. Use ``score_samples``
  to get the log-likelihood of each sample. By `Alexandre Gramfort`_.
- Generating boolean masks (the setting ``indices=False``)
from cross-validation generators is deprecated.
Support for masks will be removed in 0.17.
The generators have produced arrays of indices by default since 0.10.
By `Joel Nothman`_.
- 1-d arrays containing strings with ``dtype=object`` (as used in Pandas)
are now considered valid classification targets. This fixes a regression
from version 0.13 in some classifiers. By `Joel Nothman`_.
- Fix wrong ``explained_variance_ratio_`` attribute in
`RandomizedPCA`.
By `Alexandre Gramfort`_.
- Fit alphas for each ``l1_ratio`` instead of ``mean_l1_ratio`` in
:class:`linear_model.ElasticNetCV` and :class:`linear_model.LassoCV`.
This changes the shape of ``alphas_`` from ``(n_alphas,)`` to
  ``(n_l1_ratio, n_alphas)`` if the ``l1_ratio`` provided is a 1-D array-like
  object of length greater than one.
By `Manoj Kumar`_.
- Fix :class:`linear_model.ElasticNetCV` and :class:`linear_model.LassoCV`
when fitting intercept and input data is sparse. The automatic grid
of alphas was not computed correctly and the scaling with normalize
was wrong. By `Manoj Kumar`_.
- Fix wrong maximal number of features drawn (``max_features``) at each split
for decision trees, random forests and gradient tree boosting.
  Previously, the count for the number of drawn features started only after
  one non-constant feature was found in the split. This bug fix will affect
computational and generalization performance of those algorithms in the
presence of constant features. To get back previous generalization
performance, you should modify the value of ``max_features``.
By `Arnaud Joly`_.
- Fix wrong maximal number of features drawn (``max_features``) at each split
for :class:`ensemble.ExtraTreesClassifier` and
  :class:`ensemble.ExtraTreesRegressor`. Previously, only non-constant
  features in the split were counted as drawn. Now constant features are
  counted as drawn. Furthermore, at least one feature must be non-constant
in order to make a valid split. This bug fix will affect
computational and generalization performance of extra trees in the
presence of constant features. To get back previous generalization
performance, you should modify the value of ``max_features``.
By `Arnaud Joly`_.
- Fix :func:`utils.class_weight.compute_class_weight` when ``class_weight=="auto"``.
Previously it was broken for input of non-integer ``dtype`` and the
weighted array that was returned was wrong. By `Manoj Kumar`_.
- Fix `cross_validation.Bootstrap` to raise ``ValueError``
when ``n_train + n_test > n``. By :user:`Ronald Phlypo <rphlypo>`.
People
------
List of contributors for release 0.15 by number of commits.
* 312 Olivier Grisel
* 275 Lars Buitinck
* 221 Gael Varoquaux
* 148 Arnaud Joly
* 134 Johannes Schönberger
* 119 Gilles Louppe
* 113 Joel Nothman
* 111 Alexandre Gramfort
* 95 Jaques Grobler
* 89 Denis Engemann
* 83 Peter Prettenhofer
* 83 Alexander Fabisch
* 62 Mathieu Blondel
* 60 Eustache Diemert
* 60 Nelle Varoquaux
* 49 Michael Bommarito
* 45 Manoj-Kumar-S
* 28 Kyle Kastner
* 26 Andreas Mueller
* 22 Noel Dawe
* 21 Maheshakya Wijewardena
* 21 Brooke Osborn
* 21 Hamzeh Alsalhi
* 21 Jake VanderPlas
* 21 Philippe Gervais
* 19 Bala Subrahmanyam Varanasi
* 12 Ronald Phlypo
* 10 Mikhail Korobov
* 8 Thomas Unterthiner
* 8 Jeffrey Blackburne
* 8 eltermann
* 8 bwignall
* 7 Ankit Agrawal
* 7 CJ Carey
* 6 Daniel Nouri
* 6 Chen Liu
* 6 Michael Eickenberg
* 6 ugurthemaster
* 5 Aaron Schumacher
* 5 Baptiste Lagarde
* 5 Rajat Khanduja
* 5 Robert McGibbon
* 5 Sergio Pascual
* 4 Alexis Metaireau
* 4 Ignacio Rossi
* 4 Virgile Fritsch
* 4 Sebastian Säger
* 4 Ilambharathi Kanniah
* 4 sdenton4
* 4 Robert Layton
* 4 Alyssa
* 4 Amos Waterland
* 3 Andrew Tulloch
* 3 murad
* 3 Steven Maude
* 3 Karol Pysniak
* 3 Jacques Kvam
* 3 cgohlke
* 3 cjlin
* 3 Michael Becker
* 3 hamzeh
* 3 Eric Jacobsen
* 3 john collins
* 3 kaushik94
* 3 Erwin Marsi
* 2 csytracy
* 2 LK
* 2 Vlad Niculae
* 2 Laurent Direr
* 2 Erik Shilts
* 2 Raul Garreta
* 2 Yoshiki Vázquez Baeza
* 2 Yung Siang Liau
* 2 abhishek thakur
* 2 James Yu
* 2 Rohit Sivaprasad
* 2 Roland Szabo
* 2 amormachine
* 2 Alexis Mignon
* 2 Oscar Carlsson
* 2 Nantas Nardelli
* 2 jess010
* 2 kowalski87
* 2 Andrew Clegg
* 2 Federico Vaggi
* 2 Simon Frid
* 2 Félix-Antoine Fortin
* 1 Ralf Gommers
* 1 t-aft
* 1 Ronan Amicel
* 1 Rupesh Kumar Srivastava
* 1 Ryan Wang
* 1 Samuel Charron
* 1 Samuel St-Jean
* 1 Fabian Pedregosa
* 1 Skipper Seabold
* 1 Stefan Walk
* 1 Stefan van der Walt
* 1 Stephan Hoyer
* 1 Allen Riddell
* 1 Valentin Haenel
* 1 Vijay Ramesh
* 1 Will Myers
* 1 Yaroslav Halchenko
* 1 Yoni Ben-Meshulam
* 1 Yury V. Zaytsev
* 1 adrinjalali
* 1 ai8rahim
* 1 alemagnani
* 1 alex
* 1 benjamin wilson
* 1 chalmerlowe
* 1 dzikie drożdże
* 1 jamestwebber
* 1 matrixorz
* 1 popo
* 1 samuela
* 1 François Boulogne
* 1 Alexander Measure
* 1 Ethan White
* 1 Guilherme Trein
* 1 Hendrik Heuer
* 1 IvicaJovic
* 1 Jan Hendrik Metzen
* 1 Jean Michel Rouly
* 1 Eduardo Ariño de la Rubia
* 1 Jelle Zijlstra
* 1 Eddy L O Jansson
* 1 Denis
* 1 John
* 1 John Schmidt
* 1 Jorge Cañardo Alastuey
* 1 Joseph Perla
* 1 Joshua Vredevoogd
* 1 José Ricardo
* 1 Julien Miotte
* 1 Kemal Eren
* 1 Kenta Sato
* 1 David Cournapeau
* 1 Kyle Kelley
* 1 Daniele Medri
* 1 Laurent Luce
* 1 Laurent Pierron
* 1 Luis Pedro Coelho
* 1 DanielWeitzenfeld
* 1 Craig Thompson
* 1 Chyi-Kwei Yau
* 1 Matthew Brett
* 1 Matthias Feurer
* 1 Max Linke
* 1 Chris Filo Gorgolewski
* 1 Charles Earl
* 1 Michael Hanke
* 1 Michele Orrù
* 1 Bryan Lunt
* 1 Brian Kearns
* 1 Paul Butler
* 1 Paweł Mandera
* 1 Peter
* 1 Andrew Ash
* 1 Pietro Zambelli
* 1 staubda | scikit-learn | include contributors rst currentmodule sklearn Version 0 15 changes 0 15 2 Version 0 15 2 September 4 2014 Bug fixes Fixed handling of the p parameter of the Minkowski distance that was previously ignored in nearest neighbors models By user Nikolay Mayorov nmayorov Fixed duplicated alphas in class linear model LassoLars with early stopping on 32 bit Python By Olivier Grisel and Fabian Pedregosa Fixed the build under Windows when scikit learn is built with MSVC while NumPy is built with MinGW By Olivier Grisel and user Federico Vaggi FedericoV Fixed an array index overflow bug in the coordinate descent solver By Gael Varoquaux Better handling of numpy 1 9 deprecation warnings By Gael Varoquaux Removed unnecessary data copy in class cluster KMeans By Gael Varoquaux Explicitly close open files to avoid ResourceWarnings under Python 3 By Calvin Giles The transform of class discriminant analysis LinearDiscriminantAnalysis now projects the input on the most discriminant directions By Martin Billinger Fixed potential overflow in tree safe realloc by Lars Buitinck Performance optimization in class isotonic IsotonicRegression By Robert Bradshaw nose is non longer a runtime dependency to import sklearn only for running the tests By Joel Nothman Many documentation and website fixes by Joel Nothman Lars Buitinck user Matt Pico MattpSoftware and others changes 0 15 1 Version 0 15 1 August 1 2014 Bug fixes Made cross validation cross val score use cross validation KFold instead of cross validation StratifiedKFold on multi output classification problems By user Nikolay Mayorov nmayorov Support unseen labels class preprocessing LabelBinarizer to restore the default behavior of 0 14 1 for backward compatibility By user Hamzeh Alsalhi hamsal Fixed the class cluster KMeans stopping criterion that prevented early convergence detection By Edward Raff and Gael Varoquaux Fixed the behavior of class multiclass OneVsOneClassifier in case of ties at the per 
class vote level by computing the correct per class sum of prediction scores By Andreas M ller Made cross validation cross val score and grid search GridSearchCV accept Python lists as input data This is especially useful for cross validation and model selection of text processing pipelines By Andreas M ller Fixed data input checks of most estimators to accept input data that implements the NumPy array protocol This is the case for for pandas Series and pandas DataFrame in recent versions of pandas By Gael Varoquaux Fixed a regression for class linear model SGDClassifier with class weight auto on data with non contiguous labels By Olivier Grisel changes 0 15 Version 0 15 July 15 2014 Highlights Many speed and memory improvements all across the code Huge speed and memory improvements to random forests and extra trees that also benefit better from parallel computing Incremental fit to class BernoulliRBM neural network BernoulliRBM Added class cluster AgglomerativeClustering for hierarchical agglomerative clustering with average linkage complete linkage and ward strategies Added class linear model RANSACRegressor for robust regression models Added dimensionality reduction with class manifold TSNE which can be used to visualize high dimensional data Changelog New features Added class ensemble BaggingClassifier and class ensemble BaggingRegressor meta estimators for ensembling any kind of base estimator See the ref Bagging bagging section of the user guide for details and examples By Gilles Louppe New unsupervised feature selection algorithm class feature selection VarianceThreshold by Lars Buitinck Added class linear model RANSACRegressor meta estimator for the robust fitting of regression models By user Johannes Sch nberger ahojnnes Added class cluster AgglomerativeClustering for hierarchical agglomerative clustering with average linkage complete linkage and ward strategies by Nelle Varoquaux and Gael Varoquaux Shorthand constructors func pipeline make pipeline and 
func pipeline make union were added by Lars Buitinck Shuffle option for cross validation StratifiedKFold By user Jeffrey Blackburne jblackburne Incremental learning partial fit for Gaussian Naive Bayes by Imran Haque Added partial fit to class BernoulliRBM neural network BernoulliRBM By user Danny Sullivan dsullivan7 Added learning curve utility to chart performance with respect to training size See ref sphx glr auto examples model selection plot learning curve py By Alexander Fabisch Add positive option in class LassoCV linear model LassoCV and class ElasticNetCV linear model ElasticNetCV By Brian Wignall and Alexandre Gramfort Added class linear model MultiTaskElasticNetCV and class linear model MultiTaskLassoCV By Manoj Kumar Added class manifold TSNE By Alexander Fabisch Enhancements Add sparse input support to class ensemble AdaBoostClassifier and class ensemble AdaBoostRegressor meta estimators By user Hamzeh Alsalhi hamsal Memory improvements of decision trees by Arnaud Joly Decision trees can now be built in best first manner by using max leaf nodes as the stopping criteria Refactored the tree code to use either a stack or a priority queue for tree building By Peter Prettenhofer and Gilles Louppe Decision trees can now be fitted on fortran and c style arrays and non continuous arrays without the need to make a copy If the input array has a different dtype than np float32 a fortran style copy will be made since fortran style memory layout has speed advantages By Peter Prettenhofer and Gilles Louppe Speed improvement of regression trees by optimizing the the computation of the mean square error criterion This lead to speed improvement of the tree forest and gradient boosting tree modules By Arnaud Joly The img to graph and grid tograph functions in mod sklearn feature extraction image now return np ndarray instead of np matrix when return as np ndarray See the Notes section for more information on compatibility Changed the internal storage of decision trees 
to use a struct array This fixed some small bugs while improving code and providing a small speed gain By Joel Nothman Reduce memory usage and overhead when fitting and predicting with forests of randomized trees in parallel with n jobs 1 by leveraging new threading backend of joblib 0 8 and releasing the GIL in the tree fitting Cython code By Olivier Grisel and Gilles Louppe Speed improvement of the sklearn ensemble gradient boosting module By Gilles Louppe and Peter Prettenhofer Various enhancements to the sklearn ensemble gradient boosting module a warm start argument to fit additional trees a max leaf nodes argument to fit GBM style trees a monitor fit argument to inspect the estimator during training and refactoring of the verbose code By Peter Prettenhofer Faster sklearn ensemble ExtraTrees by caching feature values By Arnaud Joly Faster depth based tree building algorithm such as decision tree random forest extra trees or gradient tree boosting with depth based growing strategy by avoiding trying to split on found constant features in the sample subset By Arnaud Joly Add min weight fraction leaf pre pruning parameter to tree based methods the minimum weighted fraction of the input samples required to be at a leaf node By Noel Dawe Added func metrics pairwise distances argmin min by Philippe Gervais Added predict method to class cluster AffinityPropagation and class cluster MeanShift by Mathieu Blondel Vector and matrix multiplications have been optimised throughout the library by Denis Engemann and Alexandre Gramfort In particular they should take less memory with older NumPy versions prior to 1 7 2 Precision recall and ROC examples now use train test split and have more explanation of why these metrics are useful By Kyle Kastner The training algorithm for class decomposition NMF is faster for sparse matrices and has much lower memory complexity meaning it will scale up gracefully to large datasets By Lars Buitinck Added svd method option with default value 
to randomized to class decomposition FactorAnalysis to save memory and significantly speedup computation by Denis Engemann and Alexandre Gramfort Changed cross validation StratifiedKFold to try and preserve as much of the original ordering of samples as possible so as not to hide overfitting on datasets with a non negligible level of samples dependency By Daniel Nouri and Olivier Grisel Add multi output support to class gaussian process GaussianProcessRegressor by John Novak Support for precomputed distance matrices in nearest neighbor estimators by Robert Layton and Joel Nothman Norm computations optimized for NumPy 1 6 and later versions by Lars Buitinck In particular the k means algorithm no longer needs a temporary data structure the size of its input class dummy DummyClassifier can now be used to predict a constant output value By Manoj Kumar class dummy DummyRegressor has now a strategy parameter which allows to predict the mean the median of the training set or a constant output value By user Maheshakya Wijewardena maheshakya Multi label classification output in multilabel indicator format is now supported by func metrics roc auc score and func metrics average precision score by Arnaud Joly Significant performance improvements more than 100x speedup for large problems in class isotonic IsotonicRegression by Andrew Tulloch Speed and memory usage improvements to the SGD algorithm for linear models it now uses threads not separate processes when n jobs 1 By Lars Buitinck Grid search and cross validation allow NaNs in the input arrays so that preprocessors such as preprocessing Imputer can be trained within the cross validation loop avoiding potentially skewed results Ridge regression can now deal with sample weights in feature space only sample space until then By user Michael Eickenberg eickenberg Both solutions are provided by the Cholesky solver Several classification and regression metrics now support weighted samples with the new sample weight argument 
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.14
============
.. _changes_0_14:
Version 0.14
============
**August 7, 2013**
Changelog
---------
- Missing values with sparse and dense matrices can be imputed with the
transformer `preprocessing.Imputer` by `Nicolas Trésegnie`_.
- The core implementation of decision trees has been rewritten from
scratch, allowing for faster tree induction and lower memory
consumption in all tree-based estimators. By `Gilles Louppe`_.
- Added :class:`ensemble.AdaBoostClassifier` and
:class:`ensemble.AdaBoostRegressor`, by `Noel Dawe`_ and
`Gilles Louppe`_. See the :ref:`AdaBoost <adaboost>` section of the user
guide for details and examples.
- Added `grid_search.RandomizedSearchCV` and
`grid_search.ParameterSampler` for randomized hyperparameter
optimization. By `Andreas Müller`_.
- Added :ref:`biclustering <biclustering>` algorithms
(`sklearn.cluster.bicluster.SpectralCoclustering` and
`sklearn.cluster.bicluster.SpectralBiclustering`), data
generation methods (:func:`sklearn.datasets.make_biclusters` and
:func:`sklearn.datasets.make_checkerboard`), and scoring metrics
(:func:`sklearn.metrics.consensus_score`). By `Kemal Eren`_.
- Added :ref:`Restricted Boltzmann Machines<rbm>`
(:class:`neural_network.BernoulliRBM`). By `Yann Dauphin`_.
- Python 3 support by :user:`Justin Vincent <justinvf>`, `Lars Buitinck`_,
:user:`Subhodeep Moitra <smoitra87>` and `Olivier Grisel`_. All tests now pass under
Python 3.3.
- Ability to pass one penalty (alpha value) per target in
:class:`linear_model.Ridge`, by @eickenberg and `Mathieu Blondel`_.
- Fixed `sklearn.linear_model.stochastic_gradient.py` L2 regularization
issue (minor practical significance).
By :user:`Norbert Crombach <norbert>` and `Mathieu Blondel`_ .
- Added an interactive version of `Andreas Müller`_'s
`Machine Learning Cheat Sheet (for scikit-learn)
<https://peekaboo-vision.blogspot.de/2013/01/machine-learning-cheat-sheet-for-scikit.html>`_
to the documentation. See :ref:`Choosing the right estimator <ml_map>`.
By `Jaques Grobler`_.
- `grid_search.GridSearchCV` and
`cross_validation.cross_val_score` now support the use of advanced
scoring function such as area under the ROC curve and f-beta scores.
See :ref:`scoring_parameter` for details. By `Andreas Müller`_
and `Lars Buitinck`_.
Passing a function from :mod:`sklearn.metrics` as ``score_func`` is
deprecated.
- Multi-label classification output is now supported by
:func:`metrics.accuracy_score`, :func:`metrics.zero_one_loss`,
:func:`metrics.f1_score`, :func:`metrics.fbeta_score`,
:func:`metrics.classification_report`,
:func:`metrics.precision_score` and :func:`metrics.recall_score`
by `Arnaud Joly`_.
- Two new metrics :func:`metrics.hamming_loss` and
`metrics.jaccard_similarity_score`
are added with multi-label support by `Arnaud Joly`_.
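For multi-label targets, the Hamming loss is the fraction of individual label
assignments that are wrong. A minimal sketch for binary indicator rows
(illustrative only; scikit-learn's implementation also handles other input
formats):

```python
def hamming_loss(Y_true, Y_pred):
    """Fraction of wrong label entries over all samples and labels.

    Y_true and Y_pred are lists of equal-length binary indicator rows.
    """
    errors = sum(t != p
                 for row_t, row_p in zip(Y_true, Y_pred)
                 for t, p in zip(row_t, row_p))
    n_entries = len(Y_true) * len(Y_true[0])
    return errors / n_entries
```

A prediction that flips one of four label entries therefore scores 0.25.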
- Speed and memory usage improvements in
:class:`feature_extraction.text.CountVectorizer` and
:class:`feature_extraction.text.TfidfVectorizer`,
by Jochen Wersdörfer and Roman Sinayev.
- The ``min_df`` parameter in
:class:`feature_extraction.text.CountVectorizer` and
:class:`feature_extraction.text.TfidfVectorizer`, which used to be 2,
has been reset to 1 to avoid unpleasant surprises (empty vocabularies)
for novice users who try it out on tiny document collections.
A value of at least 2 is still recommended for practical use.
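The effect of ``min_df`` can be seen in a plain-Python sketch of
document-frequency pruning (a hypothetical helper, not the vectorizer's
actual code):

```python
def prune_vocabulary(docs, min_df=1):
    """Keep only terms that appear in at least min_df documents."""
    df = {}
    for doc in docs:
        # count each term once per document (document frequency, not term count)
        for term in set(doc.split()):
            df[term] = df.get(term, 0) + 1
    return {term for term, count in df.items() if count >= min_df}
```

With the old default of 2, a tiny corpus in which every term occurs in a
single document yields an empty vocabulary, which is the surprise the new
default avoids.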
- :class:`svm.LinearSVC`, :class:`linear_model.SGDClassifier` and
:class:`linear_model.SGDRegressor` now have a ``sparsify`` method that
converts their ``coef_`` into a sparse matrix, meaning stored models
trained using these estimators can be made much more compact.
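The space saving comes from storing only the non-zero coefficients; the idea
can be sketched without scikit-learn's actual sparse-matrix machinery (an
illustrative index-to-value mapping, not the CSR format the library uses):

```python
def sparsify(coef):
    """Represent a mostly-zero coefficient vector as an {index: value} dict."""
    return {i: v for i, v in enumerate(coef) if v != 0.0}

def densify(sparse_coef, n_features):
    """Recover the dense coefficient vector from the sparse mapping."""
    return [sparse_coef.get(i, 0.0) for i in range(n_features)]
```

For L1-regularized models, where most coefficients are exactly zero, the
sparse form can be far smaller than the dense one.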
- :class:`linear_model.SGDClassifier` now produces multiclass probability
estimates when trained under log loss or modified Huber loss.
- Hyperlinks to documentation in example code on the website by
:user:`Martin Luessi <mluessi>`.
- Fixed bug in :class:`preprocessing.MinMaxScaler` causing incorrect scaling
of the features for non-default ``feature_range`` settings. By `Andreas
Müller`_.
- ``max_features`` in :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor` and all derived ensemble estimators
now supports percentage values. By `Gilles Louppe`_.
- Performance improvements in :class:`isotonic.IsotonicRegression` by
`Nelle Varoquaux`_.
- :func:`metrics.accuracy_score` has an option ``normalize`` to return
  the fraction or the number of correctly classified samples,
  by `Arnaud Joly`_.
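Conceptually, ``normalize`` just switches between a fraction and a raw count
(an illustrative sketch, not the library code):

```python
def accuracy_score(y_true, y_pred, normalize=True):
    """Fraction (or, with normalize=False, count) of exact matches."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true) if normalize else correct
```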
- Added :func:`metrics.log_loss` that computes log loss, aka cross-entropy
loss. By Jochen Wersdörfer and `Lars Buitinck`_.
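For intuition, the binary case of log loss can be sketched in plain Python
(scikit-learn's :func:`metrics.log_loss` additionally handles multiclass
input and label inference):

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Average cross-entropy of predicted probabilities for 0/1 labels."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(y_true)
```

Confident correct predictions drive the loss toward zero, while confident
wrong ones are penalized heavily.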
- A bug that caused :class:`ensemble.AdaBoostClassifier` to output
  incorrect probabilities has been fixed.
- Feature selectors now share a mixin providing consistent ``transform``,
``inverse_transform`` and ``get_support`` methods. By `Joel Nothman`_.
- A fitted `grid_search.GridSearchCV` or
`grid_search.RandomizedSearchCV` can now generally be pickled.
By `Joel Nothman`_.
- Refactored and vectorized implementation of :func:`metrics.roc_curve`
and :func:`metrics.precision_recall_curve`. By `Joel Nothman`_.
- The new estimator :class:`sklearn.decomposition.TruncatedSVD`
performs dimensionality reduction using SVD on sparse matrices,
and can be used for latent semantic analysis (LSA).
By `Lars Buitinck`_.
- Added self-contained example of out-of-core learning on text data
:ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`.
By :user:`Eustache Diemert <oddskool>`.
- The default number of components for
`sklearn.decomposition.RandomizedPCA` is now correctly documented
to be ``n_features``. This was the default behavior, so programs using it
will continue to work as they did.
- :class:`sklearn.cluster.KMeans` now fits several orders of magnitude
faster on sparse data (the speedup depends on the sparsity). By
`Lars Buitinck`_.
- Reduce memory footprint of FastICA by `Denis Engemann`_ and
`Alexandre Gramfort`_.
- Verbose output in `sklearn.ensemble.gradient_boosting` now uses
a column format and prints progress in decreasing frequency.
It also shows the remaining time. By `Peter Prettenhofer`_.
- `sklearn.ensemble.gradient_boosting` provides out-of-bag improvement
`oob_improvement_`
rather than the OOB score for model selection. An example that shows
how to use OOB estimates to select the number of trees was added.
By `Peter Prettenhofer`_.
- Most metrics now support string labels for multiclass classification
by `Arnaud Joly`_ and `Lars Buitinck`_.
- New OrthogonalMatchingPursuitCV class by `Alexandre Gramfort`_
and `Vlad Niculae`_.
- Fixed a bug in `sklearn.covariance.GraphLassoCV`: the
'alphas' parameter now works as expected when given a list of
values. By Philippe Gervais.
- Fixed an important bug in `sklearn.covariance.GraphLassoCV`
  that prevented all folds provided by a CV object from being used (only
  the first 3 were used). When providing a CV object, execution
  time may thus increase significantly compared to the previous
  version (the results are now correct). By Philippe Gervais.
- `cross_validation.cross_val_score` and the `grid_search`
  module are now tested with multi-output data by `Arnaud Joly`_.
- :func:`datasets.make_multilabel_classification` can now return
the output in label indicator multilabel format by `Arnaud Joly`_.
- K-nearest neighbors, :class:`neighbors.KNeighborsRegressor`,
  and radius neighbors, :class:`neighbors.RadiusNeighborsRegressor` and
  :class:`neighbors.RadiusNeighborsClassifier`, support multioutput data
  by `Arnaud Joly`_.
- Random state in LibSVM-based estimators (:class:`svm.SVC`, :class:`svm.NuSVC`,
:class:`svm.OneClassSVM`, :class:`svm.SVR`, :class:`svm.NuSVR`) can now be
controlled. This is useful to ensure consistency in the probability
estimates for the classifiers trained with ``probability=True``. By
`Vlad Niculae`_.
- Out-of-core learning support for discrete naive Bayes classifiers
:class:`sklearn.naive_bayes.MultinomialNB` and
:class:`sklearn.naive_bayes.BernoulliNB` by adding the ``partial_fit``
method by `Olivier Grisel`_.
- New website design and navigation by `Gilles Louppe`_, `Nelle Varoquaux`_,
Vincent Michel and `Andreas Müller`_.
- Improved documentation on :ref:`multi-class, multi-label and multi-output
classification <multiclass>` by `Yannick Schwartz`_ and `Arnaud Joly`_.
- Better input and error handling in the :mod:`sklearn.metrics` module by
`Arnaud Joly`_ and `Joel Nothman`_.
- Speed optimization of the `hmm` module by :user:`Mikhail Korobov <kmike>`
- Significant speed improvements for :class:`sklearn.cluster.DBSCAN`
by `cleverless <https://github.com/cleverless>`_
API changes summary
-------------------
- The `auc_score` was renamed :func:`metrics.roc_auc_score`.
- Testing scikit-learn with ``sklearn.test()`` is deprecated. Use
``nosetests sklearn`` from the command line.
- Feature importances in :class:`tree.DecisionTreeClassifier`,
:class:`tree.DecisionTreeRegressor` and all derived ensemble estimators
are now computed on the fly when accessing the ``feature_importances_``
attribute. Setting ``compute_importances=True`` is no longer required.
By `Gilles Louppe`_.
- :func:`linear_model.lasso_path` and
  :func:`linear_model.enet_path` can return their results in the same
  format as that of :func:`linear_model.lars_path`. This is done by
  setting the ``return_models`` parameter to ``False``. By
  `Jaques Grobler`_ and `Alexandre Gramfort`_
- `grid_search.IterGrid` was renamed to `grid_search.ParameterGrid`.
- Fixed bug in `KFold` causing imperfect class balance in some
cases. By `Alexandre Gramfort`_ and Tadej Janež.
- :class:`sklearn.neighbors.BallTree` has been refactored, and a
:class:`sklearn.neighbors.KDTree` has been
added which shares the same interface. The Ball Tree now works with
a wide variety of distance metrics. Both classes have many new
methods, including single-tree and dual-tree queries, breadth-first
and depth-first searching, and more advanced queries such as
kernel density estimation and 2-point correlation functions.
By `Jake Vanderplas`_
- Support for scipy.spatial.cKDTree within neighbors queries has been
removed, and the functionality replaced with the new
:class:`sklearn.neighbors.KDTree` class.
- :class:`sklearn.neighbors.KernelDensity` has been added, which performs
efficient kernel density estimation with a variety of kernels.
- :class:`sklearn.decomposition.KernelPCA` now always returns output with
``n_components`` components, unless the new parameter ``remove_zero_eig``
is set to ``True``. This new behavior is consistent with the way
kernel PCA was always documented; previously, the removal of components
with zero eigenvalues was tacitly performed on all data.
- ``gcv_mode="auto"`` no longer tries to perform SVD on a densified
sparse matrix in :class:`sklearn.linear_model.RidgeCV`.
- Sparse matrix support in `sklearn.decomposition.RandomizedPCA`
is now deprecated in favor of the new ``TruncatedSVD``.
- `cross_validation.KFold` and
  `cross_validation.StratifiedKFold` now enforce ``n_folds >= 2``;
  otherwise a ``ValueError`` is raised. By `Olivier Grisel`_.
- :func:`datasets.load_files`'s ``charset`` and ``charset_errors``
parameters were renamed ``encoding`` and ``decode_errors``.
- Attribute ``oob_score_`` in :class:`sklearn.ensemble.GradientBoostingRegressor`
and :class:`sklearn.ensemble.GradientBoostingClassifier`
is deprecated and has been replaced by ``oob_improvement_`` .
- Attributes in OrthogonalMatchingPursuit have been deprecated
  (``copy_X``, ``Gram``, ...) and ``precompute_gram`` renamed ``precompute``
  for consistency. See #2224.
- :class:`sklearn.preprocessing.StandardScaler` now converts integer input
to float, and raises a warning. Previously it rounded for dense integer
input.
- :class:`sklearn.multiclass.OneVsRestClassifier` now has a
``decision_function`` method. This will return the distance of each
sample from the decision boundary for each class, as long as the
underlying estimators implement the ``decision_function`` method.
By `Kyle Kastner`_.
- Better input validation, warning on unexpected shapes for y.
People
------
List of contributors for release 0.14 by number of commits.
* 277 Gilles Louppe
* 245 Lars Buitinck
* 187 Andreas Mueller
* 124 Arnaud Joly
* 112 Jaques Grobler
* 109 Gael Varoquaux
* 107 Olivier Grisel
* 102 Noel Dawe
* 99 Kemal Eren
* 79 Joel Nothman
* 75 Jake VanderPlas
* 73 Nelle Varoquaux
* 71 Vlad Niculae
* 65 Peter Prettenhofer
* 64 Alexandre Gramfort
* 54 Mathieu Blondel
* 38 Nicolas Trésegnie
* 35 eustache
* 27 Denis Engemann
* 25 Yann N. Dauphin
* 19 Justin Vincent
* 17 Robert Layton
* 15 Doug Coleman
* 14 Michael Eickenberg
* 13 Robert Marchman
* 11 Fabian Pedregosa
* 11 Philippe Gervais
* 10 Jim Holmström
* 10 Tadej Janež
* 10 syhw
* 9 Mikhail Korobov
* 9 Steven De Gryze
* 8 sergeyf
* 7 Ben Root
* 7 Hrishikesh Huilgolkar
* 6 Kyle Kastner
* 6 Martin Luessi
* 6 Rob Speer
* 5 Federico Vaggi
* 5 Raul Garreta
* 5 Rob Zinkov
* 4 Ken Geis
* 3 A. Flaxman
* 3 Denton Cockburn
* 3 Dougal Sutherland
* 3 Ian Ozsvald
* 3 Johannes Schönberger
* 3 Robert McGibbon
* 3 Roman Sinayev
* 3 Szabo Roland
* 2 Diego Molla
* 2 Imran Haque
* 2 Jochen Wersdörfer
* 2 Sergey Karayev
* 2 Yannick Schwartz
* 2 jamestwebber
* 1 Abhijeet Kolhe
* 1 Alexander Fabisch
* 1 Bastiaan van den Berg
* 1 Benjamin Peterson
* 1 Daniel Velkov
* 1 Fazlul Shahriar
* 1 Felix Brockherde
* 1 Félix-Antoine Fortin
* 1 Harikrishnan S
* 1 Jack Hale
* 1 JakeMick
* 1 James McDermott
* 1 John Benediktsson
* 1 John Zwinck
* 1 Joshua Vredevoogd
* 1 Justin Pati
* 1 Kevin Hughes
* 1 Kyle Kelley
* 1 Matthias Ekman
* 1 Miroslav Shubernetskiy
* 1 Naoki Orii
* 1 Norbert Crombach
* 1 Rafael Cunha de Almeida
* 1 Rolando Espinoza La fuente
* 1 Seamus Abshere
* 1 Sergey Feldman
* 1 Sergio Medina
* 1 Stefano Lattarini
* 1 Steve Koch
* 1 Sturla Molden
* 1 Thomas Jarosch
* 1 Yaroslav Halchenko
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.16
============
.. _changes_0_16_1:
Version 0.16.1
==============
**April 14, 2015**
Changelog
---------
Bug fixes
.........
- Allow input data larger than ``block_size`` in
:class:`covariance.LedoitWolf` by `Andreas Müller`_.
- Fix a bug in :class:`isotonic.IsotonicRegression` deduplication that
caused unstable result in :class:`calibration.CalibratedClassifierCV` by
`Jan Hendrik Metzen`_.
- Fix sorting of labels in :func:`preprocessing.label_binarize` by Michael Heilman.
- Fix several stability and convergence issues in
  :class:`cross_decomposition.CCA` and
  :class:`cross_decomposition.PLSCanonical` by `Andreas Müller`_.
- Fix a bug in :class:`cluster.KMeans` when ``precompute_distances=False``
  on Fortran-ordered data.
- Fix a speed regression in :class:`ensemble.RandomForestClassifier`'s ``predict``
and ``predict_proba`` by `Andreas Müller`_.
- Fix a regression where ``utils.shuffle`` converted lists and dataframes to arrays, by `Olivier Grisel`_
.. _changes_0_16:
Version 0.16
============
**March 26, 2015**
Highlights
-----------
- Speed improvements (notably in :class:`cluster.DBSCAN`), reduced memory
requirements, bug-fixes and better default settings.
- Multinomial Logistic regression and a path algorithm in
:class:`linear_model.LogisticRegressionCV`.
- Out-of core learning of PCA via :class:`decomposition.IncrementalPCA`.
- Probability calibration of classifiers using
:class:`calibration.CalibratedClassifierCV`.
- :class:`cluster.Birch` clustering method for large-scale datasets.
- Scalable approximate nearest neighbors search with Locality-sensitive
hashing forests in `neighbors.LSHForest`.
- Improved error messages and better validation when using malformed input data.
- More robust integration with pandas dataframes.
Changelog
---------
New features
............
- The new `neighbors.LSHForest` implements locality-sensitive hashing
for approximate nearest neighbors search. By :user:`Maheshakya Wijewardena<maheshakya>`.
- Added :class:`svm.LinearSVR`. This class uses the liblinear implementation
of Support Vector Regression which is much faster for large
sample sizes than :class:`svm.SVR` with linear kernel. By
`Fabian Pedregosa`_ and Qiang Luo.
- Incremental fit for :class:`GaussianNB <naive_bayes.GaussianNB>`.
- Added ``sample_weight`` support to :class:`dummy.DummyClassifier` and
:class:`dummy.DummyRegressor`. By `Arnaud Joly`_.
- Added the :func:`metrics.label_ranking_average_precision_score` metric.
  By `Arnaud Joly`_.
- Add the :func:`metrics.coverage_error` metric. By `Arnaud Joly`_.
- Added :class:`linear_model.LogisticRegressionCV`. By
`Manoj Kumar`_, `Fabian Pedregosa`_, `Gael Varoquaux`_
and `Alexandre Gramfort`_.
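For illustration (not part of the original changelog), a minimal sketch of the new cross-validated estimator; the iris data and parameter values are only an example:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
# Cs=10 generates 10 candidate regularization strengths on a log scale;
# the best C is selected by 3-fold cross-validation along the path.
clf = LogisticRegressionCV(Cs=10, cv=3, max_iter=1000).fit(X, y)
print(clf.C_)          # selected regularization strength(s)
print(clf.score(X, y))
```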
- Added ``warm_start`` constructor parameter to make it possible for any
trained forest model to grow additional trees incrementally. By
:user:`Laurent Direr<ldirer>`.
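A sketch of the incremental-growth pattern this enables (the dataset is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)

forest = RandomForestClassifier(n_estimators=10, warm_start=True,
                                random_state=0)
forest.fit(X, y)                    # fits 10 trees

forest.set_params(n_estimators=20)  # request 10 more trees
forest.fit(X, y)                    # reuses the first 10, adds 10 new ones
print(len(forest.estimators_))      # 20
```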
- Added ``sample_weight`` support to :class:`ensemble.GradientBoostingClassifier` and
:class:`ensemble.GradientBoostingRegressor`. By `Peter Prettenhofer`_.
- Added :class:`decomposition.IncrementalPCA`, an implementation of the PCA
algorithm that supports out-of-core learning with a ``partial_fit``
method. By `Kyle Kastner`_.
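A minimal sketch of out-of-core fitting with ``partial_fit`` (here the batches come from an in-memory array, but each chunk could equally be read from disk):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.randn(1000, 20)

ipca = IncrementalPCA(n_components=5)
# Only one batch needs to be in memory at a time.
for batch in np.array_split(X, 10):
    ipca.partial_fit(batch)

X_reduced = ipca.transform(X)
print(X_reduced.shape)  # (1000, 5)
```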
- Averaged SGD for :class:`SGDClassifier <linear_model.SGDClassifier>`
and :class:`SGDRegressor <linear_model.SGDRegressor>` By
:user:`Danny Sullivan <dsullivan7>`.
- Added `cross_val_predict`
function which computes cross-validated estimates. By `Luis Pedro Coelho`_
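A sketch of its use; note that in 0.16 this function lived in ``sklearn.cross_validation``, while the import below uses the modern ``sklearn.model_selection`` path:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
# Each sample's prediction comes from the fold in which it was held out,
# so y_pred can be compared against y without optimistic bias.
y_pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
print(y_pred.shape)  # (150,)
```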
- Added :class:`linear_model.TheilSenRegressor`, a robust
generalized-median-based estimator. By :user:`Florian Wilhelm <FlorianWilhelm>`.
- Added :func:`metrics.median_absolute_error`, a robust metric.
By `Gael Varoquaux`_ and :user:`Florian Wilhelm <FlorianWilhelm>`.
- Add :class:`cluster.Birch`, an online clustering algorithm. By
`Manoj Kumar`_, `Alexandre Gramfort`_ and `Joel Nothman`_.
- Added shrinkage support to :class:`discriminant_analysis.LinearDiscriminantAnalysis`
using two new solvers. By :user:`Clemens Brunner <cle1109>` and `Martin Billinger`_.
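For illustration, a small sketch of shrinkage with the ``lsqr`` solver (synthetic data; shrinkage is only available with the ``lsqr`` and ``eigen`` solvers):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Shrinkage regularizes the covariance estimate, which helps when the
# number of features is large relative to the number of samples.
X, y = make_classification(n_samples=40, n_features=20, random_state=0)
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
print(lda.score(X, y))
```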
- Added :class:`kernel_ridge.KernelRidge`, an implementation of
kernelized ridge regression.
By `Mathieu Blondel`_ and `Jan Hendrik Metzen`_.
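A minimal usage sketch (toy data and hyperparameters chosen only for illustration):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(100, 1))
y = np.sin(X).ravel()

# Ridge regression performed in RBF kernel space, solved in closed form.
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(X, y)
print(model.score(X, y))
```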
- All solvers in :class:`linear_model.Ridge` now support `sample_weight`.
By `Mathieu Blondel`_.
- Added `cross_validation.PredefinedSplit` cross-validation
for fixed user-provided cross-validation folds.
By :user:`Thomas Unterthiner <untom>`.
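A sketch of how fixed folds are declared (modern import path shown; the class was introduced under ``sklearn.cross_validation``):

```python
import numpy as np
from sklearn.model_selection import PredefinedSplit

# test_fold[i] is the fold in which sample i is used as test data;
# -1 keeps a sample in the training set of every split.
test_fold = np.array([0, 0, 1, 1, -1, -1])
ps = PredefinedSplit(test_fold)
for train_idx, test_idx in ps.split():
    print("train:", train_idx, "test:", test_idx)
```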
- Added :class:`calibration.CalibratedClassifierCV`, an approach for
calibrating the predicted probabilities of a classifier.
By `Alexandre Gramfort`_, `Jan Hendrik Metzen`_, `Mathieu Blondel`_
and :user:`Balazs Kegl <kegl>`.
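For illustration, a sketch of calibrating a classifier that has no ``predict_proba`` of its own (the estimator is passed positionally because its keyword name has changed across scikit-learn versions):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, random_state=0)
# Platt scaling ('sigmoid') maps the SVM's decision values to
# calibrated probabilities via internal cross-validation.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
calibrated.fit(X, y)
proba = calibrated.predict_proba(X)
print(proba.shape)  # (300, 2)
```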
Enhancements
............
- Add option ``return_distance`` in `hierarchical.ward_tree`
to return distances between nodes for both structured and unstructured
versions of the algorithm. By `Matteo Visconti di Oleggio Castello`_.
The same option was added in `hierarchical.linkage_tree`.
By `Manoj Kumar`_
- Add support for sample weights in scorer objects. Metrics with sample
weight support will automatically benefit from it. By `Noel Dawe`_ and
`Vlad Niculae`_.
- Added ``newton-cg`` and `lbfgs` solver support in
:class:`linear_model.LogisticRegression`. By `Manoj Kumar`_.
- Add ``selection="random"`` parameter to implement stochastic coordinate
descent for :class:`linear_model.Lasso`, :class:`linear_model.ElasticNet`
and related. By `Manoj Kumar`_.
- Add ``sample_weight`` parameter to
`metrics.jaccard_similarity_score` and :func:`metrics.log_loss`.
By :user:`Jatin Shah <jatinshah>`.
- Support sparse multilabel indicator representation in
:class:`preprocessing.LabelBinarizer` and
:class:`multiclass.OneVsRestClassifier` (by :user:`Hamzeh Alsalhi <hamsal>` with thanks
to Rohit Sivaprasad), as well as evaluation metrics (by
`Joel Nothman`_).
- Add ``sample_weight`` parameter to `metrics.jaccard_similarity_score`.
By `Jatin Shah`.
- Add support for multiclass in `metrics.hinge_loss`. Added ``labels=None``
as optional parameter. By `Saurabh Jha`.
- Add ``sample_weight`` parameter to `metrics.hinge_loss`.
By `Saurabh Jha`.
- Add ``multi_class="multinomial"`` option in
:class:`linear_model.LogisticRegression` to implement a Logistic
Regression solver that minimizes the cross-entropy or multinomial loss
instead of the default One-vs-Rest setting. Supports `lbfgs` and
`newton-cg` solvers. By `Lars Buitinck`_ and `Manoj Kumar`_. Solver option
`newton-cg` by Simon Wu.
- ``DictVectorizer`` can now perform ``fit_transform`` on an iterable in a
single pass, when giving the option ``sort=False``. By :user:`Dan
Blanchard <dan-blanchard>`.
- :class:`model_selection.GridSearchCV` and
:class:`model_selection.RandomizedSearchCV` can now be configured to work
with estimators that may fail and raise errors on individual folds. This
option is controlled by the `error_score` parameter. This does not affect
errors raised on re-fit. By :user:`Michal Romaniuk <romaniukm>`.
- Add ``digits`` parameter to `metrics.classification_report` to allow
report to show different precision of floating point numbers. By
:user:`Ian Gilmore <agileminor>`.
- Add a quantile prediction strategy to the :class:`dummy.DummyRegressor`.
By :user:`Aaron Staple <staple>`.
- Add ``handle_unknown`` option to :class:`preprocessing.OneHotEncoder` to
handle unknown categorical features more gracefully during transform.
By `Manoj Kumar`_.
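A sketch of the option's effect; the modern encoder API shown here also accepts string categories (in 0.16 the encoder operated on integer features):

```python
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown="ignore")
enc.fit([["cat"], ["dog"]])

# 'bird' was never seen during fit: with handle_unknown='ignore' it is
# encoded as an all-zero row instead of raising an error at transform time.
out = enc.transform([["cat"], ["bird"]]).toarray()
print(out)
```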
- Added support for sparse input data to decision trees and their ensembles.
By `Fares Hedyati`_ and `Arnaud Joly`_.
- Optimized :class:`cluster.AffinityPropagation` by reducing the number of
memory allocations of large temporary data-structures. By `Antony Lee`_.
- Parallelized the computation of feature importances in random forests.
  By `Olivier Grisel`_ and `Arnaud Joly`_.
- Add ``n_iter_`` attribute to estimators that accept a ``max_iter`` attribute
in their constructor. By `Manoj Kumar`_.
- Added decision function for :class:`multiclass.OneVsOneClassifier`
By `Raghav RV`_ and :user:`Kyle Beauchamp <kyleabeauchamp>`.
- `neighbors.kneighbors_graph` and `radius_neighbors_graph`
support non-Euclidean metrics. By `Manoj Kumar`_
- The ``connectivity`` parameter in :class:`cluster.AgglomerativeClustering`
  and family now accepts callables that return a connectivity matrix.
  By `Manoj Kumar`_.
- Sparse support for :func:`metrics.pairwise.paired_distances`. By `Joel Nothman`_.
- :class:`cluster.DBSCAN` now supports sparse input and sample weights and
has been optimized: the inner loop has been rewritten in Cython and
radius neighbors queries are now computed in batch. By `Joel Nothman`_
and `Lars Buitinck`_.
- Add ``class_weight`` parameter to automatically weight samples by class
frequency for :class:`ensemble.RandomForestClassifier`,
:class:`tree.DecisionTreeClassifier`, :class:`ensemble.ExtraTreesClassifier`
and :class:`tree.ExtraTreeClassifier`. By `Trevor Stephens`_.
- `grid_search.RandomizedSearchCV` now does sampling without
replacement if all parameters are given as lists. By `Andreas Müller`_.
- Parallelized calculation of :func:`metrics.pairwise_distances` is now supported
for scipy metrics and custom callables. By `Joel Nothman`_.
- Allow the fitting and scoring of all clustering algorithms in
:class:`pipeline.Pipeline`. By `Andreas Müller`_.
- More robust seeding and improved error messages in :class:`cluster.MeanShift`
by `Andreas Müller`_.
- Make the stopping criterion for `mixture.GMM`,
`mixture.DPGMM` and `mixture.VBGMM` less dependent on the
number of samples by thresholding the average log-likelihood change
instead of its sum over all samples. By `Hervé Bredin`_.
- The outcome of :func:`manifold.spectral_embedding` was made deterministic
by flipping the sign of eigenvectors. By :user:`Hasil Sharma <Hasil-Sharma>`.
- Significant performance and memory usage improvements in
:class:`preprocessing.PolynomialFeatures`. By `Eric Martin`_.
- Numerical stability improvements for :class:`preprocessing.StandardScaler`
and :func:`preprocessing.scale`. By `Nicolas Goix`_
- :class:`svm.SVC` fitted on sparse input now implements ``decision_function``.
By `Rob Zinkov`_ and `Andreas Müller`_.
- `cross_validation.train_test_split` now preserves the input type,
instead of converting to numpy arrays.
Documentation improvements
..........................
- Added example of using :class:`pipeline.FeatureUnion` for heterogeneous input.
By :user:`Matt Terry <mrterry>`
- Documentation on scorers was improved, to highlight the handling of loss
functions. By :user:`Matt Pico <MattpSoftware>`.
- A discrepancy between liblinear output and scikit-learn's wrappers
is now noted. By `Manoj Kumar`_.
- Improved documentation generation: examples referring to a class or
function are now shown in a gallery on the class/function's API reference
page. By `Joel Nothman`_.
- More explicit documentation of sample generators and of data
transformation. By `Joel Nothman`_.
- :class:`sklearn.neighbors.BallTree` and :class:`sklearn.neighbors.KDTree`
used to point to empty pages stating that they are aliases of BinaryTree.
This has been fixed to show the correct class docs. By `Manoj Kumar`_.
- Added silhouette plots for analysis of KMeans clustering using
:func:`metrics.silhouette_samples` and :func:`metrics.silhouette_score`.
See :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`
Bug fixes
.........
- Metaestimators now support ducktyping for the presence of ``decision_function``,
``predict_proba`` and other methods. This fixes behavior of
`grid_search.GridSearchCV`,
`grid_search.RandomizedSearchCV`, :class:`pipeline.Pipeline`,
:class:`feature_selection.RFE`, :class:`feature_selection.RFECV` when nested.
By `Joel Nothman`_
- The ``scoring`` attribute of grid-search and cross-validation methods is no longer
ignored when a `grid_search.GridSearchCV` is given as a base estimator or
the base estimator doesn't have predict.
- The function `hierarchical.ward_tree` now returns the children in
the same order for both the structured and unstructured versions. By
`Matteo Visconti di Oleggio Castello`_.
- :class:`feature_selection.RFECV` now correctly handles cases when
``step`` is not equal to 1. By :user:`Nikolay Mayorov <nmayorov>`
- The :class:`decomposition.PCA` now undoes whitening in its
``inverse_transform``. Also, its ``components_`` now always have unit
length. By :user:`Michael Eickenberg <eickenberg>`.
- Fix incomplete download of the dataset when
`datasets.download_20newsgroups` is called. By `Manoj Kumar`_.
- Various fixes to the Gaussian processes subpackage by Vincent Dubourg
and Jan Hendrik Metzen.
- Calling ``partial_fit`` with ``class_weight='auto'`` throws an
  appropriate error message and suggests a work around.
  By :user:`Danny Sullivan <dsullivan7>`.
- :class:`RBFSampler <kernel_approximation.RBFSampler>` with ``gamma=g``
formerly approximated :func:`rbf_kernel <metrics.pairwise.rbf_kernel>`
with ``gamma=g/2.``; the definition of ``gamma`` is now consistent,
which may substantially change your results if you use a fixed value.
(If you cross-validated over ``gamma``, it probably doesn't matter
too much.) By :user:`Dougal Sutherland <dougalsutherland>`.
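A sketch checking the now-consistent definition: the random features produced with ``gamma=g`` approximate ``rbf_kernel`` evaluated at the same ``gamma`` (synthetic data; the error shrinks as ``n_components`` grows):

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(50, 4)

# RBFSampler(gamma=g) now targets rbf_kernel(..., gamma=g) directly
# (it previously approximated the kernel with gamma=g/2).
Z = RBFSampler(gamma=0.5, n_components=5000, random_state=0).fit_transform(X)
approx = Z @ Z.T
exact = rbf_kernel(X, gamma=0.5)
print(np.max(np.abs(approx - exact)))  # small Monte-Carlo error
```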
- Pipeline objects delegate the ``classes_`` attribute to the underlying
  estimator. This allows, for instance, making bagging of a pipeline object.
  By `Arnaud Joly`_
- :class:`neighbors.NearestCentroid` now uses the median as the centroid
when metric is set to ``manhattan``. It was using the mean before.
By `Manoj Kumar`_
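A small sketch of the behavioral change, using a toy class with one outlier:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

X = np.array([[0.0], [1.0], [2.0], [100.0], [-1.0], [-2.0], [-3.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1])

# With the manhattan metric the centroid is the per-feature median,
# so the outlier at 100 barely moves the class-0 centroid.
clf = NearestCentroid(metric="manhattan").fit(X, y)
print(clf.centroids_)  # class 0 centroid: 1.5 (median), not 25.75 (mean)
```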
- Fix numerical stability issues in :class:`linear_model.SGDClassifier`
and :class:`linear_model.SGDRegressor` by clipping large gradients and
ensuring that weight decay rescaling is always positive (for large
l2 regularization and large learning rate values).
By `Olivier Grisel`_
- When ``compute_full_tree`` was set to "auto" in
  :class:`cluster.AgglomerativeClustering` (and friends), the full tree was
  built when ``n_clusters`` was high and stopped early when ``n_clusters``
  was low, whereas the behavior should be the opposite. This has been fixed
  by `Manoj Kumar`_
- Fix lazy centering of data in :func:`linear_model.enet_path` and
:func:`linear_model.lasso_path`. It was centered around one. It has
been changed to be centered around the origin. By `Manoj Kumar`_
- Fix handling of precomputed affinity matrices in
:class:`cluster.AgglomerativeClustering` when using connectivity
constraints. By :user:`Cathy Deng <cathydeng>`
- Correct ``partial_fit`` handling of ``class_prior`` for
:class:`sklearn.naive_bayes.MultinomialNB` and
:class:`sklearn.naive_bayes.BernoulliNB`. By `Trevor Stephens`_.
- Fixed a crash in :func:`metrics.precision_recall_fscore_support`
when using unsorted ``labels`` in the multi-label setting.
By `Andreas Müller`_.
- Avoid skipping the first nearest neighbor in the methods ``radius_neighbors``,
``kneighbors``, ``kneighbors_graph`` and ``radius_neighbors_graph`` in
:class:`sklearn.neighbors.NearestNeighbors` and family, when the query
data is not the same as fit data. By `Manoj Kumar`_.
- Fix log-density calculation in the `mixture.GMM` with
tied covariance. By `Will Dawson`_
- Fixed a scaling error in :class:`feature_selection.SelectFdr`
where a factor ``n_features`` was missing. By `Andrew Tulloch`_
- Fix zero division in :class:`neighbors.KNeighborsRegressor` and related
classes when using distance weighting and having identical data points.
  By `Garrett-R <https://github.com/Garrett-R>`_.
- Fixed round off errors with non positive-definite covariance matrices
in GMM. By :user:`Alexis Mignon <AlexisMignon>`.
- Fixed an error in the computation of conditional probabilities in
  :class:`naive_bayes.BernoulliNB`. By `Hanna Wallach`_.
- Make the method ``radius_neighbors`` of
:class:`neighbors.NearestNeighbors` return the samples lying on the
boundary for ``algorithm='brute'``. By `Yan Yi`_.
- Flip sign of ``dual_coef_`` of :class:`svm.SVC`
to make it consistent with the documentation and
``decision_function``. By Artem Sobolev.
- Fixed handling of ties in :class:`isotonic.IsotonicRegression`.
We now use the weighted average of targets (secondary method). By
`Andreas Müller`_ and `Michael Bommarito <http://bommaritollc.com/>`_.
API changes summary
-------------------
- `GridSearchCV` and
`cross_val_score` and other
meta-estimators don't convert pandas DataFrames into arrays any more,
allowing DataFrame specific operations in custom estimators.
- `multiclass.fit_ovr`, `multiclass.predict_ovr`,
`predict_proba_ovr`,
`multiclass.fit_ovo`, `multiclass.predict_ovo`,
`multiclass.fit_ecoc` and `multiclass.predict_ecoc`
are deprecated. Use the underlying estimators instead.
- Nearest neighbors estimators used to take arbitrary keyword arguments
and pass these to their distance metric. This will no longer be supported
in scikit-learn 0.18; use the ``metric_params`` argument instead.
- The `n_jobs` parameter of the fit method has been moved to the
  constructor of the :class:`linear_model.LinearRegression` class.
- The ``predict_proba`` method of :class:`multiclass.OneVsRestClassifier`
now returns two probabilities per sample in the multiclass case; this
is consistent with other estimators and with the method's documentation,
but previous versions accidentally returned only the positive
probability. Fixed by Will Lamond and `Lars Buitinck`_.
- Change default value of precompute in :class:`linear_model.ElasticNet` and
:class:`linear_model.Lasso` to False. Setting precompute to "auto" was found
to be slower when n_samples > n_features since the computation of the Gram
matrix is computationally expensive and outweighs the benefit of fitting the
Gram for just one alpha.
``precompute="auto"`` is now deprecated and will be removed in 0.18
By `Manoj Kumar`_.
- Expose ``positive`` option in :func:`linear_model.enet_path` and
  :func:`linear_model.lasso_path`, which constrains coefficients to be
  positive. By `Manoj Kumar`_.
- Users should now supply an explicit ``average`` parameter to
:func:`sklearn.metrics.f1_score`, :func:`sklearn.metrics.fbeta_score`,
:func:`sklearn.metrics.recall_score` and
:func:`sklearn.metrics.precision_score` when performing multiclass
or multilabel (i.e. not binary) classification. By `Joel Nothman`_.
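A sketch of the now-required explicit ``average`` argument on multiclass input:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# Multiclass input: the averaging scheme must be named explicitly.
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average="micro"))  # global counts of tp/fp/fn
```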
- `scoring` parameter for cross validation now accepts `'f1_micro'`,
`'f1_macro'` or `'f1_weighted'`. `'f1'` is now for binary classification
only. Similar changes apply to `'precision'` and `'recall'`.
By `Joel Nothman`_.
- The ``fit_intercept``, ``normalize`` and ``return_models`` parameters in
  :func:`linear_model.enet_path` and :func:`linear_model.lasso_path` have
  been removed. They had been deprecated since 0.14.
- From now onwards, all estimators will uniformly raise ``NotFittedError``
when any of the ``predict`` like methods are called before the model is fit.
By `Raghav RV`_.
- Input data validation was refactored for more consistent input
validation. The ``check_arrays`` function was replaced by ``check_array``
and ``check_X_y``. By `Andreas Müller`_.
- Allow ``X=None`` in the methods ``radius_neighbors``, ``kneighbors``,
``kneighbors_graph`` and ``radius_neighbors_graph`` in
:class:`sklearn.neighbors.NearestNeighbors` and family. If set to None,
then for every sample this avoids setting the sample itself as the
first nearest neighbor. By `Manoj Kumar`_.
- Add parameter ``include_self`` in :func:`neighbors.kneighbors_graph`
and :func:`neighbors.radius_neighbors_graph` which has to be explicitly
set by the user. If set to True, then the sample itself is considered
as the first nearest neighbor.
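A small sketch contrasting the two settings:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.array([[0.0], [1.0], [10.0]])

A = kneighbors_graph(X, n_neighbors=1, include_self=False).toarray()
B = kneighbors_graph(X, n_neighbors=1, include_self=True).toarray()
print(A)  # each sample links to its nearest *other* sample
print(B)  # the identity matrix: each sample is its own nearest neighbor
```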
- `thresh` parameter is deprecated in favor of new `tol` parameter in
`GMM`, `DPGMM` and `VBGMM`. See `Enhancements`
section for details. By `Hervé Bredin`_.
- Estimators will treat input with dtype object as numeric when possible.
By `Andreas Müller`_
- Estimators now raise `ValueError` consistently when fitted on empty
data (less than 1 sample or less than 1 feature for 2D input).
By `Olivier Grisel`_.
- The ``shuffle`` option of :class:`linear_model.SGDClassifier`,
:class:`linear_model.SGDRegressor`, :class:`linear_model.Perceptron`,
:class:`linear_model.PassiveAggressiveClassifier` and
:class:`linear_model.PassiveAggressiveRegressor` now defaults to ``True``.
- :class:`cluster.DBSCAN` now uses a deterministic initialization. The
`random_state` parameter is deprecated. By :user:`Erich Schubert <kno10>`.
Code Contributors
-----------------
A. Flaxman, Aaron Schumacher, Aaron Staple, abhishek thakur, Akshay, akshayah3,
Aldrian Obaja, Alexander Fabisch, Alexandre Gramfort, Alexis Mignon, Anders
Aagaard, Andreas Mueller, Andreas van Cranenburgh, Andrew Tulloch, Andrew
Walker, Antony Lee, Arnaud Joly, banilo, Barmaley.exe, Ben Davies, Benedikt
Koehler, bhsu, Boris Feld, Borja Ayerdi, Boyuan Deng, Brent Pedersen, Brian
Wignall, Brooke Osborn, Calvin Giles, Cathy Deng, Celeo, cgohlke, chebee7i,
Christian Stade-Schuldt, Christof Angermueller, Chyi-Kwei Yau, CJ Carey,
Clemens Brunner, Daiki Aminaka, Dan Blanchard, danfrankj, Danny Sullivan, David
Fletcher, Dmitrijs Milajevs, Dougal J. Sutherland, Erich Schubert, Fabian
Pedregosa, Florian Wilhelm, floydsoft, Félix-Antoine Fortin, Gael Varoquaux,
Garrett-R, Gilles Louppe, gpassino, gwulfs, Hampus Bengtsson, Hamzeh Alsalhi,
Hanna Wallach, Harry Mavroforakis, Hasil Sharma, Helder, Herve Bredin,
Hsiang-Fu Yu, Hugues SALAMIN, Ian Gilmore, Ilambharathi Kanniah, Imran Haque,
isms, Jake VanderPlas, Jan Dlabal, Jan Hendrik Metzen, Jatin Shah, Javier López
Peña, jdcaballero, Jean Kossaifi, Jeff Hammerbacher, Joel Nothman, Jonathan
Helmus, Joseph, Kaicheng Zhang, Kevin Markham, Kyle Beauchamp, Kyle Kastner,
Lagacherie Matthieu, Lars Buitinck, Laurent Direr, leepei, Loic Esteve, Luis
Pedro Coelho, Lukas Michelbacher, maheshakya, Manoj Kumar, Manuel, Mario
Michael Krell, Martin, Martin Billinger, Martin Ku, Mateusz Susik, Mathieu
Blondel, Matt Pico, Matt Terry, Matteo Visconti dOC, Matti Lyra, Max Linke,
Mehdi Cherti, Michael Bommarito, Michael Eickenberg, Michal Romaniuk, MLG,
mr.Shu, Nelle Varoquaux, Nicola Montecchio, Nicolas, Nikolay Mayorov, Noel
Dawe, Okal Billy, Olivier Grisel, Óscar Nájera, Paolo Puggioni, Peter
Prettenhofer, Pratap Vardhan, pvnguyen, queqichao, Rafael Carrascosa, Raghav R
V, Rahiel Kasim, Randall Mason, Rob Zinkov, Robert Bradshaw, Saket Choudhary,
Sam Nicholls, Samuel Charron, Saurabh Jha, sethdandridge, sinhrks, snuderl,
Stefan Otte, Stefan van der Walt, Steve Tjoa, swu, Sylvain Zimmer, tejesh95,
terrycojones, Thomas Delteil, Thomas Unterthiner, Tomas Kazmar, trevorstephens,
tttthomasssss, Tzu-Ming Kuo, ugurcaliskan, ugurthemaster, Vinayak Mehta,
Vincent Dubourg, Vjacheslav Murashkin, Vlad Niculae, wadawson, Wei Xue, Will
Lamond, Wu Jiang, x0l, Xinfan Meng, Yan Yi, Yu-Chin
Garrett R Fixed round off errors with non positive definite covariance matrices in GMM By user Alexis Mignon AlexisMignon Fixed a error in the computation of conditional probabilities in class naive bayes BernoulliNB By Hanna Wallach Make the method radius neighbors of class neighbors NearestNeighbors return the samples lying on the boundary for algorithm brute By Yan Yi Flip sign of dual coef of class svm SVC to make it consistent with the documentation and decision function By Artem Sobolev Fixed handling of ties in class isotonic IsotonicRegression We now use the weighted average of targets secondary method By Andreas M ller and Michael Bommarito http bommaritollc com API changes summary GridSearchCV and cross val score and other meta estimators don t convert pandas DataFrames into arrays any more allowing DataFrame specific operations in custom estimators multiclass fit ovr multiclass predict ovr predict proba ovr multiclass fit ovo multiclass predict ovo multiclass fit ecoc and multiclass predict ecoc are deprecated Use the underlying estimators instead Nearest neighbors estimators used to take arbitrary keyword arguments and pass these to their distance metric This will no longer be supported in scikit learn 0 18 use the metric params argument instead n jobs parameter of the fit method shifted to the constructor of the LinearRegression class The predict proba method of class multiclass OneVsRestClassifier now returns two probabilities per sample in the multiclass case this is consistent with other estimators and with the method s documentation but previous versions accidentally returned only the positive probability Fixed by Will Lamond and Lars Buitinck Change default value of precompute in class linear model ElasticNet and class linear model Lasso to False Setting precompute to auto was found to be slower when n samples n features since the computation of the Gram matrix is computationally expensive and outweighs the benefit of fitting the Gram for just one 
alpha precompute auto is now deprecated and will be removed in 0 18 By Manoj Kumar Expose positive option in func linear model enet path and func linear model enet path which constrains coefficients to be positive By Manoj Kumar Users should now supply an explicit average parameter to func sklearn metrics f1 score func sklearn metrics fbeta score func sklearn metrics recall score and func sklearn metrics precision score when performing multiclass or multilabel i e not binary classification By Joel Nothman scoring parameter for cross validation now accepts f1 micro f1 macro or f1 weighted f1 is now for binary classification only Similar changes apply to precision and recall By Joel Nothman The fit intercept normalize and return models parameters in func linear model enet path and func linear model lasso path have been removed They were deprecated since 0 14 From now onwards all estimators will uniformly raise NotFittedError when any of the predict like methods are called before the model is fit By Raghav RV Input data validation was refactored for more consistent input validation The check arrays function was replaced by check array and check X y By Andreas M ller Allow X None in the methods radius neighbors kneighbors kneighbors graph and radius neighbors graph in class sklearn neighbors NearestNeighbors and family If set to None then for every sample this avoids setting the sample itself as the first nearest neighbor By Manoj Kumar Add parameter include self in func neighbors kneighbors graph and func neighbors radius neighbors graph which has to be explicitly set by the user If set to True then the sample itself is considered as the first nearest neighbor thresh parameter is deprecated in favor of new tol parameter in GMM DPGMM and VBGMM See Enhancements section for details By Herv Bredin Estimators will treat input with dtype object as numeric when possible By Andreas M ller Estimators now raise ValueError consistently when fitted on empty data less than 1 
sample or less than 1 feature for 2D input By Olivier Grisel The shuffle option of class linear model SGDClassifier class linear model SGDRegressor class linear model Perceptron class linear model PassiveAggressiveClassifier and class linear model PassiveAggressiveRegressor now defaults to True class cluster DBSCAN now uses a deterministic initialization The random state parameter is deprecated By user Erich Schubert kno10 Code Contributors A Flaxman Aaron Schumacher Aaron Staple abhishek thakur Akshay akshayah3 Aldrian Obaja Alexander Fabisch Alexandre Gramfort Alexis Mignon Anders Aagaard Andreas Mueller Andreas van Cranenburgh Andrew Tulloch Andrew Walker Antony Lee Arnaud Joly banilo Barmaley exe Ben Davies Benedikt Koehler bhsu Boris Feld Borja Ayerdi Boyuan Deng Brent Pedersen Brian Wignall Brooke Osborn Calvin Giles Cathy Deng Celeo cgohlke chebee7i Christian Stade Schuldt Christof Angermueller Chyi Kwei Yau CJ Carey Clemens Brunner Daiki Aminaka Dan Blanchard danfrankj Danny Sullivan David Fletcher Dmitrijs Milajevs Dougal J Sutherland Erich Schubert Fabian Pedregosa Florian Wilhelm floydsoft F lix Antoine Fortin Gael Varoquaux Garrett R Gilles Louppe gpassino gwulfs Hampus Bengtsson Hamzeh Alsalhi Hanna Wallach Harry Mavroforakis Hasil Sharma Helder Herve Bredin Hsiang Fu Yu Hugues SALAMIN Ian Gilmore Ilambharathi Kanniah Imran Haque isms Jake VanderPlas Jan Dlabal Jan Hendrik Metzen Jatin Shah Javier L pez Pe a jdcaballero Jean Kossaifi Jeff Hammerbacher Joel Nothman Jonathan Helmus Joseph Kaicheng Zhang Kevin Markham Kyle Beauchamp Kyle Kastner Lagacherie Matthieu Lars Buitinck Laurent Direr leepei Loic Esteve Luis Pedro Coelho Lukas Michelbacher maheshakya Manoj Kumar Manuel Mario Michael Krell Martin Martin Billinger Martin Ku Mateusz Susik Mathieu Blondel Matt Pico Matt Terry Matteo Visconti dOC Matti Lyra Max Linke Mehdi Cherti Michael Bommarito Michael Eickenberg Michal Romaniuk MLG mr Shu Nelle Varoquaux Nicola Montecchio Nicolas Nikolay Mayorov 
Noel Dawe Okal Billy Olivier Grisel scar N jera Paolo Puggioni Peter Prettenhofer Pratap Vardhan pvnguyen queqichao Rafael Carrascosa Raghav R V Rahiel Kasim Randall Mason Rob Zinkov Robert Bradshaw Saket Choudhary Sam Nicholls Samuel Charron Saurabh Jha sethdandridge sinhrks snuderl Stefan Otte Stefan van der Walt Steve Tjoa swu Sylvain Zimmer tejesh95 terrycojones Thomas Delteil Thomas Unterthiner Tomas Kazmar trevorstephens tttthomasssss Tzu Ming Kuo ugurcaliskan ugurthemaster Vinayak Mehta Vincent Dubourg Vjacheslav Murashkin Vlad Niculae wadawson Wei Xue Will Lamond Wu Jiang x0l Xinfan Meng Yan Yi Yu Chin |
scikit-learn sklearn contributors rst Older Versions changes012 1 | .. include:: _contributors.rst
.. currentmodule:: sklearn
==============
Older Versions
==============
.. _changes_0_12.1:
Version 0.12.1
===============
**October 8, 2012**
The 0.12.1 release is a bug-fix release with no additional features; it
consists solely of the fixes listed below.
Changelog
----------
- Improved numerical stability in spectral embedding by `Gael
Varoquaux`_
- Doctest under windows 64bit by `Gael Varoquaux`_
- Documentation fixes for elastic net by `Andreas Müller`_ and
`Alexandre Gramfort`_
- Proper behavior with fortran-ordered NumPy arrays by `Gael Varoquaux`_
- Make GridSearchCV work with non-CSR sparse matrix by `Lars Buitinck`_
- Fix parallel computing in MDS by `Gael Varoquaux`_
- Fix Unicode support in count vectorizer by `Andreas Müller`_
- Fix MinCovDet breaking with X.shape = (3, 1) by :user:`Virgile Fritsch <VirgileFritsch>`
- Fix clone of SGD objects by `Peter Prettenhofer`_
- Stabilize GMM by :user:`Virgile Fritsch <VirgileFritsch>`
People
------
* 14 `Peter Prettenhofer`_
* 12 `Gael Varoquaux`_
* 10 `Andreas Müller`_
* 5 `Lars Buitinck`_
* 3 :user:`Virgile Fritsch <VirgileFritsch>`
* 1 `Alexandre Gramfort`_
* 1 `Gilles Louppe`_
* 1 `Mathieu Blondel`_
.. _changes_0_12:
Version 0.12
============
**September 4, 2012**
Changelog
---------
- Various speed improvements of the :ref:`decision trees <tree>` module, by
`Gilles Louppe`_.
- :class:`~ensemble.GradientBoostingRegressor` and
:class:`~ensemble.GradientBoostingClassifier` now support feature subsampling
via the ``max_features`` argument, by `Peter Prettenhofer`_.
- Added Huber and Quantile loss functions to
:class:`~ensemble.GradientBoostingRegressor`, by `Peter Prettenhofer`_.
- :ref:`Decision trees <tree>` and :ref:`forests of randomized trees <forest>`
now support multi-output classification and regression problems, by
`Gilles Louppe`_.
- Added :class:`~preprocessing.LabelEncoder`, a simple utility class to
normalize labels or transform non-numerical labels, by `Mathieu Blondel`_.
- Added the epsilon-insensitive loss and the ability to make probabilistic
predictions with the modified huber loss in :ref:`sgd`, by
`Mathieu Blondel`_.
- Added :ref:`multidimensional_scaling`, by Nelle Varoquaux.
- SVMlight file format loader now detects compressed (gzip/bzip2) files and
decompresses them on the fly, by `Lars Buitinck`_.
- SVMlight file format serializer now preserves double precision floating
point values, by `Olivier Grisel`_.
- A common testing framework for all estimators was added, by `Andreas Müller`_.
- Understandable error messages for estimators that do not accept
  sparse input by `Gael Varoquaux`_.
- Speedups in hierarchical clustering by `Gael Varoquaux`_. In
particular building the tree now supports early stopping. This is
useful when the number of clusters is not small compared to the
number of samples.
- Add MultiTaskLasso and MultiTaskElasticNet for joint feature selection,
by `Alexandre Gramfort`_.
- Added `metrics.auc_score` and
:func:`metrics.average_precision_score` convenience functions by `Andreas
Müller`_.
- Improved sparse matrix support in the :ref:`feature_selection`
module by `Andreas Müller`_.
- New word boundaries-aware character n-gram analyzer for the
:ref:`text_feature_extraction` module by :user:`@kernc <kernc>`.
- Fixed bug in spectral clustering that led to single point clusters
by `Andreas Müller`_.
- In :class:`~feature_extraction.text.CountVectorizer`, added an option to
ignore infrequent words, ``min_df`` by `Andreas Müller`_.
- Add support for multiple targets in some linear models (ElasticNet, Lasso
and OrthogonalMatchingPursuit) by `Vlad Niculae`_ and
`Alexandre Gramfort`_.
- Fixes in `decomposition.ProbabilisticPCA` score function by Wei Li.
- Fixed feature importance computation in
:ref:`gradient_boosting`.
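The ``LabelEncoder`` utility added in this release maps arbitrary labels to
integers ``0..n_classes-1``. A minimal stdlib-only sketch of that idea (the
class name here is illustrative, not scikit-learn's implementation):

```python
class TinyLabelEncoder:
    """Illustrative re-implementation of the LabelEncoder idea:
    map arbitrary hashable labels to integers 0..n_classes-1."""

    def fit(self, y):
        # Sorted unique labels define the integer coding.
        self.classes_ = sorted(set(y))
        self._index = {label: i for i, label in enumerate(self.classes_)}
        return self

    def transform(self, y):
        return [self._index[label] for label in y]

    def inverse_transform(self, codes):
        return [self.classes_[i] for i in codes]


enc = TinyLabelEncoder().fit(["paris", "tokyo", "paris", "oslo"])
print(enc.classes_)                      # ['oslo', 'paris', 'tokyo']
print(enc.transform(["tokyo", "oslo"]))  # [2, 0]
```

Sorting the unique labels makes the coding deterministic, which is why the
real estimator exposes the learned mapping through a ``classes_`` attribute.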
API changes summary
-------------------
- The old ``scikits.learn`` package has disappeared; all code should import
from ``sklearn`` instead, which was introduced in 0.9.
- In :func:`metrics.roc_curve`, the ``thresholds`` array is now returned
  with its order reversed, in order to keep it consistent with the order
of the returned ``fpr`` and ``tpr``.
- In `hmm` objects, like `hmm.GaussianHMM`,
`hmm.MultinomialHMM`, etc., all parameters must be passed to the
object when initialising it and not through ``fit``. Now ``fit`` will
only accept the data as an input parameter.
- For all SVM classes, a faulty behavior of ``gamma`` was fixed. Previously,
the default gamma value was only computed the first time ``fit`` was called
and then stored. It is now recalculated on every call to ``fit``.
- All ``Base`` classes are now abstract meta classes so that they can not be
instantiated.
- :func:`cluster.ward_tree` now also returns the parent array. This is
necessary for early-stopping in which case the tree is not
completely built.
- In :class:`~feature_extraction.text.CountVectorizer` the parameters
``min_n`` and ``max_n`` were joined to the parameter ``n_gram_range`` to
enable grid-searching both at once.
- In :class:`~feature_extraction.text.CountVectorizer`, words that appear
only in one document are now ignored by default. To reproduce
the previous behavior, set ``min_df=1``.
- Fixed API inconsistency: :meth:`linear_model.SGDClassifier.predict_proba` now
returns 2d array when fit on two classes.
- Fixed API inconsistency: :meth:`discriminant_analysis.QuadraticDiscriminantAnalysis.decision_function`
and :meth:`discriminant_analysis.LinearDiscriminantAnalysis.decision_function` now return 1d arrays
when fit on two classes.
- Grid of alphas used for fitting :class:`~linear_model.LassoCV` and
:class:`~linear_model.ElasticNetCV` is now stored
in the attribute ``alphas_`` rather than overriding the init parameter
``alphas``.
- Linear models when alpha is estimated by cross-validation store
the estimated value in the ``alpha_`` attribute rather than just
``alpha`` or ``best_alpha``.
- :class:`~ensemble.GradientBoostingClassifier` now supports
:meth:`~ensemble.GradientBoostingClassifier.staged_predict_proba`, and
:meth:`~ensemble.GradientBoostingClassifier.staged_predict`.
- `svm.sparse.SVC` and other sparse SVM classes are now deprecated.
  All classes in the :ref:`svm` module now automatically select the
  sparse or dense representation based on the input.
- All clustering algorithms now interpret the array ``X`` given to ``fit`` as
input data, in particular :class:`~cluster.SpectralClustering` and
:class:`~cluster.AffinityPropagation` which previously expected affinity matrices.
- For clustering algorithms that take the desired number of clusters as a parameter,
this parameter is now called ``n_clusters``.
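The new ``min_df`` default in ``CountVectorizer`` described above prunes terms
whose document frequency is too low. A stdlib-only sketch of that pruning rule
(an illustrative helper, not the actual vectorizer code):

```python
from collections import Counter

def filter_vocabulary(docs, min_df=2):
    """Keep only terms appearing in at least min_df documents
    (CountVectorizer-style document-frequency pruning; min_df=1
    reproduces the old keep-everything behaviour)."""
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))  # count each term once per document
    return sorted(t for t, n in df.items() if n >= min_df)


docs = ["the cat sat", "the dog sat", "a rare hapax"]
print(filter_vocabulary(docs, min_df=2))  # ['sat', 'the']
print(filter_vocabulary(docs, min_df=1))  # every distinct term survives
```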
People
------
* 267 `Andreas Müller`_
* 94 `Gilles Louppe`_
* 89 `Gael Varoquaux`_
* 79 `Peter Prettenhofer`_
* 60 `Mathieu Blondel`_
* 57 `Alexandre Gramfort`_
* 52 `Vlad Niculae`_
* 45 `Lars Buitinck`_
* 44 Nelle Varoquaux
* 37 `Jaques Grobler`_
* 30 Alexis Mignon
* 30 Immanuel Bayer
* 27 `Olivier Grisel`_
* 16 Subhodeep Moitra
* 13 Yannick Schwartz
* 12 :user:`@kernc <kernc>`
* 11 :user:`Virgile Fritsch <VirgileFritsch>`
* 9 Daniel Duckworth
* 9 `Fabian Pedregosa`_
* 9 `Robert Layton`_
* 8 John Benediktsson
* 7 Marko Burjek
* 5 `Nicolas Pinto`_
* 4 Alexandre Abraham
* 4 `Jake Vanderplas`_
* 3 `Brian Holt`_
* 3 `Edouard Duchesnay`_
* 3 Florian Hoenig
* 3 flyingimmidev
* 2 Francois Savard
* 2 Hannes Schulz
* 2 Peter Welinder
* 2 `Yaroslav Halchenko`_
* 2 Wei Li
* 1 Alex Companioni
* 1 Brandyn A. White
* 1 Bussonnier Matthias
* 1 Charles-Pierre Astolfi
* 1 Dan O'Huiginn
* 1 David Cournapeau
* 1 Keith Goodman
* 1 Ludwig Schwardt
* 1 Olivier Hervieu
* 1 Sergio Medina
* 1 Shiqiao Du
* 1 Tim Sheerman-Chase
* 1 buguen
.. _changes_0_11:
Version 0.11
============
**May 7, 2012**
Changelog
---------
Highlights
.............
- Gradient boosted regression trees (:ref:`gradient_boosting`)
for classification and regression by `Peter Prettenhofer`_
and `Scott White`_ .
- Simple dict-based feature loader with support for categorical variables
(:class:`~feature_extraction.DictVectorizer`) by `Lars Buitinck`_.
- Added Matthews correlation coefficient (:func:`metrics.matthews_corrcoef`)
and added macro and micro average options to
:func:`~metrics.precision_score`, :func:`metrics.recall_score` and
:func:`~metrics.f1_score` by `Satrajit Ghosh`_.
- :ref:`out_of_bag` of generalization error for :ref:`ensemble`
by `Andreas Müller`_.
- Randomized sparse linear models for feature
selection, by `Alexandre Gramfort`_ and `Gael Varoquaux`_
- :ref:`label_propagation` for semi-supervised learning, by Clay
Woolam. **Note** the semi-supervised API is still work in progress,
and may change.
- Added BIC/AIC model selection to classical :ref:`gmm` and unified
the API with the remainder of scikit-learn, by `Bertrand Thirion`_
- Added `sklearn.cross_validation.StratifiedShuffleSplit`, which is
a `sklearn.cross_validation.ShuffleSplit` with balanced splits,
by Yannick Schwartz.
- :class:`~sklearn.neighbors.NearestCentroid` classifier added, along with a
``shrink_threshold`` parameter, which implements **shrunken centroid
classification**, by `Robert Layton`_.
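The Matthews correlation coefficient added above combines all four
confusion-matrix cells into a single score in [-1, 1]. A direct transcription
of the binary formula (an illustrative sketch, not the library code):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Binary MCC: (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal is empty (a common convention)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0


print(matthews_corrcoef(tp=5, tn=5, fp=0, fn=0))  # 1.0 -- perfect prediction
print(matthews_corrcoef(tp=0, tn=0, fp=5, fn=5))  # -1.0 -- total disagreement
```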
Other changes
..............
- Merged dense and sparse implementations of :ref:`sgd` module and
exposed utility extension types for sequential
datasets ``seq_dataset`` and weight vectors ``weight_vector``
by `Peter Prettenhofer`_.
- Added ``partial_fit`` (support for online/minibatch learning) and
warm_start to the :ref:`sgd` module by `Mathieu Blondel`_.
- Dense and sparse implementations of :ref:`svm` classes and
:class:`~linear_model.LogisticRegression` merged by `Lars Buitinck`_.
- Regressors can now be used as base estimator in the :ref:`multiclass`
module by `Mathieu Blondel`_.
- Added n_jobs option to :func:`metrics.pairwise_distances`
and :func:`metrics.pairwise.pairwise_kernels` for parallel computation,
by `Mathieu Blondel`_.
- :ref:`k_means` can now be run in parallel, using the ``n_jobs`` argument
to either :ref:`k_means` or :class:`cluster.KMeans`, by `Robert Layton`_.
- Improved :ref:`cross_validation` and :ref:`grid_search` documentation
and introduced the new `cross_validation.train_test_split`
helper function by `Olivier Grisel`_
- :class:`~svm.SVC` members ``coef_`` and ``intercept_`` changed sign for
consistency with ``decision_function``; for ``kernel==linear``,
``coef_`` was fixed in the one-vs-one case, by `Andreas Müller`_.
- Performance improvements to efficient leave-one-out cross-validated
Ridge regression, esp. for the ``n_samples > n_features`` case, in
:class:`~linear_model.RidgeCV`, by Reuben Fletcher-Costin.
- Refactoring and simplification of the :ref:`text_feature_extraction`
API and fixed a bug that caused possible negative IDF,
by `Olivier Grisel`_.
- Beam pruning option in `_BaseHMM` module has been removed since it
is difficult to Cythonize. If you are interested in contributing a Cython
version, you can use the python version in the git history as a reference.
- Classes in :ref:`neighbors` now support arbitrary Minkowski metric for
nearest neighbors searches. The metric can be specified by argument ``p``.
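The ``p`` argument in the last bullet selects a Minkowski metric,
``d(x, y) = (sum_i |x_i - y_i|**p) ** (1/p)``; ``p=1`` gives Manhattan distance
and ``p=2`` Euclidean. A small sketch of that metric family (illustrative, not
the tree-based implementation used by the neighbors classes):

```python
def minkowski(x, y, p=2):
    """Minkowski distance: (sum_i |x_i - y_i|**p) ** (1/p)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)


print(minkowski([0, 0], [3, 4], p=2))  # 5.0 (Euclidean)
print(minkowski([0, 0], [3, 4], p=1))  # 7.0 (Manhattan)
```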
API changes summary
-------------------
- `covariance.EllipticEnvelop` is now deprecated.
Please use :class:`~covariance.EllipticEnvelope` instead.
- ``NeighborsClassifier`` and ``NeighborsRegressor`` are gone in the module
:ref:`neighbors`. Use the classes :class:`~neighbors.KNeighborsClassifier`,
:class:`~neighbors.RadiusNeighborsClassifier`, :class:`~neighbors.KNeighborsRegressor`
and/or :class:`~neighbors.RadiusNeighborsRegressor` instead.
- Sparse classes in the :ref:`sgd` module are now deprecated.
- In `mixture.GMM`, `mixture.DPGMM` and `mixture.VBGMM`,
parameters must be passed to an object when initialising it and not through
``fit``. Now ``fit`` will only accept the data as an input parameter.
- methods ``rvs`` and ``decode`` in `GMM` module are now deprecated.
``sample`` and ``score`` or ``predict`` should be used instead.
- attribute ``_scores`` and ``_pvalues`` in univariate feature selection
objects are now deprecated.
``scores_`` or ``pvalues_`` should be used instead.
- In :class:`~linear_model.LogisticRegression`, :class:`~svm.LinearSVC`,
:class:`~svm.SVC` and :class:`~svm.NuSVC`, the ``class_weight`` parameter is
now an initialization parameter, not a parameter to fit. This makes grid
searches over this parameter possible.
- LFW ``data`` is now always shape ``(n_samples, n_features)`` to be
consistent with the Olivetti faces dataset. Use ``images`` and
``pairs`` attribute to access the natural images shapes instead.
- In :class:`~svm.LinearSVC`, the meaning of the ``multi_class`` parameter
changed. Options now are ``'ovr'`` and ``'crammer_singer'``, with
``'ovr'`` being the default. This does not change the default behavior
but hopefully is less confusing.
- Class `feature_selection.text.Vectorizer` is deprecated and
replaced by `feature_selection.text.TfidfVectorizer`.
- The preprocessor / analyzer nested structure for text feature
extraction has been removed. All those features are
now directly passed as flat constructor arguments
to `feature_selection.text.TfidfVectorizer` and
`feature_selection.text.CountVectorizer`, in particular the
following parameters are now used:
- ``analyzer`` can be ``'word'`` or ``'char'`` to switch the default
analysis scheme, or use a specific python callable (as previously).
- ``tokenizer`` and ``preprocessor`` have been introduced to make it
still possible to customize those steps with the new API.
- ``input`` explicitly control how to interpret the sequence passed to
``fit`` and ``predict``: filenames, file objects or direct (byte or
Unicode) strings.
- charset decoding is explicit and strict by default.
- the ``vocabulary``, fitted or not is now stored in the
``vocabulary_`` attribute to be consistent with the project
conventions.
- Class `feature_selection.text.TfidfVectorizer` now derives directly
from `feature_selection.text.CountVectorizer` to make grid
search trivial.
- methods ``rvs`` in `_BaseHMM` module are now deprecated.
``sample`` should be used instead.
- Beam pruning option in `_BaseHMM` module is removed since it is
  difficult to Cythonize. If you are interested, you can look at the
  version in the git history as a reference.
- The SVMlight format loader now supports files with both zero-based and
one-based column indices, since both occur "in the wild".
- Arguments in class :class:`~model_selection.ShuffleSplit` are now consistent with
:class:`~model_selection.StratifiedShuffleSplit`. Arguments ``test_fraction`` and
``train_fraction`` are deprecated and renamed to ``test_size`` and
``train_size`` and can accept both ``float`` and ``int``.
- Arguments in class `Bootstrap` are now consistent with
:class:`~model_selection.StratifiedShuffleSplit`. Arguments ``n_test`` and
``n_train`` are deprecated and renamed to ``test_size`` and
``train_size`` and can accept both ``float`` and ``int``.
- Argument ``p`` added to classes in :ref:`neighbors` to specify an
arbitrary Minkowski metric for nearest neighbors searches.
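The renamed ``test_size``/``train_size`` arguments above accept either a float
fraction or an absolute integer count. A sketch of how such a parameter can be
resolved (a hypothetical helper, not the actual scikit-learn code):

```python
def resolve_n_test(test_size, n_samples):
    """Turn a ShuffleSplit-style test_size into an absolute sample count:
    a float in (0, 1) is a fraction of n_samples; an int is taken as-is."""
    if isinstance(test_size, float):
        if not 0.0 < test_size < 1.0:
            raise ValueError("float test_size must lie in (0, 1)")
        return int(round(test_size * n_samples))
    return test_size


print(resolve_n_test(0.25, 100))  # 25
print(resolve_n_test(10, 100))    # 10
```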
People
------
* 282 `Andreas Müller`_
* 239 `Peter Prettenhofer`_
* 198 `Gael Varoquaux`_
* 129 `Olivier Grisel`_
* 114 `Mathieu Blondel`_
* 103 Clay Woolam
* 96 `Lars Buitinck`_
* 88 `Jaques Grobler`_
* 82 `Alexandre Gramfort`_
* 50 `Bertrand Thirion`_
* 42 `Robert Layton`_
* 28 flyingimmidev
* 26 `Jake Vanderplas`_
* 26 Shiqiao Du
* 21 `Satrajit Ghosh`_
* 17 `David Marek`_
* 17 `Gilles Louppe`_
* 14 `Vlad Niculae`_
* 11 Yannick Schwartz
* 10 `Fabian Pedregosa`_
* 9 fcostin
* 7 Nick Wilson
* 5 Adrien Gaidon
* 5 `Nicolas Pinto`_
* 4 `David Warde-Farley`_
* 5 Nelle Varoquaux
* 5 Emmanuelle Gouillart
* 3 Joonas Sillanpää
* 3 Paolo Losi
* 2 Charles McCarthy
* 2 Roy Hyunjin Han
* 2 Scott White
* 2 ibayer
* 1 Brandyn White
* 1 Carlos Scheidegger
* 1 Claire Revillet
* 1 Conrad Lee
* 1 `Edouard Duchesnay`_
* 1 Jan Hendrik Metzen
* 1 Meng Xinfan
* 1 `Rob Zinkov`_
* 1 Shiqiao
* 1 Udi Weinsberg
* 1 Virgile Fritsch
* 1 Xinfan Meng
* 1 Yaroslav Halchenko
* 1 jansoe
* 1 Leon Palafox
.. _changes_0_10:
Version 0.10
============
**January 11, 2012**
Changelog
---------
- Python 2.5 compatibility was dropped; the minimum Python version needed
to use scikit-learn is now 2.6.
- :ref:`sparse_inverse_covariance` estimation using the graph Lasso, with
associated cross-validated estimator, by `Gael Varoquaux`_
- New :ref:`Tree <tree>` module by `Brian Holt`_, `Peter Prettenhofer`_,
`Satrajit Ghosh`_ and `Gilles Louppe`_. The module comes with complete
documentation and examples.
- Fixed a bug in the RFE module by `Gilles Louppe`_ (issue #378).
- Fixed a memory leak in :ref:`svm` module by `Brian Holt`_ (issue #367).
- Faster tests by `Fabian Pedregosa`_ and others.
- Silhouette Coefficient cluster analysis evaluation metric added as
:func:`~sklearn.metrics.silhouette_score` by Robert Layton.
- Fixed a bug in :ref:`k_means` in the handling of the ``n_init`` parameter:
the clustering algorithm used to be run ``n_init`` times but the last
solution was retained instead of the best solution by `Olivier Grisel`_.
- Minor refactoring in :ref:`sgd` module; consolidated dense and sparse
predict methods; Enhanced test time performance by converting model
parameters to fortran-style arrays after fitting (only multi-class).
- Adjusted Mutual Information metric added as
:func:`~sklearn.metrics.adjusted_mutual_info_score` by Robert Layton.
- Models like SVC/SVR/LinearSVC/LogisticRegression from libsvm/liblinear
now support scaling of C regularization parameter by the number of
samples by `Alexandre Gramfort`_.
- New :ref:`Ensemble Methods <ensemble>` module by `Gilles Louppe`_ and
`Brian Holt`_. The module comes with the random forest algorithm and the
extra-trees method, along with documentation and examples.
- :ref:`outlier_detection`: outlier and novelty detection, by
:user:`Virgile Fritsch <VirgileFritsch>`.
- :ref:`kernel_approximation`: a transform implementing kernel
approximation for fast SGD on non-linear kernels by
`Andreas Müller`_.
- Fixed a bug due to atom swapping in :ref:`OMP` by `Vlad Niculae`_.
- :ref:`SparseCoder` by `Vlad Niculae`_.
- :ref:`mini_batch_kmeans` performance improvements by `Olivier Grisel`_.
- :ref:`k_means` support for sparse matrices by `Mathieu Blondel`_.
- Improved documentation for developers and for the :mod:`sklearn.utils`
module, by `Jake Vanderplas`_.
- Vectorized 20newsgroups dataset loader
(:func:`~sklearn.datasets.fetch_20newsgroups_vectorized`) by
`Mathieu Blondel`_.
- :ref:`multiclass` by `Lars Buitinck`_.
- Utilities for fast computation of mean and variance for sparse matrices
by `Mathieu Blondel`_.
- Make :func:`~sklearn.preprocessing.scale` and
`sklearn.preprocessing.Scaler` work on sparse matrices by
`Olivier Grisel`_
- Feature importances using decision trees and/or forest of trees,
by `Gilles Louppe`_.
- Parallel implementation of forests of randomized trees by
`Gilles Louppe`_.
- `sklearn.cross_validation.ShuffleSplit` can subsample the train
sets as well as the test sets by `Olivier Grisel`_.
- Errors in the build of the documentation fixed by `Andreas Müller`_.
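The silhouette coefficient added above scores each sample as
``s = (b - a) / max(a, b)``, where ``a`` is the mean intra-cluster distance and
``b`` the mean distance to the nearest other cluster. A per-sample sketch of
that formula (illustrative only, not the metric's implementation):

```python
def silhouette_sample(a, b):
    """Silhouette value for one sample: (b - a) / max(a, b), in [-1, 1].
    a = mean distance to points in the same cluster,
    b = mean distance to points in the nearest other cluster."""
    m = max(a, b)
    return (b - a) / m if m > 0 else 0.0


print(silhouette_sample(a=0.5, b=2.0))  # 0.75 -- well clustered
print(silhouette_sample(a=2.0, b=0.5))  # -0.75 -- likely mis-assigned
```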
API changes summary
-------------------
Here are the code migration instructions when upgrading from scikit-learn
version 0.9:
- Some estimators that may overwrite their inputs to save memory previously
had ``overwrite_`` parameters; these have been replaced with ``copy_``
parameters with exactly the opposite meaning.
This particularly affects some of the estimators in :mod:`~sklearn.linear_model`.
The default behavior is still to copy everything passed in.
- The SVMlight dataset loader :func:`~sklearn.datasets.load_svmlight_file` no
longer supports loading two files at once; use ``load_svmlight_files``
instead. Also, the (unused) ``buffer_mb`` parameter is gone.
- Sparse estimators in the :ref:`sgd` module use dense parameter vector
``coef_`` instead of ``sparse_coef_``. This significantly improves
test time performance.
- The :ref:`covariance` module now has a robust estimator of
covariance, the Minimum Covariance Determinant estimator.
- Cluster evaluation metrics in :mod:`~sklearn.metrics.cluster` have been refactored
but the changes are backwards compatible. They have been moved to the
`metrics.cluster.supervised`, along with
`metrics.cluster.unsupervised` which contains the Silhouette
Coefficient.
- The ``permutation_test_score`` function now behaves the same way as
``cross_val_score`` (i.e. uses the mean score across the folds.)
- Cross Validation generators now use integer indices (``indices=True``)
  by default instead of boolean masks. This makes it more intuitive to
use with sparse matrix data.
- The functions used for sparse coding, ``sparse_encode`` and
``sparse_encode_parallel`` have been combined into
:func:`~sklearn.decomposition.sparse_encode`, and the shapes of the arrays
have been transposed for consistency with the matrix factorization setting,
as opposed to the regression setting.
- Fixed an off-by-one error in the SVMlight/LibSVM file format handling;
files generated using :func:`~sklearn.datasets.dump_svmlight_file` should be
re-generated. (They should continue to work, but accidentally had one
extra column of zeros prepended.)
- ``BaseDictionaryLearning`` class replaced by ``SparseCodingMixin``.
- `sklearn.utils.extmath.fast_svd` has been renamed
:func:`~sklearn.utils.extmath.randomized_svd` and the default
oversampling is now fixed to 10 additional random vectors instead
of doubling the number of components to extract. The new behavior
follows the reference paper.
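The switch above from boolean masks to integer indices (``indices=True``) in
the cross-validation generators amounts to the following conversion (a
stdlib-only sketch of the relationship, not library code):

```python
def mask_to_indices(mask):
    """Convert a boolean membership mask to the integer indices that the
    cross-validation generators now yield by default."""
    return [i for i, selected in enumerate(mask) if selected]


mask = [True, False, True, True, False]
print(mask_to_indices(mask))  # [0, 2, 3]
```

Integer indices, unlike boolean masks, can be used directly to slice sparse
matrices, which is what motivated the change.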
People
------
The following people contributed to scikit-learn since last release:
* 246 `Andreas Müller`_
* 242 `Olivier Grisel`_
* 220 `Gilles Louppe`_
* 183 `Brian Holt`_
* 166 `Gael Varoquaux`_
* 144 `Lars Buitinck`_
* 73 `Vlad Niculae`_
* 65 `Peter Prettenhofer`_
* 64 `Fabian Pedregosa`_
* 60 Robert Layton
* 55 `Mathieu Blondel`_
* 52 `Jake Vanderplas`_
* 44 Noel Dawe
* 38 `Alexandre Gramfort`_
* 24 :user:`Virgile Fritsch <VirgileFritsch>`
* 23 `Satrajit Ghosh`_
* 3 Jan Hendrik Metzen
* 3 Kenneth C. Arnold
* 3 Shiqiao Du
* 3 Tim Sheerman-Chase
* 3 `Yaroslav Halchenko`_
* 2 Bala Subrahmanyam Varanasi
* 2 DraXus
* 2 Michael Eickenberg
* 1 Bogdan Trach
* 1 Félix-Antoine Fortin
* 1 Juan Manuel Caicedo Carvajal
* 1 Nelle Varoquaux
* 1 `Nicolas Pinto`_
* 1 Tiziano Zito
* 1 Xinfan Meng
.. _changes_0_9:
Version 0.9
===========
**September 21, 2011**
scikit-learn 0.9 was released in September 2011, three months after the 0.8
release and includes the new modules :ref:`manifold`, :ref:`dirichlet_process`
as well as several new algorithms and documentation improvements.
This release also includes the dictionary-learning work developed by
`Vlad Niculae`_ as part of the `Google Summer of Code
<https://developers.google.com/open-source/gsoc>`_ program.
.. |banner1| image:: ../auto_examples/manifold/images/thumb/sphx_glr_plot_compare_methods_thumb.png
:target: ../auto_examples/manifold/plot_compare_methods.html
.. |banner2| image:: ../auto_examples/linear_model/images/thumb/sphx_glr_plot_omp_thumb.png
:target: ../auto_examples/linear_model/plot_omp.html
.. |banner3| image:: ../auto_examples/decomposition/images/thumb/sphx_glr_plot_kernel_pca_thumb.png
:target: ../auto_examples/decomposition/plot_kernel_pca.html
.. |center-div| raw:: html
<div style="text-align: center; margin: 0px 0 -5px 0;">
.. |end-div| raw:: html
</div>
|center-div| |banner2| |banner1| |banner3| |end-div|
Changelog
---------
- New :ref:`manifold` module by `Jake Vanderplas`_ and
`Fabian Pedregosa`_.
- New :ref:`Dirichlet Process <dirichlet_process>` Gaussian Mixture
Model by `Alexandre Passos`_
- :ref:`neighbors` module refactoring by `Jake Vanderplas`_:
  general refactoring, support for sparse matrices in input, speed and
  documentation improvements. See the next section for a full list of API
  changes.
- Improvements to the :ref:`feature_selection` module by
  `Gilles Louppe`_: refactoring of the RFE classes, documentation
  rewrite, increased efficiency and minor API changes.
- :ref:`SparsePCA` by `Vlad Niculae`_, `Gael Varoquaux`_ and
`Alexandre Gramfort`_
- Printing an estimator now behaves independently of architectures
and Python version thanks to :user:`Jean Kossaifi <JeanKossaifi>`.
- :ref:`Loader for libsvm/svmlight format <libsvm_loader>` by
`Mathieu Blondel`_ and `Lars Buitinck`_
- Documentation improvements: thumbnails in
example gallery by `Fabian Pedregosa`_.
- Important bugfixes in :ref:`svm` module (segfaults, bad
performance) by `Fabian Pedregosa`_.
- Added :ref:`multinomial_naive_bayes` and :ref:`bernoulli_naive_bayes`
by `Lars Buitinck`_
- Text feature extraction optimizations by Lars Buitinck
- Chi-Square feature selection
(:func:`feature_selection.chi2`) by `Lars Buitinck`_.
- :ref:`sample_generators` module refactoring by `Gilles Louppe`_
- :ref:`multiclass` by `Mathieu Blondel`_
- Ball tree rewrite by `Jake Vanderplas`_
- Implementation of :ref:`dbscan` algorithm by Robert Layton
- Kmeans predict and transform by Robert Layton
- Preprocessing module refactoring by `Olivier Grisel`_
- Faster mean shift by Conrad Lee
- New ``Bootstrap``, :ref:`ShuffleSplit` and various other
improvements in cross validation schemes by `Olivier Grisel`_ and
`Gael Varoquaux`_
- Adjusted Rand index and V-Measure clustering evaluation metrics by `Olivier Grisel`_
- Added :class:`Orthogonal Matching Pursuit <linear_model.OrthogonalMatchingPursuit>` by `Vlad Niculae`_
- Added 2D-patch extractor utilities in the :ref:`feature_extraction` module by `Vlad Niculae`_
- Implementation of :class:`~linear_model.LassoLarsCV`
(cross-validated Lasso solver using the Lars algorithm) and
:class:`~linear_model.LassoLarsIC` (BIC/AIC model
selection in Lars) by `Gael Varoquaux`_
and `Alexandre Gramfort`_
- Scalability improvements to :func:`metrics.roc_curve` by Olivier Hervieu
- Distance helper functions :func:`metrics.pairwise_distances`
and :func:`metrics.pairwise.pairwise_kernels` by Robert Layton
- :class:`Mini-Batch K-Means <cluster.MiniBatchKMeans>` by Nelle Varoquaux and Peter Prettenhofer.
- mldata utilities by Pietro Berkes.
- :ref:`olivetti_faces_dataset` by `David Warde-Farley`_.
API changes summary
-------------------
Here are the code migration instructions when upgrading from scikit-learn
version 0.8:
- The ``scikits.learn`` package was renamed ``sklearn``. There is
still a ``scikits.learn`` package alias for backward compatibility.
Third-party projects with a dependency on scikit-learn 0.9+ should
upgrade their codebase. For instance, under Linux / MacOSX just run
(make a backup first!)::
find -name "*.py" | xargs sed -i 's/\bscikits.learn\b/sklearn/g'
- Estimators no longer accept model parameters as ``fit`` arguments:
  instead all parameters must be passed as constructor
  arguments or via the now public ``set_params`` method inherited
  from :class:`~base.BaseEstimator`.

  Some estimators can still accept keyword arguments to ``fit``,
  but this is restricted to data-dependent values (e.g. a Gram matrix
  or an affinity matrix that is precomputed from the ``X`` data matrix).
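The constructor-arguments plus ``set_params`` pattern can be sketched as
follows (a hypothetical ``TinyEstimator``, not actual scikit-learn code):

```python
# Hypothetical minimal sketch of the pattern BaseEstimator provides:
# hyperparameters live on the constructor, fit() only receives data.
class TinyEstimator:
    def __init__(self, alpha=1.0, fit_intercept=True):
        self.alpha = alpha
        self.fit_intercept = fit_intercept

    def get_params(self):
        return {"alpha": self.alpha, "fit_intercept": self.fit_intercept}

    def set_params(self, **params):
        for name, value in params.items():
            if name not in self.get_params():
                raise ValueError(f"Unknown parameter: {name}")
            setattr(self, name, value)
        return self

    def fit(self, X, y):
        # fit() now only accepts the data, never model hyperparameters
        self.coef_ = 0.0  # placeholder for the real fitting logic
        return self

est = TinyEstimator(alpha=0.5).set_params(alpha=0.1)
print(est.alpha)  # 0.1
```

Because ``set_params`` returns the estimator, parameter updates can be
chained before calling ``fit`` on the data alone.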
- The ``cross_val`` package has been renamed to ``cross_validation``
although there is also a ``cross_val`` package alias in place for
backward compatibility.
Third-party projects with a dependency on scikit-learn 0.9+ should
upgrade their codebase. For instance, under Linux / MacOSX just run
(make a backup first!)::
find -name "*.py" | xargs sed -i 's/\bcross_val\b/cross_validation/g'
- The ``score_func`` argument of the
  ``sklearn.cross_validation.cross_val_score`` function is now expected
  to accept ``y_test`` and ``y_predicted`` as its only arguments for
  classification and regression tasks, or ``X_test`` for unsupervised
  estimators.
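Under this convention, a classification ``score_func`` is any callable of
the form ``score_func(y_test, y_predicted)``. A hypothetical example (not a
scikit-learn function):

```python
# A score_func compatible with the new cross_val_score convention:
# it receives only the true labels and the predicted labels.
def accuracy_score_func(y_test, y_predicted):
    matches = sum(1 for t, p in zip(y_test, y_predicted) if t == p)
    return matches / len(y_test)

print(accuracy_score_func([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```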
- The ``gamma`` parameter for support vector machine algorithms is set
  to ``1 / n_features`` by default, instead of ``1 / n_samples``.
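Concretely, the new default is computed from the number of columns of the
training data rather than the number of rows (illustrative arithmetic only):

```python
# Illustration of the changed default: gamma = 1 / n_features.
n_samples, n_features = 1000, 20  # hypothetical training data shape

old_default_gamma = 1.0 / n_samples   # pre-0.9 behaviour
new_default_gamma = 1.0 / n_features  # 0.9 behaviour

print(old_default_gamma, new_default_gamma)  # 0.001 0.05
```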
- The ``sklearn.hmm`` has been marked as orphaned: it will be removed
from scikit-learn in version 0.11 unless someone steps up to
contribute documentation, examples and fix lurking numerical
stability issues.
- ``sklearn.neighbors`` has been made into a submodule. The two previously
available estimators, ``NeighborsClassifier`` and ``NeighborsRegressor``
have been marked as deprecated. Their functionality has been divided
among five new classes: ``NearestNeighbors`` for unsupervised neighbors
searches, ``KNeighborsClassifier`` & ``RadiusNeighborsClassifier``
for supervised classification problems, and ``KNeighborsRegressor``
& ``RadiusNeighborsRegressor`` for supervised regression problems.
- ``sklearn.ball_tree.BallTree`` has been moved to
``sklearn.neighbors.BallTree``. Using the former will generate a warning.
- ``sklearn.linear_model.LARS()`` and related classes (LassoLARS,
LassoLARSCV, etc.) have been renamed to
``sklearn.linear_model.Lars()``.
- All distance metrics and kernels in ``sklearn.metrics.pairwise`` now have a ``Y``
  parameter, which defaults to ``None``. If not given, the result is the pairwise
  distance (or kernel similarity) between the samples in ``X``. If given, the
  result is the pairwise distance (or kernel similarity) between samples in ``X``
  and ``Y``.

- ``sklearn.metrics.pairwise.l1_distance`` is now called ``manhattan_distance``,
  and by default returns the pairwise distance. For the component-wise distance,
  set the parameter ``sum_over_features`` to ``False``.
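The two behaviours can be illustrated with a simplified pure-Python
re-implementation (a sketch of the semantics, not the scikit-learn function):

```python
# Simplified re-implementation illustrating the two behaviours of
# manhattan_distance: summed pairwise distances vs. component-wise.
def manhattan_distance(X, Y, sum_over_features=True):
    if sum_over_features:
        # One scalar per (x, y) pair: sum of absolute differences.
        return [[sum(abs(a - b) for a, b in zip(x, y)) for y in Y]
                for x in X]
    # Component-wise: one absolute difference per feature,
    # one row per (x, y) pair.
    return [[abs(a - b) for a, b in zip(x, y)] for x in X for y in Y]

X = [[0, 1], [2, 2]]
Y = [[1, 1]]
print(manhattan_distance(X, Y))                           # [[1], [2]]
print(manhattan_distance(X, Y, sum_over_features=False))  # [[1, 0], [1, 1]]
```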
Backward compatibility package aliases and other deprecated classes and
functions will be removed in version 0.11.
People
------
38 people contributed to this release.
- 387 `Vlad Niculae`_
- 320 `Olivier Grisel`_
- 192 `Lars Buitinck`_
- 179 `Gael Varoquaux`_
- 168 `Fabian Pedregosa`_ (`INRIA`_, `Parietal Team`_)
- 127 `Jake Vanderplas`_
- 120 `Mathieu Blondel`_
- 85 `Alexandre Passos`_
- 67 `Alexandre Gramfort`_
- 57 `Peter Prettenhofer`_
- 56 `Gilles Louppe`_
- 42 Robert Layton
- 38 Nelle Varoquaux
- 32 :user:`Jean Kossaifi <JeanKossaifi>`
- 30 Conrad Lee
- 22 Pietro Berkes
- 18 andy
- 17 David Warde-Farley
- 12 Brian Holt
- 11 Robert
- 8 Amit Aides
- 8 :user:`Virgile Fritsch <VirgileFritsch>`
- 7 `Yaroslav Halchenko`_
- 6 Salvatore Masecchia
- 5 Paolo Losi
- 4 Vincent Schut
- 3 Alexis Metaireau
- 3 Bryan Silverthorn
- 3 `Andreas Müller`_
- 2 Minwoo Jake Lee
- 1 Emmanuelle Gouillart
- 1 Keith Goodman
- 1 Lucas Wiman
- 1 `Nicolas Pinto`_
- 1 Thouis (Ray) Jones
- 1 Tim Sheerman-Chase
.. _changes_0_8:
Version 0.8
===========
**May 11, 2011**
scikit-learn 0.8 was released in May 2011, one month after the first
"international" `scikit-learn coding sprint
<https://github.com/scikit-learn/scikit-learn/wiki/Upcoming-events>`_ and is
marked by the inclusion of important modules: :ref:`hierarchical_clustering`,
:ref:`cross_decomposition`, :ref:`NMF`, initial support for Python 3 and by important
enhancements and bug fixes.
Changelog
---------
Several new modules were introduced during this release:
- New :ref:`hierarchical_clustering` module by Vincent Michel,
`Bertrand Thirion`_, `Alexandre Gramfort`_ and `Gael Varoquaux`_.
- :ref:`kernel_pca` implementation by `Mathieu Blondel`_
- :ref:`labeled_faces_in_the_wild_dataset` by `Olivier Grisel`_.
- New :ref:`cross_decomposition` module by `Edouard Duchesnay`_.
- :ref:`NMF` module by `Vlad Niculae`_
- Implementation of the :ref:`oracle_approximating_shrinkage` algorithm by
:user:`Virgile Fritsch <VirgileFritsch>` in the :ref:`covariance` module.
Some other modules benefited from significant improvements or cleanups.
- Initial support for Python 3: builds and imports cleanly, with some
  modules usable while others still have failing tests, by `Fabian Pedregosa`_.
- :class:`~decomposition.PCA` is now usable from the Pipeline object by `Olivier Grisel`_.
- Guide :ref:`performance-howto` by `Olivier Grisel`_.
- Fixes for memory leaks in libsvm bindings, 64-bit safer BallTree by Lars Buitinck.
- Bug and style fixes in the :ref:`k_means` algorithm by Jan Schlüter.
- Added a ``converged`` attribute to Gaussian Mixture Models by Vincent Schut.
- Implemented ``transform`` and ``predict_log_proba`` in
  :class:`~discriminant_analysis.LinearDiscriminantAnalysis` by `Mathieu Blondel`_.
- Refactoring in the :ref:`svm` module and bug fixes by `Fabian Pedregosa`_,
`Gael Varoquaux`_ and Amit Aides.
- Refactored SGD module (removed code duplication, better variable naming),
added interface for sample weight by `Peter Prettenhofer`_.
- Wrapped BallTree with Cython by Thouis (Ray) Jones.
- Added function :func:`svm.l1_min_c` by Paolo Losi.
- Typos, doc style, etc. by `Yaroslav Halchenko`_, `Gael Varoquaux`_,
`Olivier Grisel`_, Yann Malet, `Nicolas Pinto`_, Lars Buitinck and
`Fabian Pedregosa`_.
People
-------
People that made this release possible, preceded by number of commits:
- 159 `Olivier Grisel`_
- 96 `Gael Varoquaux`_
- 96 `Vlad Niculae`_
- 94 `Fabian Pedregosa`_
- 36 `Alexandre Gramfort`_
- 32 Paolo Losi
- 31 `Edouard Duchesnay`_
- 30 `Mathieu Blondel`_
- 25 `Peter Prettenhofer`_
- 22 `Nicolas Pinto`_
- 11 :user:`Virgile Fritsch <VirgileFritsch>`
- 7 Lars Buitinck
- 6 Vincent Michel
- 5 `Bertrand Thirion`_
- 4 Thouis (Ray) Jones
- 4 Vincent Schut
- 3 Jan Schlüter
- 2 Julien Miotte
- 2 `Matthieu Perrot`_
- 2 Yann Malet
- 2 `Yaroslav Halchenko`_
- 1 Amit Aides
- 1 `Andreas Müller`_
- 1 Feth Arezki
- 1 Meng Xinfan
.. _changes_0_7:
Version 0.7
===========
**March 2, 2011**
scikit-learn 0.7 was released in March 2011, roughly three months
after the 0.6 release. This release is marked by the speed
improvements in existing algorithms like k-Nearest Neighbors and
K-Means algorithm and by the inclusion of an efficient algorithm for
computing the Ridge Generalized Cross Validation solution. Unlike the
preceding release, no new modules were added to this release.
Changelog
---------
- Performance improvements for Gaussian Mixture Model sampling [Jan
Schlüter].
- Implementation of efficient leave-one-out cross-validated Ridge in
:class:`~linear_model.RidgeCV` [`Mathieu Blondel`_]
- Better handling of collinearity and early stopping in
:func:`linear_model.lars_path` [`Alexandre Gramfort`_ and `Fabian
Pedregosa`_].
- Fixes for liblinear ordering of labels and sign of coefficients
[Dan Yamins, Paolo Losi, `Mathieu Blondel`_ and `Fabian Pedregosa`_].
- Performance improvements for Nearest Neighbors algorithm in
high-dimensional spaces [`Fabian Pedregosa`_].
- Performance improvements for :class:`~cluster.KMeans` [`Gael
Varoquaux`_ and `James Bergstra`_].
- Sanity checks for SVM-based classes [`Mathieu Blondel`_].
- Refactoring of `neighbors.NeighborsClassifier` and
:func:`neighbors.kneighbors_graph`: added different algorithms for
the k-Nearest Neighbor Search and implemented a more stable
algorithm for finding barycenter weights. Also added some
developer documentation for this module, see
`notes_neighbors
<https://github.com/scikit-learn/scikit-learn/wiki/Neighbors-working-notes>`_ for more information [`Fabian Pedregosa`_].
- Documentation improvements: Added `pca.RandomizedPCA` and
:class:`~linear_model.LogisticRegression` to the class
reference. Also added references of matrices used for clustering
and other fixes [`Gael Varoquaux`_, `Fabian Pedregosa`_, `Mathieu
Blondel`_, `Olivier Grisel`_, Virgile Fritsch, Emmanuelle
  Gouillart].
- Bound ``decision_function`` in classes that make use of liblinear_,
  both dense and sparse variants, like :class:`~svm.LinearSVC` or
  :class:`~linear_model.LogisticRegression` [`Fabian Pedregosa`_].
- Performance and API improvements to
:func:`metrics.pairwise.euclidean_distances` and to
`pca.RandomizedPCA` [`James Bergstra`_].
- Fix compilation issues under NetBSD [Kamel Ibn Hassen Derouiche]
- Allow input sequences of different lengths in `hmm.GaussianHMM`
[`Ron Weiss`_].
- Fix bug in affinity propagation caused by incorrect indexing [Xinfan Meng]
People
------
People that made this release possible, preceded by number of commits:
- 85 `Fabian Pedregosa`_
- 67 `Mathieu Blondel`_
- 20 `Alexandre Gramfort`_
- 19 `James Bergstra`_
- 14 Dan Yamins
- 13 `Olivier Grisel`_
- 12 `Gael Varoquaux`_
- 4 `Edouard Duchesnay`_
- 4 `Ron Weiss`_
- 2 Satrajit Ghosh
- 2 Vincent Dubourg
- 1 Emmanuelle Gouillart
- 1 Kamel Ibn Hassen Derouiche
- 1 Paolo Losi
- 1 VirgileFritsch
- 1 `Yaroslav Halchenko`_
- 1 Xinfan Meng
.. _changes_0_6:
Version 0.6
===========
**December 21, 2010**
scikit-learn 0.6 was released in December 2010. It is marked by the
inclusion of several new modules and a general renaming of old
ones. It is also marked by the inclusion of new examples, including
applications to real-world datasets.
Changelog
---------
- New `stochastic gradient
<https://scikit-learn.org/stable/modules/sgd.html>`_ descent
module by Peter Prettenhofer. The module comes with complete
documentation and examples.
- Improved svm module: memory consumption has been reduced by 50%,
heuristic to automatically set class weights, possibility to
assign weights to samples (see
:ref:`sphx_glr_auto_examples_svm_plot_weighted_samples.py` for an example).
- New :ref:`gaussian_process` module by Vincent Dubourg. This module
also has great documentation and some very neat examples. See
example_gaussian_process_plot_gp_regression.py or
example_gaussian_process_plot_gp_probabilistic_classification_after_regression.py
for a taste of what can be done.
- It is now possible to use liblinear's Multi-class SVC (option
multi_class in :class:`~svm.LinearSVC`)
- New features and performance improvements of text feature
extraction.
- Improved sparse matrix support, both in main classes
  (:class:`~model_selection.GridSearchCV`) and in the modules
  ``sklearn.svm.sparse`` and ``sklearn.linear_model.sparse``.
- Lots of cool new examples and a new section that uses real-world
datasets was created. These include:
:ref:`sphx_glr_auto_examples_applications_plot_face_recognition.py`,
:ref:`sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py`,
:ref:`sphx_glr_auto_examples_applications_wikipedia_principal_eigenvector.py` and
others.
- Faster :ref:`least_angle_regression` algorithm. It is now 2x
  faster than the R version in the worst case and up to 10x faster
  in some cases.
- Faster coordinate descent algorithm. In particular, the full path
  version of lasso (:func:`linear_model.lasso_path`) is more than
  200x faster than before.
- It is now possible to get probability estimates from a
:class:`~linear_model.LogisticRegression` model.
- module renaming: the glm module has been renamed to linear_model,
the gmm module has been included into the more general mixture
model and the sgd module has been included in linear_model.
- Lots of bug fixes and documentation improvements.
People
------
People that made this release possible, preceded by number of commits:
* 207 `Olivier Grisel`_
* 167 `Fabian Pedregosa`_
* 97 `Peter Prettenhofer`_
* 68 `Alexandre Gramfort`_
* 59 `Mathieu Blondel`_
* 55 `Gael Varoquaux`_
* 33 Vincent Dubourg
* 21 `Ron Weiss`_
* 9 Bertrand Thirion
* 3 `Alexandre Passos`_
* 3 Anne-Laure Fouque
* 2 Ronan Amicel
* 1 `Christian Osendorfer`_
.. _changes_0_5:
Version 0.5
===========
**October 11, 2010**
Changelog
---------
New classes
-----------
- Support for sparse matrices in some classifiers of modules
``svm`` and ``linear_model`` (see `svm.sparse.SVC`,
`svm.sparse.SVR`, `svm.sparse.LinearSVC`,
`linear_model.sparse.Lasso`, `linear_model.sparse.ElasticNet`)
- New :class:`~pipeline.Pipeline` object to compose different estimators.
- Recursive Feature Elimination routines in module
:ref:`feature_selection`.
- Addition of various classes capable of cross validation in the
linear_model module (:class:`~linear_model.LassoCV`, :class:`~linear_model.ElasticNetCV`,
etc.).
- New, more efficient LARS algorithm implementation. The Lasso
variant of the algorithm is also implemented. See
:class:`~linear_model.lars_path`, :class:`~linear_model.Lars` and
:class:`~linear_model.LassoLars`.
- New Hidden Markov Models module (see classes
`hmm.GaussianHMM`, `hmm.MultinomialHMM`, `hmm.GMMHMM`)
- New module feature_extraction (see :ref:`class reference
<feature_extraction_ref>`)
- New FastICA algorithm in module sklearn.fastica
Documentation
-------------
- Improved documentation for many modules, now separating
narrative documentation from the class reference. As an example,
see `documentation for the SVM module
<https://scikit-learn.org/stable/modules/svm.html>`_ and the
complete `class reference
<https://scikit-learn.org/stable/modules/classes.html>`_.
Fixes
-----
- API changes: variable names now adhere to PEP-8 and are
  more meaningful.
- Fixes for svm module to run on a shared memory context
(multiprocessing).
- It is again possible to generate latex (and thus PDF) from the
sphinx docs.
Examples
--------
- New examples using some of the mlcomp datasets:
``sphx_glr_auto_examples_mlcomp_sparse_document_classification.py`` (since removed) and
:ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`
- Many more examples. `See here
<https://scikit-learn.org/stable/auto_examples/index.html>`_
the full list of examples.
External dependencies
---------------------
- Joblib is now a dependency of this package, although it is
  shipped with it (as ``sklearn.externals.joblib``).
Removed modules
---------------
- Module ann (Artificial Neural Networks) has been removed from
the distribution. Users wanting this sort of algorithms should
take a look into pybrain.
Misc
----
- New sphinx theme for the web page.
Authors
-------
The following is a list of authors for this release, preceded by
number of commits:
* 262 Fabian Pedregosa
* 240 Gael Varoquaux
* 149 Alexandre Gramfort
* 116 Olivier Grisel
* 40 Vincent Michel
* 38 Ron Weiss
* 23 Matthieu Perrot
* 10 Bertrand Thirion
* 9 VirgileFritsch
* 7 Yaroslav Halchenko
* 6 Edouard Duchesnay
* 4 Mathieu Blondel
* 1 Ariel Rokem
* 1 Matthieu Brucher
Version 0.4
===========
**August 26, 2010**
Changelog
---------
Major changes in this release include:
- Coordinate Descent algorithm (Lasso, ElasticNet) refactoring &
  speed improvements (roughly 100x faster).
- Coordinate Descent Refactoring (and bug fixing) for consistency
with R's package GLMNET.
- New metrics module.
- New GMM module contributed by Ron Weiss.
- Implementation of the LARS algorithm (without Lasso variant for now).
- feature_selection module redesign.
- Migration to GIT as version control system.
- Removal of obsolete attrselect module.
- Rename of private compiled extensions (added underscore).
- Removal of legacy unmaintained code.
- Documentation improvements (both docstring and rst).
- Improvement of the build system to (optionally) link with MKL.
Also, provide a lite BLAS implementation in case no system-wide BLAS is
found.
- Lots of new examples.
- Many, many bug fixes ...
Authors
-------
The committer list for this release is the following (preceded by number
of commits):
* 143 Fabian Pedregosa
* 35 Alexandre Gramfort
* 34 Olivier Grisel
* 11 Gael Varoquaux
* 5 Yaroslav Halchenko
* 2 Vincent Michel
* 1 Chris Filo Gorgolewski
Earlier versions
================
Earlier versions included contributions by Fred Mailhot, David Cooke,
David Huard, Dave Morrill, Ed Schofield, Travis Oliphant, Pearu Peterson. | scikit-learn | include contributors rst currentmodule sklearn Older Versions changes 0 12 1 Version 0 12 1 October 8 2012 The 0 12 1 release is a bug fix release with no additional features but is instead a set of bug fixes Changelog Improved numerical stability in spectral embedding by Gael Varoquaux Doctest under windows 64bit by Gael Varoquaux Documentation fixes for elastic net by Andreas M ller and Alexandre Gramfort Proper behavior with fortran ordered NumPy arrays by Gael Varoquaux Make GridSearchCV work with non CSR sparse matrix by Lars Buitinck Fix parallel computing in MDS by Gael Varoquaux Fix Unicode support in count vectorizer by Andreas M ller Fix MinCovDet breaking with X shape 3 1 by user Virgile Fritsch VirgileFritsch Fix clone of SGD objects by Peter Prettenhofer Stabilize GMM by user Virgile Fritsch VirgileFritsch People 14 Peter Prettenhofer 12 Gael Varoquaux 10 Andreas M ller 5 Lars Buitinck 3 user Virgile Fritsch VirgileFritsch 1 Alexandre Gramfort 1 Gilles Louppe 1 Mathieu Blondel changes 0 12 Version 0 12 September 4 2012 Changelog Various speed improvements of the ref decision trees tree module by Gilles Louppe class ensemble GradientBoostingRegressor and class ensemble GradientBoostingClassifier now support feature subsampling via the max features argument by Peter Prettenhofer Added Huber and Quantile loss functions to class ensemble GradientBoostingRegressor by Peter Prettenhofer ref Decision trees tree and ref forests of randomized trees forest now support multi output classification and regression problems by Gilles Louppe Added class preprocessing LabelEncoder a simple utility class to normalize labels or transform non numerical labels by Mathieu Blondel Added the epsilon insensitive loss and the ability to make probabilistic predictions with the modified huber loss in ref sgd by Mathieu Blondel Added ref multidimensional scaling by Nelle Varoquaux SVMlight 
file format loader now detects compressed gzip bzip2 files and decompresses them on the fly by Lars Buitinck SVMlight file format serializer now preserves double precision floating point values by Olivier Grisel A common testing framework for all estimators was added by Andreas M ller Understandable error messages for estimators that do not accept sparse input by Gael Varoquaux Speedups in hierarchical clustering by Gael Varoquaux In particular building the tree now supports early stopping This is useful when the number of clusters is not small compared to the number of samples Add MultiTaskLasso and MultiTaskElasticNet for joint feature selection by Alexandre Gramfort Added metrics auc score and func metrics average precision score convenience functions by Andreas M ller Improved sparse matrix support in the ref feature selection module by Andreas M ller New word boundaries aware character n gram analyzer for the ref text feature extraction module by user kernc kernc Fixed bug in spectral clustering that led to single point clusters by Andreas M ller In class feature extraction text CountVectorizer added an option to ignore infrequent words min df by Andreas M ller Add support for multiple targets in some linear models ElasticNet Lasso and OrthogonalMatchingPursuit by Vlad Niculae and Alexandre Gramfort Fixes in decomposition ProbabilisticPCA score function by Wei Li Fixed feature importance computation in ref gradient boosting API changes summary The old scikits learn package has disappeared all code should import from sklearn instead which was introduced in 0 9 In func metrics roc curve the thresholds array is now returned with it s order reversed in order to keep it consistent with the order of the returned fpr and tpr In hmm objects like hmm GaussianHMM hmm MultinomialHMM etc all parameters must be passed to the object when initialising it and not through fit Now fit will only accept the data as an input parameter For all SVM classes a faulty behavior of gamma 
was fixed Previously the default gamma value was only computed the first time fit was called and then stored It is now recalculated on every call to fit All Base classes are now abstract meta classes so that they can not be instantiated func cluster ward tree now also returns the parent array This is necessary for early stopping in which case the tree is not completely built In class feature extraction text CountVectorizer the parameters min n and max n were joined to the parameter n gram range to enable grid searching both at once In class feature extraction text CountVectorizer words that appear only in one document are now ignored by default To reproduce the previous behavior set min df 1 Fixed API inconsistency meth linear model SGDClassifier predict proba now returns 2d array when fit on two classes Fixed API inconsistency meth discriminant analysis QuadraticDiscriminantAnalysis decision function and meth discriminant analysis LinearDiscriminantAnalysis decision function now return 1d arrays when fit on two classes Grid of alphas used for fitting class linear model LassoCV and class linear model ElasticNetCV is now stored in the attribute alphas rather than overriding the init parameter alphas Linear models when alpha is estimated by cross validation store the estimated value in the alpha attribute rather than just alpha or best alpha class ensemble GradientBoostingClassifier now supports meth ensemble GradientBoostingClassifier staged predict proba and meth ensemble GradientBoostingClassifier staged predict svm sparse SVC and other sparse SVM classes are now deprecated The all classes in the ref svm module now automatically select the sparse or dense representation base on the input All clustering algorithms now interpret the array X given to fit as input data in particular class cluster SpectralClustering and class cluster AffinityPropagation which previously expected affinity matrices For clustering algorithms that take the desired number of clusters as a 
parameter this parameter is now called n clusters People 267 Andreas M ller 94 Gilles Louppe 89 Gael Varoquaux 79 Peter Prettenhofer 60 Mathieu Blondel 57 Alexandre Gramfort 52 Vlad Niculae 45 Lars Buitinck 44 Nelle Varoquaux 37 Jaques Grobler 30 Alexis Mignon 30 Immanuel Bayer 27 Olivier Grisel 16 Subhodeep Moitra 13 Yannick Schwartz 12 user kernc kernc 11 user Virgile Fritsch VirgileFritsch 9 Daniel Duckworth 9 Fabian Pedregosa 9 Robert Layton 8 John Benediktsson 7 Marko Burjek 5 Nicolas Pinto 4 Alexandre Abraham 4 Jake Vanderplas 3 Brian Holt 3 Edouard Duchesnay 3 Florian Hoenig 3 flyingimmidev 2 Francois Savard 2 Hannes Schulz 2 Peter Welinder 2 Yaroslav Halchenko 2 Wei Li 1 Alex Companioni 1 Brandyn A White 1 Bussonnier Matthias 1 Charles Pierre Astolfi 1 Dan O Huiginn 1 David Cournapeau 1 Keith Goodman 1 Ludwig Schwardt 1 Olivier Hervieu 1 Sergio Medina 1 Shiqiao Du 1 Tim Sheerman Chase 1 buguen changes 0 11 Version 0 11 May 7 2012 Changelog Highlights Gradient boosted regression trees ref gradient boosting for classification and regression by Peter Prettenhofer and Scott White Simple dict based feature loader with support for categorical variables class feature extraction DictVectorizer by Lars Buitinck Added Matthews correlation coefficient func metrics matthews corrcoef and added macro and micro average options to func metrics precision score func metrics recall score and func metrics f1 score by Satrajit Ghosh ref out of bag of generalization error for ref ensemble by Andreas M ller Randomized sparse linear models for feature selection by Alexandre Gramfort and Gael Varoquaux ref label propagation for semi supervised learning by Clay Woolam Note the semi supervised API is still work in progress and may change Added BIC AIC model selection to classical ref gmm and unified the API with the remainder of scikit learn by Bertrand Thirion Added sklearn cross validation StratifiedShuffleSplit which is a sklearn cross validation ShuffleSplit with balanced splits 
by Yannick Schwartz class sklearn neighbors NearestCentroid classifier added along with a shrink threshold parameter which implements shrunken centroid classification by Robert Layton Other changes Merged dense and sparse implementations of ref sgd module and exposed utility extension types for sequential datasets seq dataset and weight vectors weight vector by Peter Prettenhofer Added partial fit support for online minibatch learning and warm start to the ref sgd module by Mathieu Blondel Dense and sparse implementations of ref svm classes and class linear model LogisticRegression merged by Lars Buitinck Regressors can now be used as base estimator in the ref multiclass module by Mathieu Blondel Added n jobs option to func metrics pairwise distances and func metrics pairwise pairwise kernels for parallel computation by Mathieu Blondel ref k means can now be run in parallel using the n jobs argument to either ref k means or class cluster KMeans by Robert Layton Improved ref cross validation and ref grid search documentation and introduced the new cross validation train test split helper function by Olivier Grisel class svm SVC members coef and intercept changed sign for consistency with decision function for kernel linear coef was fixed in the one vs one case by Andreas M ller Performance improvements to efficient leave one out cross validated Ridge regression esp for the n samples n features case in class linear model RidgeCV by Reuben Fletcher Costin Refactoring and simplification of the ref text feature extraction API and fixed a bug that caused possible negative IDF by Olivier Grisel Beam pruning option in BaseHMM module has been removed since it is difficult to Cythonize If you are interested in contributing a Cython version you can use the python version in the git history as a reference Classes in ref neighbors now support arbitrary Minkowski metric for nearest neighbors searches The metric can be specified by argument p API changes summary covariance 
EllipticEnvelop is now deprecated Please use class covariance EllipticEnvelope instead NeighborsClassifier and NeighborsRegressor are gone in the module ref neighbors Use the classes class neighbors KNeighborsClassifier class neighbors RadiusNeighborsClassifier class neighbors KNeighborsRegressor and or class neighbors RadiusNeighborsRegressor instead Sparse classes in the ref sgd module are now deprecated In mixture GMM mixture DPGMM and mixture VBGMM parameters must be passed to an object when initialising it and not through fit Now fit will only accept the data as an input parameter methods rvs and decode in GMM module are now deprecated sample and score or predict should be used instead attribute scores and pvalues in univariate feature selection objects are now deprecated scores or pvalues should be used instead In class linear model LogisticRegression class svm LinearSVC class svm SVC and class svm NuSVC the class weight parameter is now an initialization parameter not a parameter to fit This makes grid searches over this parameter possible LFW data is now always shape n samples n features to be consistent with the Olivetti faces dataset Use images and pairs attribute to access the natural images shapes instead In class svm LinearSVC the meaning of the multi class parameter changed Options now are ovr and crammer singer with ovr being the default This does not change the default behavior but hopefully is less confusing Class feature selection text Vectorizer is deprecated and replaced by feature selection text TfidfVectorizer The preprocessor analyzer nested structure for text feature extraction has been removed All those features are now directly passed as flat constructor arguments to feature selection text TfidfVectorizer and feature selection text CountVectorizer in particular the following parameters are now used analyzer can be word or char to switch the default analysis scheme or use a specific python callable as previously tokenizer and preprocessor 
have been introduced to make it still possible to customize those steps with the new API input explicitly control how to interpret the sequence passed to fit and predict filenames file objects or direct byte or Unicode strings charset decoding is explicit and strict by default the vocabulary fitted or not is now stored in the vocabulary attribute to be consistent with the project conventions Class feature selection text TfidfVectorizer now derives directly from feature selection text CountVectorizer to make grid search trivial methods rvs in BaseHMM module are now deprecated sample should be used instead Beam pruning option in BaseHMM module is removed since it is difficult to be Cythonized If you are interested you can look in the history codes by git The SVMlight format loader now supports files with both zero based and one based column indices since both occur in the wild Arguments in class class model selection ShuffleSplit are now consistent with class model selection StratifiedShuffleSplit Arguments test fraction and train fraction are deprecated and renamed to test size and train size and can accept both float and int Arguments in class Bootstrap are now consistent with class model selection StratifiedShuffleSplit Arguments n test and n train are deprecated and renamed to test size and train size and can accept both float and int Argument p added to classes in ref neighbors to specify an arbitrary Minkowski metric for nearest neighbors searches People 282 Andreas M ller 239 Peter Prettenhofer 198 Gael Varoquaux 129 Olivier Grisel 114 Mathieu Blondel 103 Clay Woolam 96 Lars Buitinck 88 Jaques Grobler 82 Alexandre Gramfort 50 Bertrand Thirion 42 Robert Layton 28 flyingimmidev 26 Jake Vanderplas 26 Shiqiao Du 21 Satrajit Ghosh 17 David Marek 17 Gilles Louppe 14 Vlad Niculae 11 Yannick Schwartz 10 Fabian Pedregosa 9 fcostin 7 Nick Wilson 5 Adrien Gaidon 5 Nicolas Pinto 4 David Warde Farley 5 Nelle Varoquaux 5 Emmanuelle Gouillart 3 Joonas Sillanp 3 Paolo Losi 2 
Charles McCarthy 2 Roy Hyunjin Han 2 Scott White 2 ibayer 1 Brandyn White 1 Carlos Scheidegger 1 Claire Revillet 1 Conrad Lee 1 Edouard Duchesnay 1 Jan Hendrik Metzen 1 Meng Xinfan 1 Rob Zinkov 1 Shiqiao 1 Udi Weinsberg 1 Virgile Fritsch 1 Xinfan Meng 1 Yaroslav Halchenko 1 jansoe 1 Leon Palafox changes 0 10 Version 0 10 January 11 2012 Changelog Python 2 5 compatibility was dropped the minimum Python version needed to use scikit learn is now 2 6 ref sparse inverse covariance estimation using the graph Lasso with associated cross validated estimator by Gael Varoquaux New ref Tree tree module by Brian Holt Peter Prettenhofer Satrajit Ghosh and Gilles Louppe The module comes with complete documentation and examples Fixed a bug in the RFE module by Gilles Louppe issue 378 Fixed a memory leak in ref svm module by Brian Holt issue 367 Faster tests by Fabian Pedregosa and others Silhouette Coefficient cluster analysis evaluation metric added as func sklearn metrics silhouette score by Robert Layton Fixed a bug in ref k means in the handling of the n init parameter the clustering algorithm used to be run n init times but the last solution was retained instead of the best solution by Olivier Grisel Minor refactoring in ref sgd module consolidated dense and sparse predict methods Enhanced test time performance by converting model parameters to fortran style arrays after fitting only multi class Adjusted Mutual Information metric added as func sklearn metrics adjusted mutual info score by Robert Layton Models like SVC SVR LinearSVC LogisticRegression from libsvm liblinear now support scaling of C regularization parameter by the number of samples by Alexandre Gramfort New ref Ensemble Methods ensemble module by Gilles Louppe and Brian Holt The module comes with the random forest algorithm and the extra trees method along with documentation and examples ref outlier detection outlier and novelty detection by user Virgile Fritsch VirgileFritsch ref kernel approximation a 
transform implementing kernel approximation for fast SGD on non linear kernels by Andreas M ller Fixed a bug due to atom swapping in ref OMP by Vlad Niculae ref SparseCoder by Vlad Niculae ref mini batch kmeans performance improvements by Olivier Grisel ref k means support for sparse matrices by Mathieu Blondel Improved documentation for developers and for the mod sklearn utils module by Jake Vanderplas Vectorized 20newsgroups dataset loader func sklearn datasets fetch 20newsgroups vectorized by Mathieu Blondel ref multiclass by Lars Buitinck Utilities for fast computation of mean and variance for sparse matrices by Mathieu Blondel Make func sklearn preprocessing scale and sklearn preprocessing Scaler work on sparse matrices by Olivier Grisel Feature importances using decision trees and or forest of trees by Gilles Louppe Parallel implementation of forests of randomized trees by Gilles Louppe sklearn cross validation ShuffleSplit can subsample the train sets as well as the test sets by Olivier Grisel Errors in the build of the documentation fixed by Andreas M ller API changes summary Here are the code migration instructions when upgrading from scikit learn version 0 9 Some estimators that may overwrite their inputs to save memory previously had overwrite parameters these have been replaced with copy parameters with exactly the opposite meaning This particularly affects some of the estimators in mod sklearn linear model The default behavior is still to copy everything passed in The SVMlight dataset loader func sklearn datasets load svmlight file no longer supports loading two files at once use load svmlight files instead Also the unused buffer mb parameter is gone Sparse estimators in the ref sgd module use dense parameter vector coef instead of sparse coef This significantly improves test time performance The ref covariance module now has a robust estimator of covariance the Minimum Covariance Determinant estimator Cluster evaluation metrics in mod sklearn metrics 
cluster have been refactored but the changes are backwards compatible They have been moved to the metrics cluster supervised along with metrics cluster unsupervised which contains the Silhouette Coefficient The permutation test score function now behaves the same way as cross val score i e uses the mean score across the folds Cross Validation generators now use integer indices indices True by default instead of boolean masks This make it more intuitive to use with sparse matrix data The functions used for sparse coding sparse encode and sparse encode parallel have been combined into func sklearn decomposition sparse encode and the shapes of the arrays have been transposed for consistency with the matrix factorization setting as opposed to the regression setting Fixed an off by one error in the SVMlight LibSVM file format handling files generated using func sklearn datasets dump svmlight file should be re generated They should continue to work but accidentally had one extra column of zeros prepended BaseDictionaryLearning class replaced by SparseCodingMixin sklearn utils extmath fast svd has been renamed func sklearn utils extmath randomized svd and the default oversampling is now fixed to 10 additional random vectors instead of doubling the number of components to extract The new behavior follows the reference paper People The following people contributed to scikit learn since last release 246 Andreas M ller 242 Olivier Grisel 220 Gilles Louppe 183 Brian Holt 166 Gael Varoquaux 144 Lars Buitinck 73 Vlad Niculae 65 Peter Prettenhofer 64 Fabian Pedregosa 60 Robert Layton 55 Mathieu Blondel 52 Jake Vanderplas 44 Noel Dawe 38 Alexandre Gramfort 24 user Virgile Fritsch VirgileFritsch 23 Satrajit Ghosh 3 Jan Hendrik Metzen 3 Kenneth C Arnold 3 Shiqiao Du 3 Tim Sheerman Chase 3 Yaroslav Halchenko 2 Bala Subrahmanyam Varanasi 2 DraXus 2 Michael Eickenberg 1 Bogdan Trach 1 F lix Antoine Fortin 1 Juan Manuel Caicedo Carvajal 1 Nelle Varoquaux 1 Nicolas Pinto 1 Tiziano Zito 1 
Xinfan Meng changes 0 9 Version 0 9 September 21 2011 scikit learn 0 9 was released on September 2011 three months after the 0 8 release and includes the new modules ref manifold ref dirichlet process as well as several new algorithms and documentation improvements This release also includes the dictionary learning work developed by Vlad Niculae as part of the Google Summer of Code https developers google com open source gsoc program banner1 image auto examples manifold images thumb sphx glr plot compare methods thumb png target auto examples manifold plot compare methods html banner2 image auto examples linear model images thumb sphx glr plot omp thumb png target auto examples linear model plot omp html banner3 image auto examples decomposition images thumb sphx glr plot kernel pca thumb png target auto examples decomposition plot kernel pca html center div raw html div style text align center margin 0px 0 5px 0 end div raw html div center div banner2 banner1 banner3 end div Changelog New ref manifold module by Jake Vanderplas and Fabian Pedregosa New ref Dirichlet Process dirichlet process Gaussian Mixture Model by Alexandre Passos ref neighbors module refactoring by Jake Vanderplas general refactoring support for sparse matrices in input speed and documentation improvements See the next section for a full list of API changes Improvements on the ref feature selection module by Gilles Louppe refactoring of the RFE classes documentation rewrite increased efficiency and minor API changes ref SparsePCA by Vlad Niculae Gael Varoquaux and Alexandre Gramfort Printing an estimator now behaves independently of architectures and Python version thanks to user Jean Kossaifi JeanKossaifi ref Loader for libsvm svmlight format libsvm loader by Mathieu Blondel and Lars Buitinck Documentation improvements thumbnails in example gallery by Fabian Pedregosa Important bugfixes in ref svm module segfaults bad performance by Fabian Pedregosa Added ref multinomial naive bayes and ref 
bernoulli naive bayes by Lars Buitinck Text feature extraction optimizations by Lars Buitinck Chi Square feature selection func feature selection chi2 by Lars Buitinck ref sample generators module refactoring by Gilles Louppe ref multiclass by Mathieu Blondel Ball tree rewrite by Jake Vanderplas Implementation of ref dbscan algorithm by Robert Layton Kmeans predict and transform by Robert Layton Preprocessing module refactoring by Olivier Grisel Faster mean shift by Conrad Lee New Bootstrap ref ShuffleSplit and various other improvements in cross validation schemes by Olivier Grisel and Gael Varoquaux Adjusted Rand index and V Measure clustering evaluation metrics by Olivier Grisel Added class Orthogonal Matching Pursuit linear model OrthogonalMatchingPursuit by Vlad Niculae Added 2D patch extractor utilities in the ref feature extraction module by Vlad Niculae Implementation of class linear model LassoLarsCV cross validated Lasso solver using the Lars algorithm and class linear model LassoLarsIC BIC AIC model selection in Lars by Gael Varoquaux and Alexandre Gramfort Scalability improvements to func metrics roc curve by Olivier Hervieu Distance helper functions func metrics pairwise distances and func metrics pairwise pairwise kernels by Robert Layton class Mini Batch K Means cluster MiniBatchKMeans by Nelle Varoquaux and Peter Prettenhofer mldata utilities by Pietro Berkes ref olivetti faces dataset by David Warde Farley API changes summary Here are the code migration instructions when upgrading from scikit learn version 0 8 The scikits learn package was renamed sklearn There is still a scikits learn package alias for backward compatibility Third party projects with a dependency on scikit learn 0 9 should upgrade their codebase For instance under Linux MacOSX just run make a backup first find name py xargs sed i s bscikits learn b sklearn g Estimators no longer accept model parameters as fit arguments instead all parameters must be only be passed as constructor 
arguments or using the now public set params method inherited from class base BaseEstimator Some estimators can still accept keyword arguments on the fit but this is restricted to data dependent values e g a Gram matrix or an affinity matrix that are precomputed from the X data matrix The cross val package has been renamed to cross validation although there is also a cross val package alias in place for backward compatibility Third party projects with a dependency on scikit learn 0 9 should upgrade their codebase For instance under Linux MacOSX just run make a backup first find name py xargs sed i s bcross val b cross validation g The score func argument of the sklearn cross validation cross val score function is now expected to accept y test and y predicted as only arguments for classification and regression tasks or X test for unsupervised estimators gamma parameter for support vector machine algorithms is set to 1 n features by default instead of 1 n samples The sklearn hmm has been marked as orphaned it will be removed from scikit learn in version 0 11 unless someone steps up to contribute documentation examples and fix lurking numerical stability issues sklearn neighbors has been made into a submodule The two previously available estimators NeighborsClassifier and NeighborsRegressor have been marked as deprecated Their functionality has been divided among five new classes NearestNeighbors for unsupervised neighbors searches KNeighborsClassifier RadiusNeighborsClassifier for supervised classification problems and KNeighborsRegressor RadiusNeighborsRegressor for supervised regression problems sklearn ball tree BallTree has been moved to sklearn neighbors BallTree Using the former will generate a warning sklearn linear model LARS and related classes LassoLARS LassoLARSCV etc have been renamed to sklearn linear model Lars All distance metrics and kernels in sklearn metrics pairwise now have a Y parameter which by default is None If not given the result is the 
distance or kernel similarity between each sample in Y If given the result is the pairwise distance or kernel similarity between samples in X to Y sklearn metrics pairwise l1 distance is now called manhattan distance and by default returns the pairwise distance For the component wise distance set the parameter sum over features to False Backward compatibility package aliases and other deprecated classes and functions will be removed in version 0 11 People 38 people contributed to this release 387 Vlad Niculae 320 Olivier Grisel 192 Lars Buitinck 179 Gael Varoquaux 168 Fabian Pedregosa INRIA Parietal Team 127 Jake Vanderplas 120 Mathieu Blondel 85 Alexandre Passos 67 Alexandre Gramfort 57 Peter Prettenhofer 56 Gilles Louppe 42 Robert Layton 38 Nelle Varoquaux 32 user Jean Kossaifi JeanKossaifi 30 Conrad Lee 22 Pietro Berkes 18 andy 17 David Warde Farley 12 Brian Holt 11 Robert 8 Amit Aides 8 user Virgile Fritsch VirgileFritsch 7 Yaroslav Halchenko 6 Salvatore Masecchia 5 Paolo Losi 4 Vincent Schut 3 Alexis Metaireau 3 Bryan Silverthorn 3 Andreas M ller 2 Minwoo Jake Lee 1 Emmanuelle Gouillart 1 Keith Goodman 1 Lucas Wiman 1 Nicolas Pinto 1 Thouis Ray Jones 1 Tim Sheerman Chase changes 0 8 Version 0 8 May 11 2011 scikit learn 0 8 was released on May 2011 one month after the first international scikit learn coding sprint https github com scikit learn scikit learn wiki Upcoming events and is marked by the inclusion of important modules ref hierarchical clustering ref cross decomposition ref NMF initial support for Python 3 and by important enhancements and bug fixes Changelog Several new modules where introduced during this release New ref hierarchical clustering module by Vincent Michel Bertrand Thirion Alexandre Gramfort and Gael Varoquaux ref kernel pca implementation by Mathieu Blondel ref labeled faces in the wild dataset by Olivier Grisel New ref cross decomposition module by Edouard Duchesnay ref NMF module Vlad Niculae Implementation of the ref oracle 
approximating shrinkage algorithm by user Virgile Fritsch VirgileFritsch in the ref covariance module Some other modules benefited from significant improvements or cleanups Initial support for Python 3 builds and imports cleanly some modules are usable while others have failing tests by Fabian Pedregosa class decomposition PCA is now usable from the Pipeline object by Olivier Grisel Guide ref performance howto by Olivier Grisel Fixes for memory leaks in libsvm bindings 64 bit safer BallTree by Lars Buitinck bug and style fixing in ref k means algorithm by Jan Schl ter Add attribute converged to Gaussian Mixture Models by Vincent Schut Implemented transform predict log proba in class discriminant analysis LinearDiscriminantAnalysis By Mathieu Blondel Refactoring in the ref svm module and bug fixes by Fabian Pedregosa Gael Varoquaux and Amit Aides Refactored SGD module removed code duplication better variable naming added interface for sample weight by Peter Prettenhofer Wrapped BallTree with Cython by Thouis Ray Jones Added function func svm l1 min c by Paolo Losi Typos doc style etc by Yaroslav Halchenko Gael Varoquaux Olivier Grisel Yann Malet Nicolas Pinto Lars Buitinck and Fabian Pedregosa People People that made this release possible preceded by number of commits 159 Olivier Grisel 96 Gael Varoquaux 96 Vlad Niculae 94 Fabian Pedregosa 36 Alexandre Gramfort 32 Paolo Losi 31 Edouard Duchesnay 30 Mathieu Blondel 25 Peter Prettenhofer 22 Nicolas Pinto 11 user Virgile Fritsch VirgileFritsch 7 Lars Buitinck 6 Vincent Michel 5 Bertrand Thirion 4 Thouis Ray Jones 4 Vincent Schut 3 Jan Schl ter 2 Julien Miotte 2 Matthieu Perrot 2 Yann Malet 2 Yaroslav Halchenko 1 Amit Aides 1 Andreas M ller 1 Feth Arezki 1 Meng Xinfan changes 0 7 Version 0 7 March 2 2011 scikit learn 0 7 was released in March 2011 roughly three months after the 0 6 release This release is marked by the speed improvements in existing algorithms like k Nearest Neighbors and K Means algorithm and by the 
inclusion of an efficient algorithm for computing the Ridge Generalized Cross Validation solution Unlike the preceding release no new modules where added to this release Changelog Performance improvements for Gaussian Mixture Model sampling Jan Schl ter Implementation of efficient leave one out cross validated Ridge in class linear model RidgeCV Mathieu Blondel Better handling of collinearity and early stopping in func linear model lars path Alexandre Gramfort and Fabian Pedregosa Fixes for liblinear ordering of labels and sign of coefficients Dan Yamins Paolo Losi Mathieu Blondel and Fabian Pedregosa Performance improvements for Nearest Neighbors algorithm in high dimensional spaces Fabian Pedregosa Performance improvements for class cluster KMeans Gael Varoquaux and James Bergstra Sanity checks for SVM based classes Mathieu Blondel Refactoring of neighbors NeighborsClassifier and func neighbors kneighbors graph added different algorithms for the k Nearest Neighbor Search and implemented a more stable algorithm for finding barycenter weights Also added some developer documentation for this module see notes neighbors https github com scikit learn scikit learn wiki Neighbors working notes for more information Fabian Pedregosa Documentation improvements Added pca RandomizedPCA and class linear model LogisticRegression to the class reference Also added references of matrices used for clustering and other fixes Gael Varoquaux Fabian Pedregosa Mathieu Blondel Olivier Grisel Virgile Fritsch Emmanuelle Gouillart Binded decision function in classes that make use of liblinear dense and sparse variants like class svm LinearSVC or class linear model LogisticRegression Fabian Pedregosa Performance and API improvements to func metrics pairwise euclidean distances and to pca RandomizedPCA James Bergstra Fix compilation issues under NetBSD Kamel Ibn Hassen Derouiche Allow input sequences of different lengths in hmm GaussianHMM Ron Weiss Fix bug in affinity propagation caused by 
incorrect indexing Xinfan Meng People People that made this release possible preceded by number of commits 85 Fabian Pedregosa 67 Mathieu Blondel 20 Alexandre Gramfort 19 James Bergstra 14 Dan Yamins 13 Olivier Grisel 12 Gael Varoquaux 4 Edouard Duchesnay 4 Ron Weiss 2 Satrajit Ghosh 2 Vincent Dubourg 1 Emmanuelle Gouillart 1 Kamel Ibn Hassen Derouiche 1 Paolo Losi 1 VirgileFritsch 1 Yaroslav Halchenko 1 Xinfan Meng changes 0 6 Version 0 6 December 21 2010 scikit learn 0 6 was released on December 2010 It is marked by the inclusion of several new modules and a general renaming of old ones It is also marked by the inclusion of new example including applications to real world datasets Changelog New stochastic gradient https scikit learn org stable modules sgd html descent module by Peter Prettenhofer The module comes with complete documentation and examples Improved svm module memory consumption has been reduced by 50 heuristic to automatically set class weights possibility to assign weights to samples see ref sphx glr auto examples svm plot weighted samples py for an example New ref gaussian process module by Vincent Dubourg This module also has great documentation and some very neat examples See example gaussian process plot gp regression py or example gaussian process plot gp probabilistic classification after regression py for a taste of what can be done It is now possible to use liblinear s Multi class SVC option multi class in class svm LinearSVC New features and performance improvements of text feature extraction Improved sparse matrix support both in main classes class model selection GridSearchCV as in modules sklearn svm sparse and sklearn linear model sparse Lots of cool new examples and a new section that uses real world datasets was created These include ref sphx glr auto examples applications plot face recognition py ref sphx glr auto examples applications plot species distribution modeling py ref sphx glr auto examples applications wikipedia principal 
eigenvector py and others Faster ref least angle regression algorithm It is now 2x faster than the R version on worst case and up to 10x times faster on some cases Faster coordinate descent algorithm In particular the full path version of lasso func linear model lasso path is more than 200x times faster than before It is now possible to get probability estimates from a class linear model LogisticRegression model module renaming the glm module has been renamed to linear model the gmm module has been included into the more general mixture model and the sgd module has been included in linear model Lots of bug fixes and documentation improvements People People that made this release possible preceded by number of commits 207 Olivier Grisel 167 Fabian Pedregosa 97 Peter Prettenhofer 68 Alexandre Gramfort 59 Mathieu Blondel 55 Gael Varoquaux 33 Vincent Dubourg 21 Ron Weiss 9 Bertrand Thirion 3 Alexandre Passos 3 Anne Laure Fouque 2 Ronan Amicel 1 Christian Osendorfer changes 0 5 Version 0 5 October 11 2010 Changelog New classes Support for sparse matrices in some classifiers of modules svm and linear model see svm sparse SVC svm sparse SVR svm sparse LinearSVC linear model sparse Lasso linear model sparse ElasticNet New class pipeline Pipeline object to compose different estimators Recursive Feature Elimination routines in module ref feature selection Addition of various classes capable of cross validation in the linear model module class linear model LassoCV class linear model ElasticNetCV etc New more efficient LARS algorithm implementation The Lasso variant of the algorithm is also implemented See class linear model lars path class linear model Lars and class linear model LassoLars New Hidden Markov Models module see classes hmm GaussianHMM hmm MultinomialHMM hmm GMMHMM New module feature extraction see ref class reference feature extraction ref New FastICA algorithm in module sklearn fastica Documentation Improved documentation for many modules now separating 
narrative documentation from the class reference As an example see documentation for the SVM module https scikit learn org stable modules svm html and the complete class reference https scikit learn org stable modules classes html Fixes API changes adhere variable names to PEP 8 give more meaningful names Fixes for svm module to run on a shared memory context multiprocessing It is again possible to generate latex and thus PDF from the sphinx docs Examples new examples using some of the mlcomp datasets sphx glr auto examples mlcomp sparse document classification py since removed and ref sphx glr auto examples text plot document classification 20newsgroups py Many more examples See here https scikit learn org stable auto examples index html the full list of examples External dependencies Joblib is now a dependency of this package although it is shipped with sklearn externals joblib Removed modules Module ann Artificial Neural Networks has been removed from the distribution Users wanting this sort of algorithms should take a look into pybrain Misc New sphinx theme for the web page Authors The following is a list of authors for this release preceded by number of commits 262 Fabian Pedregosa 240 Gael Varoquaux 149 Alexandre Gramfort 116 Olivier Grisel 40 Vincent Michel 38 Ron Weiss 23 Matthieu Perrot 10 Bertrand Thirion 7 Yaroslav Halchenko 9 VirgileFritsch 6 Edouard Duchesnay 4 Mathieu Blondel 1 Ariel Rokem 1 Matthieu Brucher Version 0 4 August 26 2010 Changelog Major changes in this release include Coordinate Descent algorithm Lasso ElasticNet refactoring speed improvements roughly 100x times faster Coordinate Descent Refactoring and bug fixing for consistency with R s package GLMNET New metrics module New GMM module contributed by Ron Weiss Implementation of the LARS algorithm without Lasso variant for now feature selection module redesign Migration to GIT as version control system Removal of obsolete attrselect module Rename of private compiled extensions added 
underscore Removal of legacy unmaintained code Documentation improvements both docstring and rst Improvement of the build system to optionally link with MKL Also provide a lite BLAS implementation in case no system wide BLAS is found Lots of new examples Many many bug fixes Authors The committer list for this release is the following preceded by number of commits 143 Fabian Pedregosa 35 Alexandre Gramfort 34 Olivier Grisel 11 Gael Varoquaux 5 Yaroslav Halchenko 2 Vincent Michel 1 Chris Filo Gorgolewski Earlier versions Earlier versions included contributions by Fred Mailhot David Cooke David Huard Dave Morrill Ed Schofield Travis Oliphant Pearu Peterson |
.. include:: _contributors.rst
.. currentmodule:: sklearn
============
Version 0.17
============
.. _changes_0_17_1:
Version 0.17.1
==============
**February 18, 2016**
Changelog
---------
Bug fixes
.........
- Upgrade vendored joblib to version 0.9.4 that fixes an important bug in
  ``joblib.Parallel`` that can silently yield wrong results when working
on datasets larger than 1MB:
https://github.com/joblib/joblib/blob/0.9.4/CHANGES.rst
- Fixed reading of Bunch pickles generated with scikit-learn
version <= 0.16. This can affect users who have already
downloaded a dataset with scikit-learn 0.16 and are loading it
with scikit-learn 0.17. See :issue:`6196` for
how this affected :func:`datasets.fetch_20newsgroups`. By `Loic
Esteve`_.
- Fixed a bug that prevented using ROC AUC score to perform grid search on
  several CPUs / cores on large arrays. See :issue:`6147`.
By `Olivier Grisel`_.
- Fixed a bug that prevented the ``presort`` parameter from being properly
  set in :class:`ensemble.GradientBoostingRegressor`. See :issue:`5857`.
By Andrew McCulloh.
- Fixed a joblib error when evaluating the perplexity of a
  :class:`decomposition.LatentDirichletAllocation` model. See :issue:`6258`.
By Chyi-Kwei Yau.
.. _changes_0_17:
Version 0.17
============
**November 5, 2015**
Changelog
---------
New features
............
- All the scaler classes except :class:`preprocessing.RobustScaler` can now be
  fitted incrementally by calling ``partial_fit``. By :user:`Giorgio Patrini <giorgiop>`.
- The new class :class:`ensemble.VotingClassifier` implements a
"majority rule" / "soft voting" ensemble classifier to combine
estimators for classification. By `Sebastian Raschka`_.
- The new class :class:`preprocessing.RobustScaler` provides an
alternative to :class:`preprocessing.StandardScaler` for feature-wise
centering and range normalization that is robust to outliers.
By :user:`Thomas Unterthiner <untom>`.
- The new class :class:`preprocessing.MaxAbsScaler` provides an
alternative to :class:`preprocessing.MinMaxScaler` for feature-wise
range normalization when the data is already centered or sparse.
By :user:`Thomas Unterthiner <untom>`.
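A small sketch, under assumptions of the examples's own making (one artificial outlier), of why the robust scaler can be preferable to standardization:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

# One extreme outlier in an otherwise well-behaved feature.
X = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])

standard = StandardScaler().fit_transform(X)
# RobustScaler centers on the median and scales by the interquartile range,
# so the outlier does not distort the scale of the inliers.
robust = RobustScaler().fit_transform(X)
```

Here the inliers map to -1, -0.5, 0 and 0.5 under robust scaling, while ``StandardScaler`` squashes them all toward the mean pulled up by the outlier.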
- The new class :class:`preprocessing.FunctionTransformer` turns a Python
function into a ``Pipeline``-compatible transformer object.
By Joe Jevnik.
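A minimal sketch of wrapping a plain function as a transformer; the ``log1p`` choice is only for illustration:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

# FunctionTransformer wraps an arbitrary function as a transformer object.
log_transform = FunctionTransformer(np.log1p)
X = np.array([[0.0, 1.0], [np.e - 1.0, 3.0]])
Xt = log_transform.fit_transform(X)  # log1p(e - 1) == 1.0

# It drops into a Pipeline like any other transformer.
pipe = Pipeline([("log", FunctionTransformer(np.log1p)),
                 ("scale", StandardScaler())])
Xp = pipe.fit_transform(X)
```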
- The new classes `cross_validation.LabelKFold` and
`cross_validation.LabelShuffleSplit` generate train-test folds,
respectively similar to `cross_validation.KFold` and
`cross_validation.ShuffleSplit`, except that the folds are
conditioned on a label array. By `Brian McFee`_, :user:`Jean
Kossaifi <JeanKossaifi>` and `Gilles Louppe`_.
- :class:`decomposition.LatentDirichletAllocation` implements the Latent
Dirichlet Allocation topic model with online variational
inference. By :user:`Chyi-Kwei Yau <chyikwei>`, with code based on an implementation
by Matt Hoffman. (:issue:`3659`)
- The new solver ``sag`` implements a Stochastic Average Gradient descent
and is available in both :class:`linear_model.LogisticRegression` and
:class:`linear_model.Ridge`. This solver is very efficient for large
datasets. By :user:`Danny Sullivan <dsullivan7>` and `Tom Dupre la Tour`_.
(:issue:`4738`)
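A hedged sketch of selecting the ``sag`` solver; the dataset and ``max_iter`` value are arbitrary, and note that SAG assumes roughly standardized features for fast convergence:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# make_classification yields approximately scaled features, which SAG
# needs because its step size depends on the data's Lipschitz constant.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = LogisticRegression(solver="sag", max_iter=1000, random_state=0)
clf.fit(X, y)
train_accuracy = clf.score(X, y)
```

On unscaled data, preprocess with ``StandardScaler`` first or the solver may fail to converge within ``max_iter``.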
- The new solver ``cd`` implements a Coordinate Descent in
:class:`decomposition.NMF`. Previous solver based on Projected Gradient is
still available setting new parameter ``solver`` to ``pg``, but is
deprecated and will be removed in 0.19, along with
`decomposition.ProjectedGradientNMF` and parameters ``sparseness``,
``eta``, ``beta`` and ``nls_max_iter``. New parameters ``alpha`` and
``l1_ratio`` control L1 and L2 regularization, and ``shuffle`` adds a
shuffling step in the ``cd`` solver.
By `Tom Dupre la Tour`_ and `Mathieu Blondel`_.
Enhancements
............
- :class:`manifold.TSNE` now supports approximate optimization via the
Barnes-Hut method, leading to much faster fitting. By Christopher Erick Moody.
(:issue:`4025`)
- :class:`cluster.MeanShift` now supports parallel execution,
as implemented in the ``mean_shift`` function. By :user:`Martino
Sorbaro <martinosorb>`.
- :class:`naive_bayes.GaussianNB` now supports fitting with ``sample_weight``.
By `Jan Hendrik Metzen`_.
- :class:`dummy.DummyClassifier` now supports a prior fitting strategy.
By `Arnaud Joly`_.
- Added a ``fit_predict`` method for `mixture.GMM` and subclasses.
By :user:`Cory Lorenz <clorenz7>`.
- Added the :func:`metrics.label_ranking_loss` metric.
By `Arnaud Joly`_.
- Added the :func:`metrics.cohen_kappa_score` metric.
- Added a ``warm_start`` constructor parameter to the bagging ensemble
models to increase the size of the ensemble. By :user:`Tim Head <betatim>`.
- Added option to use multi-output regression metrics without averaging.
By Konstantin Shmelkov and :user:`Michael Eickenberg<eickenberg>`.
- Added ``stratify`` option to `cross_validation.train_test_split`
for stratified splitting. By Miroslav Batchkarov.
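A short sketch of the ``stratify`` option; note that in current scikit-learn releases the function lives in ``sklearn.model_selection`` rather than the ``cross_validation`` module this entry refers to:

```python
import numpy as np
from sklearn.model_selection import train_test_split  # cross_validation in 0.17

# A 90/10 class imbalance.
y = np.array([0] * 90 + [1] * 10)
X = np.arange(100).reshape(-1, 1)

# stratify=y preserves the class proportions in both splits:
# the 20-sample test set gets exactly 18 zeros and 2 ones.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
```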
- The :func:`tree.export_graphviz` function now supports aesthetic
improvements for :class:`tree.DecisionTreeClassifier` and
:class:`tree.DecisionTreeRegressor`, including options for coloring nodes
by their majority class or impurity, showing variable names, and using
node proportions instead of raw sample counts. By `Trevor Stephens`_.
- Improved speed of ``newton-cg`` solver in
:class:`linear_model.LogisticRegression`, by avoiding loss computation.
By `Mathieu Blondel`_ and `Tom Dupre la Tour`_.
- The ``class_weight="auto"`` heuristic in classifiers supporting
``class_weight`` was deprecated and replaced by the ``class_weight="balanced"``
option, which has a simpler formula and interpretation.
By `Hanna Wallach`_ and `Andreas Müller`_.
- Add ``class_weight`` parameter to automatically weight samples by class
frequency for :class:`linear_model.PassiveAggressiveClassifier`. By
`Trevor Stephens`_.
- Added backlinks from the API reference pages to the user guide. By
`Andreas Müller`_.
- The ``labels`` parameter to :func:`sklearn.metrics.f1_score`,
:func:`sklearn.metrics.fbeta_score`,
:func:`sklearn.metrics.recall_score` and
:func:`sklearn.metrics.precision_score` has been extended.
It is now possible to ignore one or more labels, such as where
a multiclass problem has a majority class to ignore. By `Joel Nothman`_.
- Add ``sample_weight`` support to :class:`linear_model.RidgeClassifier`.
By `Trevor Stephens`_.
- Provide an option for sparse output from
:func:`sklearn.metrics.pairwise.cosine_similarity`. By
:user:`Jaidev Deshpande <jaidevd>`.
- Add :func:`preprocessing.minmax_scale` to provide a function interface for
:class:`preprocessing.MinMaxScaler`. By :user:`Thomas Unterthiner <untom>`.
- ``dump_svmlight_file`` now handles multi-label datasets.
By Chih-Wei Chang.
- RCV1 dataset loader (:func:`sklearn.datasets.fetch_rcv1`).
By `Tom Dupre la Tour`_.
- The "Wisconsin Breast Cancer" classical two-class classification dataset
is now included in scikit-learn, available with
:func:`datasets.load_breast_cancer`.
- Upgraded to joblib 0.9.3 to benefit from the new automatic batching of
short tasks. This makes it possible for scikit-learn to benefit from
parallelism when many very short tasks are executed in parallel, for
instance by the `grid_search.GridSearchCV` meta-estimator
with ``n_jobs > 1`` used with a large grid of parameters on a small
dataset. By `Vlad Niculae`_, `Olivier Grisel`_ and `Loic Esteve`_.
- For more details about changes in joblib 0.9.3 see the release notes:
https://github.com/joblib/joblib/blob/master/CHANGES.rst#release-093
- Improved speed (roughly 3x faster per iteration) of
  `decomposition.DictLearning` when using the coordinate descent method
  from :class:`linear_model.Lasso`. By :user:`Arthur Mensch <arthurmensch>`.
- Parallel processing (threaded) for queries of nearest neighbors
(using the ball-tree) by Nikolay Mayorov.
- Allow :func:`datasets.make_multilabel_classification` to output
a sparse ``y``. By Kashif Rasul.
- :class:`cluster.DBSCAN` now accepts a sparse matrix of precomputed
distances, allowing memory-efficient distance precomputation. By
`Joel Nothman`_.
- :class:`tree.DecisionTreeClassifier` now exposes an ``apply`` method
  for retrieving the index of the leaf each sample is predicted as. By
  :user:`Daniel Galvez <galv>` and `Gilles Louppe`_.
- Speed up decision tree regressors, random forest regressors, extra trees
regressors and gradient boosting estimators by computing a proxy
of the impurity improvement during the tree growth. The proxy quantity is
such that the split that maximizes this value also maximizes the impurity
improvement. By `Arnaud Joly`_, :user:`Jacob Schreiber <jmschrei>`
and `Gilles Louppe`_.
- Speed up tree based methods by reducing the number of computations needed
when computing the impurity measure taking into account linear
relationship of the computed statistics. The effect is particularly
visible with extra trees and on datasets with categorical or sparse
features. By `Arnaud Joly`_.
- :class:`ensemble.GradientBoostingRegressor` and
  :class:`ensemble.GradientBoostingClassifier` now expose an ``apply``
  method for retrieving the leaf indices each sample ends up in under
  each tree. By :user:`Jacob Schreiber <jmschrei>`.
- Add ``sample_weight`` support to :class:`linear_model.LinearRegression`.
  By Sonny Hu. (:issue:`4881`)
- Add ``n_iter_without_progress`` to :class:`manifold.TSNE` to control
the stopping criterion. By Santi Villalba. (:issue:`5186`)
- Added optional parameter ``random_state`` in :class:`linear_model.Ridge`
  to set the seed of the pseudo random generator used in the ``sag`` solver.
  By `Tom Dupre la Tour`_.
- Added optional parameter ``warm_start`` in
:class:`linear_model.LogisticRegression`. If set to True, the solvers
``lbfgs``, ``newton-cg`` and ``sag`` will be initialized with the
coefficients computed in the previous fit. By `Tom Dupre la Tour`_.
- Added ``sample_weight`` support to :class:`linear_model.LogisticRegression` for
the ``lbfgs``, ``newton-cg``, and ``sag`` solvers. By `Valentin Stolbunov`_.
Support added to the ``liblinear`` solver. By `Manoj Kumar`_.
- Added optional parameter ``presort`` to :class:`ensemble.GradientBoostingRegressor`
and :class:`ensemble.GradientBoostingClassifier`, keeping default behavior
the same. This allows gradient boosters to turn off presorting when building
deep trees or using sparse data. By :user:`Jacob Schreiber <jmschrei>`.
- Altered :func:`metrics.roc_curve` to drop unnecessary thresholds by
default. By :user:`Graham Clenaghan <gclenaghan>`.
- Added :class:`feature_selection.SelectFromModel` meta-transformer which can
  be used along with estimators that have a ``coef_`` or ``feature_importances_``
  attribute to select important features of the input data. By
  :user:`Maheshakya Wijewardena <maheshakya>`, `Joel Nothman`_ and `Manoj Kumar`_.
- Added :func:`metrics.pairwise.laplacian_kernel`. By `Clyde Fare <https://github.com/Clyde-fare>`_.
- `covariance.GraphLasso` allows separate control of the convergence criterion
for the Elastic-Net subproblem via the ``enet_tol`` parameter.
- Improved verbosity in :class:`decomposition.DictionaryLearning`.
- :class:`ensemble.RandomForestClassifier` and
:class:`ensemble.RandomForestRegressor` no longer explicitly store the
samples used in bagging, resulting in a much reduced memory footprint for
storing random forest models.
- Added ``positive`` option to :class:`linear_model.Lars` and
:func:`linear_model.lars_path` to force coefficients to be positive.
(:issue:`5131`)
- Added the ``X_norm_squared`` parameter to :func:`metrics.pairwise.euclidean_distances`
to provide precomputed squared norms for ``X``.
- Added the ``fit_predict`` method to :class:`pipeline.Pipeline`.
- Added the :func:`preprocessing.minmax_scale` function.
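The new :class:`feature_selection.SelectFromModel` meta-transformer listed above can be sketched as follows. This is a minimal illustration, not the canonical usage: the synthetic dataset, the ``median`` threshold, and the choice of :class:`linear_model.LogisticRegression` as the underlying estimator are all illustrative assumptions.

```python
# Minimal sketch of SelectFromModel: keep features whose coefficient
# magnitude is at or above the median magnitude across features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=20,
                           n_informative=3, random_state=0)
selector = SelectFromModel(LogisticRegression(max_iter=1000),
                           threshold="median")
X_reduced = selector.fit_transform(X, y)
# Only the features at or above the median coefficient magnitude survive.
assert 1 <= X_reduced.shape[1] < X.shape[1]
```

Any estimator exposing ``coef_`` or ``feature_importances_`` after fitting can be substituted for the logistic regression here.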

Bug fixes
.........

- Fixed non-determinism in :class:`dummy.DummyClassifier` with sparse
multi-label output. By `Andreas Müller`_.
- Fixed the output shape of :class:`linear_model.RANSACRegressor` to
``(n_samples, )``. By `Andreas Müller`_.
- Fixed bug in `decomposition.DictLearning` when ``n_jobs < 0``. By
`Andreas Müller`_.
- Fixed bug where `grid_search.RandomizedSearchCV` could consume a
lot of memory for large discrete grids. By `Joel Nothman`_.
- Fixed bug in :class:`linear_model.LogisticRegressionCV` where `penalty` was ignored
in the final fit. By `Manoj Kumar`_.
- Fixed bug in `ensemble.forest.ForestClassifier` when computing the
  out-of-bag score (``oob_score``) with a sparse CSC matrix ``X``.
  By :user:`Ankur Ankan <ankurankan>`.
- All regressors now consistently handle and warn when given ``y`` that is of
shape ``(n_samples, 1)``. By `Andreas Müller`_ and Henry Lin.
(:issue:`5431`)
- Fix in :class:`cluster.KMeans` cluster reassignment for sparse input by
`Lars Buitinck`_.
- Fixed a bug in :class:`discriminant_analysis.LinearDiscriminantAnalysis` that
could cause asymmetric covariance matrices when using shrinkage. By `Martin
Billinger`_.
- Fixed `cross_validation.cross_val_predict` for estimators with
sparse predictions. By Buddha Prakash.
- Fixed the ``predict_proba`` method of :class:`linear_model.LogisticRegression`
to use soft-max instead of one-vs-rest normalization. By `Manoj Kumar`_.
(:issue:`5182`)
- Fixed the `partial_fit` method of :class:`linear_model.SGDClassifier`
when called with ``average=True``. By :user:`Andrew Lamb <andylamb>`.
(:issue:`5282`)
- Dataset fetchers use different filenames under Python 2 and Python 3 to
avoid pickling compatibility issues. By `Olivier Grisel`_.
(:issue:`5355`)
- Fixed a bug in :class:`naive_bayes.GaussianNB` which caused classification
results to depend on scale. By `Jake Vanderplas`_.
- Temporarily fixed :class:`linear_model.Ridge`, which was incorrect
  when fitting the intercept in the case of sparse data. The fix
  automatically changes the solver to 'sag' in this case.
:issue:`5360` by `Tom Dupre la Tour`_.
- Fixed a performance bug in `decomposition.RandomizedPCA` on data
with a large number of features and fewer samples. (:issue:`4478`)
By `Andreas Müller`_, `Loic Esteve`_ and :user:`Giorgio Patrini <giorgiop>`.
- Fixed bug in `cross_decomposition.PLS` that yielded unstable and
platform dependent output, and failed on `fit_transform`.
By :user:`Arthur Mensch <arthurmensch>`.
- Fixes to the ``Bunch`` class used to store datasets.
- Fixed `ensemble.plot_partial_dependence` ignoring the
``percentiles`` parameter.
- Providing a ``set`` as vocabulary in ``CountVectorizer`` no longer
leads to inconsistent results when pickling.
- Fixed the conditions on when a precomputed Gram matrix needs to
be recomputed in :class:`linear_model.LinearRegression`,
:class:`linear_model.OrthogonalMatchingPursuit`,
:class:`linear_model.Lasso` and :class:`linear_model.ElasticNet`.
- Fixed inconsistent memory layout in the coordinate descent solver
that affected `linear_model.DictionaryLearning` and
`covariance.GraphLasso`. (:issue:`5337`)
By `Olivier Grisel`_.
- :class:`manifold.LocallyLinearEmbedding` no longer ignores the ``reg``
parameter.
- Nearest Neighbor estimators with custom distance metrics can now be pickled.
(:issue:`4362`)
- Fixed a bug in :class:`pipeline.FeatureUnion` where ``transformer_weights``
were not properly handled when performing grid-searches.
- Fixed a bug in :class:`linear_model.LogisticRegression` and
:class:`linear_model.LogisticRegressionCV` when using
``class_weight='balanced'`` or ``class_weight='auto'``.
By `Tom Dupre la Tour`_.
- Fixed bug :issue:`5495` when wrapping ``SVC(decision_function_shape="ovr")``
  in a one-vs-rest (OVR) classifier. Fixed by
  :user:`Elvis Dohmatob <dohmatob>`.

API changes summary
-------------------

- The attributes ``data_min``, ``data_max`` and ``data_range`` in
  :class:`preprocessing.MinMaxScaler` are deprecated and won't be available
  from 0.19. Instead, the class now exposes ``data_min_``, ``data_max_``
  and ``data_range_``. By :user:`Giorgio Patrini <giorgiop>`.
- All Scaler classes now have a ``scale_`` attribute, the feature-wise
  rescaling applied by their ``transform`` methods. The old attribute ``std_``
  in :class:`preprocessing.StandardScaler` is deprecated and superseded
  by ``scale_``; it won't be available in 0.19. By :user:`Giorgio Patrini <giorgiop>`.
- :class:`svm.SVC` and :class:`svm.NuSVC` now have a ``decision_function_shape``
parameter to make their decision function of shape ``(n_samples, n_classes)``
by setting ``decision_function_shape='ovr'``. This will be the default behavior
starting in 0.19. By `Andreas Müller`_.
- Passing 1D data arrays as input to estimators is now deprecated as it
caused confusion in how the array elements should be interpreted
as features or as samples. All data arrays are now expected
to be explicitly shaped ``(n_samples, n_features)``.
By :user:`Vighnesh Birodkar <vighneshbirodkar>`.
- `lda.LDA` and `qda.QDA` have been moved to
:class:`discriminant_analysis.LinearDiscriminantAnalysis` and
:class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.
- The ``store_covariance`` and ``tol`` parameters have been moved from
the fit method to the constructor in
:class:`discriminant_analysis.LinearDiscriminantAnalysis` and the
``store_covariances`` and ``tol`` parameters have been moved from the
fit method to the constructor in
:class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.
- Models inheriting from ``_LearntSelectorMixin`` will no longer support the
  ``transform`` method (i.e., RandomForests, GradientBoosting, LogisticRegression,
  DecisionTrees, SVMs and SGD-related models). Instead, wrap these models in the
  meta-transformer :class:`feature_selection.SelectFromModel` to remove
  features (according to ``coef_`` or ``feature_importances_``)
  that fall below a certain threshold value.
- :class:`cluster.KMeans` re-runs cluster-assignments in case of non-convergence,
to ensure consistency of ``predict(X)`` and ``labels_``. By
:user:`Vighnesh Birodkar <vighneshbirodkar>`.
- Classifier and Regressor models are now tagged as such using the
``_estimator_type`` attribute.
- Cross-validation iterators always provide indices into training and test set,
not boolean masks.
- The ``decision_function`` on all regressors was deprecated and will be
removed in 0.19. Use ``predict`` instead.
- `datasets.load_lfw_pairs` is deprecated and will be removed in 0.19.
Use :func:`datasets.fetch_lfw_pairs` instead.
- The deprecated ``hmm`` module was removed.
- The deprecated ``Bootstrap`` cross-validation iterator was removed.
- The deprecated ``Ward`` and ``WardAgglomerative`` classes have been removed.
Use :class:`cluster.AgglomerativeClustering` instead.
- `cross_validation.check_cv` is now a public function.
- The property ``residues_`` of :class:`linear_model.LinearRegression` is deprecated
and will be removed in 0.19.
- The deprecated ``n_jobs`` parameter of :class:`linear_model.LinearRegression` has been moved
to the constructor.
- Removed deprecated ``class_weight`` parameter from :class:`linear_model.SGDClassifier`'s ``fit``
method. Use the construction parameter instead.
- The deprecated support for the sequence of sequences (or list of lists) multilabel
format was removed. To convert to and from the supported binary
indicator matrix format, use
:class:`MultiLabelBinarizer <preprocessing.MultiLabelBinarizer>`.
- The behavior of calling the ``inverse_transform`` method of
  :class:`pipeline.Pipeline` will change in 0.19. It will no longer reshape
  one-dimensional input to two-dimensional input.
- The deprecated attributes ``indicator_matrix_``, ``multilabel_`` and ``classes_`` of
:class:`preprocessing.LabelBinarizer` were removed.
- Using ``gamma=0`` in :class:`svm.SVC` and :class:`svm.SVR` to automatically set the
gamma to ``1. / n_features`` is deprecated and will be removed in 0.19.
Use ``gamma="auto"`` instead.
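The new ``decision_function_shape`` option described above can be sketched as follows; the iris dataset and a default :class:`svm.SVC` are used purely for illustration.

```python
# With decision_function_shape="ovr", SVC returns one decision value per
# class, i.e. an array of shape (n_samples, n_classes), instead of the
# pairwise one-vs-one shape (n_samples, n_classes * (n_classes - 1) / 2).
from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data, iris.target
clf = SVC(decision_function_shape="ovr").fit(X, y)
scores = clf.decision_function(X[:2])
assert scores.shape == (2, 3)  # 2 samples, 3 iris classes
```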

Code Contributors
-----------------

Aaron Schumacher, Adithya Ganesh, akitty, Alexandre Gramfort, Alexey Grigorev,
Ali Baharev, Allen Riddell, Ando Saabas, Andreas Mueller, Andrew Lamb, Anish
Shah, Ankur Ankan, Anthony Erlinger, Ari Rouvinen, Arnaud Joly, Arnaud Rachez,
Arthur Mensch, banilo, Barmaley.exe, benjaminirving, Boyuan Deng, Brett Naul,
Brian McFee, Buddha Prakash, Chi Zhang, Chih-Wei Chang, Christof Angermueller,
Christoph Gohlke, Christophe Bourguignat, Christopher Erick Moody, Chyi-Kwei
Yau, Cindy Sridharan, CJ Carey, Clyde-fare, Cory Lorenz, Dan Blanchard, Daniel
Galvez, Daniel Kronovet, Danny Sullivan, Data1010, David, David D Lowe, David
Dotson, djipey, Dmitry Spikhalskiy, Donne Martin, Dougal J. Sutherland, Dougal
Sutherland, edson duarte, Eduardo Caro, Eric Larson, Eric Martin, Erich
Schubert, Fernando Carrillo, Frank C. Eckert, Frank Zalkow, Gael Varoquaux,
Ganiev Ibraim, Gilles Louppe, Giorgio Patrini, giorgiop, Graham Clenaghan,
Gryllos Prokopis, gwulfs, Henry Lin, Hsuan-Tien Lin, Immanuel Bayer, Ishank
Gulati, Jack Martin, Jacob Schreiber, Jaidev Deshpande, Jake Vanderplas, Jan
Hendrik Metzen, Jean Kossaifi, Jeffrey04, Jeremy, jfraj, Jiali Mei,
Joe Jevnik, Joel Nothman, John Kirkham, John Wittenauer, Joseph, Joshua Loyal,
Jungkook Park, KamalakerDadi, Kashif Rasul, Keith Goodman, Kian Ho, Konstantin
Shmelkov, Kyler Brown, Lars Buitinck, Lilian Besson, Loic Esteve, Louis Tiao,
maheshakya, Maheshakya Wijewardena, Manoj Kumar, MarkTab marktab.net, Martin
Ku, Martin Spacek, MartinBpr, martinosorb, MaryanMorel, Masafumi Oyamada,
Mathieu Blondel, Matt Krump, Matti Lyra, Maxim Kolganov, mbillinger, mhg,
Michael Heilman, Michael Patterson, Miroslav Batchkarov, Nelle Varoquaux,
Nicolas, Nikolay Mayorov, Olivier Grisel, Omer Katz, Óscar Nájera, Pauli
Virtanen, Peter Fischer, Peter Prettenhofer, Phil Roth, pianomania, Preston
Parry, Raghav RV, Rob Zinkov, Robert Layton, Rohan Ramanath, Saket Choudhary,
Sam Zhang, santi, saurabh.bansod, scls19fr, Sebastian Raschka, Sebastian
Saeger, Shivan Sornarajah, SimonPL, sinhrks, Skipper Seabold, Sonny Hu, sseg,
Stephen Hoover, Steven De Gryze, Steven Seguin, Theodore Vasiloudis, Thomas
Unterthiner, Tiago Freitas Pereira, Tian Wang, Tim Head, Timothy Hopper,
tokoroten, Tom Dupré la Tour, Trevor Stephens, Valentin Stolbunov, Vighnesh
Birodkar, Vinayak Mehta, Vincent, Vincent Michel, vstolbunov, wangz10, Wei Xue,
Yucheng Low, Yury Zhauniarovich, Zac Stewart, zhai_pro, Zichen Wang
Yury Zhauniarovich Zac Stewart zhai pro Zichen Wang |
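The multilabel format removal noted in the API changes above means label data
previously passed as a sequence of sequences must now be supplied as a binary
indicator matrix. A minimal sketch of that conversion, mimicking what
``preprocessing.MultiLabelBinarizer.fit_transform`` produces (the helper
``to_indicator_matrix`` is hypothetical, written here only to illustrate the
format; in practice use the scikit-learn class itself):

```python
def to_indicator_matrix(y):
    """Convert a sequence of label sequences to a binary indicator matrix.

    Returns the sorted list of classes (columns) and a row per sample with
    a 1 in each column whose label the sample carries.
    """
    classes = sorted({label for labels in y for label in labels})
    index = {label: j for j, label in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in y]
    for i, labels in enumerate(y):
        for label in labels:
            matrix[i][index[label]] = 1
    return classes, matrix

classes, Y = to_indicator_matrix([["sci-fi", "thriller"], ["comedy"]])
print(classes)  # ['comedy', 'sci-fi', 'thriller']
print(Y)        # [[0, 1, 1], [1, 0, 0]]
```

``MultiLabelBinarizer`` additionally offers ``inverse_transform`` to map an
indicator matrix back to label sets, which is the other direction of the
migration.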
:orphan:
.. title:: Testimonials
.. _testimonials:
==========================
Who is using scikit-learn?
==========================
`J.P.Morgan <https://www.jpmorgan.com>`_
----------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Scikit-learn is an indispensable part of the Python machine learning
toolkit at JPMorgan. It is very widely used across all parts of the bank
for classification, predictive analytics, and very many other machine
learning tasks. Its straightforward API, its breadth of algorithms, and
the quality of its documentation combine to make scikit-learn
simultaneously very approachable and very powerful.
.. rst-class:: annotation
Stephen Simmons, VP, Athena Research, JPMorgan
.. div:: image-box
.. image:: images/jpmorgan.png
:target: https://www.jpmorgan.com
`Spotify <https://www.spotify.com>`_
------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Scikit-learn provides a toolbox with solid implementations of a bunch of
state-of-the-art models and makes it easy to plug them into existing
applications. We've been using it quite a lot for music recommendations at
Spotify and I think it's the most well-designed ML package I've seen so far.
.. rst-class:: annotation
Erik Bernhardsson, Engineering Manager Music Discovery & Machine Learning, Spotify
.. div:: image-box
.. image:: images/spotify.png
:target: https://www.spotify.com
`Inria <https://www.inria.fr/>`_
--------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At INRIA, we use scikit-learn to support leading-edge basic research in many
teams: `Parietal <https://team.inria.fr/parietal/>`_ for neuroimaging, `Lear
<https://lear.inrialpes.fr/>`_ for computer vision, `Visages
<https://team.inria.fr/visages/>`_ for medical image analysis, `Privatics
<https://team.inria.fr/privatics>`_ for security. The project is a fantastic
tool to address difficult applications of machine learning in an academic
environment as it is performant and versatile, but all easy-to-use and well
documented, which makes it well suited to grad students.
.. rst-class:: annotation
Gaël Varoquaux, research at Parietal
.. div:: image-box
.. image:: images/inria.png
:target: https://www.inria.fr/
`betaworks <https://betaworks.com>`_
------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Betaworks is a NYC-based startup studio that builds new products, grows
companies, and invests in others. Over the past 8 years we've launched a
handful of social data analytics-driven services, such as Bitly, Chartbeat,
digg and Scale Model. Consistently the betaworks data science team uses
Scikit-learn for a variety of tasks. From exploratory analysis, to product
development, it is an essential part of our toolkit. Recent uses are included
in `digg's new video recommender system
<https://medium.com/i-data/the-digg-video-recommender-2f9ade7c4ba3>`_,
and Poncho's `dynamic heuristic subspace clustering
<https://medium.com/@DiggData/scaling-poncho-using-data-ca24569d56fd>`_.
.. rst-class:: annotation
Gilad Lotan, Chief Data Scientist
.. div:: image-box
.. image:: images/betaworks.png
:target: https://betaworks.com
`Hugging Face <https://huggingface.co>`_
----------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At Hugging Face we're using NLP and probabilistic models to generate
conversational Artificial intelligences that are fun to chat with. Despite using
deep neural nets for `a few <https://medium.com/huggingface/understanding-emotions-from-keras-to-pytorch-3ccb61d5a983>`_
of our `NLP tasks <https://huggingface.co/coref/>`_, scikit-learn is still the
bread-and-butter of our daily machine learning routine. The ease of use and
predictability of the interface, as well as the straightforward mathematical
explanations that are here when you need them, is the killer feature. We use a
variety of scikit-learn models in production and they are also operationally very
pleasant to work with.
.. rst-class:: annotation
Julien Chaumond, Chief Technology Officer
.. div:: image-box
.. image:: images/huggingface.png
:target: https://huggingface.co
`Evernote <https://evernote.com>`_
----------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Building a classifier is typically an iterative process of exploring
the data, selecting the features (the attributes of the data believed
to be predictive in some way), training the models, and finally
evaluating them. For many of these tasks, we relied on the excellent
scikit-learn package for Python.
`Read more <http://blog.evernote.com/tech/2013/01/22/stay-classified/>`_
.. rst-class:: annotation
Mark Ayzenshtat, VP, Augmented Intelligence
.. div:: image-box
.. image:: images/evernote.png
:target: https://evernote.com
`Télécom ParisTech <https://www.telecom-paristech.fr/>`_
--------------------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At Telecom ParisTech, scikit-learn is used for hands-on sessions and home
assignments in introductory and advanced machine learning courses. The classes
are for undergrads and masters students. The great benefit of scikit-learn is
its fast learning curve that allows students to quickly start working on
interesting and motivating problems.
.. rst-class:: annotation
Alexandre Gramfort, Assistant Professor
.. div:: image-box
.. image:: images/telecomparistech.jpg
:target: https://www.telecom-paristech.fr/
`Booking.com <https://www.booking.com>`_
----------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At Booking.com, we use machine learning algorithms for many different
applications, such as recommending hotels and destinations to our customers,
detecting fraudulent reservations, or scheduling our customer service agents.
Scikit-learn is one of the tools we use when implementing standard algorithms
for prediction tasks. Its API and documentation are excellent and make it easy
to use. The scikit-learn developers do a great job of incorporating state of
the art implementations and new algorithms into the package. Thus, scikit-learn
provides convenient access to a wide spectrum of algorithms, and allows us to
readily find the right tool for the right job.
.. rst-class:: annotation
Melanie Mueller, Data Scientist
.. div:: image-box
.. image:: images/booking.png
:target: https://www.booking.com
`AWeber <https://www.aweber.com/>`_
-----------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
The scikit-learn toolkit is indispensable for the Data Analysis and Management
team at AWeber. It allows us to do AWesome stuff we would not otherwise have
the time or resources to accomplish. The documentation is excellent, allowing
new engineers to quickly evaluate and apply many different algorithms to our
data. The text feature extraction utilities are useful when working with the
large volume of email content we have at AWeber. The RandomizedPCA
implementation, along with Pipelining and FeatureUnions, allows us to develop
complex machine learning algorithms efficiently and reliably.
Anyone interested in learning more about how AWeber deploys scikit-learn in a
production environment should check out talks from PyData Boston by AWeber's
Michael Becker available at https://github.com/mdbecker/pydata_2013.
.. rst-class:: annotation
Michael Becker, Software Engineer, Data Analysis and Management Ninjas
.. div:: image-box
.. image:: images/aweber.png
:target: https://www.aweber.com
`Yhat <https://www.yhat.com>`_
------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
The combination of consistent APIs, thorough documentation, and top notch
implementation make scikit-learn our favorite machine learning package in
Python. scikit-learn makes doing advanced analysis in Python accessible to
anyone. At Yhat, we make it easy to integrate these models into your production
applications. Thus eliminating the unnecessary dev time encountered
productionizing analytical work.
.. rst-class:: annotation
Greg Lamp, Co-founder
.. div:: image-box
.. image:: images/yhat.png
:target: https://www.yhat.com
`Rangespan <http://www.rangespan.com>`_
---------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
The Python scikit-learn toolkit is a core tool in the data science
group at Rangespan. Its large collection of well documented models and
algorithms allow our team of data scientists to prototype fast and
quickly iterate to find the right solution to our learning problems.
We find that scikit-learn is not only the right tool for prototyping,
but its careful and well-tested implementation gives us the confidence
to run scikit-learn models in production.
.. rst-class:: annotation
Jurgen Van Gael, Data Science Director
.. div:: image-box
.. image:: images/rangespan.png
:target: http://www.rangespan.com
`Birchbox <https://www.birchbox.com>`_
--------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At Birchbox, we face a range of machine learning problems typical to
E-commerce: product recommendation, user clustering, inventory prediction,
trends detection, etc. Scikit-learn lets us experiment with many models,
especially in the exploration phase of a new project: the data can be passed
around in a consistent way; models are easy to save and reuse; updates keep us
informed of new developments from the pattern discovery research community.
Scikit-learn is an important tool for our team, built the right way in the
right language.
.. rst-class:: annotation
Thierry Bertin-Mahieux, Data Scientist
.. div:: image-box
.. image:: images/birchbox.jpg
:target: https://www.birchbox.com
`Bestofmedia Group <http://www.bestofmedia.com>`_
-------------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Scikit-learn is our #1 toolkit for all things machine learning
at Bestofmedia. We use it for a variety of tasks (e.g. spam fighting,
ad click prediction, various ranking models) thanks to the varied,
state-of-the-art algorithm implementations packaged into it.
In the lab it accelerates prototyping of complex pipelines. In
production I can say it has proven to be robust and efficient enough
to be deployed for business critical components.
.. rst-class:: annotation
Eustache Diemert, Lead Scientist
.. div:: image-box
.. image:: images/bestofmedia-logo.png
:target: http://www.bestofmedia.com
`Change.org <https://www.change.org>`_
--------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At change.org we automate the use of scikit-learn's RandomForestClassifier
in our production systems to drive email targeting that reaches millions
of users across the world each week. In the lab, scikit-learn's ease-of-use,
performance, and overall variety of algorithms implemented has proved invaluable
in giving us a single reliable source to turn to for our machine-learning needs.
.. rst-class:: annotation
Vijay Ramesh, Software Engineer in Data/science at Change.org
.. div:: image-box
.. image:: images/change-logo.png
:target: https://www.change.org
`PHIMECA Engineering <https://www.phimeca.com/?lang=en>`_
---------------------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At PHIMECA Engineering, we use scikit-learn estimators as surrogates for
expensive-to-evaluate numerical models (mostly but not exclusively
finite-element mechanical models) for speeding up the intensive post-processing
operations involved in our simulation-based decision making framework.
Scikit-learn's fit/predict API together with its efficient cross-validation
tools considerably eases the task of selecting the best-fit estimator. We are
also using scikit-learn for illustrating concepts in our training sessions.
Trainees are always impressed by the ease-of-use of scikit-learn despite the
apparent theoretical complexity of machine learning.
.. rst-class:: annotation
Vincent Dubourg, PHIMECA Engineering, PhD Engineer
.. div:: image-box
.. image:: images/phimeca.png
:target: https://www.phimeca.com/?lang=en
`HowAboutWe <http://www.howaboutwe.com/>`_
------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At HowAboutWe, scikit-learn lets us implement a wide array of machine learning
techniques in analysis and in production, despite having a small team. We use
scikit-learn's classification algorithms to predict user behavior, enabling us
to (for example) estimate the value of leads from a given traffic source early
in the lead's tenure on our site. Also, our users' profiles consist of
primarily unstructured data (answers to open-ended questions), so we use
scikit-learn's feature extraction and dimensionality reduction tools to
translate these unstructured data into inputs for our matchmaking system.
.. rst-class:: annotation
Daniel Weitzenfeld, Senior Data Scientist at HowAboutWe
.. div:: image-box
.. image:: images/howaboutwe.png
:target: http://www.howaboutwe.com/
`PeerIndex <https://www.brandwatch.com/peerindex-and-brandwatch>`_
------------------------------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At PeerIndex we use scientific methodology to build the Influence Graph - a
unique dataset that allows us to identify who's really influential and in which
context. To do this, we have to tackle a range of machine learning and
predictive modeling problems. Scikit-learn has emerged as our primary tool for
developing prototypes and making quick progress. From predicting missing data
and classifying tweets to clustering communities of social media users,
scikit-learn proved useful in a variety of applications. Its very intuitive
interface and excellent compatibility with other python tools make it an
indispensable tool in our daily research efforts.
.. rst-class:: annotation
Ferenc Huszar, Senior Data Scientist at Peerindex
.. div:: image-box
.. image:: images/peerindex.png
:target: https://www.brandwatch.com/peerindex-and-brandwatch
`DataRobot <https://www.datarobot.com>`_
----------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
DataRobot is building next generation predictive analytics software to make data
scientists more productive, and scikit-learn is an integral part of our system. The
variety of machine learning techniques in combination with the solid implementations
that scikit-learn offers makes it a one-stop-shopping library for machine learning
in Python. Moreover, its consistent API, well-tested code and permissive licensing
allow us to use it in a production environment. Scikit-learn has literally saved us
years of work we would have had to do ourselves to bring our product to market.
.. rst-class:: annotation
Jeremy Achin, CEO & Co-founder DataRobot Inc.
.. div:: image-box
.. image:: images/datarobot.png
:target: https://www.datarobot.com
`OkCupid <https://www.okcupid.com/>`_
-------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
We're using scikit-learn at OkCupid to evaluate and improve our matchmaking
system. The range of features it has, especially preprocessing utilities, means
we can use it for a wide variety of projects, and it's performant enough to
handle the volume of data that we need to sort through. The documentation is
really thorough, as well, which makes the library quite easy to use.
.. rst-class:: annotation
David Koh - Senior Data Scientist at OkCupid
.. div:: image-box
.. image:: images/okcupid.png
:target: https://www.okcupid.com
`Lovely <https://livelovely.com/>`_
-----------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At Lovely, we strive to deliver the best apartment marketplace, with respect to
our users and our listings. From understanding user behavior, improving data
quality, and detecting fraud, scikit-learn is a regular tool for gathering
insights, predictive modeling and improving our product. The easy-to-read
documentation and intuitive architecture of the API makes machine learning both
explorable and accessible to a wide range of python developers. I'm constantly
recommending that more developers and scientists try scikit-learn.
.. rst-class:: annotation
Simon Frid - Data Scientist, Lead at Lovely
.. div:: image-box
.. image:: images/lovely.png
:target: https://livelovely.com
`Data Publica <http://www.data-publica.com/>`_
----------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Data Publica builds a new predictive sales tool for commercial and marketing teams
called C-Radar. We extensively use scikit-learn to build segmentations of customers
through clustering, and to predict future customers based on past partnerships
success or failure. We also categorize companies using their website communication
thanks to scikit-learn and its machine learning algorithm implementations.
Eventually, machine learning makes it possible to detect weak signals that
traditional tools cannot see. All these complex tasks are performed in an easy and
straightforward way thanks to the great quality of the scikit-learn framework.
.. rst-class:: annotation
Guillaume Lebourgeois & Samuel Charron - Data Scientists at Data Publica
.. div:: image-box
.. image:: images/datapublica.png
:target: http://www.data-publica.com/
`Machinalis <https://www.machinalis.com/>`_
-------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Scikit-learn is the cornerstone of all the machine learning projects carried at
Machinalis. It has a consistent API, a wide selection of algorithms and lots of
auxiliary tools to deal with the boilerplate. We have used it in production
environments on a variety of projects including click-through rate prediction,
`information extraction <https://github.com/machinalis/iepy>`_, and even counting
sheep!
In fact, we use it so much that we've started to freeze our common use cases
into Python packages, some of them open-sourced, like `FeatureForge
<https://github.com/machinalis/featureforge>`_. Scikit-learn in one word: Awesome.
.. rst-class:: annotation
Rafael Carrascosa, Lead developer
.. div:: image-box
.. image:: images/machinalis.png
:target: https://www.machinalis.com/
`solido <https://www.solidodesign.com/>`_
-----------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Scikit-learn is helping to drive Moore's Law, via Solido. Solido creates
computer-aided design tools used by the majority of top-20 semiconductor
companies and fabs, to design the bleeding-edge chips inside smartphones,
automobiles, and more. Scikit-learn helps to power Solido's algorithms for
rare-event estimation, worst-case verification, optimization, and more. At
Solido, we are particularly fond of scikit-learn's libraries for Gaussian
Process models, large-scale regularized linear regression, and classification.
Scikit-learn has increased our productivity, because for many ML problems we no
longer need to “roll our own” code. `This PyData 2014 talk
<https://www.youtube.com/watch?v=Jm-eBD9xR3w>`_ has details.
.. rst-class:: annotation
Trent McConaghy, founder, Solido Design Automation Inc.
.. div:: image-box
.. image:: images/solido_logo.png
:target: https://www.solidodesign.com/
`INFONEA <http://www.infonea.com/en/>`_
---------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
We employ scikit-learn for rapid prototyping and custom-made Data Science
solutions within our in-memory based Business Intelligence Software
INFONEA®. As a well-documented and comprehensive collection of
state-of-the-art algorithms and pipelining methods, scikit-learn enables
us to provide flexible and scalable scientific analysis solutions. Thus,
scikit-learn is immensely valuable in realizing a powerful integration of
Data Science technology within self-service business analytics.
.. rst-class:: annotation
Thorsten Kranz, Data Scientist, Comma Soft AG.
.. div:: image-box
.. image:: images/infonea.jpg
:target: http://www.infonea.com/en/
`Dataiku <https://www.dataiku.com/>`_
-------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Our software, Data Science Studio (DSS), enables users to create data services
that combine `ETL <https://en.wikipedia.org/wiki/Extract,_transform,_load>`_ with
Machine Learning. Our Machine Learning module integrates
many scikit-learn algorithms. The scikit-learn library is a perfect integration
with DSS because it offers algorithms for virtually all business cases. Our goal
is to offer a transparent and flexible tool that makes it easier to optimize
time consuming aspects of building a data service, preparing data, and training
machine learning algorithms on all types of data.
.. rst-class:: annotation
Florian Douetteau, CEO, Dataiku
.. div:: image-box
.. image:: images/dataiku_logo.png
:target: https://www.dataiku.com/
`Otto Group <https://ottogroup.com/>`_
--------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Here at Otto Group, one of the global Big Five B2C online retailers, we are
using scikit-learn in all aspects of our daily work, from data exploration to
development of machine learning applications to the productive deployment of
those services. It helps us to tackle machine learning problems ranging from
e-commerce to logistics. Its consistent APIs enabled us to build the
`Palladium REST-API framework <https://github.com/ottogroup/palladium/>`_
around it and continuously deliver scikit-learn based services.
.. rst-class:: annotation
Christian Rammig, Head of Data Science, Otto Group
.. div:: image-box
.. image:: images/ottogroup_logo.png
:target: https://ottogroup.com
`Zopa <https://zopa.com/>`_
---------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
At Zopa, the first ever Peer-to-Peer lending platform, we extensively use
scikit-learn to run the business and optimize our users' experience. It powers our
Machine Learning models involved in credit risk, fraud risk, marketing, and pricing,
and has been used for originating at least 1 billion GBP worth of Zopa loans. It is
very well documented, powerful, and simple to use. We are grateful for the
capabilities it has provided, and for allowing us to deliver on our mission of
making money simple and fair.
.. rst-class:: annotation
Vlasios Vasileiou, Head of Data Science, Zopa
.. div:: image-box
.. image:: images/zopa.png
:target: https://zopa.com
`MARS <https://www.mars.com/global>`_
-------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
Scikit-Learn is integral to the Machine Learning Ecosystem at Mars. Whether
we're designing better recipes for petfood or closely analysing our cocoa
supply chain, Scikit-Learn is used as a tool for rapidly prototyping ideas
and taking them to production. This allows us to better understand and meet
the needs of our consumers worldwide. Scikit-Learn's feature-rich toolset is
easy to use and equips our associates with the capabilities they need to
solve the business challenges they face every day.
.. rst-class:: annotation
Michael Fitzke, Next Generation Technologies Sr Leader, Mars Inc.
.. div:: image-box
.. image:: images/mars.png
:target: https://www.mars.com/global
`BNP Paribas Cardif <https://www.bnpparibascardif.com/>`_
---------------------------------------------------------
.. div:: sk-text-image-grid-large
.. div:: text-box
BNP Paribas Cardif uses scikit-learn for several of its machine learning models
in production. Our internal community of developers and data scientists has
been using scikit-learn since 2015, for several reasons: the quality of the
developments, documentation and contribution governance, and the sheer size of
the contributing community. We even explicitly mention the use of
scikit-learn's pipelines in our internal model risk governance as one of our
good practices to decrease operational risks and overfitting risk. As a way to
support open source software development and in particular scikit-learn
project, we decided to participate to scikit-learn's consortium at La Fondation
Inria since its creation in 2018.
.. rst-class:: annotation
Sébastien Conort, Chief Data Scientist, BNP Paribas Cardif
.. div:: image-box
.. image:: images/bnp_paribas_cardif.png
:target: https://www.bnpparibascardif.com/
text image grid large div text box At PHIMECA Engineering we use scikit learn estimators as surrogates for expensive to evaluate numerical models mostly but not exclusively finite element mechanical models for speeding up the intensive post processing operations involved in our simulation based decision making framework Scikit learn s fit predict API together with its efficient cross validation tools considerably eases the task of selecting the best fit estimator We are also using scikit learn for illustrating concepts in our training sessions Trainees are always impressed by the ease of use of scikit learn despite the apparent theoretical complexity of machine learning rst class annotation Vincent Dubourg PHIMECA Engineering PhD Engineer div image box image images phimeca png target https www phimeca com lang en HowAboutWe http www howaboutwe com div sk text image grid large div text box At HowAboutWe scikit learn lets us implement a wide array of machine learning techniques in analysis and in production despite having a small team We use scikit learn s classification algorithms to predict user behavior enabling us to for example estimate the value of leads from a given traffic source early in the lead s tenure on our site Also our users profiles consist of primarily unstructured data answers to open ended questions so we use scikit learn s feature extraction and dimensionality reduction tools to translate these unstructured data into inputs for our matchmaking system rst class annotation Daniel Weitzenfeld Senior Data Scientist at HowAboutWe div image box image images howaboutwe png target http www howaboutwe com PeerIndex https www brandwatch com peerindex and brandwatch div sk text image grid large div text box At PeerIndex we use scientific methodology to build the Influence Graph a unique dataset that allows us to identify who s really influential and in which context To do this we have to tackle a range of machine learning and predictive modeling problems 
Scikit learn has emerged as our primary tool for developing prototypes and making quick progress From predicting missing data and classifying tweets to clustering communities of social media users scikit learn proved useful in a variety of applications Its very intuitive interface and excellent compatibility with other python tools makes it and indispensable tool in our daily research efforts rst class annotation Ferenc Huszar Senior Data Scientist at Peerindex div image box image images peerindex png target https www brandwatch com peerindex and brandwatch DataRobot https www datarobot com div sk text image grid large div text box DataRobot is building next generation predictive analytics software to make data scientists more productive and scikit learn is an integral part of our system The variety of machine learning techniques in combination with the solid implementations that scikit learn offers makes it a one stop shopping library for machine learning in Python Moreover its consistent API well tested code and permissive licensing allow us to use it in a production environment Scikit learn has literally saved us years of work we would have had to do ourselves to bring our product to market rst class annotation Jeremy Achin CEO Co founder DataRobot Inc div image box image images datarobot png target https www datarobot com OkCupid https www okcupid com div sk text image grid large div text box We re using scikit learn at OkCupid to evaluate and improve our matchmaking system The range of features it has especially preprocessing utilities means we can use it for a wide variety of projects and it s performant enough to handle the volume of data that we need to sort through The documentation is really thorough as well which makes the library quite easy to use rst class annotation David Koh Senior Data Scientist at OkCupid div image box image images okcupid png target https www okcupid com Lovely https livelovely com div sk text image grid large div text box At 
Lovely we strive to deliver the best apartment marketplace with respect to our users and our listings From understanding user behavior improving data quality and detecting fraud scikit learn is a regular tool for gathering insights predictive modeling and improving our product The easy to read documentation and intuitive architecture of the API makes machine learning both explorable and accessible to a wide range of python developers I m constantly recommending that more developers and scientists try scikit learn rst class annotation Simon Frid Data Scientist Lead at Lovely div image box image images lovely png target https livelovely com Data Publica http www data publica com div sk text image grid large div text box Data Publica builds a new predictive sales tool for commercial and marketing teams called C Radar We extensively use scikit learn to build segmentations of customers through clustering and to predict future customers based on past partnerships success or failure We also categorize companies using their website communication thanks to scikit learn and its machine learning algorithm implementations Eventually machine learning makes it possible to detect weak signals that traditional tools cannot see All these complex tasks are performed in an easy and straightforward way thanks to the great quality of the scikit learn framework rst class annotation Guillaume Lebourgeois Samuel Charron Data Scientists at Data Publica div image box image images datapublica png target http www data publica com Machinalis https www machinalis com div sk text image grid large div text box Scikit learn is the cornerstone of all the machine learning projects carried at Machinalis It has a consistent API a wide selection of algorithms and lots of auxiliary tools to deal with the boilerplate We have used it in production environments on a variety of projects including click through rate prediction information extraction https github com machinalis iepy and even counting sheep In 
fact we use it so much that we ve started to freeze our common use cases into Python packages some of them open sourced like FeatureForge https github com machinalis featureforge Scikit learn in one word Awesome rst class annotation Rafael Carrascosa Lead developer div image box image images machinalis png target https www machinalis com solido https www solidodesign com div sk text image grid large div text box Scikit learn is helping to drive Moore s Law via Solido Solido creates computer aided design tools used by the majority of top 20 semiconductor companies and fabs to design the bleeding edge chips inside smartphones automobiles and more Scikit learn helps to power Solido s algorithms for rare event estimation worst case verification optimization and more At Solido we are particularly fond of scikit learn s libraries for Gaussian Process models large scale regularized linear regression and classification Scikit learn has increased our productivity because for many ML problems we no longer need to roll our own code This PyData 2014 talk https www youtube com watch v Jm eBD9xR3w has details rst class annotation Trent McConaghy founder Solido Design Automation Inc div image box image images solido logo png target https www solidodesign com INFONEA http www infonea com en div sk text image grid large div text box We employ scikit learn for rapid prototyping and custom made Data Science solutions within our in memory based Business Intelligence Software INFONEA As a well documented and comprehensive collection of state of the art algorithms and pipelining methods scikit learn enables us to provide flexible and scalable scientific analysis solutions Thus scikit learn is immensely valuable in realizing a powerful integration of Data Science technology within self service business analytics rst class annotation Thorsten Kranz Data Scientist Coma Soft AG div image box image images infonea jpg target http www infonea com en Dataiku https www dataiku com div sk text 
image grid large div text box Our software Data Science Studio DSS enables users to create data services that combine ETL https en wikipedia org wiki Extract transform load with Machine Learning Our Machine Learning module integrates many scikit learn algorithms The scikit learn library is a perfect integration with DSS because it offers algorithms for virtually all business cases Our goal is to offer a transparent and flexible tool that makes it easier to optimize time consuming aspects of building a data service preparing data and training machine learning algorithms on all types of data rst class annotation Florian Douetteau CEO Dataiku div image box image images dataiku logo png target https www dataiku com Otto Group https ottogroup com div sk text image grid large div text box Here at Otto Group one of global Big Five B2C online retailers we are using scikit learn in all aspects of our daily work from data exploration to development of machine learning application to the productive deployment of those services It helps us to tackle machine learning problems ranging from e commerce to logistics It consistent APIs enabled us to build the Palladium REST API framework https github com ottogroup palladium around it and continuously deliver scikit learn based services rst class annotation Christian Rammig Head of Data Science Otto Group div image box image images ottogroup logo png target https ottogroup com Zopa https zopa com div sk text image grid large div text box At Zopa the first ever Peer to Peer lending platform we extensively use scikit learn to run the business and optimize our users experience It powers our Machine Learning models involved in credit risk fraud risk marketing and pricing and has been used for originating at least 1 billion GBP worth of Zopa loans It is very well documented powerful and simple to use We are grateful for the capabilities it has provided and for allowing us to deliver on our mission of making money simple and fair rst class 
annotation Vlasios Vasileiou Head of Data Science Zopa div image box image images zopa png target https zopa com MARS https www mars com global div sk text image grid large div text box Scikit Learn is integral to the Machine Learning Ecosystem at Mars Whether we re designing better recipes for petfood or closely analysing our cocoa supply chain Scikit Learn is used as a tool for rapidly prototyping ideas and taking them to production This allows us to better understand and meet the needs of our consumers worldwide Scikit Learn s feature rich toolset is easy to use and equips our associates with the capabilities they need to solve the business challenges they face every day rst class annotation Michael Fitzke Next Generation Technologies Sr Leader Mars Inc div image box image images mars png target https www mars com global BNP Paribas Cardif https www bnpparibascardif com div sk text image grid large div text box BNP Paribas Cardif uses scikit learn for several of its machine learning models in production Our internal community of developers and data scientists has been using scikit learn since 2015 for several reasons the quality of the developments documentation and contribution governance and the sheer size of the contributing community We even explicitly mention the use of scikit learn s pipelines in our internal model risk governance as one of our good practices to decrease operational risks and overfitting risk As a way to support open source software development and in particular scikit learn project we decided to participate to scikit learn s consortium at La Fondation Inria since its creation in 2018 rst class annotation S bastien Conort Chief Data Scientist BNP Paribas Cardif div image box image images bnp paribas cardif png target https www bnpparibascardif com |
.. _sample_generators:

Generated datasets
==================
.. currentmodule:: sklearn.datasets
In addition, scikit-learn includes various random sample generators that
can be used to build artificial datasets of controlled size and complexity.
Generators for classification and clustering
--------------------------------------------
These generators produce a matrix of features and corresponding discrete
targets.
Single label
~~~~~~~~~~~~
:func:`make_blobs` creates a multiclass dataset by allocating each class to one
normally-distributed cluster of points. It provides control over the centers and
standard deviations of each cluster. This dataset is used to demonstrate clustering.
.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_blobs

   X, y = make_blobs(centers=3, cluster_std=0.5, random_state=0)
   plt.scatter(X[:, 0], X[:, 1], c=y)
   plt.title("Three normally-distributed clusters")
   plt.show()
:func:`make_classification` also creates multiclass datasets but specializes in
introducing noise by way of: correlated, redundant and uninformative features; multiple
Gaussian clusters per class; and linear transformations of the feature space.
.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_classification

   fig, axs = plt.subplots(1, 3, figsize=(12, 4), sharey=True, sharex=True)
   titles = ["Two classes,\none informative feature,\none cluster per class",
             "Two classes,\ntwo informative features,\ntwo clusters per class",
             "Three classes,\ntwo informative features,\none cluster per class"]
   params = [
       {"n_informative": 1, "n_clusters_per_class": 1, "n_classes": 2},
       {"n_informative": 2, "n_clusters_per_class": 2, "n_classes": 2},
       {"n_informative": 2, "n_clusters_per_class": 1, "n_classes": 3}
   ]
   for i, param in enumerate(params):
       X, Y = make_classification(n_features=2, n_redundant=0, random_state=1, **param)
       axs[i].scatter(X[:, 0], X[:, 1], c=Y)
       axs[i].set_title(titles[i])
   plt.tight_layout()
   plt.show()
:func:`make_gaussian_quantiles` divides a single Gaussian cluster into
near-equal-size classes separated by concentric hyperspheres.
.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_gaussian_quantiles

   X, Y = make_gaussian_quantiles(n_features=2, n_classes=3, random_state=0)
   plt.scatter(X[:, 0], X[:, 1], c=Y)
   plt.title("Gaussian divided into three quantiles")
   plt.show()
:func:`make_hastie_10_2` generates a similar binary, 10-dimensional problem.
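As a quick orientation, a minimal sketch of its output; the parameters shown are just illustrative values:

```python
from sklearn.datasets import make_hastie_10_2

# 10 standard-normal features; binary targets coded as -1/+1
X, y = make_hastie_10_2(n_samples=100, random_state=0)
print(X.shape)   # (100, 10)
```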
:func:`make_circles` and :func:`make_moons` generate 2D binary classification
datasets that are challenging to certain algorithms (e.g., centroid-based
clustering or linear classification), including optional Gaussian noise.
They are useful for visualization. :func:`make_circles` produces Gaussian data
with a spherical decision boundary for binary classification, while
:func:`make_moons` produces two interleaving half-circles.
.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_circles, make_moons

   fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
   X, Y = make_circles(noise=0.1, factor=0.3, random_state=0)
   ax1.scatter(X[:, 0], X[:, 1], c=Y)
   ax1.set_title("make_circles")
   X, Y = make_moons(noise=0.1, random_state=0)
   ax2.scatter(X[:, 0], X[:, 1], c=Y)
   ax2.set_title("make_moons")
   plt.tight_layout()
   plt.show()
Multilabel
~~~~~~~~~~
:func:`make_multilabel_classification` generates random samples with multiple
labels, reflecting a bag of words drawn from a mixture of topics. The number of
topics for each document is drawn from a Poisson distribution, and the topics
themselves are drawn from a fixed random distribution. Similarly, the number of
words is drawn from Poisson, with words drawn from a multinomial, where each
topic defines a probability distribution over words. Simplifications with
respect to true bag-of-words mixtures include:
* Per-topic word distributions are independently drawn, where in reality all
  would be affected by a sparse base distribution, and would be correlated.
* For a document generated from multiple topics, all topics are weighted
  equally in generating its bag of words.
* Documents without labels draw their words at random, rather than from a base
  distribution.
.. image:: ../auto_examples/datasets/images/sphx_glr_plot_random_multilabel_dataset_001.png
   :target: ../auto_examples/datasets/plot_random_multilabel_dataset.html
   :scale: 50
   :align: center
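A minimal sketch of the generator's output; by default ``Y`` is returned as a binary label-indicator matrix, and the parameter values below are just illustrative:

```python
from sklearn.datasets import make_multilabel_classification

# 50 "documents" over a 20-word vocabulary, with 5 candidate topics (labels)
X, Y = make_multilabel_classification(n_samples=50, n_features=20,
                                      n_classes=5, random_state=0)
print(X.shape, Y.shape)   # (50, 20) (50, 5)
```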
Biclustering
~~~~~~~~~~~~
.. autosummary::

   make_biclusters
   make_checkerboard
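For instance, :func:`make_biclusters` embeds constant-valued biclusters in a matrix and also returns indicator masks for the planted rows and columns (a minimal sketch; the shape and noise values are just illustrative):

```python
from sklearn.datasets import make_biclusters

# 30x30 data matrix with 3 planted biclusters; `rows` and `cols` are
# boolean masks of shape (n_clusters, n_rows) and (n_clusters, n_cols)
X, rows, cols = make_biclusters(shape=(30, 30), n_clusters=3,
                                noise=0.5, random_state=0)
print(X.shape, rows.shape, cols.shape)   # (30, 30) (3, 30) (3, 30)
```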
Generators for regression
-------------------------
:func:`make_regression` produces regression targets as an optionally-sparse
random linear combination of random features, with noise. Its informative
features may be uncorrelated, or low rank (few features account for most of the
variance).
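With ``coef=True``, the ground-truth coefficients are returned alongside the data, which is convenient for checking feature-selection methods (a minimal sketch; the sizes are just illustrative):

```python
from sklearn.datasets import make_regression

# 10 features, only 3 of which enter the linear model generating y
X, y, coef = make_regression(n_samples=100, n_features=10, n_informative=3,
                             noise=1.0, coef=True, random_state=0)
print(X.shape, y.shape)        # (100, 10) (100,)
print(int((coef != 0).sum()))  # number of non-zero true coefficients
```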
Other regression generators generate functions deterministically from
randomized features. :func:`make_sparse_uncorrelated` produces a target as a
linear combination of four features with fixed coefficients.
Others encode explicitly non-linear relations:
:func:`make_friedman1` is related by polynomial and sine transforms;
:func:`make_friedman2` includes feature multiplication and reciprocation; and
:func:`make_friedman3` is similar with an arctan transformation on the target.
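For example, :func:`make_friedman1` targets follow ``y = 10*sin(pi*x1*x2) + 20*(x3 - 0.5)**2 + 10*x4 + 5*x5 + noise``, which is easy to verify with the noise turned off (a minimal sketch):

```python
import numpy as np
from sklearn.datasets import make_friedman1

X, y = make_friedman1(n_samples=50, n_features=10, noise=0.0, random_state=0)

# Only the first five features enter the target; the rest are irrelevant.
expected = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
            + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4])
print(np.allclose(y, expected))   # True
```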
Generators for manifold learning
--------------------------------
.. autosummary::

   make_s_curve
   make_swiss_roll
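These return 3D points together with the univariate position along the manifold, which is typically used to color embeddings (a minimal sketch with :func:`make_swiss_roll`; the sample count and noise level are just illustrative):

```python
from sklearn.datasets import make_swiss_roll

# X holds 3D coordinates; t is the position along the roll's main dimension
X, t = make_swiss_roll(n_samples=500, noise=0.05, random_state=0)
print(X.shape, t.shape)   # (500, 3) (500,)
```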
Generators for decomposition
----------------------------
.. autosummary::

   make_low_rank_matrix
   make_sparse_coded_signal
   make_spd_matrix
   make_sparse_spd_matrix
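For example, :func:`make_spd_matrix` draws a random symmetric positive-definite matrix, handy for testing covariance estimators (a minimal sketch; the dimension is just an illustrative value):

```python
import numpy as np
from sklearn.datasets import make_spd_matrix

A = make_spd_matrix(5, random_state=0)

# Symmetric, with all eigenvalues strictly positive
print(np.allclose(A, A.T))                    # True
print(bool(np.linalg.eigvalsh(A).min() > 0))  # True
```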
.. _scaling_strategies:

Strategies to scale computationally: bigger data
=================================================
For some applications the amount of examples, features (or both) and/or the
speed at which they need to be processed are challenging for traditional
approaches. In these cases scikit-learn has a number of options you can
consider to make your system scale.
Scaling with instances using out-of-core learning
--------------------------------------------------
Out-of-core (or "external memory") learning is a technique used to learn from
data that cannot fit in a computer's main memory (RAM).
Here is a sketch of a system designed to achieve this goal:
1. a way to stream instances
2. a way to extract features from instances
3. an incremental algorithm
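Put together, the three components form a simple loop. The sketch below is only illustrative: a Python list stands in for a real instance stream, and a stateless hashing vectorizer is paired with an incremental linear model.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# 1. Stream instances: here a list stands in for a file/database/network reader
docs = [("free offer buy now", 1), ("meeting at noon", 0),
        ("free free prize", 1), ("see you tomorrow", 0)] * 10

# 2. Stateless feature extraction: fixed output width, no vocabulary to fit
vectorizer = HashingVectorizer(n_features=2 ** 18)

# 3. Incremental learner; all classes must be declared on the first call
clf = SGDClassifier(random_state=0)

batch_size = 8
for start in range(0, len(docs), batch_size):
    batch = docs[start:start + batch_size]
    X = vectorizer.transform([text for text, _ in batch])
    y = [label for _, label in batch]
    clf.partial_fit(X, y, classes=[0, 1])

print(clf.predict(vectorizer.transform(["free prize now"]))[0])
```

At any given time only one mini-batch is held in memory, which is the point of the exercise.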
Streaming instances
....................
Basically, 1. may be a reader that yields instances from files on a
hard drive, a database, from a network stream etc. However,
details on how to achieve this are beyond the scope of this documentation.
Extracting features
...................
\2. could be any relevant way to extract features among the
different :ref:`feature extraction <feature_extraction>` methods supported by
scikit-learn. However, when working with data that needs vectorization and
where the set of features or values is not known in advance one should take
explicit care. A good example is text classification where unknown terms are
likely to be found during training. It is possible to use a stateful
vectorizer if making multiple passes over the data is reasonable from an
application point of view. Otherwise, one can turn up the difficulty by using
a stateless feature extractor. Currently the preferred way to do this is to
use the so-called :ref:`hashing trick<feature_hashing>` as implemented by
:class:`sklearn.feature_extraction.FeatureHasher` for datasets with categorical
variables represented as list of Python dicts or
:class:`sklearn.feature_extraction.text.HashingVectorizer` for text documents.
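For dict-encoded categorical data, :class:`~sklearn.feature_extraction.FeatureHasher` needs no fitting step, so previously unseen keys or values at prediction time are simply hashed into the same fixed-width space (a minimal sketch; the feature names are just illustrative):

```python
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=2 ** 10)  # output width is fixed up front

# String values are hashed as "key=value"; numeric values keep their magnitude
X = hasher.transform([{"color": "red", "size": 2},
                      {"color": "blue", "shape": "round"}])
print(X.shape)   # (2, 1024)
```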
Incremental learning
.....................
Finally, for 3. we have a number of options inside scikit-learn. Although not
all algorithms can learn incrementally (i.e. without seeing all the instances
at once), all estimators implementing the ``partial_fit`` API are candidates.
Actually, the ability to learn incrementally from a mini-batch of instances
(sometimes called "online learning") is key to out-of-core learning as it
guarantees that at any given time there will be only a small amount of
instances in the main memory. Choosing a good size for the mini-batch that
balances relevancy and memory footprint could involve some tuning [1]_.
Here is a list of incremental estimators for different tasks:
- Classification

  + :class:`sklearn.naive_bayes.MultinomialNB`
  + :class:`sklearn.naive_bayes.BernoulliNB`
  + :class:`sklearn.linear_model.Perceptron`
  + :class:`sklearn.linear_model.SGDClassifier`
  + :class:`sklearn.linear_model.PassiveAggressiveClassifier`
  + :class:`sklearn.neural_network.MLPClassifier`

- Regression

  + :class:`sklearn.linear_model.SGDRegressor`
  + :class:`sklearn.linear_model.PassiveAggressiveRegressor`
  + :class:`sklearn.neural_network.MLPRegressor`

- Clustering

  + :class:`sklearn.cluster.MiniBatchKMeans`
  + :class:`sklearn.cluster.Birch`

- Decomposition / feature Extraction

  + :class:`sklearn.decomposition.MiniBatchDictionaryLearning`
  + :class:`sklearn.decomposition.IncrementalPCA`
  + :class:`sklearn.decomposition.LatentDirichletAllocation`
  + :class:`sklearn.decomposition.MiniBatchNMF`

- Preprocessing

  + :class:`sklearn.preprocessing.StandardScaler`
  + :class:`sklearn.preprocessing.MinMaxScaler`
  + :class:`sklearn.preprocessing.MaxAbsScaler`
For classification, a somewhat important thing to note is that although a
stateless feature extraction routine may be able to cope with new/unseen
attributes, the incremental learner itself may be unable to cope with
new/unseen targets classes. In this case you have to pass all the possible
classes to the first ``partial_fit`` call using the ``classes=`` parameter.
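A minimal sketch with :class:`~sklearn.linear_model.Perceptron` (the data here is random and purely illustrative): the first batch only contains two of the three classes, so the full class list must be declared up front.

```python
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.RandomState(0)
clf = Perceptron()

X1, y1 = rng.randn(20, 4), rng.randint(2, size=20)   # labels 0/1 only
X2, y2 = rng.randn(20, 4), rng.randint(3, size=20)   # labels 0/1/2

clf.partial_fit(X1, y1, classes=[0, 1, 2])  # declare all classes up front
clf.partial_fit(X2, y2)                     # later calls reuse them
print(clf.classes_)   # [0 1 2]
```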
Another aspect to consider when choosing a proper algorithm is that not all of
them put the same importance on each example over time. Namely, the
``Perceptron`` is still sensitive to badly labeled examples even after many
examples whereas the ``SGD*`` and ``PassiveAggressive*`` families are more
robust to this kind of artifact. Conversely, the latter also tend to give less
importance to remarkably different, yet properly labeled examples when they
come late in the stream as their learning rate decreases over time.
Examples
..........
Finally, we have a full-fledged example of
:ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`. It is aimed at
providing a starting point for people wanting to build out-of-core learning
systems and demonstrates most of the notions discussed above.
Furthermore, it also shows the evolution of the performance of different
algorithms with the number of processed examples.
.. |accuracy_over_time| image:: ../auto_examples/applications/images/sphx_glr_plot_out_of_core_classification_001.png
:target: ../auto_examples/applications/plot_out_of_core_classification.html
:scale: 80
.. centered:: |accuracy_over_time|
Now looking at the computation time of the different parts, we see that the
vectorization is much more expensive than learning itself. Of the different
algorithms, ``MultinomialNB`` is the most expensive, but its overhead can be
mitigated by increasing the size of the mini-batches (exercise: change
``minibatch_size`` to 100 and 10000 in the program and compare).
.. |computation_time| image:: ../auto_examples/applications/images/sphx_glr_plot_out_of_core_classification_003.png
:target: ../auto_examples/applications/plot_out_of_core_classification.html
:scale: 80
.. centered:: |computation_time|
Notes
......
.. [1] Depending on the algorithm, the mini-batch size may or may not
    influence results. SGD*, PassiveAggressive*, and discrete NaiveBayes are
    truly online and are not affected by batch size. Conversely, the
    convergence rate of MiniBatchKMeans is affected by the batch size. Also,
    its memory footprint can vary dramatically with batch size.
incrementally i e without seeing all the instances at once all estimators implementing the partial fit API are candidates Actually the ability to learn incrementally from a mini batch of instances sometimes called online learning is key to out of core learning as it guarantees that at any given time there will be only a small amount of instances in the main memory Choosing a good size for the mini batch that balances relevancy and memory footprint could involve some tuning 1 Here is a list of incremental estimators for different tasks Classification class sklearn naive bayes MultinomialNB class sklearn naive bayes BernoulliNB class sklearn linear model Perceptron class sklearn linear model SGDClassifier class sklearn linear model PassiveAggressiveClassifier class sklearn neural network MLPClassifier Regression class sklearn linear model SGDRegressor class sklearn linear model PassiveAggressiveRegressor class sklearn neural network MLPRegressor Clustering class sklearn cluster MiniBatchKMeans class sklearn cluster Birch Decomposition feature Extraction class sklearn decomposition MiniBatchDictionaryLearning class sklearn decomposition IncrementalPCA class sklearn decomposition LatentDirichletAllocation class sklearn decomposition MiniBatchNMF Preprocessing class sklearn preprocessing StandardScaler class sklearn preprocessing MinMaxScaler class sklearn preprocessing MaxAbsScaler For classification a somewhat important thing to note is that although a stateless feature extraction routine may be able to cope with new unseen attributes the incremental learner itself may be unable to cope with new unseen targets classes In this case you have to pass all the possible classes to the first partial fit call using the classes parameter Another aspect to consider when choosing a proper algorithm is that not all of them put the same importance on each example over time Namely the Perceptron is still sensitive to badly labeled examples even after many examples whereas the SGD 
and PassiveAggressive families are more robust to this kind of artifacts Conversely the latter also tend to give less importance to remarkably different yet properly labeled examples when they come late in the stream as their learning rate decreases over time Examples Finally we have a full fledged example of ref sphx glr auto examples applications plot out of core classification py It is aimed at providing a starting point for people wanting to build out of core learning systems and demonstrates most of the notions discussed above Furthermore it also shows the evolution of the performance of different algorithms with the number of processed examples accuracy over time image auto examples applications images sphx glr plot out of core classification 001 png target auto examples applications plot out of core classification html scale 80 centered accuracy over time Now looking at the computation time of the different parts we see that the vectorization is much more expensive than learning itself From the different algorithms MultinomialNB is the most expensive but its overhead can be mitigated by increasing the size of the mini batches exercise change minibatch size to 100 and 10000 in the program and compare computation time image auto examples applications images sphx glr plot out of core classification 003 png target auto examples applications plot out of core classification html scale 80 centered computation time Notes 1 Depending on the algorithm the mini batch size can influence results or not SGD PassiveAggressive and discrete NaiveBayes are truly online and are not affected by batch size Conversely MiniBatchKMeans convergence rate is affected by the batch size Also its memory footprint can vary dramatically with batch size |
scikit-learn correspond to certain kernels as they are used for example in support vector machines see kernelapproximation This submodule contains functions that approximate the feature mappings that The following feature functions perform non linear transformations of the Kernel Approximation input which can serve as a basis for linear classification or other | .. _kernel_approximation:
Kernel Approximation
====================
This submodule contains functions that approximate the feature mappings that
correspond to certain kernels, as they are used for example in support vector
machines (see :ref:`svm`).
The following feature functions perform non-linear transformations of the
input, which can serve as a basis for linear classification or other
algorithms.
.. currentmodule:: sklearn.linear_model
The advantage of using approximate explicit feature maps compared to the
`kernel trick <https://en.wikipedia.org/wiki/Kernel_trick>`_,
which makes use of feature maps implicitly, is that explicit mappings
can be better suited for online learning and can significantly reduce the cost
of learning with very large datasets.
Standard kernelized SVMs do not scale well to large datasets, but using an
approximate kernel map it is possible to use much more efficient linear SVMs.
In particular, the combination of kernel map approximations with
:class:`SGDClassifier` can make non-linear learning on large datasets possible.
Since there has not been much empirical work using approximate embeddings, it
is advisable to compare results against exact kernel methods when possible.
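As a rough sketch (on synthetic data, with illustrative, untuned parameters),
chaining a kernel map approximation with :class:`SGDClassifier` looks like
this:

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Approximate RBF feature map followed by a linear model trained with SGD:
# non-linear decision boundaries at roughly linear cost.
clf = make_pipeline(
    Nystroem(n_components=300, random_state=0),  # default rbf kernel
    SGDClassifier(max_iter=1000, random_state=0),
)
clf.fit(X, y)
score = clf.score(X, y)  # training accuracy, for illustration only
```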
.. seealso::
:ref:`polynomial_regression` for an exact polynomial transformation.
.. currentmodule:: sklearn.kernel_approximation
.. _nystroem_kernel_approx:
Nystroem Method for Kernel Approximation
----------------------------------------
The Nystroem method, as implemented in :class:`Nystroem`, is a general method for
reduced rank approximations of kernels. It achieves this by subsampling without
replacement rows/columns of the data on which the kernel is evaluated. While the
computational complexity of the exact method is
:math:`\mathcal{O}(n^3_{\text{samples}})`, the complexity of the approximation
is :math:`\mathcal{O}(n^2_{\text{components}} \cdot n_{\text{samples}})`, where
one can set :math:`n_{\text{components}} \ll n_{\text{samples}}` without a
significant decrease in performance [WS2001]_.
We can construct the eigendecomposition of the kernel matrix :math:`K`, based
on the features of the data, and then split it into sampled and unsampled data
points.
.. math::
K = U \Lambda U^T
= \begin{bmatrix} U_1 \\ U_2\end{bmatrix} \Lambda \begin{bmatrix} U_1 \\ U_2 \end{bmatrix}^T
= \begin{bmatrix} U_1 \Lambda U_1^T & U_1 \Lambda U_2^T \\ U_2 \Lambda U_1^T & U_2 \Lambda U_2^T \end{bmatrix}
\equiv \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
where:
* :math:`U` is orthonormal
* :math:`\Lambda` is the diagonal matrix of eigenvalues
* :math:`U_1` is the orthonormal matrix of samples that were chosen
* :math:`U_2` is the orthonormal matrix of samples that were not chosen
Given that :math:`U_1 \Lambda U_1^T` can be obtained by orthonormalization of
the matrix :math:`K_{11}`, and :math:`U_2 \Lambda U_1^T` can be evaluated (as
well as its transpose), the only remaining term to elucidate is
:math:`U_2 \Lambda U_2^T`. To do this we can express it in terms of the already
evaluated matrices:
.. math::
\begin{align} U_2 \Lambda U_2^T &= \left(K_{21} U_1 \Lambda^{-1}\right) \Lambda \left(K_{21} U_1 \Lambda^{-1}\right)^T
\\&= K_{21} U_1 (\Lambda^{-1} \Lambda) \Lambda^{-1} U_1^T K_{21}^T
\\&= K_{21} U_1 \Lambda^{-1} U_1^T K_{21}^T
\\&= K_{21} K_{11}^{-1} K_{21}^T
\\&= \left( K_{21} K_{11}^{-\frac12} \right) \left( K_{21} K_{11}^{-\frac12} \right)^T
.\end{align}
During ``fit``, the class :class:`Nystroem` evaluates the basis :math:`U_1`, and
computes the normalization constant, :math:`K_{11}^{-\frac12}`. Later, during
``transform``, the kernel matrix is determined between the basis (given by the
`components_` attribute) and the new data points, ``X``. This matrix is then
multiplied by the ``normalization_`` matrix for the final result.
By default :class:`Nystroem` uses the ``rbf`` kernel, but it can use any kernel
function or a precomputed kernel matrix. The number of samples used - which is
also the dimensionality of the features computed - is given by the parameter
``n_components``.
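The quality of the approximation can be checked directly by comparing inner
products of the mapped features with the exact kernel matrix. A small sketch
on random data (when ``n_components`` equals the number of samples, the
Nystroem map reproduces the kernel up to numerical precision):

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.rand(100, 5)

K_exact = rbf_kernel(X, gamma=0.5)
feature_map = Nystroem(kernel="rbf", gamma=0.5, n_components=100,
                       random_state=0)
X_mapped = feature_map.fit_transform(X)
K_approx = X_mapped @ X_mapped.T  # inner products approximate the kernel
err = np.abs(K_exact - K_approx).max()
```

With ``n_components`` smaller than the number of samples, ``err`` grows but
the transform becomes correspondingly cheaper.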
.. rubric:: Examples
* See the example entitled
:ref:`sphx_glr_auto_examples_applications_plot_cyclical_feature_engineering.py`,
that shows an efficient machine learning pipeline that uses a
:class:`Nystroem` kernel.
.. _rbf_kernel_approx:
Radial Basis Function Kernel
----------------------------
The :class:`RBFSampler` constructs an approximate mapping for the radial basis
function kernel, also known as *Random Kitchen Sinks* [RR2007]_. This
transformation can be used to explicitly model a kernel map, prior to applying
a linear algorithm, for example a linear SVM::
>>> from sklearn.kernel_approximation import RBFSampler
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> rbf_feature = RBFSampler(gamma=1, random_state=1)
>>> X_features = rbf_feature.fit_transform(X)
>>> clf = SGDClassifier(max_iter=5)
>>> clf.fit(X_features, y)
SGDClassifier(max_iter=5)
>>> clf.score(X_features, y)
1.0
The mapping relies on a Monte Carlo approximation to the
kernel values. The ``fit`` function performs the Monte Carlo sampling, whereas
the ``transform`` method performs the mapping of the data. Because of the
inherent randomness of the process, results may vary between different calls to
the ``fit`` function.
The mapping is controlled by two parameters: ``n_components``, which is the
target dimensionality of the feature transform, and ``gamma``, the parameter
of the RBF kernel. A higher ``n_components`` will
result in a better approximation of the kernel and will yield results more
similar to those produced by a kernel SVM. Note that "fitting" the feature
function does not actually depend on the data given to the ``fit`` function.
Only the dimensionality of the data is used.
Details on the method can be found in [RR2007]_.
For a given value of ``n_components``, :class:`RBFSampler` is often less
accurate than :class:`Nystroem`. :class:`RBFSampler` is cheaper to compute,
though, making the use of larger feature spaces more efficient.
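The Monte Carlo nature of the mapping means the approximation improves with
``n_components``; a quick sketch on random data:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.rand(50, 5)
K_exact = rbf_kernel(X, gamma=1.0)

errors = {}
for n in (10, 1000):
    X_features = RBFSampler(gamma=1.0, n_components=n,
                            random_state=0).fit_transform(X)
    # mean absolute deviation from the exact kernel matrix
    errors[n] = np.abs(X_features @ X_features.T - K_exact).mean()
# errors[1000] is substantially smaller than errors[10]
```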
.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_kernel_approximation_002.png
:target: ../auto_examples/miscellaneous/plot_kernel_approximation.html
:scale: 50%
:align: center
Comparing an exact RBF kernel (left) with the approximation (right)
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_kernel_approximation.py`
.. _additive_chi_kernel_approx:
Additive Chi Squared Kernel
---------------------------
The additive chi squared kernel is a kernel on histograms, often used in computer vision.
The additive chi squared kernel as used here is given by
.. math::
k(x, y) = \sum_i \frac{2x_iy_i}{x_i+y_i}
This is not exactly the same as :func:`sklearn.metrics.pairwise.additive_chi2_kernel`.
The authors of [VZ2010]_ prefer the version above as it is always positive
definite.
Since the kernel is additive, it is possible to treat all components
:math:`x_i` separately for embedding. This makes it possible to sample
the Fourier transform in regular intervals, instead of approximating
using Monte Carlo sampling.
The class :class:`AdditiveChi2Sampler` implements this component wise
deterministic sampling. Each component is sampled :math:`n` times, yielding
:math:`2n+1` dimensions per input dimension (the multiple of two stems
from the real and imaginary parts of the Fourier transform).
In the literature, :math:`n` is usually chosen to be 1 or 2, transforming
the dataset to size ``n_samples * 5 * n_features`` (in the case of :math:`n=2`).
The approximate feature map provided by :class:`AdditiveChi2Sampler` can be combined
with the approximate feature map provided by :class:`RBFSampler` to yield an approximate
feature map for the exponentiated chi squared kernel.
See [VZ2010]_ for details and [VVZ2010]_ for the combination with the :class:`RBFSampler`.
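A minimal sketch (synthetic non-negative data, standing in for histograms)
showing that inner products of the transformed features approximate the
additive chi squared kernel defined above:

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler

rng = np.random.RandomState(0)
X = rng.rand(6, 4) + 0.1  # the kernel requires non-negative inputs

# each input feature is expanded into several Fourier-sampled features
X_transformed = AdditiveChi2Sampler(sample_steps=3).fit_transform(X)
K_approx = X_transformed @ X_transformed.T

# exact kernel k(x, y) = sum_i 2 x_i y_i / (x_i + y_i), as defined above
K_exact = np.array([[np.sum(2 * x * y / (x + y)) for y in X] for x in X])
max_err = np.abs(K_approx - K_exact).max()  # small; sampling is deterministic
```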
.. _skewed_chi_kernel_approx:
Skewed Chi Squared Kernel
-------------------------
The skewed chi squared kernel is given by:
.. math::
k(x,y) = \prod_i \frac{2\sqrt{x_i+c}\sqrt{y_i+c}}{x_i + y_i + 2c}
It has properties that are similar to the exponentiated chi squared kernel
often used in computer vision, but allows for a simple Monte Carlo
approximation of the feature map.
The usage of the :class:`SkewedChi2Sampler` is the same as the usage described
above for the :class:`RBFSampler`. The only difference is in the free
parameter, that is called :math:`c`.
For a motivation for this mapping and the mathematical details see [LS2010]_.
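A brief usage sketch on random data; the free parameter :math:`c` is exposed
as the ``skewedness`` constructor argument:

```python
import numpy as np
from sklearn.kernel_approximation import SkewedChi2Sampler

rng = np.random.RandomState(0)
X = rng.rand(10, 3)  # entries must satisfy x_i + c > 0

# Monte Carlo feature map, used exactly like RBFSampler
sampler = SkewedChi2Sampler(skewedness=1.0, n_components=50, random_state=0)
X_features = sampler.fit_transform(X)
```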
.. _polynomial_kernel_approx:
Polynomial Kernel Approximation via Tensor Sketch
-------------------------------------------------
The :ref:`polynomial kernel <polynomial_kernel>` is a popular type of kernel
function given by:
.. math::
k(x, y) = (\gamma x^\top y +c_0)^d
where:
* ``x``, ``y`` are the input vectors
* ``d`` is the kernel degree
* :math:`\gamma` is a scale parameter and :math:`c_0` a constant offset
Intuitively, the feature space of the polynomial kernel of degree `d`
consists of all possible degree-`d` products among input features, which enables
learning algorithms using this kernel to account for interactions between features.
The TensorSketch [PP2013]_ method, as implemented in :class:`PolynomialCountSketch`, is a
scalable, input data independent method for polynomial kernel approximation.
It is based on the concept of Count sketch [WIKICS]_ [CCF2002]_ , a dimensionality
reduction technique similar to feature hashing, which instead uses several
independent hash functions. TensorSketch obtains a Count Sketch of the outer product
of two vectors (or a vector with itself), which can be used as an approximation of the
polynomial kernel feature space. In particular, instead of explicitly computing
the outer product, TensorSketch computes the Count Sketch of the vectors and then
uses polynomial multiplication via the Fast Fourier Transform to compute the
Count Sketch of their outer product.
Conveniently, the training phase of TensorSketch simply consists of initializing
some random variables. It is thus independent of the input data, i.e. it only
depends on the number of input features, but not the data values.
In addition, this method can transform samples in
:math:`\mathcal{O}(n_{\text{samples}}(n_{\text{features}} + n_{\text{components}} \log(n_{\text{components}})))`
time, where :math:`n_{\text{components}}` is the desired output dimension,
determined by ``n_components``.
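A sketch on random data, comparing TensorSketch inner products against the
exact polynomial kernel (the parameter values are illustrative):

```python
import numpy as np
from sklearn.kernel_approximation import PolynomialCountSketch
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.RandomState(0)
X = rng.rand(30, 10)

K_exact = polynomial_kernel(X, degree=2, gamma=1.0, coef0=0)
ps = PolynomialCountSketch(degree=2, gamma=1.0, coef0=0,
                           n_components=1000, random_state=0)
X_sketch = ps.fit_transform(X)
K_approx = X_sketch @ X_sketch.T
rel_err = np.abs(K_approx - K_exact).mean() / K_exact.mean()
```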
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_kernel_approximation_plot_scalable_poly_kernels.py`
.. _tensor_sketch_kernel_approx:
Mathematical Details
--------------------
Kernel methods like support vector machines or kernelized
PCA rely on a property of reproducing kernel Hilbert spaces.
For any positive definite kernel function :math:`k` (a so-called Mercer kernel),
it is guaranteed that there exists a mapping :math:`\phi`
into a Hilbert space :math:`\mathcal{H}`, such that
.. math::
k(x,y) = \langle \phi(x), \phi(y) \rangle
where :math:`\langle \cdot, \cdot \rangle` denotes the inner product in the
Hilbert space.
If an algorithm, such as a linear support vector machine or PCA,
relies only on the scalar product of data points :math:`x_i`, one may use
the value of :math:`k(x_i, x_j)`, which corresponds to applying the algorithm
to the mapped data points :math:`\phi(x_i)`.
The advantage of using :math:`k` is that the mapping :math:`\phi` never has
to be calculated explicitly, allowing for arbitrarily large (even
infinite-dimensional) feature spaces.
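For a concrete, hand-constructed degree-2 instance of this identity: for the
homogeneous polynomial kernel :math:`k(x, y) = (x^\top y)^2` on
:math:`\mathbb{R}^2`, one explicit map is
:math:`\phi(x) = (x_1^2, x_2^2, \sqrt{2}\,x_1 x_2)`, which a few lines of
NumPy can verify:

```python
import numpy as np

def phi(v):
    # explicit feature map for k(x, y) = (x . y)^2 on R^2
    return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2) * v[0] * v[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])

k_xy = (x @ y) ** 2      # kernel trick: never materializes phi
inner = phi(x) @ phi(y)  # explicit map, then ordinary inner product
# both equal 25.0
```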
One drawback of kernel methods is that it might be necessary
to store many kernel values :math:`k(x_i, x_j)` during optimization.
If a kernelized classifier is applied to new data :math:`y_j`,
:math:`k(x_i, y_j)` needs to be computed to make predictions,
possibly for many different :math:`x_i` in the training set.
The classes in this submodule make it possible to approximate the embedding
:math:`\phi`, thereby working explicitly with the representations
:math:`\phi(x_i)`, which obviates the need to apply the kernel
or to store training examples.
.. rubric:: References
.. [WS2001] `"Using the Nyström method to speed up kernel machines"
<https://papers.nips.cc/paper_files/paper/2000/hash/19de10adbaa1b2ee13f77f679fa1483a-Abstract.html>`_
Williams, C.K.I.; Seeger, M. - 2001.
.. [RR2007] `"Random features for large-scale kernel machines"
<https://papers.nips.cc/paper/2007/hash/013a006f03dbc5392effeb8f18fda755-Abstract.html>`_
Rahimi, A. and Recht, B. - Advances in neural information processing 2007,
.. [LS2010] `"Random Fourier approximations for skewed multiplicative histogram kernels"
<https://www.researchgate.net/publication/221114584_Random_Fourier_Approximations_for_Skewed_Multiplicative_Histogram_Kernels>`_
Li, F., Ionescu, C., and Sminchisescu, C.
- Pattern Recognition, DAGM 2010, Lecture Notes in Computer Science.
.. [VZ2010] `"Efficient additive kernels via explicit feature maps"
<https://www.robots.ox.ac.uk/~vgg/publications/2011/Vedaldi11/vedaldi11.pdf>`_
Vedaldi, A. and Zisserman, A. - Computer Vision and Pattern Recognition 2010
.. [VVZ2010] `"Generalized RBF feature maps for Efficient Detection"
<https://www.robots.ox.ac.uk/~vgg/publications/2010/Sreekanth10/sreekanth10.pdf>`_
Vempati, S. and Vedaldi, A. and Zisserman, A. and Jawahar, CV - 2010
.. [PP2013] :doi:`"Fast and scalable polynomial kernels via explicit feature maps"
<10.1145/2487575.2487591>`
Pham, N., & Pagh, R. - 2013
.. [CCF2002] `"Finding frequent items in data streams"
<https://www.cs.princeton.edu/courses/archive/spring04/cos598B/bib/CharikarCF.pdf>`_
Charikar, M., Chen, K., & Farach-Colton - 2002
.. [WIKICS] `"Wikipedia: Count sketch"
<https://en.wikipedia.org/wiki/Count_sketch>`_ | scikit-learn | kernel approximation Kernel Approximation This submodule contains functions that approximate the feature mappings that correspond to certain kernels as they are used for example in support vector machines see ref svm The following feature functions perform non linear transformations of the input which can serve as a basis for linear classification or other algorithms currentmodule sklearn linear model The advantage of using approximate explicit feature maps compared to the kernel trick https en wikipedia org wiki Kernel trick which makes use of feature maps implicitly is that explicit mappings can be better suited for online learning and can significantly reduce the cost of learning with very large datasets Standard kernelized SVMs do not scale well to large datasets but using an approximate kernel map it is possible to use much more efficient linear SVMs In particular the combination of kernel map approximations with class SGDClassifier can make non linear learning on large datasets possible Since there has not been much empirical work using approximate embeddings it is advisable to compare results against exact kernel methods when possible seealso ref polynomial regression for an exact polynomial transformation currentmodule sklearn kernel approximation nystroem kernel approx Nystroem Method for Kernel Approximation The Nystroem method as implemented in class Nystroem is a general method for reduced rank approximations of kernels It achieves this by subsampling without replacement rows columns of the data on which the kernel is evaluated While the computational complexity of the exact method is math mathcal O n 3 text samples the complexity of the approximation is math mathcal O n 2 text components cdot n text samples where one can set math n text components ll n text samples without a significative decrease in performance WS2001 We can construct the eigendecomposition of the kernel matrix math K 
based on the features of the data and then split it into sampled and unsampled data points math K U Lambda U T begin bmatrix U 1 U 2 end bmatrix Lambda begin bmatrix U 1 U 2 end bmatrix T begin bmatrix U 1 Lambda U 1 T U 1 Lambda U 2 T U 2 Lambda U 1 T U 2 Lambda U 2 T end bmatrix equiv begin bmatrix K 11 K 12 K 21 K 22 end bmatrix where math U is orthonormal math Lambda is diagonal matrix of eigenvalues math U 1 is orthonormal matrix of samples that were chosen math U 2 is orthonormal matrix of samples that were not chosen Given that math U 1 Lambda U 1 T can be obtained by orthonormalization of the matrix math K 11 and math U 2 Lambda U 1 T can be evaluated as well as its transpose the only remaining term to elucidate is math U 2 Lambda U 2 T To do this we can express it in terms of the already evaluated matrices math begin align U 2 Lambda U 2 T left K 21 U 1 Lambda 1 right Lambda left K 21 U 1 Lambda 1 right T K 21 U 1 Lambda 1 Lambda Lambda 1 U 1 T K 21 T K 21 U 1 Lambda 1 U 1 T K 21 T K 21 K 11 1 K 21 T left K 21 K 11 frac12 right left K 21 K 11 frac12 right T end align During fit the class class Nystroem evaluates the basis math U 1 and computes the normalization constant math K 11 frac12 Later during transform the kernel matrix is determined between the basis given by the components attribute and the new data points X This matrix is then multiplied by the normalization matrix for the final result By default class Nystroem uses the rbf kernel but it can use any kernel function or a precomputed kernel matrix The number of samples used which is also the dimensionality of the features computed is given by the parameter n components rubric Examples See the example entitled ref sphx glr auto examples applications plot cyclical feature engineering py that shows an efficient machine learning pipeline that uses a class Nystroem kernel rbf kernel approx Radial Basis Function Kernel The class RBFSampler constructs an approximate mapping for the radial basis function 
kernel also known as Random Kitchen Sinks RR2007 This transformation can be used to explicitly model a kernel map prior to applying a linear algorithm for example a linear SVM from sklearn kernel approximation import RBFSampler from sklearn linear model import SGDClassifier X 0 0 1 1 1 0 0 1 y 0 0 1 1 rbf feature RBFSampler gamma 1 random state 1 X features rbf feature fit transform X clf SGDClassifier max iter 5 clf fit X features y SGDClassifier max iter 5 clf score X features y 1 0 The mapping relies on a Monte Carlo approximation to the kernel values The fit function performs the Monte Carlo sampling whereas the transform method performs the mapping of the data Because of the inherent randomness of the process results may vary between different calls to the fit function The fit function takes two arguments n components which is the target dimensionality of the feature transform and gamma the parameter of the RBF kernel A higher n components will result in a better approximation of the kernel and will yield results more similar to those produced by a kernel SVM Note that fitting the feature function does not actually depend on the data given to the fit function Only the dimensionality of the data is used Details on the method can be found in RR2007 For a given value of n components class RBFSampler is often less accurate as class Nystroem class RBFSampler is cheaper to compute though making use of larger feature spaces more efficient figure auto examples miscellaneous images sphx glr plot kernel approximation 002 png target auto examples miscellaneous plot kernel approximation html scale 50 align center Comparing an exact RBF kernel left with the approximation right rubric Examples ref sphx glr auto examples miscellaneous plot kernel approximation py additive chi kernel approx Additive Chi Squared Kernel The additive chi squared kernel is a kernel on histograms often used in computer vision The additive chi squared kernel as used here is given by math k x y sum 
i frac 2x iy i x i y i This is not exactly the same as func sklearn metrics pairwise additive chi2 kernel The authors of VZ2010 prefer the version above as it is always positive definite Since the kernel is additive it is possible to treat all components math x i separately for embedding This makes it possible to sample the Fourier transform in regular intervals instead of approximating using Monte Carlo sampling The class class AdditiveChi2Sampler implements this component wise deterministic sampling Each component is sampled math n times yielding math 2n 1 dimensions per input dimension the multiple of two stems from the real and complex part of the Fourier transform In the literature math n is usually chosen to be 1 or 2 transforming the dataset to size n samples 5 n features in the case of math n 2 The approximate feature map provided by class AdditiveChi2Sampler can be combined with the approximate feature map provided by class RBFSampler to yield an approximate feature map for the exponentiated chi squared kernel See the VZ2010 for details and VVZ2010 for combination with the class RBFSampler skewed chi kernel approx Skewed Chi Squared Kernel The skewed chi squared kernel is given by math k x y prod i frac 2 sqrt x i c sqrt y i c x i y i 2c It has properties that are similar to the exponentiated chi squared kernel often used in computer vision but allows for a simple Monte Carlo approximation of the feature map The usage of the class SkewedChi2Sampler is the same as the usage described above for the class RBFSampler The only difference is in the free parameter that is called math c For a motivation for this mapping and the mathematical details see LS2010 polynomial kernel approx Polynomial Kernel Approximation via Tensor Sketch The ref polynomial kernel polynomial kernel is a popular type of kernel function given by math k x y gamma x top y c 0 d where x y are the input vectors d is the kernel degree Intuitively the feature space of the polynomial kernel of 
degree `d` consists of all possible degree-`d` products among input features,
which enables learning algorithms using this kernel to account for
interactions between features.

The TensorSketch [PP2013]_ method, as implemented in
:class:`PolynomialCountSketch`, is a scalable, input data independent method
for polynomial kernel approximation. It is based on the concept of Count
sketch [WIKICS]_ [CCF2002]_, a dimensionality reduction technique similar to
feature hashing, which instead uses several independent hash functions.
TensorSketch obtains a Count Sketch of the outer product of two vectors (or a
vector with itself), which can be used as an approximation of the polynomial
kernel feature space. In particular, instead of explicitly computing the outer
product, TensorSketch computes the Count Sketch of the vectors and then uses
polynomial multiplication via the Fast Fourier Transform to compute the Count
Sketch of their outer product.

Conveniently, the training phase of TensorSketch simply consists of
initializing some random variables. It is thus independent of the input data,
i.e. it only depends on the number of input features, but not the data values.
In addition, this method can transform samples in
:math:`\mathcal{O}(n_{\text{samples}}(n_{\text{features}} + n_{\text{components}} \log(n_{\text{components}})))`
time, where :math:`n_{\text{components}}` is the desired output dimension,
determined by ``n_components``.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_kernel_approximation_plot_scalable_poly_kernels.py`

.. _tensor_sketch_kernel_approx:

Mathematical Details
====================

Kernel methods like support vector machines or kernelized PCA rely on a
property of reproducing kernel Hilbert spaces. For any positive definite
kernel function :math:`k` (a so-called Mercer kernel), it is guaranteed that
there exists a mapping :math:`\phi` into a Hilbert space :math:`\mathcal{H}`,
such that

.. math::

   k(x,y) = \langle \phi(x), \phi(y) \rangle

where :math:`\langle \cdot, \cdot \rangle` denotes the inner product in the
Hilbert space.

If an algorithm, such as a linear support vector machine or PCA, relies only
on the scalar product of data points :math:`x_i`, one may use the value of
:math:`k(x_i, x_j)`, which corresponds to applying the algorithm to the mapped
data points :math:`\phi(x_i)`. The advantage of using :math:`k` is that the
mapping :math:`\phi` never has to be calculated explicitly, allowing for
arbitrarily large features (even infinite).

One drawback of kernel methods is that it might be necessary to store many
kernel values :math:`k(x_i, x_j)` during optimization. If a kernelized
classifier is applied to new data :math:`y_j`, :math:`k(x_i, y_j)` needs to be
computed to make predictions, possibly for many different :math:`x_i` in the
training set.

The classes in this submodule make it possible to approximate the embedding
:math:`\phi`, thereby working explicitly with the representations
:math:`\phi(x_i)`, which obviates the need to apply the kernel or store
training examples.

.. rubric:: References

.. [WS2001] `"Using the Nyström method to speed up kernel machines"
   <https://papers.nips.cc/paper_files/paper/2000/hash/19de10adbaa1b2ee13f77f679fa1483a-Abstract.html>`_
   Williams, C.K.I.; Seeger, M. - 2001.

.. [RR2007] `"Random features for large-scale kernel machines"
   <https://papers.nips.cc/paper/2007/hash/013a006f03dbc5392effeb8f18fda755-Abstract.html>`_
   Rahimi, A. and Recht, B. - Advances in neural information processing, 2007.

.. [LS2010] `"Random Fourier approximations for skewed multiplicative histogram kernels"
   <https://www.researchgate.net/publication/221114584_Random_Fourier_Approximations_for_Skewed_Multiplicative_Histogram_Kernels>`_
   Li, F., Ionescu, C. and Sminchisescu, C. - Pattern Recognition, DAGM 2010,
   Lecture Notes in Computer Science.

.. [VZ2010] `"Efficient additive kernels via explicit feature maps"
   <https://www.robots.ox.ac.uk/~vgg/publications/2011/Vedaldi11/vedaldi11.pdf>`_
   Vedaldi, A. and Zisserman, A. - Computer Vision and Pattern Recognition, 2010.

.. [VVZ2010] `"Generalized RBF feature maps for Efficient Detection"
   <https://www.robots.ox.ac.uk/~vgg/publications/2010/Sreekanth10/sreekanth10.pdf>`_
   Vempati, S. and Vedaldi, A. and Zisserman, A. and Jawahar, C.V. - 2010.

.. [PP2013] :doi:`"Fast and scalable polynomial kernels via explicit feature maps"
   <10.1145/2487575.2487591>`
   Pham, N., & Pagh, R. - 2013.

.. [CCF2002] `"Finding frequent items in data streams"
   <https://www.cs.princeton.edu/courses/archive/spring04/cos598B/bib/CharikarCF.pdf>`_
   Charikar, M., Chen, K. & Farach-Colton - 2002.

.. [WIKICS] `"Wikipedia: Count sketch"
   <https://en.wikipedia.org/wiki/Count_sketch>`_
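As an illustrative sketch of how this approximation is typically used (the toy
data, target, and downstream classifier here are assumptions for the example,
not part of the original text), a linear model trained on the sketched
features stands in for a polynomial-kernel SVM:

```python
import numpy as np
from sklearn.kernel_approximation import PolynomialCountSketch
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 10))
# Illustrative target that depends on a degree-2 feature interaction.
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Approximate the degree-2 polynomial kernel feature map via TensorSketch.
ps = PolynomialCountSketch(degree=2, n_components=50, random_state=0)
X_features = ps.fit_transform(X)  # shape (100, 50)

# A linear model on the sketched features approximates a kernelized one.
clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=0).fit(X_features, y)
```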
====================================
Neural network models (unsupervised)
====================================
.. currentmodule:: sklearn.neural_network
.. _rbm:
Restricted Boltzmann machines
=============================
Restricted Boltzmann machines (RBM) are unsupervised nonlinear feature learners
based on a probabilistic model. The features extracted by an RBM or a hierarchy
of RBMs often give good results when fed into a linear classifier such as a
linear SVM or a perceptron.
The model makes assumptions regarding the distribution of inputs. At the moment,
scikit-learn only provides :class:`BernoulliRBM`, which assumes the inputs are
either binary values or values between 0 and 1, each encoding the probability
that the specific feature would be turned on.
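A minimal usage sketch (the toy binary data below is chosen purely for
illustration):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy binary inputs; values in [0, 1] are also accepted, interpreted as
# probabilities that the corresponding feature is turned on.
X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
rbm = BernoulliRBM(n_components=2, learning_rate=0.1, n_iter=20, random_state=0)
H = rbm.fit_transform(X)  # latent representation, shape (4, 2)
```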
The RBM tries to maximize the likelihood of the data using a particular
graphical model. The parameter learning algorithm used (:ref:`Stochastic
Maximum Likelihood <sml>`) prevents the representations from straying far
from the input data, which makes them capture interesting regularities, but
makes the model less useful for small datasets, and usually not useful for
density estimation.
The method gained popularity for initializing deep neural networks with the
weights of independent RBMs. This method is known as unsupervised pre-training.
.. figure:: ../auto_examples/neural_networks/images/sphx_glr_plot_rbm_logistic_classification_001.png
:target: ../auto_examples/neural_networks/plot_rbm_logistic_classification.html
:align: center
:scale: 100%
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_neural_networks_plot_rbm_logistic_classification.py`
Graphical model and parametrization
-----------------------------------
The graphical model of an RBM is a fully-connected bipartite graph.
.. image:: ../images/rbm_graph.png
:align: center
The nodes are random variables whose states depend on the state of the other
nodes they are connected to. The model is therefore parameterized by the
weights of the connections, as well as one intercept (bias) term for each
visible and hidden unit, omitted from the image for simplicity.
The energy function measures the quality of a joint assignment:
.. math::
E(\mathbf{v}, \mathbf{h}) = -\sum_i \sum_j w_{ij}v_ih_j - \sum_i b_iv_i
- \sum_j c_jh_j
In the formula above, :math:`\mathbf{b}` and :math:`\mathbf{c}` are the
intercept vectors for the visible and hidden layers, respectively. The
joint probability of the model is defined in terms of the energy:
.. math::
P(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z}
The word *restricted* refers to the bipartite structure of the model, which
prohibits direct interaction between hidden units, or between visible units.
This means that the following conditional independencies are assumed:
.. math::
h_i \bot h_j | \mathbf{v} \\
v_i \bot v_j | \mathbf{h}
The bipartite structure allows for the use of efficient block Gibbs sampling for
inference.
Bernoulli Restricted Boltzmann machines
---------------------------------------
In the :class:`BernoulliRBM`, all units are binary stochastic units. This
means that the input data should either be binary, or real-valued between 0 and
1 signifying the probability that the visible unit would turn on or off. This
is a good model for character recognition, where the interest is on which
pixels are active and which aren't. For images of natural scenes it no longer
fits because of background, depth and the tendency of neighbouring pixels to
take the same values.
The conditional probability distribution of each unit is given by the
logistic sigmoid activation function of the input it receives:
.. math::
   P(v_i=1|\mathbf{h}) = \sigma(\sum_j w_{ij}h_j + b_i) \\
   P(h_j=1|\mathbf{v}) = \sigma(\sum_i w_{ij}v_i + c_j)
where :math:`\sigma` is the logistic sigmoid function:
.. math::
\sigma(x) = \frac{1}{1 + e^{-x}}
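The conditional distributions above can be written out directly in NumPy. This
is an illustrative sketch with randomly initialized parameters, not
scikit-learn's internal implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # weights w_ij: 4 visible x 3 hidden units
b = np.zeros(4)              # visible biases b_i
c = np.zeros(3)              # hidden biases c_j

v = np.array([1.0, 0.0, 1.0, 1.0])       # one visible configuration
p_h = sigmoid(v @ W + c)                 # P(h_j = 1 | v)
h = (rng.random(3) < p_h).astype(float)  # sample the whole hidden layer at once
p_v = sigmoid(W @ h + b)                 # P(v_i = 1 | h)
```

Sampling an entire layer in one step, as above, is exactly the block Gibbs
sampling that the bipartite structure makes efficient.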
.. _sml:
Stochastic Maximum Likelihood learning
--------------------------------------
The training algorithm implemented in :class:`BernoulliRBM` is known as
Stochastic Maximum Likelihood (SML) or Persistent Contrastive Divergence
(PCD). Optimizing maximum likelihood directly is infeasible because of
the form of the data likelihood:
.. math::
\log P(v) = \log \sum_h e^{-E(v, h)} - \log \sum_{x, y} e^{-E(x, y)}
For simplicity the equation above is written for a single training example.
The gradient with respect to the weights is formed of two terms corresponding to
the ones above. They are usually known as the positive gradient and the negative
gradient, because of their respective signs. In this implementation, the
gradients are estimated over mini-batches of samples.
In maximizing the log-likelihood, the positive gradient makes the model prefer
hidden states that are compatible with the observed training data. Because of
the bipartite structure of RBMs, it can be computed efficiently. The
negative gradient, however, is intractable. Its goal is to raise the energy of
joint states that the model prefers, therefore making it stay true to the data.
It can be approximated by Markov chain Monte Carlo using block Gibbs sampling by
iteratively sampling each of :math:`v` and :math:`h` given the other, until the
chain mixes. Samples generated in this way are sometimes referred to as fantasy
particles. This is inefficient and it is difficult to determine whether the
Markov chain mixes.
The Contrastive Divergence method suggests stopping the chain after a small
number of iterations, :math:`k`, often just 1. This method is fast and has
low variance, but the samples are far from the model distribution.
Persistent Contrastive Divergence addresses this. Instead of starting a new
chain each time the gradient is needed, and performing only one Gibbs sampling
step, in PCD we keep a number of chains (fantasy particles) that are updated
:math:`k` Gibbs steps after each weight update. This allows the particles to
explore the space more thoroughly.
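A compact sketch of one PCD weight update follows; bias terms and many
practical details are omitted for brevity, and the helper name `pcd_update` is
illustrative, not a scikit-learn API:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pcd_update(W, v_data, v_chain, rng, k=1, lr=0.05):
    # Positive phase: hidden activations for the observed mini-batch.
    h_data = sigmoid(v_data @ W)
    # Negative phase: advance the persistent fantasy particles by k Gibbs steps.
    for _ in range(k):
        p_h = sigmoid(v_chain @ W)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        p_v = sigmoid(h @ W.T)
        v_chain = (rng.random(p_v.shape) < p_v).astype(float)
    h_chain = sigmoid(v_chain @ W)
    # Positive gradient minus negative gradient, averaged over the batch.
    grad = (v_data.T @ h_data) / len(v_data) - (v_chain.T @ h_chain) / len(v_chain)
    return W + lr * grad, v_chain  # the chain persists across updates

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(6, 3))
v_data = rng.integers(0, 2, size=(8, 6)).astype(float)
v_chain = rng.integers(0, 2, size=(8, 6)).astype(float)
for _ in range(10):
    W, v_chain = pcd_update(W, v_data, v_chain, rng, k=1)
```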
.. rubric:: References
* `"A fast learning algorithm for deep belief nets"
<https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf>`_,
G. Hinton, S. Osindero, Y.-W. Teh, 2006
* `"Training Restricted Boltzmann Machines using Approximations to
the Likelihood Gradient"
<https://www.cs.toronto.edu/~tijmen/pcd/pcd.pdf>`_,
T. Tieleman, 2008 | scikit-learn | neural networks unsupervised Neural network models unsupervised currentmodule sklearn neural network rbm Restricted Boltzmann machines Restricted Boltzmann machines RBM are unsupervised nonlinear feature learners based on a probabilistic model The features extracted by an RBM or a hierarchy of RBMs often give good results when fed into a linear classifier such as a linear SVM or a perceptron The model makes assumptions regarding the distribution of inputs At the moment scikit learn only provides class BernoulliRBM which assumes the inputs are either binary values or values between 0 and 1 each encoding the probability that the specific feature would be turned on The RBM tries to maximize the likelihood of the data using a particular graphical model The parameter learning algorithm used ref Stochastic Maximum Likelihood sml prevents the representations from straying far from the input data which makes them capture interesting regularities but makes the model less useful for small datasets and usually not useful for density estimation The method gained popularity for initializing deep neural networks with the weights of independent RBMs This method is known as unsupervised pre training figure auto examples neural networks images sphx glr plot rbm logistic classification 001 png target auto examples neural networks plot rbm logistic classification html align center scale 100 rubric Examples ref sphx glr auto examples neural networks plot rbm logistic classification py Graphical model and parametrization The graphical model of an RBM is a fully connected bipartite graph image images rbm graph png align center The nodes are random variables whose states depend on the state of the other nodes they are connected to The model is therefore parameterized by the weights of the connections as well as one intercept bias term for each visible and hidden unit omitted from the image for simplicity The energy function measures the quality of a 
==================
Density Estimation
==================
.. sectionauthor:: Jake Vanderplas <[email protected]>
Density estimation walks the line between unsupervised learning, feature
engineering, and data modeling. Some of the most popular and useful
density estimation techniques are mixture models such as
Gaussian Mixtures (:class:`~sklearn.mixture.GaussianMixture`), and
neighbor-based approaches such as the kernel density estimate
(:class:`~sklearn.neighbors.KernelDensity`).
Gaussian Mixtures are discussed more fully in the context of
:ref:`clustering <clustering>`, because the technique is also useful as
an unsupervised clustering scheme.
Density estimation is a very simple concept, and most people are already
familiar with one common density estimation technique: the histogram.
Density Estimation: Histograms
==============================
A histogram is a simple visualization of data where bins are defined, and the
number of data points within each bin is tallied. An example of a histogram
can be seen in the upper-left panel of the following figure:
.. |hist_to_kde| image:: ../auto_examples/neighbors/images/sphx_glr_plot_kde_1d_001.png
:target: ../auto_examples/neighbors/plot_kde_1d.html
:scale: 80
.. centered:: |hist_to_kde|
A major problem with histograms, however, is that the choice of binning can
have a disproportionate effect on the resulting visualization. Consider the
upper-right panel of the above figure. It shows a histogram over the same
data, with the bins shifted right. The results of the two visualizations look
entirely different, and might lead to different interpretations of the data.
Intuitively, one can also think of a histogram as a stack of blocks, one block
per point. By stacking the blocks in the appropriate grid space, we recover
the histogram. But what if, instead of stacking the blocks on a regular grid,
we center each block on the point it represents, and sum the total height at
each location? This idea leads to the lower-left visualization. It is perhaps
not as clean as a histogram, but the fact that the data drive the block
locations means that it is a much better representation of the underlying
data.
This visualization is an example of a *kernel density estimation*, in this case
with a top-hat kernel (i.e. a square block at each point). We can recover a
smoother distribution by using a smoother kernel. The bottom-right plot shows
a Gaussian kernel density estimate, in which each point contributes a Gaussian
curve to the total. The result is a smooth density estimate which is derived
from the data, and functions as a powerful non-parametric model of the
distribution of points.
.. _kernel_density:
Kernel Density Estimation
=========================
Kernel density estimation in scikit-learn is implemented in the
:class:`~sklearn.neighbors.KernelDensity` estimator, which uses the
Ball Tree or KD Tree for efficient queries (see :ref:`neighbors` for
a discussion of these). Though the above example
uses a 1D data set for simplicity, kernel density estimation can be
performed in any number of dimensions, though in practice the curse of
dimensionality causes its performance to degrade in high dimensions.
In the following figure, 100 points are drawn from a bimodal distribution,
and the kernel density estimates are shown for three choices of kernels:
.. |kde_1d_distribution| image:: ../auto_examples/neighbors/images/sphx_glr_plot_kde_1d_003.png
:target: ../auto_examples/neighbors/plot_kde_1d.html
:scale: 80
.. centered:: |kde_1d_distribution|
It's clear how the kernel shape affects the smoothness of the resulting
distribution. The scikit-learn kernel density estimator can be used as
follows:
>>> from sklearn.neighbors import KernelDensity
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)
>>> kde.score_samples(X)
array([-0.41075698, -0.41075698, -0.41076071, -0.41075698, -0.41075698,
-0.41076071])
Here we have used ``kernel='gaussian'``, as seen above.
Mathematically, a kernel is a positive function :math:`K(x;h)`
which is controlled by the bandwidth parameter :math:`h`.
Given this kernel form, the density estimate at a point :math:`y` within
a group of points :math:`x_i; i=1\cdots N` is given by:
.. math::
\rho_K(y) = \sum_{i=1}^{N} K(y - x_i; h)
The bandwidth here acts as a smoothing parameter, controlling the tradeoff
between bias and variance in the result. A large bandwidth leads to a very
smooth (i.e. high-bias) density distribution. A small bandwidth leads
to an unsmooth (i.e. high-variance) density distribution.
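The density formula above can be checked against
:class:`~sklearn.neighbors.KernelDensity` directly. Note that scikit-learn
returns the log of the *normalized* density, so the 1D Gaussian kernel in this
sketch includes its normalizing constant:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

X = np.array([[-1.0], [0.0], [1.5]])
h = 0.5
grid = np.array([[0.2], [1.0]])

# scikit-learn's estimate: log of the normalized density at the grid points.
kde = KernelDensity(kernel="gaussian", bandwidth=h).fit(X)
log_dens = kde.score_samples(grid)

# Same estimate from the sum-over-points formula, with the 1D Gaussian
# kernel normalized so that the density integrates to one.
diff = grid - X.T  # shape (n_grid, n_train)
dens = np.exp(-diff**2 / (2 * h**2)).sum(axis=1) / (len(X) * h * np.sqrt(2 * np.pi))
```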
The parameter `bandwidth` controls this smoothing. One can either set this
parameter manually or use Scott's or Silverman's estimation methods.
:class:`~sklearn.neighbors.KernelDensity` implements several common kernel
forms, which are shown in the following figure:
.. |kde_kernels| image:: ../auto_examples/neighbors/images/sphx_glr_plot_kde_1d_002.png
:target: ../auto_examples/neighbors/plot_kde_1d.html
:scale: 80
.. centered:: |kde_kernels|
.. dropdown:: Kernels' mathematical expressions
The form of these kernels is as follows:
* Gaussian kernel (``kernel = 'gaussian'``)
:math:`K(x; h) \propto \exp(- \frac{x^2}{2h^2} )`
* Tophat kernel (``kernel = 'tophat'``)
:math:`K(x; h) \propto 1` if :math:`x < h`
* Epanechnikov kernel (``kernel = 'epanechnikov'``)
:math:`K(x; h) \propto 1 - \frac{x^2}{h^2}`
* Exponential kernel (``kernel = 'exponential'``)
:math:`K(x; h) \propto \exp(-x/h)`
* Linear kernel (``kernel = 'linear'``)
:math:`K(x; h) \propto 1 - x/h` if :math:`x < h`
* Cosine kernel (``kernel = 'cosine'``)
:math:`K(x; h) \propto \cos(\frac{\pi x}{2h})` if :math:`x < h`
The kernel density estimator can be used with any of the valid distance
metrics (see :class:`~sklearn.metrics.DistanceMetric` for a list of
available metrics), though the results are properly normalized only
for the Euclidean metric. One particularly useful metric is the
`Haversine distance <https://en.wikipedia.org/wiki/Haversine_formula>`_
which measures the angular distance between points on a sphere. Here
is an example of using a kernel density estimate for a visualization
of geospatial data, in this case the distribution of observations of two
different species on the South American continent:
.. |species_kde| image:: ../auto_examples/neighbors/images/sphx_glr_plot_species_kde_001.png
:target: ../auto_examples/neighbors/plot_species_kde.html
:scale: 80
.. centered:: |species_kde|
One other useful application of kernel density estimation is to learn a
non-parametric generative model of a dataset in order to efficiently
draw new samples from this generative model.
Here is an example of using this process to
create a new set of hand-written digits, using a Gaussian kernel learned
on a PCA projection of the data:
.. |digits_kde| image:: ../auto_examples/neighbors/images/sphx_glr_plot_digits_kde_sampling_001.png
:target: ../auto_examples/neighbors/plot_digits_kde_sampling.html
:scale: 80
.. centered:: |digits_kde|
The "new" data consists of linear combinations of the input data, with weights
probabilistically drawn given the KDE model.
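A sketch of this sampling workflow on toy 2D data (note that ``sample`` is
only available for the Gaussian and tophat kernels):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# Fit a KDE model, then draw new points from the estimated density.
kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(X)
new_points = kde.sample(n_samples=10, random_state=0)  # shape (10, 2)
```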
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_neighbors_plot_kde_1d.py`: computation of simple kernel
density estimates in one dimension.
* :ref:`sphx_glr_auto_examples_neighbors_plot_digits_kde_sampling.py`: an example of using
Kernel Density estimation to learn a generative model of the hand-written
digits data, and drawing new samples from this model.
* :ref:`sphx_glr_auto_examples_neighbors_plot_species_kde.py`: an example of Kernel Density
estimation using the Haversine distance metric to visualize geospatial data | scikit-learn | density estimation Density Estimation sectionauthor Jake Vanderplas vanderplas astro washington edu Density estimation walks the line between unsupervised learning feature engineering and data modeling Some of the most popular and useful density estimation techniques are mixture models such as Gaussian Mixtures class sklearn mixture GaussianMixture and neighbor based approaches such as the kernel density estimate class sklearn neighbors KernelDensity Gaussian Mixtures are discussed more fully in the context of ref clustering clustering because the technique is also useful as an unsupervised clustering scheme Density estimation is a very simple concept and most people are already familiar with one common density estimation technique the histogram Density Estimation Histograms A histogram is a simple visualization of data where bins are defined and the number of data points within each bin is tallied An example of a histogram can be seen in the upper left panel of the following figure hist to kde image auto examples neighbors images sphx glr plot kde 1d 001 png target auto examples neighbors plot kde 1d html scale 80 centered hist to kde A major problem with histograms however is that the choice of binning can have a disproportionate effect on the resulting visualization Consider the upper right panel of the above figure It shows a histogram over the same data with the bins shifted right The results of the two visualizations look entirely different and might lead to different interpretations of the data Intuitively one can also think of a histogram as a stack of blocks one block per point By stacking the blocks in the appropriate grid space we recover the histogram But what if instead of stacking the blocks on a regular grid we center each block on the point it represents and sum the total height at each location This idea leads to the lower left visualization It is perhaps 
==================
Gaussian Processes
==================
.. currentmodule:: sklearn.gaussian_process
**Gaussian Processes (GP)** are a nonparametric supervised learning method used
to solve *regression* and *probabilistic classification* problems.
The advantages of Gaussian processes are:
- The prediction interpolates the observations (at least for regular
kernels).
- The prediction is probabilistic (Gaussian) so that one can compute
empirical confidence intervals and decide based on those if one should
refit (online fitting, adaptive fitting) the prediction in some
region of interest.
- Versatile: different :ref:`kernels
<gp_kernels>` can be specified. Common kernels are provided, but
it is also possible to specify custom kernels.
The disadvantages of Gaussian processes include:
- Our implementation is not sparse, i.e., it uses the whole samples/features
  information to perform the prediction.
- They lose efficiency in high dimensional spaces -- namely when the number
  of features exceeds a few dozen.
.. _gpr:
Gaussian Process Regression (GPR)
=================================
.. currentmodule:: sklearn.gaussian_process
The :class:`GaussianProcessRegressor` implements Gaussian processes (GP) for
regression purposes. For this, the prior of the GP needs to be specified. GP
will combine this prior and the likelihood function based on training samples.
It provides a probabilistic approach to prediction, giving the mean and
standard deviation as output when predicting.
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_noisy_targets_002.png
:target: ../auto_examples/gaussian_process/plot_gpr_noisy_targets.html
:align: center
The prior mean is assumed to be constant and zero (for `normalize_y=False`) or
the training data's mean (for `normalize_y=True`). The prior's covariance is
specified by passing a :ref:`kernel <gp_kernels>` object. The hyperparameters
of the kernel are optimized when fitting the :class:`GaussianProcessRegressor`
by maximizing the log-marginal-likelihood (LML) based on the passed
`optimizer`. As the LML may have multiple local optima, the optimizer can be
started repeatedly by specifying `n_restarts_optimizer`. The first run is
always conducted starting from the initial hyperparameter values of the kernel;
subsequent runs are conducted from hyperparameter values that have been chosen
randomly from the range of allowed values. If the initial hyperparameters
should be kept fixed, `None` can be passed as optimizer.
The noise level in the targets can be specified by passing it via the parameter
`alpha`, either globally as a scalar or per datapoint. Note that a moderate
noise level can also be helpful for dealing with numeric instabilities during
fitting as it is effectively implemented as Tikhonov regularization, i.e., by
adding it to the diagonal of the kernel matrix. An alternative to specifying
the noise level explicitly is to include a
:class:`~sklearn.gaussian_process.kernels.WhiteKernel` component into the
kernel, which can estimate the global noise level from the data (see example
below). The figure below shows the effect of a noisy target handled by setting
the parameter `alpha`.
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_noisy_targets_003.png
:target: ../auto_examples/gaussian_process/plot_gpr_noisy_targets.html
:align: center
The implementation is based on Algorithm 2.1 of [RW2006]_. In addition to
the API of standard scikit-learn estimators, :class:`GaussianProcessRegressor`:
* allows prediction without prior fitting (based on the GP prior)
* provides an additional method ``sample_y(X)``, which evaluates samples
drawn from the GPR (prior or posterior) at given inputs
* exposes a method ``log_marginal_likelihood(theta)``, which can be used
externally for other ways of selecting hyperparameters, e.g., via
Markov chain Monte Carlo.
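A hedged sketch of the workflow just described; the kernel choice, toy data,
and hyperparameter values here are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(20, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=20)

# The WhiteKernel component lets the GP estimate the noise level from the
# data; alternatively the noise could be specified explicitly via alpha.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=2,
                               random_state=0).fit(X, y)

X_test = np.linspace(0, 5, 10).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)  # probabilistic prediction
draws = gpr.sample_y(X_test, n_samples=3)         # samples drawn from the GPR
```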
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_noisy_targets.py`
* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_noisy.py`
* :ref:`sphx_glr_auto_examples_gaussian_process_plot_compare_gpr_krr.py`
* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_co2.py`
.. _gpc:
Gaussian Process Classification (GPC)
=====================================
.. currentmodule:: sklearn.gaussian_process
The :class:`GaussianProcessClassifier` implements Gaussian processes (GP) for
classification purposes, more specifically for probabilistic classification,
where test predictions take the form of class probabilities.
GaussianProcessClassifier places a GP prior on a latent function :math:`f`,
which is then squashed through a link function to obtain the probabilistic
classification. The latent function :math:`f` is a so-called nuisance function,
whose values are not observed and are not relevant by themselves.
Its purpose is to allow a convenient formulation of the model, and :math:`f`
is removed (integrated out) during prediction. GaussianProcessClassifier
implements the logistic link function, for which the integral cannot be
computed analytically but is easily approximated in the binary case.
In contrast to the regression setting, the posterior of the latent function
:math:`f` is not Gaussian even for a GP prior since a Gaussian likelihood is
inappropriate for discrete class labels. Rather, a non-Gaussian likelihood
corresponding to the logistic link function (logit) is used.
GaussianProcessClassifier approximates the non-Gaussian posterior with a
Gaussian based on the Laplace approximation. More details can be found in
Chapter 3 of [RW2006]_.
The GP prior mean is assumed to be zero. The prior's
covariance is specified by passing a :ref:`kernel <gp_kernels>` object. The
hyperparameters of the kernel are optimized during fitting of
GaussianProcessClassifier by maximizing the log-marginal-likelihood (LML) based
on the passed ``optimizer``. As the LML may have multiple local optima, the
optimizer can be started repeatedly by specifying ``n_restarts_optimizer``. The
first run is always conducted starting from the initial hyperparameter values
of the kernel; subsequent runs are conducted from hyperparameter values
that have been chosen randomly from the range of allowed values.
If the initial hyperparameters should be kept fixed, `None` can be passed as
optimizer.
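As a sketch (synthetic data; the kernel and number of restarts are
illustrative choices), restarting the optimizer looks like:

```python
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X, y = make_classification(n_samples=60, n_features=2, n_redundant=0,
                           random_state=0)

# The first optimizer run starts from the kernel's initial hyperparameters;
# n_restarts_optimizer=2 adds two runs from random values within the bounds.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                n_restarts_optimizer=2, random_state=0)
gpc.fit(X, y)
proba = gpc.predict_proba(X)
```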
:class:`GaussianProcessClassifier` supports multi-class classification
by performing either one-versus-rest or one-versus-one based training and
prediction. In one-versus-rest, one binary Gaussian process classifier is
fitted for each class, which is trained to separate this class from the rest.
In "one_vs_one", one binary Gaussian process classifier is fitted for each pair
of classes, which is trained to separate these two classes. The predictions of
these binary predictors are combined into multi-class predictions. See the
section on :ref:`multi-class classification <multiclass>` for more details.
In the case of Gaussian process classification, "one_vs_one" might be
computationally cheaper since it has to solve many problems involving only a
subset of the whole training set rather than fewer problems on the whole
dataset. Since Gaussian process classification scales cubically with the size
of the dataset, this might be considerably faster. However, note that
"one_vs_one" does not support predicting probability estimates but only plain
predictions. Moreover, note that :class:`GaussianProcessClassifier` does not
(yet) implement a true multi-class Laplace approximation internally, but
as discussed above is based on solving several binary classification tasks
internally, which are combined using one-versus-rest or one-versus-one.
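A sketch of both modes, using the iris dataset as an arbitrary multi-class
example:

```python
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X, y = load_iris(return_X_y=True)

# one_vs_rest (the default): one binary GP per class, supports predict_proba
ovr = GaussianProcessClassifier(kernel=RBF(), multi_class="one_vs_rest")
ovr.fit(X, y)
proba = ovr.predict_proba(X)

# one_vs_one: one binary GP per pair of classes, hard predictions only
ovo = GaussianProcessClassifier(kernel=RBF(), multi_class="one_vs_one")
ovo.fit(X, y)
pred = ovo.predict(X)
```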
GPC examples
============
Probabilistic predictions with GPC
----------------------------------
This example illustrates the predicted probability of GPC for an RBF kernel
with different choices of the hyperparameters. The first figure shows the
predicted probability of GPC with arbitrarily chosen hyperparameters and with
the hyperparameters corresponding to the maximum log-marginal-likelihood (LML).
While the hyperparameters chosen by optimizing LML have a considerably larger
LML, they perform slightly worse according to the log-loss on test data. The
figure shows that this is because they exhibit a steep change of the class
probabilities at the class boundaries (which is good) but have predicted
probabilities close to 0.5 far away from the class boundaries (which is bad).
This undesirable effect is caused by the Laplace approximation used
internally by GPC.
The second figure shows the log-marginal-likelihood for different choices of
the kernel's hyperparameters, highlighting the two choices of the
hyperparameters used in the first figure by black dots.
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_001.png
:target: ../auto_examples/gaussian_process/plot_gpc.html
:align: center
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_002.png
:target: ../auto_examples/gaussian_process/plot_gpc.html
:align: center
Illustration of GPC on the XOR dataset
--------------------------------------
.. currentmodule:: sklearn.gaussian_process.kernels
This example illustrates GPC on XOR data. It compares a stationary, isotropic
kernel (:class:`RBF`) and a non-stationary kernel (:class:`DotProduct`). On
this particular dataset, the :class:`DotProduct` kernel obtains considerably
better results because the class-boundaries are linear and coincide with the
coordinate axes. In practice, however, stationary kernels such as :class:`RBF`
often obtain better results.
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_xor_001.png
:target: ../auto_examples/gaussian_process/plot_gpc_xor.html
:align: center
.. currentmodule:: sklearn.gaussian_process
Gaussian process classification (GPC) on iris dataset
-----------------------------------------------------
This example illustrates the predicted probability of GPC for an isotropic
and anisotropic RBF kernel on a two-dimensional version of the iris dataset.
This illustrates the applicability of GPC to non-binary classification.
The anisotropic RBF kernel obtains slightly higher log-marginal-likelihood by
assigning different length-scales to the two feature dimensions.
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_iris_001.png
:target: ../auto_examples/gaussian_process/plot_gpc_iris.html
:align: center
.. _gp_kernels:
Kernels for Gaussian Processes
==============================
.. currentmodule:: sklearn.gaussian_process.kernels
Kernels (also called "covariance functions" in the context of GPs) are a crucial
ingredient of GPs which determine the shape of prior and posterior of the GP.
They encode the assumptions on the function being learned by defining the "similarity"
of two datapoints combined with the assumption that similar datapoints should
have similar target values. Two categories of kernels can be distinguished:
stationary kernels depend only on the distance of two datapoints and not on their
absolute values :math:`k(x_i, x_j)= k(d(x_i, x_j))` and are thus invariant to
translations in the input space, while non-stationary kernels
depend also on the specific values of the datapoints. Stationary kernels can further
be subdivided into isotropic and anisotropic kernels, where isotropic kernels are
also invariant to rotations in the input space. For more details, we refer to
Chapter 4 of [RW2006]_. For guidance on how to best combine different kernels,
we refer to [Duv2014]_.
.. dropdown:: Gaussian Process Kernel API
The main usage of a :class:`Kernel` is to compute the GP's covariance between
datapoints. For this, the method ``__call__`` of the kernel can be called. This
method can either be used to compute the "auto-covariance" of all pairs of
datapoints in a 2d array X, or the "cross-covariance" of all combinations
of datapoints of a 2d array X with datapoints in a 2d array Y. The following
identity holds true for all kernels k (except for the :class:`WhiteKernel`):
``k(X) == k(X, Y=X)``
If only the diagonal of the auto-covariance is being used, the method ``diag()``
of a kernel can be called, which is more computationally efficient than the
equivalent call to ``__call__``: ``np.diag(k(X, X)) == k.diag(X)``
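For instance, a sketch with arbitrary inputs:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

k = RBF(length_scale=1.0)
X = np.random.RandomState(0).randn(5, 2)
Y = np.random.RandomState(1).randn(3, 2)

K_auto = k(X)      # (5, 5) auto-covariance of X
K_cross = k(X, Y)  # (5, 3) cross-covariance of X and Y

# the identity k(X) == k(X, Y=X) (all kernels except WhiteKernel)
identity_holds = np.allclose(K_auto, k(X, X))

# diag() avoids building the full matrix
diag_matches = np.allclose(np.diag(k(X)), k.diag(X))
```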
Kernels are parameterized by a vector :math:`\theta` of hyperparameters. These
hyperparameters can for instance control length-scales or periodicity of a
kernel (see below). All kernels support computing analytic gradients
of the kernel's auto-covariance with respect to :math:`\log(\theta)` via setting
``eval_gradient=True`` in the ``__call__`` method.
That is, a ``(len(X), len(X), len(theta))`` array is returned where the entry
``[i, j, l]`` contains :math:`\frac{\partial k_\theta(x_i, x_j)}{\partial \log(\theta_l)}`.
This gradient is used by the Gaussian process (both regressor and classifier)
in computing the gradient of the log-marginal-likelihood, which in turn is used
to determine the value of :math:`\theta`, which maximizes the log-marginal-likelihood,
via gradient ascent. For each hyperparameter, the initial value and the
bounds need to be specified when creating an instance of the kernel. The
current value of :math:`\theta` can be retrieved and set via the property
``theta`` of the kernel object. Moreover, the bounds of the hyperparameters can be
accessed by the property ``bounds`` of the kernel. Note that both properties
(theta and bounds) return log-transformed values of the internally used values
since those are typically more amenable to gradient-based optimization.
The specification of each hyperparameter is stored in the form of an instance of
:class:`Hyperparameter` in the respective kernel. Note that a kernel using a
hyperparameter with name "x" must have the attributes ``self.x`` and ``self.x_bounds``.
The abstract base class for all kernels is :class:`Kernel`. Kernel implements a
similar interface as :class:`~sklearn.base.BaseEstimator`, providing the
methods ``get_params()``, ``set_params()``, and ``clone()``. This allows
setting kernel values also via meta-estimators such as
:class:`~sklearn.pipeline.Pipeline` or
:class:`~sklearn.model_selection.GridSearchCV`. Note that due to the nested
structure of kernels (by applying kernel operators, see below), the names of
kernel parameters might become relatively complicated. In general, for a binary
kernel operator, parameters of the left operand are prefixed with ``k1__`` and
parameters of the right operand with ``k2__``. An additional convenience method
is ``clone_with_theta(theta)``, which returns a cloned version of the kernel
but with the hyperparameters set to ``theta``. An illustrative example:
>>> from sklearn.gaussian_process.kernels import ConstantKernel, RBF
>>> kernel = ConstantKernel(constant_value=1.0, constant_value_bounds=(0.0, 10.0)) * RBF(length_scale=0.5, length_scale_bounds=(0.0, 10.0)) + RBF(length_scale=2.0, length_scale_bounds=(0.0, 10.0))
>>> for hyperparameter in kernel.hyperparameters: print(hyperparameter)
Hyperparameter(name='k1__k1__constant_value', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
Hyperparameter(name='k1__k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
Hyperparameter(name='k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
>>> params = kernel.get_params()
>>> for key in sorted(params): print("%s : %s" % (key, params[key]))
k1 : 1**2 * RBF(length_scale=0.5)
k1__k1 : 1**2
k1__k1__constant_value : 1.0
k1__k1__constant_value_bounds : (0.0, 10.0)
k1__k2 : RBF(length_scale=0.5)
k1__k2__length_scale : 0.5
k1__k2__length_scale_bounds : (0.0, 10.0)
k2 : RBF(length_scale=2)
k2__length_scale : 2.0
k2__length_scale_bounds : (0.0, 10.0)
>>> print(kernel.theta) # Note: log-transformed
[ 0. -0.69314718 0.69314718]
>>> print(kernel.bounds) # Note: log-transformed
[[ -inf 2.30258509]
[ -inf 2.30258509]
[ -inf 2.30258509]]
All Gaussian process kernels are interoperable with :mod:`sklearn.metrics.pairwise`
and vice versa: instances of subclasses of :class:`Kernel` can be passed as
``metric`` to ``pairwise_kernels`` from :mod:`sklearn.metrics.pairwise`. Moreover,
kernel functions from pairwise can be used as GP kernels by using the wrapper
class :class:`PairwiseKernel`. The only caveat is that the gradient with respect
to the hyperparameters is computed numerically rather than analytically, and all
those kernels support only isotropic distances. The parameter ``gamma`` is a
hyperparameter and may be optimized. The other kernel parameters are set
directly at initialization and are kept fixed.
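Both directions of this interoperability can be sketched as follows (arbitrary
inputs; the ``laplacian`` metric is just one possible choice):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, PairwiseKernel
from sklearn.metrics.pairwise import pairwise_kernels

X = np.random.RandomState(0).randn(6, 2)

# A GP kernel instance used as the metric of pairwise_kernels
K_gp_as_pairwise = pairwise_kernels(X, metric=RBF(length_scale=1.0))

# A pairwise kernel function wrapped as a GP kernel; gamma is the
# (optimizable) hyperparameter, other parameters stay fixed
wrapped = PairwiseKernel(metric="laplacian", gamma=1.0)
K_pairwise_as_gp = wrapped(X)
```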
Basic kernels
-------------
The :class:`ConstantKernel` kernel can be used as part of a :class:`Product`
kernel where it scales the magnitude of the other factor (kernel) or as part
of a :class:`Sum` kernel, where it modifies the mean of the Gaussian process.
It depends on a parameter :math:`constant\_value`. It is defined as:
.. math::
k(x_i, x_j) = constant\_value \;\forall\; x_i, x_j
The main use-case of the :class:`WhiteKernel` kernel is as part of a
sum-kernel where it explains the noise-component of the signal. Tuning its
parameter :math:`noise\_level` corresponds to estimating the noise-level.
It is defined as:
.. math::
k(x_i, x_j) = noise\_level \text{ if } x_i == x_j \text{ else } 0
Kernel operators
----------------
Kernel operators take one or two base kernels and combine them into a new
kernel. The :class:`Sum` kernel takes two kernels :math:`k_1` and :math:`k_2`
and combines them via :math:`k_{sum}(X, Y) = k_1(X, Y) + k_2(X, Y)`.
The :class:`Product` kernel takes two kernels :math:`k_1` and :math:`k_2`
and combines them via :math:`k_{product}(X, Y) = k_1(X, Y) * k_2(X, Y)`.
The :class:`Exponentiation` kernel takes one base kernel and a scalar parameter
:math:`p` and combines them via
:math:`k_{exp}(X, Y) = k(X, Y)^p`.
Note that magic methods ``__add__``, ``__mul__`` and ``__pow__`` are
overridden on the Kernel objects, so one can use e.g. ``RBF() + RBF()`` as
a shortcut for ``Sum(RBF(), RBF())``.
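A quick numerical check of these equivalences (a sketch):

```python
import numpy as np
from sklearn.gaussian_process.kernels import (
    RBF, Sum, Product, Exponentiation)

X = np.array([[0.0], [1.0], [2.0]])
k1, k2 = RBF(length_scale=1.0), RBF(length_scale=2.0)

sum_ok = np.allclose((k1 + k2)(X), Sum(k1, k2)(X))
prod_ok = np.allclose((k1 * k2)(X), Product(k1, k2)(X))
pow_ok = np.allclose((k1 ** 2)(X), Exponentiation(k1, 2)(X))
```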
Radial basis function (RBF) kernel
----------------------------------
The :class:`RBF` kernel is a stationary kernel. It is also known as the "squared
exponential" kernel. It is parameterized by a length-scale parameter :math:`l>0`, which
can either be a scalar (isotropic variant of the kernel) or a vector with the same
number of dimensions as the inputs :math:`x` (anisotropic variant of the kernel).
The kernel is given by:
.. math::
k(x_i, x_j) = \text{exp}\left(- \frac{d(x_i, x_j)^2}{2l^2} \right)
where :math:`d(\cdot, \cdot)` is the Euclidean distance.
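The formula can be verified numerically against the implementation (a sketch
with arbitrary inputs):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.gaussian_process.kernels import RBF

X = np.random.RandomState(0).randn(5, 2)
l = 1.3

K = RBF(length_scale=l)(X)
d = cdist(X, X)  # pairwise Euclidean distances
matches = np.allclose(K, np.exp(-d**2 / (2 * l**2)))
```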
This kernel is infinitely differentiable, which implies that GPs with this
kernel as covariance function have mean square derivatives of all orders, and are thus
very smooth. The prior and posterior of a GP resulting from an RBF kernel are shown in
the following figure:
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_001.png
:target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
:align: center
Matérn kernel
-------------
The :class:`Matern` kernel is a stationary kernel and a generalization of the
:class:`RBF` kernel. It has an additional parameter :math:`\nu` which controls
the smoothness of the resulting function. It is parameterized by a length-scale parameter :math:`l>0`, which can either be a scalar (isotropic variant of the kernel) or a vector with the same number of dimensions as the inputs :math:`x` (anisotropic variant of the kernel).
.. dropdown:: Mathematical implementation of Matérn kernel
The kernel is given by:
.. math::
k(x_i, x_j) = \frac{1}{\Gamma(\nu)2^{\nu-1}}\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i , x_j )\Bigg)^\nu K_\nu\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i , x_j )\Bigg),
where :math:`d(\cdot,\cdot)` is the Euclidean distance, :math:`K_\nu(\cdot)` is a modified Bessel function and :math:`\Gamma(\cdot)` is the gamma function.
As :math:`\nu\rightarrow\infty`, the Matérn kernel converges to the RBF kernel.
When :math:`\nu = 1/2`, the Matérn kernel becomes identical to the absolute
exponential kernel, i.e.,
.. math::
k(x_i, x_j) = \exp \Bigg(- \frac{1}{l} d(x_i , x_j ) \Bigg) \quad \quad \nu= \tfrac{1}{2}
In particular, :math:`\nu = 3/2`:
.. math::
k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{3}}{l} d(x_i , x_j )\Bigg) \exp \Bigg(-\frac{\sqrt{3}}{l} d(x_i , x_j ) \Bigg) \quad \quad \nu= \tfrac{3}{2}
and :math:`\nu = 5/2`:
.. math::
k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{5}}{l} d(x_i , x_j ) +\frac{5}{3l^2} d(x_i , x_j )^2 \Bigg) \exp \Bigg(-\frac{\sqrt{5}}{l} d(x_i , x_j ) \Bigg) \quad \quad \nu= \tfrac{5}{2}
are popular choices for learning functions that are not infinitely
differentiable (as assumed by the RBF kernel) but at least once (:math:`\nu =
3/2`) or twice differentiable (:math:`\nu = 5/2`).
The flexibility of controlling the smoothness of the learned function via :math:`\nu`
allows adapting to the properties of the true underlying functional relation.
The prior and posterior of a GP resulting from a Matérn kernel are shown in
the following figure:
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_005.png
:target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
:align: center
See [RW2006]_, p. 84 for further details regarding the
different variants of the Matérn kernel.
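The special cases above can be checked numerically (a sketch with arbitrary
inputs):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.gaussian_process.kernels import Matern, RBF

X = np.random.RandomState(0).randn(6, 2)
d = cdist(X, X)
l = 1.5

# nu = 1/2 gives the absolute exponential kernel
K_half = Matern(length_scale=l, nu=0.5)(X)
ok_half = np.allclose(K_half, np.exp(-d / l))

# nu = 3/2 gives the once-differentiable case
K_32 = Matern(length_scale=l, nu=1.5)(X)
s3 = np.sqrt(3) * d / l
ok_32 = np.allclose(K_32, (1 + s3) * np.exp(-s3))

# nu = inf recovers the RBF (squared exponential) kernel
ok_inf = np.allclose(Matern(length_scale=l, nu=np.inf)(X),
                     RBF(length_scale=l)(X))
```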
Rational quadratic kernel
-------------------------
The :class:`RationalQuadratic` kernel can be seen as a scale mixture (an infinite sum)
of :class:`RBF` kernels with different characteristic length-scales. It is parameterized
by a length-scale parameter :math:`l>0` and a scale mixture parameter :math:`\alpha>0`.
Only the isotropic variant where :math:`l` is a scalar is supported at the moment.
The kernel is given by:
.. math::
k(x_i, x_j) = \left(1 + \frac{d(x_i, x_j)^2}{2\alpha l^2}\right)^{-\alpha}
The prior and posterior of a GP resulting from a :class:`RationalQuadratic` kernel are shown in
the following figure:
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_002.png
:target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
:align: center
Exp-Sine-Squared kernel
-----------------------
The :class:`ExpSineSquared` kernel allows modeling periodic functions.
It is parameterized by a length-scale parameter :math:`l>0` and a periodicity parameter
:math:`p>0`. Only the isotropic variant where :math:`l` is a scalar is supported at the moment.
The kernel is given by:
.. math::
k(x_i, x_j) = \text{exp}\left(- \frac{ 2\sin^2(\pi d(x_i, x_j) / p) }{ l^2 } \right)
The prior and posterior of a GP resulting from an ExpSineSquared kernel are shown in
the following figure:
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_003.png
:target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
:align: center
Dot-Product kernel
------------------
The :class:`DotProduct` kernel is non-stationary and can be obtained from linear regression
by putting :math:`N(0, 1)` priors on the coefficients of :math:`x_d (d = 1, . . . , D)` and
a prior of :math:`N(0, \sigma_0^2)` on the bias. The :class:`DotProduct` kernel is invariant to a rotation
of the coordinates about the origin, but not translations.
It is parameterized by a parameter :math:`\sigma_0^2`. For :math:`\sigma_0^2 = 0`, the kernel
is called the homogeneous linear kernel, otherwise it is inhomogeneous. The kernel is given by
.. math::
k(x_i, x_j) = \sigma_0 ^ 2 + x_i \cdot x_j
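This definition is easy to verify against the implementation (a sketch with
arbitrary inputs):

```python
import numpy as np
from sklearn.gaussian_process.kernels import DotProduct

X = np.random.RandomState(0).randn(5, 3)
sigma_0 = 0.7

K = DotProduct(sigma_0=sigma_0)(X)
matches = np.allclose(K, sigma_0**2 + X @ X.T)
```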
The :class:`DotProduct` kernel is commonly combined with exponentiation. An example with exponent 2 is
shown in the following figure:
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_004.png
:target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
:align: center
References
----------
.. [RW2006] `Carl E. Rasmussen and Christopher K.I. Williams,
"Gaussian Processes for Machine Learning",
MIT Press 2006 <https://www.gaussianprocess.org/gpml/chapters/RW.pdf>`_
.. [Duv2014] `David Duvenaud, "The Kernel Cookbook: Advice on Covariance functions", 2014
<https://www.cs.toronto.edu/~duvenaud/cookbook/>`_
.. currentmodule:: sklearn.gaussian_process | scikit-learn | gaussian process Gaussian Processes currentmodule sklearn gaussian process Gaussian Processes GP are a nonparametric supervised learning method used to solve regression and probabilistic classification problems The advantages of Gaussian processes are The prediction interpolates the observations at least for regular kernels The prediction is probabilistic Gaussian so that one can compute empirical confidence intervals and decide based on those if one should refit online fitting adaptive fitting the prediction in some region of interest Versatile different ref kernels gp kernels can be specified Common kernels are provided but it is also possible to specify custom kernels The disadvantages of Gaussian processes include Our implementation is not sparse i e they use the whole samples features information to perform the prediction They lose efficiency in high dimensional spaces namely when the number of features exceeds a few dozens gpr Gaussian Process Regression GPR currentmodule sklearn gaussian process The class GaussianProcessRegressor implements Gaussian processes GP for regression purposes For this the prior of the GP needs to be specified GP will combine this prior and the likelihood function based on training samples It allows to give a probabilistic approach to prediction by giving the mean and standard deviation as output when predicting figure auto examples gaussian process images sphx glr plot gpr noisy targets 002 png target auto examples gaussian process plot gpr noisy targets html align center The prior mean is assumed to be constant and zero for normalize y False or the training data s mean for normalize y True The prior s covariance is specified by passing a ref kernel gp kernels object The hyperparameters of the kernel are optimized when fitting the class GaussianProcessRegressor by maximizing the log marginal likelihood LML based on the passed optimizer As the LML may have multiple local 
optima the optimizer can be started repeatedly by specifying n restarts optimizer The first run is always conducted starting from the initial hyperparameter values of the kernel subsequent runs are conducted from hyperparameter values that have been chosen randomly from the range of allowed values If the initial hyperparameters should be kept fixed None can be passed as optimizer The noise level in the targets can be specified by passing it via the parameter alpha either globally as a scalar or per datapoint Note that a moderate noise level can also be helpful for dealing with numeric instabilities during fitting as it is effectively implemented as Tikhonov regularization i e by adding it to the diagonal of the kernel matrix An alternative to specifying the noise level explicitly is to include a class sklearn gaussian process kernels WhiteKernel component into the kernel which can estimate the global noise level from the data see example below The figure below shows the effect of noisy target handled by setting the parameter alpha figure auto examples gaussian process images sphx glr plot gpr noisy targets 003 png target auto examples gaussian process plot gpr noisy targets html align center The implementation is based on Algorithm 2 1 of RW2006 In addition to the API of standard scikit learn estimators class GaussianProcessRegressor allows prediction without prior fitting based on the GP prior provides an additional method sample y X which evaluates samples drawn from the GPR prior or posterior at given inputs exposes a method log marginal likelihood theta which can be used externally for other ways of selecting hyperparameters e g via Markov chain Monte Carlo rubric Examples ref sphx glr auto examples gaussian process plot gpr noisy targets py ref sphx glr auto examples gaussian process plot gpr noisy py ref sphx glr auto examples gaussian process plot compare gpr krr py ref sphx glr auto examples gaussian process plot gpr co2 py gpc Gaussian Process 
Classification GPC currentmodule sklearn gaussian process The class GaussianProcessClassifier implements Gaussian processes GP for classification purposes more specifically for probabilistic classification where test predictions take the form of class probabilities GaussianProcessClassifier places a GP prior on a latent function math f which is then squashed through a link function to obtain the probabilistic classification The latent function math f is a so called nuisance function whose values are not observed and are not relevant by themselves Its purpose is to allow a convenient formulation of the model and math f is removed integrated out during prediction GaussianProcessClassifier implements the logistic link function for which the integral cannot be computed analytically but is easily approximated in the binary case In contrast to the regression setting the posterior of the latent function math f is not Gaussian even for a GP prior since a Gaussian likelihood is inappropriate for discrete class labels Rather a non Gaussian likelihood corresponding to the logistic link function logit is used GaussianProcessClassifier approximates the non Gaussian posterior with a Gaussian based on the Laplace approximation More details can be found in Chapter 3 of RW2006 The GP prior mean is assumed to be zero The prior s covariance is specified by passing a ref kernel gp kernels object The hyperparameters of the kernel are optimized during fitting of GaussianProcessRegressor by maximizing the log marginal likelihood LML based on the passed optimizer As the LML may have multiple local optima the optimizer can be started repeatedly by specifying n restarts optimizer The first run is always conducted starting from the initial hyperparameter values of the kernel subsequent runs are conducted from hyperparameter values that have been chosen randomly from the range of allowed values If the initial hyperparameters should be kept fixed None can be passed as optimizer class 
GaussianProcessClassifier supports multi class classification by performing either one versus rest or one versus one based training and prediction In one versus rest one binary Gaussian process classifier is fitted for each class which is trained to separate this class from the rest In one vs one one binary Gaussian process classifier is fitted for each pair of classes which is trained to separate these two classes The predictions of these binary predictors are combined into multi class predictions See the section on ref multi class classification multiclass for more details In the case of Gaussian process classification one vs one might be computationally cheaper since it has to solve many problems involving only a subset of the whole training set rather than fewer problems on the whole dataset Since Gaussian process classification scales cubically with the size of the dataset this might be considerably faster However note that one vs one does not support predicting probability estimates but only plain predictions Moreover note that class GaussianProcessClassifier does not yet implement a true multi class Laplace approximation internally but as discussed above is based on solving several binary classification tasks internally which are combined using one versus rest or one versus one GPC examples Probabilistic predictions with GPC This example illustrates the predicted probability of GPC for an RBF kernel with different choices of the hyperparameters The first figure shows the predicted probability of GPC with arbitrarily chosen hyperparameters and with the hyperparameters corresponding to the maximum log marginal likelihood LML While the hyperparameters chosen by optimizing LML have a considerably larger LML they perform slightly worse according to the log loss on test data The figure shows that this is because they exhibit a steep change of the class probabilities at the class boundaries which is good but have predicted probabilities close to 0 5 far away from 
the class boundaries which is bad This undesirable effect is caused by the Laplace approximation used internally by GPC The second figure shows the log marginal likelihood for different choices of the kernel s hyperparameters highlighting the two choices of the hyperparameters used in the first figure by black dots figure auto examples gaussian process images sphx glr plot gpc 001 png target auto examples gaussian process plot gpc html align center figure auto examples gaussian process images sphx glr plot gpc 002 png target auto examples gaussian process plot gpc html align center Illustration of GPC on the XOR dataset currentmodule sklearn gaussian process kernels This example illustrates GPC on XOR data Compared are a stationary isotropic kernel class RBF and a non stationary kernel class DotProduct On this particular dataset the class DotProduct kernel obtains considerably better results because the class boundaries are linear and coincide with the coordinate axes In practice however stationary kernels such as class RBF often obtain better results figure auto examples gaussian process images sphx glr plot gpc xor 001 png target auto examples gaussian process plot gpc xor html align center currentmodule sklearn gaussian process Gaussian process classification GPC on iris dataset This example illustrates the predicted probability of GPC for an isotropic and anisotropic RBF kernel on a two dimensional version for the iris dataset This illustrates the applicability of GPC to non binary classification The anisotropic RBF kernel obtains slightly higher log marginal likelihood by assigning different length scales to the two feature dimensions figure auto examples gaussian process images sphx glr plot gpc iris 001 png target auto examples gaussian process plot gpc iris html align center gp kernels Kernels for Gaussian Processes currentmodule sklearn gaussian process kernels Kernels also called covariance functions in the context of GPs are a crucial ingredient of GPs 
Kernels for Gaussian Processes
------------------------------

Kernels (also called "covariance functions" in the context of GPs) are a
crucial ingredient of GPs which determine the shape of prior and posterior of
the GP. They encode the assumptions on the function being learned by defining
the "similarity" of two datapoints combined with the assumption that similar
datapoints should have similar target values. Two categories of kernels can be
distinguished: stationary kernels depend only on the distance of two
datapoints and not on their absolute values :math:`k(x_i, x_j) = k(d(x_i, x_j))`
and are thus invariant to translations in the input space, while
non-stationary kernels depend also on the specific values of the datapoints.
Stationary kernels can further be subdivided into isotropic and anisotropic
kernels, where isotropic kernels are also invariant to rotations in the input
space. For more details, we refer to Chapter 4 of [RW2006]_. For guidance on
how to best combine different kernels, we refer to [Duv2014]_.

.. dropdown:: Gaussian Process Kernel API

  The main usage of a :class:`Kernel` is to compute the GP's covariance
  between datapoints. For this, the method ``__call__`` of the kernel can be
  called. This method can either be used to compute the "auto-covariance" of
  all pairs of datapoints in a 2d array X, or the "cross-covariance" of all
  combinations of datapoints of a 2d array X with datapoints in a 2d array Y.
  The following identity holds true for all kernels k (except for the
  :class:`WhiteKernel`): ``k(X) == K(X, Y=X)``

  If only the diagonal of the auto-covariance is being used, the method
  ``diag()`` of a kernel can be called, which is more computationally
  efficient than the equivalent call to ``__call__``:
  ``np.diag(k(X, X)) == k.diag(X)``

  Kernels are parameterized by a vector :math:`\theta` of hyperparameters.
  These hyperparameters can for instance control length-scales or periodicity
  of a kernel (see below). All kernels support computing analytic gradients of
  the kernel's auto-covariance with respect to :math:`\log(\theta)` via
  setting ``eval_gradient=True`` in the ``__call__`` method. That is, a
  ``(len(X), len(X), len(theta))`` array is returned where the entry
  ``[i, j, l]`` contains
  :math:`\frac{\partial k_\theta(x_i, x_j)}{\partial \log(\theta_l)}`. This
  gradient is used by the Gaussian process (both regressor and classifier) in
  computing the gradient of the log-marginal-likelihood, which in turn is used
  to determine the value of :math:`\theta`, which maximizes the
  log-marginal-likelihood, via gradient ascent. For each hyperparameter, the
  initial value and the bounds need to be specified when creating an instance
  of the kernel. The current value of :math:`\theta` can be get and set via
  the property ``theta`` of the kernel object. Moreover, the bounds of the
  hyperparameters can be accessed by the property ``bounds`` of the kernel.
  Note that both properties (``theta`` and ``bounds``) return log-transformed
  values of the internally used values since those are typically more amenable
  to gradient-based optimization. The specification of each hyperparameter is
  stored in the form of an instance of :class:`Hyperparameter` in the
  respective kernel. Note that a kernel using a hyperparameter with name "x"
  must have the attributes self.x and self.x_bounds.

  The abstract base class for all kernels is :class:`Kernel`. Kernel
  implements a similar interface as :class:`~sklearn.base.BaseEstimator`,
  providing the methods ``get_params()``, ``set_params()``, and ``clone()``.
  This allows setting kernel values also via meta-estimators such as
  :class:`~sklearn.pipeline.Pipeline` or
  :class:`~sklearn.model_selection.GridSearchCV`. Note that due to the nested
  structure of kernels (by applying kernel operators, see below), the names of
  kernel parameters might become relatively complicated. In general, for a
  binary kernel operator, parameters of the left operand are prefixed with
  ``k1__`` and parameters of the right operand with ``k2__``. An additional
  convenience method is ``clone_with_theta(theta)``, which returns a cloned
  version of the kernel but with the hyperparameters set to ``theta``. An
  illustrative example:

    >>> from sklearn.gaussian_process.kernels import ConstantKernel, RBF
    >>> kernel = ConstantKernel(constant_value=1.0, constant_value_bounds=(0.0, 10.0)) * RBF(length_scale=0.5, length_scale_bounds=(0.0, 10.0)) + RBF(length_scale=2.0, length_scale_bounds=(0.0, 10.0))
    >>> for hyperparameter in kernel.hyperparameters: print(hyperparameter)
    Hyperparameter(name='k1__k1__constant_value', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
    Hyperparameter(name='k1__k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
    Hyperparameter(name='k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
    >>> params = kernel.get_params()
    >>> for key in sorted(params): print("%s : %s" % (key, params[key]))
    k1 : 1**2 * RBF(length_scale=0.5)
    k1__k1 : 1**2
    k1__k1__constant_value : 1.0
    k1__k1__constant_value_bounds : (0.0, 10.0)
    k1__k2 : RBF(length_scale=0.5)
    k1__k2__length_scale : 0.5
    k1__k2__length_scale_bounds : (0.0, 10.0)
    k2 : RBF(length_scale=2)
    k2__length_scale : 2.0
    k2__length_scale_bounds : (0.0, 10.0)
    >>> print(kernel.theta)  # Note: log-transformed
    [ 0.         -0.69314718  0.69314718]
    >>> print(kernel.bounds)  # Note: log-transformed
    [[      -inf  2.30258509]
     [      -inf  2.30258509]
     [      -inf  2.30258509]]

  All Gaussian process kernels are interoperable with
  :mod:`sklearn.metrics.pairwise` and vice versa: instances of subclasses of
  :class:`Kernel` can be passed as ``metric`` to ``pairwise_kernels`` from
  :mod:`sklearn.metrics.pairwise`. Moreover, kernel functions from pairwise
  can be used as GP kernels by using the wrapper class
  :class:`PairwiseKernel`. The only caveat is that the gradient of the
  hyperparameters is not analytic but numeric and all those kernels support
  only isotropic distances. The parameter ``gamma`` is considered to be a
  hyperparameter and may be optimized. The other kernel parameters are set
  directly at initialization and are kept fixed.

Basic kernels
^^^^^^^^^^^^^

The :class:`ConstantKernel` kernel can be used as part of a :class:`Product`
kernel where it scales the magnitude of the other factor (kernel) or as part
of a :class:`Sum` kernel, where it modifies the mean of the Gaussian process.
It depends on a parameter :math:`constant\_value`. It is defined as:

.. math::
   k(x_i, x_j) = constant\_value \;\forall\; x_1, x_2

The main use-case of the :class:`WhiteKernel` kernel is as part of a
sum-kernel where it explains the noise-component of the signal. Tuning its
parameter :math:`noise\_level` corresponds to estimating the noise-level.
It is defined as:

.. math::
   k(x_i, x_j) = noise\_level \text{ if } x_i == x_j \text{ else } 0

Kernel operators
^^^^^^^^^^^^^^^^

Kernel operators take one or two base kernels and combine them into a new
kernel. The :class:`Sum` kernel takes two kernels :math:`k_1` and :math:`k_2`
and combines them via :math:`k_{sum}(X, Y) = k_1(X, Y) + k_2(X, Y)`.
The :class:`Product` kernel takes two kernels :math:`k_1` and :math:`k_2`
and combines them via :math:`k_{product}(X, Y) = k_1(X, Y) * k_2(X, Y)`.
The :class:`Exponentiation` kernel takes one base kernel and a scalar
parameter :math:`p` and combines them via
:math:`k_{exp}(X, Y) = k(X, Y)^p`.
Note that magic methods ``__add__``, ``__mul__`` and ``__pow__`` are
overridden on the Kernel objects, so one can use e.g. ``RBF() + RBF()`` as
a shortcut for ``Sum(RBF(), RBF())``.

Radial basis function (RBF) kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The :class:`RBF` kernel is a stationary kernel. It is also known as the
"squared exponential" kernel. It is parameterized by a length-scale parameter
:math:`l>0`, which can either be a scalar (isotropic variant of the kernel)
or a vector with the same number of dimensions as the inputs :math:`x`
(anisotropic variant of the kernel). The kernel is given by:

.. math::
   k(x_i, x_j) = \text{exp}\left(- \frac{d(x_i, x_j)^2}{2l^2} \right)

where :math:`d(\cdot, \cdot)` is the Euclidean distance. This kernel is
infinitely differentiable, which implies that GPs with this kernel as
covariance function have mean square derivatives of all orders, and are thus
very smooth. The prior and posterior of a GP resulting from an RBF kernel are
shown in the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_001.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Matérn kernel
^^^^^^^^^^^^^

The :class:`Matern` kernel is a stationary kernel and a generalization of the
:class:`RBF` kernel. It has an additional parameter :math:`\nu` which controls
the smoothness of the resulting function. It is parameterized by a
length-scale parameter :math:`l>0`, which can either be a scalar (isotropic
variant of the kernel) or a vector with the same number of dimensions as the
inputs :math:`x` (anisotropic variant of the kernel).

.. dropdown:: Mathematical implementation of Matérn kernel

  The kernel is given by:

  .. math::
     k(x_i, x_j) = \frac{1}{\Gamma(\nu)2^{\nu-1}}\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i , x_j )\Bigg)^\nu K_\nu\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i , x_j )\Bigg),

  where :math:`d(\cdot,\cdot)` is the Euclidean distance,
  :math:`K_\nu(\cdot)` is a modified Bessel function and
  :math:`\Gamma(\cdot)` is the gamma function. As
  :math:`\nu\rightarrow\infty`, the Matérn kernel converges to the RBF kernel.
  When :math:`\nu = 1/2`, the Matérn kernel becomes identical to the absolute
  exponential kernel, i.e.,

  .. math::
     k(x_i, x_j) = \exp \Bigg(- \frac{1}{l} d(x_i , x_j ) \Bigg) \quad \quad \nu= \tfrac{1}{2}

  In particular, :math:`\nu = 3/2`:

  .. math::
     k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{3}}{l} d(x_i , x_j )\Bigg) \exp \Bigg(-\frac{\sqrt{3}}{l} d(x_i , x_j ) \Bigg) \quad \quad \nu= \tfrac{3}{2}

  and :math:`\nu = 5/2`:

  .. math::
     k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{5}}{l} d(x_i , x_j ) + \frac{5}{3l^2} d(x_i , x_j )^2 \Bigg) \exp \Bigg(-\frac{\sqrt{5}}{l} d(x_i , x_j ) \Bigg) \quad \quad \nu= \tfrac{5}{2}

  are popular choices for learning functions that are not infinitely
  differentiable (as assumed by the RBF kernel) but at least once
  (:math:`\nu = 3/2`) or twice differentiable (:math:`\nu = 5/2`).

  The flexibility of controlling the smoothness of the learned function via
  :math:`\nu` allows adapting to the properties of the true underlying
  functional relation.

The prior and posterior of a GP resulting from a Matérn kernel are shown in
the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_005.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

See [RW2006]_, pp84 for further details regarding the different variants of
the Matérn kernel.

Rational quadratic kernel
^^^^^^^^^^^^^^^^^^^^^^^^^

The :class:`RationalQuadratic` kernel can be seen as a scale mixture (an
infinite sum) of :class:`RBF` kernels with different characteristic
length-scales. It is parameterized by a length-scale parameter :math:`l>0`
and a scale mixture parameter :math:`\alpha>0`. Only the isotropic variant
where :math:`l` is a scalar is supported at the moment. The kernel is given
by:

.. math::
   k(x_i, x_j) = \left(1 + \frac{d(x_i, x_j)^2}{2\alpha l^2}\right)^{-\alpha}

The prior and posterior of a GP resulting from a :class:`RationalQuadratic`
kernel are shown in the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_002.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Exp-Sine-Squared kernel
^^^^^^^^^^^^^^^^^^^^^^^

The :class:`ExpSineSquared` kernel allows modeling periodic functions. It is
parameterized by a length-scale parameter :math:`l>0` and a periodicity
parameter :math:`p>0`. Only the isotropic variant where :math:`l` is a scalar
is supported at the moment. The kernel is given by:

.. math::
   k(x_i, x_j) = \text{exp}\left(- \frac{2\sin^2(\pi d(x_i, x_j) / p)}{l^2} \right)

The prior and posterior of a GP resulting from an ExpSineSquared kernel are
shown in the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_003.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Dot-Product kernel
^^^^^^^^^^^^^^^^^^

The :class:`DotProduct` kernel is non-stationary and can be obtained from
linear regression by putting :math:`N(0, 1)` priors on the coefficients of
:math:`x_d (d = 1, \ldots, D)` and a prior of :math:`N(0, \sigma_0^2)` on the
bias. The :class:`DotProduct` kernel is invariant to a rotation of the
coordinates about the origin, but not translations. It is parameterized by a
parameter :math:`\sigma_0^2`. For :math:`\sigma_0^2 = 0`, the kernel is
called the homogeneous linear kernel, otherwise it is inhomogeneous. The
kernel is given by:

.. math::
   k(x_i, x_j) = \sigma_0^2 + x_i \cdot x_j

The :class:`DotProduct` kernel is commonly combined with exponentiation. An
example with exponent 2 is shown in the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_004.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

.. rubric:: References

.. [RW2006] Carl E. Rasmussen and Christopher K. I. Williams, "Gaussian
   Processes for Machine Learning", MIT Press, 2006.
   https://www.gaussianprocess.org/gpml/chapters/RW.pdf

.. [Duv2014] David Duvenaud, "The Kernel Cookbook: Advice on Covariance
   functions", 2014. https://www.cs.toronto.edu/~duvenaud/cookbook/

.. currentmodule:: sklearn.gaussian_process
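The ``__call__``/``diag()`` identities described in the kernel API above can be checked directly; a minimal sketch (the toy inputs are arbitrary):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.array([[0.0], [1.0], [2.0]])  # toy inputs, values arbitrary

k = RBF(length_scale=1.0)
K = k(X)  # auto-covariance of all pairs of datapoints in X

# For all kernels except WhiteKernel, k(X) == k(X, Y=X) ...
assert np.allclose(K, k(X, X))
# ... and diag() is a cheaper equivalent of np.diag(k(X, X)).
assert np.allclose(np.diag(K), k.diag(X))

# WhiteKernel is the exception: k(X) is noise_level * I,
# while the cross-covariance k(X, X) is identically zero.
w = WhiteKernel(noise_level=0.5)
print(w(X))     # 0.5 on the diagonal, 0 elsewhere
print(w(X, X))  # all zeros
```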
scikit-learn sklearn and Quadratic Linear and Quadratic Discriminant Analysis ldaqda Linear Discriminant Analysis | .. _lda_qda:
==========================================
Linear and Quadratic Discriminant Analysis
==========================================
.. currentmodule:: sklearn
Linear Discriminant Analysis
(:class:`~discriminant_analysis.LinearDiscriminantAnalysis`) and Quadratic
Discriminant Analysis
(:class:`~discriminant_analysis.QuadraticDiscriminantAnalysis`) are two classic
classifiers, with, as their names suggest, a linear and a quadratic decision
surface, respectively.
These classifiers are attractive because they have closed-form solutions that
can be easily computed, are inherently multiclass, have proven to work well in
practice, and have no hyperparameters to tune.
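A minimal usage sketch of both classifiers (the toy data is chosen only for illustration):

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

# Two small, well-separated classes (illustrative values only).
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])

# No hyperparameters are required; fitting has a closed-form solution.
lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)

print(lda.predict([[-0.8, -1]]))  # [1]
print(qda.predict([[-0.8, -1]]))  # [1]
```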
.. |ldaqda| image:: ../auto_examples/classification/images/sphx_glr_plot_lda_qda_001.png
:target: ../auto_examples/classification/plot_lda_qda.html
:scale: 80
.. centered:: |ldaqda|
The plot shows decision boundaries for Linear Discriminant Analysis and
Quadratic Discriminant Analysis. The bottom row demonstrates that Linear
Discriminant Analysis can only learn linear boundaries, while Quadratic
Discriminant Analysis can learn quadratic boundaries and is therefore more
flexible.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_classification_plot_lda_qda.py`: Comparison of LDA and
QDA on synthetic data.
Dimensionality reduction using Linear Discriminant Analysis
===========================================================
:class:`~discriminant_analysis.LinearDiscriminantAnalysis` can be used to
perform supervised dimensionality reduction, by projecting the input data to a
linear subspace consisting of the directions which maximize the separation
between classes (in a precise sense discussed in the mathematics section
below). The dimension of the output is necessarily less than the number of
classes, so this is in general a rather strong dimensionality reduction, and
only makes sense in a multiclass setting.
This is implemented in the `transform` method. The desired dimensionality can
be set using the ``n_components`` parameter. This parameter has no influence
on the `fit` and `predict` methods.
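For instance, a sketch using the Iris dataset, which has three classes, so at most two discriminant components can be kept:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# n_components must be smaller than the number of classes (here 3).
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
X_reduced = lda.transform(X)
print(X_reduced.shape)  # (150, 2)
```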
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_lda.py`: Comparison of LDA and
PCA for dimensionality reduction of the Iris dataset
.. _lda_qda_math:
Mathematical formulation of the LDA and QDA classifiers
=======================================================
Both LDA and QDA can be derived from simple probabilistic models which model
the class conditional distribution of the data :math:`P(X|y=k)` for each class
:math:`k`. Predictions can then be obtained by using Bayes' rule, for each
training sample :math:`x \in \mathcal{R}^d`:
.. math::
P(y=k | x) = \frac{P(x | y=k) P(y=k)}{P(x)} = \frac{P(x | y=k) P(y = k)}{ \sum_{l} P(x | y=l) \cdot P(y=l)}
and we select the class :math:`k` which maximizes this posterior probability.
More specifically, for linear and quadratic discriminant analysis,
:math:`P(x|y)` is modeled as a multivariate Gaussian distribution with
density:
.. math:: P(x | y=k) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}}\exp\left(-\frac{1}{2} (x-\mu_k)^t \Sigma_k^{-1} (x-\mu_k)\right)
where :math:`d` is the number of features.
QDA
---
According to the model above, the log of the posterior is:
.. math::
\log P(y=k | x) &= \log P(x | y=k) + \log P(y = k) + Cst \\
&= -\frac{1}{2} \log |\Sigma_k| -\frac{1}{2} (x-\mu_k)^t \Sigma_k^{-1} (x-\mu_k) + \log P(y = k) + Cst,
where the constant term :math:`Cst` corresponds to the denominator
:math:`P(x)`, in addition to other constant terms from the Gaussian. The
predicted class is the one that maximises this log-posterior.
.. note:: **Relation with Gaussian Naive Bayes**
If in the QDA model one assumes that the covariance matrices are diagonal,
then the inputs are assumed to be conditionally independent in each class,
and the resulting classifier is equivalent to the Gaussian Naive Bayes
classifier :class:`naive_bayes.GaussianNB`.
LDA
---
LDA is a special case of QDA, where the Gaussians for each class are assumed
to share the same covariance matrix: :math:`\Sigma_k = \Sigma` for all
:math:`k`. This reduces the log posterior to:
.. math:: \log P(y=k | x) = -\frac{1}{2} (x-\mu_k)^t \Sigma^{-1} (x-\mu_k) + \log P(y = k) + Cst.
The term :math:`(x-\mu_k)^t \Sigma^{-1} (x-\mu_k)` corresponds to the
`Mahalanobis Distance <https://en.wikipedia.org/wiki/Mahalanobis_distance>`_
between the sample :math:`x` and the mean :math:`\mu_k`. The Mahalanobis
distance tells how close :math:`x` is from :math:`\mu_k`, while also
accounting for the variance of each feature. We can thus interpret LDA as
assigning :math:`x` to the class whose mean is the closest in terms of
Mahalanobis distance, while also accounting for the class prior
probabilities.
The log-posterior of LDA can also be written [3]_ as:
.. math::
\log P(y=k | x) = \omega_k^t x + \omega_{k0} + Cst.
where :math:`\omega_k = \Sigma^{-1} \mu_k` and :math:`\omega_{k0} =
-\frac{1}{2} \mu_k^t\Sigma^{-1}\mu_k + \log P (y = k)`. These quantities
correspond to the `coef_` and `intercept_` attributes, respectively.
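This affine form can be verified numerically; a sketch assuming the Iris dataset:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis().fit(X, y)

# decision_function reproduces omega_k^t x + omega_k0 exactly.
scores = X @ lda.coef_.T + lda.intercept_
assert np.allclose(scores, lda.decision_function(X))

# The predicted class maximises this linear score.
assert np.array_equal(lda.classes_[scores.argmax(axis=1)], lda.predict(X))
```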
From the above formula, it is clear that LDA has a linear decision surface.
In the case of QDA, there are no assumptions on the covariance matrices
:math:`\Sigma_k` of the Gaussians, leading to quadratic decision surfaces.
See [1]_ for more details.
Mathematical formulation of LDA dimensionality reduction
========================================================
First note that the K means :math:`\mu_k` are vectors in
:math:`\mathcal{R}^d`, and they lie in an affine subspace :math:`H` of
dimension at most :math:`K - 1` (2 points lie on a line, 3 points lie on a
plane, etc.).
As mentioned above, we can interpret LDA as assigning :math:`x` to the class
whose mean :math:`\mu_k` is the closest in terms of Mahalanobis distance,
while also accounting for the class prior probabilities. Alternatively, LDA
is equivalent to first *sphering* the data so that the covariance matrix is
the identity, and then assigning :math:`x` to the closest mean in terms of
Euclidean distance (still accounting for the class priors).
Computing Euclidean distances in this d-dimensional space is equivalent to
first projecting the data points into :math:`H`, and computing the distances
there (since the other dimensions will contribute equally to each class in
terms of distance). In other words, if :math:`x` is closest to :math:`\mu_k`
in the original space, it will also be the case in :math:`H`.
This shows that, implicit in the LDA
classifier, there is a dimensionality reduction by linear projection onto a
:math:`K-1` dimensional space.
We can reduce the dimension even more, to a chosen :math:`L`, by projecting
onto the linear subspace :math:`H_L` which maximizes the variance of the
:math:`\mu^*_k` after projection (in effect, we are doing a form of PCA for the
transformed class means :math:`\mu^*_k`). This :math:`L` corresponds to the
``n_components`` parameter used in the
:func:`~discriminant_analysis.LinearDiscriminantAnalysis.transform` method. See
[1]_ for more details.
Shrinkage and Covariance Estimator
==================================
Shrinkage is a form of regularization used to improve the estimation of
covariance matrices in situations where the number of training samples is
small compared to the number of features.
In this scenario, the empirical sample covariance is a poor
estimator, and shrinkage helps improve the generalization performance of
the classifier.
Shrinkage LDA can be used by setting the ``shrinkage`` parameter of
the :class:`~discriminant_analysis.LinearDiscriminantAnalysis` class to 'auto'.
This automatically determines the optimal shrinkage parameter in an analytic
way following the lemma introduced by Ledoit and Wolf [2]_. Note that
currently shrinkage only works when setting the ``solver`` parameter to 'lsqr'
or 'eigen'.
The ``shrinkage`` parameter can also be manually set between 0 and 1. In
particular, a value of 0 corresponds to no shrinkage (which means the empirical
covariance matrix will be used) and a value of 1 corresponds to complete
shrinkage (which means that the diagonal matrix of variances will be used as
an estimate for the covariance matrix). Setting this parameter to a value
between these two extrema will estimate a shrunk version of the covariance
matrix.
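A sketch of both usages, on synthetic data with more features than samples (the sizes and random seed are arbitrary):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.RandomState(0)
n, d = 20, 50  # few samples relative to the number of features
X = np.vstack([rng.randn(n, d), rng.randn(n, d) + 0.5])
y = np.array([0] * n + [1] * n)

# Shrinkage requires the 'lsqr' or 'eigen' solver.
lda_auto = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
lda_half = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=0.5).fit(X, y)

print(lda_auto.score(X, y), lda_half.score(X, y))
```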
The shrunk Ledoit and Wolf estimator of covariance may not always be the
best choice. For example, if the data are normally distributed, the
Oracle Approximating Shrinkage estimator :class:`sklearn.covariance.OAS`
yields a smaller Mean Squared Error than the one given by Ledoit and Wolf's
formula used with shrinkage="auto". In LDA, the data are assumed to be Gaussian
conditionally to the class. If these assumptions hold, using LDA with
the OAS estimator of covariance will yield a better classification
accuracy than if Ledoit and Wolf or the empirical covariance estimator is used.
The covariance estimator can be chosen using the ``covariance_estimator``
parameter of the :class:`~discriminant_analysis.LinearDiscriminantAnalysis`
class. A covariance estimator should have a :term:`fit` method and a
``covariance_`` attribute like all covariance estimators in the
:mod:`sklearn.covariance` module.
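For example, plugging in the OAS estimator mentioned above (the dataset is synthetic, for illustration only):

```python
from sklearn.covariance import OAS
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=40, n_features=20, random_state=0)

# covariance_estimator is only supported by the 'lsqr' and 'eigen' solvers.
lda = LinearDiscriminantAnalysis(solver="lsqr", covariance_estimator=OAS())
lda.fit(X, y)
print(lda.score(X, y))
```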
.. |shrinkage| image:: ../auto_examples/classification/images/sphx_glr_plot_lda_001.png
:target: ../auto_examples/classification/plot_lda.html
:scale: 75
.. centered:: |shrinkage|
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_classification_plot_lda.py`: Comparison of LDA classifiers
with Empirical, Ledoit Wolf and OAS covariance estimator.
Estimation algorithms
=====================
Using LDA and QDA requires computing the log-posterior which depends on the
class priors :math:`P(y=k)`, the class means :math:`\mu_k`, and the
covariance matrices.
The 'svd' solver is the default solver used for
:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`, and it is
the only available solver for
:class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`.
It can perform both classification and transform (for LDA).
As it does not rely on the calculation of the covariance matrix, the 'svd'
solver may be preferable in situations where the number of features is large.
The 'svd' solver cannot be used with shrinkage.
For QDA, the use of the SVD solver relies on the fact that the covariance
matrix :math:`\Sigma_k` is, by definition, equal to :math:`\frac{1}{n - 1}
X_k^tX_k = \frac{1}{n - 1} V S^2 V^t` where :math:`V` comes from the SVD of the (centered)
matrix: :math:`X_k = U S V^t`. It turns out that we can compute the
log-posterior above without having to explicitly compute :math:`\Sigma`:
computing :math:`S` and :math:`V` via the SVD of :math:`X` is enough. For
LDA, two SVDs are computed: the SVD of the centered input matrix :math:`X`
and the SVD of the class-wise mean vectors.
The 'lsqr' solver is an efficient algorithm that only works for
classification. It needs to explicitly compute the covariance matrix
:math:`\Sigma`, and supports shrinkage and custom covariance estimators.
This solver computes the coefficients
:math:`\omega_k = \Sigma^{-1}\mu_k` by solving for :math:`\Sigma \omega =
\mu_k`, thus avoiding the explicit computation of the inverse
:math:`\Sigma^{-1}`.
The 'eigen' solver is based on the optimization of the between class scatter to
within class scatter ratio. It can be used for both classification and
transform, and it supports shrinkage. However, the 'eigen' solver needs to
compute the covariance matrix, so it might not be suitable for situations with
a high number of features.
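Without shrinkage, the three solvers fit the same underlying model, so on a well-conditioned dataset they should report (nearly) identical results; a quick sketch on Iris:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

scores = {}
for solver in ("svd", "lsqr", "eigen"):
    clf = LinearDiscriminantAnalysis(solver=solver).fit(X, y)
    scores[solver] = clf.score(X, y)

print(scores)  # training accuracies should (nearly) coincide
```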
.. rubric:: References
.. [1] "The Elements of Statistical Learning", Hastie T., Tibshirani R.,
Friedman J., Section 4.3, p.106-119, 2008.
.. [2] Ledoit O, Wolf M. Honey, I Shrunk the Sample Covariance Matrix.
The Journal of Portfolio Management 30(4), 110-119, 2004.
.. [3] R. O. Duda, P. E. Hart, D. G. Stork. Pattern Classification
(Second Edition), section 2.6.2.
.. _semi_supervised:
===================================================
Semi-supervised learning
===================================================
.. currentmodule:: sklearn.semi_supervised
`Semi-supervised learning
<https://en.wikipedia.org/wiki/Semi-supervised_learning>`_ is a situation
in which some of the samples in your training data are not labeled. The
semi-supervised estimators in :mod:`sklearn.semi_supervised` are able to
make use of this additional unlabeled data to better capture the shape of
the underlying data distribution and generalize better to new samples.
These algorithms can perform well when we have a very small number of
labeled points and a large number of unlabeled points.
.. topic:: Unlabeled entries in `y`
It is important to assign an identifier to unlabeled points along with the
labeled data when training the model with the ``fit`` method. The
identifier that this implementation uses is the integer value :math:`-1`.
Note that for string labels, the dtype of `y` should be object so that it
can contain both strings and integers.
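For example, the convention can be sketched on a toy one-dimensional dataset in
which two points are marked unlabeled with ``-1``:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Six points on a line; indices 3 and 4 are unlabeled (marked -1).
X = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, -1, -1, 1])

model = LabelPropagation(kernel="knn", n_neighbors=3).fit(X, y)

# transduction_ holds the labels inferred for every training point:
# the two unlabeled points inherit the label of their cluster.
print(model.transduction_)
```

For string labels the same convention applies, but ``y`` must have object dtype
so that it can hold both strings and the integer ``-1``.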
.. note::
Semi-supervised algorithms need to make assumptions about the distribution
of the dataset in order to achieve performance gains. See `here
<https://en.wikipedia.org/wiki/Semi-supervised_learning#Assumptions>`_
for more details.
.. _self_training:
Self Training
=============
This self-training implementation is based on Yarowsky's [1]_ algorithm. Using
this algorithm, a given supervised classifier can function as a semi-supervised
classifier, allowing it to learn from unlabeled data.
:class:`SelfTrainingClassifier` can be called with any classifier that
implements `predict_proba`, passed as the parameter `base_estimator`. In
each iteration, the `base_classifier` predicts labels for the unlabeled
samples and adds a subset of these labels to the labeled dataset.
The choice of this subset is determined by the selection criterion. This
selection can be done using a `threshold` on the prediction probabilities, or
by choosing the `k_best` samples according to the prediction probabilities.
The labels used for the final fit as well as the iteration in which each sample
was labeled are available as attributes. The optional `max_iter` parameter
specifies how many times the loop is executed at most.
The `max_iter` parameter may be set to `None`, causing the algorithm to iterate
until all samples have labels or no new samples are selected in that iteration.
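A minimal sketch of this workflow, wrapping a probabilistic
:class:`~sklearn.svm.SVC` (passed positionally as the base classifier) on a
partially unlabeled copy of the iris dataset:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Hide most labels: unlabeled samples are marked with -1.
rng = np.random.RandomState(42)
y_semi = np.where(rng.rand(len(y)) < 0.7, -1, y)

# Any classifier exposing predict_proba can be wrapped
# (SVC needs probability=True for that).
base = SVC(probability=True, gamma="auto", random_state=42)
self_training = SelfTrainingClassifier(base, threshold=0.75).fit(X, y_semi)

# labeled_iter_ records the iteration in which each sample got its label.
print(self_training.score(X, y))
```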
.. note::
When using the self-training classifier, the
:ref:`calibration <calibration>` of the classifier is important.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_self_training_varying_threshold.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_semi_supervised_versus_svm_iris.py`
.. rubric:: References
.. [1] :doi:`"Unsupervised word sense disambiguation rivaling supervised methods"
<10.3115/981658.981684>`
David Yarowsky, Proceedings of the 33rd annual meeting on Association for
Computational Linguistics (ACL '95). Association for Computational Linguistics,
Stroudsburg, PA, USA, 189-196.
.. _label_propagation:
Label Propagation
=================
Label propagation denotes a few variations of semi-supervised graph
inference algorithms.
A few features available in this model:
* Used for classification tasks
* Kernel methods to project data into alternate dimensional spaces
`scikit-learn` provides two label propagation models:
:class:`LabelPropagation` and :class:`LabelSpreading`. Both work by
constructing a similarity graph over all items in the input dataset.
.. figure:: ../auto_examples/semi_supervised/images/sphx_glr_plot_label_propagation_structure_001.png
:target: ../auto_examples/semi_supervised/plot_label_propagation_structure.html
:align: center
:scale: 60%
**An illustration of label-propagation:** *the structure of unlabeled
observations is consistent with the class structure, and thus the
class label can be propagated to the unlabeled observations of the
training set.*
:class:`LabelPropagation` and :class:`LabelSpreading`
differ in the modifications they make to the similarity matrix of the graph
and in the clamping effect on the label distributions.
Clamping allows the algorithm to change the weight of the ground-truth labeled
data to some degree. The :class:`LabelPropagation` algorithm performs hard
clamping of input labels, which means :math:`\alpha=0`. This clamping factor
can be relaxed, to say :math:`\alpha=0.2`, which means that we will always
retain 80 percent of our original label distribution, but the algorithm gets to
change its confidence of the distribution within 20 percent.
:class:`LabelPropagation` uses the raw similarity matrix constructed from
the data with no modifications. In contrast, :class:`LabelSpreading`
minimizes a loss function that has regularization properties, as such it
is often more robust to noise. The algorithm iterates on a modified
version of the original graph and normalizes the edge weights by
computing the normalized graph Laplacian matrix. This procedure is also
used in :ref:`spectral_clustering`.
Label propagation models have two built-in kernel methods. Choice of kernel
affects both scalability and performance of the algorithms. The following are
available:
* rbf (:math:`\exp(-\gamma |x-y|^2), \gamma > 0`). :math:`\gamma` is
specified by keyword gamma.
* knn (:math:`1[x' \in kNN(x)]`). :math:`k` is specified by keyword
n_neighbors.
The RBF kernel will produce a fully connected graph which is represented in memory
by a dense matrix. This matrix may be very large and combined with the cost of
performing a full matrix multiplication calculation for each iteration of the
algorithm can lead to prohibitively long running times. On the other hand,
the KNN kernel will produce a much more memory-friendly sparse matrix
which can drastically reduce running times.
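As a sketch of the memory-friendly kNN graph, loosely modeled on the
two-circles example from the gallery (one labeled point per circle):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.semi_supervised import LabelSpreading

X, y = make_circles(n_samples=200, shuffle=False, random_state=0)

# Keep one labeled example per circle; mark the rest unlabeled.
y_train = np.full_like(y, -1)
y_train[0] = 0    # a point on the outer circle
y_train[-1] = 1   # a point on the inner circle

# The sparse kNN graph keeps memory proportional to
# n_samples * n_neighbors, unlike the dense n_samples x n_samples
# matrix built by kernel='rbf'.
model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.8)
model.fit(X, y_train)
print((model.transduction_ == y).mean())
```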
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_semi_supervised_versus_svm_iris.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_structure.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_digits.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_digits_active_learning.py`
.. rubric:: References
[2] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux. In Semi-Supervised
Learning (2006), pp. 193-216
[3] Olivier Delalleau, Yoshua Bengio, Nicolas Le Roux. Efficient
Non-Parametric Function Induction in Semi-Supervised Learning. AISTAT 2005
https://www.gatsby.ucl.ac.uk/aistats/fullpapers/204.pdf
.. _decompositions:
=================================================================
Decomposing signals in components (matrix factorization problems)
=================================================================
.. currentmodule:: sklearn.decomposition
.. _PCA:
Principal component analysis (PCA)
==================================
Exact PCA and probabilistic interpretation
------------------------------------------
PCA is used to decompose a multivariate dataset into a set of successive
orthogonal components that explain a maximum amount of the variance. In
scikit-learn, :class:`PCA` is implemented as a *transformer* object
that learns :math:`n` components in its ``fit`` method, and can be used on new
data to project it on these components.
PCA centers but does not scale the input data for each feature before
applying the SVD. The optional parameter ``whiten=True`` makes it
possible to project the data onto the singular space while scaling each
component to unit variance. This is often useful if the models down-stream make
strong assumptions on the isotropy of the signal: this is for example the case
for Support Vector Machines with the RBF kernel and the K-Means clustering
algorithm.
Below is an example of the iris dataset, which comprises 4
features, projected on the 2 dimensions that explain most variance:
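Such a projection can be sketched as:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

# Learn the 2 directions of maximal variance and project onto them.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

print(X_2d.shape)                     # (150, 2)
print(pca.explained_variance_ratio_)  # the first component dominates
```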
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_pca_vs_lda_001.png
:target: ../auto_examples/decomposition/plot_pca_vs_lda.html
:align: center
:scale: 75%
The :class:`PCA` object also provides a
probabilistic interpretation of the PCA that can give a likelihood of
data based on the amount of variance it explains. As such it implements a
:term:`score` method that can be used in cross-validation:
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_pca_vs_fa_model_selection_001.png
:target: ../auto_examples/decomposition/plot_pca_vs_fa_model_selection.html
:align: center
:scale: 75%
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_iris.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_lda.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_fa_model_selection.py`
.. _IncrementalPCA:
Incremental PCA
---------------
The :class:`PCA` object is very useful, but has certain limitations for
large datasets. The biggest limitation is that :class:`PCA` only supports
batch processing, which means all of the data to be processed must fit in main
memory. The :class:`IncrementalPCA` object uses a different form of
processing and allows for partial computations which almost
exactly match the results of :class:`PCA` while processing the data in a
minibatch fashion. :class:`IncrementalPCA` makes it possible to implement
out-of-core Principal Component Analysis either by:
* Using its ``partial_fit`` method on chunks of data fetched sequentially
from the local hard drive or a network database.
* Calling its fit method on a memory mapped file using
``numpy.memmap``.
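A minimal sketch of the minibatch behaviour (here driven through the
``batch_size`` parameter rather than explicit ``partial_fit`` calls):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, IncrementalPCA

X, _ = load_iris(return_X_y=True)

# Process the 150 samples in minibatches of 30 instead of all at once.
ipca = IncrementalPCA(n_components=2, batch_size=30)
X_ipca = ipca.fit_transform(X)

# The result almost exactly matches a regular batch PCA fit.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print(ipca.explained_variance_ratio_)
print(pca.explained_variance_ratio_)
```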
:class:`IncrementalPCA` only stores estimates of component and noise variances,
in order to update ``explained_variance_ratio_`` incrementally. This is why
memory usage depends on the number of samples per batch, rather than the
number of samples to be processed in the dataset.
As in :class:`PCA`, :class:`IncrementalPCA` centers but does not scale the
input data for each feature before applying the SVD.
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_incremental_pca_001.png
:target: ../auto_examples/decomposition/plot_incremental_pca.html
:align: center
:scale: 75%
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_incremental_pca_002.png
:target: ../auto_examples/decomposition/plot_incremental_pca.html
:align: center
:scale: 75%
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_incremental_pca.py`
.. _RandomizedPCA:
PCA using randomized SVD
------------------------
It is often interesting to project data to a lower-dimensional
space that preserves most of the variance, by dropping the singular vector
of components associated with lower singular values.
For instance, if we work with 64x64 pixel gray-level pictures
for face recognition,
the dimensionality of the data is 4096 and it is slow to train an
RBF support vector machine on such wide data. Furthermore we know that
the intrinsic dimensionality of the data is much lower than 4096 since all
pictures of human faces look somewhat alike.
The samples lie on a manifold of much lower
dimension (say around 200 for instance). The PCA algorithm can be used
to linearly transform the data while both reducing the dimensionality
and preserving most of the explained variance at the same time.
The class :class:`PCA` used with the optional parameter
``svd_solver='randomized'`` is very useful in that case: since we are going
to drop most of the singular vectors it is much more efficient to limit the
computation to an approximated estimate of the singular vectors we will keep
to actually perform the transform.
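A sketch of this setting on synthetic random data of the same shape (the real
example uses the Olivetti faces, which require a download):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Wide data: n_samples=400, n_features=4096, as in the portraits example.
X = rng.randn(400, 4096)

# Approximate only the 16 leading singular vectors instead of all 400.
pca = PCA(n_components=16, svd_solver="randomized", random_state=0)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)
```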
For instance, the following shows 16 sample portraits (centered around
0.0) from the Olivetti dataset. On the right hand side are the first 16
singular vectors reshaped as portraits. Since we only require the top
16 singular vectors of a dataset with size :math:`n_{samples} = 400`
and :math:`n_{features} = 64 \times 64 = 4096`, the computation time is
less than 1s:
.. |orig_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_001.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. |pca_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. centered:: |orig_img| |pca_img|
If we note :math:`n_{\max} = \max(n_{\mathrm{samples}}, n_{\mathrm{features}})` and
:math:`n_{\min} = \min(n_{\mathrm{samples}}, n_{\mathrm{features}})`, the time complexity
of the randomized :class:`PCA` is :math:`O(n_{\max}^2 \cdot n_{\mathrm{components}})`
instead of :math:`O(n_{\max}^2 \cdot n_{\min})` for the exact method
implemented in :class:`PCA`.
The memory footprint of randomized :class:`PCA` is also proportional to
:math:`2 \cdot n_{\max} \cdot n_{\mathrm{components}}` instead of :math:`n_{\max}
\cdot n_{\min}` for the exact method.
Note: the implementation of ``inverse_transform`` in :class:`PCA` with
``svd_solver='randomized'`` is not the exact inverse transform of
``transform`` even when ``whiten=False`` (default).
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_applications_plot_face_recognition.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`
.. rubric:: References
* Algorithm 4.3 in
:arxiv:`"Finding structure with randomness: Stochastic algorithms for
constructing approximate matrix decompositions" <0909.4061>`
Halko, et al., 2009
* :arxiv:`"An implementation of a randomized algorithm for principal component
analysis" <1412.3510>` A. Szlam et al. 2014
.. _SparsePCA:
Sparse principal components analysis (SparsePCA and MiniBatchSparsePCA)
-----------------------------------------------------------------------
:class:`SparsePCA` is a variant of PCA, with the goal of extracting the
set of sparse components that best reconstruct the data.
Mini-batch sparse PCA (:class:`MiniBatchSparsePCA`) is a variant of
:class:`SparsePCA` that is faster but less accurate. The increased speed is
reached by iterating over small chunks of the set of features, for a given
number of iterations.
Principal component analysis (:class:`PCA`) has the disadvantage that the
components extracted by this method have exclusively dense expressions, i.e.
they have non-zero coefficients when expressed as linear combinations of the
original variables. This can make interpretation difficult. In many cases,
the real underlying components can be more naturally imagined as sparse
vectors; for example in face recognition, components might naturally map to
parts of faces.
Sparse principal components yield a more parsimonious, interpretable
representation, clearly emphasizing which of the original features contribute
to the differences between samples.
The following example illustrates 16 components extracted using sparse PCA from
the Olivetti faces dataset. It can be seen how the regularization term induces
many zeros. Furthermore, the natural structure of the data causes the non-zero
coefficients to be vertically adjacent. The model does not enforce this
mathematically: each component is a vector :math:`h \in \mathbf{R}^{4096}`, and
there is no notion of vertical adjacency except during the human-friendly
visualization as 64x64 pixel images. The fact that the components shown below
appear local is the effect of the inherent structure of the data, which makes
such local patterns minimize reconstruction error. There exist sparsity-inducing
norms that take into account adjacency and different kinds of structure; see
[Jen09]_ for a review of such methods.
For more details on how to use Sparse PCA, see the Examples section, below.
.. |spca_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_005.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. centered:: |pca_img| |spca_img|
Note that there are many different formulations for the Sparse PCA
problem. The one implemented here is based on [Mrl09]_ . The optimization
problem solved is a PCA problem (dictionary learning) with an
:math:`\ell_1` penalty on the components:
.. math::
(U^*, V^*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2}
||X-UV||_{\text{Fro}}^2+\alpha||V||_{1,1} \\
\text{subject to } & ||U_k||_2 \leq 1 \text{ for all }
0 \leq k < n_{components}
:math:`||.||_{\text{Fro}}` stands for the Frobenius norm and :math:`||.||_{1,1}`
stands for the entry-wise matrix norm which is the sum of the absolute values
of all the entries in the matrix.
The sparsity-inducing :math:`||.||_{1,1}` matrix norm also prevents learning
components from noise when few training samples are available. The degree
of penalization (and thus sparsity) can be adjusted through the
hyperparameter ``alpha``. Small values lead to a gently regularized
factorization, while larger values shrink many coefficients to zero.
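The effect of ``alpha`` can be sketched on synthetic data (the exact sparsity
levels depend on the data and are illustrative only):

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
X = rng.randn(100, 30)

# Increasing alpha shrinks more component coefficients exactly to zero.
gentle = SparsePCA(n_components=5, alpha=0.01, max_iter=100, random_state=0).fit(X)
strong = SparsePCA(n_components=5, alpha=2.0, max_iter=100, random_state=0).fit(X)

print((gentle.components_ == 0).mean())  # fraction of zeros, small alpha
print((strong.components_ == 0).mean())  # fraction of zeros, large alpha
```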
.. note::
While in the spirit of an online algorithm, the class
:class:`MiniBatchSparsePCA` does not implement ``partial_fit`` because
the algorithm is online along the features direction, not the samples
direction.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`
.. rubric:: References
.. [Mrl09] `"Online Dictionary Learning for Sparse Coding"
<https://www.di.ens.fr/~fbach/mairal_icml09.pdf>`_
J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009
.. [Jen09] `"Structured Sparse Principal Component Analysis"
<https://www.di.ens.fr/~fbach/sspca_AISTATS2010.pdf>`_
R. Jenatton, G. Obozinski, F. Bach, 2009
.. _kernel_PCA:
Kernel Principal Component Analysis (kPCA)
==========================================
Exact Kernel PCA
----------------
:class:`KernelPCA` is an extension of PCA which achieves non-linear
dimensionality reduction through the use of kernels (see :ref:`metrics`) [Scholkopf1997]_. It
has many applications including denoising, compression and structured
prediction (kernel dependency estimation). :class:`KernelPCA` supports both
``transform`` and ``inverse_transform``.
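A minimal sketch on the classic two-circles data (parameters follow the linked
kernel PCA example):

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# An RBF kernel unfolds the concentric circles; fit_inverse_transform
# learns the approximate mapping back to the input space.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10,
                 fit_inverse_transform=True)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)  # approximate pre-images

# The two classes become linearly separable after the kernel projection.
print(LogisticRegression().fit(X_kpca, y).score(X_kpca, y))
```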
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_kernel_pca_002.png
:target: ../auto_examples/decomposition/plot_kernel_pca.html
:align: center
:scale: 75%
.. note::
:meth:`KernelPCA.inverse_transform` relies on a kernel ridge to learn the
function mapping samples from the PCA basis into the original feature
space [Bakir2003]_. Thus, the reconstruction obtained with
:meth:`KernelPCA.inverse_transform` is an approximation. See the example
linked below for more details.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_kernel_pca.py`
* :ref:`sphx_glr_auto_examples_applications_plot_digits_denoising.py`
.. rubric:: References
.. [Scholkopf1997] Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller.
`"Kernel principal component analysis."
<https://people.eecs.berkeley.edu/~wainwrig/stat241b/scholkopf_kernel.pdf>`_
International conference on artificial neural networks.
Springer, Berlin, Heidelberg, 1997.
.. [Bakir2003] Bakır, Gökhan H., Jason Weston, and Bernhard Schölkopf.
`"Learning to find pre-images."
<https://papers.nips.cc/paper/2003/file/ac1ad983e08ad3304a97e147f522747e-Paper.pdf>`_
Advances in neural information processing systems 16 (2003): 449-456.
.. _kPCA_Solvers:
Choice of solver for Kernel PCA
-------------------------------
While in :class:`PCA` the number of components is bounded by the number of
features, in :class:`KernelPCA` the number of components is bounded by the
number of samples. Many real-world datasets have a large number of samples! In
these cases finding *all* the components with a full kPCA is a waste of
computation time, as data is mostly described by the first few components
(e.g. ``n_components<=100``). In other words, the centered Gram matrix that
is eigendecomposed in the Kernel PCA fitting process has an effective rank that
is much smaller than its size. This is a situation where approximate
eigensolvers can provide speedup with very low precision loss.
.. dropdown:: Eigensolvers
The optional parameter ``eigen_solver='randomized'`` can be used to
*significantly* reduce the computation time when the number of requested
``n_components`` is small compared with the number of samples. It relies on
randomized decomposition methods to find an approximate solution in a shorter
time.
The time complexity of the randomized :class:`KernelPCA` is
:math:`O(n_{\mathrm{samples}}^2 \cdot n_{\mathrm{components}})`
instead of :math:`O(n_{\mathrm{samples}}^3)` for the exact method
implemented with ``eigen_solver='dense'``.
The memory footprint of randomized :class:`KernelPCA` is also proportional to
:math:`2 \cdot n_{\mathrm{samples}} \cdot n_{\mathrm{components}}` instead of
:math:`n_{\mathrm{samples}}^2` for the exact method.
Note: this technique is the same as in :ref:`RandomizedPCA`.
In addition to the above two solvers, ``eigen_solver='arpack'`` can be used as
an alternate way to get an approximate decomposition. In practice, this method
only provides reasonable execution times when the number of components to find
is extremely small. It is enabled by default when the desired number of
components is less than 10 (strict) and the number of samples is more than 200
(strict). See :class:`KernelPCA` for details.
.. rubric:: References
* *dense* solver:
`scipy.linalg.eigh documentation
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html>`_
* *randomized* solver:
* Algorithm 4.3 in
:arxiv:`"Finding structure with randomness: Stochastic
algorithms for constructing approximate matrix decompositions" <0909.4061>`
Halko, et al. (2009)
* :arxiv:`"An implementation of a randomized algorithm
for principal component analysis" <1412.3510>`
A. Szlam et al. (2014)
* *arpack* solver:
`scipy.sparse.linalg.eigsh documentation
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigsh.html>`_
R. B. Lehoucq, D. C. Sorensen, and C. Yang, (1998)
.. _LSA:
Truncated singular value decomposition and latent semantic analysis
===================================================================
:class:`TruncatedSVD` implements a variant of singular value decomposition
(SVD) that only computes the :math:`k` largest singular values,
where :math:`k` is a user-specified parameter.
:class:`TruncatedSVD` is very similar to :class:`PCA`, but differs
in that the matrix :math:`X` does not need to be centered.
When the columnwise (per-feature) means of :math:`X`
are subtracted from the feature values,
truncated SVD on the resulting matrix is equivalent to PCA.
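This equivalence can be checked directly; the components match up to the sign
of each component:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, TruncatedSVD

X, _ = load_iris(return_X_y=True)

# Subtract the per-feature means, then truncated SVD matches PCA.
X_centered = X - X.mean(axis=0)
svd = TruncatedSVD(n_components=2, algorithm="arpack").fit(X_centered)
pca = PCA(n_components=2).fit(X)

print(np.allclose(np.abs(svd.components_), np.abs(pca.components_), atol=1e-6))
```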
.. dropdown:: About truncated SVD and latent semantic analysis (LSA)
When truncated SVD is applied to term-document matrices
(as returned by :class:`~sklearn.feature_extraction.text.CountVectorizer` or
:class:`~sklearn.feature_extraction.text.TfidfVectorizer`),
this transformation is known as
`latent semantic analysis <https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf>`_
(LSA), because it transforms such matrices
to a "semantic" space of low dimensionality.
In particular, LSA is known to combat the effects of synonymy and polysemy
(both of which roughly mean there are multiple meanings per word),
which cause term-document matrices to be overly sparse
and exhibit poor similarity under measures such as cosine similarity.
.. note::
LSA is also known as latent semantic indexing, LSI,
though strictly that refers to its use in persistent indexes
for information retrieval purposes.
Mathematically, truncated SVD applied to training samples :math:`X`
produces a low-rank approximation of :math:`X`:
.. math::
X \approx X_k = U_k \Sigma_k V_k^\top
After this operation, :math:`U_k \Sigma_k`
is the transformed training set with :math:`k` features
(called ``n_components`` in the API).
To also transform a test set :math:`X`, we multiply it with :math:`V_k`:
.. math::
X' = X V_k
.. note::
Most treatments of LSA in the natural language processing (NLP)
and information retrieval (IR) literature
swap the axes of the matrix :math:`X` so that it has shape
``(n_features, n_samples)``.
We present LSA in a different way that matches the scikit-learn API better,
but the singular values found are the same.
While the :class:`TruncatedSVD` transformer
works with any feature matrix,
using it on tf-idf matrices is recommended over raw frequency counts
in an LSA/document processing setting.
In particular, sublinear scaling and inverse document frequency
should be turned on (``sublinear_tf=True, use_idf=True``)
to bring the feature values closer to a Gaussian distribution,
compensating for LSA's erroneous assumptions about textual data.
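A minimal LSA sketch with these recommended vectorizer settings (the four toy
documents are illustrative only):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

docs = [
    "the cat sat on the mat",
    "a cat and a dog played",
    "stock markets fell sharply",
    "investors sold shares as markets dropped",
]

# Sublinear tf and idf reweighting before the SVD, as recommended for LSA.
lsa = make_pipeline(
    TfidfVectorizer(sublinear_tf=True, use_idf=True),
    TruncatedSVD(n_components=2, random_state=0),
)
X_lsa = lsa.fit_transform(docs)
print(X_lsa.shape)  # (4, 2): each document mapped to the 2-d "semantic" space
```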
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`
.. rubric:: References
* Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze (2008),
*Introduction to Information Retrieval*, Cambridge University Press,
chapter 18: `Matrix decompositions & latent semantic indexing
<https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf>`_
.. _DictionaryLearning:
Dictionary Learning
===================
.. _SparseCoder:
Sparse coding with a precomputed dictionary
-------------------------------------------
The :class:`SparseCoder` object is an estimator that can be used to transform signals
into sparse linear combination of atoms from a fixed, precomputed dictionary
such as a discrete wavelet basis. This object therefore does not
implement a ``fit`` method. The transformation amounts
to a sparse coding problem: finding a representation of the data as a linear
combination of as few dictionary atoms as possible. All variations of
dictionary learning implement the following transform methods, controllable via
the ``transform_algorithm`` initialization parameter:
* Orthogonal matching pursuit (:ref:`omp`)
* Least-angle regression (:ref:`least_angle_regression`)
* Lasso computed by least-angle regression
* Lasso using coordinate descent (:ref:`lasso`)
* Thresholding
Thresholding is very fast but does not yield accurate reconstructions.
Thresholded codes have nevertheless been shown to be useful in the literature
for classification tasks. For image
reconstruction tasks, orthogonal matching pursuit yields the most accurate,
unbiased reconstruction.
The dictionary learning objects offer, via the ``split_sign`` parameter, the
possibility to separate the positive and negative values in the results of
sparse coding. This is useful when dictionary learning is used for extracting
features that will be used for supervised learning, because it allows the
learning algorithm to assign different weights to negative loadings of a
particular atom than to the corresponding positive loading.
The split code for a single sample has length ``2 * n_components``
and is constructed using the following rule: First, the regular code of length
``n_components`` is computed. Then, the first ``n_components`` entries of the
``split_code`` are
filled with the positive part of the regular code vector. The second half of
the split code is filled with the negative part of the code vector, only with
a positive sign. Therefore, the split code is non-negative.
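This rule can be checked directly. The sketch below uses random data and a
random dictionary, and assumes the ``split_sign`` parameter of the current
:class:`SparseCoder` API; it verifies that the two halves of the split code
recombine into the plain code:

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.RandomState(0)
dictionary = rng.randn(4, 6)
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
X = rng.randn(2, 6)

plain = SparseCoder(dictionary=dictionary, transform_algorithm="lasso_lars",
                    transform_alpha=0.1).transform(X)
split = SparseCoder(dictionary=dictionary, transform_algorithm="lasso_lars",
                    transform_alpha=0.1, split_sign=True).transform(X)

assert split.shape == (2, 2 * 4)   # length 2 * n_components per sample
assert (split >= 0).all()          # the split code is non-negative
# positive parts first, then the magnitudes of the negative parts
np.testing.assert_allclose(split[:, :4] - split[:, 4:], plain)
```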
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_sparse_coding.py`
Generic dictionary learning
---------------------------
Dictionary learning (:class:`DictionaryLearning`) is a matrix factorization
problem that amounts to finding a (usually overcomplete) dictionary that will
perform well at sparsely encoding the fitted data.
Representing data as sparse combinations of atoms from an overcomplete
dictionary is suggested to be the way the mammalian primary visual cortex works.
Consequently, dictionary learning applied on image patches has been shown to
give good results in image processing tasks such as image completion,
inpainting and denoising, as well as for supervised recognition tasks.
Dictionary learning is an optimization problem solved by alternatively updating
the sparse code, as a solution to multiple Lasso problems, considering the
dictionary fixed, and then updating the dictionary to best fit the sparse code.
.. math::
   (U^*, V^*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2}
                ||X-UV||_{\text{Fro}}^2+\alpha||U||_{1,1} \\
                \text{subject to } & ||V_k||_2 \leq 1 \text{ for all }
                0 \leq k < n_{\mathrm{atoms}}
.. |pca_img2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. |dict_img2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_007.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. centered:: |pca_img2| |dict_img2|
:math:`||.||_{\text{Fro}}` stands for the Frobenius norm and :math:`||.||_{1,1}`
stands for the entry-wise matrix norm which is the sum of the absolute values
of all the entries in the matrix.
After using such a procedure to fit the dictionary, the transform is simply a
sparse coding step that shares the same implementation with all dictionary
learning objects (see :ref:`SparseCoder`).
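A minimal fit of :class:`DictionaryLearning` on random data (all dimensions
chosen arbitrarily for illustration), showing the roles of the sparse code
:math:`U` and the dictionary :math:`V`:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(30, 10)  # 30 samples, 10 features

dico = DictionaryLearning(n_components=15, alpha=1.0, max_iter=50,
                          transform_algorithm="omp",
                          transform_n_nonzero_coefs=3, random_state=0)
code = dico.fit_transform(X)  # the sparse code U

assert dico.components_.shape == (15, 10)  # overcomplete dictionary V
assert code.shape == (30, 15)
# atoms respect the constraint ||V_k||_2 <= 1
assert np.all(np.linalg.norm(dico.components_, axis=1) <= 1 + 1e-9)
```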
It is also possible to constrain the dictionary and/or code to be positive to
match constraints that may be present in the data. Below are the faces with
different positivity constraints applied. Red indicates negative values, blue
indicates positive values, and white represents zeros.
.. |dict_img_pos1| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_010.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |dict_img_pos2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_011.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |dict_img_pos3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_012.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |dict_img_pos4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_013.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%
.. centered:: |dict_img_pos1| |dict_img_pos2|
.. centered:: |dict_img_pos3| |dict_img_pos4|
The following image shows what a dictionary learned from 4x4 pixel image
patches, extracted from part of an image of a raccoon face, looks like.
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_image_denoising_001.png
:target: ../auto_examples/decomposition/plot_image_denoising.html
:align: center
:scale: 50%
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_image_denoising.py`
.. rubric:: References
* `"Online dictionary learning for sparse coding"
<https://www.di.ens.fr/~fbach/mairal_icml09.pdf>`_
J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009
.. _MiniBatchDictionaryLearning:
Mini-batch dictionary learning
------------------------------
:class:`MiniBatchDictionaryLearning` implements a faster, but less accurate
version of the dictionary learning algorithm that is better suited for large
datasets.
By default, :class:`MiniBatchDictionaryLearning` divides the data into
mini-batches and optimizes in an online manner by cycling over the mini-batches
for the specified number of iterations. However, at the moment it does not
implement a stopping condition.
The estimator also implements ``partial_fit``, which updates the dictionary by
iterating only once over a mini-batch. This can be used for online learning
when the data is not readily available from the start, or when the data
does not fit into memory.
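A sketch of such online usage with ``partial_fit``, streaming random chunks in
place of real data:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
dico = MiniBatchDictionaryLearning(n_components=8, batch_size=10,
                                   random_state=0)

# feed mini-batches as they become available; each call is a single pass
for _ in range(5):
    chunk = rng.randn(10, 6)
    dico.partial_fit(chunk)

assert dico.components_.shape == (8, 6)  # dictionary updated incrementally
```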
.. currentmodule:: sklearn.cluster
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_dict_face_patches_001.png
:target: ../auto_examples/cluster/plot_dict_face_patches.html
:scale: 50%
:align: right
.. topic:: **Clustering for dictionary learning**
Note that when using dictionary learning to extract a representation
(e.g. for sparse coding) clustering can be a good proxy to learn the
dictionary. For instance the :class:`MiniBatchKMeans` estimator is
computationally efficient and implements on-line learning with a
``partial_fit`` method.
Example: :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`
.. currentmodule:: sklearn.decomposition
.. _FA:
Factor Analysis
===============
In unsupervised learning we only have a dataset :math:`X = \{x_1, x_2, \dots, x_n
\}`. How can this dataset be described mathematically? A very simple
`continuous latent variable` model for :math:`X` is
.. math:: x_i = W h_i + \mu + \epsilon
The vector :math:`h_i` is called "latent" because it is unobserved. :math:`\epsilon` is
considered a noise term distributed according to a Gaussian with mean 0 and
covariance :math:`\Psi` (i.e. :math:`\epsilon \sim \mathcal{N}(0, \Psi)`), :math:`\mu` is some
arbitrary offset vector. Such a model is called "generative" as it describes
how :math:`x_i` is generated from :math:`h_i`. If we use all the :math:`x_i`'s as columns to form
a matrix :math:`\mathbf{X}` and all the :math:`h_i`'s as columns of a matrix :math:`\mathbf{H}`
then we can write (with suitably defined :math:`\mathbf{M}` and :math:`\mathbf{E}`):
.. math::
\mathbf{X} = W \mathbf{H} + \mathbf{M} + \mathbf{E}
In other words, we *decomposed* matrix :math:`\mathbf{X}`.
If :math:`h_i` is given, the above equation automatically implies the following
probabilistic interpretation:
.. math:: p(x_i|h_i) = \mathcal{N}(Wh_i + \mu, \Psi)
For a complete probabilistic model we also need a prior distribution for the
latent variable :math:`h`. The most straightforward assumption (based on the nice
properties of the Gaussian distribution) is :math:`h \sim \mathcal{N}(0,
\mathbf{I})`. This yields a Gaussian as the marginal distribution of :math:`x`:
.. math:: p(x) = \mathcal{N}(\mu, WW^T + \Psi)
Now, without any further assumptions the idea of having a latent variable :math:`h`
would be superfluous -- :math:`x` can be completely modelled with a mean
and a covariance. We need to impose some more specific structure on one
of these two parameters. A simple additional assumption regards the
structure of the error covariance :math:`\Psi`:
* :math:`\Psi = \sigma^2 \mathbf{I}`: This assumption leads to
the probabilistic model of :class:`PCA`.
* :math:`\Psi = \mathrm{diag}(\psi_1, \psi_2, \dots, \psi_n)`: This model is called
:class:`FactorAnalysis`, a classical statistical model. The matrix W is
sometimes called the "factor loading matrix".
Both models essentially estimate a Gaussian with a low-rank covariance matrix.
Because both models are probabilistic they can be integrated in more complex
models, e.g. Mixture of Factor Analysers. One gets very different models (e.g.
:class:`FastICA`) if non-Gaussian priors on the latent variables are assumed.
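The diagonal-:math:`\Psi` structure can be checked on the fitted attributes.
This sketch (synthetic data with made-up per-feature noise scales) confirms
that :class:`FactorAnalysis` estimates one noise variance per feature and that
its model covariance is :math:`WW^T + \Psi`:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.RandomState(0)
# 2 latent factors, 5 observed features, heteroscedastic noise
W = rng.randn(2, 5)
H = rng.randn(200, 2)
noise_scale = np.array([0.1, 0.5, 1.0, 0.2, 0.3])
X = H @ W + rng.randn(200, 5) * noise_scale

fa = FactorAnalysis(n_components=2).fit(X)

# Psi is diagonal: one noise variance per feature
assert fa.noise_variance_.shape == (5,)
# the model covariance is W W^T + Psi (low-rank plus diagonal)
np.testing.assert_allclose(
    fa.get_covariance(),
    fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_))
```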
Factor analysis *can* produce similar components (the columns of its loading
matrix) to :class:`PCA`. However, one can not make any general statements
about these components (e.g. whether they are orthogonal):
.. |pca_img3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. |fa_img3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_008.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. centered:: |pca_img3| |fa_img3|
The main advantage of Factor Analysis over :class:`PCA` is that
it can model the variance in every direction of the input space independently
(heteroscedastic noise):
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_009.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:align: center
:scale: 75%
This allows better model selection than probabilistic PCA in the presence
of heteroscedastic noise:
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_pca_vs_fa_model_selection_002.png
:target: ../auto_examples/decomposition/plot_pca_vs_fa_model_selection.html
:align: center
:scale: 75%
Factor Analysis is often followed by a rotation of the factors (with the
parameter `rotation`), usually to improve interpretability. For example,
Varimax rotation maximizes the sum of the variances of the squared loadings,
i.e., it tends to produce sparser factors, which are influenced by only a few
features each (the "simple structure"). See e.g., the first example below.
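A minimal example of requesting a varimax rotation (the iris data is used
purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis

X = load_iris().data
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

# rotated loading matrix: rows are factors, columns the 4 iris features;
# varimax tends to concentrate each factor on a few features
loadings = fa.components_
assert loadings.shape == (2, 4)
```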
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_varimax_fa.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_fa_model_selection.py`
.. _ICA:
Independent component analysis (ICA)
====================================
Independent component analysis separates a multivariate signal into
additive subcomponents that are maximally independent. It is
implemented in scikit-learn using the :class:`Fast ICA <FastICA>`
algorithm. Typically, ICA is not used for reducing dimensionality but
for separating superimposed signals. Since the ICA model does not include
a noise term, for the model to be correct, whitening must be applied.
This can be done internally using the ``whiten`` argument or manually using one
of the PCA variants.
It is classically used to separate mixed signals (a problem known as
*blind source separation*), as in the example below:
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_ica_blind_source_separation_001.png
:target: ../auto_examples/decomposition/plot_ica_blind_source_separation.html
:align: center
:scale: 60%
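The blind source separation setup can be sketched as follows, with two
synthetic sources and a made-up mixing matrix:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))              # source 2: square wave
S = np.c_[s1, s2]
A = np.array([[1.0, 1.0], [0.5, 2.0]])   # mixing matrix
X = S @ A.T                              # two observed mixed signals

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_est = ica.fit_transform(X)             # estimated independent sources

assert S_est.shape == (2000, 2)
assert ica.mixing_.shape == (2, 2)       # estimated mixing matrix
```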
ICA can also be used as yet another non-linear decomposition that finds
components with some sparsity:
.. |pca_img4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. |ica_img4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_004.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. centered:: |pca_img4| |ica_img4|
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_blind_source_separation.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_vs_pca.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`
.. _NMF:
Non-negative matrix factorization (NMF or NNMF)
===============================================
NMF with the Frobenius norm
---------------------------
:class:`NMF` [1]_ is an alternative approach to decomposition that assumes that the
data and the components are non-negative. :class:`NMF` can be plugged in
instead of :class:`PCA` or its variants, in the cases where the data matrix
does not contain negative values. It finds a decomposition of samples
:math:`X` into two matrices :math:`W` and :math:`H` of non-negative elements,
by optimizing the distance :math:`d` between :math:`X` and the matrix product
:math:`WH`. The most widely used distance function is the squared Frobenius
norm, which is an obvious extension of the Euclidean norm to matrices:
.. math::
d_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||_{\mathrm{Fro}}^2 = \frac{1}{2} \sum_{i,j} (X_{ij} - {Y}_{ij})^2
Unlike :class:`PCA`, the representation of a vector is obtained in an additive
fashion, by superimposing the components, without subtracting. Such additive
models are efficient for representing images and text.
It has been observed in [Hoyer, 2004] [2]_ that, when carefully constrained,
:class:`NMF` can produce a parts-based representation of the dataset,
resulting in interpretable models. The following example displays 16
sparse components found by :class:`NMF` from the images in the Olivetti
faces dataset, in comparison with the PCA eigenfaces.
.. |pca_img5| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. |nmf_img5| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_003.png
:target: ../auto_examples/decomposition/plot_faces_decomposition.html
:scale: 60%
.. centered:: |pca_img5| |nmf_img5|
The `init` attribute determines the initialization method applied, which
has a great impact on the performance of the method. :class:`NMF` implements the
method Nonnegative Double Singular Value Decomposition. NNDSVD [4]_ is based on
two SVD processes, one approximating the data matrix, the other approximating
positive sections of the resulting partial SVD factors utilizing an algebraic
property of unit rank matrices. The basic NNDSVD algorithm is better fit for
sparse factorization. Its variants NNDSVDa (in which all zeros are set equal to
the mean of all elements of the data), and NNDSVDar (in which the zeros are set
to random perturbations less than the mean of the data divided by 100) are
recommended in the dense case.
Note that the Multiplicative Update ('mu') solver cannot update zeros present in
the initialization, so it leads to poorer results when used jointly with the
basic NNDSVD algorithm which introduces a lot of zeros; in this case, NNDSVDa or
NNDSVDar should be preferred.
:class:`NMF` can also be initialized with correctly scaled random non-negative
matrices by setting `init="random"`. An integer seed or a
``RandomState`` can also be passed to `random_state` to control
reproducibility.
In :class:`NMF`, L1 and L2 priors can be added to the loss function in order to
regularize the model. The L2 prior uses the Frobenius norm, while the L1 prior
uses an elementwise L1 norm. As in :class:`~sklearn.linear_model.ElasticNet`,
we control the combination of L1 and L2 with the `l1_ratio` (:math:`\rho`)
parameter, and the intensity of the regularization with the `alpha_W` and
`alpha_H` (:math:`\alpha_W` and :math:`\alpha_H`) parameters. The priors are
scaled by the number of samples (:math:`n\_samples`) for `H` and the number of
features (:math:`n\_features`) for `W`, to keep their impact balanced with
respect to one another and to the data fit term, as independently as possible
of the size of the training set. The prior terms are then:
.. math::
(\alpha_W \rho ||W||_1 + \frac{\alpha_W(1-\rho)}{2} ||W||_{\mathrm{Fro}} ^ 2) * n\_features
+ (\alpha_H \rho ||H||_1 + \frac{\alpha_H(1-\rho)}{2} ||H||_{\mathrm{Fro}} ^ 2) * n\_samples
and the regularized objective function is:
.. math::
d_{\mathrm{Fro}}(X, WH)
+ (\alpha_W \rho ||W||_1 + \frac{\alpha_W(1-\rho)}{2} ||W||_{\mathrm{Fro}} ^ 2) * n\_features
+ (\alpha_H \rho ||H||_1 + \frac{\alpha_H(1-\rho)}{2} ||H||_{\mathrm{Fro}} ^ 2) * n\_samples
NMF with a beta-divergence
--------------------------
As described previously, the most widely used distance function is the squared
Frobenius norm, which is an obvious extension of the Euclidean norm to
matrices:
.. math::
   d_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||_{\mathrm{Fro}}^2 = \frac{1}{2} \sum_{i,j} (X_{ij} - {Y}_{ij})^2
Other distance functions can be used in NMF as, for example, the (generalized)
Kullback-Leibler (KL) divergence, also referred as I-divergence:
.. math::
d_{KL}(X, Y) = \sum_{i,j} (X_{ij} \log(\frac{X_{ij}}{Y_{ij}}) - X_{ij} + Y_{ij})
Or, the Itakura-Saito (IS) divergence:
.. math::
d_{IS}(X, Y) = \sum_{i,j} (\frac{X_{ij}}{Y_{ij}} - \log(\frac{X_{ij}}{Y_{ij}}) - 1)
These three distances are special cases of the beta-divergence family, with
:math:`\beta = 2, 1, 0` respectively [6]_. The beta-divergence is
defined by:
.. math::
d_{\beta}(X, Y) = \sum_{i,j} \frac{1}{\beta(\beta - 1)}(X_{ij}^\beta + (\beta-1)Y_{ij}^\beta - \beta X_{ij} Y_{ij}^{\beta - 1})
.. image:: ../images/beta_divergence.png
:align: center
:scale: 75%
Note that this definition is not valid if :math:`\beta \in (0; 1)`, yet it can
be continuously extended to the definitions of :math:`d_{KL}` and :math:`d_{IS}`
respectively.
.. dropdown:: NMF implemented solvers
:class:`NMF` implements two solvers, using Coordinate Descent ('cd') [5]_, and
Multiplicative Update ('mu') [6]_. The 'mu' solver can optimize every
beta-divergence, including of course the Frobenius norm (:math:`\beta=2`), the
(generalized) Kullback-Leibler divergence (:math:`\beta=1`) and the
Itakura-Saito divergence (:math:`\beta=0`). Note that for
:math:`\beta \in (1; 2)`, the 'mu' solver is significantly faster than for other
values of :math:`\beta`. Note also that with a negative (or 0, i.e.
'itakura-saito') :math:`\beta`, the input matrix cannot contain zero values.
The 'cd' solver can only optimize the Frobenius norm. Due to the
underlying non-convexity of NMF, the different solvers may converge to
different minima, even when optimizing the same distance function.
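For example, a sketch of fitting :class:`NMF` with the 'mu' solver and the
generalized Kullback-Leibler loss on random non-negative data:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = rng.rand(20, 10)  # strictly non-negative data

# the 'mu' solver can optimize any beta-divergence, e.g. generalized KL;
# 'nndsvda' avoids the zeros that 'mu' cannot update
model = NMF(n_components=5, solver="mu", beta_loss="kullback-leibler",
            init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

assert W.shape == (20, 5) and H.shape == (5, 10)
assert (W >= 0).all() and (H >= 0).all()  # both factors are non-negative
```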
NMF is best used with the ``fit_transform`` method, which returns the matrix W.
The matrix H is stored in the fitted model as the ``components_`` attribute;
the method ``transform`` will decompose a new matrix X_new based on these
stored components::
>>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import NMF
>>> model = NMF(n_components=2, init='random', random_state=0)
>>> W = model.fit_transform(X)
>>> H = model.components_
>>> X_new = np.array([[1, 0], [1, 6.1], [1, 0], [1, 4], [3.2, 1], [0, 4]])
>>> W_new = model.transform(X_new)
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`
* :ref:`sphx_glr_auto_examples_applications_plot_topics_extraction_with_nmf_lda.py`
.. _MiniBatchNMF:
Mini-batch Non-negative Matrix Factorization
--------------------------------------------

:class:`MiniBatchNMF` [7]_ implements a faster, but less accurate version of
non-negative matrix factorization (i.e. :class:`~sklearn.decomposition.NMF`),
better suited for large datasets.
By default, :class:`MiniBatchNMF` divides the data into mini-batches and
optimizes the NMF model in an online manner by cycling over the mini-batches
for the specified number of iterations. The ``batch_size`` parameter controls
the size of the batches.
In order to speed up the mini-batch algorithm it is also possible to scale
past batches, giving them less importance than newer batches. This is done by
introducing a so-called forgetting factor, controlled by the ``forget_factor``
parameter.
The estimator also implements ``partial_fit``, which updates ``H`` by iterating
only once over a mini-batch. This can be used for online learning when the data
is not readily available from the start, or when the data does not fit into memory.
.. rubric:: References
.. [1] `"Learning the parts of objects by non-negative matrix factorization"
<http://www.cs.columbia.edu/~blei/fogm/2020F/readings/LeeSeung1999.pdf>`_
D. Lee, S. Seung, 1999
.. [2] `"Non-negative Matrix Factorization with Sparseness Constraints"
<https://www.jmlr.org/papers/volume5/hoyer04a/hoyer04a.pdf>`_
P. Hoyer, 2004
.. [4] `"SVD based initialization: A head start for nonnegative
matrix factorization"
<https://www.boutsidis.org/Boutsidis_PRE_08.pdf>`_
C. Boutsidis, E. Gallopoulos, 2008
.. [5] `"Fast local algorithms for large scale nonnegative matrix and tensor
factorizations."
<https://www.researchgate.net/profile/Anh-Huy-Phan/publication/220241471_Fast_Local_Algorithms_for_Large_Scale_Nonnegative_Matrix_and_Tensor_Factorizations>`_
A. Cichocki, A. Phan, 2009
.. [6] :arxiv:`"Algorithms for nonnegative matrix factorization with
the beta-divergence" <1010.1763>`
C. Fevotte, J. Idier, 2011
.. [7] :arxiv:`"Online algorithms for nonnegative matrix factorization with the
Itakura-Saito divergence" <1106.4198>`
A. Lefevre, F. Bach, C. Fevotte, 2011
.. _LatentDirichletAllocation:
Latent Dirichlet Allocation (LDA)
=================================
Latent Dirichlet Allocation is a generative probabilistic model for collections
of discrete data such as text corpora. It is also a topic model that is used
for discovering abstract topics from a collection of documents.
The graphical model of LDA is a three-level generative model:
.. image:: ../images/lda_model_graph.png
:align: center
A note on the notation presented in the graphical model above, which follows
Hoffman et al. (2013):
* The corpus is a collection of :math:`D` documents.
* A document is a sequence of :math:`N` words.
* There are :math:`K` topics in the corpus.
* The boxes represent repeated sampling.
In the graphical model, each node is a random variable and has a role in the
generative process. A shaded node indicates an observed variable and an unshaded
node indicates a hidden (latent) variable. In this case, words in the corpus are
the only data that we observe. The latent variables determine the random mixture
of topics in the corpus and the distribution of words in the documents.
The goal of LDA is to use the observed words to infer the hidden topic
structure.
.. dropdown:: Details on modeling text corpora
When modeling text corpora, the model assumes the following generative process
for a corpus with :math:`D` documents and :math:`K` topics, with :math:`K`
corresponding to `n_components` in the API:
1. For each topic :math:`k \in K`, draw :math:`\beta_k \sim
\mathrm{Dirichlet}(\eta)`. This provides a distribution over the words,
i.e. the probability of a word appearing in topic :math:`k`.
:math:`\eta` corresponds to `topic_word_prior`.
2. For each document :math:`d \in D`, draw the topic proportions
:math:`\theta_d \sim \mathrm{Dirichlet}(\alpha)`. :math:`\alpha`
corresponds to `doc_topic_prior`.
3. For each word :math:`i` in document :math:`d`:
a. Draw the topic assignment :math:`z_{di} \sim \mathrm{Multinomial}
(\theta_d)`
b. Draw the observed word :math:`w_{ij} \sim \mathrm{Multinomial}
(\beta_{z_{di}})`
For parameter estimation, the posterior distribution is:
.. math::
p(z, \theta, \beta |w, \alpha, \eta) =
\frac{p(z, \theta, \beta|\alpha, \eta)}{p(w|\alpha, \eta)}
Since the posterior is intractable, variational Bayesian method
uses a simpler distribution :math:`q(z,\theta,\beta | \lambda, \phi, \gamma)`
to approximate it, and those variational parameters :math:`\lambda`,
:math:`\phi`, :math:`\gamma` are optimized to maximize the Evidence
Lower Bound (ELBO):
.. math::
\log\: P(w | \alpha, \eta) \geq L(w,\phi,\gamma,\lambda) \overset{\triangle}{=}
E_{q}[\log\:p(w,z,\theta,\beta|\alpha,\eta)] - E_{q}[\log\:q(z, \theta, \beta)]
Maximizing ELBO is equivalent to minimizing the Kullback-Leibler(KL) divergence
between :math:`q(z,\theta,\beta)` and the true posterior
:math:`p(z, \theta, \beta |w, \alpha, \eta)`.
:class:`LatentDirichletAllocation` implements the online variational Bayes
algorithm and supports both online and batch update methods.
While the batch method updates variational variables after each full pass through
the data, the online method updates variational variables from mini-batch data
points.
.. note::
Although the online method is guaranteed to converge to a local optimum point, the quality of
the optimum point and the speed of convergence may depend on mini-batch size and
attributes related to learning rate setting.
When :class:`LatentDirichletAllocation` is applied on a "document-term" matrix, the matrix
will be decomposed into a "topic-term" matrix and a "document-topic" matrix. While the
"topic-term" matrix is stored as ``components_`` in the model, the "document-topic" matrix
can be computed with the ``transform`` method.
:class:`LatentDirichletAllocation` also implements the ``partial_fit`` method,
which can be used when data is fetched sequentially.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_applications_plot_topics_extraction_with_nmf_lda.py`
.. rubric:: References
* `"Latent Dirichlet Allocation"
<https://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf>`_
D. Blei, A. Ng, M. Jordan, 2003
* `"Online Learning for Latent Dirichlet Allocation"
<https://papers.nips.cc/paper/3902-online-learning-for-latent-dirichlet-allocation.pdf>`_
M. Hoffman, D. Blei, F. Bach, 2010
* `"Stochastic Variational Inference"
<https://www.cs.columbia.edu/~blei/papers/HoffmanBleiWangPaisley2013.pdf>`_
M. Hoffman, D. Blei, C. Wang, J. Paisley, 2013
* `"The varimax criterion for analytic rotation in factor analysis"
<https://link.springer.com/article/10.1007%2FBF02289233>`_
H. F. Kaiser, 1958
See also :ref:`nca_dim_reduction` for dimensionality reduction with
Neighborhood Components Analysis. | scikit-learn | decompositions Decomposing signals in components matrix factorization problems currentmodule sklearn decomposition PCA Principal component analysis PCA Exact PCA and probabilistic interpretation PCA is used to decompose a multivariate dataset in a set of successive orthogonal components that explain a maximum amount of the variance In scikit learn class PCA is implemented as a transformer object that learns math n components in its fit method and can be used on new data to project it on these components PCA centers but does not scale the input data for each feature before applying the SVD The optional parameter whiten True makes it possible to project the data onto the singular space while scaling each component to unit variance This is often useful if the models down stream make strong assumptions on the isotropy of the signal this is for example the case for Support Vector Machines with the RBF kernel and the K Means clustering algorithm Below is an example of the iris dataset which is comprised of 4 features projected on the 2 dimensions that explain most variance figure auto examples decomposition images sphx glr plot pca vs lda 001 png target auto examples decomposition plot pca vs lda html align center scale 75 The class PCA object also provides a probabilistic interpretation of the PCA that can give a likelihood of data based on the amount of variance it explains As such it implements a term score method that can be used in cross validation figure auto examples decomposition images sphx glr plot pca vs fa model selection 001 png target auto examples decomposition plot pca vs fa model selection html align center scale 75 rubric Examples ref sphx glr auto examples decomposition plot pca iris py ref sphx glr auto examples decomposition plot pca vs lda py ref sphx glr auto examples decomposition plot pca vs fa model selection py IncrementalPCA Incremental PCA The class PCA object is very useful but has certain 
limitations for large datasets The biggest limitation is that class PCA only supports batch processing which means all of the data to be processed must fit in main memory The class IncrementalPCA object uses a different form of processing and allows for partial computations which almost exactly match the results of class PCA while processing the data in a minibatch fashion class IncrementalPCA makes it possible to implement out of core Principal Component Analysis either by Using its partial fit method on chunks of data fetched sequentially from the local hard drive or a network database Calling its fit method on a memory mapped file using numpy memmap class IncrementalPCA only stores estimates of component and noise variances in order update explained variance ratio incrementally This is why memory usage depends on the number of samples per batch rather than the number of samples to be processed in the dataset As in class PCA class IncrementalPCA centers but does not scale the input data for each feature before applying the SVD figure auto examples decomposition images sphx glr plot incremental pca 001 png target auto examples decomposition plot incremental pca html align center scale 75 figure auto examples decomposition images sphx glr plot incremental pca 002 png target auto examples decomposition plot incremental pca html align center scale 75 rubric Examples ref sphx glr auto examples decomposition plot incremental pca py RandomizedPCA PCA using randomized SVD It is often interesting to project data to a lower dimensional space that preserves most of the variance by dropping the singular vector of components associated with lower singular values For instance if we work with 64x64 pixel gray level pictures for face recognition the dimensionality of the data is 4096 and it is slow to train an RBF support vector machine on such wide data Furthermore we know that the intrinsic dimensionality of the data is much lower than 4096 since all pictures of human faces 
look somewhat alike The samples lie on a manifold of much lower dimension say around 200 for instance The PCA algorithm can be used to linearly transform the data while both reducing the dimensionality and preserve most of the explained variance at the same time The class class PCA used with the optional parameter svd solver randomized is very useful in that case since we are going to drop most of the singular vectors it is much more efficient to limit the computation to an approximated estimate of the singular vectors we will keep to actually perform the transform For instance the following shows 16 sample portraits centered around 0 0 from the Olivetti dataset On the right hand side are the first 16 singular vectors reshaped as portraits Since we only require the top 16 singular vectors of a dataset with size math n samples 400 and math n features 64 times 64 4096 the computation time is less than 1s orig img image auto examples decomposition images sphx glr plot faces decomposition 001 png target auto examples decomposition plot faces decomposition html scale 60 pca img image auto examples decomposition images sphx glr plot faces decomposition 002 png target auto examples decomposition plot faces decomposition html scale 60 centered orig img pca img If we note math n max max n mathrm samples n mathrm features and math n min min n mathrm samples n mathrm features the time complexity of the randomized class PCA is math O n max 2 cdot n mathrm components instead of math O n max 2 cdot n min for the exact method implemented in class PCA The memory footprint of randomized class PCA is also proportional to math 2 cdot n max cdot n mathrm components instead of math n max cdot n min for the exact method Note the implementation of inverse transform in class PCA with svd solver randomized is not the exact inverse transform of transform even when whiten False default rubric Examples ref sphx glr auto examples applications plot face recognition py ref sphx glr auto examples 
decomposition plot faces decomposition py rubric References Algorithm 4 3 in arxiv Finding structure with randomness Stochastic algorithms for constructing approximate matrix decompositions 0909 4061 Halko et al 2009 arxiv An implementation of a randomized algorithm for principal component analysis 1412 3510 A Szlam et al 2014 SparsePCA Sparse principal components analysis SparsePCA and MiniBatchSparsePCA class SparsePCA is a variant of PCA with the goal of extracting the set of sparse components that best reconstruct the data Mini batch sparse PCA class MiniBatchSparsePCA is a variant of class SparsePCA that is faster but less accurate The increased speed is reached by iterating over small chunks of the set of features for a given number of iterations Principal component analysis class PCA has the disadvantage that the components extracted by this method have exclusively dense expressions i e they have non zero coefficients when expressed as linear combinations of the original variables This can make interpretation difficult In many cases the real underlying components can be more naturally imagined as sparse vectors for example in face recognition components might naturally map to parts of faces Sparse principal components yields a more parsimonious interpretable representation clearly emphasizing which of the original features contribute to the differences between samples The following example illustrates 16 components extracted using sparse PCA from the Olivetti faces dataset It can be seen how the regularization term induces many zeros Furthermore the natural structure of the data causes the non zero coefficients to be vertically adjacent The model does not enforce this mathematically each component is a vector math h in mathbf R 4096 and there is no notion of vertical adjacency except during the human friendly visualization as 64x64 pixel images The fact that the components shown below appear local is the effect of the inherent structure of the data which 
There exist sparsity-inducing norms that take into account adjacency and
different kinds of structure; see [Jen09]_ for a review of such methods.
For more details on how to use Sparse PCA, see the Examples section below.

.. |spca_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_005.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. centered:: |pca_img| |spca_img|

Note that there are many different formulations for the Sparse PCA problem.
The one implemented here is based on [Mrl09]_ . The optimization problem
solved is a PCA problem (dictionary learning) with an :math:`\ell_1` penalty
on the components:

.. math::
   (U^*, V^*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2}
                ||X-UV||_{\text{Fro}}^2+\alpha||V||_{1,1} \\
                \text{subject to } & ||U_k||_2 \leq 1 \text{ for all }
                0 \leq k < n_{components}

:math:`||.||_{\text{Fro}}` stands for the Frobenius norm and
:math:`||.||_{1,1}` stands for the entry-wise matrix norm, which is the sum of
the absolute values of all the entries in the matrix. The sparsity-inducing
:math:`||.||_{1,1}` matrix norm also prevents learning components from noise
when few training samples are available. The degree of penalization (and thus
sparsity) can be adjusted through the hyperparameter ``alpha``. Small values
lead to a gently regularized factorization, while larger values shrink many
coefficients to zero.

.. note::

   While in the spirit of an online algorithm, the class
   :class:`MiniBatchSparsePCA` does not implement ``partial_fit`` because
   the algorithm is online along the features direction, not the samples
   direction.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`

.. rubric:: References

.. [Mrl09] `"Online Dictionary Learning for Sparse Coding"
   <https://www.di.ens.fr/~fbach/mairal_icml09.pdf>`_
   J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009

.. [Jen09] `"Structured Sparse Principal Component Analysis"
   <https://www.di.ens.fr/~fbach/sspca_AISTATS2010.pdf>`_
   R. Jenatton, G. Obozinski, F. Bach, 2009

.. _kernel_PCA:

Kernel Principal Component Analysis (kPCA)
------------------------------------------

Exact Kernel PCA
^^^^^^^^^^^^^^^^

:class:`KernelPCA` is an extension of PCA which achieves non-linear
dimensionality reduction through the use of kernels (see :ref:`metrics`)
[Scholkopf1997]_.
It has many applications including denoising, compression and structured
prediction (kernel dependency estimation). :class:`KernelPCA` supports both
``transform`` and ``inverse_transform``.

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_kernel_pca_002.png
   :target: ../auto_examples/decomposition/plot_kernel_pca.html
   :align: center
   :scale: 75%

.. note::
   :meth:`KernelPCA.inverse_transform` relies on a kernel ridge to learn the
   function mapping samples from the PCA basis into the original feature
   space [Bakir2003]_. Thus, the reconstruction obtained with
   :meth:`KernelPCA.inverse_transform` is an approximation. See the example
   linked below for more details.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_kernel_pca.py`
* :ref:`sphx_glr_auto_examples_applications_plot_digits_denoising.py`

.. rubric:: References

.. [Scholkopf1997] Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert
   Müller. `"Kernel principal component analysis."
   <https://people.eecs.berkeley.edu/~wainwrig/stat241b/scholkopf_kernel.pdf>`_
   International conference on artificial neural networks.
   Springer, Berlin, Heidelberg, 1997.

.. [Bakir2003] Bakır, Gökhan H., Jason Weston, and Bernhard Schölkopf.
   `"Learning to find pre-images."
   <https://papers.nips.cc/paper/2003/file/ac1ad983e08ad3304a97e147f522747e-Paper.pdf>`_
   Advances in neural information processing systems 16 (2003): 449-456.

.. _kPCA_Solvers:

Choice of solver for Kernel PCA
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While in :class:`PCA` the number of components is bounded by the number of
features, in :class:`KernelPCA` the number of components is bounded by the
number of samples. Many real-world datasets have a large number of samples.
In these cases finding *all* the components with a full kPCA is a waste of
computation time, as data is mostly described by the first few components
(e.g. ``n_components<=100``). In other words, the centered Gram matrix that
is eigendecomposed in the Kernel PCA fitting process has an effective rank
that is much smaller than its size. This is a situation where approximate
eigensolvers can provide speedup with very low precision loss.
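A short sketch of the round trip described above, on the classic
two-circles toy data (kernel and hyperparameters here are illustrative
choices, not recommendations):

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, _ = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# fit_inverse_transform=True trains the kernel-ridge mapping back to input space.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10,
                 fit_inverse_transform=True, alpha=0.1)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)  # approximate pre-images, not exact
print(X_kpca.shape, X_back.shape)        # (200, 2) (200, 2)
```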
.. dropdown:: Eigensolvers

   The optional parameter ``eigen_solver='randomized'`` can be used to
   *significantly* reduce the computation time when the number of requested
   ``n_components`` is small compared with the number of samples. It relies on
   randomized decomposition methods to find an approximate solution in a
   shorter time.

   The time complexity of the randomized :class:`KernelPCA` is
   :math:`O(n_{\mathrm{samples}}^2 \cdot n_{\mathrm{components}})`
   instead of :math:`O(n_{\mathrm{samples}}^3)` for the exact method
   implemented with ``eigen_solver='dense'``.

   The memory footprint of randomized :class:`KernelPCA` is also proportional
   to :math:`2 \cdot n_{\mathrm{samples}} \cdot n_{\mathrm{components}}`
   instead of :math:`n_{\mathrm{samples}}^2` for the exact method.

   Note: this technique is the same as in :ref:`RandomizedPCA`.

   In addition to the above two solvers, ``eigen_solver='arpack'`` can be used
   as an alternate way to get an approximate decomposition. In practice, this
   method only provides reasonable execution times when the number of
   components to find is extremely small. It is enabled by default when the
   desired number of components is less than 10 (strict) and the number of
   samples is more than 200 (strict). See :class:`KernelPCA` for details.

   .. rubric:: References

   * *dense* solver:
     `scipy.linalg.eigh documentation
     <https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html>`_

   * *randomized* solver:

     * Algorithm 4.3 in
       :arxiv:`"Finding structure with randomness: Stochastic algorithms for
       constructing approximate matrix decompositions" <0909.4061>`
       Halko, et al. (2009)

     * :arxiv:`"An implementation of a randomized algorithm for principal
       component analysis" <1412.3510>` A. Szlam et al. (2014)

   * *arpack* solver:
     `scipy.sparse.linalg.eigsh documentation
     <https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigsh.html>`_
     R. B. Lehoucq, D. C. Sorensen, and C. Yang, (1998)

.. _LSA:

Truncated singular value decomposition and latent semantic analysis
-------------------------------------------------------------------

:class:`TruncatedSVD` implements a variant of singular value decomposition
(SVD) that only computes the :math:`k` largest singular values, where
:math:`k` is a user-specified parameter.
:class:`TruncatedSVD` is very similar to :class:`PCA`, but differs in that the
matrix :math:`X` does not need to be centered. When the columnwise
(per-feature) means of :math:`X` are subtracted from the feature values,
truncated SVD on the resulting matrix is equivalent to PCA.

.. dropdown:: About truncated SVD and latent semantic analysis (LSA)

   When truncated SVD is applied to term-document matrices (as returned by
   :class:`~sklearn.feature_extraction.text.CountVectorizer` or
   :class:`~sklearn.feature_extraction.text.TfidfVectorizer`), this
   transformation is known as
   `latent semantic analysis <https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf>`_
   (LSA), because it transforms such matrices to a "semantic" space of low
   dimensionality. In particular, LSA is known to combat the effects of
   synonymy and polysemy (both of which roughly mean there are multiple
   meanings per word), which cause term-document matrices to be overly sparse
   and exhibit poor similarity under measures such as cosine similarity.

   .. note::
      LSA is also known as latent semantic indexing, LSI, though strictly that
      refers to its use in persistent indexes for information retrieval
      purposes.

   Mathematically, truncated SVD applied to training samples :math:`X`
   produces a low-rank approximation :math:`X`:

   .. math::
      X \approx X_k = U_k \Sigma_k V_k^\top

   After this operation, :math:`U_k \Sigma_k` is the transformed training set
   with :math:`k` features (called ``n_components`` in the API).

   To also transform a test set :math:`X`, we multiply it with :math:`V_k`:

   .. math::
      X' = X V_k

   .. note::
      Most treatments of LSA in the natural language processing (NLP) and
      information retrieval (IR) literature swap the axes of the matrix
      :math:`X` so that it has shape ``(n_features, n_samples)``. We present
      LSA in a different way that matches the scikit-learn API better, but the
      singular values found are the same.

While the :class:`TruncatedSVD` transformer works with any feature matrix,
using it on tf-idf matrices is recommended over raw frequency counts in an
LSA/document processing setting.
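A minimal LSA-style sketch, with a random sparse matrix standing in for a real
term-document matrix (the output ``X_reduced`` is :math:`U_k \Sigma_k`):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

rng = np.random.RandomState(0)
# Synthetic stand-in for a 100-document x 1000-term sparse count matrix.
X = sparse_random(100, 1000, density=0.01, random_state=rng)

svd = TruncatedSVD(n_components=5, random_state=0)
X_reduced = svd.fit_transform(X)   # U_k * Sigma_k: 5 "semantic" features
print(X_reduced.shape)             # (100, 5)
print(svd.components_.shape)       # (5, 1000): rows of V_k^T
```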
In particular, sublinear scaling and inverse document frequency should be
turned on (``sublinear_tf=True, use_idf=True``) to bring the feature values
closer to a Gaussian distribution, compensating for LSA's erroneous
assumptions about textual data.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`

.. rubric:: References

* Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze (2008),
  *Introduction to Information Retrieval*, Cambridge University Press,
  chapter 18: `Matrix decompositions & latent semantic indexing
  <https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf>`_

.. _DictionaryLearning:

Dictionary Learning
-------------------

.. _SparseCoder:

Sparse coding with a precomputed dictionary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The :class:`SparseCoder` object is an estimator that can be used to transform
signals into sparse linear combinations of atoms from a fixed, precomputed
dictionary such as a discrete wavelet basis. This object therefore does not
implement a ``fit`` method. The transformation amounts to a sparse coding
problem: finding a representation of the data as a linear combination of as
few dictionary atoms as possible. All variations of dictionary learning
implement the following transform methods, controllable via the
``transform_method`` initialization parameter:

* Orthogonal matching pursuit (:ref:`omp`)
* Least-angle regression (:ref:`least_angle_regression`)
* Lasso computed by least-angle regression
* Lasso using coordinate descent (:ref:`lasso`)
* Thresholding

Thresholding is very fast but it does not yield accurate reconstructions.
They have been shown useful in literature for classification tasks. For image
reconstruction tasks, orthogonal matching pursuit yields the most accurate,
unbiased reconstruction.

The dictionary learning objects offer, via the ``split_code`` parameter, the
possibility to separate the positive and negative values in the results of
sparse coding. This is useful when dictionary learning is used for extracting
features that will be used for supervised learning, because it allows the
learning algorithm to assign different weights to negative loadings of a
particular atom and to the corresponding positive loading.
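A toy sketch of sparse coding against a fixed dictionary; the split behaviour
is requested here via the ``split_sign`` constructor parameter of
:class:`SparseCoder` (the dictionary and signal are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import SparseCoder

# A tiny fixed dictionary of 3 atoms in a 4-dimensional signal space.
D = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

X = np.array([[2.0, 1.0, 1.0, -1.0]])

coder = SparseCoder(dictionary=D, transform_algorithm="omp", split_sign=True)
code = coder.transform(X)
print(code.shape)          # (1, 6): 2 * n_components because of the sign split
print(bool((code >= 0).all()))  # the split code is non-negative
```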
The split code for a single sample has length ``2 * n_components`` and is
constructed using the following rule: First, the regular code of length
``n_components`` is computed. Then, the first ``n_components`` entries of the
split code are filled with the positive part of the regular code vector. The
second half of the split code is filled with the negative part of the code
vector, only with a positive sign. Therefore, the split code is non-negative.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_sparse_coding.py`

Generic dictionary learning
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Dictionary learning (:class:`DictionaryLearning`) is a matrix factorization
problem that amounts to finding a (usually overcomplete) dictionary that will
perform well at sparsely encoding the fitted data.

Representing data as sparse combinations of atoms from an overcomplete
dictionary is suggested to be the way the mammalian primary visual cortex
works. Consequently, dictionary learning applied on image patches has been
shown to give good results in image processing tasks such as image completion,
inpainting and denoising, as well as for supervised recognition tasks.

Dictionary learning is an optimization problem solved by alternatively
updating the sparse code, as a solution to multiple Lasso problems,
considering the dictionary fixed, and then updating the dictionary to best fit
the sparse code.

.. math::
   (U^*, V^*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2}
                ||X-UV||_{\text{Fro}}^2+\alpha||U||_{1,1} \\
                \text{subject to } & ||V_k||_2 \leq 1 \text{ for all }
                0 \leq k < n_{\mathrm{atoms}}

.. |pca_img2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |dict_img2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_007.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. centered:: |pca_img2| |dict_img2|

:math:`||.||_{\text{Fro}}` stands for the Frobenius norm and
:math:`||.||_{1,1}` stands for the entry-wise matrix norm, which is the sum of
the absolute values of all the entries in the matrix.
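The alternating optimization above is driven entirely by ``fit``; a compact
sketch on random data (sizes chosen small purely so the example runs fast):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.normal(size=(30, 8))

dl = DictionaryLearning(n_components=6, alpha=1.0, max_iter=50,
                        transform_algorithm="omp", random_state=0)
code = dl.fit_transform(X)    # sparse codes U
print(code.shape)             # (30, 6)
print(dl.components_.shape)   # (6, 8): the learned dictionary V
```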
After using such a procedure to fit the dictionary, the transform is simply a
sparse coding step that shares the same implementation with all dictionary
learning objects (see :ref:`SparseCoder`).

It is also possible to constrain the dictionary and/or code to be positive to
match constraints that may be present in the data. Below are the faces with
different positivity constraints applied. Red indicates negative values, blue
indicates positive values, and white represents zeros.

.. |dict_img_pos1| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_010.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :scale: 60%

.. |dict_img_pos2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_011.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :scale: 60%

.. |dict_img_pos3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_012.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :scale: 60%

.. |dict_img_pos4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_013.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :scale: 60%

.. centered:: |dict_img_pos1| |dict_img_pos2|
.. centered:: |dict_img_pos3| |dict_img_pos4|

The following image shows how a dictionary learned from 4x4 pixel image
patches, extracted from part of the image of a raccoon face, looks like.

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_image_denoising_001.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :align: center
   :scale: 50%

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_image_denoising.py`

.. rubric:: References

* `"Online dictionary learning for sparse coding"
  <https://www.di.ens.fr/~fbach/mairal_icml09.pdf>`_
  J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009

.. _MiniBatchDictionaryLearning:

Mini-batch dictionary learning
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:class:`MiniBatchDictionaryLearning` implements a faster, but less accurate,
version of the dictionary learning algorithm that is better suited for large
datasets.
By default, :class:`MiniBatchDictionaryLearning` divides the data into
mini-batches and optimizes in an online manner by cycling over the
mini-batches for the specified number of iterations. However, at the moment it
does not implement a stopping condition.

The estimator also implements ``partial_fit``, which updates the dictionary by
iterating only once over a mini-batch. This can be used for online learning
when the data is not readily available from the start, or when the data does
not fit into memory.

.. currentmodule:: sklearn.cluster

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_dict_face_patches_001.png
   :target: ../auto_examples/cluster/plot_dict_face_patches.html
   :scale: 50%
   :align: right

.. topic:: **Clustering for dictionary learning**

   Note that when using dictionary learning to extract a representation
   (e.g. for sparse coding), clustering can be a good proxy to learn the
   dictionary. For instance the :class:`MiniBatchKMeans` estimator is
   computationally efficient and implements on-line learning with a
   ``partial_fit`` method.

   Example: :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`

.. currentmodule:: sklearn.decomposition

.. _FA:

Factor Analysis
---------------

In unsupervised learning we only have a dataset
:math:`X = \{x_1, x_2, \dots, x_n\}`. How can this dataset be described
mathematically? A very simple continuous latent variable model for :math:`X`
is

.. math:: x_i = W h_i + \mu + \epsilon

The vector :math:`h_i` is called "latent" because it is unobserved.
:math:`\epsilon` is considered a noise term distributed according to a
Gaussian with mean 0 and covariance :math:`\Psi` (i.e.
:math:`\epsilon \sim \mathcal{N}(0, \Psi)`), and :math:`\mu` is some arbitrary
offset vector. Such a model is called "generative" as it describes how
:math:`x_i` is generated from :math:`h_i`. If we use all the :math:`x_i`'s as
columns to form a matrix :math:`\mathbf{X}` and all the :math:`h_i`'s as
columns of a matrix :math:`\mathbf{H}`, then we can write (with suitably
defined :math:`\mathbf{M}` and :math:`\mathbf{E}`):

.. math::
   \mathbf{X} = W \mathbf{H} + \mathbf{M} + \mathbf{E}

In other words, we *decomposed* matrix :math:`\mathbf{X}`.

If :math:`h_i` is given, the above equation automatically implies the
following probabilistic interpretation:

.. math:: p(x_i|h_i) = \mathcal{N}(Wh_i + \mu, \Psi)

For a complete probabilistic model we also need a prior distribution for the
latent variable :math:`h`. The most straightforward assumption (based on the
nice properties of the Gaussian distribution) is
:math:`h \sim \mathcal{N}(0, \mathbf{I})`. This yields a Gaussian as the
marginal distribution of :math:`x`:

.. math:: p(x) = \mathcal{N}(\mu, WW^T + \Psi)

Now, without any further assumptions the idea of having a latent variable
:math:`h` would be superfluous: :math:`x` can be completely modelled with a
mean and a covariance. We need to impose some more specific structure on one
of these two parameters. A simple additional assumption regards the structure
of the error covariance :math:`\Psi`:

* :math:`\Psi = \sigma^2 \mathbf{I}`: This assumption leads to the
  probabilistic model of :class:`PCA`.

* :math:`\Psi = \mathrm{diag}(\psi_1, \psi_2, \dots, \psi_n)`: This model is
  called :class:`FactorAnalysis`, a classical statistical model. The matrix W
  is sometimes called the "factor loading matrix".

Both models essentially estimate a Gaussian with a low-rank covariance matrix.
Because both models are probabilistic, they can be integrated into more
complex models, e.g. Mixture of Factor Analysers. One gets very different
models (e.g. :class:`FastICA`) if non-Gaussian priors on the latent variables
are assumed.

Factor analysis *can* produce similar components (the columns of its loading
matrix) to :class:`PCA`. However, one can not make any general statements
about these components (e.g. whether they are orthogonal):

.. |pca_img3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |fa_img3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_008.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. centered:: |pca_img3| |fa_img3|

The main advantage of Factor Analysis over :class:`PCA` is that it can model
the variance in every direction of the input space independently
(heteroscedastic noise).
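A sketch of this heteroscedastic-noise modeling on synthetic data generated
from the factor model above, with a different noise level per feature:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.RandomState(0)
# x_i = W h_i + eps, with per-feature (diagonal, heteroscedastic) noise Psi.
H = rng.normal(size=(500, 2))
W = rng.normal(size=(2, 5))
noise_std = np.array([0.1, 0.5, 1.0, 0.2, 0.8])
X = H @ W + rng.normal(size=(500, 5)) * noise_std

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)
# One estimated noise variance per input dimension (the diagonal of Psi).
print(fa.noise_variance_.shape)  # (5,)
print(fa.components_.shape)      # (2, 5)
```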
.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_009.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :align: center
   :scale: 75%

This allows better model selection than probabilistic PCA in the presence of
heteroscedastic noise:

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_pca_vs_fa_model_selection_002.png
   :target: ../auto_examples/decomposition/plot_pca_vs_fa_model_selection.html
   :align: center
   :scale: 75%

Factor Analysis is often followed by a rotation of the factors (with the
parameter ``rotation``), usually to improve interpretability. For example,
Varimax rotation maximizes the sum of the variances of the squared loadings,
i.e., it tends to produce sparser factors, which are influenced by only a few
features each (the "simple structure"). See e.g. the first example below.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_varimax_fa.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_fa_model_selection.py`

.. _ICA:

Independent component analysis (ICA)
------------------------------------

Independent component analysis separates a multivariate signal into additive
subcomponents that are maximally independent. It is implemented in
scikit-learn using the :class:`Fast ICA <FastICA>` algorithm. Typically, ICA
is not used for reducing dimensionality but for separating superimposed
signals. Since the ICA model does not include a noise term, for the model to
be correct, whitening must be applied. This can be done internally using the
``whiten`` argument or manually using one of the PCA variants.

It is classically used to separate mixed signals (a problem known as *blind
source separation*), as in the example below:

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_ica_blind_source_separation_001.png
   :target: ../auto_examples/decomposition/plot_ica_blind_source_separation.html
   :align: center
   :scale: 60%

ICA can also be used as yet another non-linear decomposition that finds
components with some sparsity.
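A compact blind-source-separation sketch in the spirit of the example above
(two synthetic sources mixed by a made-up mixing matrix):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
# Two independent sources: a sine wave and a square wave.
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2]

A = np.array([[1.0, 0.5], [0.5, 1.0]])  # illustrative mixing matrix
X = S @ A.T                              # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # estimated (unmixed) sources
print(S_est.shape)            # (2000, 2)
```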
.. |pca_img4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |ica_img4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_004.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. centered:: |pca_img4| |ica_img4|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_blind_source_separation.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_vs_pca.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`

.. _NMF:

Non-negative matrix factorization (NMF or NNMF)
-----------------------------------------------

NMF with the Frobenius norm
^^^^^^^^^^^^^^^^^^^^^^^^^^^

:class:`NMF` [1]_ is an alternative approach to decomposition that assumes
that the data and the components are non-negative. :class:`NMF` can be plugged
in instead of :class:`PCA` or its variants, in the cases where the data matrix
does not contain negative values. It finds a decomposition of samples
:math:`X` into two matrices :math:`W` and :math:`H` of non-negative elements,
by optimizing the distance :math:`d` between :math:`X` and the matrix product
:math:`WH`. The most widely used distance function is the squared Frobenius
norm, which is an obvious extension of the Euclidean norm to matrices:

.. math::
   d_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||_{\mathrm{Fro}}^2 = \frac{1}{2} \sum_{i,j} (X_{ij} - Y_{ij})^2

Unlike :class:`PCA`, the representation of a vector is obtained in an additive
fashion, by superimposing the components, without subtracting. Such additive
models are efficient for representing images and text.

It has been observed in [Hoyer, 2004] [2]_ that, when carefully constrained,
:class:`NMF` can produce a parts-based representation of the dataset,
resulting in interpretable models. The following example displays 16 sparse
components found by :class:`NMF` from the images in the Olivetti faces
dataset, in comparison with the PCA eigenfaces.

.. |pca_img5| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |nmf_img5| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_003.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. centered:: |pca_img5| |nmf_img5|

The ``init`` attribute determines the initialization method applied, which has
a great impact on the performance of the method. :class:`NMF` implements the
method Nonnegative Double Singular Value Decomposition. NNDSVD [4]_ is based
on two SVD processes, one approximating the data matrix, the other
approximating positive sections of the resulting partial SVD factors utilizing
an algebraic property of unit rank matrices. The basic NNDSVD algorithm is
better fit for sparse factorization. Its variants NNDSVDa (in which all zeros
are set equal to the mean of all elements of the data), and NNDSVDar (in which
the zeros are set to random perturbations less than the mean of the data
divided by 100) are recommended in the dense case.

Note that the Multiplicative Update ('mu') solver cannot update zeros present
in the initialization, so it leads to poorer results when used jointly with
the basic NNDSVD algorithm, which introduces a lot of zeros; in this case,
NNDSVDa or NNDSVDar should be preferred.

:class:`NMF` can also be initialized with correctly scaled random non-negative
matrices by setting ``init="random"``. An integer seed or a ``RandomState``
can also be passed to ``random_state`` to control reproducibility.

In :class:`NMF`, L1 and L2 priors can be added to the loss function in order
to regularize the model. The L2 prior uses the Frobenius norm, while the L1
prior uses an elementwise L1 norm. As in
:class:`~sklearn.linear_model.ElasticNet`, we control the combination of L1
and L2 with the ``l1_ratio`` (:math:`\rho`) parameter, and the intensity of
the regularization with the ``alpha_W`` and ``alpha_H`` (:math:`\alpha_W` and
:math:`\alpha_H`) parameters. The priors are scaled by the number of samples
(:math:`n\_samples`) for `H` and the number of features (:math:`n\_features`)
for `W`, to keep their impact balanced with respect to one another and to the
data fit term as independent as possible of the size of the training set.
Then the priors terms are:

.. math::
   (\alpha_W \rho ||W||_1 + \frac{\alpha_W(1-\rho)}{2} ||W||_{\mathrm{Fro}}^2) * n\_features
   + (\alpha_H \rho ||H||_1 + \frac{\alpha_H(1-\rho)}{2} ||H||_{\mathrm{Fro}}^2) * n\_samples

and the regularized objective function is:

.. math::
   d_{\mathrm{Fro}}(X, WH)
   + (\alpha_W \rho ||W||_1 + \frac{\alpha_W(1-\rho)}{2} ||W||_{\mathrm{Fro}}^2) * n\_features
   + (\alpha_H \rho ||H||_1 + \frac{\alpha_H(1-\rho)}{2} ||H||_{\mathrm{Fro}}^2) * n\_samples

NMF with a beta-divergence
^^^^^^^^^^^^^^^^^^^^^^^^^^

As described previously, the most widely used distance function is the squared
Frobenius norm, which is an obvious extension of the Euclidean norm to
matrices:

.. math::
   d_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||_{\mathrm{Fro}}^2 = \frac{1}{2} \sum_{i,j} (X_{ij} - Y_{ij})^2

Other distance functions can be used in NMF as, for example, the (generalized)
Kullback-Leibler (KL) divergence, also referred to as I-divergence:

.. math::
   d_{KL}(X, Y) = \sum_{i,j} (X_{ij} \log(\frac{X_{ij}}{Y_{ij}}) - X_{ij} + Y_{ij})

Or, the Itakura-Saito (IS) divergence:

.. math::
   d_{IS}(X, Y) = \sum_{i,j} (\frac{X_{ij}}{Y_{ij}} - \log(\frac{X_{ij}}{Y_{ij}}) - 1)

These three distances are special cases of the beta-divergence family, with
:math:`\beta = 2, 1, 0` respectively [6]_. The beta-divergence is defined by:

.. math::
   d_{\beta}(X, Y) = \sum_{i,j} \frac{1}{\beta(\beta - 1)} (X_{ij}^\beta + (\beta-1)Y_{ij}^\beta - \beta X_{ij} Y_{ij}^{\beta - 1})

.. image:: ../images/beta_divergence.png
   :align: center
   :scale: 75%

Note that this definition is not valid if :math:`\beta \in (0; 1)`, yet it can
be continuously extended to the definitions of :math:`d_{KL}` and
:math:`d_{IS}` respectively.

.. dropdown:: NMF implemented solvers

   :class:`NMF` implements two solvers, using Coordinate Descent ('cd') [5]_,
   and Multiplicative Update ('mu') [6]_. The 'mu' solver can optimize every
   beta-divergence, including of course the Frobenius norm
   (:math:`\beta=2`), the (generalized) Kullback-Leibler divergence
   (:math:`\beta=1`) and the Itakura-Saito divergence (:math:`\beta=0`). Note
   that for :math:`\beta \in (1; 2)`, the 'mu' solver is significantly faster
   than for other values of :math:`\beta`. Note also that with a negative (or
   0, i.e. 'itakura-saito') :math:`\beta`, the input matrix cannot contain
   zero values.

   The 'cd' solver can only optimize the Frobenius norm. Due to the underlying
   non-convexity of NMF, the different solvers may converge to different
   minima, even when optimizing the same distance function.
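A short sketch of fitting :class:`NMF` with the 'mu' solver and a
non-Frobenius beta-divergence (random positive data; the hyperparameters are
illustrative):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = rng.uniform(1e-3, 1.0, size=(20, 10))  # strictly positive data

# The 'mu' solver accepts any beta-divergence; here the generalized KL (beta=1).
nmf = NMF(n_components=4, solver="mu", beta_loss="kullback-leibler",
          init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_
print(W.shape, H.shape)  # (20, 4) (4, 10)
print(bool((W >= 0).all() and (H >= 0).all()))  # both factors are non-negative
```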
NMF is best used with the ``fit_transform`` method, which returns the matrix
W. The matrix H is stored into the fitted model in the ``components_``
attribute; the method ``transform`` will decompose a new matrix X_new based on
these stored components::

    >>> import numpy as np
    >>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
    >>> from sklearn.decomposition import NMF
    >>> model = NMF(n_components=2, init='random', random_state=0)
    >>> W = model.fit_transform(X)
    >>> H = model.components_
    >>> X_new = np.array([[1, 0], [1, 6.1], [1, 0], [1, 4], [3.2, 1], [0, 4]])
    >>> W_new = model.transform(X_new)

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`
* :ref:`sphx_glr_auto_examples_applications_plot_topics_extraction_with_nmf_lda.py`

.. _MiniBatchNMF:

Mini-batch Non Negative Matrix Factorization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:class:`MiniBatchNMF` [7]_ implements a faster, but less accurate version of
the non-negative matrix factorization (i.e.
:class:`~sklearn.decomposition.NMF`), better suited for large datasets.

By default, :class:`MiniBatchNMF` divides the data into mini-batches and
optimizes the NMF model in an online manner by cycling over the mini-batches
for the specified number of iterations. The ``batch_size`` parameter controls
the size of the batches.

In order to speed up the mini-batch algorithm it is also possible to scale
past batches, giving them less importance than newer batches. This is done by
introducing a so-called forgetting factor controlled by the ``forget_factor``
parameter.

The estimator also implements ``partial_fit``, which updates ``H`` by
iterating only once over a mini-batch. This can be used for online learning
when the data is not readily available from the start, or when the data does
not fit into memory.

.. rubric:: References

.. [1] `"Learning the parts of objects by non-negative matrix factorization"
   <http://www.cs.columbia.edu/~blei/fogm/2020F/readings/LeeSeung1999.pdf>`_
   D. Lee, S. Seung, 1999

.. [2] `"Non-negative Matrix Factorization with Sparseness Constraints"
   <https://www.jmlr.org/papers/volume5/hoyer04a/hoyer04a.pdf>`_
   P. Hoyer, 2004

.. [4] `"SVD based initialization: A head start for nonnegative matrix
   factorization" <https://www.boutsidis.org/Boutsidis_PRE_08.pdf>`_
   C. Boutsidis, E. Gallopoulos, 2008

.. [5] `"Fast local algorithms for large scale nonnegative matrix and tensor
   factorizations"
   <https://www.researchgate.net/profile/Anh-Huy-Phan/publication/220241471_Fast_Local_Algorithms_for_Large_Scale_Nonnegative_Matrix_and_Tensor_Factorizations>`_
   A. Cichocki, A. Phan, 2009

.. [6] :arxiv:`"Algorithms for nonnegative matrix factorization with the
   beta-divergence" <1010.1763>`
   C. Fevotte, J. Idier, 2011

.. [7] :arxiv:`"Online algorithms for nonnegative matrix factorization with
   the Itakura-Saito divergence" <1106.4198>`
   A. Lefevre, F. Bach, C. Fevotte, 2011

.. _LatentDirichletAllocation:

Latent Dirichlet Allocation (LDA)
---------------------------------

Latent Dirichlet Allocation is a generative probabilistic model for
collections of discrete datasets such as text corpora. It is also a topic
model that is used for discovering abstract topics from a collection of
documents.

The graphical model of LDA is a three-level generative model:

.. image:: ../images/lda_model_graph.png
   :align: center

Note on notations presented in the graphical model above, which can be found
in Hoffman et al. (2013):

* The corpus is a collection of :math:`D` documents.
* A document is a sequence of :math:`N` words.
* There are :math:`K` topics in the corpus.
* The boxes represent repeated sampling.

In the graphical model, each node is a random variable and has a role in the
generative process. A shaded node indicates an observed variable and an
unshaded node indicates a hidden (latent) variable. In this case, words in the
corpus are the only data that we observe. The latent variables determine the
random mixture of topics in the corpus and the distribution of words in the
documents. The goal of LDA is to use the observed words to infer the hidden
topic structure.

.. dropdown:: Details on modeling text corpora

   When modeling text corpora, the model assumes the following generative
   process for a corpus with :math:`D` documents and :math:`K` topics, with
   :math:`K` corresponding to ``n_components`` in the API:
   1. For each topic :math:`k \in K`, draw
      :math:`\beta_k \sim \mathrm{Dirichlet}(\eta)`. This provides a
      distribution over the words, i.e. the probability of a word appearing in
      topic :math:`k`. :math:`\eta` corresponds to ``topic_word_prior``.

   2. For each document :math:`d \in D`, draw the topic proportions
      :math:`\theta_d \sim \mathrm{Dirichlet}(\alpha)`. :math:`\alpha`
      corresponds to ``doc_topic_prior``.

   3. For each word :math:`i` in document :math:`d`:

      a. Draw the topic assignment
         :math:`z_{di} \sim \mathrm{Multinomial}(\theta_d)`
      b. Draw the observed word
         :math:`w_{ij} \sim \mathrm{Multinomial}(\beta_{z_{di}})`

   For parameter estimation, the posterior distribution is:

   .. math::
      p(z, \theta, \beta | w, \alpha, \eta) =
        \frac{p(z, \theta, \beta | \alpha, \eta)}{p(w | \alpha, \eta)}

   Since the posterior is intractable, the variational Bayesian method uses a
   simpler distribution :math:`q(z, \theta, \beta | \lambda, \phi, \gamma)` to
   approximate it, and those variational parameters :math:`\lambda`,
   :math:`\phi`, :math:`\gamma` are optimized to maximize the Evidence Lower
   Bound (ELBO):

   .. math::
      \log\: P(w | \alpha, \eta) \geq L(w, \phi, \gamma, \lambda)
        \overset{\triangle}{=}
        E_{q}[\log\: p(w, z, \theta, \beta | \alpha, \eta)]
        - E_{q}[\log\: q(z, \theta, \beta)]

   Maximizing ELBO is equivalent to minimizing the Kullback-Leibler (KL)
   divergence between :math:`q(z, \theta, \beta)` and the true posterior
   :math:`p(z, \theta, \beta | w, \alpha, \eta)`.

:class:`LatentDirichletAllocation` implements the online variational Bayes
algorithm and supports both online and batch update methods. While the batch
method updates variational variables after each full pass through the data,
the online method updates variational variables from mini-batch data points.

.. note::

   Although the online method is guaranteed to converge to a local optimum
   point, the quality of the optimum point and the speed of convergence may
   depend on mini-batch size and attributes related to learning rate setting.

When :class:`LatentDirichletAllocation` is applied on a "document-term"
matrix, the matrix will be decomposed into a "topic-term" matrix and a
"document-topic" matrix. While the "topic-term" matrix is stored as
``components_`` in the model, the "document-topic" matrix can be calculated
from the ``transform`` method.
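A minimal sketch of this decomposition, with random Poisson counts standing in
for a real document-term matrix:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.RandomState(0)
# Toy document-term matrix of word counts: 20 documents, 50 terms.
X = rng.poisson(1.0, size=(20, 50))

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(X)  # document-topic matrix
print(lda.components_.shape)      # (3, 50): topic-term matrix
print(doc_topic.shape)            # (20, 3)
print(bool(np.allclose(doc_topic.sum(axis=1), 1.0)))  # rows are topic proportions
```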
:class:`LatentDirichletAllocation` also implements a ``partial_fit`` method.
This is used when data can be fetched sequentially.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_applications_plot_topics_extraction_with_nmf_lda.py`

.. rubric:: References

* `"Latent Dirichlet Allocation"
  <https://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf>`_
  D. Blei, A. Ng, M. Jordan, 2003

* `"Online Learning for Latent Dirichlet Allocation"
  <https://papers.nips.cc/paper/3902-online-learning-for-latent-dirichlet-allocation.pdf>`_
  M. Hoffman, D. Blei, F. Bach, 2010

* `"Stochastic Variational Inference"
  <https://www.cs.columbia.edu/~blei/papers/HoffmanBleiWangPaisley2013.pdf>`_
  M. Hoffman, D. Blei, C. Wang, J. Paisley, 2013

* `"The varimax criterion for analytic rotation in factor analysis"
  <https://link.springer.com/article/10.1007%2FBF02289233>`_
  H. F. Kaiser, 1958

See also :ref:`nca_dim_reduction` for dimensionality reduction with
Neighborhood Components Analysis.
.. _clustering:
==========
Clustering
==========
`Clustering <https://en.wikipedia.org/wiki/Cluster_analysis>`__ of
unlabeled data can be performed with the module :mod:`sklearn.cluster`.
Each clustering algorithm comes in two variants: a class that implements
the ``fit`` method to learn the clusters on train data, and a function
that, given train data, returns an array of integer labels corresponding
to the different clusters. For the class, the labels over the training
data can be found in the ``labels_`` attribute.
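As a minimal sketch of the two variants (data values are illustrative),
:class:`KMeans` exposes ``fit`` and ``labels_``, while the functional
counterpart :func:`k_means` returns the centroids, labels and inertia
directly:

```python
import numpy as np
from sklearn.cluster import KMeans, k_means

X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [8.2, 7.9]])

# Class variant: fit, then read the labels learned on the training data.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels_from_class = km.labels_

# Function variant: train data in, an array of integer labels out.
centroids, labels_from_func, inertia = k_means(
    X, n_clusters=2, n_init=10, random_state=0)
```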
.. currentmodule:: sklearn.cluster
.. topic:: Input data
One important thing to note is that the algorithms implemented in
this module can take different kinds of matrix as input. All the
methods accept standard data matrices of shape ``(n_samples, n_features)``.
These can be obtained from the classes in the :mod:`sklearn.feature_extraction`
module. For :class:`AffinityPropagation`, :class:`SpectralClustering`
and :class:`DBSCAN` one can also input similarity matrices of shape
``(n_samples, n_samples)``. These can be obtained from the functions
in the :mod:`sklearn.metrics.pairwise` module.
Overview of clustering methods
===============================
.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_cluster_comparison_001.png
:target: ../auto_examples/cluster/plot_cluster_comparison.html
:align: center
:scale: 50
A comparison of the clustering algorithms in scikit-learn
.. list-table::
:header-rows: 1
:widths: 14 15 19 25 20
* - Method name
- Parameters
- Scalability
- Usecase
- Geometry (metric used)
* - :ref:`K-Means <k_means>`
- number of clusters
- Very large ``n_samples``, medium ``n_clusters`` with
:ref:`MiniBatch code <mini_batch_kmeans>`
- General-purpose, even cluster size, flat geometry,
not too many clusters, inductive
- Distances between points
* - :ref:`Affinity propagation <affinity_propagation>`
- damping, sample preference
- Not scalable with n_samples
- Many clusters, uneven cluster size, non-flat geometry, inductive
- Graph distance (e.g. nearest-neighbor graph)
* - :ref:`Mean-shift <mean_shift>`
- bandwidth
- Not scalable with ``n_samples``
- Many clusters, uneven cluster size, non-flat geometry, inductive
- Distances between points
* - :ref:`Spectral clustering <spectral_clustering>`
- number of clusters
- Medium ``n_samples``, small ``n_clusters``
- Few clusters, even cluster size, non-flat geometry, transductive
- Graph distance (e.g. nearest-neighbor graph)
* - :ref:`Ward hierarchical clustering <hierarchical_clustering>`
- number of clusters or distance threshold
- Large ``n_samples`` and ``n_clusters``
- Many clusters, possibly connectivity constraints, transductive
- Distances between points
* - :ref:`Agglomerative clustering <hierarchical_clustering>`
- number of clusters or distance threshold, linkage type, distance
- Large ``n_samples`` and ``n_clusters``
- Many clusters, possibly connectivity constraints, non Euclidean
distances, transductive
- Any pairwise distance
* - :ref:`DBSCAN <dbscan>`
- neighborhood size
- Very large ``n_samples``, medium ``n_clusters``
- Non-flat geometry, uneven cluster sizes, outlier removal,
transductive
- Distances between nearest points
* - :ref:`HDBSCAN <hdbscan>`
- minimum cluster membership, minimum point neighbors
- large ``n_samples``, medium ``n_clusters``
- Non-flat geometry, uneven cluster sizes, outlier removal,
transductive, hierarchical, variable cluster density
- Distances between nearest points
* - :ref:`OPTICS <optics>`
- minimum cluster membership
- Very large ``n_samples``, large ``n_clusters``
- Non-flat geometry, uneven cluster sizes, variable cluster density,
outlier removal, transductive
- Distances between points
* - :ref:`Gaussian mixtures <mixture>`
- many
- Not scalable
- Flat geometry, good for density estimation, inductive
- Mahalanobis distances to centers
* - :ref:`BIRCH <birch>`
- branching factor, threshold, optional global clusterer.
- Large ``n_clusters`` and ``n_samples``
- Large dataset, outlier removal, data reduction, inductive
- Euclidean distance between points
* - :ref:`Bisecting K-Means <bisect_k_means>`
- number of clusters
- Very large ``n_samples``, medium ``n_clusters``
- General-purpose, even cluster size, flat geometry,
no empty clusters, inductive, hierarchical
- Distances between points
Non-flat geometry clustering is useful when the clusters have a specific
shape, i.e. a non-flat manifold, and the standard euclidean distance is
not the right metric. This case arises in the two top rows of the figure
above.
Gaussian mixture models, useful for clustering, are described in
:ref:`another chapter of the documentation <mixture>` dedicated to
mixture models. KMeans can be seen as a special case of Gaussian mixture
model with equal covariance per component.
:term:`Transductive <transductive>` clustering methods (in contrast to
:term:`inductive` clustering methods) are not designed to be applied to new,
unseen data.
.. _k_means:
K-means
=======
The :class:`KMeans` algorithm clusters data by trying to separate samples in n
groups of equal variance, minimizing a criterion known as the *inertia* or
within-cluster sum-of-squares (see below). This algorithm requires the number
of clusters to be specified. It scales well to large numbers of samples and has
been used across a large range of application areas in many different fields.
The k-means algorithm divides a set of :math:`N` samples :math:`X` into
:math:`K` disjoint clusters :math:`C`, each described by the mean :math:`\mu_j`
of the samples in the cluster. The means are commonly called the cluster
"centroids"; note that they are not, in general, points from :math:`X`,
although they live in the same space.
The K-means algorithm aims to choose centroids that minimise the **inertia**,
or **within-cluster sum-of-squares criterion**:
.. math:: \sum_{i=0}^{n}\min_{\mu_j \in C}(||x_i - \mu_j||^2)
Inertia can be recognized as a measure of how internally coherent clusters are.
It suffers from various drawbacks:
- Inertia makes the assumption that clusters are convex and isotropic,
which is not always the case. It responds poorly to elongated clusters,
or manifolds with irregular shapes.
- Inertia is not a normalized metric: we just know that lower values are
better and zero is optimal. But in very high-dimensional spaces, Euclidean
distances tend to become inflated
(this is an instance of the so-called "curse of dimensionality").
Running a dimensionality reduction algorithm such as :ref:`PCA` prior to
k-means clustering can alleviate this problem and speed up the
computations.
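A minimal sketch of this mitigation (parameter values are illustrative):
chain :class:`~sklearn.decomposition.PCA` and :class:`KMeans` in a pipeline so
clustering runs in the reduced space.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

X, _ = load_digits(return_X_y=True)  # 64-dimensional inputs

# Project to 10 components before clustering into 10 groups.
pipe = make_pipeline(
    PCA(n_components=10, random_state=0),
    KMeans(n_clusters=10, n_init=10, random_state=0),
)
labels = pipe.fit_predict(X)
```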
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_kmeans_assumptions_002.png
:target: ../auto_examples/cluster/plot_kmeans_assumptions.html
:align: center
:scale: 50
For more detailed descriptions of the issues shown above and how to address them,
refer to the examples :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`
and :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`.
K-means is often referred to as Lloyd's algorithm. In basic terms, the
algorithm has three steps. The first step chooses the initial centroids, with
the most basic method being to choose :math:`k` samples from the dataset
:math:`X`. After initialization, K-means consists of looping between the
two other steps. The first step assigns each sample to its nearest centroid.
The second step creates new centroids by taking the mean value of all of the
samples assigned to each previous centroid. The difference between the old
and the new centroids are computed and the algorithm repeats these last two
steps until this value is less than a threshold. In other words, it repeats
until the centroids do not move significantly.
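The three steps above can be sketched in plain NumPy (an illustrative
implementation, not the optimized one used by :class:`KMeans`):

```python
import numpy as np

def lloyd(X, k, n_iter=100, tol=1e-4, rng=None):
    rng = np.random.default_rng(rng)
    # Step 1: choose k samples from X as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each sample to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its samples
        # (keep the old centroid if a cluster lost all its samples).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        # Stop once the centroids no longer move significantly.
        if np.linalg.norm(new - centroids) < tol:
            centroids = new
            break
        centroids = new
    return centroids, labels

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centroids, labels = lloyd(X, k=2, rng=0)
```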
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_kmeans_digits_001.png
:target: ../auto_examples/cluster/plot_kmeans_digits.html
:align: right
:scale: 35
K-means is equivalent to the expectation-maximization algorithm
with a small, all-equal, diagonal covariance matrix.
The algorithm can also be understood through the concept of `Voronoi diagrams
<https://en.wikipedia.org/wiki/Voronoi_diagram>`_. First the Voronoi diagram of
the points is calculated using the current centroids. Each segment in the
Voronoi diagram becomes a separate cluster. Secondly, the centroids are updated
to the mean of each segment. The algorithm then repeats this until a stopping
criterion is fulfilled. Usually, the algorithm stops when the relative decrease
in the objective function between iterations is less than the given tolerance
value. This is not the case in this implementation: iteration stops when
centroids move less than the tolerance.
Given enough time, K-means will always converge; however, this may be to a local
minimum, and the outcome is highly dependent on the initialization of the centroids.
As a result, the computation is often done several times, with different
initializations of the centroids. One method to help address this issue is the
k-means++ initialization scheme, which has been implemented in scikit-learn
(use the ``init='k-means++'`` parameter). This initializes the centroids to be
(generally) distant from each other, leading to provably better results than
random initialization, as shown in the reference. For detailed examples of
comparing different initialization schemes, refer to
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py` and
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_stability_low_dim_dense.py`.
K-means++ can also be called independently to select seeds for other
clustering algorithms, see :func:`sklearn.cluster.kmeans_plusplus` for details
and example usage.
The algorithm supports sample weights, which can be given by a parameter
``sample_weight``. This makes it possible to assign more weight to some samples when
computing cluster centers and values of inertia. For example, assigning a
weight of 2 to a sample is equivalent to adding a duplicate of that sample
to the dataset :math:`X`.
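The duplication equivalence mentioned above can be checked directly (a small
illustrative sketch with a fixed initialization so the two fits are
comparable):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0], [1.0], [10.0], [11.0]])
w = np.array([2.0, 1.0, 1.0, 1.0])   # sample 0 counts twice
X_dup = np.vstack([X, X[:1]])        # the same sample literally duplicated

init = np.array([[0.0], [10.0]])     # fixed init for a fair comparison
km_w = KMeans(n_clusters=2, init=init, n_init=1).fit(X, sample_weight=w)
km_d = KMeans(n_clusters=2, init=init, n_init=1).fit(X_dup)
```

Both fits yield the same cluster centers: the weighted mean of ``[0, 1]`` with
weights ``[2, 1]`` equals the plain mean of ``[0, 1, 0]``.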
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document clustering
using :class:`KMeans` and :class:`MiniBatchKMeans` based on sparse data
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_plusplus.py`: Using K-means++
to select seeds for other clustering algorithms.
Low-level parallelism
---------------------
:class:`KMeans` benefits from OpenMP based parallelism through Cython. Small
chunks of data (256 samples) are processed in parallel, which in addition
yields a low memory footprint. For more details on how to control the number of
threads, please refer to our :ref:`parallelism` notes.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`: Demonstrating when
k-means performs intuitively and when it does not
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py`: Clustering handwritten digits
.. dropdown:: References
* `"k-means++: The advantages of careful seeding"
<http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf>`_
Arthur, David, and Sergei Vassilvitskii,
*Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete
algorithms*, Society for Industrial and Applied Mathematics (2007)
.. _mini_batch_kmeans:
Mini Batch K-Means
------------------
The :class:`MiniBatchKMeans` is a variant of the :class:`KMeans` algorithm
which uses mini-batches to reduce the computation time, while still attempting
to optimise the same objective function. Mini-batches are subsets of the input
data, randomly sampled in each training iteration. These mini-batches
drastically reduce the amount of computation required to converge to a local
solution. In contrast to other algorithms that reduce the convergence time of
k-means, mini-batch k-means produces results that are generally only slightly
worse than the standard algorithm.
The algorithm iterates between two major steps, similar to vanilla k-means.
In the first step, :math:`b` samples are drawn randomly from the dataset, to form
a mini-batch. These are then assigned to the nearest centroid. In the second
step, the centroids are updated. In contrast to k-means, this is done on a
per-sample basis. For each sample in the mini-batch, the assigned centroid
is updated by taking the streaming average of the sample and all previous
samples assigned to that centroid. This has the effect of decreasing the
rate of change for a centroid over time. These steps are performed until
convergence or a predetermined number of iterations is reached.
:class:`MiniBatchKMeans` converges faster than :class:`KMeans`, but the quality
of the results is reduced. In practice this difference in quality can be quite
small, as shown in the example and cited reference.
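A minimal sketch of this trade-off (data and parameter values are
illustrative): fit both estimators on the same blobs and compare their
inertia.

```python
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=3000, centers=5, random_state=0)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
mbk = MiniBatchKMeans(n_clusters=5, batch_size=256, n_init=10,
                      random_state=0).fit(X)

# The mini-batch solution is typically only slightly worse.
ratio = mbk.inertia_ / km.inertia_
```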
.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_mini_batch_kmeans_001.png
:target: ../auto_examples/cluster/plot_mini_batch_kmeans.html
:align: center
:scale: 100
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_mini_batch_kmeans.py`: Comparison of
:class:`KMeans` and :class:`MiniBatchKMeans`
* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document clustering
using :class:`KMeans` and :class:`MiniBatchKMeans` based on sparse data
* :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`
.. dropdown:: References
* `"Web Scale K-Means clustering"
<https://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf>`_
D. Sculley, *Proceedings of the 19th international conference on World
wide web* (2010)
.. _affinity_propagation:
Affinity Propagation
====================
:class:`AffinityPropagation` creates clusters by sending messages between
pairs of samples until convergence. A dataset is then described using a small
number of exemplars, which are identified as those most representative of other
samples. The messages sent between pairs represent the suitability for one
sample to be the exemplar of the other, which is updated in response to the
values from other pairs. This updating happens iteratively until convergence,
at which point the final exemplars are chosen, and hence the final clustering
is given.
.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_affinity_propagation_001.png
:target: ../auto_examples/cluster/plot_affinity_propagation.html
:align: center
:scale: 50
Affinity Propagation can be interesting as it chooses the number of
clusters based on the data provided. For this purpose, the two important
parameters are the *preference*, which controls how many exemplars are
used, and the *damping factor* which damps the responsibility and
availability messages to avoid numerical oscillations when updating these
messages.
The main drawback of Affinity Propagation is its complexity. The
algorithm has a time complexity of the order :math:`O(N^2 T)`, where :math:`N`
is the number of samples and :math:`T` is the number of iterations until
convergence. Further, the memory complexity is of the order
:math:`O(N^2)` if a dense similarity matrix is used, but reducible if a
sparse similarity matrix is used. This makes Affinity Propagation most
appropriate for small to medium sized datasets.
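A minimal usage sketch on a small dataset (data and parameter values are
illustrative): the number of clusters is inferred from the data, steered by
``preference`` and ``damping``.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=[[1, 1], [-1, -1], [1, -1]],
                  cluster_std=0.5, random_state=0)

ap = AffinityPropagation(preference=-50, random_state=0).fit(X)
n_found = len(ap.cluster_centers_indices_)  # number of exemplars chosen
```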
.. dropdown:: Algorithm description
The messages sent between points belong to one of two categories. The first is
the responsibility :math:`r(i, k)`, which is the accumulated evidence that
sample :math:`k` should be the exemplar for sample :math:`i`. The second is the
availability :math:`a(i, k)` which is the accumulated evidence that sample
:math:`i` should choose sample :math:`k` to be its exemplar, and considers the
values for all other samples that :math:`k` should be an exemplar. In this way,
exemplars are chosen by samples if they are (1) similar enough to many samples
and (2) chosen by many samples to be representative of themselves.
More formally, the responsibility of a sample :math:`k` to be the exemplar of
sample :math:`i` is given by:
.. math::
r(i, k) \leftarrow s(i, k) - \max [ a(i, k') + s(i, k') \forall k' \neq k ]
Where :math:`s(i, k)` is the similarity between samples :math:`i` and :math:`k`.
The availability of sample :math:`k` to be the exemplar of sample :math:`i` is
given by:
.. math::
a(i, k) \leftarrow \min [0, r(k, k) + \sum_{i'~s.t.~i' \notin \{i, k\}}{r(i', k)}]
To begin with, all values for :math:`r` and :math:`a` are set to zero, and the
calculation of each iterates until convergence. As discussed above, in order to
avoid numerical oscillations when updating the messages, the damping factor
:math:`\lambda` is introduced to the iteration process:
.. math:: r_{t+1}(i, k) = \lambda\cdot r_{t}(i, k) + (1-\lambda)\cdot r_{t+1}(i, k)
.. math:: a_{t+1}(i, k) = \lambda\cdot a_{t}(i, k) + (1-\lambda)\cdot a_{t+1}(i, k)
where :math:`t` indicates the iteration times.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`: Affinity
Propagation on a synthetic 2D datasets with 3 classes
* :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py` Affinity Propagation
on financial time series to find groups of companies
.. _mean_shift:
Mean Shift
==========
:class:`MeanShift` clustering aims to discover *blobs* in a smooth density of
samples. It is a centroid based algorithm, which works by updating candidates
for centroids to be the mean of the points within a given region. These
candidates are then filtered in a post-processing stage to eliminate
near-duplicates to form the final set of centroids.
.. dropdown:: Mathematical details
The position of centroid candidates is iteratively adjusted using a technique
called hill climbing, which finds local maxima of the estimated probability
density. Given a candidate centroid :math:`x` for iteration :math:`t`, the
candidate is updated according to the following equation:
.. math::
x^{t+1} = x^t + m(x^t)
Where :math:`m` is the *mean shift* vector that is computed for each centroid
that points towards a region of the maximum increase in the density of points.
To compute :math:`m` we define :math:`N(x)` as the neighborhood of samples
within a given distance around :math:`x`. Then :math:`m` is computed using the
following equation, effectively updating a centroid to be the mean of the
samples within its neighborhood:
.. math::
m(x) = \frac{1}{|N(x)|} \sum_{x_j \in N(x)}x_j - x
In general, the equation for :math:`m` depends on a kernel used for density
estimation. The generic formula is:
.. math::
m(x) = \frac{\sum_{x_j \in N(x)}K(x_j - x)x_j}{\sum_{x_j \in N(x)}K(x_j -
x)} - x
In our implementation, :math:`K(x)` is equal to 1 if :math:`x` is small enough
and is equal to 0 otherwise. Effectively :math:`K(y - x)` indicates whether
:math:`y` is in the neighborhood of :math:`x`.
Instead of requiring the number of clusters as an input, the algorithm sets it
automatically based on the parameter ``bandwidth``, which dictates the size of
the region to search through. This parameter can be set manually, but can be
estimated using the provided ``estimate_bandwidth`` function, which is called
if the bandwidth is not set.
The algorithm is not highly scalable, as it requires multiple nearest neighbor
searches during the execution of the algorithm. The algorithm is guaranteed to
converge, however the algorithm will stop iterating when the change in centroids
is small.
Labelling a new sample is performed by finding the nearest centroid for a
given sample.
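A minimal sketch of this workflow (data and parameter values are
illustrative): estimate the bandwidth from the data, fit, then label a new
sample via ``predict``, which assigns the nearest centroid.

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=[[0, 0], [8, 8]],
                  cluster_std=0.7, random_state=0)

# ``estimate_bandwidth`` is what MeanShift calls when bandwidth is unset.
bandwidth = estimate_bandwidth(X, quantile=0.3, random_state=0)
ms = MeanShift(bandwidth=bandwidth).fit(X)

new_label = ms.predict([[7.5, 8.2]])[0]  # nearest-centroid labelling
```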
.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_mean_shift_001.png
:target: ../auto_examples/cluster/plot_mean_shift.html
:align: center
:scale: 50
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_mean_shift.py`: Mean Shift clustering
on a synthetic 2D datasets with 3 classes.
.. dropdown:: References
* :doi:`"Mean shift: A robust approach toward feature space analysis"
<10.1109/34.1000236>` D. Comaniciu and P. Meer, *IEEE Transactions on Pattern
Analysis and Machine Intelligence* (2002)
.. _spectral_clustering:
Spectral clustering
===================
:class:`SpectralClustering` performs a low-dimension embedding of the
affinity matrix between samples, followed by clustering, e.g., by KMeans,
of the components of the eigenvectors in the low dimensional space.
It is especially computationally efficient if the affinity matrix is sparse
and the `amg` solver is used for the eigenvalue problem (Note, the `amg` solver
requires that the `pyamg <https://github.com/pyamg/pyamg>`_ module is installed.)
The present version of SpectralClustering requires the number of clusters
to be specified in advance. It works well for a small number of clusters,
but is not advised for many clusters.
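A minimal usage sketch on a non-flat dataset (parameter values are
illustrative): with a nearest-neighbors affinity graph, spectral clustering
can separate the two interleaved half-moons that k-means cannot.

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, assign_labels="kmeans",
                        random_state=0)
labels = sc.fit_predict(X)
```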
For two clusters, SpectralClustering solves a convex relaxation of the
`normalized cuts <https://people.eecs.berkeley.edu/~malik/papers/SM-ncut.pdf>`_
problem on the similarity graph: cutting the graph in two so that the weight of
the edges cut is small compared to the weights of the edges inside each
cluster. This criterion is especially interesting when working on images, where
graph vertices are pixels, and weights of the edges of the similarity graph are
computed using a function of a gradient of the image.
.. |noisy_img| image:: ../auto_examples/cluster/images/sphx_glr_plot_segmentation_toy_001.png
:target: ../auto_examples/cluster/plot_segmentation_toy.html
:scale: 50
.. |segmented_img| image:: ../auto_examples/cluster/images/sphx_glr_plot_segmentation_toy_002.png
:target: ../auto_examples/cluster/plot_segmentation_toy.html
:scale: 50
.. centered:: |noisy_img| |segmented_img|
.. warning:: Transforming distance to well-behaved similarities
Note that if the values of your similarity matrix are not well
distributed, e.g. with negative values or with a distance matrix
rather than a similarity, the spectral problem will be singular and
the problem not solvable. In that case it is advised to apply a
transformation to the entries of the matrix. For instance, in the
case of a signed distance matrix, it is common to apply a heat kernel::
similarity = np.exp(-beta * distance / distance.std())
See the examples for such an application.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_segmentation_toy.py`: Segmenting objects
from a noisy background using spectral clustering.
* :ref:`sphx_glr_auto_examples_cluster_plot_coin_segmentation.py`: Spectral clustering
to split the image of coins in regions.
.. |coin_kmeans| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_001.png
:target: ../auto_examples/cluster/plot_coin_segmentation.html
:scale: 35
.. |coin_discretize| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_002.png
:target: ../auto_examples/cluster/plot_coin_segmentation.html
:scale: 35
.. |coin_cluster_qr| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_003.png
:target: ../auto_examples/cluster/plot_coin_segmentation.html
:scale: 35
Different label assignment strategies
-------------------------------------
Different label assignment strategies can be used, corresponding to the
``assign_labels`` parameter of :class:`SpectralClustering`.
``"kmeans"`` strategy can match finer details, but can be unstable.
In particular, unless you control the ``random_state``, it may not be
reproducible from run-to-run, as it depends on random initialization.
The alternative ``"discretize"`` strategy is 100% reproducible, but tends
to create parcels of fairly even and geometrical shape.
The recently added ``"cluster_qr"`` option is a deterministic alternative that
tends to create the visually best partitioning on the example application
below.
================================ ================================ ================================
``assign_labels="kmeans"`` ``assign_labels="discretize"`` ``assign_labels="cluster_qr"``
================================ ================================ ================================
|coin_kmeans| |coin_discretize| |coin_cluster_qr|
================================ ================================ ================================
.. dropdown:: References
* `"Multiclass spectral clustering"
<https://people.eecs.berkeley.edu/~jordan/courses/281B-spring04/readings/yu-shi.pdf>`_
Stella X. Yu, Jianbo Shi, 2003
* :doi:`"Simple, direct, and efficient multi-way spectral clustering"<10.1093/imaiai/iay008>`
Anil Damle, Victor Minden, Lexing Ying, 2019
.. _spectral_clustering_graph:
Spectral Clustering Graphs
--------------------------
Spectral Clustering can also be used to partition graphs via their spectral
embeddings. In this case, the affinity matrix is the adjacency matrix of the
graph, and SpectralClustering is initialized with `affinity='precomputed'`::
>>> from sklearn.cluster import SpectralClustering
>>> sc = SpectralClustering(3, affinity='precomputed', n_init=100,
... assign_labels='discretize')
>>> sc.fit_predict(adjacency_matrix) # doctest: +SKIP
.. dropdown:: References
* :doi:`"A Tutorial on Spectral Clustering" <10.1007/s11222-007-9033-z>` Ulrike
von Luxburg, 2007
* :doi:`"Normalized cuts and image segmentation" <10.1109/34.868688>` Jianbo
Shi, Jitendra Malik, 2000
* `"A Random Walks View of Spectral Segmentation"
<https://citeseerx.ist.psu.edu/doc_view/pid/84a86a69315e994cfd1e0c7debb86d62d7bd1f44>`_
Marina Meila, Jianbo Shi, 2001
* `"On Spectral Clustering: Analysis and an algorithm"
<https://citeseerx.ist.psu.edu/doc_view/pid/796c5d6336fc52aa84db575fb821c78918b65f58>`_
Andrew Y. Ng, Michael I. Jordan, Yair Weiss, 2001
* :arxiv:`"Preconditioned Spectral Clustering for Stochastic Block Partition
Streaming Graph Challenge" <1708.07481>` David Zhuzhunashvili, Andrew Knyazev
.. _hierarchical_clustering:
Hierarchical clustering
=======================
Hierarchical clustering is a general family of clustering algorithms that
build nested clusters by merging or splitting them successively. This
hierarchy of clusters is represented as a tree (or dendrogram). The root of the
tree is the unique cluster that gathers all the samples, the leaves being the
clusters with only one sample. See the `Wikipedia page
<https://en.wikipedia.org/wiki/Hierarchical_clustering>`_ for more details.
The :class:`AgglomerativeClustering` object performs a hierarchical clustering
using a bottom up approach: each observation starts in its own cluster, and
clusters are successively merged together. The linkage criteria determines the
metric used for the merge strategy:
- **Ward** minimizes the sum of squared differences within all clusters. It is a
variance-minimizing approach and in this sense is similar to the k-means
objective function but tackled with an agglomerative hierarchical
approach.
- **Maximum** or **complete linkage** minimizes the maximum distance between
observations of pairs of clusters.
- **Average linkage** minimizes the average of the distances between all
observations of pairs of clusters.
- **Single linkage** minimizes the distance between the closest
observations of pairs of clusters.
:class:`AgglomerativeClustering` can also scale to large numbers of samples
when it is used jointly with a connectivity matrix, but is computationally
expensive when no connectivity constraints are added between samples: it
considers at each step all the possible merges.
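A minimal sketch comparing the four linkage criteria on the same toy data
(illustrative values): on two well-separated tight groups, all of them
recover the same partition.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3],
              [5.0, 5.0], [5.3, 5.0], [5.0, 5.3]])

results = {}
for linkage in ("ward", "complete", "average", "single"):
    model = AgglomerativeClustering(n_clusters=2, linkage=linkage)
    results[linkage] = model.fit_predict(X)
```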
.. topic:: :class:`FeatureAgglomeration`
The :class:`FeatureAgglomeration` uses agglomerative clustering to
group together features that look very similar, thus decreasing the
number of features. It is a dimensionality reduction tool, see
:ref:`data_reduction`.
Different linkage type: Ward, complete, average, and single linkage
-------------------------------------------------------------------
:class:`AgglomerativeClustering` supports Ward, single, average, and complete
linkage strategies.
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_linkage_comparison_001.png
:target: ../auto_examples/cluster/plot_linkage_comparison.html
:scale: 43
Agglomerative clustering has a "rich get richer" behavior that leads to
uneven cluster sizes. In this regard, single linkage is the worst
strategy, and Ward gives the most regular sizes. However, the affinity
(or distance used in clustering) cannot be varied with Ward, thus for non
Euclidean metrics, average linkage is a good alternative. Single linkage,
while not robust to noisy data, can be computed very efficiently and can
therefore be useful to provide hierarchical clustering of larger datasets.
Single linkage can also perform well on non-globular data.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_digits_linkage.py`: exploration of the
different linkage strategies in a real dataset.
* :ref:`sphx_glr_auto_examples_cluster_plot_linkage_comparison.py`: exploration of
the different linkage strategies in toy datasets.
Visualization of cluster hierarchy
----------------------------------
It's possible to visualize the tree representing the hierarchical merging of clusters
as a dendrogram. Visual inspection can often be useful for understanding the structure
of the data, though more so in the case of small sample sizes.
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_dendrogram_001.png
:target: ../auto_examples/cluster/plot_agglomerative_dendrogram.html
:scale: 42
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_dendrogram.py`
Adding connectivity constraints
-------------------------------
An interesting aspect of :class:`AgglomerativeClustering` is that
connectivity constraints can be added to this algorithm (only adjacent
clusters can be merged together), through a connectivity matrix that defines
for each sample the neighboring samples following a given structure of the
data. For instance, in the swiss-roll example below, the connectivity
constraints forbid the merging of points that are not adjacent on the swiss
roll, and thus avoid forming clusters that extend across overlapping folds of
the roll.
.. |unstructured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_001.png
:target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
:scale: 49
.. |structured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_002.png
:target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
:scale: 49
.. centered:: |unstructured| |structured|
These constraints are useful to impose a certain local structure, but they
also make the algorithm faster, especially when the number of samples
is high.
The connectivity constraints are imposed via a connectivity matrix: a
scipy sparse matrix that has elements only at the intersection of a row
and a column with indices of the dataset that should be connected. This
matrix can be constructed from a-priori information: for instance, you
may wish to cluster web pages by only merging pages with a link pointing
from one to another. It can also be learned from the data, for instance
using :func:`sklearn.neighbors.kneighbors_graph` to restrict
merging to nearest neighbors as in :ref:`this example
<sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py>`, or
using :func:`sklearn.feature_extraction.image.grid_to_graph` to
enable only merging of neighboring pixels on an image, as in the
:ref:`coin <sphx_glr_auto_examples_cluster_plot_coin_ward_segmentation.py>` example.
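A minimal sketch of the nearest-neighbors case (parameter values are
illustrative): build the sparse connectivity matrix with
:func:`sklearn.neighbors.kneighbors_graph` and pass it to Ward so that only
neighboring samples may be merged.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.neighbors import kneighbors_graph

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# Sparse matrix: nonzero entries connect each sample to its 10 neighbors.
connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)

ward = AgglomerativeClustering(n_clusters=3, linkage="ward",
                               connectivity=connectivity).fit(X)
```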
.. warning:: **Connectivity constraints with single, average and complete linkage**
Connectivity constraints and single, complete or average linkage can enhance
the 'rich getting richer' aspect of agglomerative clustering,
particularly so if they are built with
:func:`sklearn.neighbors.kneighbors_graph`. In the limit of a small
number of clusters, they tend to give a few macroscopically occupied
clusters and almost empty ones. (see the discussion in
:ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py`).
Single linkage is the most brittle linkage option with regard to this issue.
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_001.png
:target: ../auto_examples/cluster/plot_agglomerative_clustering.html
:scale: 38
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_002.png
:target: ../auto_examples/cluster/plot_agglomerative_clustering.html
:scale: 38
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_003.png
:target: ../auto_examples/cluster/plot_agglomerative_clustering.html
:scale: 38
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_004.png
:target: ../auto_examples/cluster/plot_agglomerative_clustering.html
:scale: 38
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_coin_ward_segmentation.py`: Ward
clustering to split the image of coins in regions.
* :ref:`sphx_glr_auto_examples_cluster_plot_ward_structured_vs_unstructured.py`: Example
of Ward algorithm on a swiss-roll, comparison of structured approaches
versus unstructured approaches.
* :ref:`sphx_glr_auto_examples_cluster_plot_feature_agglomeration_vs_univariate_selection.py`: Example
of dimensionality reduction with feature agglomeration based on Ward
hierarchical clustering.
* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py`
Varying the metric
-------------------
Single, average and complete linkage can be used with a variety of distances (or
affinities), in particular Euclidean distance (*l2*), Manhattan distance
(or Cityblock, or *l1*), cosine distance, or any precomputed affinity
matrix.
* *l1* distance is often good for sparse features, or sparse noise: i.e.
many of the features are zero, as in text mining using occurrences of
rare words.
* *cosine* distance is interesting because it is invariant to global
scalings of the signal.
The guideline for choosing a metric is to use one that maximizes the
distance between samples in different classes, and minimizes the distance
within each class.
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_005.png
:target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
:scale: 32
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_006.png
:target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
:scale: 32
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_007.png
:target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
:scale: 32
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering_metrics.py`
.. _bisect_k_means:

Bisecting K-Means
-----------------
The :class:`BisectingKMeans` is an iterative variant of :class:`KMeans`, using
divisive hierarchical clustering. Instead of creating all centroids at once, centroids
are picked progressively based on a previous clustering: a cluster is split into two
new clusters repeatedly until the target number of clusters is reached.
:class:`BisectingKMeans` is more efficient than :class:`KMeans` when the number of
clusters is large since it only works on a subset of the data at each bisection
while :class:`KMeans` always works on the entire dataset.
Although :class:`BisectingKMeans` can't benefit from the advantages of the `"k-means++"`
initialization by design, it will still produce results comparable to
`KMeans(init="k-means++")` in terms of inertia at a cheaper computational cost, and will
likely produce better results than `KMeans` with a random initialization.

This variant is more efficient than agglomerative clustering if the number of clusters is
small compared to the number of data points.

This variant also does not produce empty clusters.
There exist two strategies for selecting the cluster to split:
- ``bisecting_strategy="largest_cluster"`` selects the cluster having the most points
- ``bisecting_strategy="biggest_inertia"`` selects the cluster with biggest inertia
(cluster with biggest Sum of Squared Errors within)
Picking by the largest number of data points in most cases produces a result as
accurate as picking by inertia, and is faster (especially for larger numbers of
data points, where calculating the error may be costly).

Picking by the largest number of data points will also likely produce clusters of
similar sizes, while `KMeans` is known to produce clusters of different sizes.
The difference between Bisecting K-Means and regular K-Means can be seen in the
example :ref:`sphx_glr_auto_examples_cluster_plot_bisect_kmeans.py`.
While the regular K-Means algorithm tends to create non-related clusters,
clusters from Bisecting K-Means are well ordered and create quite a visible hierarchy.
.. dropdown:: References
* `"A Comparison of Document Clustering Techniques"
<http://www.philippe-fournier-viger.com/spmf/bisectingkmeans.pdf>`_ Michael
Steinbach, George Karypis and Vipin Kumar, Department of Computer Science and
Engineering, University of Minnesota (June 2000)
* `"Performance Analysis of K-Means and Bisecting K-Means Algorithms in Weblog
Data"
<https://ijeter.everscience.org/Manuscripts/Volume-4/Issue-8/Vol-4-issue-8-M-23.pdf>`_
K.Abirami and Dr.P.Mayilvahanan, International Journal of Emerging
Technologies in Engineering Research (IJETER) Volume 4, Issue 8, (August 2016)
* `"Bisecting K-means Algorithm Based on K-valued Self-determining and
Clustering Center Optimization"
<http://www.jcomputers.us/vol13/jcp1306-01.pdf>`_ Jian Di, Xinyue Gou School
of Control and Computer Engineering, North China Electric Power University,
Baoding, Hebei, China (August 2017)
.. _dbscan:
DBSCAN
======
The :class:`DBSCAN` algorithm views clusters as areas of high density
separated by areas of low density. Due to this rather generic view, clusters
found by DBSCAN can be any shape, as opposed to k-means which assumes that
clusters are convex shaped. The central component of DBSCAN is the concept
of *core samples*, which are samples that are in areas of high density. A
cluster is therefore a set of core samples, each close to the others
(as measured by some distance measure),
and a set of non-core samples that are close to a core sample (but are not
themselves core samples). There are two parameters to the algorithm,
``min_samples`` and ``eps``,
which define formally what we mean when we say *dense*.
Higher ``min_samples`` or lower ``eps``
indicate higher density necessary to form a cluster.
More formally, we define a core sample as being a sample in the dataset such
that there exist ``min_samples`` other samples within a distance of
``eps``, which are defined as *neighbors* of the core sample. This tells
us that the core sample is in a dense area of the vector space. A cluster
is a set of core samples that can be built by recursively taking a core
sample, finding all of its neighbors that are core samples, finding all of
*their* neighbors that are core samples, and so on. A cluster also has a
set of non-core samples, which are samples that are neighbors of a core sample
in the cluster but are not themselves core samples. Intuitively, these samples
are on the fringes of a cluster.
Any core sample is part of a cluster, by definition. Any sample that is not a
core sample, and is more than ``eps`` in distance from every core sample, is
considered an outlier by the algorithm.
While the parameter ``min_samples`` primarily controls how tolerant the
algorithm is towards noise (on noisy and large data sets it may be desirable
to increase this parameter), the parameter ``eps`` is *crucial to choose
appropriately* for the data set and distance function and usually cannot be
left at the default value. It controls the local neighborhood of the points.
When chosen too small, most data will not be clustered at all (and labeled
as ``-1`` for "noise"). When chosen too large, it causes close clusters to
be merged into one cluster, and eventually the entire data set to be returned
as a single cluster. Some heuristics for choosing this parameter have been
discussed in the literature, for example based on a knee in the nearest neighbor
distances plot (as discussed in the references below).
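As an illustration of these two parameters on synthetic data (the ``eps``
value below is chosen for this toy set only, not as a recommendation):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(0)
# Two dense blobs plus one far-away point that should come out as noise.
X = np.vstack([rng.randn(30, 2) * 0.3,
               rng.randn(30, 2) * 0.3 + [5, 5],
               [[20.0, 20.0]]])

db = DBSCAN(eps=1.0, min_samples=5).fit(X)
print(sorted(set(db.labels_)))  # noise points are labeled -1
```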
In the figure below, the color indicates cluster membership, with large circles
indicating core samples found by the algorithm. Smaller circles are non-core
samples that are still part of a cluster. Moreover, the outliers are indicated
by black points below.
.. |dbscan_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_dbscan_002.png
:target: ../auto_examples/cluster/plot_dbscan.html
:scale: 50
.. centered:: |dbscan_results|
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_dbscan.py`
.. dropdown:: Implementation
The DBSCAN algorithm is deterministic, always generating the same clusters when
given the same data in the same order. However, the results can differ when
data is provided in a different order. First, even though the core samples will
always be assigned to the same clusters, the labels of those clusters will
depend on the order in which those samples are encountered in the data. Second
and more importantly, the clusters to which non-core samples are assigned can
differ depending on the data order. This would happen when a non-core sample
has a distance lower than ``eps`` to two core samples in different clusters. By
the triangle inequality, those two core samples must be more distant than
``eps`` from each other, or they would be in the same cluster. The non-core
sample is assigned to whichever cluster is generated first in a pass through the
data, and so the results will depend on the data ordering.
The current implementation uses ball trees and kd-trees to determine the
neighborhood of points, which avoids calculating the full distance matrix (as
was done in scikit-learn versions before 0.14). The possibility to use custom
metrics is retained; for details, see :class:`NearestNeighbors`.
.. dropdown:: Memory consumption for large sample sizes
This implementation is by default not memory efficient because it constructs a
full pairwise similarity matrix in the case where kd-trees or ball-trees cannot
be used (e.g., with sparse matrices). This matrix will consume :math:`n^2`
floats. A couple of mechanisms for getting around this are:
- Use :ref:`OPTICS <optics>` clustering in conjunction with the
  `cluster_optics_dbscan` function. OPTICS clustering also calculates the full
  pairwise matrix, but only
keeps one row in memory at a time (memory complexity n).
- A sparse radius neighborhood graph (where missing entries are presumed to be
out of eps) can be precomputed in a memory-efficient way and dbscan can be run
over this with ``metric='precomputed'``. See
:meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors_graph`.
- The dataset can be compressed, either by removing exact duplicates if these
occur in your data, or by using BIRCH. Then you only have a relatively small
number of representatives for a large number of points. You can then provide a
``sample_weight`` when fitting DBSCAN.
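The sparse-graph route above can be sketched as follows (random data and an
illustrative ``eps``; entries missing from the sparse matrix are presumed to
be further than ``eps`` apart):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
eps = 0.3

# Precompute a sparse radius-neighborhood graph holding pairwise distances
# only for pairs closer than eps.
graph = NearestNeighbors(radius=eps).fit(X).radius_neighbors_graph(
    X, mode="distance"
)
db = DBSCAN(eps=eps, min_samples=5, metric="precomputed").fit(graph)
print(len(db.labels_))
```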
.. dropdown:: References
* `A Density-Based Algorithm for Discovering Clusters in Large Spatial
Databases with Noise <https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf>`_
Ester, M., H. P. Kriegel, J. Sander, and X. Xu, In Proceedings of the 2nd
International Conference on Knowledge Discovery and Data Mining, Portland, OR,
AAAI Press, pp. 226-231. 1996
* :doi:`DBSCAN revisited, revisited: why and how you should (still) use DBSCAN.
<10.1145/3068335>` Schubert, E., Sander, J., Ester, M., Kriegel, H. P., & Xu,
X. (2017). In ACM Transactions on Database Systems (TODS), 42(3), 19.
.. _hdbscan:
HDBSCAN
=======
The :class:`HDBSCAN` algorithm can be seen as an extension of :class:`DBSCAN`
and :class:`OPTICS`. Specifically, :class:`DBSCAN` assumes that the clustering
criterion (i.e. density requirement) is *globally homogeneous*.
In other words, :class:`DBSCAN` may struggle to successfully capture clusters
with different densities.
:class:`HDBSCAN` alleviates this assumption and explores all possible density
scales by building an alternative representation of the clustering problem.
.. note::
This implementation is adapted from the original implementation of HDBSCAN,
`scikit-learn-contrib/hdbscan <https://github.com/scikit-learn-contrib/hdbscan>`_ based on [LJ2017]_.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_hdbscan.py`
Mutual Reachability Graph
-------------------------
HDBSCAN first defines :math:`d_c(x_p)`, the *core distance* of a sample
:math:`x_p`, as the distance to its `min_samples`-th nearest neighbor, counting
itself. For example, if `min_samples=5` and :math:`x_*` is the 5th-nearest
neighbor of :math:`x_p`, then the core distance is:
.. math:: d_c(x_p)=d(x_p, x_*).
Next it defines :math:`d_m(x_p, x_q)`, the *mutual reachability distance* of two points
:math:`x_p, x_q`, as:
.. math:: d_m(x_p, x_q) = \max\{d_c(x_p), d_c(x_q), d(x_p, x_q)\}
These two notions allow us to construct the *mutual reachability graph*
:math:`G_{ms}` defined for a fixed choice of `min_samples` by associating each
sample :math:`x_p` with a vertex of the graph, and weighting the edge between
points :math:`x_p, x_q` by the mutual reachability distance
:math:`d_m(x_p, x_q)` between them. We may build subsets of this graph, denoted
as :math:`G_{ms,\varepsilon}`, by removing any edges with value greater than
:math:`\varepsilon` from the original graph. Any points whose core distance is
greater than :math:`\varepsilon` are at this stage marked as noise. The
remaining points are then clustered by
finding the connected components of this trimmed graph.
.. note::
Taking the connected components of a trimmed graph :math:`G_{ms,\varepsilon}` is
equivalent to running DBSCAN* with `min_samples` and :math:`\varepsilon`. DBSCAN* is a
slightly modified version of DBSCAN mentioned in [CM2013]_.
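The two definitions above can be reproduced directly with NumPy (an
illustrative computation on a tiny hand-picked dataset, not the library's
internal code path):

```python
import numpy as np
from sklearn.metrics import pairwise_distances

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0]])
min_samples = 2

D = pairwise_distances(X)
# Core distance d_c: distance to the min_samples-th nearest neighbor,
# counting the point itself, i.e. index min_samples - 1 of each sorted row.
core = np.sort(D, axis=1)[:, min_samples - 1]
# Mutual reachability d_m(x_p, x_q) = max(d_c(x_p), d_c(x_q), d(x_p, x_q)).
mreach = np.maximum(D, np.maximum.outer(core, core))
print(core)
```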
Hierarchical Clustering
-----------------------
HDBSCAN can be seen as an algorithm which performs DBSCAN* clustering across all
values of :math:`\varepsilon`. As mentioned prior, this is equivalent to finding the connected
components of the mutual reachability graphs for all values of :math:`\varepsilon`. To do this
efficiently, HDBSCAN first extracts a minimum spanning tree (MST) from the
fully connected mutual reachability graph, then greedily cuts the edges with
highest weight. An outline of the HDBSCAN algorithm is as follows:
1. Extract the MST of :math:`G_{ms}`.
2. Extend the MST by adding a "self edge" for each vertex, with weight equal
to the core distance of the underlying sample.
3. Initialize a single cluster and label for the MST.
4. Remove the edge with the greatest weight from the MST (ties are
removed simultaneously).
5. Assign cluster labels to the connected components which contain the
end points of the now-removed edge. If the component does not have at least
one edge it is instead assigned a "null" label marking it as noise.
6. Repeat 4-5 until there are no more connected components.
HDBSCAN is therefore able to obtain all possible partitions achievable by
DBSCAN* for a fixed choice of `min_samples` in a hierarchical fashion.
Indeed, this allows HDBSCAN to perform clustering across multiple densities
and as such it no longer needs :math:`\varepsilon` to be given as a hyperparameter. Instead
it relies solely on the choice of `min_samples`, which tends to be a more robust
hyperparameter.
.. |hdbscan_ground_truth| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_005.png
:target: ../auto_examples/cluster/plot_hdbscan.html
:scale: 75
.. |hdbscan_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_007.png
:target: ../auto_examples/cluster/plot_hdbscan.html
:scale: 75
.. centered:: |hdbscan_ground_truth|
.. centered:: |hdbscan_results|
HDBSCAN can be smoothed with an additional hyperparameter `min_cluster_size`
which specifies that, during the hierarchical clustering, components with fewer
than `min_cluster_size` samples are considered noise. In practice, one can set
`min_cluster_size = min_samples` to couple the parameters and simplify the
hyperparameter space.
.. rubric:: References
.. [CM2013] Campello, R.J.G.B., Moulavi, D., Sander, J. (2013). Density-Based
Clustering Based on Hierarchical Density Estimates. In: Pei, J., Tseng, V.S.,
Cao, L., Motoda, H., Xu, G. (eds) Advances in Knowledge Discovery and Data
Mining. PAKDD 2013. Lecture Notes in Computer Science(), vol 7819. Springer,
Berlin, Heidelberg. :doi:`Density-Based Clustering Based on Hierarchical
Density Estimates <10.1007/978-3-642-37456-2_14>`
.. [LJ2017] L. McInnes and J. Healy, (2017). Accelerated Hierarchical Density
Based Clustering. In: IEEE International Conference on Data Mining Workshops
(ICDMW), 2017, pp. 33-42. :doi:`Accelerated Hierarchical Density Based
Clustering <10.1109/ICDMW.2017.12>`
.. _optics:
OPTICS
======
The :class:`OPTICS` algorithm shares many similarities with the :class:`DBSCAN`
algorithm, and can be considered a generalization of DBSCAN that relaxes the
``eps`` requirement from a single value to a value range. The key difference
between DBSCAN and OPTICS is that the OPTICS algorithm builds a *reachability*
graph, which assigns each sample both a ``reachability_`` distance, and a spot
within the cluster ``ordering_`` attribute; these two attributes are assigned
when the model is fitted, and are used to determine cluster membership. If
OPTICS is run with the default value of *inf* set for ``max_eps``, then DBSCAN
style cluster extraction can be performed repeatedly in linear time for any
given ``eps`` value using the ``cluster_optics_dbscan`` method. Setting
``max_eps`` to a lower value will result in shorter run times, and can be
thought of as the maximum neighborhood radius from each point to find other
potential reachable points.
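A sketch of this workflow (synthetic blobs; the ``eps`` used for extraction is
illustrative):

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(40, 2) * 0.3,
               rng.randn(40, 2) * 0.3 + [4, 4]])

opt = OPTICS(min_samples=5).fit(X)  # max_eps defaults to inf

# DBSCAN-style extraction at any chosen eps, in linear time per call:
labels = cluster_optics_dbscan(
    reachability=opt.reachability_,
    core_distances=opt.core_distances_,
    ordering=opt.ordering_,
    eps=0.8,
)
print(sorted(set(labels)))
```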
.. |optics_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_optics_001.png
:target: ../auto_examples/cluster/plot_optics.html
:scale: 50
.. centered:: |optics_results|
The *reachability* distances generated by OPTICS allow for variable density
extraction of clusters within a single data set. As shown in the above plot,
combining *reachability* distances and data set ``ordering_`` produces a
*reachability plot*, where point density is represented on the Y-axis, and
points are ordered such that nearby points are adjacent. 'Cutting' the
reachability plot at a single value produces DBSCAN like results; all points
above the 'cut' are classified as noise, and each time that there is a break
when reading from left to right signifies a new cluster. The default cluster
extraction with OPTICS looks at the steep slopes within the graph to find
clusters, and the user can define what counts as a steep slope using the
parameter ``xi``. There are also other possibilities for analysis on the graph
itself, such as generating hierarchical representations of the data through
reachability-plot dendrograms, and the hierarchy of clusters detected by the
algorithm can be accessed through the ``cluster_hierarchy_`` parameter. The
plot above has been color-coded so that cluster colors in planar space match
the linear segment clusters of the reachability plot. Note that the blue and
red clusters are adjacent in the reachability plot, and can be hierarchically
represented as children of a larger parent cluster.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_optics.py`
.. dropdown:: Comparison with DBSCAN
The results from OPTICS ``cluster_optics_dbscan`` method and DBSCAN are very
similar, but not always identical; specifically, labeling of periphery and noise
points. This is in part because the first samples of each dense area processed
by OPTICS have a large reachability value while being close to other points in
their area, and will thus sometimes be marked as noise rather than periphery.
This affects adjacent points when they are considered as candidates for being
marked as either periphery or noise.
Note that for any single value of ``eps``, DBSCAN will tend to have a shorter
run time than OPTICS; however, for repeated runs at varying ``eps`` values, a
single run of OPTICS may require less cumulative runtime than DBSCAN. It is also
important to note that OPTICS' output is close to DBSCAN's only if ``eps`` and
``max_eps`` are close.
.. dropdown:: Computational Complexity
Spatial indexing trees are used to avoid calculating the full distance matrix,
and allow for efficient memory usage on large sets of samples. Different
distance metrics can be supplied via the ``metric`` keyword.
For large datasets, similar (but not identical) results can be obtained via
:class:`HDBSCAN`. The HDBSCAN implementation is multithreaded, and has better
algorithmic runtime complexity than OPTICS, at the cost of worse memory scaling.
For extremely large datasets that exhaust system memory using HDBSCAN, OPTICS
will maintain :math:`n` (as opposed to :math:`n^2`) memory scaling; however,
tuning of the ``max_eps`` parameter will likely need to be used to give a
solution in a reasonable amount of wall time.
.. dropdown:: References
* "OPTICS: ordering points to identify the clustering structure." Ankerst,
Mihael, Markus M. Breunig, Hans-Peter Kriegel, and Jörg Sander. In ACM Sigmod
Record, vol. 28, no. 2, pp. 49-60. ACM, 1999.
.. _birch:
BIRCH
=====
The :class:`Birch` algorithm builds a tree called the Clustering Feature Tree (CFT)
for the given data. The data is essentially lossy compressed to a set of
Clustering Feature nodes (CF Nodes). The CF Nodes have a number of
subclusters called Clustering Feature subclusters (CF Subclusters)
and these CF Subclusters located in the non-terminal CF Nodes
can have CF Nodes as children.
The CF Subclusters hold the necessary information for clustering which prevents
the need to hold the entire input data in memory. This information includes:
- Number of samples in a subcluster.
- Linear Sum - An n-dimensional vector holding the sum of all samples.
- Squared Sum - Sum of the squared L2 norm of all samples.
- Centroids - To avoid recalculation: linear sum / n_samples.
- Squared norm of the centroids.
The BIRCH algorithm has two parameters, the threshold and the branching factor.
The branching factor limits the number of subclusters in a node and the
threshold limits the distance between the entering sample and the existing
subclusters.
This algorithm can be viewed as an instance or data reduction method,
since it reduces the input data to a set of subclusters which are obtained directly
from the leaves of the CFT. This reduced data can be further processed by feeding
it into a global clusterer. This global clusterer can be set by ``n_clusters``.
If ``n_clusters`` is set to None, the subclusters from the leaves are directly
read off, otherwise a global clustering step labels these subclusters into global
clusters (labels) and the samples are mapped to the global label of the nearest subcluster.
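To illustrate the two modes of ``n_clusters`` (toy data; threshold and
branching factor are illustrative):

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(60, 2) + c for c in ([0, 0], [6, 6])])

# n_clusters=None: read subcluster labels straight off the CFT leaves.
brc = Birch(threshold=0.5, branching_factor=50, n_clusters=None).fit(X)
print(len(brc.subcluster_centers_))  # number of leaf subclusters

# With a global step, subclusters are grouped into n_clusters final labels.
brc2 = Birch(threshold=0.5, branching_factor=50, n_clusters=2).fit(X)
print(sorted(set(brc2.labels_)))
```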
.. dropdown:: Algorithm description
- A new sample is inserted into the root of the CF Tree which is a CF Node. It
is then merged with the subcluster of the root, that has the smallest radius
after merging, constrained by the threshold and branching factor conditions.
If the subcluster has any child node, then this is done repeatedly till it
reaches a leaf. After finding the nearest subcluster in the leaf, the
properties of this subcluster and the parent subclusters are recursively
updated.
- If the radius of the subcluster obtained by merging the new sample and the
nearest subcluster is greater than the square of the threshold and if the
number of subclusters is greater than the branching factor, then a space is
temporarily allocated to this new sample. The two farthest subclusters are
taken and the subclusters are divided into two groups on the basis of the
distance between these subclusters.
- If this split node has a parent subcluster and there is room for a new
subcluster, then the parent is split into two. If there is no room, then this
node is again split into two and the process is continued recursively, till it
reaches the root.
.. dropdown:: BIRCH or MiniBatchKMeans?
- BIRCH does not scale very well to high dimensional data. As a rule of thumb if
``n_features`` is greater than twenty, it is generally better to use MiniBatchKMeans.
- If the number of instances of data needs to be reduced, or if one wants a
large number of subclusters either as a preprocessing step or otherwise,
BIRCH is more useful than MiniBatchKMeans.
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_birch_vs_minibatchkmeans_001.png
:target: ../auto_examples/cluster/plot_birch_vs_minibatchkmeans.html
.. dropdown:: How to use partial_fit?
To avoid the computation of global clustering, for every call of ``partial_fit``
the user is advised:
1. To set ``n_clusters=None`` initially.
2. Train all data by multiple calls to partial_fit.
3. Set ``n_clusters`` to a required value using
``brc.set_params(n_clusters=n_clusters)``.
4. Call ``partial_fit`` finally with no arguments, i.e. ``brc.partial_fit()``
which performs the global clustering.
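The four steps above can be sketched as follows (toy streamed chunks; data and
values are illustrative):

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.RandomState(0)
chunks = [rng.randn(50, 2) + c for c in ([0, 0], [8, 8], [0, 8])]

brc = Birch(n_clusters=None)          # 1. defer the global clustering
for chunk in chunks:                  # 2. stream the data in
    brc.partial_fit(chunk)
brc.set_params(n_clusters=3)          # 3. choose the final cluster count
brc.partial_fit()                     # 4. global clustering only, no new data

labels = brc.predict(np.vstack(chunks))
print(len(set(labels)))
```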
.. dropdown:: References
* Tian Zhang, Raghu Ramakrishnan, Miron Livny BIRCH: An efficient data
clustering method for large databases.
https://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf
* Roberto Perdisci JBirch - Java implementation of BIRCH clustering algorithm
https://code.google.com/archive/p/jbirch
.. _clustering_evaluation:
Clustering performance evaluation
=================================
Evaluating the performance of a clustering algorithm is not as trivial as
counting the number of errors or the precision and recall of a supervised
classification algorithm. In particular, any evaluation metric should not
take the absolute values of the cluster labels into account, but rather
whether this clustering defines separations of the data similar to some ground
truth set of classes, or satisfies some assumption such that members
of the same class are more similar than members of different
classes according to some similarity metric.
.. currentmodule:: sklearn.metrics
.. _rand_score:
.. _adjusted_rand_score:
Rand index
----------
Given the knowledge of the ground truth class assignments
``labels_true`` and our clustering algorithm assignments of the same
samples ``labels_pred``, the **(adjusted or unadjusted) Rand index**
is a function that measures the **similarity** of the two assignments,
ignoring permutations::
>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> metrics.rand_score(labels_true, labels_pred)
0.66...
The Rand index does not ensure that a random labelling will obtain a value
close to 0.0. The adjusted Rand index **corrects for chance** and
will give such a baseline::
>>> metrics.adjusted_rand_score(labels_true, labels_pred)
0.24...
As with all clustering metrics, one can permute 0 and 1 in the predicted
labels, rename 2 to 3, and get the same score::
>>> labels_pred = [1, 1, 0, 0, 3, 3]
>>> metrics.rand_score(labels_true, labels_pred)
0.66...
>>> metrics.adjusted_rand_score(labels_true, labels_pred)
0.24...
Furthermore, both :func:`rand_score` and :func:`adjusted_rand_score` are
**symmetric**: swapping the arguments does not change the scores. They can
thus be used as **consensus measures**::
>>> metrics.rand_score(labels_pred, labels_true)
0.66...
>>> metrics.adjusted_rand_score(labels_pred, labels_true)
0.24...
Perfect labeling is scored 1.0::
>>> labels_pred = labels_true[:]
>>> metrics.rand_score(labels_true, labels_pred)
1.0
>>> metrics.adjusted_rand_score(labels_true, labels_pred)
1.0
Poorly agreeing labels (e.g. independent labelings) have lower scores,
and for the adjusted Rand index the score will be negative or close to
zero. However, for the unadjusted Rand index the score, while lower,
will not necessarily be close to zero::
>>> labels_true = [0, 0, 0, 0, 0, 0, 1, 1]
>>> labels_pred = [0, 1, 2, 3, 4, 5, 5, 6]
>>> metrics.rand_score(labels_true, labels_pred)
0.39...
>>> metrics.adjusted_rand_score(labels_true, labels_pred)
-0.07...
.. topic:: Advantages:
- **Interpretability**: The unadjusted Rand index is proportional to the
number of sample pairs whose labels are the same in both `labels_pred` and
`labels_true`, or are different in both.
- **Random (uniform) label assignments have an adjusted Rand index score close
to 0.0** for any value of ``n_clusters`` and ``n_samples`` (which is not the
case for the unadjusted Rand index or the V-measure for instance).
- **Bounded range**: Lower values indicate different labelings, similar
clusterings have a high (adjusted or unadjusted) Rand index, 1.0 is the
perfect match score. The score range is [0, 1] for the unadjusted Rand index
and [-0.5, 1] for the adjusted Rand index.
- **No assumption is made on the cluster structure**: The (adjusted or
unadjusted) Rand index can be used to compare all kinds of clustering
algorithms, and can be used to compare clustering algorithms such as k-means,
which assumes isotropic blob shapes, with results of spectral clustering
algorithms, which can find clusters with "folded" shapes.
.. topic:: Drawbacks:
- Contrary to inertia, the **(adjusted or unadjusted) Rand index requires
knowledge of the ground truth classes** which is almost never available in
practice or requires manual assignment by human annotators (as in the
supervised learning setting).
However, the (adjusted or unadjusted) Rand index can also be useful in a purely
unsupervised setting as a building block for a Consensus Index that can be
used for clustering model selection (TODO).
- The **unadjusted Rand index is often close to 1.0** even if the clusterings
themselves differ significantly. This can be understood when interpreting
the Rand index as the accuracy of element pair labeling resulting from the
clusterings: In practice there often is a majority of element pairs that are
assigned the ``different`` pair label under both the predicted and the
ground truth clustering resulting in a high proportion of pair labels that
agree, which leads subsequently to a high score.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
Analysis of the impact of the dataset size on the value of
clustering measures for random assignments.
.. dropdown:: Mathematical formulation
If C is a ground truth class assignment and K the clustering, let us define
:math:`a` and :math:`b` as:
- :math:`a`, the number of pairs of elements that are in the same set in C and
in the same set in K
- :math:`b`, the number of pairs of elements that are in different sets in C and
in different sets in K
The unadjusted Rand index is then given by:
.. math:: \text{RI} = \frac{a + b}{C_2^{n_{samples}}}
where :math:`C_2^{n_{samples}}` is the total number of possible pairs in the
dataset. It does not matter if the calculation is performed on ordered pairs or
unordered pairs as long as the calculation is performed consistently.
However, the Rand index does not guarantee that random label assignments will
get a value close to zero (esp. if the number of clusters is in the same order
of magnitude as the number of samples).
To counter this effect we can discount the expected RI :math:`E[\text{RI}]` of
random labelings by defining the adjusted Rand index as follows:
.. math:: \text{ARI} = \frac{\text{RI} - E[\text{RI}]}{\max(\text{RI}) - E[\text{RI}]}
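The definitions of :math:`a`, :math:`b` and RI can be checked by brute force
over all sample pairs (illustrative code, using the same label lists as the
doctests earlier in this section):

```python
from itertools import combinations
from math import comb

from sklearn.metrics import rand_score

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

pairs = list(combinations(range(len(labels_true)), 2))
# a: pairs grouped together in both assignments.
a = sum(labels_true[i] == labels_true[j] and labels_pred[i] == labels_pred[j]
        for i, j in pairs)
# b: pairs separated in both assignments.
b = sum(labels_true[i] != labels_true[j] and labels_pred[i] != labels_pred[j]
        for i, j in pairs)
ri = (a + b) / comb(len(labels_true), 2)
print(ri, rand_score(labels_true, labels_pred))
```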
.. dropdown:: References
* `Comparing Partitions
<https://link.springer.com/article/10.1007%2FBF01908075>`_ L. Hubert and P.
Arabie, Journal of Classification 1985
* `Properties of the Hubert-Arabie adjusted Rand index
<https://psycnet.apa.org/record/2004-17801-007>`_ D. Steinley, Psychological
Methods 2004
* `Wikipedia entry for the Rand index
<https://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index>`_
* :doi:`Minimum adjusted Rand index for two clusterings of a given size, 2022, J. E. Chacón and A. I. Rastrojo <10.1007/s11634-022-00491-w>`
.. _mutual_info_score:
Mutual Information based scores
-------------------------------
Given the knowledge of the ground truth class assignments ``labels_true`` and
our clustering algorithm assignments of the same samples ``labels_pred``, the
**Mutual Information** is a function that measures the **agreement** of the two
assignments, ignoring permutations. Two different normalized versions of this
measure are available, **Normalized Mutual Information (NMI)** and **Adjusted
Mutual Information (AMI)**. NMI is often used in the literature, while AMI was
proposed more recently and is **normalized against chance**::
>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred) # doctest: +SKIP
0.22504...
One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get
the same score::
>>> labels_pred = [1, 1, 0, 0, 3, 3]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred) # doctest: +SKIP
0.22504...
All of :func:`mutual_info_score`, :func:`adjusted_mutual_info_score` and
:func:`normalized_mutual_info_score` are symmetric: swapping the arguments does
not change the score. Thus they can be used as a **consensus measure**::
>>> metrics.adjusted_mutual_info_score(labels_pred, labels_true) # doctest: +SKIP
0.22504...
Perfect labeling is scored 1.0::
>>> labels_pred = labels_true[:]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred) # doctest: +SKIP
1.0
>>> metrics.normalized_mutual_info_score(labels_true, labels_pred) # doctest: +SKIP
1.0
This is not true for ``mutual_info_score``, which is therefore harder to judge::
>>> metrics.mutual_info_score(labels_true, labels_pred) # doctest: +SKIP
0.69...
Bad (e.g. independent) labelings have non-positive scores::
>>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
>>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred) # doctest: +SKIP
-0.10526...
.. topic:: Advantages:
- **Random (uniform) label assignments have an AMI score close to 0.0** for any
value of ``n_clusters`` and ``n_samples`` (which is not the case for raw
Mutual Information or the V-measure for instance).
- **Upper bound of 1**: Values close to zero indicate two label assignments
that are largely independent, while values close to one indicate significant
agreement. Further, an AMI of exactly 1 indicates that the two label
assignments are equal (with or without permutation).
.. topic:: Drawbacks:
- Contrary to inertia, **MI-based measures require the knowledge of the ground
truth classes**, which is almost never available in practice or requires manual
assignment by human annotators (as in the supervised learning setting).
However MI-based measures can also be useful in a purely unsupervised setting
as a building block for a Consensus Index that can be used for clustering
model selection.
- NMI and MI are not adjusted against chance.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`: Analysis
of the impact of the dataset size on the value of clustering measures for random
assignments. This example also includes the Adjusted Rand Index.
.. dropdown:: Mathematical formulation
Assume two label assignments (of the same N objects), :math:`U` and :math:`V`.
Their entropy is the amount of uncertainty for a partition set, defined by:
.. math:: H(U) = - \sum_{i=1}^{|U|}P(i)\log(P(i))
where :math:`P(i) = |U_i| / N` is the probability that an object picked at
random from :math:`U` falls into class :math:`U_i`. Likewise for :math:`V`:
.. math:: H(V) = - \sum_{j=1}^{|V|}P'(j)\log(P'(j))
With :math:`P'(j) = |V_j| / N`. The mutual information (MI) between :math:`U`
and :math:`V` is calculated by:
.. math:: \text{MI}(U, V) = \sum_{i=1}^{|U|}\sum_{j=1}^{|V|}P(i, j)\log\left(\frac{P(i,j)}{P(i)P'(j)}\right)
where :math:`P(i, j) = |U_i \cap V_j| / N` is the probability that an object
picked at random falls into both classes :math:`U_i` and :math:`V_j`.
It can also be expressed in set cardinality formulation:
.. math:: \text{MI}(U, V) = \sum_{i=1}^{|U|} \sum_{j=1}^{|V|} \frac{|U_i \cap V_j|}{N}\log\left(\frac{N|U_i \cap V_j|}{|U_i||V_j|}\right)
The normalized mutual information is defined as
.. math:: \text{NMI}(U, V) = \frac{\text{MI}(U, V)}{\text{mean}(H(U), H(V))}
The value of the mutual information, and also of the normalized variant, is not
adjusted for chance and will tend to increase as the number of different labels
(clusters) increases, regardless of the actual amount of "mutual information"
between the label assignments.
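The set cardinality formula can be verified term by term. The following sketch (illustrative only) evaluates MI directly from the contingency counts and compares it with :func:`mutual_info_score`:

```python
# Illustrative sketch: computing MI(U, V) from the set-cardinality
# formula and comparing it with sklearn's mutual_info_score.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

U = [0, 0, 0, 1, 1, 1]
V = [0, 0, 1, 1, 2, 2]
N = len(U)

n_ij = contingency_matrix(U, V)   # |U_i ∩ V_j|
a = n_ij.sum(axis=1)              # |U_i|
b = n_ij.sum(axis=0)              # |V_j|

mi = sum(
    n_ij[i, j] / N * np.log(N * n_ij[i, j] / (a[i] * b[j]))
    for i in range(n_ij.shape[0])
    for j in range(n_ij.shape[1])
    if n_ij[i, j] > 0               # zero-count cells contribute nothing
)
```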
The expected value for the mutual information can be calculated using the
following equation [VEB2009]_. In this equation, :math:`a_i = |U_i|` (the number
of elements in :math:`U_i`) and :math:`b_j = |V_j|` (the number of elements in
:math:`V_j`).
.. math:: E[\text{MI}(U,V)]=\sum_{i=1}^{|U|} \sum_{j=1}^{|V|} \sum_{n_{ij}=(a_i+b_j-N)^+
}^{\min(a_i, b_j)} \frac{n_{ij}}{N}\log \left( \frac{N \cdot n_{ij}}{a_i b_j}\right)
\frac{a_i!b_j!(N-a_i)!(N-b_j)!}{N!n_{ij}!(a_i-n_{ij})!(b_j-n_{ij})!
(N-a_i-b_j+n_{ij})!}
Using the expected value, the adjusted mutual information can then be calculated
using a similar form to that of the adjusted Rand index:
.. math:: \text{AMI} = \frac{\text{MI} - E[\text{MI}]}{\text{mean}(H(U), H(V)) - E[\text{MI}]}
For normalized mutual information and adjusted mutual information, the
normalizing value is typically some *generalized* mean of the entropies of each
clustering. Various generalized means exist, and no firm rules exist for
preferring one over the others. The decision is largely made on a
field-by-field basis;
for instance, in community detection, the arithmetic mean is most common. Each
normalizing method provides "qualitatively similar behaviours" [YAT2016]_. In
our implementation, this is controlled by the ``average_method`` parameter.
Vinh et al. (2010) named variants of NMI and AMI by their averaging method
[VEB2010]_. Their 'sqrt' and 'sum' averages are the geometric and arithmetic
means; we use these more broadly common names.
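As a short sketch of how the choice of normalizer plays out in practice (illustrative only): a larger generalized mean gives a larger denominator, so the normalized score shrinks as the averaging method moves from ``min`` to ``max``:

```python
# Illustrative sketch: the four built-in generalized means accepted by
# average_method, ordered min <= geometric <= arithmetic <= max.
from sklearn.metrics import normalized_mutual_info_score

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

scores = {
    m: normalized_mutual_info_score(labels_true, labels_pred,
                                    average_method=m)
    for m in ("min", "geometric", "arithmetic", "max")
}
# A larger mean in the denominator yields a smaller normalized score.
```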
.. rubric:: References
* Strehl, Alexander, and Joydeep Ghosh (2002). "Cluster ensembles - a
knowledge reuse framework for combining multiple partitions". Journal of
Machine Learning Research 3: 583-617. `doi:10.1162/153244303321897735
<http://strehl.com/download/strehl-jmlr02.pdf>`_.
* `Wikipedia entry for the (normalized) Mutual Information
<https://en.wikipedia.org/wiki/Mutual_Information>`_
* `Wikipedia entry for the Adjusted Mutual Information
<https://en.wikipedia.org/wiki/Adjusted_Mutual_Information>`_
.. [VEB2009] Vinh, Epps, and Bailey, (2009). "Information theoretic measures
for clusterings comparison". Proceedings of the 26th Annual International
Conference on Machine Learning - ICML '09. `doi:10.1145/1553374.1553511
<https://dl.acm.org/citation.cfm?doid=1553374.1553511>`_. ISBN
9781605585161.
.. [VEB2010] Vinh, Epps, and Bailey, (2010). "Information Theoretic Measures
for Clusterings Comparison: Variants, Properties, Normalization and
Correction for Chance". `JMLR
<https://jmlr.csail.mit.edu/papers/volume11/vinh10a/vinh10a.pdf>`_.
.. [YAT2016] Yang, Algesheimer, and Tessone, (2016). "A comparative analysis
of community detection algorithms on artificial networks". Scientific
Reports 6: 30750. `doi:10.1038/srep30750
<https://www.nature.com/articles/srep30750>`_.
.. _homogeneity_completeness:
Homogeneity, completeness and V-measure
---------------------------------------
Given the knowledge of the ground truth class assignments of the samples,
it is possible to define some intuitive metrics using conditional entropy
analysis.
In particular Rosenberg and Hirschberg (2007) define the following two
desirable objectives for any cluster assignment:
- **homogeneity**: each cluster contains only members of a single class.
- **completeness**: all members of a given class are assigned to the same
cluster.
We can turn those concepts into scores :func:`homogeneity_score` and
:func:`completeness_score`. Both are bounded below by 0.0 and above by
1.0 (higher is better)::
>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> metrics.homogeneity_score(labels_true, labels_pred)
0.66...
>>> metrics.completeness_score(labels_true, labels_pred)
0.42...
Their harmonic mean called **V-measure** is computed by
:func:`v_measure_score`::
>>> metrics.v_measure_score(labels_true, labels_pred)
0.51...
This function's formula is as follows:
.. math:: v = \frac{(1 + \beta) \times \text{homogeneity} \times \text{completeness}}{(\beta \times \text{homogeneity} + \text{completeness})}
``beta`` defaults to a value of 1.0. When using a value less than 1 for beta::
>>> metrics.v_measure_score(labels_true, labels_pred, beta=0.6)
0.54...
more weight will be attributed to homogeneity, and using a value greater than 1::
>>> metrics.v_measure_score(labels_true, labels_pred, beta=1.8)
0.48...
more weight will be attributed to completeness.
The V-measure is actually equivalent to the normalized mutual information (NMI)
discussed above, with the aggregation function being the arithmetic mean [B2011]_.
Homogeneity, completeness and V-measure can be computed at once using
:func:`homogeneity_completeness_v_measure` as follows::
>>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
(0.66..., 0.42..., 0.51...)
The following clustering assignment is slightly better, since it is
homogeneous but not complete::
>>> labels_pred = [0, 0, 0, 1, 2, 2]
>>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
(1.0, 0.68..., 0.81...)
.. note::
:func:`v_measure_score` is **symmetric**: it can be used to evaluate
the **agreement** of two independent assignments on the same dataset.
This is not the case for :func:`completeness_score` and
:func:`homogeneity_score`: both are bound by the relationship::
homogeneity_score(a, b) == completeness_score(b, a)
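The asymmetry noted above can be checked directly with a short sketch (illustrative only): homogeneity and completeness swap roles when the arguments are swapped, while the V-measure does not change:

```python
# Illustrative sketch: homogeneity and completeness are mirror images
# under argument swapping; the V-measure is symmetric.
from sklearn.metrics import (completeness_score, homogeneity_score,
                             v_measure_score)

a = [0, 0, 0, 1, 1, 1]
b = [0, 0, 1, 1, 2, 2]

swapped_ok = homogeneity_score(a, b) == completeness_score(b, a)
symmetric_ok = v_measure_score(a, b) == v_measure_score(b, a)
```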
.. topic:: Advantages:
- **Bounded scores**: 0.0 is as bad as it can be, 1.0 is a perfect score.
- Intuitive interpretation: clustering with bad V-measure can be
**qualitatively analyzed in terms of homogeneity and completeness** to
better understand what 'kind' of mistakes the assignment makes.
- **No assumption is made on the cluster structure**: can be used to compare
clustering algorithms such as k-means, which assumes isotropic blob shapes,
with results of spectral clustering algorithms, which can find clusters with
"folded" shapes.
.. topic:: Drawbacks:
- The previously introduced metrics are **not normalized with regard to
random labeling**: this means that depending on the number of samples,
clusters and ground truth classes, a completely random labeling will not
always yield the same values for homogeneity, completeness and hence
v-measure. In particular **random labeling won't yield zero scores
especially when the number of clusters is large**.
This problem can safely be ignored when the number of samples is more than a
thousand and the number of clusters is less than 10. **For smaller sample
sizes or larger number of clusters it is safer to use an adjusted index such
as the Adjusted Rand Index (ARI)**.
.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_adjusted_for_chance_measures_001.png
:target: ../auto_examples/cluster/plot_adjusted_for_chance_measures.html
:align: center
:scale: 100
- These metrics **require the knowledge of the ground truth classes**, which
are almost never available in practice or require manual assignment by human
annotators (as in the supervised learning setting).
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`: Analysis
of the impact of the dataset size on the value of clustering measures for
random assignments.
.. dropdown:: Mathematical formulation
Homogeneity and completeness scores are formally given by:
.. math:: h = 1 - \frac{H(C|K)}{H(C)}
.. math:: c = 1 - \frac{H(K|C)}{H(K)}
where :math:`H(C|K)` is the **conditional entropy of the classes given the
cluster assignments** and is given by:
.. math:: H(C|K) = - \sum_{c=1}^{|C|} \sum_{k=1}^{|K|} \frac{n_{c,k}}{n}
\cdot \log\left(\frac{n_{c,k}}{n_k}\right)
and :math:`H(C)` is the **entropy of the classes** and is given by:
.. math:: H(C) = - \sum_{c=1}^{|C|} \frac{n_c}{n} \cdot \log\left(\frac{n_c}{n}\right)
with :math:`n` the total number of samples, :math:`n_c` and :math:`n_k` the
number of samples respectively belonging to class :math:`c` and cluster
:math:`k`, and finally :math:`n_{c,k}` the number of samples from class
:math:`c` assigned to cluster :math:`k`.
The **conditional entropy of clusters given class** :math:`H(K|C)` and the
**entropy of clusters** :math:`H(K)` are defined in a symmetric manner.
Rosenberg and Hirschberg further define **V-measure** as the **harmonic mean of
homogeneity and completeness**:
.. math:: v = 2 \cdot \frac{h \cdot c}{h + c}
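The harmonic-mean identity above is easy to verify numerically; the following sketch (illustrative only) rebuilds the V-measure from its two components:

```python
# Illustrative sketch: V-measure as the harmonic mean of homogeneity
# and completeness (the default beta = 1 case).
from sklearn.metrics import (completeness_score, homogeneity_score,
                             v_measure_score)

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

h = homogeneity_score(labels_true, labels_pred)
c = completeness_score(labels_true, labels_pred)
v = 2 * h * c / (h + c)
```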
.. rubric:: References
* `V-Measure: A conditional entropy-based external cluster evaluation measure
<https://aclweb.org/anthology/D/D07/D07-1043.pdf>`_ Andrew Rosenberg and Julia
Hirschberg, 2007
.. [B2011] `Identification and Characterization of Events in Social Media
<http://www.cs.columbia.edu/~hila/hila-thesis-distributed.pdf>`_, Hila
Becker, PhD Thesis.
.. _fowlkes_mallows_scores:
Fowlkes-Mallows scores
----------------------
The original Fowlkes-Mallows index (FMI) was intended to measure the similarity
between two clustering results, which is inherently an unsupervised comparison.
The supervised adaptation of the Fowlkes-Mallows index
(as implemented in :func:`sklearn.metrics.fowlkes_mallows_score`) can be used
when the ground truth class assignments of the samples are known.
The FMI is defined as the geometric mean of the pairwise precision and recall:
.. math:: \text{FMI} = \frac{\text{TP}}{\sqrt{(\text{TP} + \text{FP}) (\text{TP} + \text{FN})}}
In the above formula:
* ``TP`` (**True Positive**): The number of pairs of points that are clustered together
both in the true labels and in the predicted labels.
* ``FP`` (**False Positive**): The number of pairs of points that are clustered together
in the predicted labels but not in the true labels.
* ``FN`` (**False Negative**): The number of pairs of points that are clustered together
in the true labels but not in the predicted labels.
The score ranges from 0 to 1. A high value indicates a good similarity
between two clusterings.
>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
0.47140...
One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get
the same score::
>>> labels_pred = [1, 1, 0, 0, 3, 3]
>>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
0.47140...
Perfect labeling is scored 1.0::
>>> labels_pred = labels_true[:]
>>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
1.0
Bad (e.g. independent) labelings have zero scores::
>>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
>>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
>>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
0.0
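The pairwise precision/recall formulation can be checked directly. The following sketch (illustrative only) reads TP, FP and FN off the pair confusion matrix (whose entries count ordered pairs, so the factor of two cancels in the ratio) and rebuilds the FMI:

```python
# Illustrative sketch: deriving the FMI from pairwise TP/FP/FN counts
# obtained from the pair confusion matrix.
import numpy as np
from sklearn.metrics import fowlkes_mallows_score
from sklearn.metrics.cluster import pair_confusion_matrix

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

C = pair_confusion_matrix(labels_true, labels_pred)
tp, fp, fn = C[1, 1], C[0, 1], C[1, 0]   # ordered-pair counts
fmi = tp / np.sqrt((tp + fp) * (tp + fn))
```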
.. topic:: Advantages:
- **Random (uniform) label assignments have a FMI score close to 0.0** for any
value of ``n_clusters`` and ``n_samples`` (which is not the case for raw
Mutual Information or the V-measure for instance).
- **Upper-bounded at 1**: Values close to zero indicate two label assignments
that are largely independent, while values close to one indicate significant
agreement. Further, values of exactly 0 indicate **purely** independent
label assignments and a FMI of exactly 1 indicates that the two label
assignments are equal (with or without permutation).
- **No assumption is made on the cluster structure**: can be used to compare
clustering algorithms such as k-means which assumes isotropic blob shapes
with results of spectral clustering algorithms which can find cluster with
"folded" shapes.
.. topic:: Drawbacks:
- Contrary to inertia, **FMI-based measures require the knowledge of the
ground truth classes**, which is almost never available in practice or requires
manual assignment by human annotators (as in the supervised learning
setting).
.. dropdown:: References
* E. B. Fowlkes and C. L. Mallows, 1983. "A method for comparing two
hierarchical clusterings". Journal of the American Statistical Association.
https://www.tandfonline.com/doi/abs/10.1080/01621459.1983.10478008
* `Wikipedia entry for the Fowlkes-Mallows Index
<https://en.wikipedia.org/wiki/Fowlkes-Mallows_index>`_
.. _silhouette_coefficient:
Silhouette Coefficient
----------------------
If the ground truth labels are not known, evaluation must be performed using
the model itself. The Silhouette Coefficient
(:func:`sklearn.metrics.silhouette_score`)
is an example of such an evaluation, where a
higher Silhouette Coefficient score relates to a model with better defined
clusters. The Silhouette Coefficient is defined for each sample and is composed
of two scores:
- **a**: The mean distance between a sample and all other points in the same
class.
- **b**: The mean distance between a sample and all other points in the *next
nearest cluster*.
The Silhouette Coefficient *s* for a single sample is then given as:
.. math:: s = \frac{b - a}{\max(a, b)}
The Silhouette Coefficient for a set of samples is given as the mean of the
Silhouette Coefficient for each sample.
>>> from sklearn import metrics
>>> from sklearn.metrics import pairwise_distances
>>> from sklearn import datasets
>>> X, y = datasets.load_iris(return_X_y=True)
In normal usage, the Silhouette Coefficient is applied to the results of a
cluster analysis.
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
>>> labels = kmeans_model.labels_
>>> metrics.silhouette_score(X, labels, metric='euclidean')
0.55...
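Since the set-level score is defined as the mean of the per-sample coefficients, the per-sample values can also be inspected directly with :func:`silhouette_samples`; the sketch below (illustrative only) confirms the relationship:

```python
# Illustrative sketch: per-sample silhouette values; their mean equals
# the overall silhouette_score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_samples, silhouette_score

X, _ = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, random_state=1, n_init=10).fit_predict(X)

per_sample = silhouette_samples(X, labels)   # one value per sample
overall = silhouette_score(X, labels)
```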
.. topic:: Advantages:
- The score is bounded between -1 for incorrect clustering and +1 for highly
dense clustering. Scores around zero indicate overlapping clusters.
- The score is higher when clusters are dense and well separated, which
relates to a standard concept of a cluster.
.. topic:: Drawbacks:
- The Silhouette Coefficient is generally higher for convex clusters than
other concepts of clusters, such as density based clusters like those
obtained through DBSCAN.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py` : In
this example the silhouette analysis is used to choose an optimal value for
n_clusters.
.. dropdown:: References
* Peter J. Rousseeuw (1987). :doi:`"Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis"<10.1016/0377-0427(87)90125-7>`.
Computational and Applied Mathematics 20: 53-65.
.. _calinski_harabasz_index:
Calinski-Harabasz Index
-----------------------
If the ground truth labels are not known, the Calinski-Harabasz index
(:func:`sklearn.metrics.calinski_harabasz_score`) - also known as the Variance
Ratio Criterion - can be used to evaluate the model, where a higher
Calinski-Harabasz score relates to a model with better defined clusters.
The index is the ratio of the sum of between-clusters dispersion and of
within-cluster dispersion for all clusters (where dispersion is defined as the
sum of distances squared):
>>> from sklearn import metrics
>>> from sklearn.metrics import pairwise_distances
>>> from sklearn import datasets
>>> X, y = datasets.load_iris(return_X_y=True)
In normal usage, the Calinski-Harabasz index is applied to the results of a
cluster analysis:
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
>>> labels = kmeans_model.labels_
>>> metrics.calinski_harabasz_score(X, labels)
561.59...
.. topic:: Advantages:
- The score is higher when clusters are dense and well separated, which
relates to a standard concept of a cluster.
- The score is fast to compute.
.. topic:: Drawbacks:
- The Calinski-Harabasz index is generally higher for convex clusters than
other concepts of clusters, such as density based clusters like those
obtained through DBSCAN.
.. dropdown:: Mathematical formulation
For a set of data :math:`E` of size :math:`n_E` which has been clustered into
:math:`k` clusters, the Calinski-Harabasz score :math:`s` is defined as the
ratio of the between-clusters dispersion mean and the within-cluster
dispersion:
.. math::
s = \frac{\mathrm{tr}(B_k)}{\mathrm{tr}(W_k)} \times \frac{n_E - k}{k - 1}
where :math:`\mathrm{tr}(B_k)` is the trace of the between-group dispersion matrix
and :math:`\mathrm{tr}(W_k)` is the trace of the within-cluster dispersion
matrix defined by:
.. math:: W_k = \sum_{q=1}^k \sum_{x \in C_q} (x - c_q) (x - c_q)^T
.. math:: B_k = \sum_{q=1}^k n_q (c_q - c_E) (c_q - c_E)^T
with :math:`C_q` the set of points in cluster :math:`q`, :math:`c_q` the
center of cluster :math:`q`, :math:`c_E` the center of :math:`E`, and
:math:`n_q` the number of points in cluster :math:`q`.
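The definition above can be reproduced by hand. The sketch below (illustrative only; it assumes the trace of each outer-product sum equals the corresponding sum of squared norms) computes the score from the dispersion traces and compares it with :func:`calinski_harabasz_score`:

```python
# Illustrative sketch: Calinski-Harabasz score from the traces of the
# between- and within-cluster dispersion matrices.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import calinski_harabasz_score

X, _ = load_iris(return_X_y=True)
k = 3
labels = KMeans(n_clusters=k, random_state=1, n_init=10).fit_predict(X)

c_E = X.mean(axis=0)                      # center of the whole dataset
tr_B = tr_W = 0.0
for q in range(k):
    X_q = X[labels == q]
    c_q = X_q.mean(axis=0)                # center of cluster q
    tr_W += ((X_q - c_q) ** 2).sum()      # trace of W_k contribution
    tr_B += len(X_q) * ((c_q - c_E) ** 2).sum()  # trace of B_k contribution

s = tr_B / tr_W * (len(X) - k) / (k - 1)
```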
.. dropdown:: References
* Caliński, T., & Harabasz, J. (1974). `"A Dendrite Method for Cluster Analysis"
<https://www.researchgate.net/publication/233096619_A_Dendrite_Method_for_Cluster_Analysis>`_.
:doi:`Communications in Statistics-theory and Methods 3: 1-27
<10.1080/03610927408827101>`.
.. _davies-bouldin_index:
Davies-Bouldin Index
--------------------
If the ground truth labels are not known, the Davies-Bouldin index
(:func:`sklearn.metrics.davies_bouldin_score`) can be used to evaluate the
model, where a lower Davies-Bouldin index relates to a model with better
separation between the clusters.
This index signifies the average 'similarity' between clusters, where the
similarity is a measure that compares the distance between clusters with the
size of the clusters themselves.
Zero is the lowest possible score. Values closer to zero indicate a better
partition.
In normal usage, the Davies-Bouldin index is applied to the results of a
cluster analysis as follows:
>>> from sklearn import datasets
>>> iris = datasets.load_iris()
>>> X = iris.data
>>> from sklearn.cluster import KMeans
>>> from sklearn.metrics import davies_bouldin_score
>>> kmeans = KMeans(n_clusters=3, random_state=1).fit(X)
>>> labels = kmeans.labels_
>>> davies_bouldin_score(X, labels)
0.666...
.. topic:: Advantages:
- The computation of Davies-Bouldin is simpler than that of Silhouette scores.
- The index is solely based on quantities and features inherent to the dataset
as its computation only uses point-wise distances.
.. topic:: Drawbacks:
- The Davies-Bouldin index is generally higher for convex clusters than other
concepts of clusters, such as density based clusters like those obtained
from DBSCAN.
- The usage of centroid distance limits the distance metric to Euclidean
space.
.. dropdown:: Mathematical formulation
The index is defined as the average similarity between each cluster :math:`C_i`
for :math:`i=1, ..., k` and its most similar one :math:`C_j`. In the context of
this index, similarity is defined as a measure :math:`R_{ij}` that trades off:
- :math:`s_i`, the average distance between each point of cluster :math:`i` and
the centroid of that cluster -- also known as the cluster diameter.
- :math:`d_{ij}`, the distance between cluster centroids :math:`i` and
:math:`j`.
A simple choice to construct :math:`R_{ij}` so that it is nonnegative and
symmetric is:
.. math::
R_{ij} = \frac{s_i + s_j}{d_{ij}}
Then the Davies-Bouldin index is defined as:
.. math::
DB = \frac{1}{k} \sum_{i=1}^k \max_{i \neq j} R_{ij}
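The definition can be reproduced step by step. The sketch below (illustrative only; it assumes no two centroids coincide) computes cluster diameters and centroid distances explicitly and compares the result with :func:`davies_bouldin_score`:

```python
# Illustrative sketch: Davies-Bouldin index from cluster diameters s_i
# and centroid distances d_ij.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import davies_bouldin_score

X, _ = load_iris(return_X_y=True)
k = 3
labels = KMeans(n_clusters=k, random_state=1, n_init=10).fit_predict(X)

centroids = np.array([X[labels == q].mean(axis=0) for q in range(k)])
# s_i: mean distance of cluster points to their centroid
s = np.array([
    np.linalg.norm(X[labels == q] - centroids[q], axis=1).mean()
    for q in range(k)
])

db = np.mean([
    max((s[i] + s[j]) / np.linalg.norm(centroids[i] - centroids[j])
        for j in range(k) if j != i)
    for i in range(k)
])
```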
.. dropdown:: References
* Davies, David L.; Bouldin, Donald W. (1979). :doi:`"A Cluster Separation
Measure" <10.1109/TPAMI.1979.4766909>` IEEE Transactions on Pattern Analysis
and Machine Intelligence. PAMI-1 (2): 224-227.
* Halkidi, Maria; Batistakis, Yannis; Vazirgiannis, Michalis (2001). :doi:`"On
Clustering Validation Techniques" <10.1023/A:1012801612483>` Journal of
Intelligent Information Systems, 17(2-3), 107-145.
* `Wikipedia entry for Davies-Bouldin index
<https://en.wikipedia.org/wiki/Davies-Bouldin_index>`_.
.. _contingency_matrix:
Contingency Matrix
------------------
Contingency matrix (:func:`sklearn.metrics.cluster.contingency_matrix`)
reports the intersection cardinality for every true/predicted cluster pair.
The contingency matrix provides sufficient statistics for all clustering
metrics where the samples are independent and identically distributed and
one doesn't need to account for some instances not being clustered.
Here is an example::
>>> from sklearn.metrics.cluster import contingency_matrix
>>> x = ["a", "a", "a", "b", "b", "b"]
>>> y = [0, 0, 1, 1, 2, 2]
>>> contingency_matrix(x, y)
array([[2, 1, 0],
[0, 1, 2]])
The first row of the output array indicates that there are three samples whose
true cluster is "a". Of them, two are in predicted cluster 0, one is in 1,
and none is in 2. The second row indicates that there are three samples
whose true cluster is "b". Of them, none is in predicted cluster 0, one is in
1 and two are in 2.
A :ref:`confusion matrix <confusion_matrix>` for classification is a square
contingency matrix where the order of rows and columns corresponds to a list
of classes.
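Because the contingency matrix is a sufficient statistic, it can be computed once and reused; for example, :func:`mutual_info_score` accepts a precomputed matrix via its ``contingency`` parameter, as the sketch below (illustrative only) shows:

```python
# Illustrative sketch: reusing a precomputed contingency matrix when
# evaluating mutual information.
from sklearn.metrics import mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

labels_true = ["a", "a", "a", "b", "b", "b"]
labels_pred = [0, 0, 1, 1, 2, 2]

C = contingency_matrix(labels_true, labels_pred)
# With contingency given, the label arguments are ignored.
mi_from_C = mutual_info_score(None, None, contingency=C)
```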
.. topic:: Advantages:
- Allows examining the spread of each true cluster across predicted clusters
and vice versa.
- The contingency table calculated is typically utilized in the calculation of
a similarity statistic (like the others listed in this document) between the
two clusterings.
.. topic:: Drawbacks:
- Contingency matrix is easy to interpret for a small number of clusters, but
becomes very hard to interpret for a large number of clusters.
- It doesn't give a single metric to use as an objective for clustering
optimisation.
.. dropdown:: References
* `Wikipedia entry for contingency matrix
<https://en.wikipedia.org/wiki/Contingency_table>`_
.. _pair_confusion_matrix:
Pair Confusion Matrix
---------------------
The pair confusion matrix
(:func:`sklearn.metrics.cluster.pair_confusion_matrix`) is a 2x2
similarity matrix
.. math::
C = \left[\begin{matrix}
C_{00} & C_{01} \\
C_{10} & C_{11}
\end{matrix}\right]
between two clusterings computed by considering all pairs of samples and
counting pairs that are assigned into the same or into different clusters
under the true and predicted clusterings.
It has the following entries:
:math:`C_{00}` : number of pairs with both clusterings having the samples
not clustered together
:math:`C_{10}` : number of pairs with the true label clustering having the
samples clustered together but the other clustering not having the samples
clustered together
:math:`C_{01}` : number of pairs with the true label clustering not having
the samples clustered together but the other clustering having the samples
clustered together
:math:`C_{11}` : number of pairs with both clusterings having the samples
clustered together
Considering a pair of samples that is clustered together a positive pair,
then as in binary classification the count of true negatives is
:math:`C_{00}`, false negatives is :math:`C_{10}`, true positives is
:math:`C_{11}` and false positives is :math:`C_{01}`.
Perfectly matching labelings have all non-zero entries on the
diagonal regardless of actual label values::
>>> from sklearn.metrics.cluster import pair_confusion_matrix
>>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 1])
array([[8, 0],
[0, 4]])
::
>>> pair_confusion_matrix([0, 0, 1, 1], [1, 1, 0, 0])
array([[8, 0],
[0, 4]])
Labelings that assign all class members to the same clusters
are complete but may not always be pure, hence penalized, and
have some off-diagonal non-zero entries::
>>> pair_confusion_matrix([0, 0, 1, 2], [0, 0, 1, 1])
array([[8, 2],
[0, 2]])
The matrix is not symmetric::
>>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 2])
array([[8, 0],
[2, 2]])
If class members are completely split across different clusters, the
assignment is totally incomplete, hence the matrix has all zero
diagonal entries::
>>> pair_confusion_matrix([0, 0, 0, 0], [0, 1, 2, 3])
array([[ 0, 0],
[12, 0]])
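Viewing pairs as binary decisions also connects this matrix back to the Rand index: the unadjusted RI is simply the fraction of agreeing pairs, as this sketch (illustrative only) shows:

```python
# Illustrative sketch: the unadjusted Rand index read off the pair
# confusion matrix as the fraction of agreeing pairs.
import numpy as np
from sklearn.metrics import rand_score
from sklearn.metrics.cluster import pair_confusion_matrix

labels_true = [0, 0, 1, 2]
labels_pred = [0, 0, 1, 1]

C = pair_confusion_matrix(labels_true, labels_pred)
ri = (C[0, 0] + C[1, 1]) / C.sum()   # (TN + TP) / all pairs
```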
.. dropdown:: References
* :doi:`"Comparing Partitions" <10.1007/BF01908075>` L. Hubert and P. Arabie,
Journal of Classification 1985
points ref Spectral clustering spectral clustering number of clusters Medium n samples small n clusters Few clusters even cluster size non flat geometry transductive Graph distance e g nearest neighbor graph ref Ward hierarchical clustering hierarchical clustering number of clusters or distance threshold Large n samples and n clusters Many clusters possibly connectivity constraints transductive Distances between points ref Agglomerative clustering hierarchical clustering number of clusters or distance threshold linkage type distance Large n samples and n clusters Many clusters possibly connectivity constraints non Euclidean distances transductive Any pairwise distance ref DBSCAN dbscan neighborhood size Very large n samples medium n clusters Non flat geometry uneven cluster sizes outlier removal transductive Distances between nearest points ref HDBSCAN hdbscan minimum cluster membership minimum point neighbors large n samples medium n clusters Non flat geometry uneven cluster sizes outlier removal transductive hierarchical variable cluster density Distances between nearest points ref OPTICS optics minimum cluster membership Very large n samples large n clusters Non flat geometry uneven cluster sizes variable cluster density outlier removal transductive Distances between points ref Gaussian mixtures mixture many Not scalable Flat geometry good for density estimation inductive Mahalanobis distances to centers ref BIRCH birch branching factor threshold optional global clusterer Large n clusters and n samples Large dataset outlier removal data reduction inductive Euclidean distance between points ref Bisecting K Means bisect k means number of clusters Very large n samples medium n clusters General purpose even cluster size flat geometry no empty clusters inductive hierarchical Distances between points Non flat geometry clustering is useful when the clusters have a specific shape i e a non flat manifold and the standard euclidean distance is not the right metric This 
case arises in the two top rows of the figure above Gaussian mixture models useful for clustering are described in ref another chapter of the documentation mixture dedicated to mixture models KMeans can be seen as a special case of Gaussian mixture model with equal covariance per component term Transductive transductive clustering methods in contrast to term inductive clustering methods are not designed to be applied to new unseen data k means K means The class KMeans algorithm clusters data by trying to separate samples in n groups of equal variance minimizing a criterion known as the inertia or within cluster sum of squares see below This algorithm requires the number of clusters to be specified It scales well to large numbers of samples and has been used across a large range of application areas in many different fields The k means algorithm divides a set of math N samples math X into math K disjoint clusters math C each described by the mean math mu j of the samples in the cluster The means are commonly called the cluster centroids note that they are not in general points from math X although they live in the same space The K means algorithm aims to choose centroids that minimise the inertia or within cluster sum of squares criterion math sum i 0 n min mu j in C x i mu j 2 Inertia can be recognized as a measure of how internally coherent clusters are It suffers from various drawbacks Inertia makes the assumption that clusters are convex and isotropic which is not always the case It responds poorly to elongated clusters or manifolds with irregular shapes Inertia is not a normalized metric we just know that lower values are better and zero is optimal But in very high dimensional spaces Euclidean distances tend to become inflated this is an instance of the so called curse of dimensionality Running a dimensionality reduction algorithm such as ref PCA prior to k means clustering can alleviate this problem and speed up the computations image auto examples cluster 
images sphx glr plot kmeans assumptions 002 png target auto examples cluster plot kmeans assumptions html align center scale 50 For more detailed descriptions of the issues shown above and how to address them refer to the examples ref sphx glr auto examples cluster plot kmeans assumptions py and ref sphx glr auto examples cluster plot kmeans silhouette analysis py K means is often referred to as Lloyd s algorithm In basic terms the algorithm has three steps The first step chooses the initial centroids with the most basic method being to choose math k samples from the dataset math X After initialization K means consists of looping between the two other steps The first step assigns each sample to its nearest centroid The second step creates new centroids by taking the mean value of all of the samples assigned to each previous centroid The difference between the old and the new centroids is computed and the algorithm repeats these last two steps until this value is less than a threshold In other words it repeats until the centroids do not move significantly image auto examples cluster images sphx glr plot kmeans digits 001 png target auto examples cluster plot kmeans digits html align right scale 35 K means is equivalent to the expectation maximization algorithm with a small all equal diagonal covariance matrix The algorithm can also be understood through the concept of Voronoi diagrams https en wikipedia org wiki Voronoi diagram First the Voronoi diagram of the points is calculated using the current centroids Each segment in the Voronoi diagram becomes a separate cluster Secondly the centroids are updated to the mean of each segment The algorithm then repeats this until a stopping criterion is fulfilled Usually the algorithm stops when the relative decrease in the objective function between iterations is less than the given tolerance value This is not the case in this implementation iteration stops when centroids move less than the tolerance Given enough time K
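The three steps above can be sketched in plain NumPy (a minimal illustration on synthetic blobs; the actual KMeans implementation adds k-means++ initialization, multiple restarts and OpenMP parallelism):

```python
# Minimal NumPy sketch of Lloyd's three steps: initialise centroids,
# then alternate assignment and mean-update until the centroids stop moving.
import numpy as np

rng = np.random.default_rng(0)
# two well-separated synthetic blobs
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(5.0, 0.5, (50, 2))])
k, tol = 2, 1e-4

# step 1: choose k samples from X as the initial centroids
centroids = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(100):
    # step 2: assign each sample to its nearest centroid
    labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
    # step 3: recompute each centroid as the mean of its assigned samples
    new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    shift = np.linalg.norm(new_centroids - centroids)
    centroids = new_centroids
    if shift < tol:  # stop once the centroids no longer move significantly
        break
```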
means will always converge however this may be to a local minimum This is highly dependent on the initialization of the centroids As a result the computation is often done several times with different initializations of the centroids One method to help address this issue is the k-means++ initialization scheme which has been implemented in scikit learn use the init='k-means++' parameter This initializes the centroids to be generally distant from each other leading to probably better results than random initialization as shown in the reference For detailed examples of comparing different initialization schemes refer to ref sphx glr auto examples cluster plot kmeans digits py and ref sphx glr auto examples cluster plot kmeans stability low dim dense py K-means++ can also be called independently to select seeds for other clustering algorithms see func sklearn cluster kmeans plusplus for details and example usage The algorithm supports sample weights which can be given by a parameter sample weight This makes it possible to assign more weight to some samples when computing cluster centers and values of inertia For example assigning a weight of 2 to a sample is equivalent to adding a duplicate of that sample to the dataset math X rubric Examples ref sphx glr auto examples text plot document clustering py Document clustering using class KMeans and class MiniBatchKMeans based on sparse data ref sphx glr auto examples cluster plot kmeans plusplus py Using K means to select seeds for other clustering algorithms Low level parallelism class KMeans benefits from OpenMP based parallelism through Cython Small chunks of data 256 samples are processed in parallel which in addition yields a low memory footprint For more details on how to control the number of threads please refer to our ref parallelism notes rubric Examples ref sphx glr auto examples cluster plot kmeans assumptions py Demonstrating when k means performs intuitively and when it does not ref sphx glr auto examples cluster plot kmeans
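The stated equivalence between sample weights and duplicated samples can be checked directly (a small sketch on made-up data):

```python
# Sketch verifying that sample_weight=2 for one sample yields the same
# centroids as duplicating that sample in the dataset.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0], [4.0, 1.0], [4.0, 2.0]])
w = np.array([2.0, 1.0, 1.0, 1.0, 1.0])  # sample 0 counts twice

km_weighted = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    X, sample_weight=w)
km_duplicated = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    np.vstack([X, X[:1]]))  # same data with sample 0 appended again

# Both fits find the same centroids (up to cluster ordering).
a = sorted(km_weighted.cluster_centers_.tolist())
b = sorted(km_duplicated.cluster_centers_.tolist())
```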
digits py Clustering handwritten digits dropdown References k means The advantages of careful seeding http ilpubs stanford edu 8090 778 1 2006 13 pdf Arthur David and Sergei Vassilvitskii Proceedings of the eighteenth annual ACM SIAM symposium on Discrete algorithms Society for Industrial and Applied Mathematics 2007 mini batch kmeans Mini Batch K Means The class MiniBatchKMeans is a variant of the class KMeans algorithm which uses mini batches to reduce the computation time while still attempting to optimise the same objective function Mini batches are subsets of the input data randomly sampled in each training iteration These mini batches drastically reduce the amount of computation required to converge to a local solution In contrast to other algorithms that reduce the convergence time of k means mini batch k means produces results that are generally only slightly worse than the standard algorithm The algorithm iterates between two major steps similar to vanilla k means In the first step math b samples are drawn randomly from the dataset to form a mini batch These are then assigned to the nearest centroid In the second step the centroids are updated In contrast to k means this is done on a per sample basis For each sample in the mini batch the assigned centroid is updated by taking the streaming average of the sample and all previous samples assigned to that centroid This has the effect of decreasing the rate of change for a centroid over time These steps are performed until convergence or a predetermined number of iterations is reached class MiniBatchKMeans converges faster than class KMeans but the quality of the results is reduced In practice this difference in quality can be quite small as shown in the example and cited reference figure auto examples cluster images sphx glr plot mini batch kmeans 001 png target auto examples cluster plot mini batch kmeans html align center scale 100 rubric Examples ref sphx glr auto examples cluster plot mini batch kmeans py 
Comparison of class KMeans and class MiniBatchKMeans ref sphx glr auto examples text plot document clustering py Document clustering using class KMeans and class MiniBatchKMeans based on sparse data ref sphx glr auto examples cluster plot dict face patches py dropdown References Web Scale K Means clustering https www eecs tufts edu dsculley papers fastkmeans pdf D Sculley Proceedings of the 19th international conference on World wide web 2010 affinity propagation Affinity Propagation class AffinityPropagation creates clusters by sending messages between pairs of samples until convergence A dataset is then described using a small number of exemplars which are identified as those most representative of other samples The messages sent between pairs represent the suitability for one sample to be the exemplar of the other which is updated in response to the values from other pairs This updating happens iteratively until convergence at which point the final exemplars are chosen and hence the final clustering is given figure auto examples cluster images sphx glr plot affinity propagation 001 png target auto examples cluster plot affinity propagation html align center scale 50 Affinity Propagation can be interesting as it chooses the number of clusters based on the data provided For this purpose the two important parameters are the preference which controls how many exemplars are used and the damping factor which damps the responsibility and availability messages to avoid numerical oscillations when updating these messages The main drawback of Affinity Propagation is its complexity The algorithm has a time complexity of the order math O N 2 T where math N is the number of samples and math T is the number of iterations until convergence Further the memory complexity is of the order math O N 2 if a dense similarity matrix is used but reducible if a sparse similarity matrix is used This makes Affinity Propagation most appropriate for small to medium sized datasets dropdown 
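A minimal usage sketch on toy data (the exemplars and the number of clusters are inferred from the data; damping and preference are the knobs discussed above):

```python
# Usage sketch: AffinityPropagation chooses the number of clusters itself.
import numpy as np
from sklearn.cluster import AffinityPropagation

X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]], dtype=float)
af = AffinityPropagation(damping=0.5, random_state=5).fit(X)

print(af.cluster_centers_indices_)  # indices of the chosen exemplars
print(af.labels_)                   # cluster of each sample
```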
Algorithm description The messages sent between points belong to one of two categories The first is the responsibility math r i k which is the accumulated evidence that sample math k should be the exemplar for sample math i The second is the availability math a i k which is the accumulated evidence that sample math i should choose sample math k to be its exemplar and considers the values for all other samples that math k should be an exemplar In this way exemplars are chosen by samples if they are 1 similar enough to many samples and 2 chosen by many samples to be representative of themselves More formally the responsibility of a sample math k to be the exemplar of sample math i is given by math r i k leftarrow s i k max a i k s i k forall k neq k Where math s i k is the similarity between samples math i and math k The availability of sample math k to be the exemplar of sample math i is given by math a i k leftarrow min 0 r k k sum i s t i notin i k r i k To begin with all values for math r and math a are set to zero and the calculation of each iterates until convergence As discussed above in order to avoid numerical oscillations when updating the messages the damping factor math lambda is introduced to iteration process math r t 1 i k lambda cdot r t i k 1 lambda cdot r t 1 i k math a t 1 i k lambda cdot a t i k 1 lambda cdot a t 1 i k where math t indicates the iteration times rubric Examples ref sphx glr auto examples cluster plot affinity propagation py Affinity Propagation on a synthetic 2D datasets with 3 classes ref sphx glr auto examples applications plot stock market py Affinity Propagation on financial time series to find groups of companies mean shift Mean Shift class MeanShift clustering aims to discover blobs in a smooth density of samples It is a centroid based algorithm which works by updating candidates for centroids to be the mean of the points within a given region These candidates are then filtered in a post processing stage to eliminate near 
duplicates to form the final set of centroids dropdown Mathematical details The position of centroid candidates is iteratively adjusted using a technique called hill climbing which finds local maxima of the estimated probability density Given a candidate centroid math x for iteration math t the candidate is updated according to the following equation math x t 1 x t m x t Where math m is the mean shift vector that is computed for each centroid that points towards a region of the maximum increase in the density of points To compute math m we define math N x as the neighborhood of samples within a given distance around math x Then math m is computed using the following equation effectively updating a centroid to be the mean of the samples within its neighborhood math m x frac 1 N x sum x j in N x x j x In general the equation for math m depends on a kernel used for density estimation The generic formula is math m x frac sum x j in N x K x j x x j sum x j in N x K x j x x In our implementation math K x is equal to 1 if math x is small enough and is equal to 0 otherwise Effectively math K y x indicates whether math y is in the neighborhood of math x The algorithm automatically sets the number of clusters instead of relying on a parameter bandwidth which dictates the size of the region to search through This parameter can be set manually but can be estimated using the provided estimate bandwidth function which is called if the bandwidth is not set The algorithm is not highly scalable as it requires multiple nearest neighbor searches during the execution of the algorithm The algorithm is guaranteed to converge however the algorithm will stop iterating when the change in centroids is small Labelling a new sample is performed by finding the nearest centroid for a given sample figure auto examples cluster images sphx glr plot mean shift 001 png target auto examples cluster plot mean shift html align center scale 50 rubric Examples ref sphx glr auto examples cluster plot mean 
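A minimal usage sketch, assuming two synthetic blobs; estimate_bandwidth supplies the bandwidth when it is not set manually:

```python
# Usage sketch: MeanShift finds the density modes as centroids; the
# bandwidth dictates the size of the region to search through.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (60, 2)), rng.normal(4.0, 0.3, (60, 2))])

bandwidth = estimate_bandwidth(X, quantile=0.3)
ms = MeanShift(bandwidth=bandwidth).fit(X)

print(ms.cluster_centers_)          # one centroid per discovered blob
print(np.unique(ms.labels_).size)   # number of clusters found from the data
```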
shift py Mean Shift clustering on a synthetic 2D datasets with 3 classes dropdown References doi Mean shift A robust approach toward feature space analysis 10 1109 34 1000236 D Comaniciu and P Meer IEEE Transactions on Pattern Analysis and Machine Intelligence 2002 spectral clustering Spectral clustering class SpectralClustering performs a low dimension embedding of the affinity matrix between samples followed by clustering e g by KMeans of the components of the eigenvectors in the low dimensional space It is especially computationally efficient if the affinity matrix is sparse and the amg solver is used for the eigenvalue problem Note the amg solver requires that the pyamg https github com pyamg pyamg module is installed The present version of SpectralClustering requires the number of clusters to be specified in advance It works well for a small number of clusters but is not advised for many clusters For two clusters SpectralClustering solves a convex relaxation of the normalized cuts https people eecs berkeley edu malik papers SM ncut pdf problem on the similarity graph cutting the graph in two so that the weight of the edges cut is small compared to the weights of the edges inside each cluster This criteria is especially interesting when working on images where graph vertices are pixels and weights of the edges of the similarity graph are computed using a function of a gradient of the image noisy img image auto examples cluster images sphx glr plot segmentation toy 001 png target auto examples cluster plot segmentation toy html scale 50 segmented img image auto examples cluster images sphx glr plot segmentation toy 002 png target auto examples cluster plot segmentation toy html scale 50 centered noisy img segmented img warning Transforming distance to well behaved similarities Note that if the values of your similarity matrix are not well distributed e g with negative values or with a distance matrix rather than a similarity the spectral problem will be singular 
and the problem not solvable In such a case it is advised to apply a transformation to the entries of the matrix For instance in the case of a signed distance matrix it is common to apply a heat kernel similarity = np.exp(-beta * distance / distance.std()) See the examples for such an application rubric Examples ref sphx glr auto examples cluster plot segmentation toy py Segmenting objects from a noisy background using spectral clustering ref sphx glr auto examples cluster plot coin segmentation py Spectral clustering to split the image of coins in regions coin kmeans image auto examples cluster images sphx glr plot coin segmentation 001 png target auto examples cluster plot coin segmentation html scale 35 coin discretize image auto examples cluster images sphx glr plot coin segmentation 002 png target auto examples cluster plot coin segmentation html scale 35 coin cluster qr image auto examples cluster images sphx glr plot coin segmentation 003 png target auto examples cluster plot coin segmentation html scale 35 Different label assignment strategies Different label assignment strategies can be used corresponding to the assign labels parameter of class SpectralClustering kmeans strategy can match finer details but can be unstable In particular unless you control the random state it may not be reproducible from run to run as it depends on random initialization The alternative discretize strategy is 100 reproducible but tends to create parcels of fairly even and geometrical shape The recently added cluster qr option is a deterministic alternative that tends to create the visually best partitioning on the example application below assign labels kmeans assign labels discretize assign labels cluster qr coin kmeans coin discretize coin cluster qr dropdown References Multiclass spectral clustering https people eecs berkeley edu jordan courses 281B spring04 readings yu shi pdf Stella X Yu Jianbo Shi 2003 doi Simple direct and efficient multi way spectral clustering 10 1093 imaiai
iay008 Anil Damle Victor Minden Lexing Ying 2019 spectral clustering graph Spectral Clustering Graphs Spectral Clustering can also be used to partition graphs via their spectral embeddings In this case the affinity matrix is the adjacency matrix of the graph and SpectralClustering is initialized with affinity precomputed from sklearn cluster import SpectralClustering sc SpectralClustering 3 affinity precomputed n init 100 assign labels discretize sc fit predict adjacency matrix doctest SKIP dropdown References doi A Tutorial on Spectral Clustering 10 1007 s11222 007 9033 z Ulrike von Luxburg 2007 doi Normalized cuts and image segmentation 10 1109 34 868688 Jianbo Shi Jitendra Malik 2000 A Random Walks View of Spectral Segmentation https citeseerx ist psu edu doc view pid 84a86a69315e994cfd1e0c7debb86d62d7bd1f44 Marina Meila Jianbo Shi 2001 On Spectral Clustering Analysis and an algorithm https citeseerx ist psu edu doc view pid 796c5d6336fc52aa84db575fb821c78918b65f58 Andrew Y Ng Michael I Jordan Yair Weiss 2001 arxiv Preconditioned Spectral Clustering for Stochastic Block Partition Streaming Graph Challenge 1708 07481 David Zhuzhunashvili Andrew Knyazev hierarchical clustering Hierarchical clustering Hierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively This hierarchy of clusters is represented as a tree or dendrogram The root of the tree is the unique cluster that gathers all the samples the leaves being the clusters with only one sample See the Wikipedia page https en wikipedia org wiki Hierarchical clustering for more details The class AgglomerativeClustering object performs a hierarchical clustering using a bottom up approach each observation starts in its own cluster and clusters are successively merged together The linkage criteria determines the metric used for the merge strategy Ward minimizes the sum of squared differences within all clusters It is a variance minimizing 
approach and in this sense is similar to the k means objective function but tackled with an agglomerative hierarchical approach Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters Average linkage minimizes the average of the distances between all observations of pairs of clusters Single linkage minimizes the distance between the closest observations of pairs of clusters class AgglomerativeClustering can also scale to large number of samples when it is used jointly with a connectivity matrix but is computationally expensive when no connectivity constraints are added between samples it considers at each step all the possible merges topic class FeatureAgglomeration The class FeatureAgglomeration uses agglomerative clustering to group together features that look very similar thus decreasing the number of features It is a dimensionality reduction tool see ref data reduction Different linkage type Ward complete average and single linkage class AgglomerativeClustering supports Ward single average and complete linkage strategies image auto examples cluster images sphx glr plot linkage comparison 001 png target auto examples cluster plot linkage comparison html scale 43 Agglomerative cluster has a rich get richer behavior that leads to uneven cluster sizes In this regard single linkage is the worst strategy and Ward gives the most regular sizes However the affinity or distance used in clustering cannot be varied with Ward thus for non Euclidean metrics average linkage is a good alternative Single linkage while not robust to noisy data can be computed very efficiently and can therefore be useful to provide hierarchical clustering of larger datasets Single linkage can also perform well on non globular data rubric Examples ref sphx glr auto examples cluster plot digits linkage py exploration of the different linkage strategies in a real dataset ref sphx glr auto examples cluster plot linkage comparison py exploration of the 
different linkage strategies in toy datasets Visualization of cluster hierarchy It s possible to visualize the tree representing the hierarchical merging of clusters as a dendrogram Visual inspection can often be useful for understanding the structure of the data though more so in the case of small sample sizes image auto examples cluster images sphx glr plot agglomerative dendrogram 001 png target auto examples cluster plot agglomerative dendrogram html scale 42 rubric Examples ref sphx glr auto examples cluster plot agglomerative dendrogram py Adding connectivity constraints An interesting aspect of class AgglomerativeClustering is that connectivity constraints can be added to this algorithm only adjacent clusters can be merged together through a connectivity matrix that defines for each sample the neighboring samples following a given structure of the data For instance in the swiss roll example below the connectivity constraints forbid the merging of points that are not adjacent on the swiss roll and thus avoid forming clusters that extend across overlapping folds of the roll unstructured image auto examples cluster images sphx glr plot ward structured vs unstructured 001 png target auto examples cluster plot ward structured vs unstructured html scale 49 structured image auto examples cluster images sphx glr plot ward structured vs unstructured 002 png target auto examples cluster plot ward structured vs unstructured html scale 49 centered unstructured structured These constraints are useful to impose a certain local structure but they also make the algorithm faster especially when the number of samples is high The connectivity constraints are imposed via a connectivity matrix a scipy sparse matrix that has elements only at the intersection of a row and a column with indices of the dataset that should be connected This matrix can be constructed from a priori information for instance you may wish to cluster web pages by only merging pages with a link pointing
from one to another It can also be learned from the data for instance using func sklearn neighbors kneighbors graph to restrict merging to nearest neighbors as in ref this example sphx glr auto examples cluster plot agglomerative clustering py or using func sklearn feature extraction image grid to graph to enable only merging of neighboring pixels on an image as in the ref coin sphx glr auto examples cluster plot coin ward segmentation py example warning Connectivity constraints with single average and complete linkage Connectivity constraints and single complete or average linkage can enhance the rich getting richer aspect of agglomerative clustering particularly so if they are built with func sklearn neighbors kneighbors graph In the limit of a small number of clusters they tend to give a few macroscopically occupied clusters and almost empty ones see the discussion in ref sphx glr auto examples cluster plot agglomerative clustering py Single linkage is the most brittle linkage option with regard to this issue image auto examples cluster images sphx glr plot agglomerative clustering 001 png target auto examples cluster plot agglomerative clustering html scale 38 image auto examples cluster images sphx glr plot agglomerative clustering 002 png target auto examples cluster plot agglomerative clustering html scale 38 image auto examples cluster images sphx glr plot agglomerative clustering 003 png target auto examples cluster plot agglomerative clustering html scale 38 image auto examples cluster images sphx glr plot agglomerative clustering 004 png target auto examples cluster plot agglomerative clustering html scale 38 rubric Examples ref sphx glr auto examples cluster plot coin ward segmentation py Ward clustering to split the image of coins in regions ref sphx glr auto examples cluster plot ward structured vs unstructured py Example of Ward algorithm on a swiss roll comparison of structured approaches versus unstructured approaches ref sphx glr auto examples 
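A sketch of learning the connectivity structure from the data with kneighbors_graph, as described above (random toy data):

```python
# Sketch: restrict merges to each sample's nearest neighbors via a
# sparse connectivity matrix.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))

# sparse matrix with a nonzero entry for each (sample, neighbor) pair
connectivity = kneighbors_graph(X, n_neighbors=5, include_self=False)

ward = AgglomerativeClustering(
    n_clusters=3, linkage="ward", connectivity=connectivity).fit(X)
print(ward.labels_)
```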
cluster plot feature agglomeration vs univariate selection py Example of dimensionality reduction with feature agglomeration based on Ward hierarchical clustering ref sphx glr auto examples cluster plot agglomerative clustering py Varying the metric Single average and complete linkage can be used with a variety of distances or affinities in particular Euclidean distance l2 Manhattan distance or Cityblock or l1 cosine distance or any precomputed affinity matrix l1 distance is often good for sparse features or sparse noise i e many of the features are zero as in text mining using occurrences of rare words cosine distance is interesting because it is invariant to global scalings of the signal The guidelines for choosing a metric is to use one that maximizes the distance between samples in different classes and minimizes that within each class image auto examples cluster images sphx glr plot agglomerative clustering metrics 005 png target auto examples cluster plot agglomerative clustering metrics html scale 32 image auto examples cluster images sphx glr plot agglomerative clustering metrics 006 png target auto examples cluster plot agglomerative clustering metrics html scale 32 image auto examples cluster images sphx glr plot agglomerative clustering metrics 007 png target auto examples cluster plot agglomerative clustering metrics html scale 32 rubric Examples ref sphx glr auto examples cluster plot agglomerative clustering metrics py Bisecting K Means bisect k means The class BisectingKMeans is an iterative variant of class KMeans using divisive hierarchical clustering Instead of creating all centroids at once centroids are picked progressively based on a previous clustering a cluster is split into two new clusters repeatedly until the target number of clusters is reached class BisectingKMeans is more efficient than class KMeans when the number of clusters is large since it only works on a subset of the data at each bisection while class KMeans always works on the 
entire dataset Although class BisectingKMeans can t benefit from the advantages of the k means initialization by design it will still produce results comparable to KMeans init k means in terms of inertia at cheaper computational costs and will likely produce better results than KMeans with a random initialization This variant is more efficient than agglomerative clustering if the number of clusters is small compared to the number of data points This variant also does not produce empty clusters There exist two strategies for selecting the cluster to split bisecting strategy largest cluster selects the cluster having the most points bisecting strategy biggest inertia selects the cluster with biggest inertia cluster with biggest Sum of Squared Errors within Picking by the largest amount of data points in most cases produces results as accurate as picking by inertia and is faster especially for larger amounts of data points where calculating error may be costly Picking by the largest amount of data points will also likely produce clusters of similar sizes while KMeans is known to produce clusters of different sizes The difference between Bisecting K Means and regular K Means can be seen in the example ref sphx glr auto examples cluster plot bisect kmeans py While the regular K Means algorithm tends to create non related clusters clusters from Bisecting K Means are well ordered and create quite a visible hierarchy dropdown References A Comparison of Document Clustering Techniques http www philippe fournier viger com spmf bisectingkmeans pdf Michael Steinbach George Karypis and Vipin Kumar Department of Computer Science and Engineering University of Minnesota June 2000 Performance Analysis of K Means and Bisecting K Means Algorithms in Weblog Data https ijeter everscience org Manuscripts Volume 4 Issue 8 Vol 4 issue 8 M 23 pdf K Abirami and Dr P Mayilvahanan International Journal of Emerging Technologies in Engineering Research IJETER Volume 4 Issue 8 August 2016 Bisecting K means
Algorithm Based on K valued Self determining and Clustering Center Optimization http www jcomputers us vol13 jcp1306 01 pdf Jian Di Xinyue Gou School of Control and Computer Engineering North China Electric Power University Baoding Hebei China August 2017 dbscan DBSCAN The class DBSCAN algorithm views clusters as areas of high density separated by areas of low density Due to this rather generic view clusters found by DBSCAN can be any shape as opposed to k means which assumes that clusters are convex shaped The central component to the DBSCAN is the concept of core samples which are samples that are in areas of high density A cluster is therefore a set of core samples each close to each other measured by some distance measure and a set of non core samples that are close to a core sample but are not themselves core samples There are two parameters to the algorithm min samples and eps which define formally what we mean when we say dense Higher min samples or lower eps indicate higher density necessary to form a cluster More formally we define a core sample as being a sample in the dataset such that there exist min samples other samples within a distance of eps which are defined as neighbors of the core sample This tells us that the core sample is in a dense area of the vector space A cluster is a set of core samples that can be built by recursively taking a core sample finding all of its neighbors that are core samples finding all of their neighbors that are core samples and so on A cluster also has a set of non core samples which are samples that are neighbors of a core sample in the cluster but are not themselves core samples Intuitively these samples are on the fringes of a cluster Any core sample is part of a cluster by definition Any sample that is not a core sample and is at least eps in distance from any core sample is considered an outlier by the algorithm While the parameter min samples primarily controls how tolerant the algorithm is towards noise on noisy 
and large data sets it may be desirable to increase this parameter the parameter eps is crucial to choose appropriately for the data set and distance function and usually cannot be left at the default value It controls the local neighborhood of the points When chosen too small most data will not be clustered at all and labeled as 1 for noise When chosen too large it causes close clusters to be merged into one cluster and eventually the entire data set to be returned as a single cluster Some heuristics for choosing this parameter have been discussed in the literature for example based on a knee in the nearest neighbor distances plot as discussed in the references below In the figure below the color indicates cluster membership with large circles indicating core samples found by the algorithm Smaller circles are non core samples that are still part of a cluster Moreover the outliers are indicated by black points below dbscan results image auto examples cluster images sphx glr plot dbscan 002 png target auto examples cluster plot dbscan html scale 50 centered dbscan results rubric Examples ref sphx glr auto examples cluster plot dbscan py dropdown Implementation The DBSCAN algorithm is deterministic always generating the same clusters when given the same data in the same order However the results can differ when data is provided in a different order First even though the core samples will always be assigned to the same clusters the labels of those clusters will depend on the order in which those samples are encountered in the data Second and more importantly the clusters to which non core samples are assigned can differ depending on the data order This would happen when a non core sample has a distance lower than eps to two core samples in different clusters By the triangular inequality those two core samples must be more distant than eps from each other or they would be in the same cluster The non core sample is assigned to whichever cluster is generated first in a 
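The roles of eps and min_samples can be seen on a tiny dataset (this mirrors the estimator's docstring example; the label -1 marks noise):

```python
# Sketch of the eps / min_samples behaviour described above.
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[1, 2], [2, 2], [2, 3],
              [8, 7], [8, 8], [25, 80]], dtype=float)
db = DBSCAN(eps=3.0, min_samples=2).fit(X)

print(db.labels_)                # [0, 0, 0, 1, 1, -1]; -1 = noise
print(db.core_sample_indices_)   # indices of the core samples
```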
pass through the data and so the results will depend on the data ordering The current implementation uses ball trees and kd trees to determine the neighborhood of points which avoids calculating the full distance matrix as was done in scikit learn versions before 0 14 The possibility to use custom metrics is retained for details see class NearestNeighbors dropdown Memory consumption for large sample sizes This implementation is by default not memory efficient because it constructs a full pairwise similarity matrix in the case where kd trees or ball trees cannot be used e g with sparse matrices This matrix will consume math n 2 floats A couple of mechanisms for getting around this are Use ref OPTICS optics clustering in conjunction with the extract dbscan method OPTICS clustering also calculates the full pairwise matrix but only keeps one row in memory at a time memory complexity n A sparse radius neighborhood graph where missing entries are presumed to be out of eps can be precomputed in a memory efficient way and dbscan can be run over this with metric precomputed See meth sklearn neighbors NearestNeighbors radius neighbors graph The dataset can be compressed either by removing exact duplicates if these occur in your data or by using BIRCH Then you only have a relatively small number of representatives for a large number of points You can then provide a sample weight when fitting DBSCAN dropdown References A Density Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise https www aaai org Papers KDD 1996 KDD96 037 pdf Ester M H P Kriegel J Sander and X Xu In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining Portland OR AAAI Press pp 226 231 1996 doi DBSCAN revisited revisited why and how you should still use DBSCAN 10 1145 3068335 Schubert E Sander J Ester M Kriegel H P Xu X 2017 In ACM Transactions on Database Systems TODS 42 3 19 hdbscan HDBSCAN The class HDBSCAN algorithm can be seen as an extension 
of class DBSCAN and class OPTICS Specifically class DBSCAN assumes that the clustering criterion i e density requirement is globally homogeneous In other words class DBSCAN may struggle to successfully capture clusters with different densities class HDBSCAN alleviates this assumption and explores all possible density scales by building an alternative representation of the clustering problem note This implementation is adapted from the original implementation of HDBSCAN scikit learn contrib hdbscan https github com scikit learn contrib hdbscan based on LJ2017 rubric Examples ref sphx glr auto examples cluster plot hdbscan py Mutual Reachability Graph HDBSCAN first defines math d c x p the core distance of a sample math x p as the distance to its min samples th nearest neighbor counting itself For example if min samples 5 and math x is the 5th nearest neighbor of math x p then the core distance is math d c x p d x p x Next it defines math d m x p x q the mutual reachability distance of two points math x p x q as math d m x p x q max d c x p d c x q d x p x q These two notions allow us to construct the mutual reachability graph math G ms defined for a fixed choice of min samples by associating each sample math x p with a vertex of the graph and thus edges between points math x p x q are the mutual reachability distance math d m x p x q between them We may build subsets of this graph denoted as math G ms varepsilon by removing any edges with value greater than math varepsilon from the original graph Any points whose core distance is greater than math varepsilon are at this stage marked as noise The remaining points are then clustered by finding the connected components of this trimmed graph note Taking the connected components of a trimmed graph math G ms varepsilon is equivalent to running DBSCAN* with min samples and math varepsilon DBSCAN* is a slightly modified version of DBSCAN mentioned in CM2013 Hierarchical Clustering HDBSCAN can be seen as an algorithm which
performs DBSCAN clustering across all values of math varepsilon As mentioned prior this is equivalent to finding the connected components of the mutual reachability graphs for all values of math varepsilon To do this efficiently HDBSCAN first extracts a minimum spanning tree MST from the fully connected mutual reachability graph then greedily cuts the edges with highest weight An outline of the HDBSCAN algorithm is as follows 1 Extract the MST of math G ms 2 Extend the MST by adding a self edge for each vertex with weight equal to the core distance of the underlying sample 3 Initialize a single cluster and label for the MST 4 Remove the edge with the greatest weight from the MST ties are removed simultaneously 5 Assign cluster labels to the connected components which contain the end points of the now removed edge If the component does not have at least one edge it is instead assigned a null label marking it as noise 6 Repeat 4 5 until there are no more connected components HDBSCAN is therefore able to obtain all possible partitions achievable by DBSCAN for a fixed choice of min samples in a hierarchical fashion Indeed this allows HDBSCAN to perform clustering across multiple densities and as such it no longer needs math varepsilon to be given as a hyperparameter Instead it relies solely on the choice of min samples which tends to be a more robust hyperparameter hdbscan ground truth image auto examples cluster images sphx glr plot hdbscan 005 png target auto examples cluster plot hdbscan html scale 75 hdbscan results image auto examples cluster images sphx glr plot hdbscan 007 png target auto examples cluster plot hdbscan html scale 75 centered hdbscan ground truth centered hdbscan results HDBSCAN can be smoothed with an additional hyperparameter min cluster size which specifies that during the hierarchical clustering components with fewer than minimum cluster size many samples are considered noise In practice one can set minimum cluster size min samples to couple 
the parameters and simplify the hyperparameter space rubric References CM2013 Campello R J G B Moulavi D Sander J 2013 Density Based Clustering Based on Hierarchical Density Estimates In Pei J Tseng V S Cao L Motoda H Xu G eds Advances in Knowledge Discovery and Data Mining PAKDD 2013 Lecture Notes in Computer Science vol 7819 Springer Berlin Heidelberg doi Density Based Clustering Based on Hierarchical Density Estimates 10 1007 978 3 642 37456 2 14 LJ2017 L McInnes and J Healy 2017 Accelerated Hierarchical Density Based Clustering In IEEE International Conference on Data Mining Workshops ICDMW 2017 pp 33 42 doi Accelerated Hierarchical Density Based Clustering 10 1109 ICDMW 2017 12 optics OPTICS The class OPTICS algorithm shares many similarities with the class DBSCAN algorithm and can be considered a generalization of DBSCAN that relaxes the eps requirement from a single value to a value range The key difference between DBSCAN and OPTICS is that the OPTICS algorithm builds a reachability graph which assigns each sample both a reachability distance and a spot within the cluster ordering attribute these two attributes are assigned when the model is fitted and are used to determine cluster membership If OPTICS is run with the default value of inf set for max eps then DBSCAN style cluster extraction can be performed repeatedly in linear time for any given eps value using the cluster optics dbscan method Setting max eps to a lower value will result in shorter run times and can be thought of as the maximum neighborhood radius from each point to find other potential reachable points optics results image auto examples cluster images sphx glr plot optics 001 png target auto examples cluster plot optics html scale 50 centered optics results The reachability distances generated by OPTICS allow for variable density extraction of clusters within a single data set As shown in the above plot combining reachability distances and data set ordering produces a reachability plot 
where point density is represented on the Y axis and points are ordered such that nearby points are adjacent Cutting the reachability plot at a single value produces DBSCAN like results all points above the cut are classified as noise and each time that there is a break when reading from left to right signifies a new cluster The default cluster extraction with OPTICS looks at the steep slopes within the graph to find clusters and the user can define what counts as a steep slope using the parameter xi There are also other possibilities for analysis on the graph itself such as generating hierarchical representations of the data through reachability plot dendrograms and the hierarchy of clusters detected by the algorithm can be accessed through the cluster hierarchy parameter The plot above has been color coded so that cluster colors in planar space match the linear segment clusters of the reachability plot Note that the blue and red clusters are adjacent in the reachability plot and can be hierarchically represented as children of a larger parent cluster rubric Examples ref sphx glr auto examples cluster plot optics py dropdown Comparison with DBSCAN The results from OPTICS cluster optics dbscan method and DBSCAN are very similar but not always identical specifically labeling of periphery and noise points This is in part because the first samples of each dense area processed by OPTICS have a large reachability value while being close to other points in their area and will thus sometimes be marked as noise rather than periphery This affects adjacent points when they are considered as candidates for being marked as either periphery or noise Note that for any single value of eps DBSCAN will tend to have a shorter run time than OPTICS however for repeated runs at varying eps values a single run of OPTICS may require less cumulative runtime than DBSCAN It is also important to note that OPTICS output is close to DBSCAN s only if eps and max eps are close dropdown 
Computational Complexity Spatial indexing trees are used to avoid calculating the full distance matrix and allow for efficient memory usage on large sets of samples Different distance metrics can be supplied via the metric keyword For large datasets similar but not identical results can be obtained via class HDBSCAN The HDBSCAN implementation is multithreaded and has better algorithmic runtime complexity than OPTICS at the cost of worse memory scaling For extremely large datasets that exhaust system memory using HDBSCAN OPTICS will maintain math n as opposed to math n 2 memory scaling however tuning of the max eps parameter will likely need to be used to give a solution in a reasonable amount of wall time dropdown References OPTICS ordering points to identify the clustering structure Ankerst Mihael Markus M Breunig Hans Peter Kriegel and J rg Sander In ACM Sigmod Record vol 28 no 2 pp 49 60 ACM 1999 birch BIRCH The class Birch builds a tree called the Clustering Feature Tree CFT for the given data The data is essentially lossy compressed to a set of Clustering Feature nodes CF Nodes The CF Nodes have a number of subclusters called Clustering Feature subclusters CF Subclusters and these CF Subclusters located in the non terminal CF Nodes can have CF Nodes as children The CF Subclusters hold the necessary information for clustering which prevents the need to hold the entire input data in memory This information includes Number of samples in a subcluster Linear Sum An n dimensional vector holding the sum of all samples Squared Sum Sum of the squared L2 norm of all samples Centroids To avoid recalculation linear sum n samples Squared norm of the centroids The BIRCH algorithm has two parameters the threshold and the branching factor The branching factor limits the number of subclusters in a node and the threshold limits the distance between the entering sample and the existing subclusters This algorithm can be viewed as an instance or data reduction method since it 
reduces the input data to a set of subclusters which are obtained directly from the leaves of the CFT This reduced data can be further processed by feeding it into a global clusterer This global clusterer can be set by n clusters If n clusters is set to None the subclusters from the leaves are directly read off otherwise a global clustering step labels these subclusters into global clusters labels and the samples are mapped to the global label of the nearest subcluster dropdown Algorithm description A new sample is inserted into the root of the CF Tree which is a CF Node It is then merged with the subcluster of the root that has the smallest radius after merging constrained by the threshold and branching factor conditions If the subcluster has any child node then this is done repeatedly till it reaches a leaf After finding the nearest subcluster in the leaf the properties of this subcluster and the parent subclusters are recursively updated If the radius of the subcluster obtained by merging the new sample and the nearest subcluster is greater than the square of the threshold and if the number of subclusters is greater than the branching factor then a space is temporarily allocated to this new sample The two farthest subclusters are taken and the subclusters are divided into two groups on the basis of the distance between these subclusters If this split node has a parent subcluster and there is room for a new subcluster then the parent is split into two If there is no room then this node is again split into two and the process is continued recursively till it reaches the root dropdown BIRCH or MiniBatchKMeans BIRCH does not scale very well to high dimensional data As a rule of thumb if n features is greater than twenty it is generally better to use MiniBatchKMeans If the number of instances of data needs to be reduced or if one wants a large number of subclusters either as a preprocessing step or otherwise BIRCH is more useful than MiniBatchKMeans image auto 
examples cluster images sphx glr plot birch vs minibatchkmeans 001 png target auto examples cluster plot birch vs minibatchkmeans html dropdown How to use partial fit To avoid the computation of global clustering for every call of partial fit the user is advised 1 To set n clusters None initially 2 Train all data by multiple calls to partial fit 3 Set n clusters to a required value using brc set params n clusters n clusters 4 Call partial fit finally with no arguments i e brc partial fit which performs the global clustering dropdown References Tian Zhang Raghu Ramakrishnan Miron Livny BIRCH An efficient data clustering method for large databases https www cs sfu ca CourseCentral 459 han papers zhang96 pdf Roberto Perdisci JBirch Java implementation of BIRCH clustering algorithm https code google com archive p jbirch clustering evaluation Clustering performance evaluation Evaluating the performance of a clustering algorithm is not as trivial as counting the number of errors or the precision and recall of a supervised classification algorithm In particular any evaluation metric should not take the absolute values of the cluster labels into account but rather whether this clustering defines separations of the data similar to some ground truth set of classes or satisfies some assumption such that members of the same class are more similar than members of different classes according to some similarity metric currentmodule sklearn metrics rand score adjusted rand score Rand index Given the knowledge of the ground truth class assignments labels true and our clustering algorithm assignments of the same samples labels pred the adjusted or unadjusted Rand index is a function that measures the similarity of the two assignments ignoring permutations from sklearn import metrics labels true 0 0 0 1 1 1 labels pred 0 0 1 1 2 2 metrics rand score labels true labels pred 0 66 The Rand index is not guaranteed to be close to 0 0 for a random labelling The adjusted Rand
index corrects for chance and will give such a baseline metrics adjusted rand score labels true labels pred 0 24 As with all clustering metrics one can permute 0 and 1 in the predicted labels rename 2 to 3 and get the same score labels pred 1 1 0 0 3 3 metrics rand score labels true labels pred 0 66 metrics adjusted rand score labels true labels pred 0 24 Furthermore both func rand score and func adjusted rand score are symmetric swapping the arguments does not change the scores They can thus be used as consensus measures metrics rand score labels pred labels true 0 66 metrics adjusted rand score labels pred labels true 0 24 Perfect labeling is scored 1 0 labels pred labels true metrics rand score labels true labels pred 1 0 metrics adjusted rand score labels true labels pred 1 0 Poorly agreeing labels e g independent labelings have lower scores and for the adjusted Rand index the score will be negative or close to zero However for the unadjusted Rand index the score while lower will not necessarily be close to zero labels true 0 0 0 0 0 0 1 1 labels pred 0 1 2 3 4 5 5 6 metrics rand score labels true labels pred 0 39 metrics adjusted rand score labels true labels pred -0.07 topic Advantages Interpretability The unadjusted Rand index is proportional to the number of sample pairs whose labels are the same in both labels pred and labels true or are different in both Random uniform label assignments have an adjusted Rand index score close to 0 0 for any value of n clusters and n samples which is not the case for the unadjusted Rand index or the V measure for instance Bounded range Lower values indicate different labelings similar clusterings have a high adjusted or unadjusted Rand index 1 0 is the perfect match score The score range is 0 to 1 for the unadjusted Rand index and -0.5 to 1 for the adjusted Rand index No assumption is made on the cluster structure The adjusted or unadjusted Rand index can be used to compare all kinds of clustering algorithms and can be used to compare
clustering algorithms such as k means which assumes isotropic blob shapes with results of spectral clustering algorithms which can find cluster with folded shapes topic Drawbacks Contrary to inertia the adjusted or unadjusted Rand index requires knowledge of the ground truth classes which is almost never available in practice or requires manual assignment by human annotators as in the supervised learning setting However adjusted or unadjusted Rand index can also be useful in a purely unsupervised setting as a building block for a Consensus Index that can be used for clustering model selection TODO The unadjusted Rand index is often close to 1 0 even if the clusterings themselves differ significantly This can be understood when interpreting the Rand index as the accuracy of element pair labeling resulting from the clusterings In practice there often is a majority of element pairs that are assigned the different pair label under both the predicted and the ground truth clustering resulting in a high proportion of pair labels that agree which leads subsequently to a high score rubric Examples ref sphx glr auto examples cluster plot adjusted for chance measures py Analysis of the impact of the dataset size on the value of clustering measures for random assignments dropdown Mathematical formulation If C is a ground truth class assignment and K the clustering let us define math a and math b as math a the number of pairs of elements that are in the same set in C and in the same set in K math b the number of pairs of elements that are in different sets in C and in different sets in K The unadjusted Rand index is then given by math text RI frac a b C 2 n samples where math C 2 n samples is the total number of possible pairs in the dataset It does not matter if the calculation is performed on ordered pairs or unordered pairs as long as the calculation is performed consistently However the Rand index does not guarantee that random label assignments will get a value close to 
zero esp if the number of clusters is in the same order of magnitude as the number of samples To counter this effect we can discount the expected RI math E text RI of random labelings by defining the adjusted Rand index as follows math text ARI frac text RI E text RI max text RI E text RI dropdown References Comparing Partitions https link springer com article 10 1007 2FBF01908075 L Hubert and P Arabie Journal of Classification 1985 Properties of the Hubert Arabie adjusted Rand index https psycnet apa org record 2004 17801 007 D Steinley Psychological Methods 2004 Wikipedia entry for the Rand index https en wikipedia org wiki Rand index Adjusted Rand index doi Minimum adjusted Rand index for two clusterings of a given size 2022 J E Chac n and A I Rastrojo 10 1007 s11634 022 00491 w mutual info score Mutual Information based scores Given the knowledge of the ground truth class assignments labels true and our clustering algorithm assignments of the same samples labels pred the Mutual Information is a function that measures the agreement of the two assignments ignoring permutations Two different normalized versions of this measure are available Normalized Mutual Information NMI and Adjusted Mutual Information AMI NMI is often used in the literature while AMI was proposed more recently and is normalized against chance from sklearn import metrics labels true 0 0 0 1 1 1 labels pred 0 0 1 1 2 2 metrics adjusted mutual info score labels true labels pred doctest SKIP 0 22504 One can permute 0 and 1 in the predicted labels rename 2 to 3 and get the same score labels pred 1 1 0 0 3 3 metrics adjusted mutual info score labels true labels pred doctest SKIP 0 22504 All func mutual info score func adjusted mutual info score and func normalized mutual info score are symmetric swapping the argument does not change the score Thus they can be used as a consensus measure metrics adjusted mutual info score labels pred labels true doctest SKIP 0 22504 Perfect labeling is scored 1 0 
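These properties (symmetry, the perfect score of 1.0 for labelings that match up to a permutation, and the adjustment for chance) can be spot-checked with a short runnable sketch. It assumes scikit-learn and NumPy are installed; the random-labeling sizes (100 samples, 20 distinct labels) are illustrative choices, picked so that the contrast between AMI and unadjusted NMI is visible:

```python
import numpy as np
from sklearn import metrics

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

# Symmetry: swapping the arguments does not change the score.
assert np.isclose(
    metrics.adjusted_mutual_info_score(labels_true, labels_pred),
    metrics.adjusted_mutual_info_score(labels_pred, labels_true),
)

# Perfect labeling, up to a permutation of label names, scores 1.0.
assert np.isclose(
    metrics.adjusted_mutual_info_score(labels_true, [1, 1, 1, 0, 0, 0]), 1.0
)

# Adjustment for chance: for two independent random labelings the AMI
# stays near 0, while the unadjusted NMI can be far from 0 when the
# number of labels is large relative to the number of samples.
rng = np.random.default_rng(0)
a = rng.integers(0, 20, size=100)
b = rng.integers(0, 20, size=100)
print(f"AMI: {metrics.adjusted_mutual_info_score(a, b):.3f}")   # near 0
print(f"NMI: {metrics.normalized_mutual_info_score(a, b):.3f}") # well above 0
```

The gap between the last two printed values is the motivation for preferring AMI over raw NMI when comparing clusterings with many small clusters.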
labels pred labels true metrics adjusted mutual info score labels true labels pred doctest SKIP 1 0 metrics normalized mutual info score labels true labels pred doctest SKIP 1 0 This is not true for mutual info score which is therefore harder to judge metrics mutual info score labels true labels pred doctest SKIP 0 69 Bad e g independent labelings have non positive scores labels true 0 1 2 0 3 4 5 1 labels pred 1 1 0 0 2 2 2 2 metrics adjusted mutual info score labels true labels pred doctest SKIP -0.10526 topic Advantages Random uniform label assignments have an AMI score close to 0 0 for any value of n clusters and n samples which is not the case for raw Mutual Information or the V measure for instance Upper bound of 1 Values close to zero indicate two label assignments that are largely independent while values close to one indicate significant agreement Further an AMI of exactly 1 indicates that the two label assignments are equal with or without permutation topic Drawbacks Contrary to inertia MI based measures require the knowledge of the ground truth classes which is almost never available in practice or requires manual assignment by human annotators as in the supervised learning setting However MI based measures can also be useful in a purely unsupervised setting as a building block for a Consensus Index that can be used for clustering model selection NMI and MI are not adjusted against chance rubric Examples ref sphx glr auto examples cluster plot adjusted for chance measures py Analysis of the impact of the dataset size on the value of clustering measures for random assignments This example also includes the Adjusted Rand Index dropdown Mathematical formulation Assume two label assignments of the same N objects math U and math V Their entropy is the amount of uncertainty for a partition set defined by math H U sum i 1 U P i log P i where math P i U i N is the probability that an object picked at random from math U falls into class math U i Likewise for math V math
H V sum j 1 V P j log P j With math P j V j N The mutual information MI between math U and math V is calculated by math text MI U V sum i 1 U sum j 1 V P i j log left frac P i j P i P j right where math P i j U i cap V j N is the probability that an object picked at random falls into both classes math U i and math V j It also can be expressed in set cardinality formulation math text MI U V sum i 1 U sum j 1 V frac U i cap V j N log left frac N U i cap V j U i V j right The normalized mutual information is defined as math text NMI U V frac text MI U V text mean H U H V This value of the mutual information and also the normalized variant is not adjusted for chance and will tend to increase as the number of different labels clusters increases regardless of the actual amount of mutual information between the label assignments The expected value for the mutual information can be calculated using the following equation VEB2009 In this equation math a i U i the number of elements in math U i and math b j V j the number of elements in math V j math E text MI U V sum i 1 U sum j 1 V sum n ij a i b j N min a i b j frac n ij N log left frac N n ij a i b j right frac a i b j N a i N b j N n ij a i n ij b j n ij N a i b j n ij Using the expected value the adjusted mutual information can then be calculated using a similar form to that of the adjusted Rand index math text AMI frac text MI E text MI text mean H U H V E text MI For normalized mutual information and adjusted mutual information the normalizing value is typically some generalized mean of the entropies of each clustering Various generalized means exist and no firm rules exist for preferring one over the others The decision is largely a field by field basis for instance in community detection the arithmetic mean is most common Each normalizing method provides qualitatively similar behaviours YAT2016 In our implementation this is controlled by the average method parameter Vinh et al 2010 named variants of NMI and AMI by 
their averaging method VEB2010 Their sqrt and sum averages are the geometric and arithmetic means we use these more broadly common names rubric References Strehl Alexander and Joydeep Ghosh 2002 Cluster ensembles a knowledge reuse framework for combining multiple partitions Journal of Machine Learning Research 3 583 617 doi 10 1162 153244303321897735 http strehl com download strehl jmlr02 pdf Wikipedia entry for the normalized Mutual Information https en wikipedia org wiki Mutual Information Wikipedia entry for the Adjusted Mutual Information https en wikipedia org wiki Adjusted Mutual Information VEB2009 Vinh Epps and Bailey 2009 Information theoretic measures for clusterings comparison Proceedings of the 26th Annual International Conference on Machine Learning ICML 09 doi 10 1145 1553374 1553511 https dl acm org citation cfm doid 1553374 1553511 ISBN 9781605585161 VEB2010 Vinh Epps and Bailey 2010 Information Theoretic Measures for Clusterings Comparison Variants Properties Normalization and Correction for Chance JMLR https jmlr csail mit edu papers volume11 vinh10a vinh10a pdf YAT2016 Yang Algesheimer and Tessone 2016 A comparative analysis of community detection algorithms on artificial networks Scientific Reports 6 30750 doi 10 1038 srep30750 https www nature com articles srep30750 homogeneity completeness Homogeneity completeness and V measure Given the knowledge of the ground truth class assignments of the samples it is possible to define some intuitive metrics using conditional entropy analysis In particular Rosenberg and Hirschberg 2007 define the following two desirable objectives for any cluster assignment homogeneity each cluster contains only members of a single class completeness all members of a given class are assigned to the same cluster We can turn those concepts into scores func homogeneity score and func completeness score Both are bounded below by 0 0 and above by 1 0 higher is better from sklearn import metrics labels true 0 0 0 1 1 1 labels pred
0 0 1 1 2 2 metrics homogeneity score labels true labels pred 0 66 metrics completeness score labels true labels pred 0 42 Their harmonic mean called V measure is computed by func v measure score metrics v measure score labels true labels pred 0 51 This function s formula is as follows math v frac 1 beta times text homogeneity times text completeness beta times text homogeneity text completeness beta defaults to a value of 1 0 but for using a value less than 1 for beta metrics v measure score labels true labels pred beta 0 6 0 54 more weight will be attributed to homogeneity and using a value greater than 1 metrics v measure score labels true labels pred beta 1 8 0 48 more weight will be attributed to completeness The V measure is actually equivalent to the mutual information NMI discussed above with the aggregation function being the arithmetic mean B2011 Homogeneity completeness and V measure can be computed at once using func homogeneity completeness v measure as follows metrics homogeneity completeness v measure labels true labels pred 0 66 0 42 0 51 The following clustering assignment is slightly better since it is homogeneous but not complete labels pred 0 0 0 1 2 2 metrics homogeneity completeness v measure labels true labels pred 1 0 0 68 0 81 note func v measure score is symmetric it can be used to evaluate the agreement of two independent assignments on the same dataset This is not the case for func completeness score and func homogeneity score both are bound by the relationship homogeneity score a b completeness score b a topic Advantages Bounded scores 0 0 is as bad as it can be 1 0 is a perfect score Intuitive interpretation clustering with bad V measure can be qualitatively analyzed in terms of homogeneity and completeness to better feel what kind of mistakes is done by the assignment No assumption is made on the cluster structure can be used to compare clustering algorithms such as k means which assumes isotropic blob shapes with results of spectral 
clustering algorithms which can find cluster with folded shapes topic Drawbacks The previously introduced metrics are not normalized with regards to random labeling this means that depending on the number of samples clusters and ground truth classes a completely random labeling will not always yield the same values for homogeneity completeness and hence v measure In particular random labeling won t yield zero scores especially when the number of clusters is large This problem can safely be ignored when the number of samples is more than a thousand and the number of clusters is less than 10 For smaller sample sizes or larger number of clusters it is safer to use an adjusted index such as the Adjusted Rand Index ARI figure auto examples cluster images sphx glr plot adjusted for chance measures 001 png target auto examples cluster plot adjusted for chance measures html align center scale 100 These metrics require the knowledge of the ground truth classes while almost never available in practice or requires manual assignment by human annotators as in the supervised learning setting rubric Examples ref sphx glr auto examples cluster plot adjusted for chance measures py Analysis of the impact of the dataset size on the value of clustering measures for random assignments dropdown Mathematical formulation Homogeneity and completeness scores are formally given by math h 1 frac H C K H C math c 1 frac H K C H K where math H C K is the conditional entropy of the classes given the cluster assignments and is given by math H C K sum c 1 C sum k 1 K frac n c k n cdot log left frac n c k n k right and math H C is the entropy of the classes and is given by math H C sum c 1 C frac n c n cdot log left frac n c n right with math n the total number of samples math n c and math n k the number of samples respectively belonging to class math c and cluster math k and finally math n c k the number of samples from class math c assigned to cluster math k The conditional entropy of clusters 
given class math H K C and the entropy of clusters math H K are defined in a symmetric manner Rosenberg and Hirschberg further define V measure as the harmonic mean of homogeneity and completeness math v 2 cdot frac h cdot c h c rubric References V Measure A conditional entropy based external cluster evaluation measure https aclweb org anthology D D07 D07 1043 pdf Andrew Rosenberg and Julia Hirschberg 2007 B2011 Identification and Characterization of Events in Social Media http www cs columbia edu hila hila thesis distributed pdf Hila Becker PhD Thesis fowlkes mallows scores Fowlkes Mallows scores The original Fowlkes Mallows index FMI was intended to measure the similarity between two clustering results which is inherently an unsupervised comparison The supervised adaptation of the Fowlkes Mallows index as implemented in func sklearn metrics fowlkes mallows score can be used when the ground truth class assignments of the samples are known The FMI is defined as the geometric mean of the pairwise precision and recall math text FMI frac text TP sqrt text TP text FP text TP text FN In the above formula TP True Positive The number of pairs of points that are clustered together both in the true labels and in the predicted labels FP False Positive The number of pairs of points that are clustered together in the predicted labels but not in the true labels FN False Negative The number of pairs of points that are clustered together in the true labels but not in the predicted labels The score ranges from 0 to 1 A high value indicates a good similarity between two clusters from sklearn import metrics labels true 0 0 0 1 1 1 labels pred 0 0 1 1 2 2 metrics fowlkes mallows score labels true labels pred 0 47140 One can permute 0 and 1 in the predicted labels rename 2 to 3 and get the same score labels pred 1 1 0 0 3 3 metrics fowlkes mallows score labels true labels pred 0 47140 Perfect labeling is scored 1 0 labels pred labels true metrics fowlkes mallows score labels true 
labels pred 1 0 Bad e g independent labelings have zero scores labels true 0 1 2 0 3 4 5 1 labels pred 1 1 0 0 2 2 2 2 metrics fowlkes mallows score labels true labels pred 0 0 topic Advantages Random uniform label assignments have an FMI score close to 0 0 for any value of n clusters and n samples which is not the case for raw Mutual Information or the V measure for instance Upper bounded at 1 Values close to zero indicate two label assignments that are largely independent while values close to one indicate significant agreement Further values of exactly 0 indicate purely independent label assignments and an FMI of exactly 1 indicates that the two label assignments are equal with or without permutation No assumption is made on the cluster structure can be used to compare clustering algorithms such as k means which assumes isotropic blob shapes with results of spectral clustering algorithms which can find clusters with folded shapes topic Drawbacks Contrary to inertia FMI based measures require the knowledge of the ground truth classes which is almost never available in practice or requires manual assignment by human annotators as in the supervised learning setting dropdown References E B Fowlkes and C L Mallows 1983 A method for comparing two hierarchical clusterings Journal of the American Statistical Association https www tandfonline com doi abs 10 1080 01621459 1983 10478008 Wikipedia entry for the Fowlkes Mallows Index https en wikipedia org wiki Fowlkes Mallows index silhouette coefficient Silhouette Coefficient If the ground truth labels are not known evaluation must be performed using the model itself The Silhouette Coefficient func sklearn metrics silhouette score is an example of such an evaluation where a higher Silhouette Coefficient score relates to a model with better defined clusters The Silhouette Coefficient is defined for each sample and is composed of two scores a The mean
distance between a sample and all other points in the next nearest cluster The Silhouette Coefficient s for a single sample is then given as math s frac b a max a b The Silhouette Coefficient for a set of samples is given as the mean of the Silhouette Coefficient for each sample from sklearn import metrics from sklearn metrics import pairwise distances from sklearn import datasets X y datasets load iris return X y True In normal usage the Silhouette Coefficient is applied to the results of a cluster analysis import numpy as np from sklearn cluster import KMeans kmeans model KMeans n clusters 3 random state 1 fit X labels kmeans model labels metrics silhouette score X labels metric euclidean 0 55 topic Advantages The score is bounded between 1 for incorrect clustering and 1 for highly dense clustering Scores around zero indicate overlapping clusters The score is higher when clusters are dense and well separated which relates to a standard concept of a cluster topic Drawbacks The Silhouette Coefficient is generally higher for convex clusters than other concepts of clusters such as density based clusters like those obtained through DBSCAN rubric Examples ref sphx glr auto examples cluster plot kmeans silhouette analysis py In this example the silhouette analysis is used to choose an optimal value for n clusters dropdown References Peter J Rousseeuw 1987 doi Silhouettes a Graphical Aid to the Interpretation and Validation of Cluster Analysis 10 1016 0377 0427 87 90125 7 Computational and Applied Mathematics 20 53 65 calinski harabasz index Calinski Harabasz Index If the ground truth labels are not known the Calinski Harabasz index func sklearn metrics calinski harabasz score also known as the Variance Ratio Criterion can be used to evaluate the model where a higher Calinski Harabasz score relates to a model with better defined clusters The index is the ratio of the sum of between clusters dispersion and of within cluster dispersion for all clusters where dispersion is 
defined as the sum of distances squared from sklearn import metrics from sklearn metrics import pairwise distances from sklearn import datasets X y datasets load iris return X y True In normal usage the Calinski Harabasz index is applied to the results of a cluster analysis import numpy as np from sklearn cluster import KMeans kmeans model KMeans n clusters 3 random state 1 fit X labels kmeans model labels metrics calinski harabasz score X labels 561 59 topic Advantages The score is higher when clusters are dense and well separated which relates to a standard concept of a cluster The score is fast to compute topic Drawbacks The Calinski Harabasz index is generally higher for convex clusters than other concepts of clusters such as density based clusters like those obtained through DBSCAN dropdown Mathematical formulation For a set of data math E of size math n E which has been clustered into math k clusters the Calinski Harabasz score math s is defined as the ratio of the between clusters dispersion mean and the within cluster dispersion math s frac mathrm tr B k mathrm tr W k times frac n E k k 1 where math mathrm tr B k is trace of the between group dispersion matrix and math mathrm tr W k is the trace of the within cluster dispersion matrix defined by math W k sum q 1 k sum x in C q x c q x c q T math B k sum q 1 k n q c q c E c q c E T with math C q the set of points in cluster math q math c q the center of cluster math q math c E the center of math E and math n q the number of points in cluster math q dropdown References Cali ski T Harabasz J 1974 A Dendrite Method for Cluster Analysis https www researchgate net publication 233096619 A Dendrite Method for Cluster Analysis doi Communications in Statistics theory and Methods 3 1 27 10 1080 03610927408827101 davies bouldin index Davies Bouldin Index If the ground truth labels are not known the Davies Bouldin index func sklearn metrics davies bouldin score can be used to evaluate the model where a lower Davies 
Bouldin index relates to a model with better separation between the clusters This index signifies the average similarity between clusters where the similarity is a measure that compares the distance between clusters with the size of the clusters themselves Zero is the lowest possible score Values closer to zero indicate a better partition In normal usage the Davies Bouldin index is applied to the results of a cluster analysis as follows from sklearn import datasets iris datasets load iris X iris data from sklearn cluster import KMeans from sklearn metrics import davies bouldin score kmeans KMeans n clusters 3 random state 1 fit X labels kmeans labels davies bouldin score X labels 0 666 topic Advantages The computation of Davies Bouldin is simpler than that of Silhouette scores The index is solely based on quantities and features inherent to the dataset as its computation only uses point wise distances topic Drawbacks The Davies Boulding index is generally higher for convex clusters than other concepts of clusters such as density based clusters like those obtained from DBSCAN The usage of centroid distance limits the distance metric to Euclidean space dropdown Mathematical formulation The index is defined as the average similarity between each cluster math C i for math i 1 k and its most similar one math C j In the context of this index similarity is defined as a measure math R ij that trades off math s i the average distance between each point of cluster math i and the centroid of that cluster also know as cluster diameter math d ij the distance between cluster centroids math i and math j A simple choice to construct math R ij so that it is nonnegative and symmetric is math R ij frac s i s j d ij Then the Davies Bouldin index is defined as math DB frac 1 k sum i 1 k max i neq j R ij dropdown References Davies David L Bouldin Donald W 1979 doi A Cluster Separation Measure 10 1109 TPAMI 1979 4766909 IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI 
1 2 224 227 Halkidi Maria Batistakis Yannis Vazirgiannis Michalis 2001 doi On Clustering Validation Techniques 10 1023 A 1012801612483 Journal of Intelligent Information Systems 17 2 3 107 145 Wikipedia entry for Davies Bouldin index https en wikipedia org wiki Davies Bouldin index contingency matrix Contingency Matrix Contingency matrix func sklearn metrics cluster contingency matrix reports the intersection cardinality for every true predicted cluster pair The contingency matrix provides sufficient statistics for all clustering metrics where the samples are independent and identically distributed and one doesn t need to account for some instances not being clustered Here is an example from sklearn metrics cluster import contingency matrix x a a a b b b y 0 0 1 1 2 2 contingency matrix x y array 2 1 0 0 1 2 The first row of output array indicates that there are three samples whose true cluster is a Of them two are in predicted cluster 0 one is in 1 and none is in 2 And the second row indicates that there are three samples whose true cluster is b Of them none is in predicted cluster 0 one is in 1 and two are in 2 A ref confusion matrix confusion matrix for classification is a square contingency matrix where the order of rows and columns correspond to a list of classes topic Advantages Allows to examine the spread of each true cluster across predicted clusters and vice versa The contingency table calculated is typically utilized in the calculation of a similarity statistic like the others listed in this document between the two clusterings topic Drawbacks Contingency matrix is easy to interpret for a small number of clusters but becomes very hard to interpret for a large number of clusters It doesn t give a single metric to use as an objective for clustering optimisation dropdown References Wikipedia entry for contingency matrix https en wikipedia org wiki Contingency table pair confusion matrix Pair Confusion Matrix The pair confusion matrix func sklearn metrics 
cluster pair confusion matrix is a 2x2 similarity matrix math C left begin matrix C 00 C 01 C 10 C 11 end matrix right between two clusterings computed by considering all pairs of samples and counting pairs that are assigned into the same or into different clusters under the true and predicted clusterings It has the following entries math C 00 number of pairs with both clusterings having the samples not clustered together math C 10 number of pairs with the true label clustering having the samples clustered together but the other clustering not having the samples clustered together math C 01 number of pairs with the true label clustering not having the samples clustered together but the other clustering having the samples clustered together math C 11 number of pairs with both clusterings having the samples clustered together Considering a pair of samples that is clustered together a positive pair then as in binary classification the count of true negatives is math C 00 false negatives is math C 10 true positives is math C 11 and false positives is math C 01 Perfectly matching labelings have all non zero entries on the diagonal regardless of actual label values from sklearn metrics cluster import pair confusion matrix pair confusion matrix 0 0 1 1 0 0 1 1 array 8 0 0 4 pair confusion matrix 0 0 1 1 1 1 0 0 array 8 0 0 4 Labelings that assign all classes members to the same clusters are complete but may not always be pure hence penalized and have some off diagonal non zero entries pair confusion matrix 0 0 1 2 0 0 1 1 array 8 2 0 2 The matrix is not symmetric pair confusion matrix 0 0 1 1 0 0 1 2 array 8 0 2 2 If classes members are completely split across different clusters the assignment is totally incomplete hence the matrix has all zero diagonal entries pair confusion matrix 0 0 0 0 0 1 2 3 array 0 0 12 0 dropdown References doi Comparing Partitions 10 1007 BF01908075 L Hubert and P Arabie Journal of Classification 1985 |
.. _naive_bayes:
===========
Naive Bayes
===========
.. currentmodule:: sklearn.naive_bayes
Naive Bayes methods are a set of supervised learning algorithms
based on applying Bayes' theorem with the "naive" assumption of
conditional independence between every pair of features given the
value of the class variable. Bayes' theorem states the following
relationship, given class variable :math:`y` and dependent feature
vector :math:`x_1` through :math:`x_n`, :
.. math::
P(y \mid x_1, \dots, x_n) = \frac{P(y) P(x_1, \dots, x_n \mid y)}
{P(x_1, \dots, x_n)}
Using the naive conditional independence assumption that
.. math::
P(x_i | y, x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = P(x_i | y),
for all :math:`i`, this relationship is simplified to
.. math::
P(y \mid x_1, \dots, x_n) = \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)}
{P(x_1, \dots, x_n)}
Since :math:`P(x_1, \dots, x_n)` is constant given the input,
we can use the following classification rule:
.. math::
P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y)
\Downarrow
\hat{y} = \arg\max_y P(y) \prod_{i=1}^{n} P(x_i \mid y),
and we can use Maximum A Posteriori (MAP) estimation to estimate
:math:`P(y)` and :math:`P(x_i \mid y)`;
the former is then the relative frequency of class :math:`y`
in the training set.
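The MAP decision rule above can be sketched directly in NumPy. The priors and
per-class likelihoods below are made-up numbers for illustration, not estimates
from any dataset:

```python
import numpy as np

# Illustrative numbers: P(y) for 2 classes and the per-class likelihoods
# P(x_i | y) of one observed sample's 3 feature values.
prior = np.array([0.6, 0.4])
likelihood = np.array([[0.8, 0.9, 0.5],    # P(x_i | y=0)
                       [0.3, 0.3, 0.5]])   # P(x_i | y=1)

# MAP rule: argmax_y  log P(y) + sum_i log P(x_i | y)
log_posterior = np.log(prior) + np.log(likelihood).sum(axis=1)
y_hat = int(np.argmax(log_posterior))
print(y_hat)  # 0
```

Working in log space, as every scikit-learn naive Bayes estimator does, avoids
underflow when many small probabilities are multiplied.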
The different naive Bayes classifiers differ mainly by the assumptions they
make regarding the distribution of :math:`P(x_i \mid y)`.
In spite of their apparently over-simplified assumptions, naive Bayes
classifiers have worked quite well in many real-world situations, famously
document classification and spam filtering. They require a small amount
of training data to estimate the necessary parameters. (For theoretical
reasons why naive Bayes works well, and on which types of data it does, see
the references below.)
Naive Bayes learners and classifiers can be extremely fast compared to more
sophisticated methods.
The decoupling of the class conditional feature distributions means that each
distribution can be independently estimated as a one dimensional distribution.
This in turn helps to alleviate problems stemming from the curse of
dimensionality.
On the flip side, although naive Bayes is known as a decent classifier,
it is known to be a bad estimator, so the probability outputs from
``predict_proba`` are not to be taken too seriously.
.. dropdown:: References
* H. Zhang (2004). `The optimality of Naive Bayes.
<https://www.cs.unb.ca/~hzhang/publications/FLAIRS04ZhangH.pdf>`_
Proc. FLAIRS.
.. _gaussian_naive_bayes:
Gaussian Naive Bayes
--------------------
:class:`GaussianNB` implements the Gaussian Naive Bayes algorithm for
classification. The likelihood of the features is assumed to be Gaussian:
.. math::
P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma^2_y}} \exp\left(-\frac{(x_i - \mu_y)^2}{2\sigma^2_y}\right)
The parameters :math:`\sigma_y` and :math:`\mu_y`
are estimated using maximum likelihood.
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.naive_bayes import GaussianNB
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
>>> gnb = GaussianNB()
>>> y_pred = gnb.fit(X_train, y_train).predict(X_test)
>>> print("Number of mislabeled points out of a total %d points : %d"
... % (X_test.shape[0], (y_test != y_pred).sum()))
Number of mislabeled points out of a total 75 points : 4
.. _multinomial_naive_bayes:
Multinomial Naive Bayes
-----------------------
:class:`MultinomialNB` implements the naive Bayes algorithm for multinomially
distributed data, and is one of the two classic naive Bayes variants used in
text classification (where the data are typically represented as word vector
counts, although tf-idf vectors are also known to work well in practice).
The distribution is parametrized by vectors
:math:`\theta_y = (\theta_{y1},\ldots,\theta_{yn})`
for each class :math:`y`, where :math:`n` is the number of features
(in text classification, the size of the vocabulary)
and :math:`\theta_{yi}` is the probability :math:`P(x_i \mid y)`
of feature :math:`i` appearing in a sample belonging to class :math:`y`.
The parameters :math:`\theta_y` are estimated by a smoothed
version of maximum likelihood, i.e. relative frequency counting:
.. math::
\hat{\theta}_{yi} = \frac{ N_{yi} + \alpha}{N_y + \alpha n}
where :math:`N_{yi} = \sum_{x \in T} x_i` is
the number of times feature :math:`i` appears in all samples of class :math:`y`
in the training set :math:`T`,
and :math:`N_{y} = \sum_{i=1}^{n} N_{yi}` is the total count of
all features for class :math:`y`.
The smoothing priors :math:`\alpha \ge 0` account for
features not present in the learning samples and prevents zero probabilities
in further computations.
Setting :math:`\alpha = 1` is called Laplace smoothing,
while :math:`\alpha < 1` is called Lidstone smoothing.
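As a usage sketch, random integer counts below stand in for a document-term
matrix (this mirrors the :class:`MultinomialNB` docstring example):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Random counts as a stand-in for word counts: 6 "documents", 100 "terms".
rng = np.random.RandomState(1)
X = rng.randint(5, size=(6, 100))
y = np.array([1, 2, 3, 4, 5, 6])

clf = MultinomialNB(alpha=1.0)  # alpha=1.0 gives Laplace smoothing
clf.fit(X, y)
print(clf.predict(X[2:3]))      # [3]
```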
.. _complement_naive_bayes:
Complement Naive Bayes
----------------------
:class:`ComplementNB` implements the complement naive Bayes (CNB) algorithm.
CNB is an adaptation of the standard multinomial naive Bayes (MNB) algorithm
that is particularly suited for imbalanced data sets. Specifically, CNB uses
statistics from the *complement* of each class to compute the model's weights.
The inventors of CNB show empirically that the parameter estimates for CNB are
more stable than those for MNB. Further, CNB regularly outperforms MNB (often
by a considerable margin) on text classification tasks.
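A minimal API sketch on synthetic imbalanced count data (illustrative only;
CNB's advantage over MNB typically shows on real imbalanced text corpora, not
on toy data like this):

```python
import numpy as np
from sklearn.naive_bayes import ComplementNB

# Illustrative imbalanced counts: 50 samples of class 0, only 5 of class 1.
rng = np.random.RandomState(0)
X = np.vstack([rng.poisson(3.0, size=(50, 20)),
               rng.poisson(6.0, size=(5, 20))])
y = np.array([0] * 50 + [1] * 5)

clf = ComplementNB(alpha=1.0).fit(X, y)
pred = clf.predict(X)
print(pred.shape)  # (55,)
```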
.. dropdown:: Weights calculation
The procedure for calculating the weights is as follows:
.. math::
\hat{\theta}_{ci} = \frac{\alpha_i + \sum_{j:y_j \neq c} d_{ij}}
{\alpha + \sum_{j:y_j \neq c} \sum_{k} d_{kj}}
w_{ci} = \log \hat{\theta}_{ci}
w_{ci} = \frac{w_{ci}}{\sum_{j} |w_{cj}|}
where the summations are over all documents :math:`j` not in class :math:`c`,
:math:`d_{ij}` is either the count or tf-idf value of term :math:`i` in document
:math:`j`, :math:`\alpha_i` is a smoothing hyperparameter like that found in
MNB, and :math:`\alpha = \sum_{i} \alpha_i`. The second normalization addresses
the tendency for longer documents to dominate parameter estimates in MNB. The
classification rule is:
.. math::
\hat{c} = \arg\min_c \sum_{i} t_i w_{ci}
i.e., a document is assigned to the class that is the *poorest* complement
match.
.. dropdown:: References
* Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003).
`Tackling the poor assumptions of naive bayes text classifiers.
<https://people.csail.mit.edu/jrennie/papers/icml03-nb.pdf>`_
In ICML (Vol. 3, pp. 616-623).
.. _bernoulli_naive_bayes:
Bernoulli Naive Bayes
---------------------
:class:`BernoulliNB` implements the naive Bayes training and classification
algorithms for data that is distributed according to multivariate Bernoulli
distributions; i.e., there may be multiple features but each one is assumed
to be a binary-valued (Bernoulli, boolean) variable.
Therefore, this class requires samples to be represented as binary-valued
feature vectors; if handed any other kind of data, a :class:`BernoulliNB` instance
may binarize its input (depending on the ``binarize`` parameter).
The decision rule for Bernoulli naive Bayes is based on
.. math::
P(x_i \mid y) = P(x_i = 1 \mid y) x_i + (1 - P(x_i = 1 \mid y)) (1 - x_i)
which differs from multinomial NB's rule
in that it explicitly penalizes the non-occurrence of a feature :math:`i`
that is an indicator for class :math:`y`,
where the multinomial variant would simply ignore a non-occurring feature.
In the case of text classification, word occurrence vectors (rather than word
count vectors) may be used to train and use this classifier. :class:`BernoulliNB`
might perform better on some datasets, especially those with shorter documents.
It is advisable to evaluate both models, if time permits.
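A usage sketch on random binary occurrence vectors (mirroring the
:class:`BernoulliNB` docstring example); with count data one would instead set
``binarize`` to a threshold so positive counts map to 1:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Random binary features as a stand-in for word-occurrence vectors.
rng = np.random.RandomState(1)
X = rng.randint(2, size=(6, 100))
y = np.array([1, 2, 3, 4, 4, 5])

clf = BernoulliNB().fit(X, y)   # binarize=0.0 by default
print(clf.predict(X[2:3]))      # [3]
```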
.. dropdown:: References
* C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to
Information Retrieval. Cambridge University Press, pp. 234-265.
* A. McCallum and K. Nigam (1998).
`A comparison of event models for Naive Bayes text classification.
<https://citeseerx.ist.psu.edu/doc_view/pid/04ce064505b1635583fa0d9cc07cac7e9ea993cc>`_
Proc. AAAI/ICML-98 Workshop on Learning for Text Categorization, pp. 41-48.
* V. Metsis, I. Androutsopoulos and G. Paliouras (2006).
`Spam filtering with Naive Bayes -- Which Naive Bayes?
<https://citeseerx.ist.psu.edu/doc_view/pid/8bd0934b366b539ec95e683ae39f8abb29ccc757>`_
3rd Conf. on Email and Anti-Spam (CEAS).
.. _categorical_naive_bayes:
Categorical Naive Bayes
-----------------------
:class:`CategoricalNB` implements the categorical naive Bayes
algorithm for categorically distributed data. It assumes that each feature,
which is described by the index :math:`i`, has its own categorical
distribution.
For each feature :math:`i` in the training set :math:`X`,
:class:`CategoricalNB` estimates a categorical distribution for each feature :math:`i`
of :math:`X` conditioned on the class :math:`y`. The index set of the samples is defined as
:math:`J = \{ 1, \dots, m \}`, with :math:`m` as the number of samples.
.. dropdown:: Probability calculation
The probability of category :math:`t` in feature :math:`i` given class
:math:`c` is estimated as:
.. math::
P(x_i = t \mid y = c \: ;\, \alpha) = \frac{ N_{tic} + \alpha}{N_{c} +
\alpha n_i},
where :math:`N_{tic} = |\{j \in J \mid x_{ij} = t, y_j = c\}|` is the number
of times category :math:`t` appears in the samples :math:`x_{i}`, which belong
to class :math:`c`, :math:`N_{c} = |\{ j \in J\mid y_j = c\}|` is the number
of samples with class c, :math:`\alpha` is a smoothing parameter and
:math:`n_i` is the number of available categories of feature :math:`i`.
:class:`CategoricalNB` assumes that the sample matrix :math:`X` is encoded (for
instance with the help of :class:`~sklearn.preprocessing.OrdinalEncoder`) such
that all categories for each feature :math:`i` are represented with numbers
:math:`0, ..., n_i - 1` where :math:`n_i` is the number of available categories
of feature :math:`i`.
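A small sketch with made-up string categories, showing the
:class:`~sklearn.preprocessing.OrdinalEncoder` step that maps each feature's
categories to :math:`0, ..., n_i - 1` before fitting:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import CategoricalNB

# Made-up categorical data: 2 features (color, size), 2 classes.
X = np.array([["red", "small"], ["red", "large"],
              ["blue", "small"], ["blue", "large"]])
y = np.array([0, 0, 1, 1])

# Encode categories as integers 0..n_i-1, as CategoricalNB expects.
X_enc = OrdinalEncoder().fit_transform(X)
clf = CategoricalNB().fit(X_enc, y)
print(clf.predict(X_enc[:1]))  # [0]: "red" only ever occurs with class 0
```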
Out-of-core naive Bayes model fitting
-------------------------------------
Naive Bayes models can be used to tackle large scale classification problems
for which the full training set might not fit in memory. To handle this case,
:class:`MultinomialNB`, :class:`BernoulliNB`, and :class:`GaussianNB`
expose a ``partial_fit`` method that can be used
incrementally as done with other classifiers as demonstrated in
:ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`. All naive Bayes
classifiers support sample weighting.
Contrary to the ``fit`` method, the first call to ``partial_fit`` needs to be
passed the list of all the expected class labels.
For an overview of available strategies in scikit-learn, see also the
:ref:`out-of-core learning <scaling_strategies>` documentation.
.. note::

   The ``partial_fit`` method call of naive Bayes models introduces some
   computational overhead. It is recommended to use data chunk sizes that are
   as large as possible, that is, as large as the available RAM allows.
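The chunked fitting loop can be sketched as follows (random counts stand in
for a data stream; note that ``classes`` is required on the first
``partial_fit`` call and ignored afterwards):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy count data standing in for a stream too large to fit in memory.
rng = np.random.RandomState(0)
X = rng.randint(5, size=(600, 20))
y = rng.randint(3, size=600)

clf = MultinomialNB()
classes = np.unique(y)  # all expected labels must be known up front
for X_chunk, y_chunk in zip(np.array_split(X, 3), np.array_split(y, 3)):
    clf.partial_fit(X_chunk, y_chunk, classes=classes)
print(clf.predict(X[:5]).shape)  # (5,)
```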
.. _cross_decomposition:
===================
Cross decomposition
===================
.. currentmodule:: sklearn.cross_decomposition
The cross decomposition module contains **supervised** estimators for
dimensionality reduction and regression, belonging to the "Partial Least
Squares" family.
.. figure:: ../auto_examples/cross_decomposition/images/sphx_glr_plot_compare_cross_decomposition_001.png
:target: ../auto_examples/cross_decomposition/plot_compare_cross_decomposition.html
:scale: 75%
:align: center
Cross decomposition algorithms find the fundamental relations between two
matrices (X and Y). They are latent variable approaches to modeling the
covariance structures in these two spaces. They will try to find the
multidimensional direction in the X space that explains the maximum
multidimensional variance direction in the Y space. In other words, PLS
projects both `X` and `Y` into a lower-dimensional subspace such that the
covariance between `transformed(X)` and `transformed(Y)` is maximal.
PLS draws similarities with `Principal Component Regression
<https://en.wikipedia.org/wiki/Principal_component_regression>`_ (PCR), where
the samples are first projected into a lower-dimensional subspace, and the
targets `y` are predicted using `transformed(X)`. One issue with PCR is that
the dimensionality reduction is unsupervised, and may lose some important
variables: PCR would keep the features with the most variance, but it's
possible that features with small variance are relevant for predicting
the target. In a way, PLS allows for the same kind of dimensionality
reduction, but by taking into account the targets `y`. An illustration of
this fact is given in the following example:
* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_pcr_vs_pls.py`.
Apart from CCA, the PLS estimators are particularly suited when the matrix of
predictors has more variables than observations, and when there is
multicollinearity among the features. By contrast, standard linear regression
would fail in these cases unless it is regularized.
Classes included in this module are :class:`PLSRegression`,
:class:`PLSCanonical`, :class:`CCA` and :class:`PLSSVD`.
PLSCanonical
------------
We here describe the algorithm used in :class:`PLSCanonical`. The other
estimators use variants of this algorithm, and are detailed below.
We recommend section [1]_ for more details and comparisons between these
algorithms. In [1]_, :class:`PLSCanonical` corresponds to "PLSW2A".
Given two centered matrices :math:`X \in \mathbb{R}^{n \times d}` and
:math:`Y \in \mathbb{R}^{n \times t}`, and a number of components :math:`K`,
:class:`PLSCanonical` proceeds as follows:
Set :math:`X_1` to :math:`X` and :math:`Y_1` to :math:`Y`. Then, for each
:math:`k \in [1, K]`:
- a) compute :math:`u_k \in \mathbb{R}^d` and :math:`v_k \in \mathbb{R}^t`,
the first left and right singular vectors of the cross-covariance matrix
:math:`C = X_k^T Y_k`.
:math:`u_k` and :math:`v_k` are called the *weights*.
By definition, :math:`u_k` and :math:`v_k` are
chosen so that they maximize the covariance between the projected
:math:`X_k` and the projected target, that is :math:`\text{Cov}(X_k u_k,
Y_k v_k)`.
- b) Project :math:`X_k` and :math:`Y_k` on the singular vectors to obtain
*scores*: :math:`\xi_k = X_k u_k` and :math:`\omega_k = Y_k v_k`
- c) Regress :math:`X_k` on :math:`\xi_k`, i.e. find a vector :math:`\gamma_k
\in \mathbb{R}^d` such that the rank-1 matrix :math:`\xi_k \gamma_k^T`
is as close as possible to :math:`X_k`. Do the same on :math:`Y_k` with
:math:`\omega_k` to obtain :math:`\delta_k`. The vectors
:math:`\gamma_k` and :math:`\delta_k` are called the *loadings*.
- d) *deflate* :math:`X_k` and :math:`Y_k`, i.e. subtract the rank-1
approximations: :math:`X_{k+1} = X_k - \xi_k \gamma_k^T`, and
:math:`Y_{k + 1} = Y_k - \omega_k \delta_k^T`.
At the end, we have approximated :math:`X` as a sum of rank-1 matrices:
:math:`X = \Xi \Gamma^T` where :math:`\Xi \in \mathbb{R}^{n \times K}`
contains the scores in its columns, and :math:`\Gamma^T \in \mathbb{R}^{K
\times d}` contains the loadings in its rows. Similarly for :math:`Y`, we
have :math:`Y = \Omega \Delta^T`.
Note that the scores matrices :math:`\Xi` and :math:`\Omega` correspond to
the projections of the training data :math:`X` and :math:`Y`, respectively.
Step *a)* may be performed in two ways: either by computing the whole SVD of
:math:`C` and retaining only the singular vectors with the largest singular
values, or by directly computing the singular vectors using the power method (cf. section 11.3 in [1]_),
which corresponds to the `'nipals'` option of the `algorithm` parameter.
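The four steps above can be sketched directly in NumPy. This is an illustrative reimplementation of a single iteration :math:`k` (not scikit-learn's actual code); it checks that after deflation the residual matrices are orthogonal to the extracted scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, t = 50, 4, 3
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, t))
X -= X.mean(axis=0)  # the algorithm assumes centered matrices
Y -= Y.mean(axis=0)

# step a): weights = first singular vectors of C = X_k^T Y_k
C = X.T @ Y
U, s, Vt = np.linalg.svd(C, full_matrices=False)
u, v = U[:, 0], Vt[0, :]

# step b): scores
xi, omega = X @ u, Y @ v

# step c): loadings, by rank-1 least squares on the scores
gamma = X.T @ xi / (xi @ xi)
delta = Y.T @ omega / (omega @ omega)

# step d): deflation, subtracting the rank-1 approximations
X1 = X - np.outer(xi, gamma)
Y1 = Y - np.outer(omega, delta)

# the deflated matrices no longer contain the extracted directions
print(np.abs(X1.T @ xi).max(), np.abs(Y1.T @ omega).max())  # both ~0
```

Repeating this loop :math:`K` times and stacking the scores and loadings yields the matrices :math:`\Xi, \Gamma, \Omega, \Delta` described above.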
.. dropdown:: Transforming data
To transform :math:`X` into :math:`\bar{X}`, we need to find a projection
matrix :math:`P` such that :math:`\bar{X} = XP`. We know that for the
training data, :math:`\Xi = XP`, and :math:`X = \Xi \Gamma^T`. Setting
:math:`P = U(\Gamma^T U)^{-1}` where :math:`U` is the matrix with the
:math:`u_k` in the columns, we have :math:`XP = X U(\Gamma^T U)^{-1} = \Xi
(\Gamma^T U) (\Gamma^T U)^{-1} = \Xi` as desired. The rotation matrix
:math:`P` can be accessed from the `x_rotations_` attribute.
Similarly, :math:`Y` can be transformed using the rotation matrix
:math:`V(\Delta^T V)^{-1}`, accessed via the `y_rotations_` attribute.
.. dropdown:: Predicting the targets `Y`
To predict the targets of some data :math:`X`, we are looking for a
coefficient matrix :math:`\beta \in \mathbb{R}^{d \times t}` such that :math:`Y =
X\beta`.
The idea is to try to predict the transformed targets :math:`\Omega` as a
function of the transformed samples :math:`\Xi`, by computing :math:`\alpha
\in \mathbb{R}` such that :math:`\Omega = \alpha \Xi`.
Then, we have :math:`Y = \Omega \Delta^T = \alpha \Xi \Delta^T`, and since
:math:`\Xi` is the transformed training data we have that :math:`Y = X \alpha
P \Delta^T`, and as a result the coefficient matrix :math:`\beta = \alpha P
\Delta^T`.
:math:`\beta` can be accessed through the `coef_` attribute.
PLSSVD
------
:class:`PLSSVD` is a simplified version of :class:`PLSCanonical`
described earlier: instead of iteratively deflating the matrices :math:`X_k`
and :math:`Y_k`, :class:`PLSSVD` computes the SVD of :math:`C = X^TY`
only *once*, and stores the `n_components` singular vectors corresponding to
the biggest singular values in the matrices `U` and `V`, corresponding to the
`x_weights_` and `y_weights_` attributes. Here, the transformed data is
simply `transformed(X) = XU` and `transformed(Y) = YV`.
If `n_components == 1`, :class:`PLSSVD` and :class:`PLSCanonical` are
strictly equivalent.
PLSRegression
-------------
The :class:`PLSRegression` estimator is similar to
:class:`PLSCanonical` with `algorithm='nipals'`, with 2 significant
differences:
- at step a) in the power method to compute :math:`u_k` and :math:`v_k`,
:math:`v_k` is never normalized.
- at step c), the targets :math:`Y_k` are approximated using the projection
of :math:`X_k` (i.e. :math:`\xi_k`) instead of the projection of
:math:`Y_k` (i.e. :math:`\omega_k`). In other words, the loadings
computation is different. As a result, the deflation in step d) will also
be affected.
These two modifications affect the output of `predict` and `transform`,
which are not the same as for :class:`PLSCanonical`. Also, while the number
of components is limited by `min(n_samples, n_features, n_targets)` in
:class:`PLSCanonical`, here the limit is the rank of :math:`X^TX`, i.e.
`min(n_samples, n_features)`.
:class:`PLSRegression` is also known as PLS1 (single targets) and PLS2
(multiple targets). Much like :class:`~sklearn.linear_model.Lasso`,
:class:`PLSRegression` is a form of regularized linear regression where the
number of components controls the strength of the regularization.
Canonical Correlation Analysis
------------------------------
Canonical Correlation Analysis was developed prior to, and independently of, PLS.
But it turns out that :class:`CCA` is a special case of PLS, and corresponds
to PLS in "Mode B" in the literature.
:class:`CCA` differs from :class:`PLSCanonical` in the way the weights
:math:`u_k` and :math:`v_k` are computed in the power method of step a).
Details can be found in section 10 of [1]_.
Since :class:`CCA` involves the inversion of :math:`X_k^TX_k` and
:math:`Y_k^TY_k`, this estimator can be unstable if the number of features or
targets is greater than the number of samples.
.. rubric:: References
.. [1] `A survey of Partial Least Squares (PLS) methods, with emphasis on the two-block
case <https://stat.uw.edu/sites/default/files/files/reports/2000/tr371.pdf>`_,
JA Wegelin
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_compare_cross_decomposition.py`
* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_pcr_vs_pls.py`

.. _tree:
==============
Decision Trees
==============
.. currentmodule:: sklearn.tree
**Decision Trees (DTs)** are a non-parametric supervised learning method used
for :ref:`classification <tree_classification>` and :ref:`regression
<tree_regression>`. The goal is to create a model that predicts the value of a
target variable by learning simple decision rules inferred from the data
features. A tree can be seen as a piecewise constant approximation.
For instance, in the example below, decision trees learn from data to
approximate a sine curve with a set of if-then-else decision rules. The deeper
the tree, the more complex the decision rules and the better the model fits the data.
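A minimal sketch of this effect, with an illustrative noiseless sine target and two arbitrary depths, shows that the deeper tree fits the training curve more closely:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(200, 1), axis=0)
y = np.sin(X).ravel()

# a shallow tree yields few, coarse if-then-else rules;
# a deeper tree yields more, finer-grained ones
shallow = DecisionTreeRegressor(max_depth=2).fit(X, y)
deep = DecisionTreeRegressor(max_depth=5).fit(X, y)
print(shallow.score(X, y), deep.score(X, y))  # deeper scores higher
```

With noisy data, of course, the deeper tree may simply be overfitting; see the disadvantages listed below.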
.. figure:: ../auto_examples/tree/images/sphx_glr_plot_tree_regression_001.png
:target: ../auto_examples/tree/plot_tree_regression.html
:scale: 75
:align: center
Some advantages of decision trees are:
- Simple to understand and to interpret. Trees can be visualized.
- Requires little data preparation. Other techniques often require data
  normalization, the creation of dummy variables, and the removal of blank
  values. Some tree and algorithm combinations support
  :ref:`missing values <tree_missing_value_support>`.
- The cost of using the tree (i.e., predicting data) is logarithmic in the
number of data points used to train the tree.
- Able to handle both numerical and categorical data. However, the scikit-learn
implementation does not support categorical variables for now. Other
techniques are usually specialized in analyzing datasets that have only one type
of variable. See :ref:`algorithms <tree_algorithms>` for more
information.
- Able to handle multi-output problems.
- Uses a white box model. If a given situation is observable in a model,
the explanation for the condition is easily explained by boolean logic.
By contrast, in a black box model (e.g., in an artificial neural
network), results may be more difficult to interpret.
- Possible to validate a model using statistical tests. That makes it
possible to account for the reliability of the model.
- Performs well even if its assumptions are somewhat violated by
the true model from which the data were generated.
The disadvantages of decision trees include:
- Decision-tree learners can create over-complex trees that do not
generalize the data well. This is called overfitting. Mechanisms
such as pruning, setting the minimum number of samples required
at a leaf node or setting the maximum depth of the tree are
necessary to avoid this problem.
- Decision trees can be unstable because small variations in the
data might result in a completely different tree being generated.
This problem is mitigated by using decision trees within an
ensemble.
- Predictions of decision trees are neither smooth nor continuous, but
piecewise constant approximations as seen in the above figure. Therefore,
they are not good at extrapolation.
- The problem of learning an optimal decision tree is known to be
NP-complete under several aspects of optimality and even for simple
concepts. Consequently, practical decision-tree learning algorithms
are based on heuristic algorithms such as the greedy algorithm where
locally optimal decisions are made at each node. Such algorithms
cannot guarantee to return the globally optimal decision tree. This
can be mitigated by training multiple trees in an ensemble learner,
where the features and samples are randomly sampled with replacement.
- There are concepts that are hard to learn because decision trees
do not express them easily, such as XOR, parity or multiplexer problems.
- Decision tree learners create biased trees if some classes dominate.
It is therefore recommended to balance the dataset prior to fitting
with the decision tree.
.. _tree_classification:
Classification
==============
:class:`DecisionTreeClassifier` is a class capable of performing multi-class
classification on a dataset.
As with other classifiers, :class:`DecisionTreeClassifier` takes as input two arrays:
an array X, sparse or dense, of shape ``(n_samples, n_features)`` holding the
training samples, and an array Y of integer values, shape ``(n_samples,)``,
holding the class labels for the training samples::
>>> from sklearn import tree
>>> X = [[0, 0], [1, 1]]
>>> Y = [0, 1]
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, Y)
After being fitted, the model can then be used to predict the class of samples::
>>> clf.predict([[2., 2.]])
array([1])
In case there are multiple classes with the same highest probability, the
classifier will predict the class with the lowest index among those classes.
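A tiny illustrative example of this tie-breaking rule: two identical samples with different labels force a leaf where both classes are equally likely, so the lower class index is predicted:

```python
from sklearn.tree import DecisionTreeClassifier

# identical inputs with different labels cannot be split apart,
# so both samples end up in one leaf with probabilities [0.5, 0.5]
clf = DecisionTreeClassifier().fit([[0], [0]], [0, 1])
print(clf.predict_proba([[0]]))  # equal probability for both classes
print(clf.predict([[0]]))        # class 0: the lowest index wins the tie
```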
As an alternative to outputting a specific class, the probability of each class
can be predicted, which is the fraction of training samples of the class in a
leaf::
>>> clf.predict_proba([[2., 2.]])
array([[0., 1.]])
:class:`DecisionTreeClassifier` is capable of both binary (where the
labels are [-1, 1]) classification and multiclass (where the labels are
[0, ..., K-1]) classification.
Using the Iris dataset, we can construct a tree as follows::
>>> from sklearn.datasets import load_iris
>>> from sklearn import tree
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, y)
Once trained, you can plot the tree with the :func:`plot_tree` function::
>>> tree.plot_tree(clf)
[...]
.. figure:: ../auto_examples/tree/images/sphx_glr_plot_iris_dtc_002.png
:target: ../auto_examples/tree/plot_iris_dtc.html
:scale: 75
:align: center
.. dropdown:: Alternative ways to export trees
We can also export the tree in `Graphviz
<https://www.graphviz.org/>`_ format using the :func:`export_graphviz`
exporter. If you use the `conda <https://conda.io>`_ package manager, the graphviz binaries
and the python package can be installed with `conda install python-graphviz`.
Alternatively binaries for graphviz can be downloaded from the graphviz project homepage,
and the Python wrapper installed from pypi with `pip install graphviz`.
Below is an example graphviz export of the above tree trained on the entire
iris dataset; the results are saved in an output file `iris.pdf`::
>>> import graphviz # doctest: +SKIP
>>> dot_data = tree.export_graphviz(clf, out_file=None) # doctest: +SKIP
>>> graph = graphviz.Source(dot_data) # doctest: +SKIP
>>> graph.render("iris") # doctest: +SKIP
The :func:`export_graphviz` exporter also supports a variety of aesthetic
options, including coloring nodes by their class (or value for regression) and
using explicit variable and class names if desired. Jupyter notebooks also
render these plots inline automatically::
>>> dot_data = tree.export_graphviz(clf, out_file=None, # doctest: +SKIP
... feature_names=iris.feature_names, # doctest: +SKIP
... class_names=iris.target_names, # doctest: +SKIP
... filled=True, rounded=True, # doctest: +SKIP
... special_characters=True) # doctest: +SKIP
>>> graph = graphviz.Source(dot_data) # doctest: +SKIP
>>> graph # doctest: +SKIP
.. only:: html
.. figure:: ../images/iris.svg
:align: center
.. only:: latex
.. figure:: ../images/iris.pdf
:align: center
.. figure:: ../auto_examples/tree/images/sphx_glr_plot_iris_dtc_001.png
:target: ../auto_examples/tree/plot_iris_dtc.html
:align: center
:scale: 75
Alternatively, the tree can also be exported in textual format with the
function :func:`export_text`. This method doesn't require the installation
of external libraries and is more compact:
>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.tree import export_text
>>> iris = load_iris()
>>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
>>> decision_tree = decision_tree.fit(iris.data, iris.target)
>>> r = export_text(decision_tree, feature_names=iris['feature_names'])
>>> print(r)
|--- petal width (cm) <= 0.80
| |--- class: 0
|--- petal width (cm) > 0.80
| |--- petal width (cm) <= 1.75
| | |--- class: 1
| |--- petal width (cm) > 1.75
| | |--- class: 2
<BLANKLINE>
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_tree_plot_iris_dtc.py`
* :ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py`
.. _tree_regression:
Regression
==========
.. figure:: ../auto_examples/tree/images/sphx_glr_plot_tree_regression_001.png
:target: ../auto_examples/tree/plot_tree_regression.html
:scale: 75
:align: center
Decision trees can also be applied to regression problems, using the
:class:`DecisionTreeRegressor` class.
As in the classification setting, the fit method will take as argument arrays X
and y, except that in this case y is expected to have floating point values
instead of integer values::
>>> from sklearn import tree
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> clf = tree.DecisionTreeRegressor()
>>> clf = clf.fit(X, y)
>>> clf.predict([[1, 1]])
array([0.5])
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_tree_plot_tree_regression.py`
.. _tree_multioutput:
Multi-output problems
=====================
A multi-output problem is a supervised learning problem with several outputs
to predict, that is when Y is a 2d array of shape ``(n_samples, n_outputs)``.
When there is no correlation between the outputs, a very simple way to solve
this kind of problem is to build n independent models, i.e. one for each
output, and then to use those models to independently predict each one of the n
outputs. However, because it is likely that the output values related to the
same input are themselves correlated, an often better way is to build a single
model capable of predicting simultaneously all n outputs. First, it requires
lower training time since only a single estimator is built. Second, the
generalization accuracy of the resulting estimator may often be increased.
With regard to decision trees, this strategy can readily be used to support
multi-output problems. This requires the following changes:
- Store n output values in leaves, instead of 1;
- Use splitting criteria that compute the average reduction across all
n outputs.
This module offers support for multi-output problems by implementing this
strategy in both :class:`DecisionTreeClassifier` and
:class:`DecisionTreeRegressor`. If a decision tree is fit on an output array Y
of shape ``(n_samples, n_outputs)`` then the resulting estimator will:
* Output n_output values upon ``predict``;
* Output a list of n_output arrays of class probabilities upon
``predict_proba``.
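These two output conventions can be demonstrated on a small synthetic two-output classification task (the data below is illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
# two binary outputs, each driven by a different feature
Y = np.column_stack([(X[:, 0] > 0.5).astype(int),
                     (X[:, 1] > 0.5).astype(int)])

clf = DecisionTreeClassifier(random_state=0).fit(X, Y)
print(clf.predict(X[:2]).shape)  # (2, 2): n_outputs values per sample
proba = clf.predict_proba(X[:2])
print(type(proba), len(proba))   # a list of n_outputs probability arrays
```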
The use of multi-output trees for regression is demonstrated in
:ref:`sphx_glr_auto_examples_tree_plot_tree_regression.py`. In this example, the input
X is a single real value and the outputs Y are the sine and cosine of X.
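A reduced version of that setup can be sketched as follows (the actual example script differs in its details):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(200, 1), axis=0)
# two outputs per sample: sine and cosine of the single input
y = np.column_stack([np.sin(X).ravel(), np.cos(X).ravel()])

reg = DecisionTreeRegressor(max_depth=5).fit(X, y)
pred = reg.predict([[1.0], [3.0]])
print(pred.shape)  # (2, 2): one row per sample, one column per output
```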
.. figure:: ../auto_examples/tree/images/sphx_glr_plot_tree_regression_002.png
:target: ../auto_examples/tree/plot_tree_regression.html
:scale: 75
:align: center
The use of multi-output trees for classification is demonstrated in
:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`. In this example, the inputs
X are the pixels of the upper half of faces and the outputs Y are the pixels of
the lower half of those faces.
.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multioutput_face_completion_001.png
:target: ../auto_examples/miscellaneous/plot_multioutput_face_completion.html
:scale: 75
:align: center
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`
.. rubric:: References
* M. Dumont et al, `Fast multi-class image annotation with random subwindows
and multiple output randomized trees
<http://www.montefiore.ulg.ac.be/services/stochastic/pubs/2009/DMWG09/dumont-visapp09-shortpaper.pdf>`_,
International Conference on Computer Vision Theory and Applications 2009
.. _tree_complexity:
Complexity
==========
In general, the run time cost to construct a balanced binary tree is
:math:`O(n_{samples}n_{features}\log(n_{samples}))` and query time
:math:`O(\log(n_{samples}))`. Although the tree construction algorithm attempts
to generate balanced trees, they will not always be balanced. Assuming that the
subtrees remain approximately balanced, the cost at each node consists of
searching through :math:`O(n_{features})` to find the feature that offers the
largest reduction in the impurity criterion, e.g. log loss (which is equivalent to an
information gain). This has a cost of
:math:`O(n_{features}n_{samples}\log(n_{samples}))` at each node, leading to a
total cost over the entire trees (by summing the cost at each node) of
:math:`O(n_{features}n_{samples}^{2}\log(n_{samples}))`.
Tips on practical use
=====================
* Decision trees tend to overfit on data with a large number of features.
Getting the right ratio of samples to number of features is important, since
a tree with few samples in high dimensional space is very likely to overfit.
* Consider performing dimensionality reduction (:ref:`PCA <PCA>`,
:ref:`ICA <ICA>`, or :ref:`feature_selection`) beforehand to
give your tree a better chance of finding features that are discriminative.
* :ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py` will help
in gaining more insights about how the decision tree makes predictions, which is
important for understanding the important features in the data.
* Visualize your tree as you are training by using the ``export``
function. Use ``max_depth=3`` as an initial tree depth to get a feel for
how the tree is fitting to your data, and then increase the depth.
* Remember that the number of samples required to populate the tree doubles
for each additional level the tree grows to. Use ``max_depth`` to control
the size of the tree to prevent overfitting.
* Use ``min_samples_split`` or ``min_samples_leaf`` to ensure that multiple
samples inform every decision in the tree, by controlling which splits will
be considered. A very small number will usually mean the tree will overfit,
whereas a large number will prevent the tree from learning the data. Try
``min_samples_leaf=5`` as an initial value. If the sample size varies
greatly, a float number can be used as percentage in these two parameters.
While ``min_samples_split`` can create arbitrarily small leaves,
``min_samples_leaf`` guarantees that each leaf has a minimum size, avoiding
low-variance, over-fit leaf nodes in regression problems. For
classification with few classes, ``min_samples_leaf=1`` is often the best
choice.
Note that ``min_samples_split`` considers samples directly and independently of
``sample_weight``, if provided (e.g. a node with m weighted samples is still
treated as having exactly m samples). Consider ``min_weight_fraction_leaf`` or
``min_impurity_decrease`` if accounting for sample weights is required at splits.
* Balance your dataset before training to prevent the tree from being biased
toward the classes that are dominant. Class balancing can be done by
sampling an equal number of samples from each class, or preferably by
normalizing the sum of the sample weights (``sample_weight``) for each
class to the same value. Also note that weight-based pre-pruning criteria,
such as ``min_weight_fraction_leaf``, will then be less biased toward
dominant classes than criteria that are not aware of the sample weights,
like ``min_samples_leaf``.
* If the samples are weighted, it will be easier to optimize the tree
structure using a weight-based pre-pruning criterion such as
``min_weight_fraction_leaf``, which ensures that leaf nodes contain at least
a fraction of the overall sum of the sample weights.
* All decision trees use ``np.float32`` arrays internally.
If training data is not in this format, a copy of the dataset will be made.
* If the input matrix X is very sparse, it is recommended to convert to sparse
``csc_matrix`` before calling fit and sparse ``csr_matrix`` before calling
predict. Training time can be orders of magnitude faster for a sparse
matrix input compared to a dense matrix when features have zero values in
most of the samples.
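The last tip can be sketched as follows, with an illustrative mostly-zero binary dataset:

```python
import numpy as np
from scipy.sparse import csc_matrix, csr_matrix
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
# a mostly-zero feature matrix; the label depends only on two features
X = rng.binomial(1, 0.05, size=(200, 50)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(csc_matrix(X), y)          # sparse CSC format for fitting
pred = clf.predict(csr_matrix(X))  # sparse CSR format for prediction
print(pred.shape)                  # (200,)
```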
.. _tree_algorithms:
Tree algorithms: ID3, C4.5, C5.0 and CART
==========================================
What are all the various decision tree algorithms and how do they differ
from each other? Which one is implemented in scikit-learn?
.. dropdown:: Various decision tree algorithms
ID3_ (Iterative Dichotomiser 3) was developed in 1986 by Ross Quinlan.
The algorithm creates a multiway tree, finding for each node (i.e. in
a greedy manner) the categorical feature that will yield the largest
information gain for categorical targets. Trees are grown to their
maximum size and then a pruning step is usually applied to improve the
ability of the tree to generalize to unseen data.
C4.5 is the successor to ID3 and removed the restriction that features
must be categorical by dynamically defining a discrete attribute (based
on numerical variables) that partitions the continuous attribute value
into a discrete set of intervals. C4.5 converts the trained trees
(i.e. the output of the ID3 algorithm) into sets of if-then rules.
The accuracy of each rule is then evaluated to determine the order
in which they should be applied. Pruning is done by removing a rule's
precondition if the accuracy of the rule improves without it.
  C5.0 is Quinlan's latest version, released under a proprietary license.
  It uses less memory and builds smaller rulesets than C4.5 while being
  more accurate.
CART (Classification and Regression Trees) is very similar to C4.5, but
it differs in that it supports numerical target variables (regression) and
does not compute rule sets. CART constructs binary trees using the feature
and threshold that yield the largest information gain at each node.
scikit-learn uses an optimized version of the CART algorithm; however, the
scikit-learn implementation does not support categorical variables for now.
.. _ID3: https://en.wikipedia.org/wiki/ID3_algorithm
.. _tree_mathematical_formulation:
Mathematical formulation
========================
Given training vectors :math:`x_i \in R^n`, :math:`i = 1, \ldots, l`, and a label vector
:math:`y \in R^l`, a decision tree recursively partitions the feature space
such that the samples with the same labels or similar target values are grouped
together.
Let the data at node :math:`m` be represented by :math:`Q_m` with :math:`n_m`
samples. For each candidate split :math:`\theta = (j, t_m)` consisting of a
feature :math:`j` and threshold :math:`t_m`, partition the data into
:math:`Q_m^{left}(\theta)` and :math:`Q_m^{right}(\theta)` subsets
.. math::
Q_m^{left}(\theta) = \{(x, y) | x_j \leq t_m\}
Q_m^{right}(\theta) = Q_m \setminus Q_m^{left}(\theta)
The quality of a candidate split of node :math:`m` is then computed using an
impurity function or loss function :math:`H()`, the choice of which depends on
the task being solved (classification or regression)
.. math::
G(Q_m, \theta) = \frac{n_m^{left}}{n_m} H(Q_m^{left}(\theta))
+ \frac{n_m^{right}}{n_m} H(Q_m^{right}(\theta))
Select the parameters that minimize the impurity
.. math::
\theta^* = \operatorname{argmin}_\theta G(Q_m, \theta)
Recurse for subsets :math:`Q_m^{left}(\theta^*)` and
:math:`Q_m^{right}(\theta^*)` until the maximum allowable depth is reached,
:math:`n_m < \min_{samples}` or :math:`n_m = 1`.
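The greedy split search above can be sketched in pure NumPy. This is an
illustrative toy implementation (``gini`` and ``best_split`` are hypothetical
helpers, using Gini impurity as :math:`H`), not scikit-learn's optimized CART
code:

```python
import numpy as np

def gini(y):
    """H(Q) = sum_k p_k (1 - p_k) for class labels y."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p * (1 - p))

def best_split(X, y):
    """Exhaustively search for (feature j, threshold t) minimizing G(Q, theta)."""
    n, best = len(y), (None, None, np.inf)
    for j in range(X.shape[1]):
        # Every observed value except the maximum is a candidate threshold.
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            G = (left.sum() / n) * gini(y[left]) \
                + ((~left).sum() / n) * gini(y[~left])
            if G < best[2]:
                best = (j, t, G)
    return best

X = np.array([[0.0], [1.0], [6.0], [7.0]])
y = np.array([0, 0, 1, 1])
j, t, G = best_split(X, y)  # the split x_0 <= 1.0 separates the classes
```

scikit-learn applies this search recursively to each resulting subset until a
stopping condition is met.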
Classification criteria
-----------------------
If a target is a classification outcome taking on values :math:`0, 1, \ldots, K-1`,
for node :math:`m`, let
.. math::
p_{mk} = \frac{1}{n_m} \sum_{y \in Q_m} I(y = k)
be the proportion of class :math:`k` observations in node :math:`m`. If :math:`m` is a
terminal node, `predict_proba` for this region is set to :math:`p_{mk}`.
Common measures of impurity are the following.
Gini:
.. math::
H(Q_m) = \sum_k p_{mk} (1 - p_{mk})
Log Loss or Entropy:
.. math::
H(Q_m) = - \sum_k p_{mk} \log(p_{mk})
.. dropdown:: Shannon entropy
The entropy criterion computes the Shannon entropy of the possible classes. It
takes the class frequencies of the training data points that reached a given
leaf :math:`m` as their probability. Using the **Shannon entropy as tree node
splitting criterion is equivalent to minimizing the log loss** (also known as
cross-entropy and multinomial deviance) between the true labels :math:`y_i`
and the probabilistic predictions :math:`T_k(x_i)` of the tree model :math:`T` for class :math:`k`.
To see this, first recall that the log loss of a tree model :math:`T`
computed on a dataset :math:`D` is defined as follows:
.. math::
\mathrm{LL}(D, T) = -\frac{1}{n} \sum_{(x_i, y_i) \in D} \sum_k I(y_i = k) \log(T_k(x_i))
where :math:`D` is a training dataset of :math:`n` pairs :math:`(x_i, y_i)`.
In a classification tree, the predicted class probabilities within leaf nodes
are constant, that is: for all :math:`(x_i, y_i) \in Q_m`, one has:
:math:`T_k(x_i) = p_{mk}` for each class :math:`k`.
This property makes it possible to rewrite :math:`\mathrm{LL}(D, T)` as the
sum of the Shannon entropies computed for each leaf of :math:`T` weighted by
the number of training data points that reached each leaf:
.. math::
\mathrm{LL}(D, T) = \sum_{m \in T} \frac{n_m}{n} H(Q_m)
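This identity can be checked numerically; the sketch below uses the iris
training set as :math:`D` and recomputes each leaf's entropy from the leaf
memberships returned by ``apply``:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import log_loss
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=2,
                             random_state=0).fit(X, y)

# Left-hand side: log loss of the tree's probabilistic predictions on D.
ll = log_loss(y, clf.predict_proba(X))

# Right-hand side: Shannon entropies of the leaves, weighted by the
# fraction of training samples that reached each leaf.
leaf = clf.apply(X)  # index of the leaf reached by each sample
rhs = 0.0
for m in np.unique(leaf):
    in_leaf = leaf == m
    p = np.bincount(y[in_leaf]) / in_leaf.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    rhs += in_leaf.sum() / len(y) * -(p * np.log(p)).sum()
```

Both sides agree up to floating-point rounding.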
Regression criteria
-------------------
If the target is a continuous value, then for node :math:`m`, common
criteria to minimize when determining locations for future splits are Mean
Squared Error (MSE or L2 error), Poisson deviance as well as Mean Absolute
Error (MAE or L1 error). MSE and Poisson deviance both set the predicted value
of terminal nodes to the learned mean value :math:`\bar{y}_m` of the node
whereas the MAE sets the predicted value of terminal nodes to the median
:math:`median(y)_m`.
Mean Squared Error:
.. math::
\bar{y}_m = \frac{1}{n_m} \sum_{y \in Q_m} y
H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} (y - \bar{y}_m)^2
Mean Poisson deviance:
.. math::
H(Q_m) = \frac{2}{n_m} \sum_{y \in Q_m} (y \log\frac{y}{\bar{y}_m}
- y + \bar{y}_m)
Setting `criterion="poisson"` might be a good choice if your target is a count
or a frequency (count per some unit). In any case, :math:`y \geq 0` is a
necessary condition to use this criterion. Note that it fits much slower than
the MSE criterion. For performance reasons the actual implementation minimizes
the half mean Poisson deviance, i.e. the mean Poisson deviance divided by 2.
Mean Absolute Error:
.. math::
median(y)_m = \underset{y \in Q_m}{\mathrm{median}}(y)
H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} |y - median(y)_m|
Note that it fits much slower than the MSE criterion.
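Switching between these criteria only requires changing the ``criterion``
parameter of :class:`DecisionTreeRegressor` (``"squared_error"``,
``"absolute_error"`` or ``"poisson"``); a sketch on synthetic count data:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative non-negative count target, as required by "poisson".
rng = np.random.RandomState(0)
X = rng.uniform(0, 3, size=(200, 1))
y = rng.poisson(lam=np.exp(X[:, 0]))

reg = DecisionTreeRegressor(criterion="poisson", max_depth=3,
                            random_state=0).fit(X, y)
pred = reg.predict(X)  # leaf-wise means, hence non-negative here
```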
.. _tree_missing_value_support:
Missing Values Support
======================
:class:`DecisionTreeClassifier` and :class:`DecisionTreeRegressor`
have built-in support for missing values using `splitter='best'`, where
the splits are determined in a greedy fashion.
:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` have built-in
support for missing values for `splitter='random'`, where the splits
are determined randomly. For more details on how the splitter differs on
non-missing values, see the :ref:`Forest section <forest>`.
The criteria supported when there are missing values are
`'gini'`, `'entropy'`, or `'log_loss'` for classification, or
`'squared_error'`, `'friedman_mse'`, or `'poisson'` for regression.
First we will describe how :class:`DecisionTreeClassifier` and
:class:`DecisionTreeRegressor` handle missing values in the data.
For each potential threshold on the non-missing data, the splitter will evaluate
the split with all the missing values going to the left node or the right node.
Decisions are made as follows:
- By default when predicting, the samples with missing values are classified
with the class used in the split found during training::
>>> from sklearn.tree import DecisionTreeClassifier
>>> import numpy as np
>>> X = np.array([0, 1, 6, np.nan]).reshape(-1, 1)
>>> y = [0, 0, 1, 1]
>>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)
>>> tree.predict(X)
array([0, 0, 1, 1])
- If the criterion evaluation is the same for both nodes,
then the tie for missing value at predict time is broken by going to the
right node. The splitter also checks the split where all the missing
values go to one child and non-missing values go to the other::
>>> from sklearn.tree import DecisionTreeClassifier
>>> import numpy as np
>>> X = np.array([np.nan, -1, np.nan, 1]).reshape(-1, 1)
>>> y = [0, 0, 1, 1]
>>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)
>>> X_test = np.array([np.nan]).reshape(-1, 1)
>>> tree.predict(X_test)
array([1])
- If no missing values are seen during training for a given feature, then during
prediction missing values are mapped to the child with the most samples::
>>> from sklearn.tree import DecisionTreeClassifier
>>> import numpy as np
>>> X = np.array([0, 1, 2, 3]).reshape(-1, 1)
>>> y = [0, 1, 1, 1]
>>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)
>>> X_test = np.array([np.nan]).reshape(-1, 1)
>>> tree.predict(X_test)
array([1])
:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` handle missing values
in a slightly different way. When splitting a node, a random threshold will be chosen
to split the non-missing values on. Then the non-missing values will be sent to the
left and right child based on the randomly selected threshold, while the missing
values will also be randomly sent to the left or right child. This is repeated for
every feature considered at each split. The best split among these is chosen.
During prediction, the treatment of missing values is the same as that of the
decision tree:
- By default when predicting, the samples with missing values are classified
with the class used in the split found during training.
- If no missing values are seen during training for a given feature, then during
prediction missing values are mapped to the child with the most samples.
.. _minimal_cost_complexity_pruning:
Minimal Cost-Complexity Pruning
===============================
Minimal cost-complexity pruning is an algorithm used to prune a tree to avoid
over-fitting, described in Chapter 3 of [BRE]_. This algorithm is parameterized
by :math:`\alpha\ge0` known as the complexity parameter. The complexity
parameter is used to define the cost-complexity measure, :math:`R_\alpha(T)` of
a given tree :math:`T`:
.. math::
R_\alpha(T) = R(T) + \alpha|\widetilde{T}|
where :math:`|\widetilde{T}|` is the number of terminal nodes in :math:`T` and :math:`R(T)`
is traditionally defined as the total misclassification rate of the terminal
nodes. Alternatively, scikit-learn uses the total sample weighted impurity of
the terminal nodes for :math:`R(T)`. As shown above, the impurity of a node
depends on the criterion. Minimal cost-complexity pruning finds the subtree of
:math:`T` that minimizes :math:`R_\alpha(T)`.
The cost complexity measure of a single node is
:math:`R_\alpha(t)=R(t)+\alpha`. The branch, :math:`T_t`, is defined to be a
tree where node :math:`t` is its root. In general, the impurity of a node
is greater than the sum of impurities of its terminal nodes,
:math:`R(T_t)<R(t)`. However, the cost complexity measure of a node,
:math:`t`, and its branch, :math:`T_t`, can be equal depending on
:math:`\alpha`. We define the effective :math:`\alpha` of a node to be the
value where they are equal, :math:`R_\alpha(T_t)=R_\alpha(t)` or
:math:`\alpha_{eff}(t)=\frac{R(t)-R(T_t)}{|\widetilde{T}_t|-1}`. A non-terminal node
with the smallest value of :math:`\alpha_{eff}` is the weakest link and will
be pruned. This process stops when the pruned tree's minimal
:math:`\alpha_{eff}` is greater than the ``ccp_alpha`` parameter.
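The effective alphas along the pruning path can be inspected with
:meth:`~sklearn.tree.DecisionTreeClassifier.cost_complexity_pruning_path`; in
this sketch the choice of ``ccp_alphas[-2]`` (the second-largest effective
alpha) is arbitrary:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Effective alphas of the weakest links (in increasing order), and the total
# leaf impurity of the corresponding pruned subtree at each step.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
ccp_alphas, impurities = path.ccp_alphas, path.impurities

full = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0,
                                ccp_alpha=ccp_alphas[-2]).fit(X, y)
```

In practice, ``ccp_alpha`` is usually selected by cross-validation over the
values returned in ``ccp_alphas``.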
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_tree_plot_cost_complexity_pruning.py`
.. rubric:: References
.. [BRE] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification
and Regression Trees. Wadsworth, Belmont, CA, 1984.
* https://en.wikipedia.org/wiki/Decision_tree_learning
* https://en.wikipedia.org/wiki/Predictive_analytics
* J.R. Quinlan. C4.5: Programs for Machine Learning. Morgan
  Kaufmann, 1993.
* T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical
Learning, Springer, 2009. | scikit-learn | tree Decision Trees currentmodule sklearn tree Decision Trees DTs are a non parametric supervised learning method used for ref classification tree classification and ref regression tree regression The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features A tree can be seen as a piecewise constant approximation For instance in the example below decision trees learn from data to approximate a sine curve with a set of if then else decision rules The deeper the tree the more complex the decision rules and the fitter the model figure auto examples tree images sphx glr plot tree regression 001 png target auto examples tree plot tree regression html scale 75 align center Some advantages of decision trees are Simple to understand and to interpret Trees can be visualized Requires little data preparation Other techniques often require data normalization dummy variables need to be created and blank values to be removed Some tree and algorithm combinations support ref missing values tree missing value support The cost of using the tree i e predicting data is logarithmic in the number of data points used to train the tree Able to handle both numerical and categorical data However the scikit learn implementation does not support categorical variables for now Other techniques are usually specialized in analyzing datasets that have only one type of variable See ref algorithms tree algorithms for more information Able to handle multi output problems Uses a white box model If a given situation is observable in a model the explanation for the condition is easily explained by boolean logic By contrast in a black box model e g in an artificial neural network results may be more difficult to interpret Possible to validate a model using statistical tests That makes it possible to account for the reliability of the model Performs well even if its assumptions are 
somewhat violated by the true model from which the data were generated The disadvantages of decision trees include Decision tree learners can create over complex trees that do not generalize the data well This is called overfitting Mechanisms such as pruning setting the minimum number of samples required at a leaf node or setting the maximum depth of the tree are necessary to avoid this problem Decision trees can be unstable because small variations in the data might result in a completely different tree being generated This problem is mitigated by using decision trees within an ensemble Predictions of decision trees are neither smooth nor continuous but piecewise constant approximations as seen in the above figure Therefore they are not good at extrapolation The problem of learning an optimal decision tree is known to be NP complete under several aspects of optimality and even for simple concepts Consequently practical decision tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node Such algorithms cannot guarantee to return the globally optimal decision tree This can be mitigated by training multiple trees in an ensemble learner where the features and samples are randomly sampled with replacement There are concepts that are hard to learn because decision trees do not express them easily such as XOR parity or multiplexer problems Decision tree learners create biased trees if some classes dominate It is therefore recommended to balance the dataset prior to fitting with the decision tree tree classification Classification class DecisionTreeClassifier is a class capable of performing multi class classification on a dataset As with other classifiers class DecisionTreeClassifier takes as input two arrays an array X sparse or dense of shape n samples n features holding the training samples and an array Y of integer values shape n samples holding the class labels for the training 
samples from sklearn import tree X 0 0 1 1 Y 0 1 clf tree DecisionTreeClassifier clf clf fit X Y After being fitted the model can then be used to predict the class of samples clf predict 2 2 array 1 In case that there are multiple classes with the same and highest probability the classifier will predict the class with the lowest index amongst those classes As an alternative to outputting a specific class the probability of each class can be predicted which is the fraction of training samples of the class in a leaf clf predict proba 2 2 array 0 1 class DecisionTreeClassifier is capable of both binary where the labels are 1 1 classification and multiclass where the labels are 0 K 1 classification Using the Iris dataset we can construct a tree as follows from sklearn datasets import load iris from sklearn import tree iris load iris X y iris data iris target clf tree DecisionTreeClassifier clf clf fit X y Once trained you can plot the tree with the func plot tree function tree plot tree clf figure auto examples tree images sphx glr plot iris dtc 002 png target auto examples tree plot iris dtc html scale 75 align center dropdown Alternative ways to export trees We can also export the tree in Graphviz https www graphviz org format using the func export graphviz exporter If you use the conda https conda io package manager the graphviz binaries and the python package can be installed with conda install python graphviz Alternatively binaries for graphviz can be downloaded from the graphviz project homepage and the Python wrapper installed from pypi with pip install graphviz Below is an example graphviz export of the above tree trained on the entire iris dataset the results are saved in an output file iris pdf import graphviz doctest SKIP dot data tree export graphviz clf out file None doctest SKIP graph graphviz Source dot data doctest SKIP graph render iris doctest SKIP The func export graphviz exporter also supports a variety of aesthetic options including coloring nodes 
by their class or value for regression and using explicit variable and class names if desired Jupyter notebooks also render these plots inline automatically dot data tree export graphviz clf out file None doctest SKIP feature names iris feature names doctest SKIP class names iris target names doctest SKIP filled True rounded True doctest SKIP special characters True doctest SKIP graph graphviz Source dot data doctest SKIP graph doctest SKIP only html figure images iris svg align center only latex figure images iris pdf align center figure auto examples tree images sphx glr plot iris dtc 001 png target auto examples tree plot iris dtc html align center scale 75 Alternatively the tree can also be exported in textual format with the function func export text This method doesn t require the installation of external libraries and is more compact from sklearn datasets import load iris from sklearn tree import DecisionTreeClassifier from sklearn tree import export text iris load iris decision tree DecisionTreeClassifier random state 0 max depth 2 decision tree decision tree fit iris data iris target r export text decision tree feature names iris feature names print r petal width cm 0 80 class 0 petal width cm 0 80 petal width cm 1 75 class 1 petal width cm 1 75 class 2 BLANKLINE rubric Examples ref sphx glr auto examples tree plot iris dtc py ref sphx glr auto examples tree plot unveil tree structure py tree regression Regression figure auto examples tree images sphx glr plot tree regression 001 png target auto examples tree plot tree regression html scale 75 align center Decision trees can also be applied to regression problems using the class DecisionTreeRegressor class As in the classification setting the fit method will take as argument arrays X and y only that in this case y is expected to have floating point values instead of integer values from sklearn import tree X 0 0 2 2 y 0 5 2 5 clf tree DecisionTreeRegressor clf clf fit X y clf predict 1 1 array 0 5 rubric 
Examples ref sphx glr auto examples tree plot tree regression py tree multioutput Multi output problems A multi output problem is a supervised learning problem with several outputs to predict that is when Y is a 2d array of shape n samples n outputs When there is no correlation between the outputs a very simple way to solve this kind of problem is to build n independent models i e one for each output and then to use those models to independently predict each one of the n outputs However because it is likely that the output values related to the same input are themselves correlated an often better way is to build a single model capable of predicting simultaneously all n outputs First it requires lower training time since only a single estimator is built Second the generalization accuracy of the resulting estimator may often be increased With regard to decision trees this strategy can readily be used to support multi output problems This requires the following changes Store n output values in leaves instead of 1 Use splitting criteria that compute the average reduction across all n outputs This module offers support for multi output problems by implementing this strategy in both class DecisionTreeClassifier and class DecisionTreeRegressor If a decision tree is fit on an output array Y of shape n samples n outputs then the resulting estimator will Output n output values upon predict Output a list of n output arrays of class probabilities upon predict proba The use of multi output trees for regression is demonstrated in ref sphx glr auto examples tree plot tree regression py In this example the input X is a single real value and the outputs Y are the sine and cosine of X figure auto examples tree images sphx glr plot tree regression 002 png target auto examples tree plot tree regression html scale 75 align center The use of multi output trees for classification is demonstrated in ref sphx glr auto examples miscellaneous plot multioutput face completion py In this 
example the inputs X are the pixels of the upper half of faces and the outputs Y are the pixels of the lower half of those faces figure auto examples miscellaneous images sphx glr plot multioutput face completion 001 png target auto examples miscellaneous plot multioutput face completion html scale 75 align center rubric Examples ref sphx glr auto examples miscellaneous plot multioutput face completion py rubric References M Dumont et al Fast multi class image annotation with random subwindows and multiple output randomized trees http www montefiore ulg ac be services stochastic pubs 2009 DMWG09 dumont visapp09 shortpaper pdf International Conference on Computer Vision Theory and Applications 2009 tree complexity Complexity In general the run time cost to construct a balanced binary tree is math O n samples n features log n samples and query time math O log n samples Although the tree construction algorithm attempts to generate balanced trees they will not always be balanced Assuming that the subtrees remain approximately balanced the cost at each node consists of searching through math O n features to find the feature that offers the largest reduction in the impurity criterion e g log loss which is equivalent to an information gain This has a cost of math O n features n samples log n samples at each node leading to a total cost over the entire trees by summing the cost at each node of math O n features n samples 2 log n samples Tips on practical use Decision trees tend to overfit on data with a large number of features Getting the right ratio of samples to number of features is important since a tree with few samples in high dimensional space is very likely to overfit Consider performing dimensionality reduction ref PCA PCA ref ICA ICA or ref feature selection beforehand to give your tree a better chance of finding features that are discriminative ref sphx glr auto examples tree plot unveil tree structure py will help in gaining more insights about how the 
decision tree makes predictions which is important for understanding the important features in the data Visualize your tree as you are training by using the export function Use max depth 3 as an initial tree depth to get a feel for how the tree is fitting to your data and then increase the depth Remember that the number of samples required to populate the tree doubles for each additional level the tree grows to Use max depth to control the size of the tree to prevent overfitting Use min samples split or min samples leaf to ensure that multiple samples inform every decision in the tree by controlling which splits will be considered A very small number will usually mean the tree will overfit whereas a large number will prevent the tree from learning the data Try min samples leaf 5 as an initial value If the sample size varies greatly a float number can be used as percentage in these two parameters While min samples split can create arbitrarily small leaves min samples leaf guarantees that each leaf has a minimum size avoiding low variance over fit leaf nodes in regression problems For classification with few classes min samples leaf 1 is often the best choice Note that min samples split considers samples directly and independent of sample weight if provided e g a node with m weighted samples is still treated as having exactly m samples Consider min weight fraction leaf or min impurity decrease if accounting for sample weights is required at splits Balance your dataset before training to prevent the tree from being biased toward the classes that are dominant Class balancing can be done by sampling an equal number of samples from each class or preferably by normalizing the sum of the sample weights sample weight for each class to the same value Also note that weight based pre pruning criteria such as min weight fraction leaf will then be less biased toward dominant classes than criteria that are not aware of the sample weights like min samples leaf If the samples are 
weighted it will be easier to optimize the tree structure using weight based pre pruning criterion such as min weight fraction leaf which ensure that leaf nodes contain at least a fraction of the overall sum of the sample weights All decision trees use np float32 arrays internally If training data is not in this format a copy of the dataset will be made If the input matrix X is very sparse it is recommended to convert to sparse csc matrix before calling fit and sparse csr matrix before calling predict Training time can be orders of magnitude faster for a sparse matrix input compared to a dense matrix when features have zero values in most of the samples tree algorithms Tree algorithms ID3 C4 5 C5 0 and CART What are all the various decision tree algorithms and how do they differ from each other Which one is implemented in scikit learn dropdown Various decision tree algorithms ID3 Iterative Dichotomiser 3 was developed in 1986 by Ross Quinlan The algorithm creates a multiway tree finding for each node i e in a greedy manner the categorical feature that will yield the largest information gain for categorical targets Trees are grown to their maximum size and then a pruning step is usually applied to improve the ability of the tree to generalize to unseen data C4 5 is the successor to ID3 and removed the restriction that features must be categorical by dynamically defining a discrete attribute based on numerical variables that partitions the continuous attribute value into a discrete set of intervals C4 5 converts the trained trees i e the output of the ID3 algorithm into sets of if then rules The accuracy of each rule is then evaluated to determine the order in which they should be applied Pruning is done by removing a rule s precondition if the accuracy of the rule improves without it C5 0 is Quinlan s latest version release under a proprietary license It uses less memory and builds smaller rulesets than C4 5 while being more accurate CART Classification and 
Regression Trees is very similar to C4 5 but it differs in that it supports numerical target variables regression and does not compute rule sets CART constructs binary trees using the feature and threshold that yield the largest information gain at each node scikit learn uses an optimized version of the CART algorithm however the scikit learn implementation does not support categorical variables for now ID3 https en wikipedia org wiki ID3 algorithm tree mathematical formulation Mathematical formulation Given training vectors math x i in R n i 1 l and a label vector math y in R l a decision tree recursively partitions the feature space such that the samples with the same labels or similar target values are grouped together Let the data at node math m be represented by math Q m with math n m samples For each candidate split math theta j t m consisting of a feature math j and threshold math t m partition the data into math Q m left theta and math Q m right theta subsets math Q m left theta x y x j leq t m Q m right theta Q m setminus Q m left theta The quality of a candidate split of node math m is then computed using an impurity function or loss function math H the choice of which depends on the task being solved classification or regression math G Q m theta frac n m left n m H Q m left theta frac n m right n m H Q m right theta Select the parameters that minimises the impurity math theta operatorname argmin theta G Q m theta Recurse for subsets math Q m left theta and math Q m right theta until the maximum allowable depth is reached math n m min samples or math n m 1 Classification criteria If a target is a classification outcome taking on values 0 1 K 1 for node math m let math p mk frac 1 n m sum y in Q m I y k be the proportion of class k observations in node math m If math m is a terminal node predict proba for this region is set to math p mk Common measures of impurity are the following Gini math H Q m sum k p mk 1 p mk Log Loss or Entropy math H Q m sum k p mk 
log p mk dropdown Shannon entropy The entropy criterion computes the Shannon entropy of the possible classes It takes the class frequencies of the training data points that reached a given leaf math m as their probability Using the Shannon entropy as tree node splitting criterion is equivalent to minimizing the log loss also known as cross entropy and multinomial deviance between the true labels math y i and the probabilistic predictions math T k x i of the tree model math T for class math k To see this first recall that the log loss of a tree model math T computed on a dataset math D is defined as follows math mathrm LL D T frac 1 n sum x i y i in D sum k I y i k log T k x i where math D is a training dataset of math n pairs math x i y i In a classification tree the predicted class probabilities within leaf nodes are constant that is for all math x i y i in Q m one has math T k x i p mk for each class math k This property makes it possible to rewrite math mathrm LL D T as the sum of the Shannon entropies computed for each leaf of math T weighted by the number of training data points that reached each leaf math mathrm LL D T sum m in T frac n m n H Q m Regression criteria If the target is a continuous value then for node math m common criteria to minimize as for determining locations for future splits are Mean Squared Error MSE or L2 error Poisson deviance as well as Mean Absolute Error MAE or L1 error MSE and Poisson deviance both set the predicted value of terminal nodes to the learned mean value math bar y m of the node whereas the MAE sets the predicted value of terminal nodes to the median math median y m Mean Squared Error math bar y m frac 1 n m sum y in Q m y H Q m frac 1 n m sum y in Q m y bar y m 2 Mean Poisson deviance math H Q m frac 2 n m sum y in Q m y log frac y bar y m y bar y m Setting criterion poisson might be a good choice if your target is a count or a frequency count per some unit In any case math y 0 is a necessary condition to use this 
criterion. Note that it fits much slower than the MSE criterion. For
performance reasons the actual implementation minimizes the half mean Poisson
deviance, i.e. the mean Poisson deviance divided by 2.

- Mean Absolute Error:

  .. math::

      median(y)_m = \underset{y \in Q_m}{\mathrm{median}}(y)

      H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} |y - median(y)_m|

  Note that it fits much slower than the MSE criterion.

.. _tree_missing_value_support:

Missing Values Support
======================

:class:`DecisionTreeClassifier` and :class:`DecisionTreeRegressor`
have built-in support for missing values using ``splitter='best'``, where
the splits are determined in a greedy fashion.
:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` have built-in
support for missing values for ``splitter='random'``, where the splits
are determined randomly. For more details on how the splitter differs on
non-missing values, see the :ref:`Forest <forest>` section.

The criteria supported when there are missing values are ``'gini'``,
``'entropy'``, or ``'log_loss'`` for classification or ``'squared_error'``,
``'friedman_mse'``, or ``'poisson'`` for regression.

First we will describe how :class:`DecisionTreeClassifier` and
:class:`DecisionTreeRegressor` handle missing values in the data.

For each potential threshold on the non-missing data, the splitter will
evaluate the split with all the missing values going to the left node or the
right node.

Decisions are made as follows:

- By default when predicting, the samples with missing values are classified
  with the class used in the split found during training::

    >>> from sklearn.tree import DecisionTreeClassifier
    >>> import numpy as np

    >>> X = np.array([0, 1, 6, np.nan]).reshape(-1, 1)
    >>> y = [0, 0, 1, 1]

    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    >>> tree.predict(X)
    array([0, 0, 1, 1])

- If the criterion evaluation is the same for both nodes, then the tie for
  missing values at predict time is broken by going to the right node. The
  splitter also checks the split where all the missing values go to one child
  and non-missing values go to the other::

    >>> from sklearn.tree import DecisionTreeClassifier
    >>> import numpy as np

    >>> X = np.array([np.nan, -1, np.nan, 1]).reshape(-1, 1)
    >>> y = [0, 0, 1, 1]

    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)

    >>> X_test = np.array([np.nan]).reshape(-1, 1)
    >>> tree.predict(X_test)
    array([1])

- If no missing values are seen during training for a given feature, then
  during prediction missing values are mapped to the child with the most
  samples::

    >>> from sklearn.tree import DecisionTreeClassifier
    >>> import numpy as np

    >>> X = np.array([0, 1, 2, 3]).reshape(-1, 1)
    >>> y = [0, 1, 1, 1]

    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)

    >>> X_test = np.array([np.nan]).reshape(-1, 1)
    >>> tree.predict(X_test)
    array([1])

:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` handle missing
values in a slightly different way. When splitting a node, a random threshold
will be chosen to split the non-missing values on. Then the non-missing values
will be sent to the left and right child based on the randomly selected
threshold, while the missing values will also be randomly sent to the left or
right child. This is repeated for every feature considered at each split. The
best split among these is chosen.

During prediction, the treatment of missing values is the same as that of the
decision tree:

- By default when predicting, the samples with missing values are classified
  with the class used in the split found during training.

- If no missing values are seen during training for a given feature, then
  during prediction missing values are mapped to the child with the most
  samples.

.. _minimal_cost_complexity_pruning:

Minimal Cost-Complexity Pruning
===============================

Minimal cost-complexity pruning is an algorithm used to prune a tree to avoid
over-fitting, described in Chapter 3 of [BRE]_. This algorithm is
parameterized by :math:`\alpha\ge0`, known as the complexity parameter. The
complexity parameter is used to define the cost-complexity measure,
:math:`R_\alpha(T)`, of a given tree :math:`T`:

.. math::

    R_\alpha(T) = R(T) + \alpha|\widetilde{T}|

where :math:`|\widetilde{T}|` is the number of terminal nodes in :math:`T` and
:math:`R(T)` is traditionally defined as the total misclassification rate of
the terminal nodes. Alternatively, scikit-learn uses the total sample weighted
impurity of the terminal nodes for :math:`R(T)`. As shown above, the impurity
of a node depends on the criterion. Minimal cost-complexity pruning finds the
subtree of :math:`T` that minimizes :math:`R_\alpha(T)`.

The cost-complexity measure of a single node is
:math:`R_\alpha(t) = R(t) + \alpha`. The branch, :math:`T_t`, is defined to be
a tree where node :math:`t` is its root. In general, the impurity of a node is
greater than the sum of impurities of its terminal nodes,
:math:`R(T_t) < R(t)`. However, the cost-complexity measure of a node,
:math:`t`, and its branch, :math:`T_t`, can be equal depending on
:math:`\alpha`. We define the effective :math:`\alpha` of a node to be the
value where they are equal, :math:`R_\alpha(T_t) = R_\alpha(t)` or
:math:`\alpha_{eff}(t) = \frac{R(t) - R(T_t)}{|T| - 1}`. A non-terminal node
with the smallest value of :math:`\alpha_{eff}` is the weakest link and will
be pruned. This process stops when the pruned tree's minimal
:math:`\alpha_{eff}` is greater than the ``ccp_alpha`` parameter.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_tree_plot_cost_complexity_pruning.py`

.. rubric:: References

.. [BRE] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification
   and Regression Trees. Wadsworth, Belmont, CA, 1984.

* https://en.wikipedia.org/wiki/Decision_tree_learning

* https://en.wikipedia.org/wiki/Predictive_analytics

* J.R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.

* T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical Learning,
  Springer, 2009.
.. currentmodule:: sklearn
.. _model_evaluation:
===========================================================
Metrics and scoring: quantifying the quality of predictions
===========================================================
.. _which_scoring_function:
Which scoring function should I use?
====================================
Before we take a closer look into the details of the many scores and
:term:`evaluation metrics`, we want to give some guidance, inspired by statistical
decision theory, on the choice of **scoring functions** for **supervised learning**,
see [Gneiting2009]_:
- *Which scoring function should I use?*
- *Which scoring function is a good one for my task?*
In a nutshell, if the scoring function is given, e.g. in a Kaggle competition
or in a business context, use that one.
If you are free to choose, it starts by considering the ultimate goal and application
of the prediction. It is useful to distinguish two steps:
* Predicting
* Decision making
**Predicting:**
Usually, the response variable :math:`Y` is a random variable, in the sense that there
is *no deterministic* function :math:`Y = g(X)` of the features :math:`X`.
Instead, there is a probability distribution :math:`F` of :math:`Y`.
One can aim to predict the whole distribution, known as *probabilistic prediction*,
or---more the focus of scikit-learn---issue a *point prediction* (or point forecast)
by choosing a property or functional of that distribution :math:`F`.
Typical examples are the mean (expected value), the median or a quantile of the
response variable :math:`Y` (conditionally on :math:`X`).
Once that is settled, use a **strictly consistent** scoring function for that
(target) functional, see [Gneiting2009]_.
This means using a scoring function that is aligned with *measuring the distance
between predictions* `y_pred` *and the true target functional using observations of*
:math:`Y`, i.e. `y_true`.
For classification **strictly proper scoring rules**, see
`Wikipedia entry for Scoring rule <https://en.wikipedia.org/wiki/Scoring_rule>`_
and [Gneiting2007]_, coincide with strictly consistent scoring functions.
The table further below provides examples.
One could say that consistent scoring functions act as *truth serum* in that
they guarantee *"that truth telling [. . .] is an optimal strategy in
expectation"* [Gneiting2014]_.
Once a strictly consistent scoring function is chosen, it is best used for both: as
loss function for model training and as metric/score in model evaluation and model
comparison.
Note that for regressors, the prediction is done with :term:`predict` while for
classifiers it is usually :term:`predict_proba`.
**Decision Making:**
The most common decisions are done on binary classification tasks, where the result of
:term:`predict_proba` is turned into a single outcome, e.g., from the predicted
probability of rain a decision is made on how to act (whether to take mitigating
measures like an umbrella or not).
For classifiers, this is what :term:`predict` returns.
See also :ref:`TunedThresholdClassifierCV`.
There are many scoring functions which measure different aspects of such a
decision, most of them are covered with or derived from the
:func:`metrics.confusion_matrix`.
**List of strictly consistent scoring functions:**
Here, we list some of the most relevant statistical functionals and corresponding
strictly consistent scoring functions for tasks in practice. Note that the list is not
complete and that there are more of them.
For further criteria on how to select a specific one, see [Fissler2022]_.
================== =================================================== ==================== =================================
functional scoring or loss function response `y` prediction
================== =================================================== ==================== =================================
**Classification**
mean :ref:`Brier score <brier_score_loss>` :sup:`1` multi-class ``predict_proba``
mean :ref:`log loss <log_loss>` multi-class ``predict_proba``
mode :ref:`zero-one loss <zero_one_loss>` :sup:`2` multi-class ``predict``, categorical
**Regression**
mean :ref:`squared error <mean_squared_error>` :sup:`3` all reals ``predict``, all reals
mean :ref:`Poisson deviance <mean_tweedie_deviance>` non-negative ``predict``, strictly positive
mean :ref:`Gamma deviance <mean_tweedie_deviance>` strictly positive ``predict``, strictly positive
mean :ref:`Tweedie deviance <mean_tweedie_deviance>` depends on ``power`` ``predict``, depends on ``power``
median :ref:`absolute error <mean_absolute_error>` all reals ``predict``, all reals
quantile :ref:`pinball loss <pinball_loss>` all reals ``predict``, all reals
mode no consistent one exists reals
================== =================================================== ==================== =================================
:sup:`1` The Brier score is just a different name for the squared error in case of
classification.
:sup:`2` The zero-one loss is only consistent but not strictly consistent for the mode.
The zero-one loss is equivalent to one minus the accuracy score, meaning it gives
different score values but the same ranking.
:sup:`3` R² gives the same ranking as squared error.
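Footnote 1 above can be checked directly: for binary targets, the Brier score
is numerically the squared error between the predicted probability and the 0/1
outcome. A minimal sketch with made-up probabilities:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.9, 0.8, 0.3])  # predicted P(y=1); illustrative values

# The Brier score equals the mean squared error between the predicted
# probability and the 0/1 outcome.
assert np.isclose(brier_score_loss(y_true, y_prob),
                  np.mean((y_prob - y_true) ** 2))
```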
**Fictitious Example:**
Let's make the above arguments more tangible. Consider a setting in network reliability
engineering, such as maintaining stable internet or Wi-Fi connections.
As provider of the network, you have access to the dataset of log entries of network
connections containing network load over time and many interesting features.
Your goal is to improve the reliability of the connections.
In fact, you promise your customers that on at least 99% of all days there are no
connection discontinuities larger than 1 minute.
Therefore, you are interested in a prediction of the 99% quantile (of longest
connection interruption duration per day) in order to know in advance when to add
more bandwidth and thereby satisfy your customers. So the *target functional* is the
99% quantile. From the table above, you choose the pinball loss as scoring function
(fair enough, not much choice given), for model training (e.g.
`HistGradientBoostingRegressor(loss="quantile", quantile=0.99)`) as well as model
evaluation (`mean_pinball_loss(..., alpha=0.99)` - we apologize for the different
argument names, `quantile` and `alpha`) be it in grid search for finding
hyperparameters or in comparing to other models like
`QuantileRegressor(quantile=0.99)`.
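The strict consistency at play here can be illustrated without any estimator:
among constant predictions, the pinball loss at level :math:`\alpha` is
minimized by the empirical :math:`\alpha`-quantile. A small numpy sketch on
synthetic "interruption durations" (illustrative data only):

```python
import numpy as np

def pinball_loss(y_true, y_pred, alpha):
    """Pinball (quantile) loss, averaged over samples."""
    diff = y_true - y_pred
    return np.mean(np.where(diff >= 0, alpha * diff, (alpha - 1) * diff))

rng = np.random.default_rng(0)
# Skewed synthetic durations standing in for the log data described above
y = rng.exponential(scale=1.0, size=10_000)

# Scan constant predictions; the loss is minimized near the empirical
# 99% quantile, which is exactly the strict-consistency property.
candidates = np.linspace(0.5, 8.0, 200)
losses = [pinball_loss(y, c, alpha=0.99) for c in candidates]
best = candidates[int(np.argmin(losses))]
assert abs(best - np.quantile(y, 0.99)) < 0.2
```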
.. rubric:: References
.. [Gneiting2007] T. Gneiting and A. E. Raftery. :doi:`Strictly Proper
Scoring Rules, Prediction, and Estimation <10.1198/016214506000001437>`
In: Journal of the American Statistical Association 102 (2007),
pp. 359–378.
`link to pdf <https://www.stat.washington.edu/people/raftery/Research/PDF/Gneiting2007jasa.pdf>`_
.. [Gneiting2009] T. Gneiting. :arxiv:`Making and Evaluating Point Forecasts
<0912.0902>`
Journal of the American Statistical Association 106 (2009): 746 - 762.
.. [Gneiting2014] T. Gneiting and M. Katzfuss. :doi:`Probabilistic Forecasting
<10.1146/annurev-statistics-062713-085831>`. In: Annual Review of Statistics and Its Application 1.1 (2014), pp. 125–151.
.. [Fissler2022] T. Fissler, C. Lorentzen and M. Mayer. :arxiv:`Model
Comparison and Calibration Assessment: User Guide for Consistent Scoring
Functions in Machine Learning and Actuarial Practice. <2202.12780>`
.. _scoring_api_overview:
Scoring API overview
====================
There are 3 different APIs for evaluating the quality of a model's
predictions:
* **Estimator score method**: Estimators have a ``score`` method providing a
default evaluation criterion for the problem they are designed to solve.
Most commonly this is :ref:`accuracy <accuracy_score>` for classifiers and the
:ref:`coefficient of determination <r2_score>` (:math:`R^2`) for regressors.
Details for each estimator can be found in its documentation.
* **Scoring parameter**: Model-evaluation tools that use
:ref:`cross-validation <cross_validation>` (such as
:class:`model_selection.GridSearchCV`, :func:`model_selection.validation_curve` and
:class:`linear_model.LogisticRegressionCV`) rely on an internal *scoring* strategy.
This can be specified using the `scoring` parameter of that tool and is discussed
in the section :ref:`scoring_parameter`.
* **Metric functions**: The :mod:`sklearn.metrics` module implements functions
assessing prediction error for specific purposes. These metrics are detailed
in sections on :ref:`classification_metrics`,
:ref:`multilabel_ranking_metrics`, :ref:`regression_metrics` and
:ref:`clustering_metrics`.
Finally, :ref:`dummy_estimators` are useful to get a baseline
value of those metrics for random predictions.
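As a quick sketch of how the three APIs relate (using a
:class:`~sklearn.dummy.DummyClassifier` purely for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=40, random_state=0)
clf = DummyClassifier(strategy="most_frequent").fit(X, y)

score_method = clf.score(X, y)                     # 1. estimator score method
cv_scores = cross_val_score(clf, X, y, cv=2,
                            scoring="accuracy")    # 2. scoring parameter
metric_value = accuracy_score(y, clf.predict(X))   # 3. metric function

# For classifiers, the default score method is accuracy, so 1 and 3 agree.
assert score_method == metric_value
```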
.. seealso::
For "pairwise" metrics, between *samples* and not estimators or
predictions, see the :ref:`metrics` section.
.. _scoring_parameter:
The ``scoring`` parameter: defining model evaluation rules
==========================================================
Model selection and evaluation tools that internally use
:ref:`cross-validation <cross_validation>` (such as
:class:`model_selection.GridSearchCV`, :func:`model_selection.validation_curve` and
:class:`linear_model.LogisticRegressionCV`) take a ``scoring`` parameter that
controls what metric they apply to the estimators evaluated.
They can be specified in several ways:
* `None`: the estimator's default evaluation criterion (i.e., the metric used in the
estimator's `score` method) is used.
* :ref:`String name <scoring_string_names>`: common metrics can be passed via a string
name.
* :ref:`Callable <scoring_callable>`: more complex metrics can be passed via a custom
metric callable (e.g., function).
Some tools do also accept multiple metric evaluation. See :ref:`multimetric_scoring`
for details.
.. _scoring_string_names:
String name scorers
-------------------
For the most common use cases, you can designate a scorer object with the
``scoring`` parameter via a string name; the table below shows all possible values.
All scorer objects follow the convention that **higher return values are better
than lower return values**. Thus metrics which measure the distance between
the model and the data, like :func:`metrics.mean_squared_error`, are
available as 'neg_mean_squared_error' which return the negated value
of the metric.
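This sign convention can be verified directly; a small sketch using a dummy
regressor:

```python
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.metrics import get_scorer, mean_squared_error

X, y = make_regression(n_samples=30, random_state=0)
reg = DummyRegressor(strategy="mean").fit(X, y)

scorer = get_scorer("neg_mean_squared_error")
# The scorer negates the error so that higher values mean better models.
assert abs(scorer(reg, X, y) + mean_squared_error(y, reg.predict(X))) < 1e-9
```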
==================================== ============================================== ==================================
Scoring string name Function Comment
==================================== ============================================== ==================================
**Classification**
'accuracy' :func:`metrics.accuracy_score`
'balanced_accuracy' :func:`metrics.balanced_accuracy_score`
'top_k_accuracy' :func:`metrics.top_k_accuracy_score`
'average_precision' :func:`metrics.average_precision_score`
'neg_brier_score' :func:`metrics.brier_score_loss`
'f1' :func:`metrics.f1_score` for binary targets
'f1_micro' :func:`metrics.f1_score` micro-averaged
'f1_macro' :func:`metrics.f1_score` macro-averaged
'f1_weighted' :func:`metrics.f1_score` weighted average
'f1_samples' :func:`metrics.f1_score` by multilabel sample
'neg_log_loss' :func:`metrics.log_loss` requires ``predict_proba`` support
'precision' etc. :func:`metrics.precision_score` suffixes apply as with 'f1'
'recall' etc. :func:`metrics.recall_score` suffixes apply as with 'f1'
'jaccard' etc. :func:`metrics.jaccard_score` suffixes apply as with 'f1'
'roc_auc' :func:`metrics.roc_auc_score`
'roc_auc_ovr' :func:`metrics.roc_auc_score`
'roc_auc_ovo' :func:`metrics.roc_auc_score`
'roc_auc_ovr_weighted' :func:`metrics.roc_auc_score`
'roc_auc_ovo_weighted' :func:`metrics.roc_auc_score`
'd2_log_loss_score' :func:`metrics.d2_log_loss_score`
**Clustering**
'adjusted_mutual_info_score' :func:`metrics.adjusted_mutual_info_score`
'adjusted_rand_score' :func:`metrics.adjusted_rand_score`
'completeness_score' :func:`metrics.completeness_score`
'fowlkes_mallows_score' :func:`metrics.fowlkes_mallows_score`
'homogeneity_score' :func:`metrics.homogeneity_score`
'mutual_info_score' :func:`metrics.mutual_info_score`
'normalized_mutual_info_score' :func:`metrics.normalized_mutual_info_score`
'rand_score' :func:`metrics.rand_score`
'v_measure_score' :func:`metrics.v_measure_score`
**Regression**
'explained_variance' :func:`metrics.explained_variance_score`
'neg_max_error' :func:`metrics.max_error`
'neg_mean_absolute_error' :func:`metrics.mean_absolute_error`
'neg_mean_squared_error' :func:`metrics.mean_squared_error`
'neg_root_mean_squared_error' :func:`metrics.root_mean_squared_error`
'neg_mean_squared_log_error' :func:`metrics.mean_squared_log_error`
'neg_root_mean_squared_log_error' :func:`metrics.root_mean_squared_log_error`
'neg_median_absolute_error' :func:`metrics.median_absolute_error`
'r2' :func:`metrics.r2_score`
'neg_mean_poisson_deviance' :func:`metrics.mean_poisson_deviance`
'neg_mean_gamma_deviance' :func:`metrics.mean_gamma_deviance`
'neg_mean_absolute_percentage_error' :func:`metrics.mean_absolute_percentage_error`
'd2_absolute_error_score' :func:`metrics.d2_absolute_error_score`
==================================== ============================================== ==================================
Usage examples:
>>> from sklearn import svm, datasets
>>> from sklearn.model_selection import cross_val_score
>>> X, y = datasets.load_iris(return_X_y=True)
>>> clf = svm.SVC(random_state=0)
>>> cross_val_score(clf, X, y, cv=5, scoring='recall_macro')
array([0.96..., 0.96..., 0.96..., 0.93..., 1. ])
.. note::
If a wrong scoring name is passed, an ``InvalidParameterError`` is raised.
You can retrieve the names of all available scorers by calling
:func:`~sklearn.metrics.get_scorer_names`.
.. currentmodule:: sklearn.metrics
.. _scoring_callable:
Callable scorers
----------------
For more complex use cases and more flexibility, you can pass a callable to
the `scoring` parameter. This can be done by:
* :ref:`scoring_adapt_metric`
* :ref:`scoring_custom` (most flexible)
.. _scoring_adapt_metric:
Adapting predefined metrics via `make_scorer`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following metric functions are not implemented as named scorers,
sometimes because they require additional parameters, such as
:func:`fbeta_score`. They cannot be passed to the ``scoring``
parameter; instead, their callable needs to be passed to
:func:`make_scorer` together with the value of the user-settable
parameters.
===================================== ========= ==============================================
Function Parameter Example usage
===================================== ========= ==============================================
**Classification**
:func:`metrics.fbeta_score` ``beta`` ``make_scorer(fbeta_score, beta=2)``
**Regression**
:func:`metrics.mean_tweedie_deviance` ``power`` ``make_scorer(mean_tweedie_deviance, power=1.5)``
:func:`metrics.mean_pinball_loss` ``alpha`` ``make_scorer(mean_pinball_loss, alpha=0.95)``
:func:`metrics.d2_tweedie_score` ``power`` ``make_scorer(d2_tweedie_score, power=1.5)``
:func:`metrics.d2_pinball_score` ``alpha`` ``make_scorer(d2_pinball_score, alpha=0.95)``
===================================== ========= ==============================================
One typical use case is to wrap an existing metric function from the library
with non-default values for its parameters, such as the ``beta`` parameter for
the :func:`fbeta_score` function::
>>> from sklearn.metrics import fbeta_score, make_scorer
>>> ftwo_scorer = make_scorer(fbeta_score, beta=2)
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import LinearSVC
>>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]},
... scoring=ftwo_scorer, cv=5)
The module :mod:`sklearn.metrics` also exposes a set of simple functions
measuring a prediction error given ground truth and prediction:
- functions ending with ``_score`` return a value to
maximize, the higher the better.
- functions ending with ``_error``, ``_loss``, or ``_deviance`` return a
value to minimize, the lower the better. When converting
into a scorer object using :func:`make_scorer`, set
the ``greater_is_better`` parameter to ``False`` (``True`` by default; see the
parameter description below).
.. _scoring_custom:
Creating a custom scorer object
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can create your own custom scorer object using
:func:`make_scorer` or for the most flexibility, from scratch. See below for details.
.. dropdown:: Custom scorer objects using `make_scorer`
You can build a completely custom scorer object
from a simple python function using :func:`make_scorer`, which can
take several parameters:
* the python function you want to use (``my_custom_loss_func``
in the example below)
* whether the python function returns a score (``greater_is_better=True``,
the default) or a loss (``greater_is_better=False``). If a loss, the output
of the python function is negated by the scorer object, conforming to
the cross validation convention that scorers return higher values for better models.
* for classification metrics only: whether the python function you provided requires
continuous decision certainties. If the scoring function only accepts probability
estimates (e.g. :func:`metrics.log_loss`), then one needs to set the parameter
`response_method="predict_proba"`. Some scoring
functions do not necessarily require probability estimates but rather non-thresholded
decision values (e.g. :func:`metrics.roc_auc_score`). In this case, one can provide a
list (e.g., `response_method=["decision_function", "predict_proba"]`),
and scorer will use the first available method, in the order given in the list,
to compute the scores.
* any additional parameters of the scoring function, such as ``beta`` or ``labels``.
Here is an example of building custom scorers, and of using the
``greater_is_better`` parameter::
>>> import numpy as np
>>> def my_custom_loss_func(y_true, y_pred):
... diff = np.abs(y_true - y_pred).max()
... return np.log1p(diff)
...
>>> # score will negate the return value of my_custom_loss_func,
>>> # which will be np.log(2), 0.693, given the values for X
>>> # and y defined below.
>>> score = make_scorer(my_custom_loss_func, greater_is_better=False)
>>> X = [[1], [1]]
>>> y = [0, 1]
>>> from sklearn.dummy import DummyClassifier
>>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
>>> clf = clf.fit(X, y)
>>> my_custom_loss_func(y, clf.predict(X))
0.69...
>>> score(clf, X, y)
-0.69...
.. dropdown:: Custom scorer objects from scratch
You can generate even more flexible model scorers by constructing your own
scoring object from scratch, without using the :func:`make_scorer` factory.
For a callable to be a scorer, it needs to meet the protocol specified by
the following two rules:
- It can be called with parameters ``(estimator, X, y)``, where ``estimator``
is the model that should be evaluated, ``X`` is validation data, and ``y`` is
the ground truth target for ``X`` (in the supervised case) or ``None`` (in the
unsupervised case).
- It returns a floating point number that quantifies the
``estimator`` prediction quality on ``X``, with reference to ``y``.
Again, by convention higher numbers are better, so if your scorer
returns loss, that value should be negated.
- Advanced: If it requires extra metadata to be passed to it, it should expose
a ``get_metadata_routing`` method returning the requested metadata. The user
should be able to set the requested metadata via a ``set_score_request``
method. Please see :ref:`User Guide <metadata_routing>` and :ref:`Developer
Guide <sphx_glr_auto_examples_miscellaneous_plot_metadata_routing.py>` for
more details.
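Putting the two rules above together, a from-scratch scorer can be as small as
the following sketch (negated mean absolute error, so that higher is better):

```python
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score

def neg_mae_scorer(estimator, X, y):
    """Scorer protocol: callable as (estimator, X, y), returning a float
    where higher is better -- hence the negation of the error."""
    return -mean_absolute_error(y, estimator.predict(X))

X, y = make_regression(n_samples=40, random_state=0)
scores = cross_val_score(DummyRegressor(), X, y, cv=2, scoring=neg_mae_scorer)
assert all(s <= 0 for s in scores)  # negated losses are non-positive
```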
.. dropdown:: Using custom scorers in functions where n_jobs > 1
While defining the custom scoring function alongside the calling function
should work out of the box with the default joblib backend (loky),
importing it from another module will be a more robust approach and work
independently of the joblib backend.
For example, to use ``n_jobs`` greater than 1 in the example below,
``custom_scoring_function`` function is saved in a user-created module
(``custom_scorer_module.py``) and imported::
>>> from custom_scorer_module import custom_scoring_function # doctest: +SKIP
>>> cross_val_score(model,
... X_train,
... y_train,
... scoring=make_scorer(custom_scoring_function, greater_is_better=False),
... cv=5,
... n_jobs=-1) # doctest: +SKIP
.. _multimetric_scoring:
Using multiple metric evaluation
--------------------------------
Scikit-learn also permits evaluation of multiple metrics in ``GridSearchCV``,
``RandomizedSearchCV`` and ``cross_validate``.
There are three ways to specify multiple scoring metrics for the ``scoring``
parameter:
- As an iterable of string metrics::
>>> scoring = ['accuracy', 'precision']
- As a ``dict`` mapping the scorer name to the scoring function::
>>> from sklearn.metrics import accuracy_score
>>> from sklearn.metrics import make_scorer
>>> scoring = {'accuracy': make_scorer(accuracy_score),
... 'prec': 'precision'}
Note that the dict values can either be scorer functions or one of the
predefined metric strings.
- As a callable that returns a dictionary of scores::
>>> from sklearn.model_selection import cross_validate
>>> from sklearn.metrics import confusion_matrix
>>> # A sample toy binary classification dataset
>>> X, y = datasets.make_classification(n_classes=2, random_state=0)
>>> svm = LinearSVC(random_state=0)
>>> def confusion_matrix_scorer(clf, X, y):
... y_pred = clf.predict(X)
... cm = confusion_matrix(y, y_pred)
... return {'tn': cm[0, 0], 'fp': cm[0, 1],
... 'fn': cm[1, 0], 'tp': cm[1, 1]}
>>> cv_results = cross_validate(svm, X, y, cv=5,
... scoring=confusion_matrix_scorer)
>>> # Getting the test set true positive scores
>>> print(cv_results['test_tp'])
[10 9 8 7 8]
>>> # Getting the test set false negative scores
>>> print(cv_results['test_fn'])
[0 1 2 3 2]
.. _classification_metrics:
Classification metrics
=======================
.. currentmodule:: sklearn.metrics
The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure classification performance.
Some metrics might require probability estimates of the positive class,
confidence values, or binary decisions values.
Most implementations allow each sample to provide a weighted contribution
to the overall score, through the ``sample_weight`` parameter.
Some of these are restricted to the binary classification case:
.. autosummary::
precision_recall_curve
roc_curve
class_likelihood_ratios
det_curve
Others also work in the multiclass case:
.. autosummary::
balanced_accuracy_score
cohen_kappa_score
confusion_matrix
hinge_loss
matthews_corrcoef
roc_auc_score
top_k_accuracy_score
Some also work in the multilabel case:
.. autosummary::
accuracy_score
classification_report
f1_score
fbeta_score
hamming_loss
jaccard_score
log_loss
multilabel_confusion_matrix
precision_recall_fscore_support
precision_score
recall_score
roc_auc_score
zero_one_loss
d2_log_loss_score
And some work with binary and multilabel (but not multiclass) problems:
.. autosummary::
average_precision_score
In the following sub-sections, we will describe each of those functions,
preceded by some notes on common API and metric definition.
.. _average:
From binary to multiclass and multilabel
----------------------------------------
Some metrics are essentially defined for binary classification tasks (e.g.
:func:`f1_score`, :func:`roc_auc_score`). In these cases, by default
only the positive label is evaluated, assuming by default that the positive
class is labelled ``1`` (though this may be configurable through the
``pos_label`` parameter).
In extending a binary metric to multiclass or multilabel problems, the data
is treated as a collection of binary problems, one for each class.
There are then a number of ways to average binary metric calculations across
the set of classes, each of which may be useful in some scenario.
Where available, you should select among these using the ``average`` parameter.
* ``"macro"`` simply calculates the mean of the binary metrics,
giving equal weight to each class. In problems where infrequent classes
are nonetheless important, macro-averaging may be a means of highlighting
their performance. On the other hand, the assumption that all classes are
equally important is often untrue, such that macro-averaging will
over-emphasize the typically low performance on an infrequent class.
* ``"weighted"`` accounts for class imbalance by computing the average of
binary metrics in which each class's score is weighted by its presence in the
true data sample.
* ``"micro"`` gives each sample-class pair an equal contribution to the overall
metric (except as a result of sample-weight). Rather than summing the
metric per class, this sums the dividends and divisors that make up the
per-class metrics to calculate an overall quotient.
Micro-averaging may be preferred in multilabel settings, including
multiclass classification where a majority class is to be ignored.
* ``"samples"`` applies only to multilabel problems. It does not calculate a
per-class measure, instead calculating the metric over the true and predicted
classes for each sample in the evaluation data, and returning their
(``sample_weight``-weighted) average.
* Selecting ``average=None`` will return an array with the score for each
class.
While multiclass data is provided to the metric, like binary targets, as an
array of class labels, multilabel data is specified as an indicator matrix,
in which cell ``[i, j]`` has value 1 if sample ``i`` has label ``j`` and value
0 otherwise.
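The effect of the ``average`` parameter can be seen on a small multiclass
example (illustrative labels only):

```python
from sklearn.metrics import precision_score

y_true = [0, 0, 1, 2]
y_pred = [0, 1, 1, 2]

# Per-class precision: class 0 -> 1/1, class 1 -> 1/2, class 2 -> 1/1
per_class = precision_score(y_true, y_pred, average=None)
macro = precision_score(y_true, y_pred, average="macro")  # unweighted mean
micro = precision_score(y_true, y_pred, average="micro")  # pooled TP / predicted

assert list(per_class) == [1.0, 0.5, 1.0]
assert abs(macro - (1.0 + 0.5 + 1.0) / 3) < 1e-12
assert micro == 0.75
```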
.. _accuracy_score:
Accuracy score
--------------
The :func:`accuracy_score` function computes the
`accuracy <https://en.wikipedia.org/wiki/Accuracy_and_precision>`_, either the fraction
(default) or the count (``normalize=False``) of correct predictions.
In multilabel classification, the function returns the subset accuracy. If
the entire set of predicted labels for a sample strictly matches the true
set of labels, then the subset accuracy is 1.0; otherwise it is 0.0.
If :math:`\hat{y}_i` is the predicted value of
the :math:`i`-th sample and :math:`y_i` is the corresponding true value,
then the fraction of correct predictions over :math:`n_\text{samples}` is
defined as
.. math::
\texttt{accuracy}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i = y_i)
where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_.
>>> import numpy as np
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2.0
In the multilabel case with binary label indicators::
>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_model_selection_plot_permutation_tests_for_classification.py`
for an example of accuracy score usage using permutations of
the dataset.
.. _top_k_accuracy_score:
Top-k accuracy score
--------------------
The :func:`top_k_accuracy_score` function is a generalization of
:func:`accuracy_score`. The difference is that a prediction is considered
correct as long as the true label is associated with one of the ``k`` highest
predicted scores. :func:`accuracy_score` is the special case of `k = 1`.
The function covers the binary and multiclass classification cases but not the
multilabel case.
If :math:`\hat{f}_{i,j}` is the predicted class for the :math:`i`-th sample
corresponding to the :math:`j`-th largest predicted score and :math:`y_i` is the
corresponding true value, then the fraction of correct predictions over
:math:`n_\text{samples}` is defined as
.. math::
\texttt{top-k accuracy}(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=1}^{k} 1(\hat{f}_{i,j} = y_i)
where :math:`k` is the number of guesses allowed and :math:`1(x)` is the
`indicator function <https://en.wikipedia.org/wiki/Indicator_function>`_.
>>> import numpy as np
>>> from sklearn.metrics import top_k_accuracy_score
>>> y_true = np.array([0, 1, 2, 2])
>>> y_score = np.array([[0.5, 0.2, 0.2],
... [0.3, 0.4, 0.2],
... [0.2, 0.4, 0.3],
... [0.7, 0.2, 0.1]])
>>> top_k_accuracy_score(y_true, y_score, k=2)
0.75
>>> # Not normalizing gives the number of "correctly" classified samples
>>> top_k_accuracy_score(y_true, y_score, k=2, normalize=False)
3
.. _balanced_accuracy_score:
Balanced accuracy score
-----------------------
The :func:`balanced_accuracy_score` function computes the `balanced accuracy
<https://en.wikipedia.org/wiki/Accuracy_and_precision>`_, which avoids inflated
performance estimates on imbalanced datasets. It is the macro-average of recall
scores per class or, equivalently, raw accuracy where each sample is weighted
according to the inverse prevalence of its true class.
Thus for balanced datasets, the score is equal to accuracy.
In the binary case, balanced accuracy is equal to the arithmetic mean of
`sensitivity <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`_
(true positive rate) and `specificity
<https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`_ (true negative
rate), or the area under the ROC curve with binary predictions rather than
scores:
.. math::
\texttt{balanced-accuracy} = \frac{1}{2}\left( \frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right )
If the classifier performs equally well on either class, this term reduces to
the conventional accuracy (i.e., the number of correct predictions divided by
the total number of predictions).
In contrast, if the conventional accuracy is above chance only because the
classifier takes advantage of an imbalanced test set, then the balanced
accuracy, as appropriate, will drop to :math:`\frac{1}{n\_classes}`.
The score ranges from 0 to 1, or, when ``adjusted=True`` is used, it is rescaled
to the range :math:`\frac{1}{1 - n\_classes}` to 1, inclusive, with
performance at random scoring 0.
If :math:`y_i` is the true value of the :math:`i`-th sample, and :math:`w_i`
is the corresponding sample weight, then we adjust the sample weight to:
.. math::
\hat{w}_i = \frac{w_i}{\sum_j{1(y_j = y_i) w_j}}
where :math:`1(x)` is the `indicator function <https://en.wikipedia.org/wiki/Indicator_function>`_.
Given predicted :math:`\hat{y}_i` for sample :math:`i`, balanced accuracy is
defined as:
.. math::
\texttt{balanced-accuracy}(y, \hat{y}, w) = \frac{1}{\sum{\hat{w}_i}} \sum_i 1(\hat{y}_i = y_i) \hat{w}_i
With ``adjusted=True``, balanced accuracy reports the relative increase from
:math:`\texttt{balanced-accuracy}(y, \mathbf{0}, w) =
\frac{1}{n\_classes}`. In the binary case, this is also known as
`Youden's J statistic <https://en.wikipedia.org/wiki/Youden%27s_J_statistic>`_,
or *informedness*.
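As a quick sanity check of the macro-recall equivalence, here is a minimal
sketch on a toy imbalanced labeling (the data is purely illustrative):

```python
from sklearn.metrics import balanced_accuracy_score, recall_score

# Imbalanced toy labeling: four samples of class 0, two of class 1.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1]

# Balanced accuracy equals the macro-average of per-class recall:
# recall(class 0) = 4/4, recall(class 1) = 1/2, mean = 0.75.
bal_acc = balanced_accuracy_score(y_true, y_pred)
macro_recall = recall_score(y_true, y_pred, average="macro")
print(bal_acc, macro_recall)  # 0.75 0.75

# With adjusted=True, chance level (1/n_classes = 0.5) is rescaled to 0:
# (0.75 - 0.5) / (1 - 0.5) = 0.5
print(balanced_accuracy_score(y_true, y_pred, adjusted=True))  # 0.5
```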
.. note::
The multiclass definition here seems the most reasonable extension of the
metric used in binary classification, though there is no certain consensus
in the literature:
* Our definition: [Mosley2013]_, [Kelleher2015]_ and [Guyon2015]_, where
[Guyon2015]_ adopt the adjusted version to ensure that random predictions
have a score of :math:`0` and perfect predictions have a score of :math:`1`.
* Class balanced accuracy as described in [Mosley2013]_: the minimum between the precision
and the recall for each class is computed. Those values are then averaged over the total
number of classes to get the balanced accuracy.
* Balanced Accuracy as described in [Urbanowicz2015]_: the average of sensitivity and specificity
is computed for each class and then averaged over total number of classes.
.. rubric:: References
.. [Guyon2015] I. Guyon, K. Bennett, G. Cawley, H.J. Escalante, S. Escalera, T.K. Ho, N. Macià,
B. Ray, M. Saeed, A.R. Statnikov, E. Viegas, `Design of the 2015 ChaLearn AutoML Challenge
<https://ieeexplore.ieee.org/document/7280767>`_, IJCNN 2015.
.. [Mosley2013] L. Mosley, `A balanced approach to the multi-class imbalance problem
<https://lib.dr.iastate.edu/etd/13537/>`_, IJCV 2010.
.. [Kelleher2015] John. D. Kelleher, Brian Mac Namee, Aoife D'Arcy, `Fundamentals of
Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples,
and Case Studies <https://mitpress.mit.edu/books/fundamentals-machine-learning-predictive-data-analytics>`_,
2015.
.. [Urbanowicz2015] Urbanowicz R.J., Moore, J.H. :doi:`ExSTraCS 2.0: description
and evaluation of a scalable learning classifier
system <10.1007/s12065-015-0128-8>`, Evol. Intel. (2015) 8: 89.
.. _cohen_kappa:
Cohen's kappa
-------------
The function :func:`cohen_kappa_score` computes `Cohen's kappa
<https://en.wikipedia.org/wiki/Cohen%27s_kappa>`_ statistic.
This measure is intended to compare labelings by different human annotators,
not a classifier versus a ground truth.
The kappa score is a number between -1 and 1.
Scores above .8 are generally considered good agreement;
zero or lower means no agreement (practically random labels).
Kappa scores can be computed for binary or multiclass problems,
but not for multilabel problems (except by manually computing a per-label score)
and not for more than two annotators.
>>> from sklearn.metrics import cohen_kappa_score
>>> labeling1 = [2, 0, 2, 2, 0, 1]
>>> labeling2 = [0, 0, 2, 2, 0, 2]
>>> cohen_kappa_score(labeling1, labeling2)
0.4285714285714286
.. _confusion_matrix:
Confusion matrix
----------------
The :func:`confusion_matrix` function evaluates
classification accuracy by computing the `confusion matrix
<https://en.wikipedia.org/wiki/Confusion_matrix>`_ with each row corresponding
to the true class (Wikipedia and other references may use a different
convention for axes).
By definition, entry :math:`i, j` in a confusion matrix is
the number of observations actually in group :math:`i`, but
predicted to be in group :math:`j`. Here is an example::
>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
[0, 0, 1],
[1, 0, 2]])
:class:`ConfusionMatrixDisplay` can be used to visually represent a confusion
matrix as shown in the
:ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
example, which creates the following figure:
.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_confusion_matrix_001.png
:target: ../auto_examples/model_selection/plot_confusion_matrix.html
:scale: 75
:align: center
The ``normalize`` parameter allows reporting ratios instead of counts. The
confusion matrix can be normalized in 3 different ways: ``'pred'``, ``'true'``,
and ``'all'``, which divide the counts by the sum over each column, each row,
or the entire matrix, respectively.
>>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
>>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
>>> confusion_matrix(y_true, y_pred, normalize='all')
array([[0.25 , 0.125],
[0.25 , 0.375]])
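Normalizing by rows (``normalize='true'``) puts each class's recall on the
diagonal, which can be cross-checked against :func:`recall_score` (a minimal
sketch):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 1, 0, 1, 0, 1, 0, 1]

# normalize='true' divides each row by its sum, so the diagonal of the
# matrix holds the recall of each class.
cm = confusion_matrix(y_true, y_pred, normalize="true")
per_class_recall = recall_score(y_true, y_pred, average=None)

print(cm.diagonal())     # recall of class 0 (2/3) and class 1 (3/5)
print(per_class_recall)  # same values
```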
For binary problems, we can get counts of true negatives, false positives,
false negatives and true positives as follows::
>>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
>>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
>>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
>>> tn, fp, fn, tp
(2, 1, 2, 3)
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
for an example of using a confusion matrix to evaluate classifier output
quality.
* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`
for an example of using a confusion matrix to classify
hand-written digits.
* See :ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`
for an example of using a confusion matrix to classify text
documents.
.. _classification_report:
Classification report
----------------------
The :func:`classification_report` function builds a text report showing the
main classification metrics. Here is a small example with custom ``target_names``
and inferred labels::
>>> from sklearn.metrics import classification_report
>>> y_true = [0, 1, 2, 2, 0]
>>> y_pred = [0, 0, 2, 1, 0]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report(y_true, y_pred, target_names=target_names))
precision recall f1-score support
<BLANKLINE>
class 0 0.67 1.00 0.80 2
class 1 0.00 0.00 0.00 1
class 2 1.00 0.50 0.67 2
<BLANKLINE>
accuracy 0.60 5
macro avg 0.56 0.50 0.49 5
weighted avg 0.67 0.60 0.59 5
<BLANKLINE>
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`
for an example of classification report usage for
hand-written digits.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
for an example of classification report usage for
grid search with nested cross-validation.
.. _hamming_loss:
Hamming loss
-------------
The :func:`hamming_loss` computes the average Hamming loss or `Hamming
distance <https://en.wikipedia.org/wiki/Hamming_distance>`_ between two sets
of samples.
If :math:`\hat{y}_{i,j}` is the predicted value for the :math:`j`-th label of a
given sample :math:`i`, :math:`y_{i,j}` is the corresponding true value,
:math:`n_\text{samples}` is the number of samples and :math:`n_\text{labels}`
is the number of labels, then the Hamming loss :math:`L_{Hamming}` is defined
as:
.. math::
L_{Hamming}(y, \hat{y}) = \frac{1}{n_\text{samples} \times n_\text{labels}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=0}^{n_\text{labels} - 1} 1(\hat{y}_{i,j} \not= y_{i,j})
where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_.
The equation above does not hold in the multiclass case. Please refer to the
note below for more information. ::
>>> from sklearn.metrics import hamming_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> hamming_loss(y_true, y_pred)
0.25
In the multilabel case with binary label indicators::
>>> import numpy as np
>>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
0.75
.. note::
In multiclass classification, the Hamming loss corresponds to the Hamming
distance between ``y_true`` and ``y_pred`` which is similar to the
:ref:`zero_one_loss` function. However, while zero-one loss penalizes
prediction sets that do not strictly match true sets, the Hamming loss
penalizes individual labels. Thus the Hamming loss, upper bounded by the zero-one
loss, is always between zero and one, inclusive; and predicting a proper subset
or superset of the true labels will give a Hamming loss between
zero and one, exclusive.
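The bound described in the note can be observed on a small multilabel example
(a minimal sketch comparing the two losses):

```python
import numpy as np
from sklearn.metrics import hamming_loss, zero_one_loss

y_true = np.array([[0, 1, 1], [1, 0, 1]])
# One label wrong in the first sample; the second sample is exactly right.
y_pred = np.array([[0, 1, 0], [1, 0, 1]])

# Zero-one loss counts the whole first sample as wrong: 1 of 2 samples.
zo = zero_one_loss(y_true, y_pred)
# Hamming loss only penalizes the single wrong label: 1 of 6 entries.
hl = hamming_loss(y_true, y_pred)
print(zo, hl)  # 0.5 0.1666...
```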
.. _precision_recall_f_measure_metrics:
Precision, recall and F-measures
---------------------------------
Intuitively, `precision
<https://en.wikipedia.org/wiki/Precision_and_recall#Precision>`_ is the ability
of the classifier not to label as positive a sample that is negative, and
`recall <https://en.wikipedia.org/wiki/Precision_and_recall#Recall>`_ is the
ability of the classifier to find all the positive samples.
The `F-measure <https://en.wikipedia.org/wiki/F1_score>`_
(:math:`F_\beta` and :math:`F_1` measures) can be interpreted as a weighted
harmonic mean of the precision and recall. An
:math:`F_\beta` measure reaches its best value at 1 and its worst score at 0.
With :math:`\beta = 1`, :math:`F_\beta` and
:math:`F_1` are equivalent, and the recall and the precision are equally important.
The :func:`precision_recall_curve` computes a precision-recall curve
from the ground truth label and a score given by the classifier
by varying a decision threshold.
The :func:`average_precision_score` function computes the
`average precision <https://en.wikipedia.org/w/index.php?title=Information_retrieval&oldid=793358396#Average_precision>`_
(AP) from prediction scores. The value is between 0 and 1 and higher is better.
AP is defined as
.. math::
\text{AP} = \sum_n (R_n - R_{n-1}) P_n
where :math:`P_n` and :math:`R_n` are the precision and recall at the
nth threshold. With random predictions, the AP is the fraction of positive
samples.
References [Manning2008]_ and [Everingham2010]_ present alternative variants of
AP that interpolate the precision-recall curve. Currently,
:func:`average_precision_score` does not implement any interpolated variant.
References [Davis2006]_ and [Flach2015]_ describe why a linear interpolation of
points on the precision-recall curve provides an overly-optimistic measure of
classifier performance. This linear interpolation is used when computing area
under the curve with the trapezoidal rule in :func:`auc`.
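The difference between the interpolation-free AP and the trapezoidal estimate
from :func:`auc` can be checked directly (a minimal sketch on toy scores):

```python
import numpy as np
from sklearn.metrics import auc, average_precision_score, precision_recall_curve

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

precision, recall, _ = precision_recall_curve(y_true, y_scores)

# Step-wise AP: sum of (R_n - R_{n-1}) * P_n, no interpolation.
ap = average_precision_score(y_true, y_scores)

# The trapezoidal rule linearly interpolates between curve points,
# which generally yields a different summary value.
trapezoid = auc(recall, precision)

print(ap, trapezoid)  # the two summaries differ
```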
Several functions allow you to analyze the precision, recall and F-measures
score:
.. autosummary::
average_precision_score
f1_score
fbeta_score
precision_recall_curve
precision_recall_fscore_support
precision_score
recall_score
Note that the :func:`precision_recall_curve` function is restricted to the
binary case. The :func:`average_precision_score` function supports multiclass
and multilabel formats by computing each class score in a One-vs-the-rest (OvR)
fashion and averaging them or not depending on its ``average`` argument value.
The :func:`PrecisionRecallDisplay.from_estimator` and
:func:`PrecisionRecallDisplay.from_predictions` functions will plot the
precision-recall curve as follows.
.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_precision_recall_001.png
:target: ../auto_examples/model_selection/plot_precision_recall.html#plot-the-precision-recall-curve
:scale: 75
:align: center
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
for an example of :func:`precision_score` and :func:`recall_score` usage
to estimate parameters using grid search with nested cross-validation.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_precision_recall.py`
for an example of :func:`precision_recall_curve` usage to evaluate
classifier output quality.
.. rubric:: References
.. [Manning2008] C.D. Manning, P. Raghavan, H. Schütze, `Introduction to Information Retrieval
<https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-ranked-retrieval-results-1.html>`_,
2008.
.. [Everingham2010] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn, A. Zisserman,
`The Pascal Visual Object Classes (VOC) Challenge
<https://citeseerx.ist.psu.edu/doc_view/pid/b6bebfd529b233f00cb854b7d8070319600cf59d>`_,
IJCV 2010.
.. [Davis2006] J. Davis, M. Goadrich, `The Relationship Between Precision-Recall and ROC Curves
<https://www.biostat.wisc.edu/~page/rocpr.pdf>`_,
ICML 2006.
.. [Flach2015] P.A. Flach, M. Kull, `Precision-Recall-Gain Curves: PR Analysis Done Right
<https://papers.nips.cc/paper/5867-precision-recall-gain-curves-pr-analysis-done-right.pdf>`_,
NIPS 2015.
Binary classification
^^^^^^^^^^^^^^^^^^^^^
In a binary classification task, the terms "positive" and "negative" refer
to the classifier's prediction, and the terms "true" and "false" refer to
whether that prediction corresponds to the external judgment (sometimes known
as the "observation"). Given these definitions, we can formulate the
following table:
+-------------------+------------------------------------------------+
| | Actual class (observation) |
+-------------------+---------------------+--------------------------+
| Predicted class | tp (true positive) | fp (false positive) |
| (expectation) | Correct result | Unexpected result |
| +---------------------+--------------------------+
| | fn (false negative) | tn (true negative) |
| | Missing result | Correct absence of result|
+-------------------+---------------------+--------------------------+
In this context, we can define the notions of precision and recall:
.. math::
\text{precision} = \frac{\text{tp}}{\text{tp} + \text{fp}},
.. math::
\text{recall} = \frac{\text{tp}}{\text{tp} + \text{fn}},
(Sometimes recall is also called "sensitivity".)
F-measure is the weighted harmonic mean of precision and recall, with precision's
contribution to the mean weighted by some parameter :math:`\beta`:
.. math::
F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}
To avoid division by zero when precision and recall are zero, scikit-learn calculates the F-measure with this
otherwise-equivalent formula:
.. math::
F_\beta = \frac{(1 + \beta^2) \text{tp}}{(1 + \beta^2) \text{tp} + \text{fp} + \beta^2 \text{fn}}
Note that this formula is still undefined when there are no true positives, false
positives, or false negatives. By default, F-1 for a set of exclusively true negatives
is calculated as 0; this behavior can be changed using the ``zero_division``
parameter.
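The degenerate all-true-negative case can be exercised directly (a minimal
sketch using :func:`f1_score`):

```python
from sklearn.metrics import f1_score

# Degenerate case: no positive labels in truth or prediction, so
# tp = fp = fn = 0 and the F-measure formula is undefined.
y_true = [0, 0, 0]
y_pred = [0, 0, 0]

# By default the undefined score is reported as 0 (with a warning).
default = f1_score(y_true, y_pred)
# zero_division chooses the value returned in the undefined case instead.
overridden = f1_score(y_true, y_pred, zero_division=1)
print(default, overridden)  # 0.0 1.0
```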
Here are some small examples in binary classification::
>>> from sklearn import metrics
>>> y_pred = [0, 1, 0, 0]
>>> y_true = [0, 1, 0, 1]
>>> metrics.precision_score(y_true, y_pred)
1.0
>>> metrics.recall_score(y_true, y_pred)
0.5
>>> metrics.f1_score(y_true, y_pred)
0.66...
>>> metrics.fbeta_score(y_true, y_pred, beta=0.5)
0.83...
>>> metrics.fbeta_score(y_true, y_pred, beta=1)
0.66...
>>> metrics.fbeta_score(y_true, y_pred, beta=2)
0.55...
>>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5)
(array([0.66..., 1. ]), array([1. , 0.5]), array([0.71..., 0.83...]), array([2, 2]))
>>> import numpy as np
>>> from sklearn.metrics import precision_recall_curve
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> precision, recall, threshold = precision_recall_curve(y_true, y_scores)
>>> precision
array([0.5 , 0.66..., 0.5 , 1. , 1. ])
>>> recall
array([1. , 1. , 0.5, 0.5, 0. ])
>>> threshold
array([0.1 , 0.35, 0.4 , 0.8 ])
>>> average_precision_score(y_true, y_scores)
0.83...
Multiclass and multilabel classification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In a multiclass and multilabel classification task, the notions of precision,
recall, and F-measures can be applied to each label independently.
There are a few ways to combine results across labels,
specified by the ``average`` argument to the
:func:`average_precision_score`, :func:`f1_score`,
:func:`fbeta_score`, :func:`precision_recall_fscore_support`,
:func:`precision_score` and :func:`recall_score` functions, as described
:ref:`above <average>`.
Note the following behaviors when averaging:
* If all labels are included, "micro"-averaging in a multiclass setting will produce
precision, recall and :math:`F` that are all identical to accuracy.
* "weighted" averaging may produce an F-score that is not between precision and recall.
* "macro" averaging for F-measures is calculated as the arithmetic mean over
per-label/class F-measures, not the harmonic mean over the arithmetic precision and
recall means. Both calculations can be seen in the literature but are not equivalent,
see [OB2019]_ for details.
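The first of these behaviors can be verified directly (a minimal sketch on a
toy multiclass labeling):

```python
from math import isclose
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# In multiclass problems every misclassification is simultaneously a false
# positive (for the predicted class) and a false negative (for the true
# class), so micro-averaged precision, recall and F1 all equal accuracy.
acc = accuracy_score(y_true, y_pred)
for metric in (precision_score, recall_score, f1_score):
    assert isclose(metric(y_true, y_pred, average="micro"), acc)
print(acc)  # 0.333...
```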
To make this more explicit, consider the following notation:
* :math:`y` the set of *true* :math:`(sample, label)` pairs
* :math:`\hat{y}` the set of *predicted* :math:`(sample, label)` pairs
* :math:`L` the set of labels
* :math:`S` the set of samples
* :math:`y_s` the subset of :math:`y` with sample :math:`s`,
i.e. :math:`y_s := \left\{(s', l) \in y | s' = s\right\}`
* :math:`y_l` the subset of :math:`y` with label :math:`l`
* similarly, :math:`\hat{y}_s` and :math:`\hat{y}_l` are subsets of
:math:`\hat{y}`
* :math:`P(A, B) := \frac{\left| A \cap B \right|}{\left|B\right|}` for some
sets :math:`A` and :math:`B`
* :math:`R(A, B) := \frac{\left| A \cap B \right|}{\left|A\right|}`
(Conventions vary on handling :math:`A = \emptyset`; this implementation uses
:math:`R(A, B):=0`, and similar for :math:`P`.)
* :math:`F_\beta(A, B) := \left(1 + \beta^2\right) \frac{P(A, B) \times R(A, B)}{\beta^2 P(A, B) + R(A, B)}`
Then the metrics are defined as:
+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
|``average`` | Precision | Recall | F\_beta |
+===============+==================================================================================================================+==================================================================================================================+======================================================================================================================+
|``"micro"`` | :math:`P(y, \hat{y})` | :math:`R(y, \hat{y})` | :math:`F_\beta(y, \hat{y})` |
+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
|``"samples"`` | :math:`\frac{1}{\left|S\right|} \sum_{s \in S} P(y_s, \hat{y}_s)` | :math:`\frac{1}{\left|S\right|} \sum_{s \in S} R(y_s, \hat{y}_s)` | :math:`\frac{1}{\left|S\right|} \sum_{s \in S} F_\beta(y_s, \hat{y}_s)` |
+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
|``"macro"`` | :math:`\frac{1}{\left|L\right|} \sum_{l \in L} P(y_l, \hat{y}_l)` | :math:`\frac{1}{\left|L\right|} \sum_{l \in L} R(y_l, \hat{y}_l)` | :math:`\frac{1}{\left|L\right|} \sum_{l \in L} F_\beta(y_l, \hat{y}_l)` |
+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
|``"weighted"`` | :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| P(y_l, \hat{y}_l)` | :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| R(y_l, \hat{y}_l)` | :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| F_\beta(y_l, \hat{y}_l)` |
+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
|``None`` | :math:`\langle P(y_l, \hat{y}_l) | l \in L \rangle` | :math:`\langle R(y_l, \hat{y}_l) | l \in L \rangle` | :math:`\langle F_\beta(y_l, \hat{y}_l) | l \in L \rangle` |
+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
>>> from sklearn import metrics
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> metrics.precision_score(y_true, y_pred, average='macro')
0.22...
>>> metrics.recall_score(y_true, y_pred, average='micro')
0.33...
>>> metrics.f1_score(y_true, y_pred, average='weighted')
0.26...
>>> metrics.fbeta_score(y_true, y_pred, average='macro', beta=0.5)
0.23...
>>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5, average=None)
(array([0.66..., 0. , 0. ]), array([1., 0., 0.]), array([0.71..., 0. , 0. ]), array([2, 2, 2]...))
For multiclass classification with a "negative class", it is possible to exclude some labels:
>>> metrics.recall_score(y_true, y_pred, labels=[1, 2], average='micro')
... # excluding 0, no labels were correctly recalled
0.0
Similarly, labels not present in the data sample may be accounted for in macro-averaging.
>>> metrics.precision_score(y_true, y_pred, labels=[0, 1, 2, 3], average='macro')
0.166...
.. rubric:: References
.. [OB2019] :arxiv:`Opitz, J., & Burst, S. (2019). "Macro f1 and macro f1."
<1911.03347>`
.. _jaccard_similarity_score:
Jaccard similarity coefficient score
-------------------------------------
The :func:`jaccard_score` function computes the average of `Jaccard similarity
coefficients <https://en.wikipedia.org/wiki/Jaccard_index>`_, also called the
Jaccard index, between pairs of label sets.
The Jaccard similarity coefficient with a ground truth label set :math:`y` and
predicted label set :math:`\hat{y}`, is defined as
.. math::
J(y, \hat{y}) = \frac{|y \cap \hat{y}|}{|y \cup \hat{y}|}.
The :func:`jaccard_score` (like :func:`precision_recall_fscore_support`) applies
natively to binary targets. By computing it set-wise it can be extended to apply
to multilabel and multiclass through the use of ``average`` (see
:ref:`above <average>`).
In the binary case::
>>> import numpy as np
>>> from sklearn.metrics import jaccard_score
>>> y_true = np.array([[0, 1, 1],
... [1, 1, 0]])
>>> y_pred = np.array([[1, 1, 1],
... [1, 0, 0]])
>>> jaccard_score(y_true[0], y_pred[0])
0.6666...
In the 2D comparison case (e.g. image similarity):
>>> jaccard_score(y_true, y_pred, average="micro")
0.6
In the multilabel case with binary label indicators::
>>> jaccard_score(y_true, y_pred, average='samples')
0.5833...
>>> jaccard_score(y_true, y_pred, average='macro')
0.6666...
>>> jaccard_score(y_true, y_pred, average=None)
array([0.5, 0.5, 1. ])
Multiclass problems are binarized and treated like the corresponding
multilabel problem::
>>> y_pred = [0, 2, 1, 2]
>>> y_true = [0, 1, 2, 2]
>>> jaccard_score(y_true, y_pred, average=None)
array([1. , 0. , 0.33...])
>>> jaccard_score(y_true, y_pred, average='macro')
0.44...
>>> jaccard_score(y_true, y_pred, average='micro')
0.33...
.. _hinge_loss:
Hinge loss
----------
The :func:`hinge_loss` function computes the average distance between
the model and the data using
`hinge loss <https://en.wikipedia.org/wiki/Hinge_loss>`_, a one-sided metric
that considers only prediction errors. (Hinge
loss is used in maximal margin classifiers such as support vector machines.)
If the true label :math:`y_i` of a binary classification task is encoded as
:math:`y_i \in \left\{-1, +1\right\}` for every sample :math:`i`; and :math:`w_i`
is the corresponding predicted decision (an array of shape (`n_samples`,) as
output by the `decision_function` method), then the hinge loss is defined as:
.. math::
L_\text{Hinge}(y, w) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \max\left\{1 - w_i y_i, 0\right\}
If there are more than two labels, :func:`hinge_loss` uses a multiclass variant
due to Crammer & Singer, described in
`this paper <https://jmlr.csail.mit.edu/papers/volume2/crammer01a/crammer01a.pdf>`_.
In this case the predicted decision is an array of shape (`n_samples`,
`n_labels`). If :math:`w_{i, y_i}` is the predicted decision for the true label
:math:`y_i` of the :math:`i`-th sample; and
:math:`\hat{w}_{i, y_i} = \max\left\{w_{i, y_j}~|~y_j \ne y_i \right\}`
is the maximum of the
predicted decisions for all the other labels, then the multi-class hinge loss
is defined by:
.. math::
L_\text{Hinge}(y, w) = \frac{1}{n_\text{samples}}
\sum_{i=0}^{n_\text{samples}-1} \max\left\{1 + \hat{w}_{i, y_i}
- w_{i, y_i}, 0\right\}
Here is a small example demonstrating the use of the :func:`hinge_loss` function
with a svm classifier in a binary class problem::
>>> from sklearn import svm
>>> from sklearn.metrics import hinge_loss
>>> X = [[0], [1]]
>>> y = [-1, 1]
>>> est = svm.LinearSVC(random_state=0)
>>> est.fit(X, y)
LinearSVC(random_state=0)
>>> pred_decision = est.decision_function([[-2], [3], [0.5]])
>>> pred_decision
array([-2.18..., 2.36..., 0.09...])
>>> hinge_loss([-1, 1, 1], pred_decision)
0.3...
Here is an example demonstrating the use of the :func:`hinge_loss` function
with a svm classifier in a multiclass problem::
>>> X = np.array([[0], [1], [2], [3]])
>>> Y = np.array([0, 1, 2, 3])
>>> labels = np.array([0, 1, 2, 3])
>>> est = svm.LinearSVC()
>>> est.fit(X, Y)
LinearSVC()
>>> pred_decision = est.decision_function([[-1], [2], [3]])
>>> y_true = [0, 2, 3]
>>> hinge_loss(y_true, pred_decision, labels=labels)
0.56...
.. _log_loss:
Log loss
--------
Log loss, also called logistic regression loss or
cross-entropy loss, is defined on probability estimates. It is
commonly used in (multinomial) logistic regression and neural networks, as well
as in some variants of expectation-maximization, and can be used to evaluate the
probability outputs (``predict_proba``) of a classifier instead of its
discrete predictions.
For binary classification with a true label :math:`y \in \{0,1\}`
and a probability estimate :math:`p = \operatorname{Pr}(y = 1)`,
the log loss per sample is the negative log-likelihood
of the classifier given the true label:
.. math::
L_{\log}(y, p) = -\log \operatorname{Pr}(y|p) = -(y \log (p) + (1 - y) \log (1 - p))
This extends to the multiclass case as follows.
Let the true labels for a set of samples
be encoded as a 1-of-K binary indicator matrix :math:`Y`,
i.e., :math:`y_{i,k} = 1` if sample :math:`i` has label :math:`k`
taken from a set of :math:`K` labels.
Let :math:`P` be a matrix of probability estimates,
with :math:`p_{i,k} = \operatorname{Pr}(y_{i,k} = 1)`.
Then the log loss of the whole set is
.. math::
L_{\log}(Y, P) = -\log \operatorname{Pr}(Y|P) = - \frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} y_{i,k} \log p_{i,k}
To see how this generalizes the binary log loss given above,
note that in the binary case,
:math:`p_{i,0} = 1 - p_{i,1}` and :math:`y_{i,0} = 1 - y_{i,1}`,
so expanding the inner sum over :math:`y_{i,k} \in \{0,1\}`
gives the binary log loss.
The :func:`log_loss` function computes log loss given a list of ground-truth
labels and a probability matrix, as returned by an estimator's ``predict_proba``
method.
>>> from sklearn.metrics import log_loss
>>> y_true = [0, 0, 1, 1]
>>> y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
>>> log_loss(y_true, y_pred)
0.1738...
The first ``[.9, .1]`` in ``y_pred`` denotes 90% probability that the first
sample has label 0. The log loss is non-negative.
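The formula can be cross-checked with a manual negative log-likelihood
computation (a minimal sketch using the same probabilities as above):

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = [0, 0, 1, 1]
y_pred = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.01, 0.99]])

# Manual negative log-likelihood: for each sample take -log of the
# probability assigned to its true class, then average over samples.
manual = -np.mean(np.log(y_pred[np.arange(len(y_true)), y_true]))

print(log_loss(y_true, y_pred), manual)  # both ~0.1738
```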
.. _matthews_corrcoef:
Matthews correlation coefficient
---------------------------------
The :func:`matthews_corrcoef` function computes the
`Matthews correlation coefficient (MCC) <https://en.wikipedia.org/wiki/Matthews_correlation_coefficient>`_
for binary classes. Quoting Wikipedia:
"The Matthews correlation coefficient is used in machine learning as a
measure of the quality of binary (two-class) classifications. It takes
into account true and false positives and negatives and is generally
regarded as a balanced measure which can be used even if the classes are
of very different sizes. The MCC is in essence a correlation coefficient
value between -1 and +1. A coefficient of +1 represents a perfect
prediction, 0 an average random prediction and -1 an inverse prediction.
The statistic is also known as the phi coefficient."
In the binary (two-class) case, let :math:`tp`, :math:`tn`, :math:`fp` and
:math:`fn` be respectively the number of true positives, true negatives, false
positives and false negatives. The MCC is then defined as
.. math::
MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}.
In the multiclass case, the Matthews correlation coefficient can be `defined
<http://rk.kvl.dk/introduction/index.html>`_ in terms of a
:func:`confusion_matrix` :math:`C` for :math:`K` classes. To simplify the
definition consider the following intermediate variables:
* :math:`t_k=\sum_{i}^{K} C_{ik}` the number of times class :math:`k` truly occurred,
* :math:`p_k=\sum_{i}^{K} C_{ki}` the number of times class :math:`k` was predicted,
* :math:`c=\sum_{k}^{K} C_{kk}` the total number of samples correctly predicted,
* :math:`s=\sum_{i}^{K} \sum_{j}^{K} C_{ij}` the total number of samples.
Then the multiclass MCC is defined as:
.. math::
MCC = \frac{
c \times s - \sum_{k}^{K} p_k \times t_k
}{\sqrt{
(s^2 - \sum_{k}^{K} p_k^2) \times
(s^2 - \sum_{k}^{K} t_k^2)
}}
When there are more than two labels, the value of the MCC will no longer range
between -1 and +1. Instead the minimum value will be somewhere between -1 and 0
depending on the number and distribution of ground truth labels. The maximum
value is always +1.
For additional information, see [WikipediaMCC2021]_.
Here is a small example illustrating the usage of the :func:`matthews_corrcoef`
function::
>>> from sklearn.metrics import matthews_corrcoef
>>> y_true = [+1, +1, +1, -1]
>>> y_pred = [+1, -1, +1, +1]
>>> matthews_corrcoef(y_true, y_pred)
-0.33...
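The multiclass definition above can be checked against :func:`matthews_corrcoef`
by computing the intermediate variables directly from a
:func:`confusion_matrix`; a small sketch (the labels below are made up for
illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

# sklearn's convention: C[i, j] = samples of true class i predicted as j
C = confusion_matrix(y_true, y_pred)
t = C.sum(axis=1)   # t_k: number of times class k truly occurred
p = C.sum(axis=0)   # p_k: number of times class k was predicted
c = np.trace(C)     # total number of correctly predicted samples
s = C.sum()         # total number of samples

mcc = (c * s - (p * t).sum()) / np.sqrt(
    (s**2 - (p**2).sum()) * (s**2 - (t**2).sum())
)
print(np.isclose(mcc, matthews_corrcoef(y_true, y_pred)))  # True
```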
.. rubric:: References
.. [WikipediaMCC2021] Wikipedia contributors. Phi coefficient.
Wikipedia, The Free Encyclopedia. April 21, 2021, 12:21 CEST.
Available at: https://en.wikipedia.org/wiki/Phi_coefficient
Accessed April 21, 2021.
.. _multilabel_confusion_matrix:
Multi-label confusion matrix
----------------------------
The :func:`multilabel_confusion_matrix` function computes class-wise (default)
or sample-wise (``samplewise=True``) multilabel confusion matrices to evaluate
the accuracy of a classification. :func:`multilabel_confusion_matrix` also treats
multiclass data as if it were multilabel, as this is a transformation commonly
applied to evaluate multiclass problems with binary classification metrics
(such as precision, recall, etc.).
When calculating class-wise multilabel confusion matrix :math:`C`, the
count of true negatives for class :math:`i` is :math:`C_{i,0,0}`, false
negatives is :math:`C_{i,1,0}`, true positives is :math:`C_{i,1,1}`
and false positives is :math:`C_{i,0,1}`.
Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with
:term:`multilabel indicator matrix` input::
>>> import numpy as np
>>> from sklearn.metrics import multilabel_confusion_matrix
>>> y_true = np.array([[1, 0, 1],
... [0, 1, 0]])
>>> y_pred = np.array([[1, 0, 0],
... [0, 1, 1]])
>>> multilabel_confusion_matrix(y_true, y_pred)
array([[[1, 0],
[0, 1]],
<BLANKLINE>
[[1, 0],
[0, 1]],
<BLANKLINE>
[[0, 1],
[1, 0]]])
Or a confusion matrix can be constructed for each sample's labels::
>>> multilabel_confusion_matrix(y_true, y_pred, samplewise=True)
array([[[1, 0],
[1, 1]],
<BLANKLINE>
[[1, 1],
[0, 1]]])
Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with
:term:`multiclass` input::
>>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
>>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
>>> multilabel_confusion_matrix(y_true, y_pred,
... labels=["ant", "bird", "cat"])
array([[[3, 1],
[0, 2]],
<BLANKLINE>
[[5, 0],
[1, 0]],
<BLANKLINE>
[[2, 1],
[1, 2]]])
Here are some examples demonstrating the use of the
:func:`multilabel_confusion_matrix` function to calculate recall
(or sensitivity), specificity, fall out and miss rate for each class in a
problem with multilabel indicator matrix input.
Calculating
`recall <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`__
(also called the true positive rate or the sensitivity) for each class::
>>> y_true = np.array([[0, 0, 1],
... [0, 1, 0],
... [1, 1, 0]])
>>> y_pred = np.array([[0, 1, 0],
... [0, 0, 1],
... [1, 1, 0]])
>>> mcm = multilabel_confusion_matrix(y_true, y_pred)
>>> tn = mcm[:, 0, 0]
>>> tp = mcm[:, 1, 1]
>>> fn = mcm[:, 1, 0]
>>> fp = mcm[:, 0, 1]
>>> tp / (tp + fn)
array([1. , 0.5, 0. ])
Calculating
`specificity <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`__
(also called the true negative rate) for each class::
>>> tn / (tn + fp)
array([1. , 0. , 0.5])
Calculating `fall out <https://en.wikipedia.org/wiki/False_positive_rate>`__
(also called the false positive rate) for each class::
>>> fp / (fp + tn)
array([0. , 1. , 0.5])
Calculating `miss rate
<https://en.wikipedia.org/wiki/False_positives_and_false_negatives>`__
(also called the false negative rate) for each class::
>>> fn / (fn + tp)
array([0. , 0.5, 1. ])
.. _roc_metrics:
Receiver operating characteristic (ROC)
---------------------------------------
The function :func:`roc_curve` computes the
`receiver operating characteristic curve, or ROC curve <https://en.wikipedia.org/wiki/Receiver_operating_characteristic>`_.
Quoting Wikipedia:
"A receiver operating characteristic (ROC), or simply ROC curve, is a
graphical plot which illustrates the performance of a binary classifier
system as its discrimination threshold is varied. It is created by plotting
the fraction of true positives out of the positives (TPR = true positive
rate) vs. the fraction of false positives out of the negatives (FPR = false
positive rate), at various threshold settings. TPR is also known as
sensitivity, and FPR is one minus the specificity or true negative rate."
This function requires the true binary value and the target scores, which can
either be probability estimates of the positive class, confidence values, or
binary decisions. Here is a small example of how to use the :func:`roc_curve`
function::
>>> import numpy as np
>>> from sklearn.metrics import roc_curve
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = roc_curve(y, scores, pos_label=2)
>>> fpr
array([0. , 0. , 0.5, 0.5, 1. ])
>>> tpr
array([0. , 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([ inf, 0.8 , 0.4 , 0.35, 0.1 ])
Compared to metrics such as the subset accuracy, the Hamming loss, or the
F1 score, ROC doesn't require optimizing a threshold for each label.
The :func:`roc_auc_score` function computes the area under the ROC curve, a
quantity also denoted ROC-AUC or AUROC. By doing so, the curve information is
summarized in one number.
The following figure shows the ROC curve and ROC-AUC score for a classifier
aimed to distinguish the virginica flower from the rest of the species in the
:ref:`iris_dataset`:
.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_roc_001.png
:target: ../auto_examples/model_selection/plot_roc.html
:scale: 75
:align: center
For more information see the `Wikipedia article on AUC
<https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve>`_.
.. _roc_auc_binary:
Binary case
^^^^^^^^^^^
In the **binary case**, you can either provide the probability estimates, using
the `classifier.predict_proba()` method, or the non-thresholded decision values
given by the `classifier.decision_function()` method. In the case of providing
the probability estimates, the probability of the class with the
"greater label" should be provided. The "greater label" corresponds to
`classifier.classes_[1]` and thus `classifier.predict_proba(X)[:, 1]`.
Therefore, the `y_score` parameter is of size (n_samples,).
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import roc_auc_score
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = LogisticRegression(solver="liblinear").fit(X, y)
>>> clf.classes_
array([0, 1])
We can use the probability estimates corresponding to `clf.classes_[1]`.
>>> y_score = clf.predict_proba(X)[:, 1]
>>> roc_auc_score(y, y_score)
0.99...
Otherwise, we can use the non-thresholded decision values
>>> roc_auc_score(y, clf.decision_function(X))
0.99...
.. _roc_auc_multiclass:
Multi-class case
^^^^^^^^^^^^^^^^
The :func:`roc_auc_score` function can also be used in **multi-class
classification**. Two averaging strategies are currently supported: the
one-vs-one algorithm computes the average of the pairwise ROC AUC scores, and
the one-vs-rest algorithm computes the average of the ROC AUC scores for each
class against all other classes. In both cases, the predicted labels are
provided in an array with values from 0 to ``n_classes``, and the scores
correspond to the probability estimates that a sample belongs to a particular
class. The OvO and OvR algorithms support weighting uniformly
(``average='macro'``) and by prevalence (``average='weighted'``).
.. dropdown:: One-vs-one Algorithm
Computes the average AUC of all possible pairwise
combinations of classes. [HT2001]_ defines a multiclass AUC metric weighted
uniformly:
.. math::
\frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c (\text{AUC}(j | k) +
\text{AUC}(k | j))
where :math:`c` is the number of classes and :math:`\text{AUC}(j | k)` is the
AUC with class :math:`j` as the positive class and class :math:`k` as the
negative class. In general,
:math:`\text{AUC}(j | k) \neq \text{AUC}(k | j)` in the multiclass
case. This algorithm is used by setting the keyword argument ``multi_class``
to ``'ovo'`` and ``average`` to ``'macro'``.
The [HT2001]_ multiclass AUC metric can be extended to be weighted by the
prevalence:
.. math::
\frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c p(j \cup k)(
\text{AUC}(j | k) + \text{AUC}(k | j))
where :math:`c` is the number of classes. This algorithm is used by setting
the keyword argument ``multi_class`` to ``'ovo'`` and ``average`` to
``'weighted'``. The ``'weighted'`` option returns a prevalence-weighted average
as described in [FC2009]_.
.. dropdown:: One-vs-rest Algorithm
Computes the AUC of each class against the rest
[PD2000]_. The algorithm is functionally the same as the multilabel case. To
enable this algorithm, set the keyword argument ``multi_class`` to ``'ovr'``.
In addition to ``'macro'`` [F2006]_ and ``'weighted'`` [F2001]_ averaging, OvR
supports ``'micro'`` averaging.
In applications where a high false positive rate is not tolerable the parameter
``max_fpr`` of :func:`roc_auc_score` can be used to summarize the ROC curve up
to the given limit.
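As a minimal sketch, both averaging strategies can be computed from the
probability estimates of a classifier trained on the iris dataset (the
particular estimator and its settings are incidental choices):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
y_score = clf.predict_proba(X)  # shape (n_samples, n_classes)

# One-vs-rest, uniformly weighted average over classes
ovr_macro = roc_auc_score(y, y_score, multi_class="ovr", average="macro")
# One-vs-one, weighted by class prevalence
ovo_weighted = roc_auc_score(y, y_score, multi_class="ovo", average="weighted")
```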
The following figure shows the micro-averaged ROC curve and its corresponding
ROC-AUC score for a classifier aimed to distinguish the different species in
the :ref:`iris_dataset`:
.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_roc_002.png
:target: ../auto_examples/model_selection/plot_roc.html
:scale: 75
:align: center
.. _roc_auc_multilabel:
Multi-label case
^^^^^^^^^^^^^^^^
In **multi-label classification**, the :func:`roc_auc_score` function is
extended by averaging over the labels as :ref:`above <average>`. In this case,
you should provide a `y_score` of shape `(n_samples, n_classes)`. Thus, when
using the probability estimates, one needs to select the probability of the
class with the greater label for each output.
>>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> X, y = make_multilabel_classification(random_state=0)
>>> inner_clf = LogisticRegression(solver="liblinear", random_state=0)
>>> clf = MultiOutputClassifier(inner_clf).fit(X, y)
>>> y_score = np.transpose([y_pred[:, 1] for y_pred in clf.predict_proba(X)])
>>> roc_auc_score(y, y_score, average=None)
array([0.82..., 0.86..., 0.94..., 0.85... , 0.94...])
And the decision values do not require such processing.
>>> from sklearn.linear_model import RidgeClassifierCV
>>> clf = RidgeClassifierCV().fit(X, y)
>>> y_score = clf.decision_function(X)
>>> roc_auc_score(y, y_score, average=None)
array([0.81..., 0.84... , 0.93..., 0.87..., 0.94...])
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc.py` for an example of
using ROC to evaluate the quality of the output of a classifier.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc_crossval.py` for an
example of using ROC to evaluate classifier output quality, using cross-validation.
* See :ref:`sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py`
for an example of using ROC to model species distribution.
.. rubric:: References
.. [HT2001] Hand, D.J. and Till, R.J., (2001). `A simple generalisation
of the area under the ROC curve for multiple class classification problems.
<http://link.springer.com/article/10.1023/A:1010920819831>`_
Machine learning, 45(2), pp. 171-186.
.. [FC2009] Ferri, Cèsar & Hernandez-Orallo, Jose & Modroiu, R. (2009).
`An Experimental Comparison of Performance Measures for Classification.
<https://www.math.ucdavis.edu/~saito/data/roc/ferri-class-perf-metrics.pdf>`_
Pattern Recognition Letters. 30. 27-38.
.. [PD2000] Provost, F., Domingos, P. (2000). `Well-trained PETs: Improving
probability estimation trees
<https://fosterprovost.com/publication/well-trained-pets-improving-probability-estimation-trees/>`_
(Section 6.2), CeDER Working Paper #IS-00-04, Stern School of Business,
New York University.
.. [F2006] Fawcett, T., 2006. `An introduction to ROC analysis.
<http://www.sciencedirect.com/science/article/pii/S016786550500303X>`_
Pattern Recognition Letters, 27(8), pp. 861-874.
.. [F2001] Fawcett, T., 2001. `Using rule sets to maximize
ROC performance <https://ieeexplore.ieee.org/document/989510/>`_
In Data Mining, 2001.
Proceedings IEEE International Conference, pp. 131-138.
.. _det_curve:
Detection error tradeoff (DET)
------------------------------
The function :func:`det_curve` computes the
detection error tradeoff (DET) curve [WikipediaDET2017]_.
Quoting Wikipedia:
"A detection error tradeoff (DET) graph is a graphical plot of error rates
for binary classification systems, plotting false reject rate vs. false
accept rate. The x- and y-axes are scaled non-linearly by their standard
normal deviates (or just by logarithmic transformation), yielding tradeoff
curves that are more linear than ROC curves, and use most of the image area
to highlight the differences of importance in the critical operating region."
DET curves are a variation of receiver operating characteristic (ROC) curves
where False Negative Rate is plotted on the y-axis instead of True Positive
Rate.
DET curves are commonly plotted in normal deviate scale by transformation with
:math:`\phi^{-1}` (with :math:`\phi` being the cumulative distribution
function).
The resulting performance curves explicitly visualize the tradeoff of error
types for given classification algorithms.
See [Martin1997]_ for examples and further motivation.
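As a sketch, :func:`det_curve` returns the false positive and false negative
rates, which can then be mapped to normal deviate scale with :math:`\phi^{-1}`
(here via ``scipy.stats.norm.ppf``; the scores are made-up toy values):

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import det_curve

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# fpr: false positive rate, fnr: false negative rate, per threshold
fpr, fnr, thresholds = det_curve(y_true, y_score)

# Normal deviate scale; clip to avoid +/- infinity at rates of 0 or 1
fpr_nd = norm.ppf(np.clip(fpr, 1e-7, 1 - 1e-7))
fnr_nd = norm.ppf(np.clip(fnr, 1e-7, 1 - 1e-7))
```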
This figure compares the ROC and DET curves of two example classifiers on the
same classification task:
.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_det_001.png
:target: ../auto_examples/model_selection/plot_det.html
:scale: 75
:align: center
.. dropdown:: Properties
* DET curves form a linear curve in normal deviate scale if the detection
scores are normally (or close-to normally) distributed.
It was shown by [Navratil2007]_ that the reverse is not necessarily true and
even more general distributions are able to produce linear DET curves.
* The normal deviate scale transformation spreads out the points such that a
comparatively larger space of plot is occupied.
Therefore curves with similar classification performance might be easier to
distinguish on a DET plot.
* With False Negative Rate being "inverse" to True Positive Rate the point
of perfection for DET curves is the origin (in contrast to the top left
corner for ROC curves).
.. dropdown:: Applications and limitations
DET curves are intuitive to read and hence allow quick visual assessment of a
classifier's performance.
Additionally, DET curves can be consulted for threshold analysis and operating
point selection.
This is particularly helpful if a comparison of error types is required.
On the other hand, DET curves do not provide their metric as a single number.
Therefore, for either automated evaluation or comparison to other
classification tasks, metrics like the derived area under the ROC curve might
be better suited.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_model_selection_plot_det.py`
for an example comparison between receiver operating characteristic (ROC)
curves and Detection error tradeoff (DET) curves.
.. rubric:: References
.. [WikipediaDET2017] Wikipedia contributors. Detection error tradeoff.
Wikipedia, The Free Encyclopedia. September 4, 2017, 23:33 UTC.
Available at: https://en.wikipedia.org/w/index.php?title=Detection_error_tradeoff&oldid=798982054.
Accessed February 19, 2018.
.. [Martin1997] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki,
`The DET Curve in Assessment of Detection Task Performance
<https://ccc.inaoep.mx/~villasen/bib/martin97det.pdf>`_, NIST 1997.
.. [Navratil2007] J. Navratil and D. Klusacek,
`"On Linear DETs" <https://ieeexplore.ieee.org/document/4218079>`_,
2007 IEEE International Conference on Acoustics,
Speech and Signal Processing - ICASSP '07, Honolulu,
HI, 2007, pp. IV-229-IV-232.
.. _zero_one_loss:
Zero one loss
--------------
The :func:`zero_one_loss` function computes the sum or the average of the 0-1
classification loss (:math:`L_{0-1}`) over :math:`n_{\text{samples}}`. By
default, the function normalizes (averages) over the samples. To get the sum of
the :math:`L_{0-1}`, set ``normalize`` to ``False``.
In multilabel classification, the :func:`zero_one_loss` scores a subset as
one if its labels strictly match the predictions, and as a zero if there
are any errors. By default, the function returns the fraction of imperfectly
predicted subsets. To get the count of such subsets instead, set
``normalize`` to ``False``.
If :math:`\hat{y}_i` is the predicted value of
the :math:`i`-th sample and :math:`y_i` is the corresponding true value,
then the 0-1 loss :math:`L_{0-1}` is defined as:
.. math::
L_{0-1}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i \not= y_i)
where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_. The zero-one
loss can also be computed as :math:`\text{zero-one loss} = 1 - \text{accuracy}`.
>>> from sklearn.metrics import zero_one_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> zero_one_loss(y_true, y_pred)
0.25
>>> zero_one_loss(y_true, y_pred, normalize=False)
1.0
In the multilabel case with binary label indicators, where the first label
set [0,1] has an error::
>>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
>>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)), normalize=False)
1.0
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_feature_selection_plot_rfe_with_cross_validation.py`
for an example of zero one loss usage to perform recursive feature
elimination with cross-validation.
.. _brier_score_loss:
Brier score loss
----------------
The :func:`brier_score_loss` function computes the
`Brier score <https://en.wikipedia.org/wiki/Brier_score>`_
for binary classes [Brier1950]_. Quoting Wikipedia:
"The Brier score is a proper score function that measures the accuracy of
probabilistic predictions. It is applicable to tasks in which predictions
must assign probabilities to a set of mutually exclusive discrete outcomes."
This function returns the mean squared error between the actual outcome
:math:`y \in \{0,1\}` and the predicted probability estimate
:math:`p = \operatorname{Pr}(y = 1)` (:term:`predict_proba`):
.. math::
BS = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}} - 1}(y_i - p_i)^2
The Brier score loss also ranges between 0 and 1: the lower the value (i.e. the
smaller the mean squared difference), the more accurate the prediction.
Here is a small example of usage of this function::
>>> import numpy as np
>>> from sklearn.metrics import brier_score_loss
>>> y_true = np.array([0, 1, 1, 0])
>>> y_true_categorical = np.array(["spam", "ham", "ham", "spam"])
>>> y_prob = np.array([0.1, 0.9, 0.8, 0.4])
>>> y_pred = np.array([0, 1, 1, 0])
>>> brier_score_loss(y_true, y_prob)
0.055
>>> brier_score_loss(y_true, 1 - y_prob, pos_label=0)
0.055
>>> brier_score_loss(y_true_categorical, y_prob, pos_label="ham")
0.055
>>> brier_score_loss(y_true, y_prob > 0.5)
0.0
The Brier score can be used to assess how well a classifier is calibrated.
However, a lower Brier score loss does not always mean a better calibration.
This is because, by analogy with the bias-variance decomposition of the mean
squared error, the Brier score loss can be decomposed as the sum of calibration
loss and refinement loss [Bella2012]_. Calibration loss is defined as the mean
squared deviation from empirical probabilities derived from the slope of ROC
segments. Refinement loss can be defined as the expected optimal loss as
measured by the area under the optimal cost curve. Refinement loss can change
independently from calibration loss, thus a lower Brier score loss does not
necessarily mean a better calibrated model. "Only when refinement loss remains
the same does a lower Brier score loss always mean better calibration"
[Bella2012]_, [Flach2008]_.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py`
for an example of Brier score loss usage to perform probability
calibration of classifiers.
.. rubric:: References
.. [Brier1950] G. Brier, `Verification of forecasts expressed in terms of probability
<ftp://ftp.library.noaa.gov/docs.lib/htdocs/rescue/mwr/078/mwr-078-01-0001.pdf>`_,
Monthly weather review 78.1 (1950)
.. [Bella2012] Bella, Ferri, Hernández-Orallo, and Ramírez-Quintana
`"Calibration of Machine Learning Models"
<http://dmip.webs.upv.es/papers/BFHRHandbook2010.pdf>`_
in Khosrow-Pour, M. "Machine learning: concepts, methodologies, tools
and applications." Hershey, PA: Information Science Reference (2012).
.. [Flach2008] Flach, Peter, and Edson Matsubara. `"On classification, ranking,
and probability estimation." <https://drops.dagstuhl.de/opus/volltexte/2008/1382/>`_
Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2008).
.. _class_likelihood_ratios:
Class likelihood ratios
-----------------------
The :func:`class_likelihood_ratios` function computes the `positive and negative
likelihood ratios
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_
:math:`LR_\pm` for binary classes, which can be interpreted as the ratio of
post-test to pre-test odds as explained below. As a consequence, this metric is
invariant w.r.t. the class prevalence (the number of samples in the positive
class divided by the total number of samples) and **can be extrapolated between
populations regardless of any possible class imbalance.**
The :math:`LR_\pm` metrics are therefore very useful in settings where the data
available to learn and evaluate a classifier is a study population with nearly
balanced classes, such as a case-control study, while the target application,
i.e. the general population, has very low prevalence.
The positive likelihood ratio :math:`LR_+` is the probability of a classifier to
correctly predict that a sample belongs to the positive class divided by the
probability of predicting the positive class for a sample belonging to the
negative class:
.. math::
LR_+ = \frac{\text{PR}(P+|T+)}{\text{PR}(P+|T-)}.
The notation here refers to predicted (:math:`P`) or true (:math:`T`) label and
the sign :math:`+` and :math:`-` refer to the positive and negative class,
respectively, e.g. :math:`P+` stands for "predicted positive".
Analogously, the negative likelihood ratio :math:`LR_-` is the probability of a
sample of the positive class being classified as belonging to the negative class
divided by the probability of a sample of the negative class being correctly
classified:
.. math::
LR_- = \frac{\text{PR}(P-|T+)}{\text{PR}(P-|T-)}.
For classifiers above chance, :math:`LR_+` is greater than 1 (**the higher, the
better**), while :math:`LR_-` ranges from 0 to 1 (**the lower, the better**).
Values of :math:`LR_\pm\approx 1` correspond to chance level.
Notice that probabilities differ from counts, for instance
:math:`\operatorname{PR}(P+|T+)` is not equal to the number of true positive
counts ``tp`` (see `the wikipedia page
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_ for
the actual formulas).
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_model_selection_plot_likelihood_ratios.py`
.. dropdown:: Interpretation across varying prevalence
Both class likelihood ratios are interpretable in terms of an odds ratio
(pre-test and post-tests):
.. math::
\text{post-test odds} = \text{Likelihood ratio} \times \text{pre-test odds}.
Odds are in general related to probabilities via
.. math::
\text{odds} = \frac{\text{probability}}{1 - \text{probability}},
or equivalently
.. math::
\text{probability} = \frac{\text{odds}}{1 + \text{odds}}.
On a given population, the pre-test probability is given by the prevalence. By
converting odds to probabilities, the likelihood ratios can be translated into a
probability of truly belonging to either class before and after a classifier
prediction:
.. math::
\text{post-test odds} = \text{Likelihood ratio} \times
\frac{\text{pre-test probability}}{1 - \text{pre-test probability}},
.. math::
\text{post-test probability} = \frac{\text{post-test odds}}{1 + \text{post-test odds}}.
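The conversion above takes only a few lines of arithmetic; the prevalence and
positive likelihood ratio below are hypothetical values chosen for
illustration:

```python
pre_test_probability = 0.10   # hypothetical prevalence of the positive class
positive_lr = 12.0            # hypothetical LR+ of the classifier

pre_test_odds = pre_test_probability / (1 - pre_test_probability)
post_test_odds = positive_lr * pre_test_odds
post_test_probability = post_test_odds / (1 + post_test_odds)
# post_test_probability = 4/7 ~ 0.571: a positive prediction raises the
# probability of truly belonging to the positive class from 10% to ~57%
```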
.. dropdown:: Mathematical divergences
The positive likelihood ratio is undefined when :math:`fp = 0`, which can be
interpreted as the classifier perfectly identifying positive cases. If :math:`fp
= 0` and additionally :math:`tp = 0`, this leads to a zero/zero division. This
happens, for instance, when using a `DummyClassifier` that always predicts the
negative class and therefore the interpretation as a perfect classifier is lost.
The negative likelihood ratio is undefined when :math:`tn = 0`. Such divergence
is invalid, as :math:`LR_- > 1` would indicate an increase in the odds of a
sample belonging to the positive class after being classified as negative, as if
the act of classifying caused the positive condition. This includes the case of
a `DummyClassifier` that always predicts the positive class (i.e. when
:math:`tn=fn=0`).
Both class likelihood ratios are undefined when :math:`tp=fn=0`, which means
that no samples of the positive class were present in the testing set. This can
also happen when cross-validating highly imbalanced data.
In all the previous cases the :func:`class_likelihood_ratios` function raises by
default an appropriate warning message and returns `nan` to avoid pollution when
averaging over cross-validation folds.
For a worked-out demonstration of the :func:`class_likelihood_ratios` function,
see the example below.
.. dropdown:: References
* `Wikipedia entry for Likelihood ratios in diagnostic testing
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_
* Brenner, H., & Gefeller, O. (1997).
Variation of sensitivity, specificity, likelihood ratios and predictive
values with disease prevalence.
Statistics in medicine, 16(9), 981-991.
.. _d2_score_classification:
D² score for classification
---------------------------
The D² score computes the fraction of deviance explained.
It is a generalization of R², where the squared error is generalized and replaced
by a classification deviance of choice :math:`\text{dev}(y, \hat{y})`
(e.g., Log loss). D² is a form of a *skill score*.
It is calculated as
.. math::
D^2(y, \hat{y}) = 1 - \frac{\text{dev}(y, \hat{y})}{\text{dev}(y, y_{\text{null}})} \,.
Where :math:`y_{\text{null}}` is the optimal prediction of an intercept-only model
(e.g., the per-class proportion of `y_true` in the case of the Log loss).
Like R², the best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). A constant model that always predicts
:math:`y_{\text{null}}`, disregarding the input features, would get a D² score
of 0.0.
.. dropdown:: D2 log loss score
The :func:`d2_log_loss_score` function implements the special case
of D² with the log loss, see :ref:`log_loss`, i.e.:
.. math::
\text{dev}(y, \hat{y}) = \text{log_loss}(y, \hat{y}).
Here are some usage examples of the :func:`d2_log_loss_score` function::
>>> from sklearn.metrics import d2_log_loss_score
>>> y_true = [1, 1, 2, 3]
>>> y_pred = [
... [0.5, 0.25, 0.25],
... [0.5, 0.25, 0.25],
... [0.5, 0.25, 0.25],
... [0.5, 0.25, 0.25],
... ]
>>> d2_log_loss_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [
... [0.98, 0.01, 0.01],
... [0.01, 0.98, 0.01],
... [0.01, 0.01, 0.98],
... ]
>>> d2_log_loss_score(y_true, y_pred)
0.981...
>>> y_true = [1, 2, 3]
>>> y_pred = [
... [0.1, 0.6, 0.3],
... [0.1, 0.6, 0.3],
... [0.4, 0.5, 0.1],
... ]
>>> d2_log_loss_score(y_true, y_pred)
-0.552...
.. _multilabel_ranking_metrics:
Multilabel ranking metrics
==========================
.. currentmodule:: sklearn.metrics
In multilabel learning, each sample can have any number of ground truth labels
associated with it. The goal is to give high scores and better rank to
the ground truth labels.
.. _coverage_error:
Coverage error
--------------
The :func:`coverage_error` function computes the average number of labels that
have to be included in the final prediction such that all true labels
are predicted. This is useful if you want to know how many top-scored labels
you have to predict on average without missing any true one. The best value
of this metric is thus the average number of true labels.
.. note::
Our implementation's score is 1 greater than the one given in Tsoumakas
et al., 2010. This extends it to handle the degenerate case in which an
instance has 0 true labels.
Formally, given a binary indicator matrix of the ground truth labels
:math:`y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}` and the
score associated with each label
:math:`\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}`,
the coverage is defined as
.. math::
coverage(y, \hat{f}) = \frac{1}{n_{\text{samples}}}
\sum_{i=0}^{n_{\text{samples}} - 1} \max_{j:y_{ij} = 1} \text{rank}_{ij}
with :math:`\text{rank}_{ij} = \left|\left\{k: \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\right|`.
Given the rank definition, ties in ``y_score`` are broken by giving the
maximal rank that would have been assigned to all tied values.
Here is a small example of usage of this function::
>>> import numpy as np
>>> from sklearn.metrics import coverage_error
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> coverage_error(y_true, y_score)
2.5
.. _label_ranking_average_precision:
Label ranking average precision
-------------------------------
The :func:`label_ranking_average_precision_score` function
implements label ranking average precision (LRAP). This metric is linked to
the :func:`average_precision_score` function, but is based on the notion of
label ranking instead of precision and recall.
Label ranking average precision (LRAP) averages over the samples the answer to
the following question: for each ground truth label, what fraction of
higher-ranked labels were true labels? This performance measure will be higher
if you are able to give better rank to the labels associated with each sample.
The obtained score is always strictly greater than 0, and the best value is 1.
If there is exactly one relevant label per sample, label ranking average
precision is equivalent to the `mean
reciprocal rank <https://en.wikipedia.org/wiki/Mean_reciprocal_rank>`_.
Formally, given a binary indicator matrix of the ground truth labels
:math:`y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}`
and the score associated with each label
:math:`\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}`,
the average precision is defined as
.. math::
LRAP(y, \hat{f}) = \frac{1}{n_{\text{samples}}}
\sum_{i=0}^{n_{\text{samples}} - 1} \frac{1}{||y_i||_0}
\sum_{j:y_{ij} = 1} \frac{|\mathcal{L}_{ij}|}{\text{rank}_{ij}}
where
:math:`\mathcal{L}_{ij} = \left\{k: y_{ik} = 1, \hat{f}_{ik} \geq \hat{f}_{ij} \right\}`,
:math:`\text{rank}_{ij} = \left|\left\{k: \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\right|`,
:math:`|\cdot|` computes the cardinality of the set (i.e., the number of
elements in the set), and :math:`||\cdot||_0` is the :math:`\ell_0` "norm"
(which computes the number of nonzero elements in a vector).
Here is a small example of usage of this function::
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_average_precision_score
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> label_ranking_average_precision_score(y_true, y_score)
0.416...
.. _label_ranking_loss:
Ranking loss
------------
The :func:`label_ranking_loss` function computes the ranking loss which
averages over the samples the number of label pairs that are incorrectly
ordered, i.e. true labels have a lower score than false labels, weighted by
the inverse of the number of ordered pairs of false and true labels.
The lowest achievable ranking loss is zero.
Formally, given a binary indicator matrix of the ground truth labels
:math:`y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}` and the
score associated with each label
:math:`\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}`,
the ranking loss is defined as
.. math::
ranking\_loss(y, \hat{f}) = \frac{1}{n_{\text{samples}}}
\sum_{i=0}^{n_{\text{samples}} - 1} \frac{1}{||y_i||_0(n_\text{labels} - ||y_i||_0)}
\left|\left\{(k, l): \hat{f}_{ik} \leq \hat{f}_{il}, y_{ik} = 1, y_{il} = 0 \right\}\right|
where :math:`|\cdot|` computes the cardinality of the set (i.e., the number of
elements in the set) and :math:`||\cdot||_0` is the :math:`\ell_0` "norm"
(which computes the number of nonzero elements in a vector).
Here is a small example of usage of this function::
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_loss
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> label_ranking_loss(y_true, y_score)
0.75...
>>> # With the following prediction, we have perfect and minimal loss
>>> y_score = np.array([[1.0, 0.1, 0.2], [0.1, 0.2, 0.9]])
>>> label_ranking_loss(y_true, y_score)
0.0
.. dropdown:: References
* Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In
Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
.. _ndcg:
Normalized Discounted Cumulative Gain
-------------------------------------
Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain
(NDCG) are ranking metrics implemented in :func:`~sklearn.metrics.dcg_score`
and :func:`~sklearn.metrics.ndcg_score`; they compare a predicted order to
ground-truth scores, such as the relevance of answers to a query.
From the Wikipedia page for Discounted Cumulative Gain:
"Discounted cumulative gain (DCG) is a measure of ranking quality. In
information retrieval, it is often used to measure effectiveness of web search
engine algorithms or related applications. Using a graded relevance scale of
documents in a search-engine result set, DCG measures the usefulness, or gain,
of a document based on its position in the result list. The gain is accumulated
from the top of the result list to the bottom, with the gain of each result
discounted at lower ranks"
DCG orders the true targets (e.g. relevance of query answers) in the predicted
order, then multiplies them by a logarithmic decay and sums the result. The sum
can be truncated after the first :math:`K` results, in which case we call it
DCG@K.
NDCG, or NDCG@K, is DCG divided by the DCG obtained by a perfect prediction, so
that it is always between 0 and 1. Usually, NDCG is preferred to DCG.
Compared with the ranking loss, NDCG can take into account relevance scores,
rather than a ground-truth ranking. So if the ground-truth consists only of an
ordering, the ranking loss should be preferred; if the ground-truth consists of
actual usefulness scores (e.g. 0 for irrelevant, 1 for relevant, 2 for very
relevant), NDCG can be used.
For one sample, given the vector of continuous ground-truth values for each
target :math:`y \in \mathbb{R}^{M}`, where :math:`M` is the number of outputs, and
the prediction :math:`\hat{y}`, which induces the ranking function :math:`f`, the
DCG score is
.. math::
\sum_{r=1}^{\min(K, M)}\frac{y_{f(r)}}{\log(1 + r)}
and the NDCG score is the DCG score divided by the DCG score obtained for
:math:`y`.
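Here is a small sketch of usage of :func:`~sklearn.metrics.ndcg_score` (the relevance values are made up): the most relevant document receives the lowest predicted score, so NDCG is well below 1, and the ``k`` parameter truncates the evaluation to the top of the ranking:

```python
import numpy as np
from sklearn.metrics import ndcg_score

# Graded ground-truth relevance for a single query, and predicted scores.
y_true = np.asarray([[10, 0, 0, 1, 5]])
y_score = np.asarray([[0.1, 0.2, 0.3, 4, 70]])

# NDCG normalizes DCG by the DCG of a perfect ranking, so it lies in [0, 1].
# The most relevant document (relevance 10) gets the lowest predicted score,
# hence NDCG is well below 1.
score = ndcg_score(y_true, y_score)
print(score)  # about 0.69

# NDCG@K: only the first k results of the ranking are evaluated.
score_at_3 = ndcg_score(y_true, y_score, k=3)
print(score_at_3)
```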
.. dropdown:: References
* `Wikipedia entry for Discounted Cumulative Gain
<https://en.wikipedia.org/wiki/Discounted_cumulative_gain>`_
* Jarvelin, K., & Kekalainen, J. (2002).
Cumulated gain-based evaluation of IR techniques. ACM Transactions on
Information Systems (TOIS), 20(4), 422-446.
* Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May).
A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th
Annual Conference on Learning Theory (COLT 2013)
* McSherry, F., & Najork, M. (2008, March). Computing information retrieval
performance measures efficiently in the presence of tied scores. In
European conference on information retrieval (pp. 414-421). Springer,
Berlin, Heidelberg.
.. _regression_metrics:
Regression metrics
===================
.. currentmodule:: sklearn.metrics
The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure regression performance. Some of those have been enhanced
to handle the multioutput case: :func:`mean_squared_error`,
:func:`mean_absolute_error`, :func:`r2_score`,
:func:`explained_variance_score`, :func:`mean_pinball_loss`, :func:`d2_pinball_score`
and :func:`d2_absolute_error_score`.
These functions have a ``multioutput`` keyword argument which specifies the
way the scores or losses for each individual target should be averaged. The
default is ``'uniform_average'``, which specifies a uniformly weighted mean
over outputs. If an ``ndarray`` of shape ``(n_outputs,)`` is passed, then its
entries are interpreted as weights and an according weighted average is
returned. If ``multioutput`` is ``'raw_values'``, then all unaltered
individual scores or losses will be returned in an array of shape
``(n_outputs,)``.
The :func:`r2_score` and :func:`explained_variance_score` accept an additional
value ``'variance_weighted'`` for the ``multioutput`` parameter. This option
leads to a weighting of each individual score by the variance of the
corresponding target variable. This setting quantifies the globally captured
unscaled variance. If the target variables are of different scale, then this
score puts more importance on explaining the higher variance variables.
.. _r2_score:
R² score, the coefficient of determination
-------------------------------------------
The :func:`r2_score` function computes the `coefficient of
determination <https://en.wikipedia.org/wiki/Coefficient_of_determination>`_,
usually denoted as :math:`R^2`.
It represents the proportion of variance (of y) that has been explained by the
independent variables in the model. It provides an indication of goodness of
fit and therefore a measure of how well unseen samples are likely to be
predicted by the model, through the proportion of explained variance.
As such variance is dataset dependent, :math:`R^2` may not be meaningfully comparable
across different datasets. The best possible score is 1.0 and it can be negative
(because the model can be arbitrarily worse). A constant model that always
predicts the expected (average) value of y, disregarding the input features,
would get an :math:`R^2` score of 0.0.
Note: when the prediction residuals have zero mean, the :math:`R^2` score and
the :ref:`explained_variance_score` are identical.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value for total :math:`n` samples,
the estimated :math:`R^2` is defined as:
.. math::
R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
where :math:`\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i` and :math:`\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \epsilon_i^2`.
Note that :func:`r2_score` calculates unadjusted :math:`R^2` without correcting for
bias in sample variance of y.
In the particular case where the true target is constant, the :math:`R^2` score is
not finite: it is either ``NaN`` (perfect predictions) or ``-Inf`` (imperfect
predictions). Such non-finite scores may prevent model optimization procedures,
such as grid-search cross-validation, from being performed correctly. For this
reason the default behaviour of :func:`r2_score` is to replace them with 1.0
(perfect predictions) or 0.0 (imperfect predictions). If ``force_finite``
is set to ``False``, this score falls back on the original :math:`R^2` definition.
Here is a small example of usage of the :func:`r2_score` function::
>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='variance_weighted')
0.938...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='uniform_average')
0.936...
>>> r2_score(y_true, y_pred, multioutput='raw_values')
array([0.965..., 0.908...])
>>> r2_score(y_true, y_pred, multioutput=[0.3, 0.7])
0.925...
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2]
>>> r2_score(y_true, y_pred)
1.0
>>> r2_score(y_true, y_pred, force_finite=False)
nan
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2 + 1e-8]
>>> r2_score(y_true, y_pred)
0.0
>>> r2_score(y_true, y_pred, force_finite=False)
-inf
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_and_elasticnet.py`
for an example of R² score usage to
evaluate Lasso and Elastic Net on sparse signals.
.. _mean_absolute_error:
Mean absolute error
-------------------
The :func:`mean_absolute_error` function computes `mean absolute
error <https://en.wikipedia.org/wiki/Mean_absolute_error>`_, a risk
metric corresponding to the expected value of the absolute error loss or
:math:`l1`-norm loss.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean absolute error
(MAE) estimated over :math:`n_{\text{samples}}` is defined as
.. math::
\text{MAE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \left| y_i - \hat{y}_i \right|.
Here is a small example of usage of the :func:`mean_absolute_error` function::
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85...
.. _mean_squared_error:
Mean squared error
-------------------
The :func:`mean_squared_error` function computes `mean squared
error <https://en.wikipedia.org/wiki/Mean_squared_error>`_, a risk
metric corresponding to the expected value of the squared (quadratic) error or
loss.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean squared error
(MSE) estimated over :math:`n_{\text{samples}}` is defined as
.. math::
\text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2.
Here is a small example of usage of the :func:`mean_squared_error`
function::
>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_squared_error(y_true, y_pred)
0.7083...
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regression.py`
for an example of mean squared error usage to evaluate gradient boosting regression.
Taking the square root of the MSE, called the root mean squared error (RMSE), is another
common metric that provides a measure in the same units as the target variable. RMSE is
available through the :func:`root_mean_squared_error` function.
.. _mean_squared_log_error:
Mean squared logarithmic error
------------------------------
The :func:`mean_squared_log_error` function computes a risk metric
corresponding to the expected value of the squared logarithmic (quadratic)
error or loss.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean squared
logarithmic error (MSLE) estimated over :math:`n_{\text{samples}}` is
defined as
.. math::
\text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i) )^2.
Where :math:`\log_e (x)` means the natural logarithm of :math:`x`. This metric
is best to use when targets have exponential growth, such as population
counts or average sales of a commodity over a span of years. Note that this
metric penalizes an under-predicted estimate more than an over-predicted
estimate.
Here is a small example of usage of the :func:`mean_squared_log_error`
function::
>>> from sklearn.metrics import mean_squared_log_error
>>> y_true = [3, 5, 2.5, 7]
>>> y_pred = [2.5, 5, 4, 8]
>>> mean_squared_log_error(y_true, y_pred)
0.039...
>>> y_true = [[0.5, 1], [1, 2], [7, 6]]
>>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
>>> mean_squared_log_error(y_true, y_pred)
0.044...
The root mean squared logarithmic error (RMSLE) is available through the
:func:`root_mean_squared_log_error` function.
.. _mean_absolute_percentage_error:
Mean absolute percentage error
------------------------------
The :func:`mean_absolute_percentage_error` (MAPE), also known as mean absolute
percentage deviation (MAPD), is an evaluation metric for regression problems.
The idea of this metric is to be sensitive to relative errors. It is for example
not changed by a global scaling of the target variable.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the mean absolute percentage
error (MAPE) estimated over :math:`n_{\text{samples}}` is defined as
.. math::
\text{MAPE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \frac{\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}
where :math:`\epsilon` is an arbitrary small yet strictly positive number to
avoid undefined results when y is zero.
The :func:`mean_absolute_percentage_error` function supports multioutput.
Here is a small example of usage of the :func:`mean_absolute_percentage_error`
function::
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [1, 10, 1e6]
>>> y_pred = [0.9, 15, 1.2e6]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.2666...
In the above example, if we had used `mean_absolute_error`, it would have ignored
the small magnitude values and only reflected the error in the prediction of the
highest magnitude value. MAPE does not have that problem because it computes the
relative percentage error with respect to the actual output.
.. note::
The MAPE formula here does not represent the common "percentage" definition: the
percentage in the range [0, 100] is converted to a relative value in the range [0,
1] by dividing by 100. Thus, an error of 200% corresponds to a relative error of 2.
The motivation here is to have a range of values that is more consistent with other
error metrics in scikit-learn, such as `accuracy_score`.
To obtain the mean absolute percentage error as per the Wikipedia formula,
multiply the `mean_absolute_percentage_error` computed here by 100.
.. dropdown:: References
* `Wikipedia entry for Mean Absolute Percentage Error
<https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`_
.. _median_absolute_error:
Median absolute error
---------------------
The :func:`median_absolute_error` is particularly interesting because it is
robust to outliers. The loss is calculated by taking the median of all absolute
differences between the target and the prediction.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the median absolute error
(MedAE) estimated over :math:`n_{\text{samples}}` is defined as
.. math::
\text{MedAE}(y, \hat{y}) = \text{median}(\mid y_1 - \hat{y}_1 \mid, \ldots, \mid y_n - \hat{y}_n \mid).
The :func:`median_absolute_error` does not support multioutput.
Here is a small example of usage of the :func:`median_absolute_error`
function::
>>> from sklearn.metrics import median_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> median_absolute_error(y_true, y_pred)
0.5
.. _max_error:
Max error
-------------------
The :func:`max_error` function computes the maximum `residual error
<https://en.wikipedia.org/wiki/Errors_and_residuals>`_ , a metric
that captures the worst case error between the predicted value and
the true value. In a perfectly fitted single output regression
model, ``max_error`` would be ``0`` on the training set. Although this
would be highly unlikely in the real world, the metric shows the
extent of the error the model had when it was fitted.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the max error is
defined as
.. math::
\text{Max Error}(y, \hat{y}) = \max_i (| y_i - \hat{y}_i |)
Here is a small example of usage of the :func:`max_error` function::
>>> from sklearn.metrics import max_error
>>> y_true = [3, 2, 7, 1]
>>> y_pred = [9, 2, 7, 1]
>>> max_error(y_true, y_pred)
6
The :func:`max_error` does not support multioutput.
.. _explained_variance_score:
Explained variance score
-------------------------
The :func:`explained_variance_score` computes the `explained variance
regression score <https://en.wikipedia.org/wiki/Explained_variation>`_.
If :math:`\hat{y}` is the estimated target output, :math:`y` the corresponding
(correct) target output, and :math:`Var` is `Variance
<https://en.wikipedia.org/wiki/Variance>`_, the square of the standard deviation,
then the explained variance is estimated as follows:
.. math::
explained\_{}variance(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{y\}}
The best possible score is 1.0; lower values are worse.
.. topic:: Link to :ref:`r2_score`
The difference between the explained variance score and the :ref:`r2_score`
is that the explained variance score does not account for
systematic offset in the prediction. For this reason, the
:ref:`r2_score` should be preferred in general.
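To make this difference concrete, here is a small sketch (with made-up values) where every prediction is off by the same constant: the explained variance score ignores the systematic offset, while :func:`r2_score` penalizes it:

```python
from sklearn.metrics import explained_variance_score, r2_score

y_true = [1, 2, 3]
y_pred = [2, 3, 4]  # every prediction is offset by +1

# The residuals are all equal to -1, so their variance is zero and the
# explained variance score is perfect despite the systematic offset.
evs = explained_variance_score(y_true, y_pred)
print(evs)  # 1.0

# R^2 accounts for the offset and is penalized accordingly.
r2 = r2_score(y_true, y_pred)
print(r2)  # -0.5
```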
In the particular case where the true target is constant, the Explained
Variance score is not finite: it is either ``NaN`` (perfect predictions) or
``-Inf`` (imperfect predictions). Such non-finite scores may prevent model
optimization procedures, such as grid-search cross-validation, from being
performed correctly. For this reason the default behaviour of
:func:`explained_variance_score` is to replace them with 1.0 (perfect
predictions) or 0.0 (imperfect predictions). You can set the ``force_finite``
parameter to ``False`` to prevent this fix from happening and fall back on the
original Explained Variance score.
Here is a small example of usage of the :func:`explained_variance_score`
function::
>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred)
0.957...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> explained_variance_score(y_true, y_pred, multioutput='raw_values')
array([0.967..., 1. ])
>>> explained_variance_score(y_true, y_pred, multioutput=[0.3, 0.7])
0.990...
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2]
>>> explained_variance_score(y_true, y_pred)
1.0
>>> explained_variance_score(y_true, y_pred, force_finite=False)
nan
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2 + 1e-8]
>>> explained_variance_score(y_true, y_pred)
0.0
>>> explained_variance_score(y_true, y_pred, force_finite=False)
-inf
.. _mean_tweedie_deviance:
Mean Poisson, Gamma, and Tweedie deviances
------------------------------------------
The :func:`mean_tweedie_deviance` function computes the `mean Tweedie
deviance error
<https://en.wikipedia.org/wiki/Tweedie_distribution#The_Tweedie_deviance>`_
with a ``power`` parameter (:math:`p`). This is a metric that elicits
predicted expectation values of regression targets.
The following special cases exist:
- when ``power=0`` it is equivalent to :func:`mean_squared_error`.
- when ``power=1`` it is equivalent to :func:`mean_poisson_deviance`.
- when ``power=2`` it is equivalent to :func:`mean_gamma_deviance`.
If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean Tweedie
deviance error (D) for power :math:`p`, estimated over :math:`n_{\text{samples}}`
is defined as
.. math::
\text{D}(y, \hat{y}) = \frac{1}{n_\text{samples}}
\sum_{i=0}^{n_\text{samples} - 1}
\begin{cases}
(y_i-\hat{y}_i)^2, & \text{for }p=0\text{ (Normal)}\\
2(y_i \log(y_i/\hat{y}_i) + \hat{y}_i - y_i), & \text{for }p=1\text{ (Poisson)}\\
2(\log(\hat{y}_i/y_i) + y_i/\hat{y}_i - 1), & \text{for }p=2\text{ (Gamma)}\\
2\left(\frac{\max(y_i,0)^{2-p}}{(1-p)(2-p)}-
\frac{y_i\,\hat{y}_i^{1-p}}{1-p}+\frac{\hat{y}_i^{2-p}}{2-p}\right),
& \text{otherwise}
\end{cases}
Tweedie deviance is a homogeneous function of degree ``2-power``.
Thus, for the Gamma distribution (``power=2``), simultaneously scaling
``y_true`` and ``y_pred`` has no effect on the deviance. For the Poisson
distribution (``power=1``) the deviance scales linearly, and for the Normal
distribution (``power=0``), quadratically. In general, the higher the
``power``, the less weight is given to extreme deviations between true
and predicted targets.
For instance, let's compare the two predictions 1.5 and 150 that are both
50% larger than their corresponding true value.
The mean squared error (``power=0``) is very sensitive to the
prediction difference of the second point::
>>> from sklearn.metrics import mean_tweedie_deviance
>>> mean_tweedie_deviance([1.0], [1.5], power=0)
0.25
>>> mean_tweedie_deviance([100.], [150.], power=0)
2500.0
If we increase ``power`` to 1::
>>> mean_tweedie_deviance([1.0], [1.5], power=1)
0.18...
>>> mean_tweedie_deviance([100.], [150.], power=1)
18.9...
the difference in errors decreases. Finally, by setting ``power=2``::
>>> mean_tweedie_deviance([1.0], [1.5], power=2)
0.14...
>>> mean_tweedie_deviance([100.], [150.], power=2)
0.14...
we would get identical errors. The deviance when ``power=2`` is thus only
sensitive to relative errors.
.. _pinball_loss:
Pinball loss
------------
The :func:`mean_pinball_loss` function is used to evaluate the predictive
performance of `quantile regression
<https://en.wikipedia.org/wiki/Quantile_regression>`_ models.
.. math::
\text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)
The value of pinball loss is equivalent to half of :func:`mean_absolute_error` when the quantile
parameter ``alpha`` is set to 0.5.
Here is a small example of usage of the :func:`mean_pinball_loss` function::
>>> from sklearn.metrics import mean_pinball_loss
>>> y_true = [1, 2, 3]
>>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)
0.03...
>>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)
0.3...
>>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)
0.3...
>>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.9)
0.03...
>>> mean_pinball_loss(y_true, y_true, alpha=0.1)
0.0
>>> mean_pinball_loss(y_true, y_true, alpha=0.9)
0.0
It is possible to build a scorer object with a specific choice of ``alpha``::
>>> from sklearn.metrics import make_scorer
>>> mean_pinball_loss_95p = make_scorer(mean_pinball_loss, alpha=0.95)
Such a scorer can be used to evaluate the generalization performance of a
quantile regressor via cross-validation:
>>> from sklearn.datasets import make_regression
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.ensemble import GradientBoostingRegressor
>>>
>>> X, y = make_regression(n_samples=100, random_state=0)
>>> estimator = GradientBoostingRegressor(
... loss="quantile",
... alpha=0.95,
... random_state=0,
... )
>>> cross_val_score(estimator, X, y, cv=5, scoring=mean_pinball_loss_95p)
array([13.6..., 9.7..., 23.3..., 9.5..., 10.4...])
It is also possible to build scorer objects for hyper-parameter tuning. The
sign of the loss must be switched to ensure that greater means better as
explained in the example linked below.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_quantile.py`
for an example of using the pinball loss to evaluate and tune the
hyper-parameters of quantile regression models on data with non-symmetric
noise and outliers.
.. _d2_score:
D² score
--------
The D² score computes the fraction of deviance explained.
It is a generalization of R², where the squared error is replaced by a deviance
of choice :math:`\text{dev}(y, \hat{y})` (e.g., Tweedie, pinball or mean
absolute error). D² is a form of a *skill score*.
It is calculated as
.. math::
D^2(y, \hat{y}) = 1 - \frac{\text{dev}(y, \hat{y})}{\text{dev}(y, y_{\text{null}})} \,.
Where :math:`y_{\text{null}}` is the optimal prediction of an intercept-only model
(e.g., the mean of `y_true` for the Tweedie case, the median for absolute
error and the alpha-quantile for pinball loss).
Like R², the best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). A constant model that always predicts
:math:`y_{\text{null}}`, disregarding the input features, would get a D² score
of 0.0.
.. dropdown:: D² Tweedie score
The :func:`d2_tweedie_score` function implements the special case of D²
where :math:`\text{dev}(y, \hat{y})` is the Tweedie deviance, see :ref:`mean_tweedie_deviance`.
It is also known as D² Tweedie and is related to McFadden's likelihood ratio index.
The argument ``power`` defines the Tweedie power as for
:func:`mean_tweedie_deviance`. Note that for `power=0`,
:func:`d2_tweedie_score` equals :func:`r2_score` (for single targets).
A scorer object with a specific choice of ``power`` can be built by::
>>> from sklearn.metrics import d2_tweedie_score, make_scorer
>>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)
.. dropdown:: D² pinball score
The :func:`d2_pinball_score` function implements the special case
of D² with the pinball loss, see :ref:`pinball_loss`, i.e.:
.. math::
\text{dev}(y, \hat{y}) = \text{pinball}(y, \hat{y}).
The argument ``alpha`` defines the slope of the pinball loss as for
:func:`mean_pinball_loss` (:ref:`pinball_loss`). It determines the
quantile level ``alpha`` for which the pinball loss and also D²
are optimal. Note that for `alpha=0.5` (the default) :func:`d2_pinball_score`
equals :func:`d2_absolute_error_score`.
A scorer object with a specific choice of ``alpha`` can be built by::
>>> from sklearn.metrics import d2_pinball_score, make_scorer
>>> d2_pinball_score_08 = make_scorer(d2_pinball_score, alpha=0.8)
.. dropdown:: D² absolute error score
The :func:`d2_absolute_error_score` function implements the special case of
the :ref:`mean_absolute_error`:
.. math::
\text{dev}(y, \hat{y}) = \text{MAE}(y, \hat{y}).
Here are some usage examples of the :func:`d2_absolute_error_score` function::
>>> from sklearn.metrics import d2_absolute_error_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> d2_absolute_error_score(y_true, y_pred)
0.764...
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> d2_absolute_error_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> d2_absolute_error_score(y_true, y_pred)
0.0
.. _visualization_regression_evaluation:
Visual evaluation of regression models
--------------------------------------
Among methods to assess the quality of regression models, scikit-learn provides
the :class:`~sklearn.metrics.PredictionErrorDisplay` class. It allows
visual inspection of the prediction errors of a model in two different manners.
.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_predict_001.png
:target: ../auto_examples/model_selection/plot_cv_predict.html
:scale: 75
:align: center
The plot on the left shows the actual values vs predicted values. For a
noise-free regression task aiming to predict the (conditional) expectation of
`y`, a perfect regression model would display data points on the diagonal
defined by predicted equal to actual values. The further away from this optimal
line, the larger the error of the model. In a more realistic setting with
irreducible noise, that is, when not all the variations of `y` can be explained
by features in `X`, then the best model would lead to a cloud of points densely
arranged around the diagonal.
Note that the above only holds when the predicted value is the expected value
of `y` given `X`. This is typically the case for regression models that
minimize the mean squared error objective function or more generally the
:ref:`mean Tweedie deviance <mean_tweedie_deviance>` for any value of its
"power" parameter.
When plotting the predictions of an estimator that predicts a quantile
of `y` given `X`, e.g. :class:`~sklearn.linear_model.QuantileRegressor`
or any other model minimizing the :ref:`pinball loss <pinball_loss>`, a
fraction of the points are either expected to lie above or below the diagonal
depending on the estimated quantile level.
All in all, while intuitive to read, this plot does not really inform us on
what to do to obtain a better model.
The right-hand side plot shows the residuals (i.e. the difference between the
actual and the predicted values) vs. the predicted values.
This plot makes it easier to visualize whether the residuals follow a
`homoscedastic or heteroscedastic
<https://en.wikipedia.org/wiki/Homoscedasticity_and_heteroscedasticity>`_
distribution.
In particular, if the true distribution of `y|X` is Poisson or Gamma
distributed, it is expected that the variance of the residuals of the optimal
model would grow with the predicted value of `E[y|X]` (either linearly for
Poisson or quadratically for Gamma).
When fitting a linear least squares regression model (see
:class:`~sklearn.linear_model.LinearRegression` and
:class:`~sklearn.linear_model.Ridge`), we can use this plot to check
if some of the `model assumptions
<https://en.wikipedia.org/wiki/Ordinary_least_squares#Assumptions>`_
are met, in particular that the residuals should be uncorrelated, their
expected value should be null and that their variance should be constant
(homoscedasticity).
If this is not the case, and in particular if the residuals plot shows a
banana-shaped structure, this is a hint that the model is likely mis-specified
and that non-linear feature engineering or switching to a non-linear regression
model might be useful.
Refer to the example below to see a model evaluation that makes use of this
display.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_compose_plot_transformed_target.py` for
an example on how to use :class:`~sklearn.metrics.PredictionErrorDisplay`
to visualize the prediction quality improvement of a regression model
obtained by transforming the target before learning.
.. _clustering_metrics:
Clustering metrics
==================
.. currentmodule:: sklearn.metrics
The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure clustering performance. For more information see the
:ref:`clustering_evaluation` section for instance clustering, and
:ref:`biclustering_evaluation` for biclustering.
.. _dummy_estimators:
Dummy estimators
=================
.. currentmodule:: sklearn.dummy
When doing supervised learning, a simple sanity check consists of comparing
one's estimator against simple rules of thumb. :class:`DummyClassifier`
implements several such simple strategies for classification:
- ``stratified`` generates random predictions by respecting the training
set class distribution.
- ``most_frequent`` always predicts the most frequent label in the training set.
- ``prior`` always predicts the class that maximizes the class prior
(like ``most_frequent``) and ``predict_proba`` returns the class prior.
- ``uniform`` generates predictions uniformly at random.
- ``constant`` always predicts a constant label that is provided by the user.
A major motivation of this method is F1-scoring, when the positive class
is in the minority.
Note that with all these strategies, the ``predict`` method completely ignores
the input data!
To illustrate :class:`DummyClassifier`, first let's create an imbalanced
dataset::
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_iris(return_X_y=True)
>>> y[y != 1] = -1
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
Next, let's compare the accuracy of ``SVC`` and ``most_frequent``::
>>> from sklearn.dummy import DummyClassifier
>>> from sklearn.svm import SVC
>>> clf = SVC(kernel='linear', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.63...
>>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
>>> clf.fit(X_train, y_train)
DummyClassifier(random_state=0, strategy='most_frequent')
>>> clf.score(X_test, y_test)
0.57...
We see that ``SVC`` doesn't do much better than a dummy classifier. Now, let's
change the kernel::
>>> clf = SVC(kernel='rbf', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.94...
We see that the accuracy improved substantially, to about 94%. A
cross-validation strategy is recommended for a better estimate of the accuracy,
if it is not too CPU costly. For more information see the :ref:`cross_validation`
section. Moreover, if you want to optimize over the parameter space, it is highly
recommended to use an appropriate methodology; see the :ref:`grid_search`
section for details.
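The cross-validated estimate recommended above can be sketched as follows,
repeating the binarized iris setup from the example (the exact scores depend on
the fold splits):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Same binarized iris setup as above: class 1 vs. the rest.
X, y = load_iris(return_X_y=True)
y[y != 1] = -1

# 5-fold cross-validated accuracy of the RBF-kernel SVC; averaging over
# folds gives a less optimistic estimate than a single train/test split.
scores = cross_val_score(SVC(kernel="rbf", C=1), X, y, cv=5)
print(scores.mean())
```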
More generally, when the accuracy of a classifier is too close to random, it
probably means that something went wrong: features are not helpful, a
hyperparameter is not correctly tuned, the classifier is suffering from class
imbalance, etc.
:class:`DummyRegressor` also implements four simple rules of thumb for regression:
- ``mean`` always predicts the mean of the training targets.
- ``median`` always predicts the median of the training targets.
- ``quantile`` always predicts a user-provided quantile of the training targets.
- ``constant`` always predicts a constant value that is provided by the user.
In all these strategies, the ``predict`` method completely ignores
the input data. | scikit-learn | currentmodule sklearn model evaluation Metrics and scoring quantifying the quality of predictions which scoring function Which scoring function should I use Before we take a closer look into the details of the many scores and term evaluation metrics we want to give some guidance inspired by statistical decision theory on the choice of scoring functions for supervised learning see Gneiting2009 Which scoring function should I use Which scoring function is a good one for my task In a nutshell if the scoring function is given e g in a kaggle competition or in a business context use that one If you are free to choose it starts by considering the ultimate goal and application of the prediction It is useful to distinguish two steps Predicting Decision making Predicting Usually the response variable math Y is a random variable in the sense that there is no deterministic function math Y g X of the features math X Instead there is a probability distribution math F of math Y One can aim to predict the whole distribution known as probabilistic prediction or more the focus of scikit learn issue a point prediction or point forecast by choosing a property or functional of that distribution math F Typical examples are the mean expected value the median or a quantile of the response variable math Y conditionally on math X Once that is settled use a strictly consistent scoring function for that target functional see Gneiting2009 This means using a scoring function that is aligned with measuring the distance between predictions y pred and the true target functional using observations of math Y i e y true For classification strictly proper scoring rules see Wikipedia entry for Scoring rule https en wikipedia org wiki Scoring rule and Gneiting2007 coincide with strictly consistent scoring functions The table further below provides examples One could say that consistent scoring functions act as truth serum in that they guarantee that truth telling is an 
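For illustration, here is a minimal sketch of :class:`DummyRegressor` with the
``mean`` strategy on a small toy dataset (data chosen only for illustration):
every prediction is the training-target mean, so the R^2 score on the training
data is exactly 0, which is the baseline any real regressor should beat.

```python
import numpy as np
from sklearn.dummy import DummyRegressor

# Toy regression data; the feature values are ignored by the dummy model.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 3.0, 5.0, 10.0])

dummy = DummyRegressor(strategy="mean").fit(X, y)
print(dummy.predict(X))   # always the training mean: [5. 5. 5. 5.]
print(dummy.score(X, y))  # R^2 of predicting the mean is 0.0
```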
optimal strategy in expectation Gneiting2014 Once a strictly consistent scoring function is chosen it is best used for both as loss function for model training and as metric score in model evaluation and model comparison Note that for regressors the prediction is done with term predict while for classifiers it is usually term predict proba Decision Making The most common decisions are done on binary classification tasks where the result of term predict proba is turned into a single outcome e g from the predicted probability of rain a decision is made on how to act whether to take mitigating measures like an umbrella or not For classifiers this is what term predict returns See also ref TunedThresholdClassifierCV There are many scoring functions which measure different aspects of such a decision most of them are covered with or derived from the func metrics confusion matrix List of strictly consistent scoring functions Here we list some of the most relevant statistical functionals and corresponding strictly consistent scoring functions for tasks in practice Note that the list is not complete and that there are more of them For further criteria on how to select a specific one see Fissler2022 functional scoring or loss function response y prediction Classification mean ref Brier score brier score loss sup 1 multi class predict proba mean ref log loss log loss multi class predict proba mode ref zero one loss zero one loss sup 2 multi class predict categorical Regression mean ref squared error mean squared error sup 3 all reals predict all reals mean ref Poisson deviance mean tweedie deviance non negative predict strictly positive mean ref Gamma deviance mean tweedie deviance strictly positive predict strictly positive mean ref Tweedie deviance mean tweedie deviance depends on power predict depends on power median ref absolute error mean absolute error all reals predict all reals quantile ref pinball loss pinball loss all reals predict all reals mode no consistent one 
exists reals sup 1 The Brier score is just a different name for the squared error in case of classification sup 2 The zero one loss is only consistent but not strictly consistent for the mode The zero one loss is equivalent to one minus the accuracy score meaning it gives different score values but the same ranking sup 3 R gives the same ranking as squared error Fictitious Example Let s make the above arguments more tangible Consider a setting in network reliability engineering such as maintaining stable internet or Wi Fi connections As provider of the network you have access to the dataset of log entries of network connections containing network load over time and many interesting features Your goal is to improve the reliability of the connections In fact you promise your customers that on at least 99 of all days there are no connection discontinuities larger than 1 minute Therefore you are interested in a prediction of the 99 quantile of longest connection interruption duration per day in order to know in advance when to add more bandwidth and thereby satisfy your customers So the target functional is the 99 quantile From the table above you choose the pinball loss as scoring function fair enough not much choice given for model training e g HistGradientBoostingRegressor loss quantile quantile 0 99 as well as model evaluation mean pinball loss alpha 0 99 we apologize for the different argument names quantile and alpha be it in grid search for finding hyperparameters or in comparing to other models like QuantileRegressor quantile 0 99 rubric References Gneiting2007 T Gneiting and A E Raftery doi Strictly Proper Scoring Rules Prediction and Estimation 10 1198 016214506000001437 In Journal of the American Statistical Association 102 2007 pp 359 378 link to pdf www stat washington edu people raftery Research PDF Gneiting2007jasa pdf Gneiting2009 T Gneiting arxiv Making and Evaluating Point Forecasts 0912 0902 Journal of the American Statistical Association 106 2009 
746 762 Gneiting2014 T Gneiting and M Katzfuss doi Probabilistic Forecasting 10 1146 annurev st atistics 062713 085831 In Annual Review of Statistics and Its Application 1 1 2014 pp 125 151 Fissler2022 T Fissler C Lorentzen and M Mayer arxiv Model Comparison and Calibration Assessment User Guide for Consistent Scoring Functions in Machine Learning and Actuarial Practice 2202 12780 scoring api overview Scoring API overview There are 3 different APIs for evaluating the quality of a model s predictions Estimator score method Estimators have a score method providing a default evaluation criterion for the problem they are designed to solve Most commonly this is ref accuracy accuracy score for classifiers and the ref coefficient of determination r2 score math R 2 for regressors Details for each estimator can be found in its documentation Scoring parameter Model evaluation tools that use ref cross validation cross validation such as class model selection GridSearchCV func model selection validation curve and class linear model LogisticRegressionCV rely on an internal scoring strategy This can be specified using the scoring parameter of that tool and is discussed in the section ref scoring parameter Metric functions The mod sklearn metrics module implements functions assessing prediction error for specific purposes These metrics are detailed in sections on ref classification metrics ref multilabel ranking metrics ref regression metrics and ref clustering metrics Finally ref dummy estimators are useful to get a baseline value of those metrics for random predictions seealso For pairwise metrics between samples and not estimators or predictions see the ref metrics section scoring parameter The scoring parameter defining model evaluation rules Model selection and evaluation tools that internally use ref cross validation cross validation such as class model selection GridSearchCV func model selection validation curve and class linear model LogisticRegressionCV take a scoring 
parameter that controls what metric they apply to the estimators evaluated They can be specified in several ways None the estimator s default evaluation criterion i e the metric used in the estimator s score method is used ref String name scoring string names common metrics can be passed via a string name ref Callable scoring callable more complex metrics can be passed via a custom metric callable e g function Some tools do also accept multiple metric evaluation See ref multimetric scoring for details scoring string names String name scorers For the most common use cases you can designate a scorer object with the scoring parameter via a string name the table below shows all possible values All scorer objects follow the convention that higher return values are better than lower return values Thus metrics which measure the distance between the model and the data like func metrics mean squared error are available as neg mean squared error which return the negated value of the metric Scoring string name Function Comment Classification accuracy func metrics accuracy score balanced accuracy func metrics balanced accuracy score top k accuracy func metrics top k accuracy score average precision func metrics average precision score neg brier score func metrics brier score loss f1 func metrics f1 score for binary targets f1 micro func metrics f1 score micro averaged f1 macro func metrics f1 score macro averaged f1 weighted func metrics f1 score weighted average f1 samples func metrics f1 score by multilabel sample neg log loss func metrics log loss requires predict proba support precision etc func metrics precision score suffixes apply as with f1 recall etc func metrics recall score suffixes apply as with f1 jaccard etc func metrics jaccard score suffixes apply as with f1 roc auc func metrics roc auc score roc auc ovr func metrics roc auc score roc auc ovo func metrics roc auc score roc auc ovr weighted func metrics roc auc score roc auc ovo weighted func metrics roc auc 
score d2 log loss score func metrics d2 log loss score Clustering adjusted mutual info score func metrics adjusted mutual info score adjusted rand score func metrics adjusted rand score completeness score func metrics completeness score fowlkes mallows score func metrics fowlkes mallows score homogeneity score func metrics homogeneity score mutual info score func metrics mutual info score normalized mutual info score func metrics normalized mutual info score rand score func metrics rand score v measure score func metrics v measure score Regression explained variance func metrics explained variance score neg max error func metrics max error neg mean absolute error func metrics mean absolute error neg mean squared error func metrics mean squared error neg root mean squared error func metrics root mean squared error neg mean squared log error func metrics mean squared log error neg root mean squared log error func metrics root mean squared log error neg median absolute error func metrics median absolute error r2 func metrics r2 score neg mean poisson deviance func metrics mean poisson deviance neg mean gamma deviance func metrics mean gamma deviance neg mean absolute percentage error func metrics mean absolute percentage error d2 absolute error score func metrics d2 absolute error score Usage examples from sklearn import svm datasets from sklearn model selection import cross val score X y datasets load iris return X y True clf svm SVC random state 0 cross val score clf X y cv 5 scoring recall macro array 0 96 0 96 0 96 0 93 1 note If a wrong scoring name is passed an InvalidParameterError is raised You can retrieve the names of all available scorers by calling func sklearn metrics get scorer names currentmodule sklearn metrics scoring callable Callable scorers For more complex use cases and more flexibility you can pass a callable to the scoring parameter This can be done by ref scoring adapt metric ref scoring custom most flexible scoring adapt metric Adapting 
predefined metrics via make scorer The following metric functions are not implemented as named scorers sometimes because they require additional parameters such as func fbeta score They cannot be passed to the scoring parameters instead their callable needs to be passed to func make scorer together with the value of the user settable parameters Function Parameter Example usage Classification func metrics fbeta score beta make scorer fbeta score beta 2 Regression func metrics mean tweedie deviance power make scorer mean tweedie deviance power 1 5 func metrics mean pinball loss alpha make scorer mean pinball loss alpha 0 95 func metrics d2 tweedie score power make scorer d2 tweedie score power 1 5 func metrics d2 pinball score alpha make scorer d2 pinball score alpha 0 95 One typical use case is to wrap an existing metric function from the library with non default values for its parameters such as the beta parameter for the func fbeta score function from sklearn metrics import fbeta score make scorer ftwo scorer make scorer fbeta score beta 2 from sklearn model selection import GridSearchCV from sklearn svm import LinearSVC grid GridSearchCV LinearSVC param grid C 1 10 scoring ftwo scorer cv 5 The module mod sklearn metrics also exposes a set of simple functions measuring a prediction error given ground truth and prediction functions ending with score return a value to maximize the higher the better functions ending with error loss or deviance return a value to minimize the lower the better When converting into a scorer object using func make scorer set the greater is better parameter to False True by default see the parameter description below scoring custom Creating a custom scorer object You can create your own custom scorer object using func make scorer or for the most flexibility from scratch See below for details dropdown Custom scorer objects using make scorer You can build a completely custom scorer object from a simple python function using func make scorer 
which can take several parameters the python function you want to use my custom loss func in the example below whether the python function returns a score greater is better True the default or a loss greater is better False If a loss the output of the python function is negated by the scorer object conforming to the cross validation convention that scorers return higher values for better models for classification metrics only whether the python function you provided requires continuous decision certainties If the scoring function only accepts probability estimates e g func metrics log loss then one needs to set the parameter response method predict proba Some scoring functions do not necessarily require probability estimates but rather non thresholded decision values e g func metrics roc auc score In this case one can provide a list e g response method decision function predict proba and scorer will use the first available method in the order given in the list to compute the scores any additional parameters of the scoring function such as beta or labels Here is an example of building custom scorers and of using the greater is better parameter import numpy as np def my custom loss func y true y pred diff np abs y true y pred max return np log1p diff score will negate the return value of my custom loss func which will be np log 2 0 693 given the values for X and y defined below score make scorer my custom loss func greater is better False X 1 1 y 0 1 from sklearn dummy import DummyClassifier clf DummyClassifier strategy most frequent random state 0 clf clf fit X y my custom loss func y clf predict X 0 69 score clf X y 0 69 dropdown Custom scorer objects from scratch You can generate even more flexible model scorers by constructing your own scoring object from scratch without using the func make scorer factory For a callable to be a scorer it needs to meet the protocol specified by the following two rules It can be called with parameters estimator X y where estimator 
is the model that should be evaluated X is validation data and y is the ground truth target for X in the supervised case or None in the unsupervised case It returns a floating point number that quantifies the estimator prediction quality on X with reference to y Again by convention higher numbers are better so if your scorer returns loss that value should be negated Advanced If it requires extra metadata to be passed to it it should expose a get metadata routing method returning the requested metadata The user should be able to set the requested metadata via a set score request method Please see ref User Guide metadata routing and ref Developer Guide sphx glr auto examples miscellaneous plot metadata routing py for more details dropdown Using custom scorers in functions where n jobs 1 While defining the custom scoring function alongside the calling function should work out of the box with the default joblib backend loky importing it from another module will be a more robust approach and work independently of the joblib backend For example to use n jobs greater than 1 in the example below custom scoring function function is saved in a user created module custom scorer module py and imported from custom scorer module import custom scoring function doctest SKIP cross val score model X train y train scoring make scorer custom scoring function greater is better False cv 5 n jobs 1 doctest SKIP multimetric scoring Using multiple metric evaluation Scikit learn also permits evaluation of multiple metrics in GridSearchCV RandomizedSearchCV and cross validate There are three ways to specify multiple scoring metrics for the scoring parameter As an iterable of string metrics scoring accuracy precision As a dict mapping the scorer name to the scoring function from sklearn metrics import accuracy score from sklearn metrics import make scorer scoring accuracy make scorer accuracy score prec precision Note that the dict values can either be scorer functions or one of the 
predefined metric strings As a callable that returns a dictionary of scores from sklearn model selection import cross validate from sklearn metrics import confusion matrix A sample toy binary classification dataset X y datasets make classification n classes 2 random state 0 svm LinearSVC random state 0 def confusion matrix scorer clf X y y pred clf predict X cm confusion matrix y y pred return tn cm 0 0 fp cm 0 1 fn cm 1 0 tp cm 1 1 cv results cross validate svm X y cv 5 scoring confusion matrix scorer Getting the test set true positive scores print cv results test tp 10 9 8 7 8 Getting the test set false negative scores print cv results test fn 0 1 2 3 2 classification metrics Classification metrics currentmodule sklearn metrics The mod sklearn metrics module implements several loss score and utility functions to measure classification performance Some metrics might require probability estimates of the positive class confidence values or binary decisions values Most implementations allow each sample to provide a weighted contribution to the overall score through the sample weight parameter Some of these are restricted to the binary classification case autosummary precision recall curve roc curve class likelihood ratios det curve Others also work in the multiclass case autosummary balanced accuracy score cohen kappa score confusion matrix hinge loss matthews corrcoef roc auc score top k accuracy score Some also work in the multilabel case autosummary accuracy score classification report f1 score fbeta score hamming loss jaccard score log loss multilabel confusion matrix precision recall fscore support precision score recall score roc auc score zero one loss d2 log loss score And some work with binary and multilabel but not multiclass problems autosummary average precision score In the following sub sections we will describe each of those functions preceded by some notes on common API and metric definition average From binary to multiclass and multilabel Some 
metrics are essentially defined for binary classification tasks e g func f1 score func roc auc score In these cases by default only the positive label is evaluated assuming by default that the positive class is labelled 1 though this may be configurable through the pos label parameter In extending a binary metric to multiclass or multilabel problems the data is treated as a collection of binary problems one for each class There are then a number of ways to average binary metric calculations across the set of classes each of which may be useful in some scenario Where available you should select among these using the average parameter macro simply calculates the mean of the binary metrics giving equal weight to each class In problems where infrequent classes are nonetheless important macro averaging may be a means of highlighting their performance On the other hand the assumption that all classes are equally important is often untrue such that macro averaging will over emphasize the typically low performance on an infrequent class weighted accounts for class imbalance by computing the average of binary metrics in which each class s score is weighted by its presence in the true data sample micro gives each sample class pair an equal contribution to the overall metric except as a result of sample weight Rather than summing the metric per class this sums the dividends and divisors that make up the per class metrics to calculate an overall quotient Micro averaging may be preferred in multilabel settings including multiclass classification where a majority class is to be ignored samples applies only to multilabel problems It does not calculate a per class measure instead calculating the metric over the true and predicted classes for each sample in the evaluation data and returning their sample weight weighted average Selecting average None will return an array with the score for each class While multiclass data is provided to the metric like binary targets as an array of 
class labels multilabel data is specified as an indicator matrix in which cell i j has value 1 if sample i has label j and value 0 otherwise accuracy score Accuracy score The func accuracy score function computes the accuracy https en wikipedia org wiki Accuracy and precision either the fraction default or the count normalize False of correct predictions In multilabel classification the function returns the subset accuracy If the entire set of predicted labels for a sample strictly match with the true set of labels then the subset accuracy is 1 0 otherwise it is 0 0 If math hat y i is the predicted value of the math i th sample and math y i is the corresponding true value then the fraction of correct predictions over math n text samples is defined as math texttt accuracy y hat y frac 1 n text samples sum i 0 n text samples 1 1 hat y i y i where math 1 x is the indicator function https en wikipedia org wiki Indicator function import numpy as np from sklearn metrics import accuracy score y pred 0 2 1 3 y true 0 1 2 3 accuracy score y true y pred 0 5 accuracy score y true y pred normalize False 2 0 In the multilabel case with binary label indicators accuracy score np array 0 1 1 1 np ones 2 2 0 5 rubric Examples See ref sphx glr auto examples model selection plot permutation tests for classification py for an example of accuracy score usage using permutations of the dataset top k accuracy score Top k accuracy score The func top k accuracy score function is a generalization of func accuracy score The difference is that a prediction is considered correct as long as the true label is associated with one of the k highest predicted scores func accuracy score is the special case of k 1 The function covers the binary and multiclass classification cases but not the multilabel case If math hat f i j is the predicted class for the math i th sample corresponding to the math j th largest predicted score and math y i is the corresponding true value then the fraction of correct 
predictions over math n text samples is defined as math texttt top k accuracy y hat f frac 1 n text samples sum i 0 n text samples 1 sum j 1 k 1 hat f i j y i where math k is the number of guesses allowed and math 1 x is the indicator function https en wikipedia org wiki Indicator function import numpy as np from sklearn metrics import top k accuracy score y true np array 0 1 2 2 y score np array 0 5 0 2 0 2 0 3 0 4 0 2 0 2 0 4 0 3 0 7 0 2 0 1 top k accuracy score y true y score k 2 0 75 Not normalizing gives the number of correctly classified samples top k accuracy score y true y score k 2 normalize False 3 balanced accuracy score Balanced accuracy score The func balanced accuracy score function computes the balanced accuracy https en wikipedia org wiki Accuracy and precision which avoids inflated performance estimates on imbalanced datasets It is the macro average of recall scores per class or equivalently raw accuracy where each sample is weighted according to the inverse prevalence of its true class Thus for balanced datasets the score is equal to accuracy In the binary case balanced accuracy is equal to the arithmetic mean of sensitivity https en wikipedia org wiki Sensitivity and specificity true positive rate and specificity https en wikipedia org wiki Sensitivity and specificity true negative rate or the area under the ROC curve with binary predictions rather than scores math texttt balanced accuracy frac 1 2 left frac TP TP FN frac TN TN FP right If the classifier performs equally well on either class this term reduces to the conventional accuracy i e the number of correct predictions divided by the total number of predictions In contrast if the conventional accuracy is above chance only because the classifier takes advantage of an imbalanced test set then the balanced accuracy as appropriate will drop to math frac 1 n classes The score ranges from 0 to 1 or when adjusted True is used it rescaled to the range math frac 1 1 n classes to 1 inclusive with 
performance at random scoring 0 If math y i is the true value of the math i th sample and math w i is the corresponding sample weight then we adjust the sample weight to math hat w i frac w i sum j 1 y j y i w j where math 1 x is the indicator function https en wikipedia org wiki Indicator function Given predicted math hat y i for sample math i balanced accuracy is defined as math texttt balanced accuracy y hat y w frac 1 sum hat w i sum i 1 hat y i y i hat w i With adjusted True balanced accuracy reports the relative increase from math texttt balanced accuracy y mathbf 0 w frac 1 n classes In the binary case this is also known as Youden s J statistic https en wikipedia org wiki Youden 27s J statistic or informedness note The multiclass definition here seems the most reasonable extension of the metric used in binary classification though there is no certain consensus in the literature Our definition Mosley2013 Kelleher2015 and Guyon2015 where Guyon2015 adopt the adjusted version to ensure that random predictions have a score of math 0 and perfect predictions have a score of math 1 Class balanced accuracy as described in Mosley2013 the minimum between the precision and the recall for each class is computed Those values are then averaged over the total number of classes to get the balanced accuracy Balanced Accuracy as described in Urbanowicz2015 the average of sensitivity and specificity is computed for each class and then averaged over total number of classes rubric References Guyon2015 I Guyon K Bennett G Cawley H J Escalante S Escalera T K Ho N Maci B Ray M Saeed A R Statnikov E Viegas Design of the 2015 ChaLearn AutoML Challenge https ieeexplore ieee org document 7280767 IJCNN 2015 Mosley2013 L Mosley A balanced approach to the multi class imbalance problem https lib dr iastate edu etd 13537 IJCV 2010 Kelleher2015 John D Kelleher Brian Mac Namee Aoife D Arcy Fundamentals of Machine Learning for Predictive Data Analytics Algorithms Worked Examples and Case 
Studies <https://mitpress.mit.edu/books/fundamentals-machine-learning-predictive-data-analytics>`_,
   2015.

.. [Urbanowicz2015] Urbanowicz R.J., Moore, J.H. :doi:`ExSTraCS 2.0: description
   and evaluation of a scalable learning classifier
   system <10.1007/s12065-015-0128-8>`, Evol. Intel. (2015) 8: 89.

.. _cohen_kappa:

Cohen's kappa
-------------

The function :func:`cohen_kappa_score` computes `Cohen's kappa
<https://en.wikipedia.org/wiki/Cohen%27s_kappa>`_ statistic. This measure is
intended to compare labelings by different human annotators, not a classifier
versus a ground truth.

The kappa score is a number between -1 and 1. Scores above .8 are generally
considered good agreement; zero or lower means no agreement (practically
random labels).

Kappa scores can be computed for binary or multiclass problems, but not for
multilabel problems (except by manually computing a per-label score) and not
for more than two annotators. ::

  >>> from sklearn.metrics import cohen_kappa_score
  >>> labeling1 = [2, 0, 2, 2, 0, 1]
  >>> labeling2 = [0, 0, 2, 2, 0, 2]
  >>> cohen_kappa_score(labeling1, labeling2)
  0.4285714285714286

.. _confusion_matrix:

Confusion matrix
----------------

The :func:`confusion_matrix` function evaluates classification accuracy by
computing the `confusion matrix <https://en.wikipedia.org/wiki/Confusion_matrix>`_
with each row corresponding to the true class (Wikipedia and other references
may use a different convention for axes).

By definition, entry :math:`i, j` in a confusion matrix is the number of
observations actually in group :math:`i`, but predicted to be in group
:math:`j`. Here is an example::

  >>> from sklearn.metrics import confusion_matrix
  >>> y_true = [2, 0, 2, 2, 0, 1]
  >>> y_pred = [0, 0, 2, 2, 0, 2]
  >>> confusion_matrix(y_true, y_pred)
  array([[2, 0, 0],
         [0, 0, 1],
         [1, 0, 2]])

:class:`ConfusionMatrixDisplay` can be used to visually represent a confusion
matrix as shown in the
:ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py` example,
which creates the following figure:

.. image:: auto_examples/model_selection/images/sphx_glr_plot_confusion_matrix_001.png
   :target: auto_examples/model_selection/plot_confusion_matrix.html
   :scale: 75
   :align: center

The parameter ``normalize`` allows to report ratios instead of counts. The
confusion matrix can be normalized in 3 different ways: ``'pred'``, ``'true'``,
and ``'all'``, which will divide the counts by the sum of each column, row, or
the entire matrix, respectively. ::

  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
  >>> confusion_matrix(y_true, y_pred, normalize='all')
  array([[0.25 , 0.125],
         [0.25 , 0.375]])

For binary problems, we can get counts of true negatives, false positives,
false negatives and true positives as follows::

  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
  >>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
  >>> tn, fp, fn, tp
  (2, 1, 2, 3)

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py` for an
  example of using a confusion matrix to evaluate classifier output quality.
* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py` for an
  example of using a confusion matrix to classify hand-written digits.
* See :ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`
  for an example of using a confusion matrix to classify text documents.

.. _classification_report:

Classification report
---------------------

The :func:`classification_report` function builds a text report showing the main
classification metrics. Here is a small example with custom ``target_names``
and inferred labels::

   >>> from sklearn.metrics import classification_report
   >>> y_true = [0, 1, 2, 2, 0]
   >>> y_pred = [0, 0, 2, 1, 0]
   >>> target_names = ['class 0', 'class 1', 'class 2']
   >>> print(classification_report(y_true, y_pred, target_names=target_names))
                 precision    recall  f1-score   support
   <BLANKLINE>
        class 0       0.67      1.00      0.80         2
        class 1       0.00      0.00      0.00         1
        class 2       1.00      0.50      0.67         2
   <BLANKLINE>
       accuracy                           0.60         5
      macro avg       0.56      0.50      0.49         5
   weighted avg       0.67      0.60      0.59         5
   <BLANKLINE>

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`
  for an example of classification report usage for hand-written digits.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
  for an example of classification report usage for grid search with nested
  cross-validation.

.. _hamming_loss:

Hamming loss
------------

The :func:`hamming_loss` computes the average Hamming loss or `Hamming distance
<https://en.wikipedia.org/wiki/Hamming_distance>`_ between two sets of samples.

If :math:`\hat{y}_{i,j}` is the predicted value for the :math:`j`-th label of a
given sample :math:`i`, :math:`y_{i,j}` is the corresponding true value,
:math:`n_\text{samples}` is the number of samples and :math:`n_\text{labels}`
is the number of labels, then the Hamming loss :math:`L_{Hamming}` is defined as:

.. math::

   L_{Hamming}(y, \hat{y}) = \frac{1}{n_\text{samples} \, n_\text{labels}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=0}^{n_\text{labels}-1} 1(\hat{y}_{i,j} \not= y_{i,j})

where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_.

The equation above does not hold true in the case of multiclass classification.
Please refer to the note below for more information. ::

  >>> from sklearn.metrics import hamming_loss
  >>> y_pred = [1, 2, 3, 4]
  >>> y_true = [2, 2, 3, 4]
  >>> hamming_loss(y_true, y_pred)
  0.25

In the multilabel case with binary label indicators::

  >>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
  0.75

.. note::

   In multiclass classification, the Hamming loss corresponds to the Hamming
   distance between ``y_true`` and ``y_pred``, which is similar to the
   :ref:`zero_one_loss` function. However, while zero-one loss penalizes
   prediction sets that do not strictly match true sets, the Hamming loss
   penalizes individual labels. Thus the Hamming loss, upper bounded by the
   zero-one loss, is always between zero and one, inclusive; and predicting a
   proper subset or superset of the true labels will give a Hamming loss
   between zero and one, exclusive.

.. _precision_recall_f_measure_metrics:

Precision, recall and F-measures
--------------------------------

Intuitively, `precision
<https://en.wikipedia.org/wiki/Precision_and_recall#Precision>`_ is the ability
of the classifier not to label as positive a sample that is negative, and
`recall <https://en.wikipedia.org/wiki/Precision_and_recall#Recall>`_ is the
ability of the classifier to find all the positive samples.

The `F-measure <https://en.wikipedia.org/wiki/F1_score>`_ (:math:`F_\beta` and
:math:`F_1` measures) can be interpreted as a weighted harmonic mean of the
precision and recall. A :math:`F_\beta` measure reaches its best value at 1 and
its worst score at 0. With :math:`\beta = 1`, :math:`F_\beta` and :math:`F_1`
are equivalent, and the recall and the precision are equally important.

The :func:`precision_recall_curve` computes a precision-recall curve from the
ground truth label and a score given by the classifier by varying a decision
threshold.

The :func:`average_precision_score` function computes the
`average precision <https://en.wikipedia.org/w/index.php?title=Information_retrieval&oldid=793358396#Average_precision>`_
(AP) from prediction scores. The value is between 0 and 1 and higher is better.
AP is defined as

.. math::

   \text{AP} = \sum_n (R_n - R_{n-1}) P_n

where :math:`P_n` and :math:`R_n` are the precision and recall at the nth
threshold. With random predictions, the AP is the fraction of positive samples.

References [Manning2008]_ and [Everingham2010]_ present alternative variants of
AP that interpolate the precision-recall curve. Currently,
:func:`average_precision_score` does not implement any interpolated variant.
References [Davis2006]_ and [Flach2015]_ describe why a linear interpolation of
points on the precision-recall curve provides an overly-optimistic measure of
classifier performance. This linear interpolation is used when computing area
under the curve with the trapezoidal rule in :func:`auc`.

Several functions allow you to analyze the precision, recall and F-measures
score:

.. autosummary::

   average_precision_score
   f1_score
   fbeta_score
   precision_recall_curve
   precision_recall_fscore_support
   precision_score
   recall_score

Note that the :func:`precision_recall_curve` function is restricted to the
binary case. The :func:`average_precision_score` function supports multiclass
and multilabel formats by computing each class score in a One-vs-the-rest (OvR)
fashion and averaging them or not depending on its ``average`` argument value.
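To make the AP definition above concrete, here is a minimal pure-Python sketch (illustrative only, not scikit-learn's implementation) that sweeps a decision threshold over a toy score set and accumulates :math:`(R_n - R_{n-1}) P_n`:

```python
# Sketch: average precision (AP) computed directly from its definition,
# AP = sum_n (R_n - R_{n-1}) * P_n.  Illustrative only; not the internals
# of average_precision_score.
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

n_pos = sum(y_true)
ap, prev_recall = 0.0, 0.0
# Sweep the decision threshold over the observed scores, highest first.
for thr in sorted(set(y_scores), reverse=True):
    pred = [s >= thr for s in y_scores]
    tp = sum(p and t for p, t in zip(pred, y_true))
    fp = sum(p and not t for p, t in zip(pred, y_true))
    precision = tp / (tp + fp)
    recall = tp / n_pos
    ap += (recall - prev_recall) * precision
    prev_recall = recall

print(round(ap, 2))  # 0.83
```

The result matches ``average_precision_score(y_true, y_scores)`` on the same data.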
The :func:`PrecisionRecallDisplay.from_estimator` and
:func:`PrecisionRecallDisplay.from_predictions` functions will plot the
precision-recall curve as follows.

.. image:: auto_examples/model_selection/images/sphx_glr_plot_precision_recall_001.png
   :target: auto_examples/model_selection/plot_precision_recall.html
   :scale: 75
   :align: center

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
  for an example of :func:`precision_score` and :func:`recall_score` usage
  to estimate parameters using grid search with nested cross-validation.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_precision_recall.py`
  for an example of :func:`precision_recall_curve` usage to evaluate
  classifier output quality.

.. rubric:: References

.. [Manning2008] C.D. Manning, P. Raghavan, H. Schütze, `Introduction to
   Information Retrieval
   <https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-ranked-retrieval-results-1.html>`_,
   2008.
.. [Everingham2010] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn,
   A. Zisserman, `The Pascal Visual Object Classes (VOC) Challenge
   <https://citeseerx.ist.psu.edu/doc_view/pid/b6bebfd529b233f00cb854b7d8070319600cf59d>`_,
   IJCV 2010.
.. [Davis2006] J. Davis, M. Goadrich, `The Relationship Between Precision-Recall
   and ROC Curves <https://www.biostat.wisc.edu/~page/rocpr.pdf>`_, ICML 2006.
.. [Flach2015] P.A. Flach, M. Kull, `Precision-Recall-Gain Curves: PR Analysis
   Done Right
   <https://papers.nips.cc/paper/5867-precision-recall-gain-curves-pr-analysis-done-right.pdf>`_,
   NIPS 2015.

Binary classification
^^^^^^^^^^^^^^^^^^^^^

In a binary classification task, the terms "positive" and "negative" refer to
the classifier's prediction, and the terms "true" and "false" refer to whether
that prediction corresponds to the external judgment (sometimes known as the
"observation"). Given these definitions, we can formulate the following table:

+-------------------+------------------------------------------------+
|                   | Actual class (observation)                     |
+-------------------+---------------------+--------------------------+
| Predicted class   | tp (true positive)  | fp (false positive)      |
| (expectation)     | Correct result      | Unexpected result        |
|                   +---------------------+--------------------------+
|                   | fn (false negative) | tn (true negative)       |
|                   | Missing result      | Correct absence of result|
+-------------------+---------------------+--------------------------+

In this context, we can define the notions of precision and recall:

.. math::

   \text{precision} = \frac{\text{tp}}{\text{tp} + \text{fp}},

.. math::

   \text{recall} = \frac{\text{tp}}{\text{tp} + \text{fn}}.

(Sometimes recall is also called "sensitivity".)

F-measure is the weighted harmonic mean of precision and recall, with
precision's contribution to the mean weighted by some parameter :math:`\beta`:

.. math::

   F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}

To avoid division by zero when precision and recall are zero, Scikit-Learn
calculates F-measure with this otherwise-equivalent formula:

.. math::

   F_\beta = \frac{(1 + \beta^2) \text{tp}}{(1 + \beta^2) \text{tp} + \text{fp} + \beta^2 \text{fn}}

Note that this formula is still undefined when there are no true positives,
false positives, or false negatives. By default, F-1 for a set of exclusively
true negatives is calculated as 0, however this behavior can be changed using
the ``zero_division`` parameter.

Here are some small examples in binary classification::

  >>> from sklearn import metrics
  >>> y_pred = [0, 1, 0, 0]
  >>> y_true = [0, 1, 0, 1]
  >>> metrics.precision_score(y_true, y_pred)
  1.0
  >>> metrics.recall_score(y_true, y_pred)
  0.5
  >>> metrics.f1_score(y_true, y_pred)
  0.66...
  >>> metrics.fbeta_score(y_true, y_pred, beta=0.5)
  0.83...
  >>> metrics.fbeta_score(y_true, y_pred, beta=1)
  0.66...
  >>> metrics.fbeta_score(y_true, y_pred, beta=2)
  0.55...
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5)
  (array([0.66..., 1.        ]), array([1. , 0.5]), array([0.71..., 0.83...]), array([2, 2]))

  >>> import numpy as np
  >>> from sklearn.metrics import precision_recall_curve
  >>> from sklearn.metrics import average_precision_score
  >>> y_true = np.array([0, 0, 1, 1])
  >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
  >>> precision, recall, threshold = precision_recall_curve(y_true, y_scores)
  >>> precision
  array([0.5       , 0.66...,       0.5, 1.        , 1.        ])
  >>> recall
  array([1. , 1. , 0.5, 0.5, 0. ])
  >>> threshold
  array([0.1 , 0.35, 0.4 , 0.8 ])
  >>> average_precision_score(y_true, y_scores)
  0.83...

Multiclass and multilabel classification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a multiclass and multilabel classification task, the notions of precision,
recall, and F-measures can be applied to each label independently. There are a
few ways to combine results across labels, specified by the ``average``
argument to the :func:`average_precision_score`, :func:`f1_score`,
:func:`fbeta_score`, :func:`precision_recall_fscore_support`,
:func:`precision_score` and :func:`recall_score` functions, as described
:ref:`above <average>`.

Note the following behaviors when averaging:

* If all labels are included, "micro"-averaging in a multiclass setting will
  produce precision, recall and :math:`F` that are all identical to accuracy.
* "weighted" averaging may produce a F-score that is not between precision and
  recall.
* "macro" averaging for F-measures is calculated as the arithmetic mean over
  per-label/class F-measures, not the harmonic mean over the arithmetic
  precision and recall means. Both calculations can be seen in the literature
  but are not equivalent, see [OB2019]_ for details.

To make this more explicit, consider the following notation:

* :math:`y` the set of *true* :math:`(sample, label)` pairs
* :math:`\hat{y}` the set of *predicted* :math:`(sample, label)` pairs
* :math:`L` the set of labels
* :math:`S` the set of samples
* :math:`y_s` the subset of :math:`y` with sample :math:`s`, i.e.
  :math:`y_s := \left\{(s', l) \in y | s' = s\right\}`
* :math:`y_l` the subset of :math:`y` with label :math:`l`
* similarly, :math:`\hat{y}_s` and :math:`\hat{y}_l` are subsets of
  :math:`\hat{y}`
* :math:`P(A, B) := \frac{\left| A \cap B \right|}{\left|B\right|}` for some
  sets :math:`A` and :math:`B`
* :math:`R(A, B) := \frac{\left| A \cap B \right|}{\left|A\right|}`
  (Conventions vary on handling :math:`A = \emptyset`; this implementation uses
  :math:`R(A, B):=0`, and similar for :math:`P`.)
* :math:`F_\beta(A, B) := \left(1 + \beta^2\right) \frac{P(A, B) \times R(A, B)}{\beta^2 P(A, B) + R(A, B)}`

Then the metrics are defined as:

.. list-table::
   :header-rows: 1

   * - ``average``
     - Precision
     - Recall
     - F_beta
   * - ``"micro"``
     - :math:`P(y, \hat{y})`
     - :math:`R(y, \hat{y})`
     - :math:`F_\beta(y, \hat{y})`
   * - ``"samples"``
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} P(y_s, \hat{y}_s)`
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} R(y_s, \hat{y}_s)`
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} F_\beta(y_s, \hat{y}_s)`
   * - ``"macro"``
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} P(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} R(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} F_\beta(y_l, \hat{y}_l)`
   * - ``"weighted"``
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| P(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| R(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| F_\beta(y_l, \hat{y}_l)`
   * - ``None``
     - :math:`\langle P(y_l, \hat{y}_l) | l \in L \rangle`
     - :math:`\langle R(y_l, \hat{y}_l) | l \in L \rangle`
     - :math:`\langle F_\beta(y_l, \hat{y}_l) | l \in L \rangle`

::

  >>> from sklearn import metrics
  >>> y_true = [0, 1, 2, 0, 1, 2]
  >>> y_pred = [0, 2, 1, 0, 0, 1]
  >>> metrics.precision_score(y_true, y_pred, average='macro')
  0.22...
  >>> metrics.recall_score(y_true, y_pred, average='micro')
  0.33...
  >>> metrics.f1_score(y_true, y_pred, average='weighted')
  0.26...
  >>> metrics.fbeta_score(y_true, y_pred, average='macro', beta=0.5)
  0.23...
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5, average=None)
  (array([0.66..., 0.        , 0.        ]), array([1., 0., 0.]), array([0.71..., 0.        , 0.        ]), array([2, 2, 2]))

For multiclass classification with a "negative class", it is possible to
exclude some labels:

  >>> metrics.recall_score(y_true, y_pred, labels=[1, 2], average='micro')
  ... # excluding 0, no labels were correctly recalled
  0.0

Similarly, labels not present in the data sample may be accounted for in macro
averaging.

  >>> metrics.precision_score(y_true, y_pred, labels=[0, 1, 2, 3], average='macro')
  0.166...

.. rubric:: References

.. [OB2019] :arxiv:`Opitz, J., & Burst, S. (2019). "Macro f1 and macro f1."
   <1911.03347>`

.. _jaccard_similarity_score:

Jaccard similarity coefficient score
------------------------------------

The :func:`jaccard_score` function computes the average of `Jaccard similarity
coefficients <https://en.wikipedia.org/wiki/Jaccard_index>`_, also called the
Jaccard index, between pairs of label sets.

The Jaccard similarity coefficient with a ground truth label set :math:`y` and
predicted label set :math:`\hat{y}`, is defined as

.. math::

   J(y, \hat{y}) = \frac{|y \cap \hat{y}|}{|y \cup \hat{y}|}.

The :func:`jaccard_score` (like :func:`precision_recall_fscore_support`)
applies natively to binary targets. By computing it set-wise it can be extended
to apply to multilabel and multiclass through the use of ``average`` (see
:ref:`above <average>`).

In the binary case::

  >>> import numpy as np
  >>> from sklearn.metrics import jaccard_score
  >>> y_true = np.array([[0, 1, 1],
  ...                    [1, 1, 0]])
  >>> y_pred = np.array([[1, 1, 1],
  ...                    [1, 0, 0]])
  >>> jaccard_score(y_true[0], y_pred[0])
  0.6666...

In the 2D comparison case (e.g. image similarity):

  >>> jaccard_score(y_true, y_pred, average="micro")
  0.6
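Because the coefficient is defined set-wise, it can be reproduced with plain Python sets. Here is an illustrative sketch (not scikit-learn's vectorized implementation) of the sample-wise average over a small multilabel indicator example:

```python
# Sketch: sample-wise Jaccard on a multilabel indicator matrix using plain
# Python sets.  Illustrative only; jaccard_score is vectorized internally.
y_true = [[0, 1, 1], [1, 1, 0]]
y_pred = [[1, 1, 1], [1, 0, 0]]

def label_set(row):
    """Indices of the labels that are 'on' in an indicator row."""
    return {j for j, v in enumerate(row) if v}

scores = []
for t_row, p_row in zip(y_true, y_pred):
    t, p = label_set(t_row), label_set(p_row)
    scores.append(len(t & p) / len(t | p))  # |intersection| / |union|

print(round(sum(scores) / len(scores), 4))  # 0.5833
```

This is the value ``jaccard_score(y_true, y_pred, average='samples')`` returns for the same arrays.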
In the multilabel case with binary label indicators::

  >>> jaccard_score(y_true, y_pred, average='samples')
  0.5833...
  >>> jaccard_score(y_true, y_pred, average='macro')
  0.6666...
  >>> jaccard_score(y_true, y_pred, average=None)
  array([0.5, 0.5, 1. ])

Multiclass problems are binarized and treated like the corresponding multilabel
problem::

  >>> y_pred = [0, 2, 1, 2]
  >>> y_true = [0, 1, 2, 2]
  >>> jaccard_score(y_true, y_pred, average=None)
  array([1. , 0. , 0.33...])
  >>> jaccard_score(y_true, y_pred, average='macro')
  0.44...
  >>> jaccard_score(y_true, y_pred, average='micro')
  0.33...

.. _hinge_loss:

Hinge loss
----------

The :func:`hinge_loss` function computes the average distance between the model
and the data using `hinge loss <https://en.wikipedia.org/wiki/Hinge_loss>`_, a
one-sided metric that considers only prediction errors. (Hinge loss is used in
maximal margin classifiers such as support vector machines.)

If the true label :math:`y_i` of a binary classification task is encoded as
:math:`y_i=\left\{-1, +1\right\}` for every sample :math:`i`, and :math:`w_i`
is the corresponding predicted decision (an array of shape (``n_samples``,) as
output by the ``decision_function`` method), then the hinge loss is defined as:

.. math::

   L_\text{Hinge}(y, w) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \max\left\{1 - w_i y_i, 0\right\}

If there are more than two labels, :func:`hinge_loss` uses a multiclass variant
due to Crammer & Singer.
`Here <https://jmlr.csail.mit.edu/papers/volume2/crammer01a/crammer01a.pdf>`_ is
the paper describing it.

In this case the predicted decision is an array of shape (``n_samples``,
``n_labels``). If :math:`w_{i, y_i}` is the predicted decision for the true
label :math:`y_i` of the :math:`i`-th sample, and
:math:`\hat{w}_{i, y_i} = \max\left\{w_{i, y_j}~|~y_j \ne y_i \right\}` is the
maximum of the predicted decisions for all the other labels, then the
multi-class hinge loss is defined by:

.. math::

   L_\text{Hinge}(y, w) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \max\left\{1 + \hat{w}_{i, y_i} - w_{i, y_i}, 0\right\}

Here is a small example demonstrating the use of the :func:`hinge_loss` function
with a svm classifier in a binary class problem::

  >>> from sklearn import svm
  >>> from sklearn.metrics import hinge_loss
  >>> X = [[0], [1]]
  >>> y = [-1, 1]
  >>> est = svm.LinearSVC(random_state=0)
  >>> est.fit(X, y)
  LinearSVC(random_state=0)
  >>> pred_decision = est.decision_function([[-2], [3], [0.5]])
  >>> pred_decision
  array([-2.18...,  2.36...,  0.09...])
  >>> hinge_loss([-1, 1, 1], pred_decision)
  0.3...

Here is an example demonstrating the use of the :func:`hinge_loss` function with
a svm classifier in a multiclass problem::

  >>> X = np.array([[0], [1], [2], [3]])
  >>> Y = np.array([0, 1, 2, 3])
  >>> labels = np.array([0, 1, 2, 3])
  >>> est = svm.LinearSVC()
  >>> est.fit(X, Y)
  LinearSVC()
  >>> pred_decision = est.decision_function([[-1], [2], [3]])
  >>> y_true = [0, 2, 3]
  >>> hinge_loss(y_true, pred_decision, labels=labels)
  0.56...

.. _log_loss:

Log loss
--------

Log loss, also called logistic regression loss or cross-entropy loss, is
defined on probability estimates. It is commonly used in (multinomial) logistic
regression and neural networks, as well as in some variants of
expectation-maximization, and can be used to evaluate the probability outputs
(``predict_proba``) of a classifier instead of its discrete predictions.

For binary classification with a true label :math:`y \in \{0,1\}` and a
probability estimate :math:`p = \operatorname{Pr}(y = 1)`, the log loss per
sample is the negative log-likelihood of the classifier given the true label:

.. math::

   L_{\log}(y, p) = -\log \operatorname{Pr}(y|p) = -(y \log (p) + (1 - y) \log (1 - p))

This extends to the multiclass case as follows. Let the true labels for a set
of samples be encoded as a 1-of-K binary indicator matrix :math:`Y`, i.e.,
:math:`y_{i,k} = 1` if sample :math:`i` has label :math:`k` taken from a set of
:math:`K` labels. Let :math:`P` be a matrix of probability estimates, with
:math:`p_{i,k} = \operatorname{Pr}(y_{i,k} = 1)`. Then the log loss of the
whole set is

.. math::

   L_{\log}(Y, P) = -\log \operatorname{Pr}(Y|P) = - \frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} y_{i,k} \log p_{i,k}

To see how this generalizes the binary log loss given above, note that in the
binary case, :math:`p_{i,0} = 1 - p_{i,1}` and :math:`y_{i,0} = 1 - y_{i,1}`,
so expanding the inner sum over :math:`y_{i,k} \in \{0,1\}` gives the binary
log loss.

The :func:`log_loss` function computes log loss given a list of ground-truth
labels and a probability matrix, as returned by an estimator's
``predict_proba`` method.

  >>> from sklearn.metrics import log_loss
  >>> y_true = [0, 0, 1, 1]
  >>> y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
  >>> log_loss(y_true, y_pred)
  0.1738...

The first ``[.9, .1]`` in ``y_pred`` denotes 90% probability that the first
sample has label 0. The log loss is non-negative.

.. _matthews_corrcoef:

Matthews correlation coefficient
--------------------------------

The :func:`matthews_corrcoef` function computes the
`Matthew's correlation coefficient (MCC)
<https://en.wikipedia.org/wiki/Matthews_correlation_coefficient>`_ for binary
classes. Quoting Wikipedia:

  "The Matthews correlation coefficient is used in machine learning as a
  measure of the quality of binary (two-class) classifications. It takes into
  account true and false positives and negatives and is generally regarded as
  a balanced measure which can be used even if the classes are of very
  different sizes. The MCC is in essence a correlation coefficient value
  between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 an
  average random prediction and -1 an inverse prediction. The statistic is
  also known as the phi coefficient."

In the binary (two-class) case, :math:`tp`, :math:`tn`, :math:`fp` and
:math:`fn` are respectively the number of true positives, true negatives,
false positives and false negatives, and the MCC is defined as

.. math::

   MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}.

In the multiclass case, the Matthews correlation coefficient can be `defined
<http://rk.kvl.dk/introduction/index.html>`_ in terms of a
:func:`confusion_matrix` :math:`C` for :math:`K` classes. To simplify the
definition, consider the following intermediate variables:

* :math:`t_k=\sum_{i}^{K} C_{ik}` the number of times class :math:`k` truly
  occurred,
* :math:`p_k=\sum_{i}^{K} C_{ki}` the number of times class :math:`k` was
  predicted,
* :math:`c=\sum_{k}^{K} C_{kk}` the total number of samples correctly
  predicted,
* :math:`s=\sum_{i}^{K} \sum_{j}^{K} C_{ij}` the total number of samples.

Then the multiclass MCC is defined as:

.. math::

   MCC = \frac{c \times s - \sum_{k}^{K} p_k \times t_k}{\sqrt{(s^2 - \sum_{k}^{K} p_k^2) \times (s^2 - \sum_{k}^{K} t_k^2)}}

When there are more than two labels, the value of the MCC will no longer range
between -1 and +1. Instead the minimum value will be somewhere between -1 and 0
depending on the number and distribution of ground true labels. The maximum
value is always +1. For additional information, see [WikipediaMCC2021]_.

Here is a small example illustrating the usage of the :func:`matthews_corrcoef`
function:

  >>> from sklearn.metrics import matthews_corrcoef
  >>> y_true = [+1, +1, +1, -1]
  >>> y_pred = [+1, -1, +1, +1]
  >>> matthews_corrcoef(y_true, y_pred)
  -0.33...

.. rubric:: References

.. [WikipediaMCC2021] Wikipedia contributors. Phi coefficient.
   Wikipedia, The Free Encyclopedia. April 21, 2021, 12:21 CEST.
   Available at: https://en.wikipedia.org/wiki/Phi_coefficient
   Accessed April 21, 2021.

.. _multilabel_confusion_matrix:

Multi-label confusion matrix
----------------------------

The :func:`multilabel_confusion_matrix` function computes class-wise (default)
or sample-wise (``samplewise=True``) multilabel confusion matrix to evaluate
the accuracy of a classification. ``multilabel_confusion_matrix`` also treats
multiclass data as if it were multilabel, as this is a transformation commonly
applied to evaluate multiclass problems with binary classification metrics
(such as precision, recall, etc.).

When calculating class-wise multilabel confusion matrix :math:`C`, the count
of true negatives for class :math:`i` is :math:`C_{i,0,0}`, false negatives is
:math:`C_{i,1,0}`, true positives is :math:`C_{i,1,1}` and false positives is
:math:`C_{i,0,1}`.

Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with
:term:`multilabel indicator matrix` input::

  >>> import numpy as np
  >>> from sklearn.metrics import multilabel_confusion_matrix
  >>> y_true = np.array([[1, 0, 1],
  ...                    [0, 1, 0]])
  >>> y_pred = np.array([[1, 0, 0],
  ...                    [0, 1, 1]])
  >>> multilabel_confusion_matrix(y_true, y_pred)
  array([[[1, 0],
          [0, 1]],
  <BLANKLINE>
         [[1, 0],
          [0, 1]],
  <BLANKLINE>
         [[0, 1],
          [1, 0]]])

Or a confusion matrix can be constructed for each sample's labels:

  >>> multilabel_confusion_matrix(y_true, y_pred, samplewise=True)
  array([[[1, 0],
          [1, 1]],
  <BLANKLINE>
         [[1, 1],
          [0, 1]]])

Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with :term:`multiclass` input::

  >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
  >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
  >>> multilabel_confusion_matrix(y_true, y_pred,
  ...                             labels=["ant", "bird", "cat"])
  array([[[3, 1],
          [0, 2]],
  <BLANKLINE>
         [[5, 0],
          [1, 0]],
  <BLANKLINE>
         [[2, 1],
          [1, 2]]])
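As a sketch of where the class-wise ``[[tn, fp], [fn, tp]]`` counts come from, they can be tallied in plain Python over the multilabel indicator example above (illustrative only, not scikit-learn's implementation):

```python
# Sketch: tally tn/fp/fn/tp per class from a multilabel indicator matrix,
# matching multilabel_confusion_matrix's class-wise [[tn, fp], [fn, tp]] layout.
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 1]]

def class_counts(k):
    tn = fp = fn = tp = 0
    for t_row, p_row in zip(y_true, y_pred):
        t, p = t_row[k], p_row[k]
        if t and p:
            tp += 1        # label k present and predicted
        elif t and not p:
            fn += 1        # label k present but missed
        elif p:
            fp += 1        # label k absent but predicted
        else:
            tn += 1        # label k absent and not predicted
    return [[tn, fp], [fn, tp]]

print([class_counts(k) for k in range(3)])
# [[[1, 0], [0, 1]], [[1, 0], [0, 1]], [[0, 1], [1, 0]]]
```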
Here are some examples demonstrating the use of the
:func:`multilabel_confusion_matrix` function to calculate recall (or
sensitivity), specificity, fall out and miss rate for each class in a problem
with multilabel indicator matrix input.

Calculating
`recall <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`__
(also called the true positive rate or the sensitivity) for each class::

  >>> y_true = np.array([[0, 0, 1],
  ...                    [0, 1, 0],
  ...                    [1, 1, 0]])
  >>> y_pred = np.array([[0, 1, 0],
  ...                    [0, 0, 1],
  ...                    [1, 1, 0]])
  >>> mcm = multilabel_confusion_matrix(y_true, y_pred)
  >>> tn = mcm[:, 0, 0]
  >>> tp = mcm[:, 1, 1]
  >>> fn = mcm[:, 1, 0]
  >>> fp = mcm[:, 0, 1]
  >>> tp / (tp + fn)
  array([1. , 0.5, 0. ])

Calculating
`specificity <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`__
(also called the true negative rate) for each class::

  >>> tn / (tn + fp)
  array([1. , 0. , 0.5])

Calculating `fall out <https://en.wikipedia.org/wiki/False_positive_rate>`__
(also called the false positive rate) for each class::

  >>> fp / (fp + tn)
  array([0. , 1. , 0.5])

Calculating `miss rate
<https://en.wikipedia.org/wiki/False_positives_and_false_negatives>`__
(also called the false negative rate) for each class::

  >>> fn / (fn + tp)
  array([0. , 0.5, 1. ])

.. _roc_metrics:

Receiver operating characteristic (ROC)
---------------------------------------

The function :func:`roc_curve` computes the
`receiver operating characteristic curve, or ROC curve
<https://en.wikipedia.org/wiki/Receiver_operating_characteristic>`_.
Quoting Wikipedia:

  "A receiver operating characteristic (ROC), or simply ROC curve, is a
  graphical plot which illustrates the performance of a binary classifier
  system as its discrimination threshold is varied. It is created by plotting
  the fraction of true positives out of the positives (TPR = true positive
  rate) vs. the fraction of false positives out of the negatives (FPR = false
  positive rate), at various threshold settings. TPR is also known as
  sensitivity, and FPR is one minus the specificity or true negative rate."

This function requires the true binary value and the target scores, which can
either be probability estimates of the positive class, confidence values, or
binary decisions. Here is a small example of how to use the :func:`roc_curve`
function::

    >>> import numpy as np
    >>> from sklearn.metrics import roc_curve
    >>> y = np.array([1, 1, 2, 2])
    >>> scores = np.array([0.1, 0.4, 0.35, 0.8])
    >>> fpr, tpr, thresholds = roc_curve(y, scores, pos_label=2)
    >>> fpr
    array([0. , 0. , 0.5, 0.5, 1. ])
    >>> tpr
    array([0. , 0.5, 0.5, 1. , 1. ])
    >>> thresholds
    array([ inf, 0.8 , 0.4 , 0.35, 0.1 ])

Compared to metrics such as the subset accuracy, the Hamming loss, or the F1
score, ROC doesn't require optimizing a threshold for each label.

The :func:`roc_auc_score` function, denoted by ROC-AUC or AUROC, computes the
area under the ROC curve. By doing so, the curve information is summarized in
one number.

The following figure shows the ROC curve and ROC-AUC score for a classifier
aimed to distinguish the virginica flower from the rest of the species in the
:ref:`iris_dataset`:

.. image:: auto_examples/model_selection/images/sphx_glr_plot_roc_001.png
   :target: auto_examples/model_selection/plot_roc.html
   :scale: 75
   :align: center

For more information see the `Wikipedia article on AUC
<https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve>`_.

.. _roc_auc_binary:

Binary case
^^^^^^^^^^^

In the **binary case**, you can either provide the probability estimates,
using the ``classifier.predict_proba()`` method, or the non-thresholded
decision values given by the ``classifier.decision_function()`` method. In the
case of providing the probability estimates, the probability of the class with
the "greater label" should be provided. The "greater label" corresponds to
``classifier.classes_[1]`` and thus ``classifier.predict_proba(X)[:, 1]``.
Therefore, the ``y_score`` parameter is of size (n_samples,).

  >>> from sklearn.datasets import load_breast_cancer
  >>> from sklearn.linear_model import LogisticRegression
  >>> from sklearn.metrics import roc_auc_score
  >>> X, y = load_breast_cancer(return_X_y=True)
  >>> clf = LogisticRegression(solver="liblinear").fit(X, y)
  >>> clf.classes_
  array([0, 1])

We can use the probability estimates corresponding to ``clf.classes_[1]``.

  >>> y_score = clf.predict_proba(X)[:, 1]
  >>> roc_auc_score(y, y_score)
  0.99...

Otherwise, we can use the non-thresholded decision values.

  >>> roc_auc_score(y, clf.decision_function(X))
  0.99...

.. _roc_auc_multiclass:

Multi-class case
^^^^^^^^^^^^^^^^

The :func:`roc_auc_score` function can also be used in **multi-class
classification**. Two averaging strategies are currently supported: the
one-vs-one algorithm computes the average of the pairwise ROC AUC scores, and
the one-vs-rest algorithm computes the average of the ROC AUC scores for each
class against all other classes. In both cases, the predicted labels are
provided in an array with values from 0 to ``n_classes``, and the scores
correspond to the probability estimates that a sample belongs to a particular
class. The OvO and OvR algorithms support weighting uniformly
(``average='macro'``) and by prevalence (``average='weighted'``).

.. dropdown:: One-vs-one Algorithm

  Computes the average AUC of all possible pairwise combinations of classes.
  [HT2001]_ defines a multiclass AUC metric weighted uniformly:

  .. math::

     \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^{c} (\text{AUC}(j | k) +
     \text{AUC}(k | j))

  where :math:`c` is the number of classes and :math:`\text{AUC}(j | k)` is
  the AUC with class :math:`j` as the positive class and class :math:`k` as
  the negative class. In general, :math:`\text{AUC}(j | k) \neq \text{AUC}(k |
  j)` in the multiclass case. This algorithm is used by setting the keyword
  argument ``multiclass`` to ``'ovo'`` and ``average`` to ``'macro'``.

  The [HT2001]_ multiclass AUC metric can be extended to be weighted by the
  prevalence:

  .. math::

     \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^{c} p(j \cup k)(\text{AUC}(j |
     k) + \text{AUC}(k | j))

  where :math:`c` is the number of classes. This algorithm is used by setting
  the keyword argument ``multiclass`` to ``'ovo'`` and ``average`` to
  ``'weighted'``. The ``'weighted'`` option returns a prevalence-weighted
  average as described in [FC2009]_.

.. dropdown:: One-vs-rest Algorithm

  Computes the AUC of each class against the rest [PD2000]_. The algorithm is
  functionally the same as the multilabel case. To enable this algorithm set
  the keyword argument ``multiclass`` to ``'ovr'``. Additionally to
  ``'macro'`` [F2006]_ and ``'weighted'`` [F2001]_ averaging, OvR supports
  ``'micro'`` averaging.

In applications where a high false positive rate is not tolerable the
parameter ``max_fpr`` of :func:`roc_auc_score` can be used to summarize the
ROC curve up to the given limit.

The following figure shows the micro-averaged ROC curve and its corresponding
ROC-AUC score for a classifier aimed to distinguish the different species in
the :ref:`iris_dataset`:

.. image:: auto_examples/model_selection/images/sphx_glr_plot_roc_002.png
   :target: auto_examples/model_selection/plot_roc.html
   :scale: 75
   :align: center

.. _roc_auc_multilabel:

Multi-label case
^^^^^^^^^^^^^^^^

In **multi-label classification**, the :func:`roc_auc_score` function is
extended by averaging over the labels as :ref:`above <average>`. In this case,
you should provide a ``y_score`` of shape ``(n_samples, n_classes)``. Thus,
when using the probability estimates, one needs to select the probability of
the class with the greater label for each output.

  >>> from sklearn.datasets import make_multilabel_classification
  >>> from sklearn.multioutput import MultiOutputClassifier
  >>> X, y = make_multilabel_classification(random_state=0)
  >>> inner_clf = LogisticRegression(solver="liblinear", random_state=0)
  >>> clf = MultiOutputClassifier(inner_clf).fit(X, y)
  >>> y_score = np.transpose([y_pred[:, 1] for y_pred in clf.predict_proba(X)])
  >>> roc_auc_score(y, y_score, average=None)
  array([0.82..., 0.86..., 0.94..., 0.85..., 0.94...])

And the decision values do not require such processing.

  >>> from sklearn.linear_model import RidgeClassifierCV
  >>> clf = RidgeClassifierCV().fit(X, y)
  >>> y_score = clf.decision_function(X)
  >>> roc_auc_score(y, y_score, average=None)
  array([0.81..., 0.84..., 0.93..., 0.87..., 0.94...])

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc.py` for an example
  of using ROC to evaluate the quality of the output of a classifier.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc_crossval.py` for an
  example of using ROC to evaluate classifier output quality, using
  cross-validation.
* See :ref:`sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py`
  for an example of using ROC to model species distribution.
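The "area under the curve" summary can be checked by hand with the trapezoidal rule. Here is an illustrative pure-Python sketch over the ``fpr``/``tpr`` points returned by the :func:`roc_curve` example above (not scikit-learn's implementation):

```python
# Sketch: trapezoidal-rule area under the ROC curve, using the fpr/tpr
# arrays from the roc_curve example.  Illustrative only; roc_auc_score
# computes this internally.
fpr = [0.0, 0.0, 0.5, 0.5, 1.0]
tpr = [0.0, 0.5, 0.5, 1.0, 1.0]

auc = sum(
    (x2 - x1) * (y1 + y2) / 2  # trapezoid between consecutive curve points
    for (x1, y1), (x2, y2) in zip(zip(fpr, tpr), zip(fpr[1:], tpr[1:]))
)
print(auc)  # 0.75
```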
.. rubric:: References

.. [HT2001] Hand, D.J. and Till, R.J., (2001). `A simple generalisation of the
   area under the ROC curve for multiple class classification problems
   <http://link.springer.com/article/10.1023/A:1010920819831>`_,
   Machine learning, 45(2), pp. 171-186.

.. [FC2009] Ferri, Cèsar & Hernandez-Orallo, Jose & Modroiu, R. (2009).
   `An Experimental Comparison of Performance Measures for Classification
   <https://www.math.ucdavis.edu/~saito/data/roc/ferri-class-perf-metrics.pdf>`_,
   Pattern Recognition Letters. 30. 27-38.

.. [PD2000] Provost, F., Domingos, P. (2000). `Well-trained PETs: Improving
   probability estimation trees
   <https://fosterprovost.com/publication/well-trained-pets-improving-probability-estimation-trees/>`_
   (Section 6.2), CeDER Working Paper #IS-00-04, Stern School of Business,
   New York University.

.. [F2006] Fawcett, T., 2006. `An introduction to ROC analysis
   <http://www.sciencedirect.com/science/article/pii/S016786550500303X>`_,
   Pattern Recognition Letters, 27(8), pp. 861-874.

.. [F2001] Fawcett, T., 2001. `Using rule sets to maximize ROC performance
   <https://ieeexplore.ieee.org/document/989510/>`_,
   In Data Mining, 2001. Proceedings IEEE International Conference,
   pp. 131-138.

.. _det_curve:

Detection error tradeoff (DET)
------------------------------

The function :func:`det_curve` computes the detection error tradeoff curve
(DET curve) [WikipediaDET2017]_. Quoting Wikipedia:

  "A detection error tradeoff (DET) graph is a graphical plot of error rates
  for binary classification systems, plotting false reject rate vs. false
  accept rate. The x- and y-axes are scaled non-linearly by their standard
  normal deviates (or just by logarithmic transformation), yielding tradeoff
  curves that are more linear than ROC curves, and use most of the image area
  to highlight the differences of importance in the critical operating
  region."

DET curves are a variation of receiver operating characteristic (ROC) curves
where False Negative Rate is plotted on the y-axis instead of True Positive
Rate. DET curves are commonly plotted in normal deviate scale by
transformation with :math:`\phi^{-1}` (with :math:`\phi` being the cumulative
distribution function). The resulting performance curves explicitly visualize
the tradeoff of error types for given classification algorithms. See
[Martin1997]_ for examples and further motivation.

This figure compares the ROC and DET curves of two example classifiers on the
same classification task:

.. image:: auto_examples/model_selection/images/sphx_glr_plot_det_001.png
   :target: auto_examples/model_selection/plot_det.html
   :scale: 75
   :align: center

.. dropdown:: Properties

  * DET curves form a linear curve in normal deviate scale if the detection
    scores are normally (or close-to normally) distributed. It was shown by
    [Navratil2007]_ that the reverse is not necessarily true and even more
    general distributions are able to produce linear DET curves.
  * The normal deviate scale transformation spreads out the points such that a
    comparatively larger space of plot is occupied. Therefore curves with
    similar classification performance might be easier to distinguish on a DET
    plot.
  * With False Negative Rate being "inverse" to True Positive Rate, the point
    of perfection for DET curves is the origin (in contrast to the top left
    corner for ROC curves).

.. dropdown:: Applications and limitations

  DET curves are intuitive to read and hence allow quick visual assessment of
  a classifier's performance. Additionally DET curves can be consulted for
  threshold analysis and operating point selection. This is particularly
  helpful if a comparison of error types is required.

  On the other hand DET curves do not provide their metric as a single number.
  Therefore for either automated evaluation or comparison to other
  classification tasks, metrics like the derived area under ROC curve might be
  better suited.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_det.py` for an example
  comparison between receiver operating characteristic (ROC) curves and
  Detection error tradeoff (DET) curves.

.. rubric:: References

.. [WikipediaDET2017] Wikipedia contributors. Detection error tradeoff.
   Wikipedia, The Free Encyclopedia. September 4, 2017, 23:33 UTC.
   Available at:
   https://en.wikipedia.org/w/index.php?title=Detection_error_tradeoff&oldid=798982054.
   Accessed February 19, 2018.

.. [Martin1997] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and
   M. Przybocki, `The DET Curve in Assessment of Detection Task Performance
   <https://ccc.inaoep.mx/~villasen/bib/martin97det.pdf>`_, NIST 1997.

.. [Navratil2007] J. Navractil and D. Klusacek,
   `On Linear DETs <https://ieeexplore.ieee.org/document/4218079>`_,
   2007 IEEE International Conference on Acoustics, Speech and Signal
   Processing - ICASSP '07, Honolulu, HI, 2007, pp. IV-229-IV-232.

.. _zero_one_loss:

Zero one loss
-------------

The :func:`zero_one_loss` function computes the sum or the average of the 0-1
classification loss (:math:`L_{0-1}`) over :math:`n_{\text{samples}}`. By
default, the function normalizes over the sample. To get the sum of the
:math:`L_{0-1}`, set ``normalize`` to ``False``.

In multilabel classification, the :func:`zero_one_loss` scores a subset as one
if its labels strictly match the predictions, and as a zero if there are any
errors. By default, the function returns the percentage of imperfectly
predicted subsets. To get the count of such subsets instead, set ``normalize``
to ``False``.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the 0-1 loss
:math:`L_{0-1}` is defined as:

.. math::

   L_{0-1}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i \not= y_i)

where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_. The zero-one loss can
also be computed as :math:`zero{-}one\ loss = 1 - accuracy`. ::

  >>> from sklearn.metrics import zero_one_loss
  >>> y_pred = [1, 2, 3, 4]
  >>> y_true = [2, 2, 3, 4]
  >>> zero_one_loss(y_true, y_pred)
  0.25
  >>> zero_one_loss(y_true, y_pred, normalize=False)
  1.0

In the multilabel case with binary label indicators, where the first label set
[0,1] has an error::

  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
  0.5
  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)), normalize=False)
  1.0

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_feature_selection_plot_rfe_with_cross_validation.py`
  for an example of zero one loss usage to perform recursive feature
  elimination with cross-validation.

.. _brier_score_loss:

Brier score loss
----------------

The :func:`brier_score_loss` function computes the `Brier score
<https://en.wikipedia.org/wiki/Brier_score>`_ for binary classes [Brier1950]_.
Quoting Wikipedia:

  "The Brier score is a proper score function that measures the accuracy of
  probabilistic predictions. It is applicable to tasks in which predictions
  must assign probabilities to a set of mutually exclusive discrete outcomes."

This function returns the mean squared error of the actual outcome
:math:`y \in \{0,1\}` and the predicted probability estimate
:math:`p = \operatorname{Pr}(y = 1)` (:term:`predict_proba`) as outputted by:

.. math::

   BS = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}} - 1}(y_i - p_i)^2

The Brier score loss is also between 0 to 1, and the lower the value (the mean
square difference is smaller), the more accurate the prediction is.

Here is a small example of usage of this function::

    >>> import numpy as np
    >>> from sklearn.metrics import brier_score_loss
    >>> y_true = np.array([0, 1, 1, 0])
    >>> y_true_categorical = np.array(["spam", "ham", "ham", "spam"])
    >>> y_prob = np.array([0.1, 0.9, 0.8, 0.4])
    >>> y_pred = np.array([0, 1, 1, 0])
    >>> brier_score_loss(y_true, y_prob)
    0.055
    >>> brier_score_loss(y_true, 1 - y_prob, pos_label=0)
    0.055
    >>> brier_score_loss(y_true_categorical, y_prob, pos_label="ham")
    0.055
    >>> brier_score_loss(y_true, y_prob > 0.5)
    0.0

The Brier score can be used to assess how well a classifier is calibrated.
However, a lower Brier score loss does not always mean a better calibration.
This is because, by analogy with the bias-variance decomposition of the mean
squared error, the Brier score loss can be decomposed as the sum of calibration
loss and refinement loss [Bella2012]_. Calibration loss is defined as the mean
squared deviation from empirical probabilities derived from the slope of ROC
segments. Refinement loss can be defined as the expected optimal loss as
measured by the area under the optimal cost curve. Refinement loss can change
independently from calibration loss, thus a lower Brier score loss does not
necessarily mean a better calibrated model. "Only when refinement loss remains
the same does a lower Brier score loss always mean better calibration"
[Bella2012]_, [Flach2008]_.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py`
  for an example of Brier score loss usage to perform probability calibration
  of classifiers.
terms of probability
<ftp://ftp.library.noaa.gov/docs.lib/htdocs/rescue/mwr/078/mwr-078-01-0001.pdf>`_,
Monthly Weather Review 78.1 (1950).

.. [Bella2012] Bella, Ferri, Hernández-Orallo, and Ramírez-Quintana,
   `Calibration of Machine Learning Models
   <http://dmip.webs.upv.es/papers/BFHRHandbook2010.pdf>`_,
   in Khosrow-Pour, M. "Machine learning: concepts, methodologies, tools
   and applications." Hershey, PA: Information Science Reference (2012).

.. [Flach2008] Flach, Peter, and Edson Matsubara, `On classification, ranking,
   and probability estimation <https://drops.dagstuhl.de/opus/volltexte/2008/1382/>`_,
   Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für
   Informatik (2008).

.. _class_likelihood_ratios:

Class likelihood ratios
-----------------------

The :func:`class_likelihood_ratios` function computes the `positive and
negative likelihood ratios
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_
:math:`LR_\pm` for binary classes, which can be interpreted as the ratio of
post-test to pre-test odds, as explained below. As a consequence, this
metric is invariant w.r.t. the class prevalence (the number of samples in
the positive class divided by the total number of samples) and can be
extrapolated between populations regardless of any possible class imbalance.

The :math:`LR_\pm` metrics are therefore very useful in settings where the
data available to learn and evaluate a classifier is a study population with
nearly balanced classes, such as a case-control study, while the target
application, i.e. the general population, has very low prevalence.

The positive likelihood ratio :math:`LR_+` is the probability of a
classifier to correctly predict that a sample belongs to the positive class
divided by the probability of predicting the positive class for a sample
belonging to the negative class:

.. math::

   LR_+ = \frac{\text{PR}(P+|T+)}{\text{PR}(P+|T-)}.

The notation here refers to predicted (:math:`P`) or true (:math:`T`) label,
and the signs :math:`+` and :math:`-` refer to the positive and negative
class, respectively, e.g. :math:`P+` stands for "predicted positive".

Analogously, the negative likelihood ratio :math:`LR_-` is
the probability of a sample of the positive class being classified as belonging to the negative class divided by the probability of a sample of the negative class being correctly classified math LR frac text PR P T text PR P T For classifiers above chance math LR above 1 higher is better while math LR ranges from 0 to 1 and lower is better Values of math LR pm approx 1 correspond to chance level Notice that probabilities differ from counts for instance math operatorname PR P T is not equal to the number of true positive counts tp see the wikipedia page https en wikipedia org wiki Likelihood ratios in diagnostic testing for the actual formulas rubric Examples ref sphx glr auto examples model selection plot likelihood ratios py dropdown Interpretation across varying prevalence Both class likelihood ratios are interpretable in terms of an odds ratio pre test and post tests math text post test odds text Likelihood ratio times text pre test odds Odds are in general related to probabilities via math text odds frac text probability 1 text probability or equivalently math text probability frac text odds 1 text odds On a given population the pre test probability is given by the prevalence By converting odds to probabilities the likelihood ratios can be translated into a probability of truly belonging to either class before and after a classifier prediction math text post test odds text Likelihood ratio times frac text pre test probability 1 text pre test probability math text post test probability frac text post test odds 1 text post test odds dropdown Mathematical divergences The positive likelihood ratio is undefined when math fp 0 which can be interpreted as the classifier perfectly identifying positive cases If math fp 0 and additionally math tp 0 this leads to a zero zero division This happens for instance when using a DummyClassifier that always predicts the negative class and therefore the interpretation as a perfect classifier is lost The negative likelihood ratio 
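The two ratios can be computed directly from confusion-matrix counts. The sketch below is an illustration of the definitions above (the helper name is ours): it estimates :math:`\text{PR}(P+|T+)` by the sensitivity and :math:`\text{PR}(P-|T-)` by the specificity, and, unlike :func:`class_likelihood_ratios`, does not handle the divergent zero-denominator cases discussed in this section:

```python
def likelihood_ratios_sketch(tp, fp, tn, fn):
    # Sensitivity estimates PR(P+|T+); specificity estimates PR(P-|T-).
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity.
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# e.g. a classifier with 90% sensitivity and 80% specificity:
lr_pos, lr_neg = likelihood_ratios_sketch(tp=90, fp=20, tn=80, fn=10)
```

Here ``lr_pos`` is 4.5 (well above chance) and ``lr_neg`` is 0.125, consistent with the interpretation given above: :math:`LR_+ > 1` and :math:`LR_-` close to 0 both indicate a classifier better than chance.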
is undefined when math tn 0 Such divergence is invalid as math LR 1 would indicate an increase in the odds of a sample belonging to the positive class after being classified as negative as if the act of classifying caused the positive condition This includes the case of a DummyClassifier that always predicts the positive class i e when math tn fn 0 Both class likelihood ratios are undefined when math tp fn 0 which means that no samples of the positive class were present in the testing set This can also happen when cross validating highly imbalanced data In all the previous cases the func class likelihood ratios function raises by default an appropriate warning message and returns nan to avoid pollution when averaging over cross validation folds For a worked out demonstration of the func class likelihood ratios function see the example below dropdown References Wikipedia entry for Likelihood ratios in diagnostic testing https en wikipedia org wiki Likelihood ratios in diagnostic testing Brenner H Gefeller O 1997 Variation of sensitivity specificity likelihood ratios and predictive values with disease prevalence Statistics in medicine 16 9 981 991 d2 score classification D score for classification The D score computes the fraction of deviance explained It is a generalization of R where the squared error is generalized and replaced by a classification deviance of choice math text dev y hat y e g Log loss D is a form of a skill score It is calculated as math D 2 y hat y 1 frac text dev y hat y text dev y y text null Where math y text null is the optimal prediction of an intercept only model e g the per class proportion of y true in the case of the Log loss Like R the best possible score is 1 0 and it can be negative because the model can be arbitrarily worse A constant model that always predicts math y text null disregarding the input features would get a D score of 0 0 dropdown D2 log loss score The func d2 log loss score function implements the special case of D with 
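The D² skill score with the log loss as deviance can be sketched in plain Python from the definition above. This is an illustrative re-implementation (the helper names are ours; use :func:`d2_log_loss_score` in practice); the null model predicts the per-class proportions of ``y_true`` for every sample:

```python
import math

def d2_log_loss_sketch(y_true, probas):
    # D^2 = 1 - dev(y, y_hat) / dev(y, y_null), with the log loss as deviance.
    labels = sorted(set(y_true))
    idx = {c: i for i, c in enumerate(labels)}
    n = len(y_true)

    def log_loss(rows):
        # Mean negative log-likelihood of the true class.
        return -sum(math.log(row[idx[t]]) for t, row in zip(y_true, rows)) / n

    # Null model: constant per-class proportions of y_true.
    null_row = [y_true.count(c) / n for c in labels]
    return 1.0 - log_loss(probas) / log_loss([null_row] * n)
```

Predicting the class proportions themselves yields a score of exactly 0.0, and a near-perfect probabilistic prediction approaches 1.0, matching the doctest examples shown for :func:`d2_log_loss_score`.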
the log loss see ref log loss i e math text dev y hat y text log loss y hat y Here are some usage examples of the func d2 log loss score function from sklearn metrics import d2 log loss score y true 1 1 2 3 y pred 0 5 0 25 0 25 0 5 0 25 0 25 0 5 0 25 0 25 0 5 0 25 0 25 d2 log loss score y true y pred 0 0 y true 1 2 3 y pred 0 98 0 01 0 01 0 01 0 98 0 01 0 01 0 01 0 98 d2 log loss score y true y pred 0 981 y true 1 2 3 y pred 0 1 0 6 0 3 0 1 0 6 0 3 0 4 0 5 0 1 d2 log loss score y true y pred 0 552 multilabel ranking metrics Multilabel ranking metrics currentmodule sklearn metrics In multilabel learning each sample can have any number of ground truth labels associated with it The goal is to give high scores and better rank to the ground truth labels coverage error Coverage error The func coverage error function computes the average number of labels that have to be included in the final prediction such that all true labels are predicted This is useful if you want to know how many top scored labels you have to predict in average without missing any true one The best value of this metrics is thus the average number of true labels note Our implementation s score is 1 greater than the one given in Tsoumakas et al 2010 This extends it to handle the degenerate case in which an instance has 0 true labels Formally given a binary indicator matrix of the ground truth labels math y in left 0 1 right n text samples times n text labels and the score associated with each label math hat f in mathbb R n text samples times n text labels the coverage is defined as math coverage y hat f frac 1 n text samples sum i 0 n text samples 1 max j y ij 1 text rank ij with math text rank ij left left k hat f ik geq hat f ij right right Given the rank definition ties in y scores are broken by giving the maximal rank that would have been assigned to all tied values Here is a small example of usage of this function import numpy as np from sklearn metrics import coverage error y true np array 1 0 0 
0 0 1 y score np array 0 75 0 5 1 1 0 2 0 1 coverage error y true y score 2 5 label ranking average precision Label ranking average precision The func label ranking average precision score function implements label ranking average precision LRAP This metric is linked to the func average precision score function but is based on the notion of label ranking instead of precision and recall Label ranking average precision LRAP averages over the samples the answer to the following question for each ground truth label what fraction of higher ranked labels were true labels This performance measure will be higher if you are able to give better rank to the labels associated with each sample The obtained score is always strictly greater than 0 and the best value is 1 If there is exactly one relevant label per sample label ranking average precision is equivalent to the mean reciprocal rank https en wikipedia org wiki Mean reciprocal rank Formally given a binary indicator matrix of the ground truth labels math y in left 0 1 right n text samples times n text labels and the score associated with each label math hat f in mathbb R n text samples times n text labels the average precision is defined as math LRAP y hat f frac 1 n text samples sum i 0 n text samples 1 frac 1 y i 0 sum j y ij 1 frac mathcal L ij text rank ij where math mathcal L ij left k y ik 1 hat f ik geq hat f ij right math text rank ij left left k hat f ik geq hat f ij right right math cdot computes the cardinality of the set i e the number of elements in the set and math cdot 0 is the math ell 0 norm which computes the number of nonzero elements in a vector Here is a small example of usage of this function import numpy as np from sklearn metrics import label ranking average precision score y true np array 1 0 0 0 0 1 y score np array 0 75 0 5 1 1 0 2 0 1 label ranking average precision score y true y score 0 416 label ranking loss Ranking loss The func label ranking loss function computes the ranking loss which 
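The LRAP formula above can be traced with a short plain-Python sketch (illustrative only; the helper name is ours, and :func:`label_ranking_average_precision_score` should be used in practice). Ties receive the maximal rank, as in the rank definition given earlier:

```python
def lrap_sketch(y_true, y_score):
    # For each relevant label j, compute the fraction of labels ranked at or
    # above j that are themselves relevant, then average per sample.
    total = 0.0
    for truth, scores in zip(y_true, y_score):
        relevant = [j for j, t in enumerate(truth) if t == 1]
        sample = 0.0
        for j in relevant:
            rank = sum(s >= scores[j] for s in scores)                 # |{k: f_ik >= f_ij}|
            rel_above = sum(scores[k] >= scores[j] for k in relevant)  # |L_ij|
            sample += rel_above / rank
        total += sample / len(relevant)
    return total / len(y_true)

print(lrap_sketch([[1, 0, 0], [0, 0, 1]],
                  [[0.75, 0.5, 1], [1, 0.2, 0.1]]))  # 0.4166...
```

This reproduces the 0.416... of the doctest above: the only relevant label of the first sample is ranked second (contribution 1/2), and that of the second sample is ranked third (contribution 1/3).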
averages over the samples the number of label pairs that are incorrectly ordered i e true labels have a lower score than false labels weighted by the inverse of the number of ordered pairs of false and true labels The lowest achievable ranking loss is zero Formally given a binary indicator matrix of the ground truth labels math y in left 0 1 right n text samples times n text labels and the score associated with each label math hat f in mathbb R n text samples times n text labels the ranking loss is defined as math ranking loss y hat f frac 1 n text samples sum i 0 n text samples 1 frac 1 y i 0 n text labels y i 0 left left k l hat f ik leq hat f il y ik 1 y il 0 right right where math cdot computes the cardinality of the set i e the number of elements in the set and math cdot 0 is the math ell 0 norm which computes the number of nonzero elements in a vector Here is a small example of usage of this function import numpy as np from sklearn metrics import label ranking loss y true np array 1 0 0 0 0 1 y score np array 0 75 0 5 1 1 0 2 0 1 label ranking loss y true y score 0 75 With the following prediction we have perfect and minimal loss y score np array 1 0 0 1 0 2 0 1 0 2 0 9 label ranking loss y true y score 0 0 dropdown References Tsoumakas G Katakis I Vlahavas I 2010 Mining multi label data In Data mining and knowledge discovery handbook pp 667 685 Springer US ndcg Normalized Discounted Cumulative Gain Discounted Cumulative Gain DCG and Normalized Discounted Cumulative Gain NDCG are ranking metrics implemented in func sklearn metrics dcg score and func sklearn metrics ndcg score they compare a predicted order to ground truth scores such as the relevance of answers to a query From the Wikipedia page for Discounted Cumulative Gain Discounted cumulative gain DCG is a measure of ranking quality In information retrieval it is often used to measure effectiveness of web search engine algorithms or related applications Using a graded relevance scale of documents in a 
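The ranking loss defined above also has a direct plain-Python transcription (an illustrative sketch; the helper name is ours, and real code should use :func:`label_ranking_loss`):

```python
def ranking_loss_sketch(y_true, y_score):
    # Per sample: fraction of (true, false) label pairs that are wrongly
    # ordered, i.e. the true label scores <= the false label; then average.
    total = 0.0
    for truth, scores in zip(y_true, y_score):
        pos = [j for j, t in enumerate(truth) if t == 1]
        neg = [j for j, t in enumerate(truth) if t == 0]
        bad = sum(scores[k] <= scores[l] for k in pos for l in neg)
        total += bad / (len(pos) * len(neg))
    return total / len(y_true)
```

On the doctest inputs above this gives 0.75, and the "perfect" score matrix gives the minimal loss of 0.0.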
search engine result set DCG measures the usefulness or gain of a document based on its position in the result list The gain is accumulated from the top of the result list to the bottom with the gain of each result discounted at lower ranks DCG orders the true targets e g relevance of query answers in the predicted order then multiplies them by a logarithmic decay and sums the result The sum can be truncated after the first math K results in which case we call it DCG K NDCG or NDCG K is DCG divided by the DCG obtained by a perfect prediction so that it is always between 0 and 1 Usually NDCG is preferred to DCG Compared with the ranking loss NDCG can take into account relevance scores rather than a ground truth ranking So if the ground truth consists only of an ordering the ranking loss should be preferred if the ground truth consists of actual usefulness scores e g 0 for irrelevant 1 for relevant 2 for very relevant NDCG can be used For one sample given the vector of continuous ground truth values for each target math y in mathbb R M where math M is the number of outputs and the prediction math hat y which induces the ranking function math f the DCG score is math sum r 1 min K M frac y f r log 1 r and the NDCG score is the DCG score divided by the DCG score obtained for math y dropdown References Wikipedia entry for Discounted Cumulative Gain https en wikipedia org wiki Discounted cumulative gain Jarvelin K Kekalainen J 2002 Cumulated gain based evaluation of IR techniques ACM Transactions on Information Systems TOIS 20 4 422 446 Wang Y Wang L Li Y He D Chen W Liu T Y 2013 May A theoretical analysis of NDCG ranking measures In Proceedings of the 26th Annual Conference on Learning Theory COLT 2013 McSherry F Najork M 2008 March Computing information retrieval performance measures efficiently in the presence of tied scores In European conference on information retrieval pp 414 421 Springer Berlin Heidelberg regression metrics Regression metrics currentmodule sklearn 
metrics The mod sklearn metrics module implements several loss score and utility functions to measure regression performance Some of those have been enhanced to handle the multioutput case func mean squared error func mean absolute error func r2 score func explained variance score func mean pinball loss func d2 pinball score and func d2 absolute error score These functions have a multioutput keyword argument which specifies the way the scores or losses for each individual target should be averaged The default is uniform average which specifies a uniformly weighted mean over outputs If an ndarray of shape n outputs is passed then its entries are interpreted as weights and an according weighted average is returned If multioutput is raw values then all unaltered individual scores or losses will be returned in an array of shape n outputs The func r2 score and func explained variance score accept an additional value variance weighted for the multioutput parameter This option leads to a weighting of each individual score by the variance of the corresponding target variable This setting quantifies the globally captured unscaled variance If the target variables are of different scale then this score puts more importance on explaining the higher variance variables r2 score R score the coefficient of determination The func r2 score function computes the coefficient of determination https en wikipedia org wiki Coefficient of determination usually denoted as math R 2 It represents the proportion of variance of y that has been explained by the independent variables in the model It provides an indication of goodness of fit and therefore a measure of how well unseen samples are likely to be predicted by the model through the proportion of explained variance As such variance is dataset dependent math R 2 may not be meaningfully comparable across different datasets Best possible score is 1 0 and it can be negative because the model can be arbitrarily worse A constant model that 
always predicts the expected (average) value of y, disregarding the input
features, would get an :math:`R^2` score of 0.0.

Note: when the prediction residuals have zero mean, the :math:`R^2` score
and the :ref:`explained_variance_score` are identical.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value for total :math:`n` samples, the
estimated :math:`R^2` is defined as:

.. math::

   R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}

where :math:`\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i` and
:math:`\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \epsilon_i^2`.

Note that :func:`r2_score` calculates unadjusted :math:`R^2` without
correcting for bias in sample variance of y.

In the particular case where the true target is constant, the :math:`R^2`
score is not finite: it is either ``NaN`` (perfect predictions) or ``-Inf``
(imperfect predictions). Such non-finite scores may prevent correct model
optimization, such as grid-search cross-validation, to be performed
correctly. For this reason the default behaviour of :func:`r2_score` is to
replace them with 1.0 (perfect predictions) or 0.0 (imperfect predictions).
If ``force_finite`` is set to ``False``, this score falls back on the
original :math:`R^2` definition.

Here is a small example of usage of the :func:`r2_score` function::

  >>> from sklearn.metrics import r2_score
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> r2_score(y_true, y_pred)
  0.948...
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> r2_score(y_true, y_pred, multioutput='variance_weighted')
  0.938...
  >>> r2_score(y_true, y_pred, multioutput='uniform_average')
  0.936...
  >>> r2_score(y_true, y_pred, multioutput='raw_values')
  array([0.965..., 0.908...])
  >>> r2_score(y_true, y_pred, multioutput=[0.3, 0.7])
  0.925...
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2]
  >>> r2_score(y_true, y_pred)
  1.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  nan
  >>> y_pred = [-2, -2, -2 + 1e-8]
  >>> r2_score(y_true, y_pred)
  0.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  -inf

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_and_elasticnet.py`
  for an example of R² score usage to evaluate Lasso and
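The :math:`R^2` definition above reduces to a few lines of plain Python for the single-output case. This sketch (helper name ours) computes the unadjusted score and omits the ``force_finite`` handling of :func:`r2_score`:

```python
def r2_sketch(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot, per the definition above.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

print(r2_sketch([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))  # 0.948...
```

A perfect prediction gives 1.0, and a prediction worse than the constant-mean baseline gives a negative score, as noted above.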
Elastic Net on sparse signals mean absolute error Mean absolute error The func mean absolute error function computes mean absolute error https en wikipedia org wiki Mean absolute error a risk metric corresponding to the expected value of the absolute error loss or math l1 norm loss If math hat y i is the predicted value of the math i th sample and math y i is the corresponding true value then the mean absolute error MAE estimated over math n text samples is defined as math text MAE y hat y frac 1 n text samples sum i 0 n text samples 1 left y i hat y i right Here is a small example of usage of the func mean absolute error function from sklearn metrics import mean absolute error y true 3 0 5 2 7 y pred 2 5 0 0 2 8 mean absolute error y true y pred 0 5 y true 0 5 1 1 1 7 6 y pred 0 2 1 2 8 5 mean absolute error y true y pred 0 75 mean absolute error y true y pred multioutput raw values array 0 5 1 mean absolute error y true y pred multioutput 0 3 0 7 0 85 mean squared error Mean squared error The func mean squared error function computes mean squared error https en wikipedia org wiki Mean squared error a risk metric corresponding to the expected value of the squared quadratic error or loss If math hat y i is the predicted value of the math i th sample and math y i is the corresponding true value then the mean squared error MSE estimated over math n text samples is defined as math text MSE y hat y frac 1 n text samples sum i 0 n text samples 1 y i hat y i 2 Here is a small example of usage of the func mean squared error function from sklearn metrics import mean squared error y true 3 0 5 2 7 y pred 2 5 0 0 2 8 mean squared error y true y pred 0 375 y true 0 5 1 1 1 7 6 y pred 0 2 1 2 8 5 mean squared error y true y pred 0 7083 rubric Examples See ref sphx glr auto examples ensemble plot gradient boosting regression py for an example of mean squared error usage to evaluate gradient boosting regression Taking the square root of the MSE called the root mean squared error 
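MAE and MSE as defined above, together with the root of the latter, can be sketched in plain Python for the single-output case (helper name ours; use :func:`mean_absolute_error`, :func:`mean_squared_error`, and :func:`root_mean_squared_error` in practice):

```python
import math

def regression_errors_sketch(y_true, y_pred):
    # MAE, MSE and RMSE per the formulas above (single output).
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    return mae, mse, math.sqrt(mse)

print(regression_errors_sketch([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))
# (0.5, 0.375, 0.6123...)
```

Note how the squared loss weights the largest residual (1.0) more heavily than MAE does, which is the usual motivation for choosing between the two.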
(RMSE) is another common metric that provides a measure in the same units as
the target variable. RMSE is available through the
:func:`root_mean_squared_error` function.

.. _mean_squared_log_error:

Mean squared logarithmic error
------------------------------

The :func:`mean_squared_log_error` function computes a risk metric
corresponding to the expected value of the squared logarithmic (quadratic)
error or loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the mean squared
logarithmic error (MSLE) estimated over :math:`n_\text{samples}` is defined
as:

.. math::

   \text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i) )^2.

Where :math:`\log_e (x)` means the natural logarithm of :math:`x`. This
metric is best to use when targets have exponential growth, such as
population counts, average sales of a commodity over a span of years, etc.
Note that this metric penalizes an under-predicted estimate greater than an
over-predicted estimate.

Here is a small example of usage of the :func:`mean_squared_log_error`
function::

  >>> from sklearn.metrics import mean_squared_log_error
  >>> y_true = [3, 5, 2.5, 7]
  >>> y_pred = [2.5, 5, 4, 8]
  >>> mean_squared_log_error(y_true, y_pred)
  0.039...
  >>> y_true = [[0.5, 1], [1, 2], [7, 6]]
  >>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
  >>> mean_squared_log_error(y_true, y_pred)
  0.044...

The root mean squared logarithmic error (RMSLE) is available through the
:func:`root_mean_squared_log_error` function.

.. _mean_absolute_percentage_error:

Mean absolute percentage error
------------------------------

The :func:`mean_absolute_percentage_error` (MAPE), also known as mean
absolute percentage deviation (MAPD), is an evaluation metric for regression
problems. The idea of this metric is to be sensitive to relative errors. It
is for example not changed by a global scaling of the target variable.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the mean absolute
percentage error (MAPE) estimated over :math:`n_\text{samples}` is defined
as:

.. math::

   \text{MAPE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \frac{\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}

where :math:`\epsilon` is an arbitrary small yet strictly positive number to
avoid undefined results when y is zero.

The :func:`mean_absolute_percentage_error` function supports multioutput.

Here is a small example of usage of the
:func:`mean_absolute_percentage_error` function::

  >>> from sklearn.metrics import mean_absolute_percentage_error
  >>> y_true = [1, 10, 1e6]
  >>> y_pred = [0.9, 15, 1.2e6]
  >>> mean_absolute_percentage_error(y_true, y_pred)
  0.2666...

In the above example, if we had used ``mean_absolute_error``, it would have
ignored the small magnitude values and only reflected the error in the
prediction of the highest magnitude value. That problem is resolved in the
case of MAPE because it calculates the relative percentage error with
respect to the actual output.

.. note::

    The MAPE formula here does not represent the common "percentage"
    definition: the percentage in the range [0, 100] is converted to a
    relative value in the range [0, 1] by dividing by 100. Thus, an error of
    200% corresponds to a relative error of 2. The motivation here is to
    have a range of values that is more consistent with other error metrics
    in scikit-learn, such as ``accuracy_score``.

    To obtain the mean absolute percentage error as per the Wikipedia
    formula, multiply the ``mean_absolute_percentage_error`` computed here
    by 100.

.. dropdown:: References

  * `Wikipedia entry for Mean Absolute Percentage Error
    <https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`_

.. _median_absolute_error:

Median absolute error
---------------------

The :func:`median_absolute_error` is particularly interesting because it is
robust to outliers. The loss is calculated by taking the median of all
absolute differences between the target and the prediction.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the median absolute error
(MedAE) estimated over :math:`n_\text{samples}` is defined as:

.. math::

   \text{MedAE}(y, \hat{y}) = \text{median}(\mid y_1 - \hat{y}_1 \mid, \ldots, \mid y_n - \hat{y}_n \mid).

The :func:`median_absolute_error` does not support multioutput.

Here is a
small example of usage of the func median absolute error function from sklearn metrics import median absolute error y true 3 0 5 2 7 y pred 2 5 0 0 2 8 median absolute error y true y pred 0 5 max error Max error The func max error function computes the maximum residual error https en wikipedia org wiki Errors and residuals a metric that captures the worst case error between the predicted value and the true value In a perfectly fitted single output regression model max error would be 0 on the training set and though this would be highly unlikely in the real world this metric shows the extent of error that the model had when it was fitted If math hat y i is the predicted value of the math i th sample and math y i is the corresponding true value then the max error is defined as math text Max Error y hat y max y i hat y i Here is a small example of usage of the func max error function from sklearn metrics import max error y true 3 2 7 1 y pred 9 2 7 1 max error y true y pred 6 The func max error does not support multioutput explained variance score Explained variance score The func explained variance score computes the explained variance regression score https en wikipedia org wiki Explained variation If math hat y is the estimated target output math y the corresponding correct target output and math Var is Variance https en wikipedia org wiki Variance the square of the standard deviation then the explained variance is estimated as follow math explained variance y hat y 1 frac Var y hat y Var y The best possible score is 1 0 lower values are worse topic Link to ref r2 score The difference between the explained variance score and the ref r2 score is that the explained variance score does not account for systematic offset in the prediction For this reason the ref r2 score should be preferred in general In the particular case where the true target is constant the Explained Variance score is not finite it is either NaN perfect predictions or Inf imperfect predictions Such 
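The explained-variance definition above has a compact plain-Python transcription for the single-output case (helper name ours; no ``force_finite`` handling, unlike :func:`explained_variance_score`). It also makes the difference from :math:`R^2` concrete: a constant offset in the predictions leaves this score at 1.0:

```python
def explained_variance_sketch(y_true, y_pred):
    # 1 - Var(y - y_hat) / Var(y); a systematic offset in y_pred shifts the
    # residuals' mean but not their variance, so it is not penalized.
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    return 1.0 - var(residuals) / var(y_true)

print(explained_variance_sketch([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))  # 0.957...
print(explained_variance_sketch([1, 2, 3], [2, 3, 4]))               # 1.0
```

The second call illustrates why :math:`R^2` is generally preferred: predictions that are all off by +1 still "explain" all the variance.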
non-finite scores may prevent correct model optimization, such as
grid-search cross-validation, to be performed correctly. For this reason the
default behaviour of :func:`explained_variance_score` is to replace them
with 1.0 (perfect predictions) or 0.0 (imperfect predictions). You can set
the ``force_finite`` parameter to ``False`` to prevent this fix from
happening and fall back on the original Explained Variance score.

Here is a small example of usage of the :func:`explained_variance_score`
function::

  >>> from sklearn.metrics import explained_variance_score
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> explained_variance_score(y_true, y_pred)
  0.957...
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> explained_variance_score(y_true, y_pred, multioutput='raw_values')
  array([0.967..., 1.        ])
  >>> explained_variance_score(y_true, y_pred, multioutput=[0.3, 0.7])
  0.990...
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2]
  >>> explained_variance_score(y_true, y_pred)
  1.0
  >>> explained_variance_score(y_true, y_pred, force_finite=False)
  nan
  >>> y_pred = [-2, -2, -2 + 1e-8]
  >>> explained_variance_score(y_true, y_pred)
  0.0
  >>> explained_variance_score(y_true, y_pred, force_finite=False)
  -inf

.. _mean_tweedie_deviance:

Mean Poisson, Gamma, and Tweedie deviances
------------------------------------------

The :func:`mean_tweedie_deviance` function computes the `mean Tweedie
deviance error
<https://en.wikipedia.org/wiki/Tweedie_distribution#The_Tweedie_deviance>`_
with a ``power`` parameter (:math:`p`). This is a metric that elicits
predicted expectation values of regression targets.

The following special cases exist:

- when ``power=0`` it is equivalent to :func:`mean_squared_error`.
- when ``power=1`` it is equivalent to :func:`mean_poisson_deviance`.
- when ``power=2`` it is equivalent to :func:`mean_gamma_deviance`.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the mean Tweedie deviance
error (D) for power :math:`p`, estimated over :math:`n_\text{samples}`, is
defined as

.. math::

   \text{D}(y, \hat{y}) = \frac{1}{n_\text{samples}}
   \sum_{i=0}^{n_\text{samples} - 1}
   \begin{cases}
   (y_i-\hat{y}_i)^2, & \text{for } p=0 \text{ (Normal)}\\
   2(y_i \log(y_i/\hat{y}_i) + \hat{y}_i - y_i),  & \text{for } p=1 \text{ (Poisson)}\\
   2(\log(\hat{y}_i/y_i) + y_i/\hat{y}_i - 1),  & \text{for } p=2 \text{ (Gamma)}\\
   2\left(\frac{\max(y_i,0)^{2-p}}{(1-p)(2-p)} -
   \frac{y_i\,\hat{y}_i^{1-p}}{1-p} + \frac{\hat{y}_i^{2-p}}{2-p}\right),
   & \text{otherwise}
   \end{cases}

Tweedie deviance is a homogeneous function of degree ``2-power``. Thus, a
Gamma distribution with ``power=2`` means that simultaneously scaling
``y_true`` and ``y_pred`` has no effect on the deviance. For a Poisson
distribution (``power=1``) the deviance scales linearly, and for a Normal
distribution (``power=0``), quadratically. In general, the higher ``power``,
the less weight is given to extreme deviations between true and predicted
targets.

For instance, let's compare the two predictions 1.5 and 150 that are both
50% larger than their corresponding true value.

The mean squared error (``power=0``) is very sensitive to the prediction
difference of the second point::

    >>> from sklearn.metrics import mean_tweedie_deviance
    >>> mean_tweedie_deviance([1.0], [1.5], power=0)
    0.25
    >>> mean_tweedie_deviance([100.], [150.], power=0)
    2500.0

If we increase ``power`` to 1::

    >>> mean_tweedie_deviance([1.0], [1.5], power=1)
    0.18...
    >>> mean_tweedie_deviance([100.], [150.], power=1)
    18.9...

the difference in errors decreases. Finally, by setting ``power=2``::

    >>> mean_tweedie_deviance([1.0], [1.5], power=2)
    0.14...
    >>> mean_tweedie_deviance([100.], [150.], power=2)
    0.14...

we would get identical errors. The deviance when ``power=2`` is thus only
sensitive to relative errors.

.. _pinball_loss:

Pinball loss
------------

The :func:`mean_pinball_loss` function is used to evaluate the predictive
performance of `quantile regression
<https://en.wikipedia.org/wiki/Quantile_regression>`_ models.

.. math::

   \text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}}
   \sum_{i=0}^{n_{\text{samples}}-1}
   \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)

The value of pinball loss is equivalent to half of
:func:`mean_absolute_error` when the quantile parameter ``alpha`` is set to
0.5.

Here is a small example of usage of the :func:`mean_pinball_loss` function::

  >>> from sklearn.metrics import mean_pinball_loss
  >>> y_true = [1, 2, 3]
  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)
  0.03...
  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)
  0.3...
  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)
  0.3...
  >>> mean_pinball_loss(y_true, [1, 2,
4 alpha 0 9 0 03 mean pinball loss y true y true alpha 0 1 0 0 mean pinball loss y true y true alpha 0 9 0 0 It is possible to build a scorer object with a specific choice of alpha from sklearn metrics import make scorer mean pinball loss 95p make scorer mean pinball loss alpha 0 95 Such a scorer can be used to evaluate the generalization performance of a quantile regressor via cross validation from sklearn datasets import make regression from sklearn model selection import cross val score from sklearn ensemble import GradientBoostingRegressor X y make regression n samples 100 random state 0 estimator GradientBoostingRegressor loss quantile alpha 0 95 random state 0 cross val score estimator X y cv 5 scoring mean pinball loss 95p array 13 6 9 7 23 3 9 5 10 4 It is also possible to build scorer objects for hyper parameter tuning The sign of the loss must be switched to ensure that greater means better as explained in the example linked below rubric Examples See ref sphx glr auto examples ensemble plot gradient boosting quantile py for an example of using the pinball loss to evaluate and tune the hyper parameters of quantile regression models on data with non symmetric noise and outliers d2 score D score The D score computes the fraction of deviance explained It is a generalization of R where the squared error is generalized and replaced by a deviance of choice math text dev y hat y e g Tweedie pinball or mean absolute error D is a form of a skill score It is calculated as math D 2 y hat y 1 frac text dev y hat y text dev y y text null Where math y text null is the optimal prediction of an intercept only model e g the mean of y true for the Tweedie case the median for absolute error and the alpha quantile for pinball loss Like R the best possible score is 1 0 and it can be negative because the model can be arbitrarily worse A constant model that always predicts math y text null disregarding the input features would get a D score of 0 0 dropdown D Tweedie score The 
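The pinball loss formula above is short enough to transcribe directly in plain Python (an illustrative sketch; the helper name is ours, and :func:`mean_pinball_loss` should be used in practice). It makes the asymmetry visible: under-predictions are weighted by :math:`\alpha` and over-predictions by :math:`1 - \alpha`:

```python
def pinball_loss_sketch(y_true, y_pred, alpha=0.5):
    # alpha * max(y - y_hat, 0) penalizes under-prediction;
    # (1 - alpha) * max(y_hat - y, 0) penalizes over-prediction.
    n = len(y_true)
    return sum(alpha * max(t - p, 0) + (1 - alpha) * max(p - t, 0)
               for t, p in zip(y_true, y_pred)) / n
```

With ``alpha=0.1``, under-predicting one sample by 1 costs only 0.1/3, while the same over-prediction would cost 0.9/3, which is why a low ``alpha`` targets a low quantile of the target distribution.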
func d2 tweedie score function implements the special case of D where math text dev y hat y is the Tweedie deviance see ref mean tweedie deviance It is also known as D Tweedie and is related to McFadden s likelihood ratio index The argument power defines the Tweedie power as for func mean tweedie deviance Note that for power 0 func d2 tweedie score equals func r2 score for single targets A scorer object with a specific choice of power can be built by from sklearn metrics import d2 tweedie score make scorer d2 tweedie score 15 make scorer d2 tweedie score power 1 5 dropdown D pinball score The func d2 pinball score function implements the special case of D with the pinball loss see ref pinball loss i e math text dev y hat y text pinball y hat y The argument alpha defines the slope of the pinball loss as for func mean pinball loss ref pinball loss It determines the quantile level alpha for which the pinball loss and also D are optimal Note that for alpha 0 5 the default func d2 pinball score equals func d2 absolute error score A scorer object with a specific choice of alpha can be built by from sklearn metrics import d2 pinball score make scorer d2 pinball score 08 make scorer d2 pinball score alpha 0 8 dropdown D absolute error score The func d2 absolute error score function implements the special case of the ref mean absolute error math text dev y hat y text MAE y hat y Here are some usage examples of the func d2 absolute error score function from sklearn metrics import d2 absolute error score y true 3 0 5 2 7 y pred 2 5 0 0 2 8 d2 absolute error score y true y pred 0 764 y true 1 2 3 y pred 1 2 3 d2 absolute error score y true y pred 1 0 y true 1 2 3 y pred 2 2 2 d2 absolute error score y true y pred 0 0 visualization regression evaluation Visual evaluation of regression models Among methods to assess the quality of regression models scikit learn provides the class sklearn metrics PredictionErrorDisplay class It allows to visually inspect the prediction errors of 
a model in two different manners image auto examples model selection images sphx glr plot cv predict 001 png target auto examples model selection plot cv predict html scale 75 align center The plot on the left shows the actual values vs predicted values For a noise free regression task aiming to predict the conditional expectation of y a perfect regression model would display data points on the diagonal defined by predicted equal to actual values The further away from this optimal line the larger the error of the model In a more realistic setting with irreducible noise that is when not all the variations of y can be explained by features in X then the best model would lead to a cloud of points densely arranged around the diagonal Note that the above only holds when the predicted values is the expected value of y given X This is typically the case for regression models that minimize the mean squared error objective function or more generally the ref mean Tweedie deviance mean tweedie deviance for any value of its power parameter When plotting the predictions of an estimator that predicts a quantile of y given X e g class sklearn linear model QuantileRegressor or any other model minimizing the ref pinball loss pinball loss a fraction of the points are either expected to lie above or below the diagonal depending on the estimated quantile level All in all while intuitive to read this plot does not really inform us on what to do to obtain a better model The right hand side plot shows the residuals i e the difference between the actual and the predicted values vs the predicted values This plot makes it easier to visualize if the residuals follow and homoscedastic or heteroschedastic https en wikipedia org wiki Homoscedasticity and heteroscedasticity distribution In particular if the true distribution of y X is Poisson or Gamma distributed it is expected that the variance of the residuals of the optimal model would grow with the predicted value of E y X either linearly 
for Poisson or quadratically for Gamma When fitting a linear least squares regression model see class sklearn linear model LinearRegression and class sklearn linear model Ridge we can use this plot to check if some of the model assumptions https en wikipedia org wiki Ordinary least squares Assumptions are met in particular that the residuals should be uncorrelated their expected value should be null and that their variance should be constant homoschedasticity If this is not the case and in particular if the residuals plot show some banana shaped structure this is a hint that the model is likely mis specified and that non linear feature engineering or switching to a non linear regression model might be useful Refer to the example below to see a model evaluation that makes use of this display rubric Examples See ref sphx glr auto examples compose plot transformed target py for an example on how to use class sklearn metrics PredictionErrorDisplay to visualize the prediction quality improvement of a regression model obtained by transforming the target before learning clustering metrics Clustering metrics currentmodule sklearn metrics The mod sklearn metrics module implements several loss score and utility functions to measure clustering performance For more information see the ref clustering evaluation section for instance clustering and ref biclustering evaluation for biclustering dummy estimators Dummy estimators currentmodule sklearn dummy When doing supervised learning a simple sanity check consists of comparing one s estimator against simple rules of thumb class DummyClassifier implements several such simple strategies for classification stratified generates random predictions by respecting the training set class distribution most frequent always predicts the most frequent label in the training set prior always predicts the class that maximizes the class prior like most frequent and predict proba returns the class prior uniform generates predictions uniformly at 
random constant always predicts a constant label that is provided by the user A major motivation of this method is F1 scoring when the positive class is in the minority Note that with all these strategies the predict method completely ignores the input data To illustrate class DummyClassifier first let s create an imbalanced dataset from sklearn datasets import load iris from sklearn model selection import train test split X y load iris return X y True y y 1 1 X train X test y train y test train test split X y random state 0 Next let s compare the accuracy of SVC and most frequent from sklearn dummy import DummyClassifier from sklearn svm import SVC clf SVC kernel linear C 1 fit X train y train clf score X test y test 0 63 clf DummyClassifier strategy most frequent random state 0 clf fit X train y train DummyClassifier random state 0 strategy most frequent clf score X test y test 0 57 We see that SVC doesn t do much better than a dummy classifier Now let s change the kernel clf SVC kernel rbf C 1 fit X train y train clf score X test y test 0 94 We see that the accuracy was boosted to almost 100 A cross validation strategy is recommended for a better estimate of the accuracy if it is not too CPU costly For more information see the ref cross validation section Moreover if you want to optimize over the parameter space it is highly recommended to use an appropriate methodology see the ref grid search section for details More generally when the accuracy of a classifier is too close to random it probably means that something went wrong features are not helpful a hyperparameter is not correctly tuned the classifier is suffering from class imbalance etc class DummyRegressor also implements four simple rules of thumb for regression mean always predicts the mean of the training targets median always predicts the median of the training targets quantile always predicts a user provided quantile of the training targets constant always predicts a constant value that is provided 
by the user In all these strategies the predict method completely ignores the input data |
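The regressor rules of thumb above can be sketched as follows; this is a minimal illustration using the standard :class:`DummyRegressor` API, with made-up toy arrays:

```python
import numpy as np
from sklearn.dummy import DummyRegressor

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # features are ignored by dummy strategies
y = np.array([1.0, 2.0, 3.0, 10.0])

# "mean" always predicts the training-target mean, "median" the median,
# "quantile" a user-chosen quantile, and "constant" a fixed user value.
mean_reg = DummyRegressor(strategy="mean").fit(X, y)
median_reg = DummyRegressor(strategy="median").fit(X, y)
q90_reg = DummyRegressor(strategy="quantile", quantile=0.9).fit(X, y)
const_reg = DummyRegressor(strategy="constant", constant=42.0).fit(X, y)

# The input to predict() is irrelevant: any X yields the same output.
print(mean_reg.predict([[100.0]]))    # [4.]  (mean of y)
print(median_reg.predict([[100.0]]))  # [2.5]
print(const_reg.predict([[100.0]]))   # [42.]
```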
.. _covariance:
===================================================
Covariance estimation
===================================================
.. currentmodule:: sklearn.covariance
Many statistical problems require the estimation of a
population's covariance matrix, which can be seen as an estimation of
data set scatter plot shape. Most of the time, such an estimation has
to be done on a sample whose properties (size, structure, homogeneity)
have a large influence on the estimation's quality. The
:mod:`sklearn.covariance` package provides tools for accurately estimating
a population's covariance matrix under various settings.
We assume that the observations are independent and identically
distributed (i.i.d.).
Empirical covariance
====================
The covariance matrix of a data set is known to be well approximated
by the classical *maximum likelihood estimator* (or "empirical
covariance"), provided the number of observations is large enough
compared to the number of features (the variables describing the
observations). More precisely, the Maximum Likelihood Estimator of a
sample is an asymptotically unbiased estimator of the corresponding
population's covariance matrix.
The empirical covariance matrix of a sample can be computed using the
:func:`empirical_covariance` function of the package, or by fitting an
:class:`EmpiricalCovariance` object to the data sample with the
:meth:`EmpiricalCovariance.fit` method. Be careful that results depend
on whether the data are centered, so one may want to use the
``assume_centered`` parameter accurately. More precisely, if
``assume_centered=False``, then the test set is supposed to have the
same mean vector as the training set. If not, both should be centered
by the user, and ``assume_centered=True`` should be used.
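To make this concrete, here is a small sketch (using the public :func:`empirical_covariance` and :class:`EmpiricalCovariance` APIs together with NumPy) showing that the empirical covariance is the maximum likelihood estimator, i.e. it matches ``np.cov`` with ``bias=True`` (normalization by ``n`` rather than ``n - 1``):

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, empirical_covariance

rng = np.random.RandomState(0)
X = rng.randn(500, 3)  # 500 i.i.d. observations of 3 features

# Function form: returns the MLE covariance of the sample.
cov_func = empirical_covariance(X, assume_centered=False)

# Estimator form: same matrix, plus a fitted location_ (the sample mean).
est = EmpiricalCovariance(assume_centered=False).fit(X)

assert np.allclose(cov_func, est.covariance_)
assert np.allclose(cov_func, np.cov(X, rowvar=False, bias=True))
assert np.allclose(est.location_, X.mean(axis=0))
```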
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for
an example on how to fit an :class:`EmpiricalCovariance` object to data.
.. _shrunk_covariance:
Shrunk Covariance
=================
Basic shrinkage
---------------
Despite being an asymptotically unbiased estimator of the covariance matrix,
the Maximum Likelihood Estimator is not a good estimator of the
eigenvalues of the covariance matrix, so the precision matrix obtained
from its inversion is not accurate. Sometimes, it even occurs that the
empirical covariance matrix cannot be inverted for numerical
reasons. To avoid such an inversion problem, a transformation of the
empirical covariance matrix has been introduced: the ``shrinkage``.
In scikit-learn, this transformation (with a user-defined shrinkage
coefficient) can be directly applied to a pre-computed covariance with
the :func:`shrunk_covariance` method. Also, a shrunk estimator of the
covariance can be fitted to data with a :class:`ShrunkCovariance` object
and its :meth:`ShrunkCovariance.fit` method. Again, results depend on
whether the data are centered, so one may want to use the
``assume_centered`` parameter accurately.
Mathematically, this shrinkage consists in reducing the ratio between the
smallest and the largest eigenvalues of the empirical covariance matrix.
It can be done by simply shifting every eigenvalue according to a given
offset, which is equivalent to finding the l2-penalized Maximum
Likelihood Estimator of the covariance matrix. In practice, shrinkage
boils down to a simple convex transformation: :math:`\Sigma_{\rm
shrunk} = (1-\alpha)\hat{\Sigma} + \alpha\frac{{\rm
Tr}\hat{\Sigma}}{p}\rm Id`.
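The convex combination above can be written out directly; a short sketch verifying the stated formula against :func:`shrunk_covariance` (``alpha`` here plays the role of the shrinkage coefficient :math:`\alpha`):

```python
import numpy as np
from sklearn.covariance import empirical_covariance, shrunk_covariance

rng = np.random.RandomState(42)
X = rng.randn(60, 5)
emp_cov = empirical_covariance(X)

alpha = 0.3  # shrinkage coefficient in [0, 1]
p = emp_cov.shape[0]

# Sigma_shrunk = (1 - alpha) * Sigma_hat + alpha * (Tr(Sigma_hat) / p) * Id
manual = (1 - alpha) * emp_cov + alpha * (np.trace(emp_cov) / p) * np.eye(p)

assert np.allclose(manual, shrunk_covariance(emp_cov, shrinkage=alpha))
```

Note how shrinking toward a multiple of the identity leaves the trace unchanged while pulling the eigenvalues toward their mean, which reduces the ratio between the smallest and largest eigenvalues.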
Choosing the amount of shrinkage, :math:`\alpha`, amounts to setting a
bias/variance trade-off, and is discussed below.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for
an example on how to fit a :class:`ShrunkCovariance` object to data.
Ledoit-Wolf shrinkage
---------------------
In their 2004 paper [1]_, O. Ledoit and M. Wolf propose a formula
to compute the optimal shrinkage coefficient :math:`\alpha` that
minimizes the Mean Squared Error between the estimated and the real
covariance matrix.
The Ledoit-Wolf estimator of the covariance matrix can be computed on
a sample with the :func:`ledoit_wolf` function of the
:mod:`sklearn.covariance` package, or it can be otherwise obtained by
fitting a :class:`LedoitWolf` object to the same sample.
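A small sketch showing that the two entry points are equivalent, and that the shrinkage coefficient is selected automatically:

```python
import numpy as np
from sklearn.covariance import LedoitWolf, ledoit_wolf

rng = np.random.RandomState(0)
X = rng.randn(40, 10)  # few samples per feature: shrinkage is useful here

# Function form returns (shrunk covariance, chosen shrinkage coefficient).
cov_lw, shrinkage = ledoit_wolf(X)

# Estimator form exposes the same quantities as fitted attributes.
lw = LedoitWolf().fit(X)

assert np.allclose(cov_lw, lw.covariance_)
assert np.isclose(shrinkage, lw.shrinkage_)
assert 0.0 <= lw.shrinkage_ <= 1.0  # a convex-combination weight
```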
.. note:: **Case when population covariance matrix is isotropic**
It is important to note that when the number of samples is much larger than
the number of features, one would expect that no shrinkage would be
necessary. The intuition behind this is that if the population covariance
is full rank, then as the number of samples grows, the sample covariance will
also become positive definite. As a result, no shrinkage would be necessary,
and the method should detect this automatically.
This, however, is not the case in the Ledoit-Wolf procedure when the
population covariance happens to be a multiple of the identity matrix. In
this case, the Ledoit-Wolf shrinkage estimate approaches 1 as the number of
samples increases. This indicates that the optimal estimate of the
covariance matrix in the Ledoit-Wolf sense is a multiple of the identity.
Since the population covariance is already a multiple of the identity
matrix, the Ledoit-Wolf solution is indeed a reasonable estimate.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for
an example on how to fit a :class:`LedoitWolf` object to data and
for visualizing the performances of the Ledoit-Wolf estimator in
terms of likelihood.
.. rubric:: References
.. [1] O. Ledoit and M. Wolf, "A Well-Conditioned Estimator for Large-Dimensional
Covariance Matrices", Journal of Multivariate Analysis, Volume 88, Issue 2,
February 2004, pages 365-411.
.. _oracle_approximating_shrinkage:
Oracle Approximating Shrinkage
------------------------------
Under the assumption that the data are Gaussian distributed, Chen et
al. [2]_ derived a formula aimed at choosing a shrinkage coefficient that
yields a smaller Mean Squared Error than the one given by Ledoit and
Wolf's formula. The resulting estimator is known as the Oracle
Approximating Shrinkage (OAS) estimator of the covariance.
The OAS estimator of the covariance matrix can be computed on a sample
with the :func:`oas` function of the :mod:`sklearn.covariance`
package, or it can be otherwise obtained by fitting an :class:`OAS`
object to the same sample.
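A brief sketch comparing the shrinkage coefficients chosen by :class:`OAS` and :class:`LedoitWolf` on the same (Gaussian, as the OAS derivation assumes) sample; the exact values depend on the random data:

```python
import numpy as np
from sklearn.covariance import OAS, LedoitWolf

rng = np.random.RandomState(0)
# Gaussian data with identity covariance, a setting where shrinkage helps.
X = rng.multivariate_normal(np.zeros(8), np.eye(8), size=30)

oas = OAS().fit(X)
lw = LedoitWolf().fit(X)

# Both select a shrinkage coefficient in [0, 1]; the chosen values differ.
print(f"OAS shrinkage:         {oas.shrinkage_:.3f}")
print(f"Ledoit-Wolf shrinkage: {lw.shrinkage_:.3f}")
```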
.. figure:: ../auto_examples/covariance/images/sphx_glr_plot_covariance_estimation_001.png
:target: ../auto_examples/covariance/plot_covariance_estimation.html
:align: center
:scale: 65%
Bias-variance trade-off when setting the shrinkage: comparing the
choices of Ledoit-Wolf and OAS estimators
.. rubric:: References
.. [2] :arxiv:`"Shrinkage algorithms for MMSE covariance estimation.",
Chen, Y., Wiesel, A., Eldar, Y. C., & Hero, A. O.
IEEE Transactions on Signal Processing, 58(10), 5016-5029, 2010.
<0907.4698>`
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for
an example on how to fit an :class:`OAS` object to data.
* See :ref:`sphx_glr_auto_examples_covariance_plot_lw_vs_oas.py` to visualize the
Mean Squared Error difference between a :class:`LedoitWolf` and
an :class:`OAS` estimator of the covariance.
.. figure:: ../auto_examples/covariance/images/sphx_glr_plot_lw_vs_oas_001.png
:target: ../auto_examples/covariance/plot_lw_vs_oas.html
:align: center
:scale: 75%
.. _sparse_inverse_covariance:
Sparse inverse covariance
==========================
The matrix inverse of the covariance matrix, often called the precision
matrix, is proportional to the partial correlation matrix. It gives the
partial independence relationship. In other words, if two features are
independent conditionally on the others, the corresponding coefficient in
the precision matrix will be zero. This is why it makes sense to
estimate a sparse precision matrix: the estimation of the covariance
matrix is better conditioned by learning independence relations from
the data. This is known as *covariance selection*.
In the small-samples situation, in which ``n_samples`` is on the order
of ``n_features`` or smaller, sparse inverse covariance estimators tend to work
better than shrunk covariance estimators. However, in the opposite
situation, or for very correlated data, they can be numerically unstable.
In addition, unlike shrinkage estimators, sparse estimators are able to
recover off-diagonal structure.
The :class:`GraphicalLasso` estimator uses an l1 penalty to enforce sparsity on
the precision matrix: the higher its ``alpha`` parameter, the more sparse
the precision matrix. The corresponding :class:`GraphicalLassoCV` object uses
cross-validation to automatically set the ``alpha`` parameter.
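A sketch of the typical workflow, assuming synthetic data generated from a known sparse precision matrix (via :func:`sklearn.datasets.make_sparse_spd_matrix`); the specific dimensions and ``alpha`` value here are illustrative choices:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_sparse_spd_matrix

rng = np.random.RandomState(0)

# Ground-truth sparse precision matrix, and Gaussian samples drawn from
# the corresponding covariance.
prec = make_sparse_spd_matrix(6, alpha=0.9, random_state=0)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(6), cov, size=200)

# A larger ``alpha`` yields a sparser estimated precision matrix.
model = GraphicalLasso(alpha=0.05).fit(X)

n_zero = np.sum(np.abs(model.precision_) < 1e-10)
print(f"{n_zero} (near-)zero entries in the estimated precision matrix")
```

In practice, :class:`GraphicalLassoCV` would replace the hand-picked ``alpha`` with a cross-validated value.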
.. figure:: ../auto_examples/covariance/images/sphx_glr_plot_sparse_cov_001.png
:target: ../auto_examples/covariance/plot_sparse_cov.html
:align: center
:scale: 60%
*A comparison of maximum likelihood, shrinkage and sparse estimates of
the covariance and precision matrix in the very small samples
settings.*
.. note:: **Structure recovery**
Recovering a graphical structure from correlations in the data is
challenging. If you are interested in such recovery, keep in mind
that:
* Recovery is easier from a correlation matrix than a covariance
matrix: standardize your observations before running :class:`GraphicalLasso`
* If the underlying graph has nodes with much more connections than
the average node, the algorithm will miss some of these connections.
* If your number of observations is not large compared to the number
of edges in your underlying graph, you will not recover it.
* Even if you are in favorable recovery conditions, the alpha
parameter chosen by cross-validation (e.g. using the
:class:`GraphicalLassoCV` object) will lead to selecting too many edges.
However, the relevant edges will have heavier weights than the
irrelevant ones.
The mathematical formulation is the following:
.. math::
\hat{K} = \mathrm{argmin}_K \big(
\mathrm{tr} S K - \mathrm{log} \mathrm{det} K
+ \alpha \|K\|_1
\big)
Where :math:`K` is the precision matrix to be estimated, and :math:`S` is the
sample covariance matrix. :math:`\|K\|_1` is the sum of the absolute values of
off-diagonal coefficients of :math:`K`. The algorithm employed to solve this
problem is the GLasso algorithm, from the Friedman 2008 Biostatistics
paper. It is the same algorithm as in the R ``glasso`` package.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_covariance_plot_sparse_cov.py`: example on synthetic
data showing some recovery of a structure, and comparing to other
covariance estimators.
* :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`: example on real
stock market data, finding which symbols are most linked.
.. rubric:: References
* Friedman et al, `"Sparse inverse covariance estimation with the
graphical lasso" <https://biostatistics.oxfordjournals.org/content/9/3/432.short>`_,
Biostatistics 9, pp 432, 2008
.. _robust_covariance:
Robust Covariance Estimation
============================
Real data sets are often subject to measurement or recording
errors. Regular but uncommon observations may also appear for a variety
of reasons. Observations which are very uncommon are called
outliers.
The empirical covariance estimator and the shrunk covariance
estimators presented above are very sensitive to the presence of
outliers in the data. Therefore, one should use robust
covariance estimators to estimate the covariance of one's real data
sets. Alternatively, robust covariance estimators can be used to
perform outlier detection and discard/downweight some observations
in further processing of the data.
The ``sklearn.covariance`` package implements a robust estimator of covariance,
the Minimum Covariance Determinant [3]_.
Minimum Covariance Determinant
------------------------------
The Minimum Covariance Determinant estimator is a robust estimator of
a data set's covariance introduced by P.J. Rousseeuw in [3]_. The idea
is to find a given proportion (h) of "good" observations which are not
outliers and compute their empirical covariance matrix. This
empirical covariance matrix is then rescaled to compensate for the
performed selection of observations ("consistency step"). Having
computed the Minimum Covariance Determinant estimator, one can give
weights to observations according to their Mahalanobis distance,
leading to a reweighted estimate of the covariance matrix of the data
set ("reweighting step").
Rousseeuw and Van Driessen [4]_ developed the FastMCD algorithm in order
to compute the Minimum Covariance Determinant. This algorithm is used
in scikit-learn when fitting an MCD object to data. The FastMCD
algorithm also computes a robust estimate of the data set location at
the same time.
Raw estimates can be accessed as ``raw_location_`` and ``raw_covariance_``
attributes of a :class:`MinCovDet` robust covariance estimator object.
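A sketch contrasting the empirical estimator with :class:`MinCovDet` on contaminated data; the inlier/outlier mixture below is a made-up illustration:

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.RandomState(0)

# 100 inliers from a standard Gaussian centered at the origin,
# plus 20 gross outliers shifted far away.
X_inliers = rng.randn(100, 2)
X_outliers = rng.randn(20, 2) + 10.0
X = np.vstack([X_inliers, X_outliers])

emp = EmpiricalCovariance().fit(X)
mcd = MinCovDet(random_state=0).fit(X)

# The robust location stays near the true mean (0, 0); the empirical
# location is pulled toward the outliers.
print("empirical location:  ", emp.location_)
print("robust (MCD) location:", mcd.location_)
```

The fitted :class:`MinCovDet` object also exposes ``raw_location_`` and ``raw_covariance_``, the estimates before the reweighting step.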
.. rubric:: References
.. [3] P. J. Rousseeuw. Least median of squares regression.
J. Am Stat Ass, 79:871, 1984.
.. [4] A Fast Algorithm for the Minimum Covariance Determinant Estimator,
1999, American Statistical Association and the American Society
for Quality, TECHNOMETRICS.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_covariance_plot_robust_vs_empirical_covariance.py` for
an example on how to fit a :class:`MinCovDet` object to data and see how
the estimate remains accurate despite the presence of outliers.
* See :ref:`sphx_glr_auto_examples_covariance_plot_mahalanobis_distances.py` to
visualize the difference between :class:`EmpiricalCovariance` and
:class:`MinCovDet` covariance estimators in terms of Mahalanobis distance
(so we get a better estimate of the precision matrix too).
.. |robust_vs_emp| image:: ../auto_examples/covariance/images/sphx_glr_plot_robust_vs_empirical_covariance_001.png
:target: ../auto_examples/covariance/plot_robust_vs_empirical_covariance.html
:scale: 49%
.. |mahalanobis| image:: ../auto_examples/covariance/images/sphx_glr_plot_mahalanobis_distances_001.png
:target: ../auto_examples/covariance/plot_mahalanobis_distances.html
:scale: 49%
____
.. list-table::
:header-rows: 1
* - Influence of outliers on location and covariance estimates
- Separating inliers from outliers using a Mahalanobis distance
* - |robust_vs_emp|
     - |mahalanobis|
.. _multiclass:
=====================================
Multiclass and multioutput algorithms
=====================================
This section of the user guide covers functionality related to multi-learning
problems, including :term:`multiclass`, :term:`multilabel`, and
:term:`multioutput` classification and regression.
The modules in this section implement :term:`meta-estimators`, which require a
base estimator to be provided in their constructor. Meta-estimators extend the
functionality of the base estimator to support multi-learning problems, which
is accomplished by transforming the multi-learning problem into a set of
simpler problems, then fitting one estimator per problem.
This section covers two modules: :mod:`sklearn.multiclass` and
:mod:`sklearn.multioutput`. The chart below demonstrates the problem types
that each module is responsible for, and the corresponding meta-estimators
that each module provides.
.. image:: ../images/multi_org_chart.png
:align: center
The table below provides a quick reference on the differences between problem
types. More detailed explanations can be found in subsequent sections of this
guide.
+------------------------------+-----------------------+-------------------------+--------------------------------------------------+
| | Number of targets | Target cardinality | Valid |
| | | | :func:`~sklearn.utils.multiclass.type_of_target` |
+==============================+=======================+=========================+==================================================+
| Multiclass | 1 | >2 | 'multiclass' |
| classification | | | |
+------------------------------+-----------------------+-------------------------+--------------------------------------------------+
| Multilabel | >1 | 2 (0 or 1) | 'multilabel-indicator' |
| classification | | | |
+------------------------------+-----------------------+-------------------------+--------------------------------------------------+
| Multiclass-multioutput | >1 | >2 | 'multiclass-multioutput' |
| classification | | | |
+------------------------------+-----------------------+-------------------------+--------------------------------------------------+
| Multioutput | >1 | Continuous | 'continuous-multioutput' |
| regression | | | |
+------------------------------+-----------------------+-------------------------+--------------------------------------------------+
Below is a summary of scikit-learn estimators that have multi-learning support
built-in, grouped by strategy. You don't need the meta-estimators provided by
this section if you're using one of these estimators. However, meta-estimators
can provide additional strategies beyond what is built-in:
.. currentmodule:: sklearn
- **Inherently multiclass:**
- :class:`naive_bayes.BernoulliNB`
- :class:`tree.DecisionTreeClassifier`
- :class:`tree.ExtraTreeClassifier`
- :class:`ensemble.ExtraTreesClassifier`
- :class:`naive_bayes.GaussianNB`
- :class:`neighbors.KNeighborsClassifier`
- :class:`semi_supervised.LabelPropagation`
- :class:`semi_supervised.LabelSpreading`
- :class:`discriminant_analysis.LinearDiscriminantAnalysis`
- :class:`svm.LinearSVC` (setting multi_class="crammer_singer")
- :class:`linear_model.LogisticRegression` (with most solvers)
- :class:`linear_model.LogisticRegressionCV` (with most solvers)
- :class:`neural_network.MLPClassifier`
- :class:`neighbors.NearestCentroid`
- :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`
- :class:`neighbors.RadiusNeighborsClassifier`
- :class:`ensemble.RandomForestClassifier`
- :class:`linear_model.RidgeClassifier`
- :class:`linear_model.RidgeClassifierCV`
- **Multiclass as One-Vs-One:**
- :class:`svm.NuSVC`
- :class:`svm.SVC`
- :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = "one_vs_one")
- **Multiclass as One-Vs-The-Rest:**
- :class:`ensemble.GradientBoostingClassifier`
- :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = "one_vs_rest")
- :class:`svm.LinearSVC` (setting multi_class="ovr")
- :class:`linear_model.LogisticRegression` (most solvers)
- :class:`linear_model.LogisticRegressionCV` (most solvers)
- :class:`linear_model.SGDClassifier`
- :class:`linear_model.Perceptron`
- :class:`linear_model.PassiveAggressiveClassifier`
- **Support multilabel:**
- :class:`tree.DecisionTreeClassifier`
- :class:`tree.ExtraTreeClassifier`
- :class:`ensemble.ExtraTreesClassifier`
- :class:`neighbors.KNeighborsClassifier`
- :class:`neural_network.MLPClassifier`
- :class:`neighbors.RadiusNeighborsClassifier`
- :class:`ensemble.RandomForestClassifier`
- :class:`linear_model.RidgeClassifier`
- :class:`linear_model.RidgeClassifierCV`
- **Support multiclass-multioutput:**
- :class:`tree.DecisionTreeClassifier`
- :class:`tree.ExtraTreeClassifier`
- :class:`ensemble.ExtraTreesClassifier`
- :class:`neighbors.KNeighborsClassifier`
- :class:`neighbors.RadiusNeighborsClassifier`
- :class:`ensemble.RandomForestClassifier`
.. _multiclass_classification:
Multiclass classification
=========================
.. warning::
All classifiers in scikit-learn do multiclass classification
out-of-the-box. You don't need to use the :mod:`sklearn.multiclass` module
unless you want to experiment with different multiclass strategies.
**Multiclass classification** is a classification task with more than two
classes. Each sample can only be labeled as one class.
For example, classification using features extracted from a set of images of
fruit, where each image may either be of an orange, an apple, or a pear.
Each image is one sample and is labeled as one of the 3 possible classes.
Multiclass classification makes the assumption that each sample is assigned
to one and only one label - one sample cannot, for example, be both a pear
and an apple.
While all scikit-learn classifiers are capable of multiclass classification,
the meta-estimators offered by :mod:`sklearn.multiclass`
permit changing the way they handle more than two classes
because this may have an effect on classifier performance
(either in terms of generalization error or required computational resources).
Target format
-------------
Valid :term:`multiclass` representations for
:func:`~sklearn.utils.multiclass.type_of_target` (`y`) are:
- 1d or column vector containing more than two discrete values. An
example of a vector ``y`` for 4 samples:
>>> import numpy as np
>>> y = np.array(['apple', 'pear', 'apple', 'orange'])
>>> print(y)
['apple' 'pear' 'apple' 'orange']
- Dense or sparse :term:`binary` matrix of shape ``(n_samples, n_classes)``
with a single sample per row, where each column represents one class. An
example of both a dense and sparse :term:`binary` matrix ``y`` for 4
samples, where the columns, in order, are apple, orange, and pear:
>>> import numpy as np
>>> from sklearn.preprocessing import LabelBinarizer
>>> y = np.array(['apple', 'pear', 'apple', 'orange'])
>>> y_dense = LabelBinarizer().fit_transform(y)
>>> print(y_dense)
[[1 0 0]
[0 0 1]
[1 0 0]
[0 1 0]]
>>> from scipy import sparse
>>> y_sparse = sparse.csr_matrix(y_dense)
>>> print(y_sparse)
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 4 stored elements and shape (4, 3)>
Coords Values
(0, 0) 1
(1, 2) 1
(2, 0) 1
(3, 1) 1
For more information about :class:`~sklearn.preprocessing.LabelBinarizer`,
refer to :ref:`preprocessing_targets`.
.. _ovr_classification:
OneVsRestClassifier
-------------------
The **one-vs-rest** strategy, also known as **one-vs-all**, is implemented in
:class:`~sklearn.multiclass.OneVsRestClassifier`. The strategy consists in
fitting one classifier per class. For each classifier, the class is fitted
against all the other classes. In addition to its computational efficiency
(only `n_classes` classifiers are needed), one advantage of this approach is
its interpretability. Since each class is represented by one and only one
classifier, it is possible to gain knowledge about the class by inspecting its
corresponding classifier. This is the most commonly used strategy and is a fair
default choice.
Below is an example of multiclass learning using OvR::
>>> from sklearn import datasets
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = datasets.load_iris(return_X_y=True)
>>> OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
:class:`~sklearn.multiclass.OneVsRestClassifier` also supports multilabel
classification. To use this feature, feed the classifier an indicator matrix,
in which cell [i, j] indicates the presence of label j in sample i.
.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multilabel_001.png
:target: ../auto_examples/miscellaneous/plot_multilabel.html
:align: center
:scale: 75%
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multilabel.py`
* :ref:`sphx_glr_auto_examples_classification_plot_classification_probability.py`
.. _ovo_classification:
OneVsOneClassifier
------------------
:class:`~sklearn.multiclass.OneVsOneClassifier` constructs one classifier per
pair of classes. At prediction time, the class which received the most votes
is selected. In the event of a tie (among two classes with an equal number of
votes), it selects the class with the highest aggregate classification
confidence by summing over the pair-wise classification confidence levels
computed by the underlying binary classifiers.
Since it requires fitting ``n_classes * (n_classes - 1) / 2`` classifiers,
this method is usually slower than one-vs-the-rest, due to its
O(n_classes^2) complexity. However, this method may be advantageous for
algorithms such as kernel algorithms which don't scale well with
``n_samples``. This is because each individual learning problem only involves
a small subset of the data whereas, with one-vs-the-rest, the complete
dataset is used ``n_classes`` times. The decision function is the result
of a monotonic transformation of the one-versus-one classification.
Below is an example of multiclass learning using OvO::
>>> from sklearn import datasets
>>> from sklearn.multiclass import OneVsOneClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = datasets.load_iris(return_X_y=True)
>>> OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
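The ``n_classes * (n_classes - 1) / 2`` pairwise classifiers are exposed on
the fitted meta-estimator's ``estimators_`` attribute; for the 3-class iris
data above, that comes to 3 classifiers. A minimal sketch:

```python
# One binary classifier is fitted per pair of classes:
# n_classes * (n_classes - 1) / 2 = 3 * 2 / 2 = 3 for iris.
from sklearn import datasets
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

X, y = datasets.load_iris(return_X_y=True)
ovo = OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y)
print(len(ovo.estimators_))  # 3
```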
.. rubric:: References
* "Pattern Recognition and Machine Learning. Springer",
Christopher M. Bishop, page 183, (First Edition)
.. _ecoc:
OutputCodeClassifier
--------------------
Error-Correcting Output Code-based strategies are fairly different from
one-vs-the-rest and one-vs-one. With these strategies, each class is
represented in a Euclidean space, where each dimension can only be 0 or 1.
Another way to put it is that each class is represented by a binary code (an
array of 0 and 1). The matrix which keeps track of the location/code of each
class is called the code book. The code size is the dimensionality of the
aforementioned space. Intuitively, each class should be represented by a code
as unique as possible and a good code book should be designed to optimize
classification accuracy. In this implementation, we simply use a
randomly-generated code book as advocated in [3]_ although more elaborate
methods may be added in the future.
At fitting time, one binary classifier per bit in the code book is fitted.
At prediction time, the classifiers are used to project new points in the
class space and the class closest to the points is chosen.
In :class:`~sklearn.multiclass.OutputCodeClassifier`, the ``code_size``
attribute allows the user to control the number of classifiers which will be
used. It is a percentage of the total number of classes.
A number between 0 and 1 will require fewer classifiers than
one-vs-the-rest. In theory, ``log2(n_classes) / n_classes`` is sufficient to
represent each class unambiguously. However, in practice, it may not lead to
good accuracy since ``log2(n_classes)`` is much smaller than `n_classes`.
A number greater than 1 will require more classifiers than
one-vs-the-rest. In this case, some classifiers will in theory correct for
the mistakes made by other classifiers, hence the name "error-correcting".
In practice, however, this may not happen as classifier mistakes will
typically be correlated. The error-correcting output codes have a similar
effect to bagging.
Below is an example of multiclass learning using Output-Codes::
>>> from sklearn import datasets
>>> from sklearn.multiclass import OutputCodeClassifier
>>> from sklearn.svm import LinearSVC
>>> X, y = datasets.load_iris(return_X_y=True)
>>> clf = OutputCodeClassifier(LinearSVC(random_state=0), code_size=2, random_state=0)
>>> clf.fit(X, y).predict(X)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1,
1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
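Since ``code_size`` is a multiplier on the number of classes, the fit above
(3 classes, ``code_size=2``) trains ``int(3 * 2) = 6`` binary classifiers,
which can be verified on the ``estimators_`` attribute. A minimal sketch:

```python
# code_size acts as a multiplier on n_classes: int(3 * 2) = 6 classifiers.
from sklearn import datasets
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

X, y = datasets.load_iris(return_X_y=True)
clf = OutputCodeClassifier(LinearSVC(random_state=0), code_size=2,
                           random_state=0).fit(X, y)
print(len(clf.estimators_))  # 6
```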
.. rubric:: References
* "Solving multiclass learning problems via error-correcting output codes",
Dietterich T., Bakiri G., Journal of Artificial Intelligence Research 2, 1995.
.. [3] "The error coding method and PICTs", James G., Hastie T.,
Journal of Computational and Graphical statistics 7, 1998.
* "The Elements of Statistical Learning",
Hastie T., Tibshirani R., Friedman J., page 606 (second-edition), 2008.
.. _multilabel_classification:
Multilabel classification
=========================
**Multilabel classification** (closely related to **multioutput**
**classification**) is a classification task labeling each sample with ``m``
labels from ``n_classes`` possible classes, where ``m`` can be 0 to
``n_classes`` inclusive. This can be thought of as predicting properties of a
sample that are not mutually exclusive. Formally, a binary output is assigned
to each class, for every sample. Positive classes are indicated with 1 and
negative classes with 0 or -1. It is thus comparable to running ``n_classes``
binary classification tasks, for example with
:class:`~sklearn.multioutput.MultiOutputClassifier`. This approach treats
each label independently whereas multilabel classifiers *may* treat the
multiple classes simultaneously, accounting for correlated behavior among
them.
For example, prediction of the topics relevant to a text document or video.
The document or video may be about one of 'religion', 'politics', 'finance'
or 'education', several of the topic classes or all of the topic classes.
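As a minimal sketch of the independent per-label approach mentioned above,
:class:`~sklearn.multioutput.MultiOutputClassifier` can wrap any binary
classifier (the choice of ``LogisticRegression`` here is arbitrary):

```python
# Fit one independent binary classifier per label column of Y.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=100, n_classes=4,
                                      random_state=0)
clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
print(clf.predict(X[:2]).shape)  # (2, 4): one binary output per label
```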
Target format
-------------
A valid representation of :term:`multilabel` `y` is an either dense or sparse
:term:`binary` matrix of shape ``(n_samples, n_classes)``. Each column
represents a class. The ``1``'s in each row denote the positive classes a
sample has been labeled with. An example of a dense matrix ``y`` for 3
samples:
>>> y = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]])
>>> print(y)
[[1 0 0 1]
[0 0 1 1]
[0 0 0 0]]
Dense binary matrices can also be created using
:class:`~sklearn.preprocessing.MultiLabelBinarizer`. For more information,
refer to :ref:`preprocessing_targets`.
An example of the same ``y`` in sparse matrix form:
>>> y_sparse = sparse.csr_matrix(y)
>>> print(y_sparse)
<Compressed Sparse Row sparse matrix of dtype 'int64'
with 4 stored elements and shape (3, 4)>
Coords Values
(0, 0) 1
(0, 3) 1
(1, 2) 1
(1, 3) 1
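The :class:`~sklearn.preprocessing.MultiLabelBinarizer` mentioned above turns
variable-length lists of label sets into this indicator format; a minimal
sketch, with hypothetical topic labels:

```python
# Convert lists of label sets into a binary indicator matrix.
from sklearn.preprocessing import MultiLabelBinarizer

y = [['politics', 'finance'], ['education'], []]  # third sample: no labels
mlb = MultiLabelBinarizer()
y_indicator = mlb.fit_transform(y)
print(mlb.classes_)   # ['education' 'finance' 'politics'] (sorted)
print(y_indicator)
# [[0 1 1]
#  [1 0 0]
#  [0 0 0]]
```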
.. _multioutputclassfier:
MultiOutputClassifier
---------------------
Multilabel classification support can be added to any classifier with
:class:`~sklearn.multioutput.MultiOutputClassifier`. This strategy consists of
fitting one classifier per target, which allows classification with multiple
target variables. The purpose of this class is to extend estimators to
estimate a series of target functions (f1, f2, f3, ..., fn) that are trained
on a single X predictor matrix to predict a series of responses
(y1, y2, y3, ..., yn).
You can find a usage example for
:class:`~sklearn.multioutput.MultiOutputClassifier`
as part of the section on :ref:`multiclass_multioutput_classification`
since it is a generalization of multilabel classification to
multiclass outputs instead of binary outputs.
.. _classifierchain:
ClassifierChain
---------------
Classifier chains (see :class:`~sklearn.multioutput.ClassifierChain`) are a way
of combining a number of binary classifiers into a single multi-label model
that is capable of exploiting correlations among targets.
For a multi-label classification problem with N classes, N binary
classifiers are assigned an integer between 0 and N-1. These integers
define the order of models in the chain. Each classifier is then fit on the
available training data plus the true labels of the classes whose
models were assigned a lower number.
When predicting, the true labels will not be available. Instead the
predictions of each model are passed on to the subsequent models in the
chain to be used as features.
Clearly the order of the chain is important. The first model in the chain
has no information about the other labels while the last model in the chain
has features indicating the presence of all of the other labels. In general
one does not know the optimal ordering of the models in the chain so
typically many randomly ordered chains are fit and their predictions are
averaged together.
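A minimal sketch of a single chain (the order here is the default
``0 .. N-1``; ``order='random'`` with a ``random_state`` draws one random
ordering instead):

```python
# Each link in the chain sees X plus the predictions of earlier links.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=100, n_classes=3,
                                      random_state=0)
chain = ClassifierChain(LogisticRegression(), random_state=0).fit(X, Y)
print(chain.predict(X[:3]).shape)  # (3, 3): one binary column per label
```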
.. rubric:: References
* Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank,
"Classifier Chains for Multi-label Classification", 2009.
.. _multiclass_multioutput_classification:
Multiclass-multioutput classification
=====================================
**Multiclass-multioutput classification**
(also known as **multitask classification**) is a
classification task which labels each sample with a set of **non-binary**
properties. Both the number of properties and the number of
classes per property is greater than 2. A single estimator thus
handles several joint classification tasks. This is both a generalization of
the multi\ *label* classification task, which only considers binary
attributes, as well as a generalization of the multi\ *class* classification
task, where only one property is considered.
For example, classification of the properties "type of fruit" and "colour"
for a set of images of fruit. The property "type of fruit" has the possible
classes: "apple", "pear" and "orange". The property "colour" has the
possible classes: "green", "red", "yellow" and "orange". Each sample is an
image of a fruit, a label is output for both properties and each label is
one of the possible classes of the corresponding property.
Note that all classifiers handling multiclass-multioutput (also known as
multitask classification) tasks support the multilabel classification task
as a special case. Multitask classification is similar to the multioutput
classification task with different model formulations. For more information,
see the relevant estimator documentation.
Below is an example of multiclass-multioutput classification:
>>> from sklearn.datasets import make_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.utils import shuffle
>>> import numpy as np
>>> X, y1 = make_classification(n_samples=10, n_features=100,
... n_informative=30, n_classes=3,
... random_state=1)
>>> y2 = shuffle(y1, random_state=1)
>>> y3 = shuffle(y1, random_state=2)
>>> Y = np.vstack((y1, y2, y3)).T
>>> n_samples, n_features = X.shape # 10,100
>>> n_outputs = Y.shape[1] # 3
>>> n_classes = 3
>>> forest = RandomForestClassifier(random_state=1)
>>> multi_target_forest = MultiOutputClassifier(forest, n_jobs=2)
>>> multi_target_forest.fit(X, Y).predict(X)
array([[2, 2, 0],
[1, 2, 1],
[2, 1, 0],
[0, 0, 2],
[0, 2, 1],
[0, 0, 2],
[1, 1, 0],
[1, 1, 1],
[0, 0, 2],
[2, 0, 0]])
.. warning::
At present, no metric in :mod:`sklearn.metrics`
supports the multiclass-multioutput classification task.
Target format
-------------
A valid representation of :term:`multioutput` `y` is a dense matrix of shape
``(n_samples, n_classes)`` of class labels. A column-wise concatenation of 1d
:term:`multiclass` variables. An example of ``y`` for 3 samples:
>>> y = np.array([['apple', 'green'], ['orange', 'orange'], ['pear', 'green']])
>>> print(y)
[['apple' 'green']
['orange' 'orange']
['pear' 'green']]
.. _multioutput_regression:
Multioutput regression
======================
**Multioutput regression** predicts multiple numerical properties for each
sample. Each property is a numerical variable and the number of properties
to be predicted for each sample is greater than or equal to 2. Some estimators
that support multioutput regression are faster than just running ``n_output``
estimators.
For example, prediction of both wind speed and wind direction, in degrees,
using data obtained at a certain location. Each sample would be data
obtained at one location and both wind speed and direction would be
output for each sample.
The following regressors natively support multioutput regression:
- :class:`cross_decomposition.CCA`
- :class:`tree.DecisionTreeRegressor`
- :class:`dummy.DummyRegressor`
- :class:`linear_model.ElasticNet`
- :class:`tree.ExtraTreeRegressor`
- :class:`ensemble.ExtraTreesRegressor`
- :class:`gaussian_process.GaussianProcessRegressor`
- :class:`neighbors.KNeighborsRegressor`
- :class:`kernel_ridge.KernelRidge`
- :class:`linear_model.Lars`
- :class:`linear_model.Lasso`
- :class:`linear_model.LassoLars`
- :class:`linear_model.LinearRegression`
- :class:`multioutput.MultiOutputRegressor`
- :class:`linear_model.MultiTaskElasticNet`
- :class:`linear_model.MultiTaskElasticNetCV`
- :class:`linear_model.MultiTaskLasso`
- :class:`linear_model.MultiTaskLassoCV`
- :class:`linear_model.OrthogonalMatchingPursuit`
- :class:`cross_decomposition.PLSCanonical`
- :class:`cross_decomposition.PLSRegression`
- :class:`linear_model.RANSACRegressor`
- :class:`neighbors.RadiusNeighborsRegressor`
- :class:`ensemble.RandomForestRegressor`
- :class:`multioutput.RegressorChain`
- :class:`linear_model.Ridge`
- :class:`linear_model.RidgeCV`
- :class:`compose.TransformedTargetRegressor`
Target format
-------------
A valid representation of :term:`multioutput` `y` is a dense matrix of shape
``(n_samples, n_output)`` of floats. A column-wise concatenation of
:term:`continuous` variables. An example of ``y`` for 3 samples:
>>> y = np.array([[31.4, 94], [40.5, 109], [25.0, 30]])
>>> print(y)
[[ 31.4 94. ]
[ 40.5 109. ]
[ 25. 30. ]]
.. _multioutputregressor:
MultiOutputRegressor
--------------------
Multioutput regression support can be added to any regressor with
:class:`~sklearn.multioutput.MultiOutputRegressor`. This strategy consists of
fitting one regressor per target. Since each target is represented by exactly
one regressor it is possible to gain knowledge about the target by
inspecting its corresponding regressor. As
:class:`~sklearn.multioutput.MultiOutputRegressor` fits one regressor per
target it can not take advantage of correlations between targets.
Below is an example of multioutput regression:
>>> from sklearn.datasets import make_regression
>>> from sklearn.multioutput import MultiOutputRegressor
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_regression(n_samples=10, n_targets=3, random_state=1)
>>> MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X, y).predict(X)
array([[-154.75474165, -147.03498585, -50.03812219],
[ 7.12165031, 5.12914884, -81.46081961],
[-187.8948621 , -100.44373091, 13.88978285],
[-141.62745778, 95.02891072, -191.48204257],
[ 97.03260883, 165.34867495, 139.52003279],
[ 123.92529176, 21.25719016, -7.84253 ],
[-122.25193977, -85.16443186, -107.12274212],
[ -30.170388 , -94.80956739, 12.16979946],
[ 140.72667194, 176.50941682, -17.50447799],
[ 149.37967282, -81.15699552, -5.72850319]])
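Since one regressor is fitted per target, the fitted wrapper exposes them on
its ``estimators_`` attribute (three regressors for the three targets above).
A minimal sketch:

```python
# One independent regressor is fitted per target column of y.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

X, y = make_regression(n_samples=10, n_targets=3, random_state=1)
reg = MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X, y)
print(len(reg.estimators_))  # 3: one GradientBoostingRegressor per target
```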
.. _regressorchain:
RegressorChain
--------------
Regressor chains (see :class:`~sklearn.multioutput.RegressorChain`) are
analogous to :class:`~sklearn.multioutput.ClassifierChain` as a way of
combining a number of regressions into a single multi-target model that is
capable of exploiting correlations among targets. | scikit-learn | multiclass Multiclass and multioutput algorithms This section of the user guide covers functionality related to multi learning problems including term multiclass term multilabel and term multioutput classification and regression The modules in this section implement term meta estimators which require a base estimator to be provided in their constructor Meta estimators extend the functionality of the base estimator to support multi learning problems which is accomplished by transforming the multi learning problem into a set of simpler problems then fitting one estimator per problem This section covers two modules mod sklearn multiclass and mod sklearn multioutput The chart below demonstrates the problem types that each module is responsible for and the corresponding meta estimators that each module provides image images multi org chart png align center The table below provides a quick reference on the differences between problem types More detailed explanations can be found in subsequent sections of this guide Number of targets Target cardinality Valid func sklearn utils multiclass type of target Multiclass 1 2 multiclass classification Multilabel 1 2 0 or 1 multilabel indicator classification Multiclass multioutput 1 2 multiclass multioutput classification Multioutput 1 Continuous continuous multioutput regression Below is a summary of scikit learn estimators that have multi learning support built in grouped by strategy You don t need the meta estimators provided by this section if you re using one of these estimators However meta estimators can provide additional strategies beyond what is built in currentmodule sklearn Inherently multiclass class naive bayes BernoulliNB class tree DecisionTreeClassifier class tree ExtraTreeClassifier class ensemble ExtraTreesClassifier class naive bayes GaussianNB class neighbors KNeighborsClassifier class semi supervised LabelPropagation class semi supervised 
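A minimal sketch of a regressor chain with an explicit ordering (``Ridge`` is
an arbitrary choice of base regressor):

```python
# Targets are predicted in order; later regressors also see the
# predictions made for earlier targets as extra features.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import RegressorChain

X, y = make_regression(n_samples=10, n_targets=3, random_state=1)
chain = RegressorChain(Ridge(), order=[0, 1, 2]).fit(X, y)
print(chain.predict(X[:2]).shape)  # (2, 3): one column per target
```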
LabelSpreading class discriminant analysis LinearDiscriminantAnalysis class svm LinearSVC setting multi class crammer singer class linear model LogisticRegression with most solvers class linear model LogisticRegressionCV with most solvers class neural network MLPClassifier class neighbors NearestCentroid class discriminant analysis QuadraticDiscriminantAnalysis class neighbors RadiusNeighborsClassifier class ensemble RandomForestClassifier class linear model RidgeClassifier class linear model RidgeClassifierCV Multiclass as One Vs One class svm NuSVC class svm SVC class gaussian process GaussianProcessClassifier setting multi class one vs one Multiclass as One Vs The Rest class ensemble GradientBoostingClassifier class gaussian process GaussianProcessClassifier setting multi class one vs rest class svm LinearSVC setting multi class ovr class linear model LogisticRegression most solvers class linear model LogisticRegressionCV most solvers class linear model SGDClassifier class linear model Perceptron class linear model PassiveAggressiveClassifier Support multilabel class tree DecisionTreeClassifier class tree ExtraTreeClassifier class ensemble ExtraTreesClassifier class neighbors KNeighborsClassifier class neural network MLPClassifier class neighbors RadiusNeighborsClassifier class ensemble RandomForestClassifier class linear model RidgeClassifier class linear model RidgeClassifierCV Support multiclass multioutput class tree DecisionTreeClassifier class tree ExtraTreeClassifier class ensemble ExtraTreesClassifier class neighbors KNeighborsClassifier class neighbors RadiusNeighborsClassifier class ensemble RandomForestClassifier multiclass classification Multiclass classification warning All classifiers in scikit learn do multiclass classification out of the box You don t need to use the mod sklearn multiclass module unless you want to experiment with different multiclass strategies Multiclass classification is a classification task with more than two classes Each 
sample can only be labeled as one class For example classification using features extracted from a set of images of fruit where each image may either be of an orange an apple or a pear Each image is one sample and is labeled as one of the 3 possible classes Multiclass classification makes the assumption that each sample is assigned to one and only one label one sample cannot for example be both a pear and an apple While all scikit learn classifiers are capable of multiclass classification the meta estimators offered by mod sklearn multiclass permit changing the way they handle more than two classes because this may have an effect on classifier performance either in terms of generalization error or required computational resources Target format Valid term multiclass representations for func sklearn utils multiclass type of target y are 1d or column vector containing more than two discrete values An example of a vector y for 4 samples import numpy as np y np array apple pear apple orange print y apple pear apple orange Dense or sparse term binary matrix of shape n samples n classes with a single sample per row where each column represents one class An example of both a dense and sparse term binary matrix y for 4 samples where the columns in order are apple orange and pear import numpy as np from sklearn preprocessing import LabelBinarizer y np array apple pear apple orange y dense LabelBinarizer fit transform y print y dense 1 0 0 0 0 1 1 0 0 0 1 0 from scipy import sparse y sparse sparse csr matrix y dense print y sparse Compressed Sparse Row sparse matrix of dtype int64 with 4 stored elements and shape 4 3 Coords Values 0 0 1 1 2 1 2 0 1 3 1 1 For more information about class sklearn preprocessing LabelBinarizer refer to ref preprocessing targets ovr classification OneVsRestClassifier The one vs rest strategy also known as one vs all is implemented in class sklearn multiclass OneVsRestClassifier The strategy consists in fitting one classifier per class For each 
classifier the class is fitted against all the other classes In addition to its computational efficiency only n classes classifiers are needed one advantage of this approach is its interpretability Since each class is represented by one and only one classifier it is possible to gain knowledge about the class by inspecting its corresponding classifier This is the most commonly used strategy and is a fair default choice Below is an example of multiclass learning using OvR from sklearn import datasets from sklearn multiclass import OneVsRestClassifier from sklearn svm import LinearSVC X y datasets load iris return X y True OneVsRestClassifier LinearSVC random state 0 fit X y predict X array 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 class sklearn multiclass OneVsRestClassifier also supports multilabel classification To use this feature feed the classifier an indicator matrix in which cell i j indicates the presence of label j in sample i figure auto examples miscellaneous images sphx glr plot multilabel 001 png target auto examples miscellaneous plot multilabel html align center scale 75 rubric Examples ref sphx glr auto examples miscellaneous plot multilabel py ref sphx glr auto examples classification plot classification probability py ovo classification OneVsOneClassifier class sklearn multiclass OneVsOneClassifier constructs one classifier per pair of classes At prediction time the class which received the most votes is selected In the event of a tie among two classes with an equal number of votes it selects the class with the highest aggregate classification confidence by summing over the pair wise classification confidence levels computed by the underlying binary classifiers Since it requires to fit n classes 
n classes 1 2 classifiers this method is usually slower than one vs the rest due to its O n classes 2 complexity However this method may be advantageous for algorithms such as kernel algorithms which don t scale well with n samples This is because each individual learning problem only involves a small subset of the data whereas with one vs the rest the complete dataset is used n classes times The decision function is the result of a monotonic transformation of the one versus one classification Below is an example of multiclass learning using OvO from sklearn import datasets from sklearn multiclass import OneVsOneClassifier from sklearn svm import LinearSVC X y datasets load iris return X y True OneVsOneClassifier LinearSVC random state 0 fit X y predict X array 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 rubric References Pattern Recognition and Machine Learning Springer Christopher M Bishop page 183 First Edition ecoc OutputCodeClassifier Error Correcting Output Code based strategies are fairly different from one vs the rest and one vs one With these strategies each class is represented in a Euclidean space where each dimension can only be 0 or 1 Another way to put it is that each class is represented by a binary code an array of 0 and 1 The matrix which keeps track of the location code of each class is called the code book The code size is the dimensionality of the aforementioned space Intuitively each class should be represented by a code as unique as possible and a good code book should be designed to optimize classification accuracy In this implementation we simply use a randomly generated code book as advocated in 3 although more elaborate methods may be added in the future At fitting time one binary 
classifier per bit in the code book is fitted At prediction time the classifiers are used to project new points in the class space and the class closest to the points is chosen In class sklearn multiclass OutputCodeClassifier the code size attribute allows the user to control the number of classifiers which will be used It is a percentage of the total number of classes A number between 0 and 1 will require fewer classifiers than one vs the rest In theory log2 n classes n classes is sufficient to represent each class unambiguously However in practice it may not lead to good accuracy since log2 n classes is much smaller than n classes A number greater than 1 will require more classifiers than one vs the rest In this case some classifiers will in theory correct for the mistakes made by other classifiers hence the name error correcting In practice however this may not happen as classifier mistakes will typically be correlated The error correcting output codes have a similar effect to bagging Below is an example of multiclass learning using Output Codes from sklearn import datasets from sklearn multiclass import OutputCodeClassifier from sklearn svm import LinearSVC X y datasets load iris return X y True clf OutputCodeClassifier LinearSVC random state 0 code size 2 random state 0 clf fit X y predict X array 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 2 1 1 1 2 1 1 1 1 1 1 2 1 1 1 1 1 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 1 2 2 2 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 rubric References Solving multiclass learning problems via error correcting output codes Dietterich T Bakiri G Journal of Artificial Intelligence Research 2 1995 3 The error coding method and PICTs James G Hastie T Journal of Computational and Graphical statistics 7 1998 The Elements of Statistical Learning Hastie T Tibshirani R Friedman J page 606 second edition 2008 
multilabel classification Multilabel classification Multilabel classification closely related to multioutput classification is a classification task labeling each sample with m labels from n classes possible classes where m can be 0 to n classes inclusive This can be thought of as predicting properties of a sample that are not mutually exclusive Formally a binary output is assigned to each class for every sample Positive classes are indicated with 1 and negative classes with 0 or 1 It is thus comparable to running n classes binary classification tasks for example with class sklearn multioutput MultiOutputClassifier This approach treats each label independently whereas multilabel classifiers may treat the multiple classes simultaneously accounting for correlated behavior among them For example prediction of the topics relevant to a text document or video The document or video may be about one of religion politics finance or education several of the topic classes or all of the topic classes Target format A valid representation of term multilabel y is an either dense or sparse term binary matrix of shape n samples n classes Each column represents a class The 1 s in each row denote the positive classes a sample has been labeled with An example of a dense matrix y for 3 samples y np array 1 0 0 1 0 0 1 1 0 0 0 0 print y 1 0 0 1 0 0 1 1 0 0 0 0 Dense binary matrices can also be created using class sklearn preprocessing MultiLabelBinarizer For more information refer to ref preprocessing targets An example of the same y in sparse matrix form y sparse sparse csr matrix y print y sparse Compressed Sparse Row sparse matrix of dtype int64 with 4 stored elements and shape 3 4 Coords Values 0 0 1 0 3 1 1 2 1 1 3 1 multioutputclassfier MultiOutputClassifier Multilabel classification support can be added to any classifier with class sklearn multioutput MultiOutputClassifier This strategy consists of fitting one classifier per target This allows multiple target variable 
classifications The purpose of this class is to extend estimators to be able to estimate a series of target functions f1 f2 f3 fn that are trained on a single X predictor matrix to predict a series of responses y1 y2 y3 yn You can find a usage example for class sklearn multioutput MultiOutputClassifier as part of the section on ref multiclass multioutput classification since it is a generalization of multilabel classification to multiclass outputs instead of binary outputs classifierchain ClassifierChain Classifier chains see class sklearn multioutput ClassifierChain are a way of combining a number of binary classifiers into a single multi label model that is capable of exploiting correlations among targets For a multi label classification problem with N classes N binary classifiers are assigned an integer between 0 and N 1 These integers define the order of models in the chain Each classifier is then fit on the available training data plus the true labels of the classes whose models were assigned a lower number When predicting the true labels will not be available Instead the predictions of each model are passed on to the subsequent models in the chain to be used as features Clearly the order of the chain is important The first model in the chain has no information about the other labels while the last model in the chain has features indicating the presence of all of the other labels In general one does not know the optimal ordering of the models in the chain so typically many randomly ordered chains are fit and their predictions are averaged together rubric References Jesse Read Bernhard Pfahringer Geoff Holmes Eibe Frank Classifier Chains for Multi label Classification 2009 multiclass multioutput classification Multiclass multioutput classification Multiclass multioutput classification also known as multitask classification is a classification task which labels each sample with a set of non binary properties Both the number of properties and the number of 
classes per property is greater than 2 A single estimator thus handles several joint classification tasks This is both a generalization of the multi label classification task which only considers binary attributes as well as a generalization of the multi class classification task where only one property is considered For example classification of the properties type of fruit and colour for a set of images of fruit The property type of fruit has the possible classes apple pear and orange The property colour has the possible classes green red yellow and orange Each sample is an image of a fruit a label is output for both properties and each label is one of the possible classes of the corresponding property Note that all classifiers handling multiclass multioutput also known as multitask classification tasks support the multilabel classification task as a special case Multitask classification is similar to the multioutput classification task with different model formulations For more information see the relevant estimator documentation Below is an example of multiclass multioutput classification from sklearn datasets import make classification from sklearn multioutput import MultiOutputClassifier from sklearn ensemble import RandomForestClassifier from sklearn utils import shuffle import numpy as np X y1 make classification n samples 10 n features 100 n informative 30 n classes 3 random state 1 y2 shuffle y1 random state 1 y3 shuffle y1 random state 2 Y np vstack y1 y2 y3 T n samples n features X shape 10 100 n outputs Y shape 1 3 n classes 3 forest RandomForestClassifier random state 1 multi target forest MultiOutputClassifier forest n jobs 2 multi target forest fit X Y predict X array 2 2 0 1 2 1 2 1 0 0 0 2 0 2 1 0 0 2 1 1 0 1 1 1 0 0 2 2 0 0 warning At present no metric in mod sklearn metrics supports the multiclass multioutput classification task Target format A valid representation of term multioutput y is a dense matrix of shape n samples n classes of class 
labels A column wise concatenation of 1d term multiclass variables An example of y for 3 samples y np array apple green orange orange pear green print y apple green orange orange pear green multioutput regression Multioutput regression Multioutput regression predicts multiple numerical properties for each sample Each property is a numerical variable and the number of properties to be predicted for each sample is greater than or equal to 2 Some estimators that support multioutput regression are faster than just running n output estimators For example prediction of both wind speed and wind direction in degrees using data obtained at a certain location Each sample would be data obtained at one location and both wind speed and direction would be output for each sample The following regressors natively support multioutput regression class cross decomposition CCA class tree DecisionTreeRegressor class dummy DummyRegressor class linear model ElasticNet class tree ExtraTreeRegressor class ensemble ExtraTreesRegressor class gaussian process GaussianProcessRegressor class neighbors KNeighborsRegressor class kernel ridge KernelRidge class linear model Lars class linear model Lasso class linear model LassoLars class linear model LinearRegression class multioutput MultiOutputRegressor class linear model MultiTaskElasticNet class linear model MultiTaskElasticNetCV class linear model MultiTaskLasso class linear model MultiTaskLassoCV class linear model OrthogonalMatchingPursuit class cross decomposition PLSCanonical class cross decomposition PLSRegression class linear model RANSACRegressor class neighbors RadiusNeighborsRegressor class ensemble RandomForestRegressor class multioutput RegressorChain class linear model Ridge class linear model RidgeCV class compose TransformedTargetRegressor Target format A valid representation of term multioutput y is a dense matrix of shape n samples n output of floats A column wise concatenation of term continuous variables An example of y for 3 
samples y np array 31 4 94 40 5 109 25 0 30 print y 31 4 94 40 5 109 25 30 multioutputregressor MultiOutputRegressor Multioutput regression support can be added to any regressor with class sklearn multioutput MultiOutputRegressor This strategy consists of fitting one regressor per target Since each target is represented by exactly one regressor it is possible to gain knowledge about the target by inspecting its corresponding regressor As class sklearn multioutput MultiOutputRegressor fits one regressor per target it can not take advantage of correlations between targets Below is an example of multioutput regression from sklearn datasets import make regression from sklearn multioutput import MultiOutputRegressor from sklearn ensemble import GradientBoostingRegressor X y make regression n samples 10 n targets 3 random state 1 MultiOutputRegressor GradientBoostingRegressor random state 0 fit X y predict X array 154 75474165 147 03498585 50 03812219 7 12165031 5 12914884 81 46081961 187 8948621 100 44373091 13 88978285 141 62745778 95 02891072 191 48204257 97 03260883 165 34867495 139 52003279 123 92529176 21 25719016 7 84253 122 25193977 85 16443186 107 12274212 30 170388 94 80956739 12 16979946 140 72667194 176 50941682 17 50447799 149 37967282 81 15699552 5 72850319 regressorchain RegressorChain Regressor chains see class sklearn multioutput RegressorChain is analogous to class sklearn multioutput ClassifierChain as a way of combining a number of regressions into a single multi target model that is capable of exploiting correlations among targets |
scikit-learn neighbors Jake Vanderplas vanderplas astro washington edu sklearn neighbors Nearest Neighbors | .. _neighbors:
=================
Nearest Neighbors
=================
.. sectionauthor:: Jake Vanderplas <[email protected]>
.. currentmodule:: sklearn.neighbors
:mod:`sklearn.neighbors` provides functionality for unsupervised and
supervised neighbors-based learning methods. Unsupervised nearest neighbors
is the foundation of many other learning methods,
notably manifold learning and spectral clustering. Supervised neighbors-based
learning comes in two flavors: `classification`_ for data with
discrete labels, and `regression`_ for data with continuous labels.
The principle behind nearest neighbor methods is to find a predefined number
of training samples closest in distance to the new point, and
predict the label from these. The number of samples can be a user-defined
constant (k-nearest neighbor learning), or vary based
on the local density of points (radius-based neighbor learning).
The distance can, in general, be any metric measure: standard Euclidean
distance is the most common choice.
Neighbors-based methods are known as *non-generalizing* machine
learning methods, since they simply "remember" all of their training data
(possibly transformed into a fast indexing structure such as a
:ref:`Ball Tree <ball_tree>` or :ref:`KD Tree <kd_tree>`).
Despite its simplicity, nearest neighbors has been successful in a
large number of classification and regression problems, including
handwritten digits and satellite image scenes. Being a non-parametric method,
it is often successful in classification situations where the decision
boundary is very irregular.
The classes in :mod:`sklearn.neighbors` can handle either NumPy arrays or
`scipy.sparse` matrices as input. For dense matrices, a large number of
possible distance metrics are supported. For sparse matrices, arbitrary
Minkowski metrics are supported for searches.
There are many learning routines which rely on nearest neighbors at their
core. One example is :ref:`kernel density estimation <kernel_density>`,
discussed in the :ref:`density estimation <density_estimation>` section.
.. _unsupervised_neighbors:
Unsupervised Nearest Neighbors
==============================
:class:`NearestNeighbors` implements unsupervised nearest neighbors learning.
It acts as a uniform interface to three different nearest neighbors
algorithms: :class:`BallTree`, :class:`KDTree`, and a
brute-force algorithm based on routines in :mod:`sklearn.metrics.pairwise`.
The choice of neighbors search algorithm is controlled through the keyword
``'algorithm'``, which must be one of
``['auto', 'ball_tree', 'kd_tree', 'brute']``. When the default value
``'auto'`` is passed, the algorithm attempts to determine the best approach
from the training data. For a discussion of the strengths and weaknesses
of each option, see `Nearest Neighbor Algorithms`_.
.. warning::
    If two neighbors, :math:`k` and :math:`k+1`, have identical distances
    but different labels, the result of a nearest neighbors query will
    depend on the ordering of the training data.
Finding the Nearest Neighbors
-----------------------------
For the simple task of finding the nearest neighbors between two sets of
data, the unsupervised algorithms within :mod:`sklearn.neighbors` can be
used:
>>> from sklearn.neighbors import NearestNeighbors
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
>>> distances, indices = nbrs.kneighbors(X)
>>> indices
array([[0, 1],
[1, 0],
[2, 1],
[3, 4],
[4, 3],
[5, 4]]...)
>>> distances
array([[0. , 1. ],
[0. , 1. ],
[0. , 1.41421356],
[0. , 1. ],
[0. , 1. ],
[0. , 1.41421356]])
Because the query set matches the training set, the nearest neighbor of each
point is the point itself, at a distance of zero.
It is also possible to efficiently produce a sparse graph showing the
connections between neighboring points:
>>> nbrs.kneighbors_graph(X).toarray()
array([[1., 1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0., 0.],
[0., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 0.],
[0., 0., 0., 1., 1., 0.],
[0., 0., 0., 0., 1., 1.]])
The dataset is structured such that points nearby in index order are nearby
in parameter space, leading to an approximately block-diagonal matrix of
K-nearest neighbors. Such a sparse graph is useful in a variety of
circumstances which make use of spatial relationships between points for
unsupervised learning: in particular, see :class:`~sklearn.manifold.Isomap`,
:class:`~sklearn.manifold.LocallyLinearEmbedding`, and
:class:`~sklearn.cluster.SpectralClustering`.
KDTree and BallTree Classes
---------------------------
Alternatively, one can use the :class:`KDTree` or :class:`BallTree` classes
directly to find nearest neighbors. This is the functionality wrapped by
the :class:`NearestNeighbors` class used above. The Ball Tree and KD Tree
have the same interface; we'll show an example of using the KD Tree here:
>>> from sklearn.neighbors import KDTree
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> kdt = KDTree(X, leaf_size=30, metric='euclidean')
>>> kdt.query(X, k=2, return_distance=False)
array([[0, 1],
[1, 0],
[2, 1],
[3, 4],
[4, 3],
[5, 4]]...)
Refer to the :class:`KDTree` and :class:`BallTree` class documentation
for more information on the options available for nearest neighbors searches,
including specification of query strategies, distance metrics, etc. For a list
of valid metrics use `KDTree.valid_metrics` and `BallTree.valid_metrics`:
>>> from sklearn.neighbors import KDTree, BallTree
>>> KDTree.valid_metrics
['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity']
>>> BallTree.valid_metrics
['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity', 'seuclidean', 'mahalanobis', 'hamming', 'canberra', 'braycurtis', 'jaccard', 'dice', 'rogerstanimoto', 'russellrao', 'sokalmichener', 'sokalsneath', 'haversine', 'pyfunc']
.. _classification:
Nearest Neighbors Classification
================================
Neighbors-based classification is a type of *instance-based learning* or
*non-generalizing learning*: it does not attempt to construct a general
internal model, but simply stores instances of the training data.
Classification is computed from a simple majority vote of the nearest
neighbors of each point: a query point is assigned the data class which
has the most representatives within the nearest neighbors of the point.
scikit-learn implements two different nearest neighbors classifiers:
:class:`KNeighborsClassifier` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer value
specified by the user. :class:`RadiusNeighborsClassifier` implements learning
based on the number of neighbors within a fixed radius :math:`r` of each
training point, where :math:`r` is a floating-point value specified by
the user.
The :math:`k`-neighbors classification in :class:`KNeighborsClassifier`
is the most commonly used technique. The optimal choice of the value :math:`k`
is highly data-dependent: in general a larger :math:`k` suppresses the effects
of noise, but makes the classification boundaries less distinct.
In cases where the data is not uniformly sampled, radius-based neighbors
classification in :class:`RadiusNeighborsClassifier` can be a better choice.
The user specifies a fixed radius :math:`r`, such that points in sparser
neighborhoods use fewer nearest neighbors for the classification. For
high-dimensional parameter spaces, this method becomes less effective due
to the so-called "curse of dimensionality".
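As a sketch of radius-based classification on synthetic data (the dataset and radius here are made up for illustration):

```python
import numpy as np
from sklearn.neighbors import RadiusNeighborsClassifier

# Synthetic 2-d data: class is determined by the first coordinate.
rng = np.random.RandomState(0)
X = rng.rand(100, 2)
y = (X[:, 0] > 0.5).astype(int)

# Every training point within radius 0.2 of a query votes, so queries
# in dense regions use more neighbors than queries in sparse ones.
clf = RadiusNeighborsClassifier(radius=0.2).fit(X, y)
print(clf.predict([[0.1, 0.5], [0.9, 0.5]]))
```

Note that a query point with *no* training point inside the radius raises an error unless ``outlier_label`` is set.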
The basic nearest neighbors classification uses uniform weights: that is, the
value assigned to a query point is computed from a simple majority vote of
the nearest neighbors. Under some circumstances, it is better to weight the
neighbors such that nearer neighbors contribute more to the fit. This can
be accomplished through the ``weights`` keyword. The default value,
``weights = 'uniform'``, assigns uniform weights to each neighbor.
``weights = 'distance'`` assigns weights proportional to the inverse of the
distance from the query point. Alternatively, a user-defined function of the
distance can be supplied to compute the weights.
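A sketch of how the three weighting options fit together, using the iris dataset purely for illustration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# 'uniform': each of the 15 neighbors casts one equal vote.
uniform = KNeighborsClassifier(n_neighbors=15, weights='uniform').fit(X, y)

# 'distance': closer neighbors carry proportionally more weight.
distance = KNeighborsClassifier(n_neighbors=15, weights='distance').fit(X, y)

# A user-defined weighting function receives the array of distances
# and must return an array of weights with the same shape.
gaussian = KNeighborsClassifier(
    n_neighbors=15, weights=lambda d: np.exp(-d ** 2)
).fit(X, y)

print(uniform.score(X, y), distance.score(X, y), gaussian.score(X, y))
```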
.. |classification_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_classification_001.png
:target: ../auto_examples/neighbors/plot_classification.html
:scale: 75
.. centered:: |classification_1|
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_neighbors_plot_classification.py`: an example of
classification using nearest neighbors.
.. _regression:
Nearest Neighbors Regression
============================
Neighbors-based regression can be used in cases where the data labels are
continuous rather than discrete variables. The label assigned to a query
point is computed based on the mean of the labels of its nearest neighbors.
scikit-learn implements two different neighbors regressors:
:class:`KNeighborsRegressor` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer
value specified by the user. :class:`RadiusNeighborsRegressor` implements
learning based on the neighbors within a fixed radius :math:`r` of the
query point, where :math:`r` is a floating-point value specified by the
user.
The basic nearest neighbors regression uses uniform weights: that is,
each point in the local neighborhood contributes uniformly to the
prediction for a query point. Under some circumstances, it can be
advantageous to weight points such that nearby points contribute more
to the regression than faraway points. This can be accomplished through
the ``weights`` keyword. The default value, ``weights = 'uniform'``,
assigns equal weights to all points. ``weights = 'distance'`` assigns
weights proportional to the inverse of the distance from the query point.
Alternatively, a user-defined function of the distance can be supplied,
which will be used to compute the weights.
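A minimal sketch of distance-weighted regression on synthetic 1-d data (the sinusoid and noise level are assumptions for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Noisy samples of y = sin(x) on [0, 5].
rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(40, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

# Each prediction is a distance-weighted mean of the 5 nearest labels.
reg = KNeighborsRegressor(n_neighbors=5, weights='distance').fit(X, y)
X_test = np.linspace(0, 5, 100)[:, np.newaxis]
y_pred = reg.predict(X_test)
```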
.. figure:: ../auto_examples/neighbors/images/sphx_glr_plot_regression_001.png
:target: ../auto_examples/neighbors/plot_regression.html
:align: center
:scale: 75
The use of multi-output nearest neighbors for regression is demonstrated in
:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`. In this example, the inputs
X are the pixels of the upper half of faces and the outputs Y are the pixels of
the lower half of those faces.
.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multioutput_face_completion_001.png
:target: ../auto_examples/miscellaneous/plot_multioutput_face_completion.html
:scale: 75
:align: center
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_neighbors_plot_regression.py`: an example of regression
using nearest neighbors.
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`:
an example of multi-output regression using nearest neighbors.
Nearest Neighbor Algorithms
===========================
.. _brute_force:
Brute Force
-----------
Fast computation of nearest neighbors is an active area of research in
machine learning. The most naive neighbor search implementation involves
the brute-force computation of distances between all pairs of points in the
dataset: for :math:`N` samples in :math:`D` dimensions, this approach scales
as :math:`O[D N^2]`. Efficient brute-force neighbors searches can be very
competitive for small data samples.
However, as the number of samples :math:`N` grows, the brute-force
approach quickly becomes infeasible. In the classes within
:mod:`sklearn.neighbors`, brute-force neighbors searches are specified
using the keyword ``algorithm = 'brute'``, and are computed using the
routines available in :mod:`sklearn.metrics.pairwise`.
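One practical consequence of routing through :mod:`sklearn.metrics.pairwise` is that the brute-force backend accepts metrics, such as cosine, that appear in neither tree's ``valid_metrics`` list above. A sketch on synthetic data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.rand(100, 3)

# Cosine distance is unavailable to the ball tree and KD tree, so
# only algorithm='brute' can serve this query.
nbrs = NearestNeighbors(n_neighbors=3, algorithm='brute',
                        metric='cosine').fit(X)
distances, indices = nbrs.kneighbors(X[:2])
```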
.. _kd_tree:
K-D Tree
--------
To address the computational inefficiencies of the brute-force approach, a
variety of tree-based data structures have been invented. In general, these
structures attempt to reduce the required number of distance calculations
by efficiently encoding aggregate distance information for the sample.
The basic idea is that if point :math:`A` is very distant from point
:math:`B`, and point :math:`B` is very close to point :math:`C`,
then we know that points :math:`A` and :math:`C`
are very distant, *without having to explicitly calculate their distance*.
In this way, the computational cost of a nearest neighbors search can be
reduced to :math:`O[D N \log(N)]` or better. This is a significant
improvement over brute-force for large :math:`N`.
An early approach to taking advantage of this aggregate information was
the *KD tree* data structure (short for *K-dimensional tree*), which
generalizes two-dimensional *Quad-trees* and 3-dimensional *Oct-trees*
to an arbitrary number of dimensions. The KD tree is a binary tree
structure which recursively partitions the parameter space along the data
axes, dividing it into nested orthotropic regions into which data points
are filed. The construction of a KD tree is very fast: because partitioning
is performed only along the data axes, no :math:`D`-dimensional distances
need to be computed. Once constructed, the nearest neighbor of a query
point can be determined with only :math:`O[\log(N)]` distance computations.
Though the KD tree approach is very fast for low-dimensional (:math:`D < 20`)
neighbors searches, it becomes inefficient as :math:`D` grows very large:
this is one manifestation of the so-called "curse of dimensionality".
In scikit-learn, KD tree neighbors searches are specified using the
keyword ``algorithm = 'kd_tree'``, and are computed using the class
:class:`KDTree`.
.. dropdown:: References
* `"Multidimensional binary search trees used for associative searching"
<https://dl.acm.org/citation.cfm?doid=361002.361007>`_,
Bentley, J.L., Communications of the ACM (1975)
.. _ball_tree:
Ball Tree
---------
To address the inefficiencies of KD Trees in higher dimensions, the *ball tree*
data structure was developed. Where KD trees partition data along
Cartesian axes, ball trees partition data in a series of nesting
hyper-spheres. This makes tree construction more costly than that of the
KD tree, but results in a data structure which can be very efficient on
highly structured data, even in very high dimensions.
A ball tree recursively divides the data into
nodes defined by a centroid :math:`C` and radius :math:`r`, such that each
point in the node lies within the hyper-sphere defined by :math:`r` and
:math:`C`. The number of candidate points for a neighbor search
is reduced through use of the *triangle inequality*:
.. math:: |x+y| \leq |x| + |y|
With this setup, a single distance calculation between a test point and
the centroid is sufficient to determine a lower and upper bound on the
distance to all points within the node.
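These bounds can be illustrated with a small NumPy sketch (the node contents and query point below are synthetic): for a node with centroid :math:`C` and radius :math:`r`, every contained point lies within distance :math:`[\max(d(q, C) - r, 0),\ d(q, C) + r]` of a query point :math:`q`.

```python
import numpy as np

rng = np.random.RandomState(0)
centroid = np.zeros(5)
radius = 1.0

# Synthetic node contents: points inside the hyper-sphere (C, r)
pts = rng.normal(size=(200, 5))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)
pts = pts * rng.uniform(0, radius, 200)[:, None]

query = rng.normal(size=5) * 3
d_c = np.linalg.norm(query - centroid)        # a single distance computation
lower, upper = max(d_c - radius, 0.0), d_c + radius

# The triangle inequality bounds hold for every point in the node
d_all = np.linalg.norm(pts - query, axis=1)
assert np.all(d_all >= lower - 1e-12)
assert np.all(d_all <= upper + 1e-12)
```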
Because of the spherical geometry of the ball tree nodes, it can out-perform
a *KD-tree* in high dimensions, though the actual performance is highly
dependent on the structure of the training data.
In scikit-learn, ball-tree-based
neighbors searches are specified using the keyword ``algorithm = 'ball_tree'``,
and are computed using the class :class:`BallTree`; the user can also work
with this class directly.
.. dropdown:: References
* `"Five Balltree Construction Algorithms"
<https://citeseerx.ist.psu.edu/doc_view/pid/17ac002939f8e950ffb32ec4dc8e86bdd8cb5ff1>`_,
Omohundro, S.M., International Computer Science Institute
Technical Report (1989)
.. dropdown:: Choice of Nearest Neighbors Algorithm
The optimal algorithm for a given dataset is a complicated choice, and
depends on a number of factors:
* number of samples :math:`N` (i.e. ``n_samples``) and dimensionality
:math:`D` (i.e. ``n_features``).
* *Brute force* query time grows as :math:`O[D N]`
* *Ball tree* query time grows as approximately :math:`O[D \log(N)]`
* *KD tree* query time changes with :math:`D` in a way that is difficult
to precisely characterise. For small :math:`D` (less than 20 or so)
the cost is approximately :math:`O[D\log(N)]`, and the KD tree
query can be very efficient.
For larger :math:`D`, the cost increases to nearly :math:`O[DN]`, and
the overhead due to the tree
structure can lead to queries which are slower than brute force.
For small data sets (:math:`N` less than 30 or so), :math:`\log(N)` is
comparable to :math:`N`, and brute force algorithms can be more efficient
than a tree-based approach. Both :class:`KDTree` and :class:`BallTree`
address this through providing a *leaf size* parameter: this controls the
number of samples at which a query switches to brute-force. This allows both
algorithms to approach the efficiency of a brute-force computation for small
:math:`N`.
* data structure: *intrinsic dimensionality* of the data and/or *sparsity*
of the data. Intrinsic dimensionality refers to the dimension
:math:`d \le D` of a manifold on which the data lies, which can be linearly
or non-linearly embedded in the parameter space. Sparsity refers to the
degree to which the data fills the parameter space (this is to be
distinguished from the concept as used in "sparse" matrices: the data
matrix may have no zero entries, but the **structure** can still be
"sparse" in this sense).
* *Brute force* query time is unchanged by data structure.
* *Ball tree* and *KD tree* query times can be greatly influenced
by data structure. In general, sparser data with a smaller intrinsic
dimensionality leads to faster query times. Because the KD tree
internal representation is aligned with the parameter axes, it will not
generally show as much improvement as ball tree for arbitrarily
structured data.
Datasets used in machine learning tend to be very structured, and are
very well-suited for tree-based queries.
* number of neighbors :math:`k` requested for a query point.
* *Brute force* query time is largely unaffected by the value of :math:`k`
* *Ball tree* and *KD tree* query time will become slower as :math:`k`
increases. This is due to two effects: first, a larger :math:`k` leads
to the necessity to search a larger portion of the parameter space.
Second, using :math:`k > 1` requires internal queueing of results
as the tree is traversed.
As :math:`k` becomes large compared to :math:`N`, the ability to prune
branches in a tree-based query is reduced. In this situation, Brute force
queries can be more efficient.
* number of query points. Both the ball tree and the KD Tree
require a construction phase. The cost of this construction becomes
negligible when amortized over many queries. If only a small number of
queries will be performed, however, the construction can make up
a significant fraction of the total cost. If very few query points
will be required, brute force is better than a tree-based method.
Currently, ``algorithm = 'auto'`` selects ``'brute'`` if any of the following
conditions is met:
* input data is sparse
* ``metric = 'precomputed'``
* :math:`D > 15`
* :math:`k \geq N/2`
* ``effective_metric_`` isn't in the ``VALID_METRICS`` list for either
``'kd_tree'`` or ``'ball_tree'``
Otherwise, it selects the first out of ``'kd_tree'`` and ``'ball_tree'`` that
has ``effective_metric_`` in its ``VALID_METRICS`` list. This heuristic is
based on the following assumptions:
* the number of query points is at least the same order as the number of
training points
* ``leaf_size`` is close to its default value of ``30``
* when :math:`D > 15`, the intrinsic dimensionality of the data is generally
too high for tree-based methods
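These rules can be observed empirically. The sketch below inspects the ``_fit_method`` attribute, which records the algorithm actually chosen; this is a private, internal attribute used here purely for illustration, and may change between versions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X_dense = rng.random_sample((100, 3))    # low-dimensional, dense
X_high = rng.random_sample((100, 50))    # D > 15
X_sparse = csr_matrix(X_dense)           # sparse input

# auto picks kd_tree for the dense low-dimensional data,
# and falls back to brute force for high-D and sparse inputs
methods = [
    NearestNeighbors(algorithm='auto', n_neighbors=3).fit(X)._fit_method
    for X in (X_dense, X_high, X_sparse)
]
print(methods)
```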
.. dropdown:: Effect of ``leaf_size``
As noted above, for small sample sizes a brute force search can be more
efficient than a tree-based query. This fact is accounted for in the ball
tree and KD tree by internally switching to brute force searches within
leaf nodes. The level of this switch can be specified with the parameter
``leaf_size``. This parameter choice has many effects:
**construction time**
A larger ``leaf_size`` leads to a faster tree construction time, because
fewer nodes need to be created.
**query time**
Both a large or small ``leaf_size`` can lead to suboptimal query cost.
For ``leaf_size`` approaching 1, the overhead involved in traversing
nodes can significantly slow query times. For ``leaf_size`` approaching
the size of the training set, queries become essentially brute force.
A good compromise between these is ``leaf_size = 30``, the default value
of the parameter.
**memory**
As ``leaf_size`` increases, the memory required to store a tree structure
decreases. This is especially important in the case of ball tree, which
stores a :math:`D`-dimensional centroid for each node. The required
storage space for :class:`BallTree` is approximately ``1 / leaf_size`` times
the size of the training set.
``leaf_size`` is not referenced for brute force queries.
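Whatever ``leaf_size`` is chosen, the query results themselves are unchanged; only construction time, query time, and memory are affected. A quick sketch on synthetic data:

```python
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.RandomState(0)
X = rng.random_sample((500, 10))
queries = X[:5]

# Neighbor indices are identical for every leaf size
reference = BallTree(X, leaf_size=30).query(queries, k=3)[1]
for leaf_size in (1, 100, 500):
    ind = BallTree(X, leaf_size=leaf_size).query(queries, k=3)[1]
    assert np.array_equal(ind, reference)
```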
.. dropdown:: Valid Metrics for Nearest Neighbor Algorithms
For a list of available metrics, see the documentation of the
:class:`~sklearn.metrics.DistanceMetric` class and the metrics listed in
`sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS`. Note that the "cosine"
metric uses :func:`~sklearn.metrics.pairwise.cosine_distances`.
A list of valid metrics for any of the above algorithms can be obtained by using their
``valid_metrics`` attribute. For example, valid metrics for ``KDTree`` can be generated by:
>>> from sklearn.neighbors import KDTree
>>> print(sorted(KDTree.valid_metrics))
['chebyshev', 'cityblock', 'euclidean', 'infinity', 'l1', 'l2', 'manhattan', 'minkowski', 'p']
.. _nearest_centroid_classifier:
Nearest Centroid Classifier
===========================
The :class:`NearestCentroid` classifier is a simple algorithm that represents
each class by the centroid of its members. In effect, this makes it
similar to the label updating phase of the :class:`~sklearn.cluster.KMeans` algorithm.
It also has no parameters to choose, making it a good baseline classifier. It
does, however, suffer on non-convex classes, as well as when classes have
drastically different variances, as equal variance in all dimensions is
assumed. See Linear Discriminant Analysis (:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`)
and Quadratic Discriminant Analysis (:class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`)
for more complex methods that do not make this assumption. Usage of the default
:class:`NearestCentroid` is simple:
>>> from sklearn.neighbors import NearestCentroid
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = NearestCentroid()
>>> clf.fit(X, y)
NearestCentroid()
>>> print(clf.predict([[-0.8, -1]]))
[1]
Nearest Shrunken Centroid
-------------------------
The :class:`NearestCentroid` classifier has a ``shrink_threshold`` parameter,
which implements the nearest shrunken centroid classifier. In effect, the value
of each feature for each centroid is divided by the within-class variance of
that feature. The feature values are then reduced by ``shrink_threshold``. Most
notably, if a particular feature value crosses zero, it is set
to zero. In effect, this removes the feature from affecting the classification.
This is useful, for example, for removing noisy features.
In the example below, using a small shrink threshold increases the accuracy of
the model from 0.81 to 0.82.
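The effect of ``shrink_threshold`` can also be explored directly. The sketch below uses a synthetic dataset (the dataset parameters and threshold values are arbitrary choices, not taken from the example above):

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestCentroid

# Synthetic two-class data with many uninformative features
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Larger thresholds shrink more feature values to zero,
# effectively removing noisy features from the classification
for shrink in (None, 0.1, 0.5):
    clf = NearestCentroid(shrink_threshold=shrink).fit(X, y)
    print(shrink, round(clf.score(X, y), 3))
```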
.. |nearest_centroid_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_001.png
:target: ../auto_examples/neighbors/plot_nearest_centroid.html
:scale: 50
.. |nearest_centroid_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_002.png
:target: ../auto_examples/neighbors/plot_nearest_centroid.html
:scale: 50
.. centered:: |nearest_centroid_1| |nearest_centroid_2|
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_neighbors_plot_nearest_centroid.py`: an example of
classification using nearest centroid with different shrink thresholds.
.. _neighbors_transformer:
Nearest Neighbors Transformer
=============================
Many scikit-learn estimators rely on nearest neighbors: Several classifiers and
regressors such as :class:`KNeighborsClassifier` and
:class:`KNeighborsRegressor`, but also some clustering methods such as
:class:`~sklearn.cluster.DBSCAN` and
:class:`~sklearn.cluster.SpectralClustering`, and some manifold embeddings such
as :class:`~sklearn.manifold.TSNE` and :class:`~sklearn.manifold.Isomap`.
All these estimators can compute internally the nearest neighbors, but most of
them also accept precomputed nearest neighbors :term:`sparse graph`,
as given by :func:`~sklearn.neighbors.kneighbors_graph` and
:func:`~sklearn.neighbors.radius_neighbors_graph`. With
`mode='connectivity'`, these functions return a binary adjacency sparse graph
as required, for instance, in :class:`~sklearn.cluster.SpectralClustering`.
Whereas with `mode='distance'`, they return a distance sparse graph as required,
for instance, in :class:`~sklearn.cluster.DBSCAN`. To include these functions in
a scikit-learn pipeline, one can also use the corresponding classes
:class:`KNeighborsTransformer` and :class:`RadiusNeighborsTransformer`.
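A minimal sketch of the two modes on a tiny synthetic dataset (with the default ``include_self=False``, each point's nearest neighbor excludes itself):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.array([[0.0], [1.0], [3.0]])

# 'connectivity' -> binary adjacency; 'distance' -> edge weights are distances
A = kneighbors_graph(X, n_neighbors=1, mode='connectivity')
D = kneighbors_graph(X, n_neighbors=1, mode='distance')
print(A.toarray())
print(D.toarray())
```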
The benefits of this sparse graph API are multiple.
First, the precomputed graph can be re-used multiple times, for instance while
varying a parameter of the estimator. This can be done manually by the user, or
using the caching properties of the scikit-learn pipeline:
>>> import tempfile
>>> from sklearn.manifold import Isomap
>>> from sklearn.neighbors import KNeighborsTransformer
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.datasets import make_regression
>>> cache_path = tempfile.gettempdir() # we use a temporary folder here
>>> X, _ = make_regression(n_samples=50, n_features=25, random_state=0)
>>> estimator = make_pipeline(
... KNeighborsTransformer(mode='distance'),
... Isomap(n_components=3, metric='precomputed'),
... memory=cache_path)
>>> X_embedded = estimator.fit_transform(X)
>>> X_embedded.shape
(50, 3)
Second, precomputing the graph can give finer control on the nearest neighbors
estimation, for instance enabling multiprocessing through the parameter
`n_jobs`, which might not be available in all estimators.
Finally, the precomputation can be performed by custom estimators to use
different implementations, such as approximate nearest neighbors methods, or
implementations with special data types. The precomputed neighbors
:term:`sparse graph` needs to be formatted as in
:func:`~sklearn.neighbors.radius_neighbors_graph` output:
* a CSR matrix (although COO, CSC or LIL will be accepted).
* only explicitly store nearest neighborhoods of each sample with respect to the
training data. This should include those at 0 distance from a query point,
including the matrix diagonal when computing the nearest neighborhoods
between the training data and itself.
* each row's `data` should store the distances in increasing order (this is
  optional: unsorted data will be stable-sorted, adding a computational overhead).
* all values in data should be non-negative.
* there should be no duplicate `indices` in any row
(see https://github.com/scipy/scipy/issues/5807).
* if the algorithm being passed the precomputed matrix uses k nearest neighbors
(as opposed to radius neighborhood), at least k neighbors must be stored in
each row (or k+1, as explained in the following note).
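The requirements above can be satisfied by :func:`~sklearn.neighbors.kneighbors_graph` itself. The sketch below (synthetic one-dimensional data) builds a sorted distance graph that includes each point as its own neighbor, then feeds it to a classifier with ``metric='precomputed'``:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph, KNeighborsClassifier

X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Sorted k-neighbors distance graph; include_self=True stores each point
# as its own 0-distance neighbor (the extra neighbor discussed below)
graph = kneighbors_graph(X, n_neighbors=3, mode='distance', include_self=True)

clf = KNeighborsClassifier(n_neighbors=2, metric='precomputed')
clf.fit(graph, y)
print(clf.predict(graph[:1]))
```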
.. note::
When a specific number of neighbors is queried (using
:class:`KNeighborsTransformer`), the definition of `n_neighbors` is ambiguous
since it can either include each training point as its own neighbor, or
exclude them. Neither choice is perfect, since including them leads to a
different number of non-self neighbors during training and testing, while
excluding them leads to a difference between `fit(X).transform(X)` and
`fit_transform(X)`, which is against the scikit-learn API.
In :class:`KNeighborsTransformer` we use the definition which includes each
training point as its own neighbor in the count of `n_neighbors`. However,
for compatibility reasons with other estimators which use the other
definition, one extra neighbor will be computed when `mode == 'distance'`.
To maximise compatibility with all estimators, a safe choice is to always
include one extra neighbor in a custom nearest neighbors estimator, since
unnecessary neighbors will be filtered by following estimators.
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_neighbors_approximate_nearest_neighbors.py`:
an example of pipelining :class:`KNeighborsTransformer` and
:class:`~sklearn.manifold.TSNE`. Also proposes two custom nearest neighbors
estimators based on external packages.
* :ref:`sphx_glr_auto_examples_neighbors_plot_caching_nearest_neighbors.py`:
an example of pipelining :class:`KNeighborsTransformer` and
:class:`KNeighborsClassifier` to enable caching of the neighbors graph
during a hyper-parameter grid-search.
.. _nca:
Neighborhood Components Analysis
================================
.. sectionauthor:: William de Vazelhes <[email protected]>
Neighborhood Components Analysis (NCA, :class:`NeighborhoodComponentsAnalysis`)
is a distance metric learning algorithm which aims to improve the accuracy of
nearest neighbors classification compared to the standard Euclidean distance.
The algorithm directly maximizes a stochastic variant of the leave-one-out
k-nearest neighbors (KNN) score on the training set. It can also learn a
low-dimensional linear projection of data that can be used for data
visualization and fast classification.
.. |nca_illustration_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_001.png
:target: ../auto_examples/neighbors/plot_nca_illustration.html
:scale: 50
.. |nca_illustration_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_002.png
:target: ../auto_examples/neighbors/plot_nca_illustration.html
:scale: 50
.. centered:: |nca_illustration_1| |nca_illustration_2|
In the above illustrating figure, we consider some points from a randomly
generated dataset. We focus on the stochastic KNN classification of point no.
3. The thickness of a link between sample 3 and another point is proportional
to their distance, and can be seen as the relative weight (or probability) that
a stochastic nearest neighbor prediction rule would assign to this point. In
the original space, sample 3 has many stochastic neighbors from various
classes, so the right class is not very likely. However, in the projected space
learned by NCA, the only stochastic neighbors with non-negligible weight are
from the same class as sample 3, guaranteeing that the latter will be well
classified. See the :ref:`mathematical formulation <nca_mathematical_formulation>`
for more details.
Classification
--------------
Combined with a nearest neighbors classifier (:class:`KNeighborsClassifier`),
NCA is attractive for classification because it can naturally handle
multi-class problems without any increase in the model size, and does not
introduce additional parameters that require fine-tuning by the user.
NCA classification has been shown to work well in practice for data sets of
varying size and difficulty. In contrast to related methods such as Linear
Discriminant Analysis, NCA does not make any assumptions about the class
distributions. The nearest neighbor classification can naturally produce highly
irregular decision boundaries.
To use this model for classification, one needs to combine a
:class:`NeighborhoodComponentsAnalysis` instance that learns the optimal
transformation with a :class:`KNeighborsClassifier` instance that performs the
classification in the projected space. Here is an example using the two
classes:
>>> from sklearn.neighbors import (NeighborhoodComponentsAnalysis,
... KNeighborsClassifier)
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import Pipeline
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
... stratify=y, test_size=0.7, random_state=42)
>>> nca = NeighborhoodComponentsAnalysis(random_state=42)
>>> knn = KNeighborsClassifier(n_neighbors=3)
>>> nca_pipe = Pipeline([('nca', nca), ('knn', knn)])
>>> nca_pipe.fit(X_train, y_train)
Pipeline(...)
>>> print(nca_pipe.score(X_test, y_test))
0.96190476...
.. |nca_classification_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_classification_001.png
:target: ../auto_examples/neighbors/plot_nca_classification.html
:scale: 50
.. |nca_classification_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_classification_002.png
:target: ../auto_examples/neighbors/plot_nca_classification.html
:scale: 50
.. centered:: |nca_classification_1| |nca_classification_2|
The plot shows decision boundaries for Nearest Neighbor Classification and
Neighborhood Components Analysis classification on the iris dataset, when
training and scoring on only two features, for visualisation purposes.
.. _nca_dim_reduction:
Dimensionality reduction
------------------------
NCA can be used to perform supervised dimensionality reduction. The input data
are projected onto a linear subspace consisting of the directions which
minimize the NCA objective. The desired dimensionality can be set using the
parameter ``n_components``. For instance, the following figure shows a
comparison of dimensionality reduction with Principal Component Analysis
(:class:`~sklearn.decomposition.PCA`), Linear Discriminant Analysis
(:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`) and
Neighborhood Component Analysis (:class:`NeighborhoodComponentsAnalysis`) on
the Digits dataset, a dataset with size :math:`n_{samples} = 1797` and
:math:`n_{features} = 64`. The data set is split into a training and a test set
of equal size, then standardized. For evaluation the 3-nearest neighbor
classification accuracy is computed on the 2-dimensional projected points found
by each method. Each data sample belongs to one of 10 classes.
.. |nca_dim_reduction_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_001.png
:target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
:width: 32%
.. |nca_dim_reduction_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_002.png
:target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
:width: 32%
.. |nca_dim_reduction_3| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_003.png
:target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
:width: 32%
.. centered:: |nca_dim_reduction_1| |nca_dim_reduction_2| |nca_dim_reduction_3|
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_classification.py`
* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_dim_reduction.py`
* :ref:`sphx_glr_auto_examples_manifold_plot_lle_digits.py`
.. _nca_mathematical_formulation:
Mathematical formulation
------------------------
The goal of NCA is to learn an optimal linear transformation matrix of size
``(n_components, n_features)``, which maximises the sum over all samples
:math:`i` of the probability :math:`p_i` that :math:`i` is correctly
classified, i.e.:
.. math::
\underset{L}{\arg\max} \sum\limits_{i=0}^{N - 1} p_{i}
with :math:`N` = ``n_samples`` and :math:`p_i` the probability of sample
:math:`i` being correctly classified according to a stochastic nearest
neighbors rule in the learned embedded space:
.. math::
p_{i}=\sum\limits_{j \in C_i}{p_{i j}}
where :math:`C_i` is the set of points in the same class as sample :math:`i`,
and :math:`p_{i j}` is the softmax over Euclidean distances in the embedded
space:
.. math::
p_{i j} = \frac{\exp(-||L x_i - L x_j||^2)}{\sum\limits_{k \ne i} \exp(-||L x_i - L x_k||^2)} , \quad p_{i i} = 0
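The softmax above can be computed directly in NumPy. The sketch below uses a random candidate transformation :math:`L` on synthetic data (not a fitted NCA model) and checks that each row of :math:`p_{ij}` sums to one:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.random_sample((5, 3))
L = rng.random_sample((2, 3))   # an arbitrary candidate transformation

emb = X @ L.T                   # embedded points, shape (n_samples, n_components)
sq_dist = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)

P = np.exp(-sq_dist)
np.fill_diagonal(P, 0.0)               # p_ii = 0
P /= P.sum(axis=1, keepdims=True)      # softmax over j != i
assert np.allclose(P.sum(axis=1), 1.0)
```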
.. dropdown:: Mahalanobis distance
NCA can be seen as learning a (squared) Mahalanobis distance metric:
.. math::
|| L(x_i - x_j)||^2 = (x_i - x_j)^TM(x_i - x_j),
where :math:`M = L^T L` is a symmetric positive semi-definite matrix of size
``(n_features, n_features)``.
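This identity is straightforward to verify numerically for a random :math:`L` (a standalone sketch, not part of the estimator's API):

```python
import numpy as np

rng = np.random.RandomState(0)
L = rng.random_sample((2, 3))
M = L.T @ L                      # symmetric positive semi-definite

x_i, x_j = rng.random_sample(3), rng.random_sample(3)
d = x_i - x_j

# ||L (x_i - x_j)||^2 equals the Mahalanobis form (x_i - x_j)^T M (x_i - x_j)
lhs = np.linalg.norm(L @ d) ** 2
rhs = d @ M @ d
assert np.isclose(lhs, rhs)
```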
Implementation
--------------
This implementation follows what is explained in the original paper [1]_. For
the optimisation method, it currently uses scipy's L-BFGS-B with a full
gradient computation at each iteration, to avoid having to tune the learning
rate and to provide stable learning.
See the examples below and the docstring of
:meth:`NeighborhoodComponentsAnalysis.fit` for further information.
Complexity
----------
Training
^^^^^^^^
NCA stores a matrix of pairwise distances, taking ``n_samples ** 2`` memory.
Time complexity depends on the number of iterations done by the optimisation
algorithm. However, one can set the maximum number of iterations with the
argument ``max_iter``. For each iteration, time complexity is
``O(n_components x n_samples x min(n_samples, n_features))``.
Transform
^^^^^^^^^
Here the ``transform`` operation returns :math:`LX^T`, therefore its time
complexity is ``O(n_components x n_features x n_samples_test)``. There is no
added space complexity in the operation.
.. rubric:: References
.. [1] `"Neighbourhood Components Analysis"
<http://www.cs.nyu.edu/~roweis/papers/ncanips.pdf>`_,
J. Goldberger, S. Roweis, G. Hinton, R. Salakhutdinov, Advances in
Neural Information Processing Systems, Vol. 17, May 2005, pp. 513-520.
* `Wikipedia entry on Neighborhood Components Analysis
<https://en.wikipedia.org/wiki/Neighbourhood_components_analysis>`_ | scikit-learn | neighbors Nearest Neighbors sectionauthor Jake Vanderplas vanderplas astro washington edu currentmodule sklearn neighbors mod sklearn neighbors provides functionality for unsupervised and supervised neighbors based learning methods Unsupervised nearest neighbors is the foundation of many other learning methods notably manifold learning and spectral clustering Supervised neighbors based learning comes in two flavors classification for data with discrete labels and regression for data with continuous labels The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point and predict the label from these The number of samples can be a user defined constant k nearest neighbor learning or vary based on the local density of points radius based neighbor learning The distance can in general be any metric measure standard Euclidean distance is the most common choice Neighbors based methods are known as non generalizing machine learning methods since they simply remember all of its training data possibly transformed into a fast indexing structure such as a ref Ball Tree ball tree or ref KD Tree kd tree Despite its simplicity nearest neighbors has been successful in a large number of classification and regression problems including handwritten digits and satellite image scenes Being a non parametric method it is often successful in classification situations where the decision boundary is very irregular The classes in mod sklearn neighbors can handle either NumPy arrays or scipy sparse matrices as input For dense matrices a large number of possible distance metrics are supported For sparse matrices arbitrary Minkowski metrics are supported for searches There are many learning routines which rely on nearest neighbors at their core One example is ref kernel density estimation kernel density discussed in the ref density 
estimation density estimation section unsupervised neighbors Unsupervised Nearest Neighbors class NearestNeighbors implements unsupervised nearest neighbors learning It acts as a uniform interface to three different nearest neighbors algorithms class BallTree class KDTree and a brute force algorithm based on routines in mod sklearn metrics pairwise The choice of neighbors search algorithm is controlled through the keyword algorithm which must be one of auto ball tree kd tree brute When the default value auto is passed the algorithm attempts to determine the best approach from the training data For a discussion of the strengths and weaknesses of each option see Nearest Neighbor Algorithms warning Regarding the Nearest Neighbors algorithms if two neighbors math k 1 and math k have identical distances but different labels the result will depend on the ordering of the training data Finding the Nearest Neighbors For the simple task of finding the nearest neighbors between two sets of data the unsupervised algorithms within mod sklearn neighbors can be used from sklearn neighbors import NearestNeighbors import numpy as np X np array 1 1 2 1 3 2 1 1 2 1 3 2 nbrs NearestNeighbors n neighbors 2 algorithm ball tree fit X distances indices nbrs kneighbors X indices array 0 1 1 0 2 1 3 4 4 3 5 4 distances array 0 1 0 1 0 1 41421356 0 1 0 1 0 1 41421356 Because the query set matches the training set the nearest neighbor of each point is the point itself at a distance of zero It is also possible to efficiently produce a sparse graph showing the connections between neighboring points nbrs kneighbors graph X toarray array 1 1 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 1 0 0 0 0 0 1 1 The dataset is structured such that points nearby in index order are nearby in parameter space leading to an approximately block diagonal matrix of K nearest neighbors Such a sparse graph is useful in a variety of circumstances which make use of spatial relationships between points for 
unsupervised learning in particular see class sklearn manifold Isomap class sklearn manifold LocallyLinearEmbedding and class sklearn cluster SpectralClustering KDTree and BallTree Classes Alternatively one can use the class KDTree or class BallTree classes directly to find nearest neighbors This is the functionality wrapped by the class NearestNeighbors class used above The Ball Tree and KD Tree have the same interface we ll show an example of using the KD Tree here from sklearn neighbors import KDTree import numpy as np X np array 1 1 2 1 3 2 1 1 2 1 3 2 kdt KDTree X leaf size 30 metric euclidean kdt query X k 2 return distance False array 0 1 1 0 2 1 3 4 4 3 5 4 Refer to the class KDTree and class BallTree class documentation for more information on the options available for nearest neighbors searches including specification of query strategies distance metrics etc For a list of valid metrics use KDTree valid metrics and BallTree valid metrics from sklearn neighbors import KDTree BallTree KDTree valid metrics euclidean l2 minkowski p manhattan cityblock l1 chebyshev infinity BallTree valid metrics euclidean l2 minkowski p manhattan cityblock l1 chebyshev infinity seuclidean mahalanobis hamming canberra braycurtis jaccard dice rogerstanimoto russellrao sokalmichener sokalsneath haversine pyfunc classification Nearest Neighbors Classification Neighbors based classification is a type of instance based learning or non generalizing learning it does not attempt to construct a general internal model but simply stores instances of the training data Classification is computed from a simple majority vote of the nearest neighbors of each point a query point is assigned the data class which has the most representatives within the nearest neighbors of the point scikit learn implements two different nearest neighbors classifiers class KNeighborsClassifier implements learning based on the math k nearest neighbors of each query point where math k is an integer value specified 
by the user class RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius math r of each training point where math r is a floating point value specified by the user The math k neighbors classification in class KNeighborsClassifier is the most commonly used technique The optimal choice of the value math k is highly data dependent in general a larger math k suppresses the effects of noise but makes the classification boundaries less distinct In cases where the data is not uniformly sampled radius based neighbors classification in class RadiusNeighborsClassifier can be a better choice The user specifies a fixed radius math r such that points in sparser neighborhoods use fewer nearest neighbors for the classification For high dimensional parameter spaces this method becomes less effective due to the so called curse of dimensionality The basic nearest neighbors classification uses uniform weights that is the value assigned to a query point is computed from a simple majority vote of the nearest neighbors Under some circumstances it is better to weight the neighbors such that nearer neighbors contribute more to the fit This can be accomplished through the weights keyword The default value weights uniform assigns uniform weights to each neighbor weights distance assigns weights proportional to the inverse of the distance from the query point Alternatively a user defined function of the distance can be supplied to compute the weights classification 1 image auto examples neighbors images sphx glr plot classification 001 png target auto examples neighbors plot classification html scale 75 centered classification 1 rubric Examples ref sphx glr auto examples neighbors plot classification py an example of classification using nearest neighbors regression Nearest Neighbors Regression Neighbors based regression can be used in cases where the data labels are continuous rather than discrete variables The label assigned to a query point is 
computed based on the mean of the labels of its nearest neighbors.

scikit-learn implements two different neighbors regressors:
:class:`KNeighborsRegressor` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer value
specified by the user. :class:`RadiusNeighborsRegressor` implements learning
based on the neighbors within a fixed radius :math:`r` of the query point,
where :math:`r` is a floating-point value specified by the user.

The basic nearest neighbors regression uses uniform weights: that is, each
point in the local neighborhood contributes uniformly to the classification
of a query point. Under some circumstances, it can be advantageous to weight
points such that nearby points contribute more to the regression than
faraway points. This can be accomplished through the ``weights`` keyword.
The default value, ``weights = 'uniform'``, assigns equal weights to all
points. ``weights = 'distance'`` assigns weights proportional to the inverse
of the distance from the query point. Alternatively, a user-defined function
of the distance can be supplied, which will be used to compute the weights.

.. figure:: ../auto_examples/neighbors/images/sphx_glr_plot_regression_001.png
   :target: ../auto_examples/neighbors/plot_regression.html
   :align: center
   :scale: 75

The use of multi-output nearest neighbors for regression is demonstrated in
:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`.
In this example, the inputs X are the pixels of the upper half of faces and
the outputs Y are the pixels of the lower half of those faces.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multioutput_face_completion_001.png
   :target: ../auto_examples/miscellaneous/plot_multioutput_face_completion.html
   :scale: 75
   :align: center

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_regression.py`: an example of
  regression using nearest neighbors.
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`:
  an example of multi-output regression using nearest neighbors.

Nearest Neighbor Algorithms
===========================

.. _brute_force:

Brute Force
-----------

Fast computation of nearest neighbors is an active area of research in
machine learning. The most naive neighbor search implementation involves
the brute-force computation of distances between all pairs of points in the
dataset: for :math:`N` samples in :math:`D` dimensions, this approach scales
as :math:`O[D N^2]`. Efficient brute-force neighbors searches can be very
competitive for small data samples. However, as the number of samples
:math:`N` grows, the brute-force approach quickly becomes infeasible. In the
classes within :mod:`sklearn.neighbors`, brute-force neighbors searches are
specified using the keyword ``algorithm = 'brute'``, and are computed using
the routines available in :mod:`sklearn.metrics.pairwise`.

.. _kd_tree:

K-D Tree
--------

To address the computational inefficiencies of the brute-force approach, a
variety of tree-based data structures have been invented. In general, these
structures attempt to reduce the required number of distance calculations by
efficiently encoding aggregate distance information for the sample. The
basic idea is that if point :math:`A` is very distant from point :math:`B`,
and point :math:`B` is very close to point :math:`C`, then we know that
points :math:`A` and :math:`C` are very distant, *without having to
explicitly calculate their distance*. In this way, the computational cost of
a nearest neighbors search can be reduced to :math:`O[D N \log(N)]` or
better. This is a significant improvement over brute-force for large
:math:`N`.

An early approach to taking advantage of this aggregate information was the
*KD tree* data structure (short for *K-dimensional tree*), which generalizes
two-dimensional *Quad-trees* and 3-dimensional *Oct-trees* to an arbitrary
number of dimensions. The KD tree is a binary tree structure which
recursively partitions the parameter space along the data axes, dividing it
into nested orthotropic regions into which data points are filed. The
construction of a KD tree is very fast: because partitioning is performed
only along the data axes, no :math:`D`-dimensional distances need to be
computed. Once constructed, the nearest neighbor of a query point can be
determined with only :math:`O[\log(N)]` distance computations. Though the KD
tree approach is very fast for low-dimensional (:math:`D < 20`) neighbors
searches, it becomes inefficient as :math:`D` grows very large: this is one
manifestation of the so-called "curse of dimensionality". In scikit-learn,
KD tree neighbors searches are specified using the keyword
``algorithm = 'kd_tree'``, and are computed using the class :class:`KDTree`.

.. dropdown:: References

  * `"Multidimensional binary search trees used for associative searching"
    <https://dl.acm.org/citation.cfm?doid=361002.361007>`_,
    Bentley, J.L., Communications of the ACM (1975)

.. _ball_tree:

Ball Tree
---------

To address the inefficiencies of KD Trees in higher dimensions, the *ball
tree* data structure was developed. Where KD trees partition data along
Cartesian axes, ball trees partition data in a series of nesting
hyper-spheres. This makes tree construction more costly than that of the KD
tree, but results in a data structure which can be very efficient on highly
structured data, even in very high dimensions.

A ball tree recursively divides the data into nodes defined by a centroid
:math:`C` and radius :math:`r`, such that each point in the node lies within
the hyper-sphere defined by :math:`r` and :math:`C`. The number of candidate
points for a neighbor search is reduced through use of the *triangle
inequality*:

.. math::   |x+y| \leq |x| + |y|

With this setup, a single distance calculation between a test point and the
centroid is sufficient to determine a lower and upper bound on the distance
to all points within the node. Because of the spherical geometry of the ball
tree nodes, it can out-perform a *KD-tree* in high dimensions, though the
actual performance is highly dependent on the structure of the training
data. In scikit-learn, ball-tree-based neighbors searches are specified
using the keyword ``algorithm = 'ball_tree'``, and are computed using the
class :class:`BallTree`. Alternatively, the user can work with the
:class:`BallTree` class directly.

.. dropdown:: References

  * `"Five Balltree Construction Algorithms"
    <https://citeseerx.ist.psu.edu/doc_view/pid/17ac002939f8e950ffb32ec4dc8e86bdd8cb5ff1>`_,
    Omohundro, S.M., International Computer Science Institute Technical
    Report (1989)

.. dropdown:: Choice of Nearest Neighbors Algorithm

  The optimal algorithm for a given dataset is a complicated choice, and
  depends on a number of factors:

  * number of samples :math:`N` (i.e. ``n_samples``) and dimensionality
    :math:`D` (i.e. ``n_features``).

    * *Brute force* query time grows as :math:`O[D N]`
    * *Ball tree* query time grows as approximately :math:`O[D \log(N)]`
    * *KD tree* query time changes with :math:`D` in a way that is difficult
      to precisely characterise. For small :math:`D` (less than 20 or so)
      the cost is approximately :math:`O[D \log(N)]`, and the KD tree query
      can be very efficient. For larger :math:`D`, the cost increases to
      nearly :math:`O[D N]`, and the overhead due to the tree structure can
      lead to queries which are slower than brute force.

    For small data sets (:math:`N` less than 30 or so), :math:`\log(N)` is
    comparable to :math:`N`, and brute force algorithms can be more
    efficient than a tree-based approach. Both :class:`KDTree` and
    :class:`BallTree` address this through providing a *leaf size*
    parameter: this controls the number of samples at which a query switches
    to brute-force. This allows both algorithms to approach the efficiency
    of a brute-force computation for small :math:`N`.

  * data structure: *intrinsic dimensionality* of the data and/or
    *sparsity* of the data. Intrinsic dimensionality refers to the
    dimension :math:`d \le D` of a manifold on which the data lies, which
    can be linearly or non-linearly embedded in the parameter space.
    Sparsity refers to the degree to which the data fills the parameter
    space (this is to be distinguished from the concept as used in "sparse"
    matrices. The data matrix may have no zero entries, but the
    **structure** can still be "sparse" in this sense).

    * *Brute force* query time is unchanged by data structure.
    * *Ball tree* and *KD tree* query times can be greatly influenced by
      data structure. In general, sparser data with a smaller intrinsic
      dimensionality leads to faster query times. Because the KD tree
      internal representation is aligned with the parameter axes, it will
      not generally show as much improvement as ball tree for arbitrarily
      structured data.

    Datasets used in machine learning tend to be very structured, and are
    very well-suited for tree-based queries.

  * number of neighbors :math:`k` requested for a query point.

    * *Brute force* query time is largely unaffected by the value of
      :math:`k`
    * *Ball tree* and *KD tree* query time will become slower as :math:`k`
      increases. This is due to two effects: first, a larger :math:`k`
      leads to the necessity to search a larger portion of the parameter
      space. Second, using :math:`k > 1` requires internal queueing of
      results as the tree is traversed.

    As :math:`k` becomes large compared to :math:`N`, the ability to prune
    branches in a tree-based query is reduced. In this situation, brute
    force queries can be more efficient.

  * number of query points. Both the ball tree and the KD tree require a
    construction phase. The cost of this construction becomes negligible
    when amortized over many queries. If only a small number of queries
    will be performed, however, the construction can make up a significant
    fraction of the total cost. If very few query points will be required,
    brute force is better than a tree-based method.

  Currently, ``algorithm = 'auto'`` selects ``'brute'`` if any of the
  following conditions are verified:

  * input data is sparse
  * ``metric = 'precomputed'``
  * :math:`D > 15`
  * :math:`k >= N/2`
  * ``effective_metric_`` isn't in the ``VALID_METRICS`` list for either
    ``'kd_tree'`` or ``'ball_tree'``

  Otherwise, it selects the first out of ``'kd_tree'`` and ``'ball_tree'``
  that has ``effective_metric_`` in its ``VALID_METRICS`` list. This
  heuristic is based on the following assumptions:

  * the number of query points is at least the same order as the number of
    training points
  * ``leaf_size`` is close to its default value of ``30``
  * when :math:`D > 15`, the intrinsic dimensionality of the data is
    generally too high for tree-based methods

.. dropdown:: Effect of ``leaf_size``

  As noted above, for small sample sizes a brute force search can be more
  efficient than a tree-based query. This fact is accounted for in the ball
  tree and KD tree by internally switching to brute force searches within
  leaf nodes. The level of this switch can be specified with the parameter
  ``leaf_size``. This parameter choice has many effects:

  **construction time**
    A larger ``leaf_size`` leads to a faster tree construction time,
    because fewer nodes need to be created.

  **query time**
    Both a large or small ``leaf_size`` can lead to suboptimal query cost.
    For ``leaf_size`` approaching 1, the overhead involved in traversing
    nodes can significantly slow query times. For ``leaf_size`` approaching
    the size of the training set, queries become essentially brute force.
    A good compromise between these is ``leaf_size = 30``, the default
    value of the parameter.

  **memory**
    As ``leaf_size`` increases, the memory required to store a tree
    structure decreases. This is especially important in the case of ball
    tree, which stores a :math:`D`-dimensional centroid for each node. The
    required storage space for :class:`BallTree` is approximately
    ``1 / leaf_size`` times the size of the training set.

  ``leaf_size`` is not referenced for brute force queries.

.. dropdown:: Valid Metrics for Nearest Neighbor Algorithms

  For a list of available metrics, see the documentation of the
  :class:`~sklearn.metrics.DistanceMetric` class and the metrics listed in
  ``sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS``. Note that the
  "cosine" metric uses :func:`~sklearn.metrics.pairwise.cosine_distances`.

  A list of valid metrics for any of the above algorithms can be obtained
  by using their ``valid_metric`` attribute. For example, valid metrics for
  ``KDTree`` can be generated by::

    >>> from sklearn.neighbors import KDTree
    >>> print(sorted(KDTree.valid_metrics))
    ['chebyshev', 'cityblock', 'euclidean', 'infinity', 'l1', 'l2', 'manhattan', 'minkowski', 'p']

.. _nearest_centroid_classifier:

Nearest Centroid Classifier
===========================

The :class:`NearestCentroid` classifier is a simple algorithm that
represents each class by the centroid of its members. In effect, this makes
it similar to the label updating phase of the
:class:`sklearn.cluster.KMeans` algorithm. It also has no parameters to
choose, making it a good baseline classifier. It does, however, suffer on
non-convex classes, as well as when classes have drastically different
variances, as equal variance in all dimensions is assumed. See Linear
Discriminant Analysis
(:class:`sklearn.discriminant_analysis.LinearDiscriminantAnalysis`) and
Quadratic Discriminant Analysis
(:class:`sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`) for
more complex methods that do not make this assumption. Usage of the default
:class:`NearestCentroid` is simple::

    >>> from sklearn.neighbors import NearestCentroid
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> y = np.array([1, 1, 1, 2, 2, 2])
    >>> clf = NearestCentroid()
    >>> clf.fit(X, y)
    NearestCentroid()
    >>> print(clf.predict([[-0.8, -1]]))
    [1]

Nearest Shrunken Centroid
-------------------------

The :class:`NearestCentroid` classifier has a ``shrink_threshold``
parameter, which implements the nearest shrunken centroid classifier. In
effect, the value of each feature for each centroid is divided by the
within-class variance of that feature. The feature values are then reduced
by ``shrink_threshold``. Most notably, if a particular feature value
crosses zero, it is set to zero. In effect, this removes the feature from
affecting the classification. This is useful, for example, for removing
noisy features.

In the example below, using a small shrink threshold increases the accuracy
of the model from 0.81 to 0.82.

.. |nearest_centroid_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_001.png
   :target: ../auto_examples/neighbors/plot_nearest_centroid.html
   :scale: 50

.. |nearest_centroid_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_002.png
   :target: ../auto_examples/neighbors/plot_nearest_centroid.html
   :scale: 50

.. centered:: |nearest_centroid_1| |nearest_centroid_2|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_nearest_centroid.py`: an
  example of classification using nearest centroid with different shrink
  thresholds.

.. _neighbors_transformer:

Nearest Neighbors Transformer
=============================

Many scikit-learn estimators rely on nearest neighbors: several classifiers
and regressors such as :class:`KNeighborsClassifier` and
:class:`KNeighborsRegressor`, but also some clustering methods such as
:class:`sklearn.cluster.DBSCAN` and
:class:`sklearn.cluster.SpectralClustering`, and some manifold embeddings
such as :class:`sklearn.manifold.TSNE` and :class:`sklearn.manifold.Isomap`.

All these estimators can compute internally the nearest neighbors, but most
of them also accept a precomputed nearest neighbors :term:`sparse graph`,
as given by :func:`sklearn.neighbors.kneighbors_graph` and
:func:`sklearn.neighbors.radius_neighbors_graph`. With
``mode='connectivity'``, these functions return a binary adjacency sparse
graph as required, for instance, in
:class:`sklearn.cluster.SpectralClustering`. Whereas with
``mode='distance'``, they return a distance sparse graph as required, for
instance, in :class:`sklearn.cluster.DBSCAN`. To include these functions in
a scikit-learn pipeline, one can also use the corresponding classes
:class:`KNeighborsTransformer` and :class:`RadiusNeighborsTransformer`. The
benefits of this sparse graph API are multiple.

First, the precomputed graph can be re-used multiple times, for instance
while varying a parameter of the estimator. This can be done manually by
the user, or using the caching properties of the scikit-learn pipeline::

    >>> import tempfile
    >>> from sklearn.manifold import Isomap
    >>> from sklearn.neighbors import KNeighborsTransformer
    >>> from sklearn.pipeline import make_pipeline
    >>> from sklearn.datasets import make_regression
    >>> cache_path = tempfile.gettempdir()  # we use a temporary folder here
    >>> X, _ = make_regression(n_samples=50, n_features=25, random_state=0)
    >>> estimator = make_pipeline(
    ...     KNeighborsTransformer(mode='distance'),
    ...     Isomap(n_components=3, metric='precomputed'),
    ...     memory=cache_path)
    >>> X_embedded = estimator.fit_transform(X)
    >>> X_embedded.shape
    (50, 3)

Second, precomputing the graph can give finer control on the nearest
neighbors estimation, for instance enabling multiprocessing though the
parameter ``n_jobs``, which might not be available in all estimators.

Finally, the precomputation can be performed by custom estimators to use
different implementations, such as approximate nearest neighbors methods,
or implementation with special data types.

The precomputed neighbors :term:`sparse graph` needs to be formatted as in
:func:`sklearn.neighbors.radius_neighbors_graph` output:

* a CSR matrix (although COO, CSC or LIL will be accepted).
* only explicitly store nearest neighborhoods of each sample with respect
  to the training data. This should include those at 0 distance from a
  query point, including the matrix diagonal when computing the nearest
  neighborhoods between the training data and itself.
* each row's ``data`` should store the distance in increasing order
  (optional. Unsorted data will be stable-sorted, adding a computational
  overhead).
* all values in data should be non-negative.
* there should be no duplicate ``indices`` in any row
  (see https://github.com/scipy/scipy/issues/5807).
* if the algorithm being passed the precomputed matrix uses k nearest
  neighbors (as opposed to radius neighborhood), at least k neighbors must
  be stored in each row (or k+1, as explained in the following note).

.. note::

  When a specific number of neighbors is queried (using
  :class:`KNeighborsTransformer`), the definition of ``n_neighbors`` is
  ambiguous since it can either include each training point as its own
  neighbor, or exclude them. Neither choice is perfect, since including
  them leads to a different number of non-self neighbors during training
  and testing, while excluding them leads to a difference between
  ``fit(X).transform(X)`` and ``fit_transform(X)``, which is against
  scikit-learn API. In :class:`KNeighborsTransformer` we use the definition
  which includes each training point as its own neighbor in the count of
  ``n_neighbors``. However, for compatibility reasons with other estimators
  which use the other definition, one extra neighbor will be computed when
  ``mode == 'distance'``. To maximise compatibility with all estimators, a
  safe choice is to always include one extra neighbor in a custom nearest
  neighbors estimator, since unnecessary neighbors will be filtered by
  following estimators.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_approximate_nearest_neighbors.py`:
  an example of pipelining :class:`KNeighborsTransformer` and
  :class:`sklearn.manifold.TSNE`. Also proposes two custom nearest
  neighbors estimators based on external packages.
* :ref:`sphx_glr_auto_examples_neighbors_plot_caching_nearest_neighbors.py`:
  an example of pipelining :class:`KNeighborsTransformer` and
  :class:`KNeighborsClassifier` to enable caching of the neighbors graph
  during a hyper-parameter grid-search.

.. _nca:

Neighborhood Components Analysis
================================

.. sectionauthor:: William de Vazelhes <william.de-vazelhes@inria.fr>

Neighborhood Components Analysis (NCA,
:class:`NeighborhoodComponentsAnalysis`) is a distance metric learning
algorithm which aims to improve the accuracy of nearest neighbors
classification compared to the standard Euclidean distance. The algorithm
directly maximizes a stochastic variant of the leave-one-out k-nearest
neighbors (KNN) score on the training set. It can also learn a
low-dimensional linear projection of data that can be used for data
visualization and fast classification.

.. |nca_illustration_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_001.png
   :target: ../auto_examples/neighbors/plot_nca_illustration.html
   :scale: 50

.. |nca_illustration_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_002.png
   :target: ../auto_examples/neighbors/plot_nca_illustration.html
   :scale: 50

.. centered:: |nca_illustration_1| |nca_illustration_2|

In the above illustrating figure, we consider some points from a randomly
generated dataset. We focus on the stochastic KNN classification of point
no. 3. The thickness of a link between sample 3 and another point is
proportional to their distance, and can be seen as the relative weight (or
probability) that a stochastic nearest neighbor prediction rule would
assign to this point. In the original space, sample 3 has many stochastic
neighbors from various classes, so the right class is not very likely.
However, in the projected space learned by NCA, the only stochastic
neighbors with non-negligible weight are from the same class as sample 3,
guaranteeing that the latter will be well classified. See the
:ref:`mathematical formulation <nca_mathematical_formulation>` for more
details.

Classification
--------------

Combined with a nearest neighbors classifier
(:class:`KNeighborsClassifier`), NCA is attractive for classification
because it can naturally handle multi-class problems without any increase
in the model size, and does not introduce additional parameters that
require fine-tuning by the user.

NCA classification has been shown to work well in practice for data sets of
varying size and difficulty. In contrast to related methods such as Linear
Discriminant Analysis, NCA does not make any assumptions about the class
distributions. The nearest neighbor classification can naturally produce
highly irregular decision boundaries.

To use this model for classification, one needs to combine a
:class:`NeighborhoodComponentsAnalysis` instance that learns the optimal
transformation with a :class:`KNeighborsClassifier` instance that performs
the classification in the projected space. Here is an example using the two
classes::

    >>> from sklearn.neighbors import (NeighborhoodComponentsAnalysis,
    ...                                KNeighborsClassifier)
    >>> from sklearn.datasets import load_iris
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.pipeline import Pipeline
    >>> X, y = load_iris(return_X_y=True)
    >>> X_train, X_test, y_train, y_test = train_test_split(X, y,
    ...     stratify=y, test_size=0.7, random_state=42)
    >>> nca = NeighborhoodComponentsAnalysis(random_state=42)
    >>> knn = KNeighborsClassifier(n_neighbors=3)
    >>> nca_pipe = Pipeline([('nca', nca), ('knn', knn)])
    >>> nca_pipe.fit(X_train, y_train)
    Pipeline(...)
    >>> print(nca_pipe.score(X_test, y_test))
    0.96190476...

.. |nca_classification_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_classification_001.png
   :target: ../auto_examples/neighbors/plot_nca_classification.html
   :scale: 50

.. |nca_classification_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_classification_002.png
   :target: ../auto_examples/neighbors/plot_nca_classification.html
   :scale: 50

.. centered:: |nca_classification_1| |nca_classification_2|

The plot shows decision boundaries for Nearest Neighbor Classification and
Neighborhood Components Analysis classification on the iris dataset, when
training and scoring on only two features, for visualisation purposes.

.. _nca_dim_reduction:

Dimensionality reduction
------------------------

NCA can be used to perform supervised dimensionality reduction. The input
data are projected onto a linear subspace consisting of the directions
which minimize the NCA objective. The desired dimensionality can be set
using the parameter ``n_components``. For instance, the following figure
shows a comparison of dimensionality reduction with Principal Component
Analysis (:class:`sklearn.decomposition.PCA`), Linear Discriminant Analysis
(:class:`sklearn.discriminant_analysis.LinearDiscriminantAnalysis`) and
Neighborhood Component Analysis (:class:`NeighborhoodComponentsAnalysis`)
on the Digits dataset, a dataset with size :math:`n_{samples} = 1797` and
:math:`n_{features} = 64`. The data set is split into a training and a test
set of equal size, then standardized. For evaluation the 3-nearest neighbor
classification accuracy is computed on the 2-dimensional projected points
found by each method. Each data sample belongs to one of 10 classes.

.. |nca_dim_reduction_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_001.png
   :target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
   :width: 32%

.. |nca_dim_reduction_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_002.png
   :target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
   :width: 32%

.. |nca_dim_reduction_3| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_003.png
   :target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
   :width: 32%

.. centered:: |nca_dim_reduction_1| |nca_dim_reduction_2| |nca_dim_reduction_3|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_classification.py`
* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_dim_reduction.py`
* :ref:`sphx_glr_auto_examples_manifold_plot_lle_digits.py`

.. _nca_mathematical_formulation:

Mathematical formulation
------------------------

The goal of NCA is to learn an optimal linear transformation matrix of size
``(n_components, n_features)``, which maximises the sum over all samples
:math:`i` of the probability :math:`p_i` that :math:`i` is correctly
classified, i.e.:

.. math::

  \underset{L}{\arg\max} \sum\limits_{i=0}^{N - 1} p_{i}

with :math:`N` = ``n_samples`` and :math:`p_i` the probability of sample
:math:`i` being correctly classified according to a stochastic nearest
neighbors rule in the learned embedded space:

.. math::

  p_{i}=\sum\limits_{j \in C_i}{p_{i j}}

where :math:`C_i` is the set of points in the same class as sample
:math:`i`, and :math:`p_{i j}` is the softmax over Euclidean distances in
the embedded space:

.. math::

  p_{i j} = \frac{\exp(-||L x_i - L x_j||^2)}{\sum\limits_{k \ne
  i} \exp(-||L x_i - L x_k||^2)} , \quad p_{i i} = 0

.. dropdown:: Mahalanobis distance

  NCA can be seen as learning a (squared) Mahalanobis distance metric:

  .. math::

    || L(x_i - x_j)||^2 = (x_i - x_j)^TM(x_i - x_j),

  where :math:`M = L^T L` is a symmetric positive semi-definite matrix of
  size ``(n_features, n_features)``.

Implementation
--------------

This implementation follows what is explained in the original paper [1]_.
For the optimisation method, it currently uses scipy's L-BFGS-B with a full
gradient computation at each iteration, to avoid to tune the learning rate
and provide stable learning.

See the examples below and the docstring of
:meth:`NeighborhoodComponentsAnalysis.fit` for further information.

Complexity
----------

**Training**

NCA stores a matrix of pairwise distances, taking ``n_samples ** 2``
memory. Time complexity depends on the number of iterations done by the
optimisation algorithm. However, one can set the maximum number of
iterations with the argument ``max_iter``. For each iteration, time
complexity is ``O(n_components x n_samples x min(n_samples, n_features))``.

**Transform**

Here the transform operation returns :math:`LX^T`, therefore its time
complexity equals ``n_components * n_features * n_samples_test``. There is
no added space complexity in the operation.

.. rubric:: References

.. [1] `"Neighbourhood Components Analysis"
  <http://www.cs.nyu.edu/~roweis/papers/ncanips.pdf>`_,
  J. Goldberger, S. Roweis, G. Hinton, R. Salakhutdinov, Advances in Neural
  Information Processing Systems, Vol. 17, May 2005, pp. 513-520.

* `Wikipedia entry on Neighborhood Components Analysis
  <https://en.wikipedia.org/wiki/Neighbourhood_components_analysis>`_
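The stochastic-neighbor probabilities :math:`p_{ij}` defined in the formulation above can be computed directly with NumPy. The following is a hypothetical minimal sketch of the quantity NCA optimises, not scikit-learn's implementation; the identity matrix ``L`` stands in for a learned transformation:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 3)
L = np.eye(3)  # identity transform: distances measured in the original space

# p_ij: softmax over negative squared Euclidean distances in the
# embedded space, with p_ii = 0 (a point is never its own neighbor).
Z = X @ L.T
d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(d2, np.inf)          # exp(-inf) = 0 enforces p_ii = 0
p = np.exp(-d2)
p /= p.sum(axis=1, keepdims=True)     # each row now sums to 1
```

Summing each row of ``p`` over the same-class columns would give the per-sample objective term :math:`p_i`.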
.. _mixture:
.. _gmm:
=======================
Gaussian mixture models
=======================
.. currentmodule:: sklearn.mixture
``sklearn.mixture`` is a package which enables one to learn
Gaussian Mixture Models (diagonal, spherical, tied and full covariance
matrices supported), sample them, and estimate them from
data. Facilities to help determine the appropriate number of
components are also provided.
.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_pdf_001.png
:target: ../auto_examples/mixture/plot_gmm_pdf.html
:align: center
:scale: 50%
**Two-component Gaussian mixture model:** *data points, and equi-probability
surfaces of the model.*
A Gaussian mixture model is a probabilistic model that assumes all the
data points are generated from a mixture of a finite number of
Gaussian distributions with unknown parameters. One can think of
mixture models as generalizing k-means clustering to incorporate
information about the covariance structure of the data as well as the
centers of the latent Gaussians.
Scikit-learn implements different classes to estimate Gaussian
mixture models, that correspond to different estimation strategies,
detailed below.
Gaussian Mixture
================
The :class:`GaussianMixture` object implements the
:ref:`expectation-maximization <expectation_maximization>` (EM)
algorithm for fitting mixture-of-Gaussian models. It can also draw
confidence ellipsoids for multivariate models, and compute the
Bayesian Information Criterion to assess the number of clusters in the
data. A :meth:`GaussianMixture.fit` method is provided that learns a Gaussian
Mixture Model from train data. Given test data, it can assign to each
sample the Gaussian it most probably belongs to using
the :meth:`GaussianMixture.predict` method.
..
Alternatively, the probability of each
sample belonging to the various Gaussians may be retrieved using the
:meth:`GaussianMixture.predict_proba` method.
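A minimal usage sketch on synthetic data (the blob locations and ``random_state`` are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# Two well-separated synthetic blobs in 2D
X = np.concatenate([rng.randn(100, 2), rng.randn(100, 2) + 10])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gm.predict(X)        # hard assignment: index of most probable Gaussian
proba = gm.predict_proba(X)   # soft assignment: per-component probabilities
```

After fitting, ``gm.means_``, ``gm.covariances_`` and ``gm.weights_`` hold the estimated parameters of each component.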
The :class:`GaussianMixture` comes with different options to constrain the
covariance of the different classes estimated: spherical, diagonal, tied or
full covariance.
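The chosen constraint is visible in the shape of the fitted ``covariances_`` attribute; a short sketch on arbitrary synthetic data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = rng.randn(300, 2)

# Shape of `covariances_` for each constraint, with 3 components in 2D:
expected = {
    "full": (3, 2, 2),     # one full matrix per component
    "tied": (2, 2),        # a single matrix shared by all components
    "diag": (3, 2),        # one diagonal per component
    "spherical": (3,),     # one variance per component
}
for cov_type, shape in expected.items():
    gm = GaussianMixture(n_components=3, covariance_type=cov_type,
                         random_state=0).fit(X)
    assert gm.covariances_.shape == shape
```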
.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_covariances_001.png
:target: ../auto_examples/mixture/plot_gmm_covariances.html
:align: center
:scale: 75%
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_covariances.py` for an example of
using the Gaussian mixture as clustering on the iris dataset.
* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_pdf.py` for an example on plotting the
density estimation.
.. dropdown:: Pros and cons of class :class:`GaussianMixture`
.. rubric:: Pros
  :Speed: It is the fastest algorithm for learning mixture models.
:Agnostic: As this algorithm maximizes only the likelihood, it
will not bias the means towards zero, or bias the cluster sizes to
have specific structures that might or might not apply.
.. rubric:: Cons
:Singularities: When one has insufficiently many points per
mixture, estimating the covariance matrices becomes difficult,
and the algorithm is known to diverge and find solutions with
infinite likelihood unless one regularizes the covariances artificially.
:Number of components: This algorithm will always use all the
components it has access to, needing held-out data
or information theoretical criteria to decide how many components to use
in the absence of external cues.
.. dropdown:: Selecting the number of components in a classical Gaussian Mixture model
The BIC criterion can be used to select the number of components in a Gaussian
Mixture in an efficient way. In theory, it recovers the true number of
components only in the asymptotic regime (i.e. if much data is available and
assuming that the data was actually generated i.i.d. from a mixture of Gaussian
distributions). Note that using a :ref:`Variational Bayesian Gaussian mixture <bgmm>`
avoids the specification of the number of components for a Gaussian mixture
model.
.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_selection_002.png
:target: ../auto_examples/mixture/plot_gmm_selection.html
:align: center
:scale: 50%
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_selection.py` for an example
of model selection performed with classical Gaussian mixture.
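The selection itself amounts to fitting one model per candidate ``n_components`` and keeping the one with the lowest BIC; a minimal sketch on synthetic bimodal data (the separation and seed are arbitrary):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# 1D data drawn from two well-separated Gaussians
X = np.concatenate([rng.randn(200, 1), rng.randn(200, 1) + 6])

# Fit one model per candidate number of components; lower BIC is better
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)   # typically 2 for this data
```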
.. _expectation_maximization:
.. dropdown:: Estimation algorithm: expectation-maximization
The main difficulty in learning Gaussian mixture models from unlabeled
data is that one usually doesn't know which points came from
which latent component (if one has access to this information it gets
very easy to fit a separate Gaussian distribution to each set of
points). `Expectation-maximization
<https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm>`_
is a well-founded statistical
algorithm to get around this problem by an iterative process. First
one assumes random components (randomly centered on data points,
learned from k-means, or even just normally distributed around the
origin) and computes for each point a probability of being generated by
each component of the model. Then, one tweaks the
parameters to maximize the likelihood of the data given those
assignments. Repeating this process is guaranteed to always converge
to a local optimum.
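The two alternating steps can be sketched for a one-dimensional, two-component mixture. This is a hypothetical bare-bones EM loop for illustration only, not scikit-learn's implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.concatenate([rng.randn(200), rng.randn(200) + 5])

mu = np.array([0.5, 4.0])       # rough initial guesses for the means
sigma = np.array([1.0, 1.0])    # per-component standard deviations
pi = np.array([0.5, 0.5])       # mixing weights
for _ in range(50):
    # E-step: responsibility of each component for each point
    # (the 1/sqrt(2*pi) normalisation cancels in the ratio)
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted data
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

After a few dozen iterations the means settle close to the true centers of the two generating Gaussians.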
.. dropdown:: Choice of the Initialization method
There is a choice of four initialization methods (as well as inputting user defined
initial means) to generate the initial centers for the model components:
k-means (default)
This applies a traditional k-means clustering algorithm.
This can be computationally expensive compared to other initialization methods.
k-means++
This uses the initialization method of k-means clustering: k-means++.
This will pick the first center at random from the data. Subsequent centers will be
chosen from a weighted distribution of the data favouring points further away from
existing centers. k-means++ is the default initialization for k-means so will be
quicker than running a full k-means but can still take a significant amount of
time for large data sets with many components.
random_from_data
This will pick random data points from the input data as the initial
centers. This is a very fast method of initialization but can produce non-convergent
results if the chosen points are too close to each other.
random
Centers are chosen as a small perturbation away from the mean of all data.
This method is simple but can lead to the model taking longer to converge.
.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_init_001.png
:target: ../auto_examples/mixture/plot_gmm_init.html
:align: center
:scale: 50%
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_init.py` for an example of
using different initializations in Gaussian Mixture.
.. _bgmm:
Variational Bayesian Gaussian Mixture
=====================================
The :class:`BayesianGaussianMixture` object implements a variant of the
Gaussian mixture model with variational inference algorithms. The API is
similar to the one defined by :class:`GaussianMixture`.
.. _variational_inference:
**Estimation algorithm: variational inference**
Variational inference is an extension of expectation-maximization that
maximizes a lower bound on model evidence (including
priors) instead of data likelihood. The principle behind
variational methods is the same as expectation-maximization (that is
both are iterative algorithms that alternate between finding the
probabilities for each point to be generated by each mixture and
fitting the mixture to these assigned points), but variational
methods add regularization by integrating information from prior
distributions. This avoids the singularities often found in
expectation-maximization solutions but introduces some subtle biases
to the model. Inference is often notably slower, but not usually as
much so as to render usage impractical.
Due to its Bayesian nature, the variational algorithm needs more hyperparameters
than expectation-maximization, the most important of these being the
concentration parameter ``weight_concentration_prior``. Specifying a low value
for the concentration prior will make the model put most of the weight on a few
components and set the remaining components' weights very close to zero. High
values of the concentration prior will allow a larger number of components to
be active in the mixture.
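A short sketch of this effect (the dataset and the ``1e-2`` activity threshold are illustrative choices, not part of the API):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# 1-D data drawn from two well separated Gaussians.
X = np.concatenate([rng.normal(-5.0, 1.0, 300),
                    rng.normal(5.0, 1.0, 300)]).reshape(-1, 1)

# A low concentration prior pushes superfluous components' weights toward zero.
bgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior=1e-3,
    random_state=0,
).fit(X)

# Count the effective (non-negligible) components.
n_active = int(np.sum(bgmm.weights_ > 1e-2))
```

Even though 10 components are available, only a couple end up with non-negligible weight on this two-cluster dataset.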
The implementation of the :class:`BayesianGaussianMixture` class
proposes two types of prior for the weights distribution: a finite mixture model
with Dirichlet distribution and an infinite mixture model with the Dirichlet
Process. In practice the Dirichlet Process inference algorithm is approximated and
uses a truncated distribution with a fixed maximum number of components (called
the Stick-breaking representation). The number of components actually used
almost always depends on the data.
The next figure compares the results obtained for different types of the
weight concentration prior (parameter ``weight_concentration_prior_type``)
for different values of ``weight_concentration_prior``.
Here, we can see the value of the ``weight_concentration_prior`` parameter
has a strong impact on the effective number of active components obtained. We
can also notice that large values for the weight concentration prior lead to
more uniform weights when the type of prior is 'dirichlet_distribution' while
this is not necessarily the case for the 'dirichlet_process' type (used by
default).
.. |plot_bgmm| image:: ../auto_examples/mixture/images/sphx_glr_plot_concentration_prior_001.png
:target: ../auto_examples/mixture/plot_concentration_prior.html
:scale: 48%
.. |plot_dpgmm| image:: ../auto_examples/mixture/images/sphx_glr_plot_concentration_prior_002.png
:target: ../auto_examples/mixture/plot_concentration_prior.html
:scale: 48%
.. centered:: |plot_bgmm| |plot_dpgmm|
The examples below compare Gaussian mixture models with a fixed number of
components, to the variational Gaussian mixture models with a Dirichlet process
prior. Here, a classical Gaussian mixture is fitted with 5 components on a
dataset composed of 2 clusters. We can see that the variational Gaussian mixture
with a Dirichlet process prior is able to limit itself to only 2 components
whereas the Gaussian mixture fits the data with a fixed number of components
that has to be set a priori by the user. In this case the user has selected
``n_components=5`` which does not match the true generative distribution of this
toy dataset. Note that with very few observations, the variational Gaussian
mixture models with a Dirichlet process prior can take a conservative stand, and
fit only one component.
.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_001.png
:target: ../auto_examples/mixture/plot_gmm.html
:align: center
:scale: 70%
On the following figure we are fitting a dataset not well-depicted by a
Gaussian mixture. Adjusting the ``weight_concentration_prior`` parameter of the
:class:`BayesianGaussianMixture` controls the number of components used to fit
this data. We also present on the last two plots a random sampling generated
from the two resulting mixtures.
.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_sin_001.png
:target: ../auto_examples/mixture/plot_gmm_sin.html
:align: center
:scale: 65%
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm.py` for an example on
plotting the confidence ellipsoids for both :class:`GaussianMixture`
and :class:`BayesianGaussianMixture`.
* :ref:`sphx_glr_auto_examples_mixture_plot_gmm_sin.py` shows using
:class:`GaussianMixture` and :class:`BayesianGaussianMixture` to fit a
sine wave.
* See :ref:`sphx_glr_auto_examples_mixture_plot_concentration_prior.py`
for an example plotting the confidence ellipsoids for the
:class:`BayesianGaussianMixture` with different
``weight_concentration_prior_type`` for different values of the parameter
``weight_concentration_prior``.
.. dropdown:: Pros and cons of variational inference with BayesianGaussianMixture
.. rubric:: Pros
:Automatic selection: When ``weight_concentration_prior`` is small enough and
``n_components`` is larger than what is found necessary by the model, the
Variational Bayesian mixture model has a natural tendency to set some mixture
weights values close to zero. This makes it possible to let the model choose
a suitable number of effective components automatically. Only an upper bound
of this number needs to be provided. Note however that the "ideal" number of
active components is very application specific and is typically ill-defined
in a data exploration setting.
:Less sensitivity to the number of parameters: Unlike finite models, which will
almost always use all components as much as they can, and hence will produce
wildly different solutions for different numbers of components, the
variational inference with a Dirichlet process prior
(``weight_concentration_prior_type='dirichlet_process'``) won't change much
with changes to the parameters, leading to more stability and less tuning.
:Regularization: Due to the incorporation of prior information,
variational solutions have less pathological special cases than
expectation-maximization solutions.
.. rubric:: Cons
:Speed: The extra parametrization necessary for variational inference makes
inference slower, although not by much.
:Hyperparameters: This algorithm needs an extra hyperparameter
that might need experimental tuning via cross-validation.
:Bias: There are many implicit biases in the inference algorithms (and also in
the Dirichlet process if used), and whenever there is a mismatch between
these biases and the data it might be possible to fit better models using a
finite mixture.
.. _dirichlet_process:
The Dirichlet Process
---------------------
Here we describe variational inference algorithms on Dirichlet process
mixture. The Dirichlet process is a prior probability distribution on
*clusterings with an infinite, unbounded, number of partitions*.
Variational techniques let us incorporate this prior structure on
Gaussian mixture models at almost no penalty in inference time, compared
with a finite Gaussian mixture model.
An important question is how can the Dirichlet process use an infinite,
unbounded number of clusters and still be consistent. While a full explanation
doesn't fit this manual, one can think of its `stick breaking process
<https://en.wikipedia.org/wiki/Dirichlet_process#The_stick-breaking_process>`_
analogy to help understand it. The stick breaking process is a generative
story for the Dirichlet process. We start with a unit-length stick and in each
step we break off a portion of the remaining stick. Each time, we associate the
length of the piece of the stick to the proportion of points that falls into a
group of the mixture. At the end, to represent the infinite mixture, we
associate the last remaining piece of the stick to the proportion of points
that don't fall into all the other groups. The length of each piece is a random
variable with probability proportional to the concentration parameter. Smaller
values of the concentration will divide the unit-length stick into larger
pieces (defining a more concentrated distribution). Larger concentration
values will create smaller pieces of the stick (increasing the number of
components with non zero weights).
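The stick-breaking story above can be simulated directly; the helper below is a hypothetical illustration (not scikit-learn API) of how a truncated stick-breaking draw produces mixture weights from a concentration parameter:

```python
import numpy as np

def stick_breaking_weights(concentration, n_components, rng):
    """Simulate a truncated stick-breaking draw for a Dirichlet process."""
    # Each break removes a Beta(1, concentration) fraction of the remaining stick.
    fractions = rng.beta(1.0, concentration, size=n_components)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - fractions[:-1])))
    return fractions * remaining

rng = np.random.default_rng(0)
small = stick_breaking_weights(0.1, 20, rng)   # small concentration: few large pieces
large = stick_breaking_weights(10.0, 20, rng)  # large concentration: many small pieces
```

With a small concentration, the first few pieces typically absorb almost the whole stick; with a large concentration, the weight is spread over many small pieces.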
Variational inference techniques for the Dirichlet process still work
with a finite approximation to this infinite mixture model, but
instead of having to specify a priori how many components one wants to
use, one just specifies the concentration parameter and an upper bound
on the number of mixture components (this upper bound, assuming it is
higher than the "true" number of components, affects only algorithmic
complexity, not the actual number of components used). | scikit-learn | mixture gmm Gaussian mixture models currentmodule sklearn mixture sklearn mixture is a package which enables one to learn Gaussian Mixture Models diagonal spherical tied and full covariance matrices supported sample them and estimate them from data Facilities to help determine the appropriate number of components are also provided figure auto examples mixture images sphx glr plot gmm pdf 001 png target auto examples mixture plot gmm pdf html align center scale 50 Two component Gaussian mixture model data points and equi probability surfaces of the model A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters One can think of mixture models as generalizing k means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians Scikit learn implements different classes to estimate Gaussian mixture models that correspond to different estimation strategies detailed below Gaussian Mixture The class GaussianMixture object implements the ref expectation maximization expectation maximization EM algorithm for fitting mixture of Gaussian models It can also draw confidence ellipsoids for multivariate models and compute the Bayesian Information Criterion to assess the number of clusters in the data A meth GaussianMixture fit method is provided that learns a Gaussian Mixture Model from train data Given test data it can assign to each sample the Gaussian it most probably belongs to using the meth GaussianMixture predict method Alternatively the probability of each sample belonging to the various Gaussians may be retrieved using the meth GaussianMixture predict proba method The class GaussianMixture comes with different options to constrain the covariance of the difference classes estimated spherical diagonal tied or full 
.. _calibration:
=======================
Probability calibration
=======================
.. currentmodule:: sklearn.calibration
When performing classification you often want not only to predict the class
label, but also to obtain a probability of the respective label. This probability
gives you some kind of confidence on the prediction. Some models can give you
poor estimates of the class probabilities and some even do not support
probability prediction (e.g., some instances of
:class:`~sklearn.linear_model.SGDClassifier`).
The calibration module allows you to better calibrate
the probabilities of a given model, or to add support for probability
prediction.
Well calibrated classifiers are probabilistic classifiers for which the output
of the :term:`predict_proba` method can be directly interpreted as a confidence
level.
For instance, a well calibrated (binary) classifier should classify the samples such
that among the samples to which it gave a :term:`predict_proba` value close to, say,
0.8, approximately 80% actually belong to the positive class.
Before we show how to re-calibrate a classifier, we first need a way to assess how
well a classifier is calibrated.
.. note::
Strictly proper scoring rules for probabilistic predictions like
:func:`sklearn.metrics.brier_score_loss` and
:func:`sklearn.metrics.log_loss` assess calibration (reliability) and
discriminative power (resolution) of a model, as well as the randomness of the data
(uncertainty) at the same time. This follows from the well-known Brier score
decomposition of Murphy [1]_. As it is not clear which term dominates, the score is
of limited use for assessing calibration alone (unless one computes each term of
the decomposition). A lower Brier loss, for instance, does not necessarily
mean a better calibrated model; it could also mean a worse calibrated model with much
more discriminatory power, e.g. using many more features.
.. _calibration_curve:
Calibration curves
------------------
Calibration curves, also referred to as *reliability diagrams* (Wilks 1995 [2]_),
compare how well the probabilistic predictions of a binary classifier are calibrated.
It plots the frequency of the positive label (to be more precise, an estimation of the
*conditional event probability* :math:`P(Y=1|\text{predict_proba})`) on the y-axis
against the predicted probability :term:`predict_proba` of a model on the x-axis.
The tricky part is to get values for the y-axis.
In scikit-learn, this is accomplished by binning the predictions such that the x-axis
represents the average predicted probability in each bin.
The y-axis is then the *fraction of positives* given the predictions of that bin, i.e.
the proportion of samples whose class is the positive class (in each bin).
The top calibration curve plot is created with
:func:`CalibrationDisplay.from_estimator`, which uses :func:`calibration_curve` to
calculate the per bin average predicted probabilities and fraction of positives.
:func:`CalibrationDisplay.from_estimator`
takes as input a fitted classifier, which is used to calculate the predicted
probabilities. The classifier thus must have a :term:`predict_proba` method. For
the few classifiers that do not have a :term:`predict_proba` method, it is
possible to use :class:`CalibratedClassifierCV` to calibrate the classifier
outputs to probabilities.
The bottom histogram gives some insight into the behavior of each classifier
by showing the number of samples in each predicted probability bin.
.. figure:: ../auto_examples/calibration/images/sphx_glr_plot_compare_calibration_001.png
:target: ../auto_examples/calibration/plot_compare_calibration.html
:align: center
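The per-bin values underlying such a curve can also be computed directly with :func:`calibration_curve`; the sketch below assumes a synthetic dataset for illustration:

```python
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
prob_pos = clf.predict_proba(X_test)[:, 1]

# Fraction of positives (y-axis) vs. mean predicted probability (x-axis) per bin.
frac_pos, mean_pred = calibration_curve(y_test, prob_pos, n_bins=10)
```

For a well calibrated model the points ``(mean_pred, frac_pos)`` lie close to the diagonal; note that bins containing no samples are dropped from the output.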
.. currentmodule:: sklearn.linear_model
:class:`LogisticRegression` is more likely to return well calibrated predictions by itself as it has a
canonical link function for its loss, i.e. the logit-link for the :ref:`log_loss`.
In the unpenalized case, this leads to the so-called **balance property**, see [8]_ and :ref:`Logistic_regression`.
In the plot above, data is generated according to a linear mechanism, which is
consistent with the :class:`LogisticRegression` model (the model is 'well specified'),
and the value of the regularization parameter `C` is tuned to be
appropriate (neither too strong nor too low). As a consequence, this model returns
accurate predictions from its `predict_proba` method.
In contrast to that, the other shown models return biased probabilities; with
different biases per model.
.. currentmodule:: sklearn.naive_bayes
:class:`GaussianNB` (Naive Bayes) tends to push probabilities to 0 or 1 (note the counts
in the histograms). This is mainly because it makes the assumption that
features are conditionally independent given the class, which is not the
case in this dataset which contains 2 redundant features.
.. currentmodule:: sklearn.ensemble
:class:`RandomForestClassifier` shows the opposite behavior: the histograms
show peaks at probabilities approximately 0.2 and 0.9, while probabilities
close to 0 or 1 are very rare. An explanation for this is given by
Niculescu-Mizil and Caruana [3]_: "Methods such as bagging and random
forests that average predictions from a base set of models can have
difficulty making predictions near 0 and 1 because variance in the
underlying base models will bias predictions that should be near zero or one
away from these values. Because predictions are restricted to the interval
[0,1], errors caused by variance tend to be one-sided near zero and one. For
example, if a model should predict p = 0 for a case, the only way bagging
can achieve this is if all bagged trees predict zero. If we add noise to the
trees that bagging is averaging over, this noise will cause some trees to
predict values larger than 0 for this case, thus moving the average
prediction of the bagged ensemble away from 0. We observe this effect most
strongly with random forests because the base-level trees trained with
random forests have relatively high variance due to feature subsetting." As
a result, the calibration curve shows a characteristic sigmoid shape, indicating that
the classifier could trust its "intuition" more and return probabilities closer
to 0 or 1 typically.
.. currentmodule:: sklearn.svm
:class:`LinearSVC` (SVC) shows an even more sigmoid curve than the random forest, which
is typical for maximum-margin methods (compare Niculescu-Mizil and Caruana [3]_), which
focus on difficult to classify samples that are close to the decision boundary (the
support vectors).
Calibrating a classifier
------------------------
.. currentmodule:: sklearn.calibration
Calibrating a classifier consists of fitting a regressor (called a
*calibrator*) that maps the output of the classifier (as given by
:term:`decision_function` or :term:`predict_proba`) to a calibrated probability
in [0, 1]. Denoting the output of the classifier for a given sample by :math:`f_i`,
the calibrator tries to predict the conditional event probability
:math:`P(y_i = 1 | f_i)`.
Ideally, the calibrator is fit on a dataset independent of the training data used to
fit the classifier in the first place.
This is because performance of the classifier on its training data would be
better than for novel data. Using the classifier output of training data
to fit the calibrator would thus result in a biased calibrator that maps to
probabilities closer to 0 and 1 than it should.
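As a conceptual sketch of this two-stage setup (not the :class:`CalibratedClassifierCV` implementation itself): fit the classifier on one split, then fit a 1-D logistic regression, acting as the calibrator, on the classifier's held-out scores.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, random_state=0)
# Hold out an independent split for fitting the calibrator.
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, random_state=0)

clf = LinearSVC().fit(X_fit, y_fit)       # uncalibrated classifier
f_cal = clf.decision_function(X_cal)      # classifier outputs f_i on held-out data

# Calibrator: a 1-D logistic regression estimating P(y = 1 | f_i).
calibrator = LogisticRegression().fit(f_cal.reshape(-1, 1), y_cal)
proba = calibrator.predict_proba(f_cal.reshape(-1, 1))[:, 1]
```

This is essentially sigmoid (Platt) calibration; :class:`CalibratedClassifierCV` automates the data splitting and supports isotonic regression as an alternative calibrator.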
Usage
-----
The :class:`CalibratedClassifierCV` class is used to calibrate a classifier.
:class:`CalibratedClassifierCV` uses a cross-validation approach to ensure
unbiased data is always used to fit the calibrator. The data is split into k
`(train_set, test_set)` couples (as determined by `cv`). When `ensemble=True`
(default), the following procedure is repeated independently for each
cross-validation split:
1. a clone of `base_estimator` is trained on the train subset
2. the trained `base_estimator` makes predictions on the test subset
3. the predictions are used to fit a calibrator (either a sigmoid or isotonic
regressor) (when the data is multiclass, a calibrator is fit for every class)
This results in an
ensemble of k `(classifier, calibrator)` couples where each calibrator maps
the output of its corresponding classifier into [0, 1]. Each couple is exposed
in the `calibrated_classifiers_` attribute, where each entry is a calibrated
classifier with a :term:`predict_proba` method that outputs calibrated
probabilities. The output of :term:`predict_proba` for the main
:class:`CalibratedClassifierCV` instance corresponds to the average of the
predicted probabilities of the `k` estimators in the `calibrated_classifiers_`
list. The output of :term:`predict` is the class that has the highest
probability.
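A minimal usage sketch (the base classifier and dataset are illustrative choices):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=0)

# Three-fold cross-validation yields three (classifier, calibrator) couples.
calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=3)
calibrated.fit(X, y)

proba = calibrated.predict_proba(X)       # average over the three couples
n_couples = len(calibrated.calibrated_classifiers_)
```

With ``cv=3`` and the default ``ensemble=True``, ``calibrated_classifiers_`` holds three couples whose predicted probabilities are averaged.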
It is important to choose `cv` carefully when using `ensemble=True`.
All classes should be present in both train and test subsets for every split.
When a class is absent in the train subset, the predicted probability for that
class will default to 0 for the `(classifier, calibrator)` couple of that split.
This skews the :term:`predict_proba` as it averages across all couples.
When a class is absent in the test subset, the calibrator for that class
(within the `(classifier, calibrator)` couple of that split) is
fit on data with no positive class. This results in ineffective calibration.
When `ensemble=False`, cross-validation is used to obtain 'unbiased'
predictions for all the data, via
:func:`~sklearn.model_selection.cross_val_predict`.
These unbiased predictions are then used to train the calibrator. The attribute
`calibrated_classifiers_` consists of only one `(classifier, calibrator)`
couple where the classifier is the `base_estimator` trained on all the data.
In this case the output of :term:`predict_proba` for
:class:`CalibratedClassifierCV` is the predicted probabilities obtained
from the single `(classifier, calibrator)` couple.
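With `ensemble=False` the attribute holds a single couple, as the following sketch (same illustrative setup as above) shows::

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=0)

# cross_val_predict supplies out-of-fold predictions to the calibrator;
# the final classifier is then refit on all the data
clf = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=3,
                             ensemble=False)
clf.fit(X, y)
print(len(clf.calibrated_classifiers_))  # 1
```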
The main advantage of `ensemble=True` is to benefit from the traditional
ensembling effect (similar to :ref:`bagging`). The resulting ensemble should
both be well calibrated and slightly more accurate than with `ensemble=False`.
The main advantage of using `ensemble=False` is computational: it reduces the
overall fit time by training only a single base classifier and calibrator
pair, decreases the final model size and increases prediction speed.
Alternatively an already fitted classifier can be calibrated by using a
:class:`~sklearn.frozen.FrozenEstimator` as
``CalibratedClassifierCV(estimator=FrozenEstimator(estimator))``.
It is up to the user to make sure that the data used for fitting the classifier
is disjoint from the data used for fitting the regressor.
:class:`CalibratedClassifierCV` supports the use of two regression techniques
for calibration via the `method` parameter: `"sigmoid"` and `"isotonic"`.
.. _sigmoid_regressor:
Sigmoid
^^^^^^^
The sigmoid regressor, `method="sigmoid"`, is based on Platt's logistic model [4]_:
.. math::
p(y_i = 1 | f_i) = \frac{1}{1 + \exp(A f_i + B)} \,,
where :math:`y_i` is the true label of sample :math:`i` and :math:`f_i`
is the output of the un-calibrated classifier for sample :math:`i`. :math:`A`
and :math:`B` are real numbers to be determined when fitting the regressor via
maximum likelihood.
The sigmoid method assumes the :ref:`calibration curve <calibration_curve>`
can be corrected by applying a sigmoid function to the raw predictions. This
assumption has been empirically justified in the case of :ref:`svm` with
common kernel functions on various benchmark datasets in section 2.1 of Platt
1999 [4]_ but does not necessarily hold in general. Additionally, the
logistic model works best if the calibration error is symmetrical, meaning
the classifier output for each binary class is normally distributed with
the same variance [7]_. This can be a problem for highly imbalanced
classification problems, where outputs do not have equal variance.
In general this method is most effective for small sample sizes or when the
un-calibrated model is under-confident and has similar calibration errors for both
high and low outputs.
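In spirit, Platt scaling amounts to a one-dimensional logistic fit on held-out classifier scores. The sketch below illustrates this with a nearly unregularized :class:`~sklearn.linear_model.LogisticRegression`; it is not the exact internal fitting procedure of :class:`CalibratedClassifierCV`, and the dataset and estimator choices are illustrative::

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, random_state=0)
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, random_state=0)

svm = LinearSVC().fit(X_fit, y_fit)
# held-out decision scores f_i
scores = svm.decision_function(X_cal).reshape(-1, 1)

# a 1-D logistic fit on the scores recovers Platt's model
# p = 1 / (1 + exp(A * f + B)) with A = -coef_, B = -intercept_
platt = LogisticRegression(C=1e6).fit(scores, y_cal)
proba = platt.predict_proba(scores)[:, 1]
```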
Isotonic
^^^^^^^^
`method="isotonic"` fits a non-parametric isotonic regressor, which outputs
a step-wise non-decreasing function (see :mod:`sklearn.isotonic`). It minimizes:
.. math::
\sum_{i=1}^{n} (y_i - \hat{f}_i)^2
subject to :math:`\hat{f}_i \geq \hat{f}_j` whenever
:math:`f_i \geq f_j`. :math:`y_i` is the true
label of sample :math:`i` and :math:`\hat{f}_i` is the output of the
calibrated classifier for sample :math:`i` (i.e., the calibrated probability).
This method is more general when compared to 'sigmoid' as the only restriction
is that the mapping function is monotonically increasing. It is thus more
powerful as it can correct any monotonic distortion of the un-calibrated model.
However, it is more prone to overfitting, especially on small datasets [6]_.
Overall, 'isotonic' will perform as well as or better than 'sigmoid' when
there is enough data (greater than ~ 1000 samples) to avoid overfitting [3]_.
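The monotone mapping can be illustrated directly with :class:`~sklearn.isotonic.IsotonicRegression` on synthetic, deliberately miscalibrated scores (the data here is made up for illustration; :class:`CalibratedClassifierCV` wires this up for you)::

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.RandomState(0)
scores = rng.uniform(size=2000)           # hypothetical held-out scores
y = (rng.uniform(size=2000) < scores**2).astype(int)  # miscalibrated labels

# clip keeps outputs in [0, 1] for scores outside the training range
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrated = iso.fit_transform(scores, y)
# calibrated is a non-decreasing step function of the input scores
```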
.. note:: Impact on ranking metrics like AUC
It is generally expected that calibration does not affect ranking metrics such as
ROC-AUC. However, these metrics might differ after calibration when using
`method="isotonic"` since isotonic regression introduces ties in the predicted
probabilities. This can be seen as within the uncertainty of the model predictions.
In case you strictly want to keep the ranking and thus the AUC scores, use
`method="sigmoid"` which is a strictly monotonic transformation and thus keeps
the ranking.
Multiclass support
^^^^^^^^^^^^^^^^^^
Both isotonic and sigmoid regressors only
support 1-dimensional data (e.g., binary classification output) but are
extended for multiclass classification if the `base_estimator` supports
multiclass predictions. For multiclass predictions,
:class:`CalibratedClassifierCV` calibrates for
each class separately in a :ref:`ovr_classification` fashion [5]_. When
predicting
probabilities, the calibrated probabilities for each class
are predicted separately. As those probabilities do not necessarily sum to
one, a postprocessing is performed to normalize them.
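The normalization step is a simple row-wise rescaling. With hypothetical per-class calibrated probabilities for two samples::

```python
import numpy as np

# per-class OvR calibrators do not guarantee rows summing to one
raw = np.array([[0.2, 0.5, 0.4],
                [0.1, 0.1, 0.3]])
normalized = raw / raw.sum(axis=1, keepdims=True)
print(normalized.sum(axis=1))  # [1. 1.]
```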
.. rubric:: Examples
* :ref:`sphx_glr_auto_examples_calibration_plot_calibration_curve.py`
* :ref:`sphx_glr_auto_examples_calibration_plot_calibration_multiclass.py`
* :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py`
* :ref:`sphx_glr_auto_examples_calibration_plot_compare_calibration.py`
.. rubric:: References
.. [1] Allan H. Murphy (1973).
:doi:`"A New Vector Partition of the Probability Score"
<10.1175/1520-0450(1973)012%3C0595:ANVPOT%3E2.0.CO;2>`
Journal of Applied Meteorology and Climatology
.. [2] `On the combination of forecast probabilities for
consecutive precipitation periods.
<https://journals.ametsoc.org/waf/article/5/4/640/40179>`_
Wea. Forecasting, 5, 640–650., Wilks, D. S., 1990a
.. [3] `Predicting Good Probabilities with Supervised Learning
<https://www.cs.cornell.edu/~alexn/papers/calibration.icml05.crc.rev3.pdf>`_,
A. Niculescu-Mizil & R. Caruana, ICML 2005
.. [4] `Probabilistic Outputs for Support Vector Machines and Comparisons
to Regularized Likelihood Methods.
<https://www.cs.colorado.edu/~mozer/Teaching/syllabi/6622/papers/Platt1999.pdf>`_
J. Platt, (1999)
.. [5] `Transforming Classifier Scores into Accurate Multiclass
Probability Estimates.
<https://dl.acm.org/doi/pdf/10.1145/775047.775151>`_
B. Zadrozny & C. Elkan, (KDD 2002)
.. [6] `Predicting accurate probabilities with a ranking loss.
<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4180410/>`_
Menon AK, Jiang XJ, Vembu S, Elkan C, Ohno-Machado L.
Proc Int Conf Mach Learn. 2012;2012:703-710
.. [7] `Beyond sigmoids: How to obtain well-calibrated probabilities from
binary classifiers with beta calibration
<https://projecteuclid.org/euclid.ejs/1513306867>`_
Kull, M., Silva Filho, T. M., & Flach, P. (2017).
.. [8] Mario V. Wüthrich, Michael Merz (2023).
:doi:`"Statistical Foundations of Actuarial Learning and its Applications"
<10.1007/978-3-031-12409-9>`
Springer Actuarial
.. currentmodule:: sklearn.manifold
.. _manifold:
=================
Manifold learning
=================
| Look for the bare necessities
| The simple bare necessities
| Forget about your worries and your strife
| I mean the bare necessities
| Old Mother Nature's recipes
| That bring the bare necessities of life
|
| -- Baloo's song [The Jungle Book]
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_compare_methods_001.png
:target: ../auto_examples/manifold/plot_compare_methods.html
:align: center
:scale: 70%
.. |manifold_img3| image:: ../auto_examples/manifold/images/sphx_glr_plot_compare_methods_003.png
:target: ../auto_examples/manifold/plot_compare_methods.html
:scale: 60%
.. |manifold_img4| image:: ../auto_examples/manifold/images/sphx_glr_plot_compare_methods_004.png
:target: ../auto_examples/manifold/plot_compare_methods.html
:scale: 60%
.. |manifold_img5| image:: ../auto_examples/manifold/images/sphx_glr_plot_compare_methods_005.png
:target: ../auto_examples/manifold/plot_compare_methods.html
:scale: 60%
.. |manifold_img6| image:: ../auto_examples/manifold/images/sphx_glr_plot_compare_methods_006.png
:target: ../auto_examples/manifold/plot_compare_methods.html
:scale: 60%
.. centered:: |manifold_img3| |manifold_img4| |manifold_img5| |manifold_img6|
Manifold learning is an approach to non-linear dimensionality reduction.
Algorithms for this task are based on the idea that the dimensionality of
many data sets is only artificially high.
Introduction
============
High-dimensional datasets can be very difficult to visualize. While data
in two or three dimensions can be plotted to show the inherent
structure of the data, equivalent high-dimensional plots are much less
intuitive. To aid visualization of the structure of a dataset, the
dimension must be reduced in some way.
The simplest way to accomplish this dimensionality reduction is by taking
a random projection of the data. Though this allows some degree of
visualization of the data structure, the randomness of the choice leaves much
to be desired. In a random projection, it is likely that the more
interesting structure within the data will be lost.
.. |digits_img| image:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_001.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:scale: 50
.. |projected_img| image:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_002.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:scale: 50
.. centered:: |digits_img| |projected_img|
To address this concern, a number of supervised and unsupervised linear
dimensionality reduction frameworks have been designed, such as Principal
Component Analysis (PCA), Independent Component Analysis, Linear
Discriminant Analysis, and others. These algorithms define specific
rubrics to choose an "interesting" linear projection of the data.
These methods can be powerful, but often miss important non-linear
structure in the data.
.. |PCA_img| image:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_003.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:scale: 50
.. |LDA_img| image:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_004.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:scale: 50
.. centered:: |PCA_img| |LDA_img|
Manifold Learning can be thought of as an attempt to generalize linear
frameworks like PCA to be sensitive to non-linear structure in data. Though
supervised variants exist, the typical manifold learning problem is
unsupervised: it learns the high-dimensional structure of the data
from the data itself, without the use of predetermined classifications.
.. rubric:: Examples
* See :ref:`sphx_glr_auto_examples_manifold_plot_lle_digits.py` for an example of
dimensionality reduction on handwritten digits.
* See :ref:`sphx_glr_auto_examples_manifold_plot_compare_methods.py` for an example of
dimensionality reduction on a toy "S-curve" dataset.
* See :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py` for an example of
using manifold learning to map the stock market structure based on historical stock
prices.
The manifold learning implementations available in scikit-learn are
summarized below.
.. _isomap:
Isomap
======
One of the earliest approaches to manifold learning is the Isomap
algorithm, short for Isometric Mapping. Isomap can be viewed as an
extension of Multi-dimensional Scaling (MDS) or Kernel PCA.
Isomap seeks a lower-dimensional embedding which maintains geodesic
distances between all points. Isomap can be performed with the object
:class:`Isomap`.
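A minimal usage sketch on the toy "S-curve" dataset (the neighbor count and output dimension are illustrative choices)::

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

X, color = make_s_curve(n_samples=300, random_state=0)
# embed the 3-D S-curve into 2 dimensions, preserving geodesic distances
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (300, 2)
```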
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_005.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:align: center
:scale: 50
.. dropdown:: Complexity
The Isomap algorithm comprises three stages:
1. **Nearest neighbor search.** Isomap uses
:class:`~sklearn.neighbors.BallTree` for efficient neighbor search.
The cost is approximately :math:`O[D \log(k) N \log(N)]`, for :math:`k`
nearest neighbors of :math:`N` points in :math:`D` dimensions.
2. **Shortest-path graph search.** The most efficient known algorithms
for this are *Dijkstra's Algorithm*, which is approximately
:math:`O[N^2(k + \log(N))]`, or the *Floyd-Warshall algorithm*, which
is :math:`O[N^3]`. The algorithm can be selected by the user with
the ``path_method`` keyword of ``Isomap``. If unspecified, the code
attempts to choose the best algorithm for the input data.
3. **Partial eigenvalue decomposition.** The embedding is encoded in the
eigenvectors corresponding to the :math:`d` largest eigenvalues of the
:math:`N \times N` isomap kernel. For a dense solver, the cost is
approximately :math:`O[d N^2]`. This cost can often be improved using
the ``ARPACK`` solver. The eigensolver can be specified by the user
with the ``eigen_solver`` keyword of ``Isomap``. If unspecified, the
code attempts to choose the best algorithm for the input data.
The overall complexity of Isomap is
:math:`O[D \log(k) N \log(N)] + O[N^2(k + \log(N))] + O[d N^2]`.
* :math:`N` : number of training data points
* :math:`D` : input dimension
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension
.. rubric:: References
* `"A global geometric framework for nonlinear dimensionality reduction"
<http://science.sciencemag.org/content/290/5500/2319.full>`_
Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. Science 290 (5500)
.. _locally_linear_embedding:
Locally Linear Embedding
========================
Locally linear embedding (LLE) seeks a lower-dimensional projection of the data
which preserves distances within local neighborhoods. It can be thought
of as a series of local Principal Component Analyses which are globally
compared to find the best non-linear embedding.
Locally linear embedding can be performed with function
:func:`locally_linear_embedding` or its object-oriented counterpart
:class:`LocallyLinearEmbedding`.
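A minimal usage sketch, with illustrative parameter choices::

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=300, random_state=0)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, random_state=0)
embedding = lle.fit_transform(X)
# reconstruction_error_ measures how well local geometry was preserved
print(embedding.shape)  # (300, 2)
```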
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_006.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:align: center
:scale: 50
.. dropdown:: Complexity
The standard LLE algorithm comprises three stages:
1. **Nearest Neighbors Search**. See discussion under Isomap above.
2. **Weight Matrix Construction**. :math:`O[D N k^3]`.
The construction of the LLE weight matrix involves the solution of a
:math:`k \times k` linear equation for each of the :math:`N` local
neighborhoods.
3. **Partial Eigenvalue Decomposition**. See discussion under Isomap above.
The overall complexity of standard LLE is
:math:`O[D \log(k) N \log(N)] + O[D N k^3] + O[d N^2]`.
* :math:`N` : number of training data points
* :math:`D` : input dimension
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension
.. rubric:: References
* `"Nonlinear dimensionality reduction by locally linear embedding"
<http://www.sciencemag.org/content/290/5500/2323.full>`_
Roweis, S. & Saul, L. Science 290:2323 (2000)
Modified Locally Linear Embedding
=================================
One well-known issue with LLE is the regularization problem. When the number
of neighbors is greater than the number of input dimensions, the matrix
defining each local neighborhood is rank-deficient. To address this, standard
LLE applies an arbitrary regularization parameter :math:`r`, which is chosen
relative to the trace of the local weight matrix. Though it can be shown
formally that as :math:`r \to 0`, the solution converges to the desired
embedding, there is no guarantee that the optimal solution will be found
for :math:`r > 0`. This problem manifests itself in embeddings which distort
the underlying geometry of the manifold.
One method to address the regularization problem is to use multiple weight
vectors in each neighborhood. This is the essence of *modified locally
linear embedding* (MLLE). MLLE can be performed with function
:func:`locally_linear_embedding` or its object-oriented counterpart
:class:`LocallyLinearEmbedding`, with the keyword ``method = 'modified'``.
It requires ``n_neighbors > n_components``.
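For example (parameter values are illustrative; note ``n_neighbors`` exceeds ``n_components``)::

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=300, random_state=0)
# method='modified' requires n_neighbors > n_components
mlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                              method="modified", random_state=0)
embedding = mlle.fit_transform(X)
```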
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_007.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:align: center
:scale: 50
.. dropdown:: Complexity
The MLLE algorithm comprises three stages:
1. **Nearest Neighbors Search**. Same as standard LLE
2. **Weight Matrix Construction**. Approximately
:math:`O[D N k^3] + O[N (k-D) k^2]`. The first term is exactly equivalent
to that of standard LLE. The second term has to do with constructing the
weight matrix from multiple weights. In practice, the added cost of
constructing the MLLE weight matrix is relatively small compared to the
cost of stages 1 and 3.
3. **Partial Eigenvalue Decomposition**. Same as standard LLE
The overall complexity of MLLE is
:math:`O[D \log(k) N \log(N)] + O[D N k^3] + O[N (k-D) k^2] + O[d N^2]`.
* :math:`N` : number of training data points
* :math:`D` : input dimension
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension
.. rubric:: References
* `"MLLE: Modified Locally Linear Embedding Using Multiple Weights"
<https://citeseerx.ist.psu.edu/doc_view/pid/0b060fdbd92cbcc66b383bcaa9ba5e5e624d7ee3>`_
Zhang, Z. & Wang, J.
Hessian Eigenmapping
====================
Hessian Eigenmapping (also known as Hessian-based LLE: HLLE) is another method
of solving the regularization problem of LLE. It revolves around a
hessian-based quadratic form at each neighborhood which is used to recover
the locally linear structure. Though other implementations note its poor
scaling with data size, ``sklearn`` implements some algorithmic
improvements which make its cost comparable to that of other LLE variants
for small output dimension. HLLE can be performed with function
:func:`locally_linear_embedding` or its object-oriented counterpart
:class:`LocallyLinearEmbedding`, with the keyword ``method = 'hessian'``.
It requires ``n_neighbors > n_components * (n_components + 3) / 2``.
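A minimal sketch on a toy S-curve dataset; for ``n_components = 2`` the constraint above requires ``n_neighbors > 5``, and the value used here is purely illustrative:

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=100, random_state=0)

# For n_components=2 the constraint n_neighbors > 2 * (2 + 3) / 2 = 5
# means at least 6 neighbors; 10 is chosen here only for illustration.
hlle = LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                              method="hessian", random_state=0)
X_hlle = hlle.fit_transform(X)
```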
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_008.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:align: center
:scale: 50
.. dropdown:: Complexity
The HLLE algorithm comprises three stages:
1. **Nearest Neighbors Search**. Same as standard LLE
2. **Weight Matrix Construction**. Approximately
:math:`O[D N k^3] + O[N d^6]`. The first term reflects a similar
cost to that of standard LLE. The second term comes from a QR
decomposition of the local hessian estimator.
3. **Partial Eigenvalue Decomposition**. Same as standard LLE.
The overall complexity of standard HLLE is
:math:`O[D \log(k) N \log(N)] + O[D N k^3] + O[N d^6] + O[d N^2]`.
* :math:`N` : number of training data points
* :math:`D` : input dimension
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension
.. rubric:: References
* `"Hessian Eigenmaps: Locally linear embedding techniques for
high-dimensional data" <http://www.pnas.org/content/100/10/5591>`_
Donoho, D. & Grimes, C. Proc Natl Acad Sci USA. 100:5591 (2003)
.. _spectral_embedding:
Spectral Embedding
====================
Spectral Embedding is an approach to calculating a non-linear embedding.
Scikit-learn implements Laplacian Eigenmaps, which finds a low dimensional
representation of the data using a spectral decomposition of the graph
Laplacian. The graph generated can be considered as a discrete approximation of
the low dimensional manifold in the high dimensional space. Minimization of a
cost function based on the graph ensures that points close to each other on
the manifold are mapped close to each other in the low dimensional space,
preserving local distances. Spectral embedding can be performed with the
function :func:`spectral_embedding` or its object-oriented counterpart
:class:`SpectralEmbedding`.
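A minimal sketch of the object-oriented interface described above (the affinity and neighbor settings are illustrative only):

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import SpectralEmbedding

X, _ = make_s_curve(n_samples=100, random_state=0)

# affinity='nearest_neighbors' builds the weighted graph from a k-NN search.
se = SpectralEmbedding(n_components=2, affinity="nearest_neighbors",
                       n_neighbors=10, random_state=0)
X_se = se.fit_transform(X)
```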
.. dropdown:: Complexity
The Spectral Embedding (Laplacian Eigenmaps) algorithm comprises three stages:
1. **Weighted Graph Construction**. Transform the raw input data into
graph representation using affinity (adjacency) matrix representation.
2. **Graph Laplacian Construction**. The unnormalized graph Laplacian
is constructed as :math:`L = D - A` and the normalized one as
:math:`L = D^{-\frac{1}{2}} (D - A) D^{-\frac{1}{2}}`.
3. **Partial Eigenvalue Decomposition**. Eigenvalue decomposition is
done on graph Laplacian.
The overall complexity of spectral embedding is
:math:`O[D \log(k) N \log(N)] + O[D N k^3] + O[d N^2]`.
* :math:`N` : number of training data points
* :math:`D` : input dimension
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension
.. rubric:: References
* `"Laplacian Eigenmaps for Dimensionality Reduction
and Data Representation"
<https://web.cse.ohio-state.edu/~mbelkin/papers/LEM_NC_03.pdf>`_
M. Belkin, P. Niyogi, Neural Computation, June 2003; 15 (6):1373-1396
Local Tangent Space Alignment
=============================
Though not technically a variant of LLE, Local tangent space alignment (LTSA)
is algorithmically similar enough to LLE that it can be put in this category.
Rather than focusing on preserving neighborhood distances as in LLE, LTSA
seeks to characterize the local geometry at each neighborhood via its
tangent space, and performs a global optimization to align these local
tangent spaces to learn the embedding. LTSA can be performed with function
:func:`locally_linear_embedding` or its object-oriented counterpart
:class:`LocallyLinearEmbedding`, with the keyword ``method = 'ltsa'``.
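A minimal sketch, again on a toy S-curve (parameter values are illustrative):

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=100, random_state=0)

# method='ltsa' selects Local Tangent Space Alignment.
ltsa = LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                              method="ltsa", random_state=0)
X_ltsa = ltsa.fit_transform(X)
```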
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_009.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:align: center
:scale: 50
.. dropdown:: Complexity
The LTSA algorithm comprises three stages:
1. **Nearest Neighbors Search**. Same as standard LLE
2. **Weight Matrix Construction**. Approximately
:math:`O[D N k^3] + O[k^2 d]`. The first term reflects a similar
cost to that of standard LLE.
3. **Partial Eigenvalue Decomposition**. Same as standard LLE
The overall complexity of standard LTSA is
:math:`O[D \log(k) N \log(N)] + O[D N k^3] + O[k^2 d] + O[d N^2]`.
* :math:`N` : number of training data points
* :math:`D` : input dimension
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension
.. rubric:: References
* :arxiv:`"Principal manifolds and nonlinear dimensionality reduction via
tangent space alignment"
<cs/0212008>`
Zhang, Z. & Zha, H. Journal of Shanghai Univ. 8:406 (2004)
.. _multidimensional_scaling:
Multi-dimensional Scaling (MDS)
===============================
`Multidimensional scaling <https://en.wikipedia.org/wiki/Multidimensional_scaling>`_
(:class:`MDS`) seeks a low-dimensional
representation of the data in which the distances respect well the
distances in the original high-dimensional space.
In general, :class:`MDS` is a technique used for analyzing similarity or
dissimilarity data. It attempts to model similarity or dissimilarity data as
distances in geometric spaces. The data can be ratings of similarity between
objects, interaction frequencies of molecules, or trade indices between
countries.
There exist two types of MDS algorithms: metric and non-metric. In
scikit-learn, the class :class:`MDS` implements both. In metric MDS, the input
similarity matrix arises from a metric (and thus respects the triangle
inequality); the distances between the two output points are then set to be as
close as possible to the similarity or dissimilarity data. In the non-metric
version, the algorithm will try to preserve the order of the distances, and
hence seek a monotonic relationship between the distances in the embedded
space and the similarities/dissimilarities.
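The class described above can be used as follows; this is a hedged sketch on a small subset of the digits dataset, with illustrative parameter values:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import MDS

# A small subset of the digits dataset keeps the example fast.
X, _ = load_digits(return_X_y=True)
X = X[:100]

# metric=True (the default) fits the embedding distances directly to the
# Euclidean dissimilarities computed from X.
mds = MDS(n_components=2, metric=True, random_state=0)
X_mds = mds.fit_transform(X)
```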
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_010.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:align: center
:scale: 50
Let :math:`S` be the similarity matrix, and :math:`X` the coordinates of the
:math:`n` input points. Disparities :math:`\hat{d}_{ij}` are transformations of
the similarities chosen in some optimal way. The objective, called the
stress, is then defined by :math:`\sum_{i < j} (d_{ij}(X) - \hat{d}_{ij})^2`.
.. dropdown:: Metric MDS
In the simplest metric :class:`MDS` model, called *absolute MDS*, disparities are
defined by :math:`\hat{d}_{ij} = S_{ij}`. With absolute MDS, the value
:math:`S_{ij}` should then correspond exactly to the distance between points
:math:`i` and :math:`j` in the embedding.
Most commonly, disparities are set to :math:`\hat{d}_{ij} = b S_{ij}`.
.. dropdown:: Nonmetric MDS
Non-metric :class:`MDS` focuses on the ordination of the data. If
:math:`S_{ij} > S_{jk}`, then the embedding should enforce :math:`d_{ij} <
d_{jk}`. For this reason, we discuss it in terms of dissimilarities
(:math:`\delta_{ij}`) instead of similarities (:math:`S_{ij}`). Note that
dissimilarities can easily be obtained from similarities through a simple
transform, e.g. :math:`\delta_{ij}=c_1-c_2 S_{ij}` for some real constants
:math:`c_1, c_2`. A simple algorithm to enforce proper ordination is to use a
monotonic regression of :math:`d_{ij}` on :math:`\delta_{ij}`, yielding
disparities :math:`\hat{d}_{ij}` in the same order as :math:`\delta_{ij}`.
A trivial solution to this problem is to place all the points at the origin. To
avoid that, the disparities :math:`\hat{d}_{ij}` are normalized. Note
that since we only care about relative ordering, our objective should be
invariant to simple translation and scaling; however, the stress used in metric
MDS is sensitive to scaling. To address this, non-metric MDS may use a
normalized stress, known as Stress-1, defined as
.. math::
\sqrt{\frac{\sum_{i < j} (d_{ij} - \hat{d}_{ij})^2}{\sum_{i < j} d_{ij}^2}}.
The use of normalized Stress-1 can be enabled by setting `normalized_stress=True`;
however, it is only compatible with the non-metric MDS problem and will be ignored
in the metric case.
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_mds_001.png
:target: ../auto_examples/manifold/plot_mds.html
:align: center
:scale: 60
.. rubric:: References
* `"Modern Multidimensional Scaling - Theory and Applications"
<https://www.springer.com/fr/book/9780387251509>`_
Borg, I.; Groenen P. Springer Series in Statistics (1997)
* `"Nonmetric multidimensional scaling: a numerical method"
<http://cda.psych.uiuc.edu/psychometrika_highly_cited_articles/kruskal_1964b.pdf>`_
Kruskal, J. Psychometrika, 29 (1964)
* `"Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis"
<http://cda.psych.uiuc.edu/psychometrika_highly_cited_articles/kruskal_1964a.pdf>`_
Kruskal, J. Psychometrika, 29, (1964)
.. _t_sne:
t-distributed Stochastic Neighbor Embedding (t-SNE)
===================================================
t-SNE (:class:`TSNE`) converts affinities of data points to probabilities.
The affinities in the original space are represented by Gaussian joint
probabilities and the affinities in the embedded space are represented by
Student's t-distributions. This allows t-SNE to be particularly sensitive
to local structure and has a few other advantages over existing techniques:
* Revealing the structure at many scales on a single map
* Revealing data that lie in multiple different manifolds or clusters
* Reducing the tendency to crowd points together at the center
While Isomap, LLE and variants are best suited to unfold a single continuous
low dimensional manifold, t-SNE will focus on the local structure of the data
and will tend to extract clustered local groups of samples as highlighted on
the S-curve example. This ability to group samples based on the local structure
might be beneficial to visually disentangle a dataset that comprises several
manifolds at once as is the case in the digits dataset.
The Kullback-Leibler (KL) divergence of the joint
probabilities in the original space and the embedded space will be minimized
by gradient descent. Note that the KL divergence is not convex, i.e.
multiple restarts with different initializations will end up in local minima
of the KL divergence. Hence, it is sometimes useful to try different seeds
and select the embedding with the lowest KL divergence.
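The seed-selection strategy above can be sketched as follows (the number of seeds and the dataset are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)
X = X[:100]

# Fit t-SNE with a few different seeds and keep the embedding with the
# lowest KL divergence, exposed as the kl_divergence_ attribute.
results = []
for seed in (0, 1):
    tsne = TSNE(n_components=2, perplexity=30, random_state=seed)
    emb = tsne.fit_transform(X)
    results.append((tsne.kl_divergence_, emb))

best_kl, best_embedding = min(results, key=lambda r: r[0])
```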
The disadvantages to using t-SNE are roughly:
* t-SNE is computationally expensive, and can take several hours on million-sample
datasets where PCA will finish in seconds or minutes
* The Barnes-Hut t-SNE method is limited to two or three dimensional embeddings.
* The algorithm is stochastic and multiple restarts with different seeds can
yield different embeddings. However, it is perfectly legitimate to pick the
embedding with the least error.
* Global structure is not explicitly preserved. This problem is mitigated by
initializing points with PCA (using `init='pca'`).
.. figure:: ../auto_examples/manifold/images/sphx_glr_plot_lle_digits_013.png
:target: ../auto_examples/manifold/plot_lle_digits.html
:align: center
:scale: 50
.. dropdown:: Optimizing t-SNE
The main purpose of t-SNE is visualization of high-dimensional data. Hence,
it works best when the data will be embedded on two or three dimensions.
Optimizing the KL divergence can be a little bit tricky sometimes. There are
five parameters that control the optimization of t-SNE and therefore possibly
the quality of the resulting embedding:
* perplexity
* early exaggeration factor
* learning rate
* maximum number of iterations
* angle (not used in the exact method)
The perplexity is defined as :math:`k=2^S` where :math:`S` is the Shannon
entropy of the conditional probability distribution. The perplexity of a
:math:`k`-sided die is :math:`k`, so that :math:`k` is effectively the number of
nearest neighbors t-SNE considers when generating the conditional probabilities.
Larger perplexities lead to more nearest neighbors and are less sensitive to small
structure. Conversely a lower perplexity considers a smaller number of
neighbors, and thus ignores more global information in favour of the
local neighborhood. As dataset sizes get larger more points will be
required to get a reasonable sample of the local neighborhood, and hence
larger perplexities may be required. Similarly noisier datasets will require
larger perplexity values to encompass enough local neighbors to see beyond
the background noise.
The maximum number of iterations is usually high enough and does not need
any tuning. The optimization consists of two phases: the early exaggeration
phase and the final optimization. During early exaggeration the joint
probabilities in the original space will be artificially increased by
multiplication with a given factor. Larger factors result in larger gaps
between natural clusters in the data. If the factor is too high, the KL
divergence could increase during this phase. Usually it does not have to be
tuned. A critical parameter is the learning rate. If it is too low gradient
descent will get stuck in a bad local minimum. If it is too high the KL
divergence will increase during optimization. A heuristic suggested in
Belkina et al. (2019) is to set the learning rate to the sample size
divided by the early exaggeration factor. We implement this heuristic
as `learning_rate='auto'` argument. More tips can be found in
Laurens van der Maaten's FAQ (see references). The last parameter, angle,
is a tradeoff between performance and accuracy. Larger angles imply that we
can approximate larger regions by a single point, leading to better speed
but less accurate results.
`"How to Use t-SNE Effectively" <https://distill.pub/2016/misread-tsne/>`_
provides a good discussion of the effects of the various parameters, as well
as interactive plots to explore the effects of different parameters.
.. dropdown:: Barnes-Hut t-SNE
The Barnes-Hut t-SNE that has been implemented here is usually much slower than
other manifold learning algorithms. The optimization is quite difficult
and the computation of the gradient is :math:`O[d N \log(N)]`, where :math:`d`
is the number of output dimensions and :math:`N` is the number of samples. The
Barnes-Hut method improves on the exact method where t-SNE complexity is
:math:`O[d N^2]`, but has several other notable differences:
* The Barnes-Hut implementation only works when the target dimensionality is 3
or less. The 2D case is typical when building visualizations.
* Barnes-Hut only works with dense input data. Sparse data matrices can only be
embedded with the exact method or can be approximated by a dense low rank
projection for instance using :class:`~sklearn.decomposition.PCA`
* Barnes-Hut is an approximation of the exact method. The approximation is
parameterized with the angle parameter, therefore the angle parameter is
unused when method="exact"
* Barnes-Hut is significantly more scalable. Barnes-Hut can be used to embed
hundreds of thousands of data points while the exact method can handle
thousands of samples before becoming computationally intractable
For visualization purposes (which is the main use case of t-SNE), using the
Barnes-Hut method is strongly recommended. The exact t-SNE method is useful
for checking the theoretical properties of the embedding, possibly in higher
dimensional space, but is limited to small datasets due to computational
constraints.
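The two solvers are selected through the ``method`` parameter; a small sketch (the sample size is kept deliberately tiny because of the exact method's quadratic cost):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)
X = X[:80]  # the exact method is O[d N^2], so keep N small

# method='barnes_hut' is the default; method='exact' computes the full gradient.
emb_bh = TSNE(n_components=2, method="barnes_hut", perplexity=20,
              random_state=0).fit_transform(X)
emb_exact = TSNE(n_components=2, method="exact", perplexity=20,
                 random_state=0).fit_transform(X)
```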
Also note that the digits labels roughly match the natural grouping found by
t-SNE while the linear 2D projection of the PCA model yields a representation
where label regions largely overlap. This is a strong clue that this data can
be well separated by non linear methods that focus on the local structure (e.g.
an SVM with a Gaussian RBF kernel). However, failing to visualize well
separated homogeneously labeled groups with t-SNE in 2D does not necessarily
imply that the data cannot be correctly classified by a supervised model. It
might be the case that 2 dimensions are not high enough to accurately represent
the internal structure of the data.
.. rubric:: References
* `"Visualizing High-Dimensional Data Using t-SNE"
<https://jmlr.org/papers/v9/vandermaaten08a.html>`_
van der Maaten, L.J.P.; Hinton, G. Journal of Machine Learning Research (2008)
* `"t-Distributed Stochastic Neighbor Embedding"
<https://lvdmaaten.github.io/tsne/>`_ van der Maaten, L.J.P.
* `"Accelerating t-SNE using Tree-Based Algorithms"
<https://lvdmaaten.github.io/publications/papers/JMLR_2014.pdf>`_
van der Maaten, L.J.P.; Journal of Machine Learning Research 15(Oct):3221-3245, 2014.
* `"Automated optimized parameters for T-distributed stochastic neighbor
embedding improve visualization and analysis of large datasets"
<https://www.nature.com/articles/s41467-019-13055-y>`_
Belkina, A.C., Ciccolella, C.O., Anno, R., Halpert, R., Spidlen, J.,
Snyder-Cappione, J.E., Nature Communications 10, 5415 (2019).
Tips on practical use
=====================
* Make sure the same scale is used over all features. Because manifold
learning methods are based on a nearest-neighbor search, the algorithm
may perform poorly otherwise. See :ref:`StandardScaler <preprocessing_scaler>`
for convenient ways of scaling heterogeneous data.
* The reconstruction error computed by each routine can be used to choose
the optimal output dimension. For a :math:`d`-dimensional manifold embedded
in a :math:`D`-dimensional parameter space, the reconstruction error will
decrease as ``n_components`` is increased until ``n_components == d``.
* Note that noisy data can "short-circuit" the manifold, in essence acting
as a bridge between parts of the manifold that would otherwise be
well-separated. Manifold learning on noisy and/or incomplete data is
an active area of research.
* Certain input configurations can lead to singular weight matrices, for
example when more than two points in the dataset are identical, or when
the data is split into disjointed groups. In this case, ``solver='arpack'``
will fail to find the null space. The easiest way to address this is to
use ``solver='dense'`` which will work on a singular matrix, though it may
be very slow depending on the number of input points. Alternatively, one
can attempt to understand the source of the singularity: if it is due to
disjoint sets, increasing ``n_neighbors`` may help. If it is due to
identical points in the dataset, removing these points may help.
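Several of the tips above can be combined in one short sketch (the estimator choice and parameter values are illustrative): scale the features first, then inspect the reconstruction error of the fitted manifold learner:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, _ = load_digits(return_X_y=True)
X = X[:100]

# Scaling first keeps the nearest-neighbor search from being dominated by
# features with large ranges.
pipe = make_pipeline(StandardScaler(),
                     Isomap(n_neighbors=10, n_components=2))
X_embedded = pipe.fit_transform(X)

# Isomap exposes a reconstruction error that can be compared across
# different choices of n_components.
err = pipe.named_steps["isomap"].reconstruction_error()
```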
.. seealso::
:ref:`random_trees_embedding` can also be useful to derive non-linear
representations of feature space, though it does not perform
dimensionality reduction.
digits img image auto examples manifold images sphx glr plot lle digits 001 png target auto examples manifold plot lle digits html scale 50 projected img image auto examples manifold images sphx glr plot lle digits 002 png target auto examples manifold plot lle digits html scale 50 centered digits img projected img To address this concern a number of supervised and unsupervised linear dimensionality reduction frameworks have been designed such as Principal Component Analysis PCA Independent Component Analysis Linear Discriminant Analysis and others These algorithms define specific rubrics to choose an interesting linear projection of the data These methods can be powerful but often miss important non linear structure in the data PCA img image auto examples manifold images sphx glr plot lle digits 003 png target auto examples manifold plot lle digits html scale 50 LDA img image auto examples manifold images sphx glr plot lle digits 004 png target auto examples manifold plot lle digits html scale 50 centered PCA img LDA img Manifold Learning can be thought of as an attempt to generalize linear frameworks like PCA to be sensitive to non linear structure in data Though supervised variants exist the typical manifold learning problem is unsupervised it learns the high dimensional structure of the data from the data itself without the use of predetermined classifications rubric Examples See ref sphx glr auto examples manifold plot lle digits py for an example of dimensionality reduction on handwritten digits See ref sphx glr auto examples manifold plot compare methods py for an example of dimensionality reduction on a toy S curve dataset See ref sphx glr auto examples applications plot stock market py for an example of using manifold learning to map the stock market structure based on historical stock prices The manifold learning implementations available in scikit learn are summarized below isomap Isomap One of the earliest approaches to manifold learning is the Isomap 
algorithm short for Isometric Mapping Isomap can be viewed as an extension of Multi dimensional Scaling MDS or Kernel PCA Isomap seeks a lower dimensional embedding which maintains geodesic distances between all points Isomap can be performed with the object class Isomap figure auto examples manifold images sphx glr plot lle digits 005 png target auto examples manifold plot lle digits html align center scale 50 dropdown Complexity The Isomap algorithm comprises three stages 1 Nearest neighbor search Isomap uses class sklearn neighbors BallTree for efficient neighbor search The cost is approximately math O D log k N log N for math k nearest neighbors of math N points in math D dimensions 2 Shortest path graph search The most efficient known algorithms for this are Dijkstra s Algorithm which is approximately math O N 2 k log N or the Floyd Warshall algorithm which is math O N 3 The algorithm can be selected by the user with the path method keyword of Isomap If unspecified the code attempts to choose the best algorithm for the input data 3 Partial eigenvalue decomposition The embedding is encoded in the eigenvectors corresponding to the math d largest eigenvalues of the math N times N isomap kernel For a dense solver the cost is approximately math O d N 2 This cost can often be improved using the ARPACK solver The eigensolver can be specified by the user with the eigen solver keyword of Isomap If unspecified the code attempts to choose the best algorithm for the input data The overall complexity of Isomap is math O D log k N log N O N 2 k log N O d N 2 math N number of training data points math D input dimension math k number of nearest neighbors math d output dimension rubric References A global geometric framework for nonlinear dimensionality reduction http science sciencemag org content 290 5500 2319 full Tenenbaum J B De Silva V Langford J C Science 290 5500 locally linear embedding Locally Linear Embedding Locally linear embedding LLE seeks a lower dimensional 
projection of the data which preserves distances within local neighborhoods It can be thought of as a series of local Principal Component Analyses which are globally compared to find the best non linear embedding Locally linear embedding can be performed with function func locally linear embedding or its object oriented counterpart class LocallyLinearEmbedding figure auto examples manifold images sphx glr plot lle digits 006 png target auto examples manifold plot lle digits html align center scale 50 dropdown Complexity The standard LLE algorithm comprises three stages 1 Nearest Neighbors Search See discussion under Isomap above 2 Weight Matrix Construction math O D N k 3 The construction of the LLE weight matrix involves the solution of a math k times k linear equation for each of the math N local neighborhoods 3 Partial Eigenvalue Decomposition See discussion under Isomap above The overall complexity of standard LLE is math O D log k N log N O D N k 3 O d N 2 math N number of training data points math D input dimension math k number of nearest neighbors math d output dimension rubric References Nonlinear dimensionality reduction by locally linear embedding http www sciencemag org content 290 5500 2323 full Roweis S Saul L Science 290 2323 2000 Modified Locally Linear Embedding One well known issue with LLE is the regularization problem When the number of neighbors is greater than the number of input dimensions the matrix defining each local neighborhood is rank deficient To address this standard LLE applies an arbitrary regularization parameter math r which is chosen relative to the trace of the local weight matrix Though it can be shown formally that as math r to 0 the solution converges to the desired embedding there is no guarantee that the optimal solution will be found for math r 0 This problem manifests itself in embeddings which distort the underlying geometry of the manifold One method to address the regularization problem is to use multiple weight 
vectors in each neighborhood This is the essence of modified locally linear embedding MLLE MLLE can be performed with function func locally linear embedding or its object oriented counterpart class LocallyLinearEmbedding with the keyword method modified It requires n neighbors n components figure auto examples manifold images sphx glr plot lle digits 007 png target auto examples manifold plot lle digits html align center scale 50 dropdown Complexity The MLLE algorithm comprises three stages 1 Nearest Neighbors Search Same as standard LLE 2 Weight Matrix Construction Approximately math O D N k 3 O N k D k 2 The first term is exactly equivalent to that of standard LLE The second term has to do with constructing the weight matrix from multiple weights In practice the added cost of constructing the MLLE weight matrix is relatively small compared to the cost of stages 1 and 3 3 Partial Eigenvalue Decomposition Same as standard LLE The overall complexity of MLLE is math O D log k N log N O D N k 3 O N k D k 2 O d N 2 math N number of training data points math D input dimension math k number of nearest neighbors math d output dimension rubric References MLLE Modified Locally Linear Embedding Using Multiple Weights https citeseerx ist psu edu doc view pid 0b060fdbd92cbcc66b383bcaa9ba5e5e624d7ee3 Zhang Z Wang J Hessian Eigenmapping Hessian Eigenmapping also known as Hessian based LLE HLLE is another method of solving the regularization problem of LLE It revolves around a hessian based quadratic form at each neighborhood which is used to recover the locally linear structure Though other implementations note its poor scaling with data size sklearn implements some algorithmic improvements which make its cost comparable to that of other LLE variants for small output dimension HLLE can be performed with function func locally linear embedding or its object oriented counterpart class LocallyLinearEmbedding with the keyword method hessian It requires n neighbors n components n 
components 3 2 figure auto examples manifold images sphx glr plot lle digits 008 png target auto examples manifold plot lle digits html align center scale 50 dropdown Complexity The HLLE algorithm comprises three stages 1 Nearest Neighbors Search Same as standard LLE 2 Weight Matrix Construction Approximately math O D N k 3 O N d 6 The first term reflects a similar cost to that of standard LLE The second term comes from a QR decomposition of the local hessian estimator 3 Partial Eigenvalue Decomposition Same as standard LLE The overall complexity of standard HLLE is math O D log k N log N O D N k 3 O N d 6 O d N 2 math N number of training data points math D input dimension math k number of nearest neighbors math d output dimension rubric References Hessian Eigenmaps Locally linear embedding techniques for high dimensional data http www pnas org content 100 10 5591 Donoho D Grimes C Proc Natl Acad Sci USA 100 5591 2003 spectral embedding Spectral Embedding Spectral Embedding is an approach to calculating a non linear embedding Scikit learn implements Laplacian Eigenmaps which finds a low dimensional representation of the data using a spectral decomposition of the graph Laplacian The graph generated can be considered as a discrete approximation of the low dimensional manifold in the high dimensional space Minimization of a cost function based on the graph ensures that points close to each other on the manifold are mapped close to each other in the low dimensional space preserving local distances Spectral embedding can be performed with the function func spectral embedding or its object oriented counterpart class SpectralEmbedding dropdown Complexity The Spectral Embedding Laplacian Eigenmaps algorithm comprises three stages 1 Weighted Graph Construction Transform the raw input data into graph representation using affinity adjacency matrix representation 2 Graph Laplacian Construction unnormalized Graph Laplacian is constructed as math L D A for and normalized one as 
math L D frac 1 2 D A D frac 1 2 3 Partial Eigenvalue Decomposition Eigenvalue decomposition is done on graph Laplacian The overall complexity of spectral embedding is math O D log k N log N O D N k 3 O d N 2 math N number of training data points math D input dimension math k number of nearest neighbors math d output dimension rubric References Laplacian Eigenmaps for Dimensionality Reduction and Data Representation https web cse ohio state edu mbelkin papers LEM NC 03 pdf M Belkin P Niyogi Neural Computation June 2003 15 6 1373 1396 Local Tangent Space Alignment Though not technically a variant of LLE Local tangent space alignment LTSA is algorithmically similar enough to LLE that it can be put in this category Rather than focusing on preserving neighborhood distances as in LLE LTSA seeks to characterize the local geometry at each neighborhood via its tangent space and performs a global optimization to align these local tangent spaces to learn the embedding LTSA can be performed with function func locally linear embedding or its object oriented counterpart class LocallyLinearEmbedding with the keyword method ltsa figure auto examples manifold images sphx glr plot lle digits 009 png target auto examples manifold plot lle digits html align center scale 50 dropdown Complexity The LTSA algorithm comprises three stages 1 Nearest Neighbors Search Same as standard LLE 2 Weight Matrix Construction Approximately math O D N k 3 O k 2 d The first term reflects a similar cost to that of standard LLE 3 Partial Eigenvalue Decomposition Same as standard LLE The overall complexity of standard LTSA is math O D log k N log N O D N k 3 O k 2 d O d N 2 math N number of training data points math D input dimension math k number of nearest neighbors math d output dimension rubric References arxiv Principal manifolds and nonlinear dimensionality reduction via tangent space alignment cs 0212008 Zhang Z Zha H Journal of Shanghai Univ 8 406 2004 multidimensional scaling Multi dimensional 
Scaling MDS Multidimensional scaling https en wikipedia org wiki Multidimensional scaling class MDS seeks a low dimensional representation of the data in which the distances respect well the distances in the original high dimensional space In general class MDS is a technique used for analyzing similarity or dissimilarity data It attempts to model similarity or dissimilarity data as distances in a geometric spaces The data can be ratings of similarity between objects interaction frequencies of molecules or trade indices between countries There exists two types of MDS algorithm metric and non metric In scikit learn the class class MDS implements both In Metric MDS the input similarity matrix arises from a metric and thus respects the triangular inequality the distances between output two points are then set to be as close as possible to the similarity or dissimilarity data In the non metric version the algorithms will try to preserve the order of the distances and hence seek for a monotonic relationship between the distances in the embedded space and the similarities dissimilarities figure auto examples manifold images sphx glr plot lle digits 010 png target auto examples manifold plot lle digits html align center scale 50 Let math S be the similarity matrix and math X the coordinates of the math n input points Disparities math hat d ij are transformation of the similarities chosen in some optimal ways The objective called the stress is then defined by math sum i j d ij X hat d ij X dropdown Metric MDS The simplest metric class MDS model called absolute MDS disparities are defined by math hat d ij S ij With absolute MDS the value math S ij should then correspond exactly to the distance between point math i and math j in the embedding point Most commonly disparities are set to math hat d ij b S ij dropdown Nonmetric MDS Non metric class MDS focuses on the ordination of the data If math S ij S jk then the embedding should enforce math d ij d jk For this reason we 
discuss it in terms of dissimilarities math delta ij instead of similarities math S ij Note that dissimilarities can easily be obtained from similarities through a simple transform e g math delta ij c 1 c 2 S ij for some real constants math c 1 c 2 A simple algorithm to enforce proper ordination is to use a monotonic regression of math d ij on math delta ij yielding disparities math hat d ij in the same order as math delta ij A trivial solution to this problem is to set all the points on the origin In order to avoid that the disparities math hat d ij are normalized Note that since we only care about relative ordering our objective should be invariant to simple translation and scaling however the stress used in metric MDS is sensitive to scaling To address this non metric MDS may use a normalized stress known as Stress 1 defined as math sqrt frac sum i j d ij hat d ij 2 sum i j d ij 2 The use of normalized Stress 1 can be enabled by setting normalized stress True however it is only compatible with the non metric MDS problem and will be ignored in the metric case figure auto examples manifold images sphx glr plot mds 001 png target auto examples manifold plot mds html align center scale 60 rubric References Modern Multidimensional Scaling Theory and Applications https www springer com fr book 9780387251509 Borg I Groenen P Springer Series in Statistics 1997 Nonmetric multidimensional scaling a numerical method http cda psych uiuc edu psychometrika highly cited articles kruskal 1964b pdf Kruskal J Psychometrika 29 1964 Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis http cda psych uiuc edu psychometrika highly cited articles kruskal 1964a pdf Kruskal J Psychometrika 29 1964 t sne t distributed Stochastic Neighbor Embedding t SNE t SNE class TSNE converts affinities of data points to probabilities The affinities in the original space are represented by Gaussian joint probabilities and the affinities in the embedded space are represented 
by Student s t distributions This allows t SNE to be particularly sensitive to local structure and has a few other advantages over existing techniques Revealing the structure at many scales on a single map Revealing data that lie in multiple different manifolds or clusters Reducing the tendency to crowd points together at the center While Isomap LLE and variants are best suited to unfold a single continuous low dimensional manifold t SNE will focus on the local structure of the data and will tend to extract clustered local groups of samples as highlighted on the S curve example This ability to group samples based on the local structure might be beneficial to visually disentangle a dataset that comprises several manifolds at once as is the case in the digits dataset The Kullback Leibler KL divergence of the joint probabilities in the original space and the embedded space will be minimized by gradient descent Note that the KL divergence is not convex i e multiple restarts with different initializations will end up in local minima of the KL divergence Hence it is sometimes useful to try different seeds and select the embedding with the lowest KL divergence The disadvantages to using t SNE are roughly t SNE is computationally expensive and can take several hours on million sample datasets where PCA will finish in seconds or minutes The Barnes Hut t SNE method is limited to two or three dimensional embeddings The algorithm is stochastic and multiple restarts with different seeds can yield different embeddings However it is perfectly legitimate to pick the embedding with the least error Global structure is not explicitly preserved This problem is mitigated by initializing points with PCA using init pca figure auto examples manifold images sphx glr plot lle digits 013 png target auto examples manifold plot lle digits html align center scale 50 dropdown Optimizing t SNE The main purpose of t SNE is visualization of high dimensional data Hence it works best when the data 
will be embedded on two or three dimensions Optimizing the KL divergence can be a little bit tricky sometimes There are five parameters that control the optimization of t SNE and therefore possibly the quality of the resulting embedding perplexity early exaggeration factor learning rate maximum number of iterations angle not used in the exact method The perplexity is defined as math k 2 S where math S is the Shannon entropy of the conditional probability distribution The perplexity of a math k sided die is math k so that math k is effectively the number of nearest neighbors t SNE considers when generating the conditional probabilities Larger perplexities lead to more nearest neighbors and less sensitive to small structure Conversely a lower perplexity considers a smaller number of neighbors and thus ignores more global information in favour of the local neighborhood As dataset sizes get larger more points will be required to get a reasonable sample of the local neighborhood and hence larger perplexities may be required Similarly noisier datasets will require larger perplexity values to encompass enough local neighbors to see beyond the background noise The maximum number of iterations is usually high enough and does not need any tuning The optimization consists of two phases the early exaggeration phase and the final optimization During early exaggeration the joint probabilities in the original space will be artificially increased by multiplication with a given factor Larger factors result in larger gaps between natural clusters in the data If the factor is too high the KL divergence could increase during this phase Usually it does not have to be tuned A critical parameter is the learning rate If it is too low gradient descent will get stuck in a bad local minimum If it is too high the KL divergence will increase during optimization A heuristic suggested in Belkina et al 2019 is to set the learning rate to the sample size divided by the early exaggeration factor We 
implement this heuristic as learning rate auto argument More tips can be found in Laurens van der Maaten s FAQ see references The last parameter angle is a tradeoff between performance and accuracy Larger angles imply that we can approximate larger regions by a single point leading to better speed but less accurate results How to Use t SNE Effectively https distill pub 2016 misread tsne provides a good discussion of the effects of the various parameters as well as interactive plots to explore the effects of different parameters dropdown Barnes Hut t SNE The Barnes Hut t SNE that has been implemented here is usually much slower than other manifold learning algorithms The optimization is quite difficult and the computation of the gradient is math O d N log N where math d is the number of output dimensions and math N is the number of samples The Barnes Hut method improves on the exact method where t SNE complexity is math O d N 2 but has several other notable differences The Barnes Hut implementation only works when the target dimensionality is 3 or less The 2D case is typical when building visualizations Barnes Hut only works with dense input data Sparse data matrices can only be embedded with the exact method or can be approximated by a dense low rank projection for instance using class sklearn decomposition PCA Barnes Hut is an approximation of the exact method The approximation is parameterized with the angle parameter therefore the angle parameter is unused when method exact Barnes Hut is significantly more scalable Barnes Hut can be used to embed hundred of thousands of data points while the exact method can handle thousands of samples before becoming computationally intractable For visualization purpose which is the main use case of t SNE using the Barnes Hut method is strongly recommended The exact t SNE method is useful for checking the theoretically properties of the embedding possibly in higher dimensional space but limit to small datasets due to 
computational constraints Also note that the digits labels roughly match the natural grouping found by t SNE while the linear 2D projection of the PCA model yields a representation where label regions largely overlap This is a strong clue that this data can be well separated by non linear methods that focus on the local structure e g an SVM with a Gaussian RBF kernel However failing to visualize well separated homogeneously labeled groups with t SNE in 2D does not necessarily imply that the data cannot be correctly classified by a supervised model It might be the case that 2 dimensions are not high enough to accurately represent the internal structure of the data rubric References Visualizing High Dimensional Data Using t SNE https jmlr org papers v9 vandermaaten08a html van der Maaten L J P Hinton G Journal of Machine Learning Research 2008 t Distributed Stochastic Neighbor Embedding https lvdmaaten github io tsne van der Maaten L J P Accelerating t SNE using Tree Based Algorithms https lvdmaaten github io publications papers JMLR 2014 pdf van der Maaten L J P Journal of Machine Learning Research 15 Oct 3221 3245 2014 Automated optimized parameters for T distributed stochastic neighbor embedding improve visualization and analysis of large datasets https www nature com articles s41467 019 13055 y Belkina A C Ciccolella C O Anno R Halpert R Spidlen J Snyder Cappione J E Nature Communications 10 5415 2019 Tips on practical use Make sure the same scale is used over all features Because manifold learning methods are based on a nearest neighbor search the algorithm may perform poorly otherwise See ref StandardScaler preprocessing scaler for convenient ways of scaling heterogeneous data The reconstruction error computed by each routine can be used to choose the optimal output dimension For a math d dimensional manifold embedded in a math D dimensional parameter space the reconstruction error will decrease as n components is increased until n components d Note that noisy 
data can short circuit the manifold in essence acting as a bridge between parts of the manifold that would otherwise be well separated Manifold learning on noisy and or incomplete data is an active area of research Certain input configurations can lead to singular weight matrices for example when more than two points in the dataset are identical or when the data is split into disjointed groups In this case solver arpack will fail to find the null space The easiest way to address this is to use solver dense which will work on a singular matrix though it may be very slow depending on the number of input points Alternatively one can attempt to understand the source of the singularity if it is due to disjoint sets increasing n neighbors may help If it is due to identical points in the dataset removing these points may help seealso ref random trees embedding can also be useful to derive non linear representations of feature space also it does not perform dimensionality reduction |
.. _advanced-installation:
.. include:: ../min_dependency_substitutions.rst
==================================================
Installing the development version of scikit-learn
==================================================
This section introduces how to install the **main branch** of scikit-learn.
This can be done by either installing a nightly build or building from source.
.. _install_nightly_builds:
Installing nightly builds
=========================
The continuous integration servers of the scikit-learn project build, test
and upload wheel packages for the most recent Python version on a nightly
basis.
Installing a nightly build is the quickest way to:
- try a new feature that will be shipped in the next release (that is, a
feature from a pull-request that was recently merged to the main branch);
- check whether a bug you encountered has been fixed since the last release.
You can install the nightly build of scikit-learn using the `scientific-python-nightly-wheels`
index from the PyPI registry of `anaconda.org`:
.. prompt:: bash $
pip install --pre --extra-index-url https://pypi.anaconda.org/scientific-python-nightly-wheels/simple scikit-learn
Note that you might need to uninstall scikit-learn first to be able to
install its nightly builds.
.. _install_bleeding_edge:
Building from source
====================
Building from source is required to work on a contribution (bug fix, new
feature, code or documentation improvement).
.. _git_repo:
#. Use `Git <https://git-scm.com/>`_ to check out the latest source from the
`scikit-learn repository <https://github.com/scikit-learn/scikit-learn>`_ on
GitHub:
.. prompt:: bash $
git clone [email protected]:scikit-learn/scikit-learn.git # add --depth 1 if your connection is slow
cd scikit-learn
If you plan on submitting a pull-request, you should clone from your fork
instead.
#. Install a recent version of Python (3.9 or later at the time of writing) for
instance using Miniforge3_. Miniforge provides a conda-based distribution of
Python and the most popular scientific libraries.
If you installed Python with conda, we recommend creating a dedicated
`conda environment`_ with all the build dependencies of scikit-learn
(namely NumPy_, SciPy_, Cython_, meson-python_ and Ninja_):
.. prompt:: bash $
conda create -n sklearn-env -c conda-forge python numpy scipy cython meson-python ninja
It is not always necessary but it is safer to open a new prompt before
activating the newly created conda environment.
.. prompt:: bash $
conda activate sklearn-env
#. **Alternative to conda:** You can use alternative installations of Python
provided they are recent enough (3.9 or higher at the time of writing).
Here is an example on how to create a build environment for a Linux system's
Python. Build dependencies are installed with `pip` in a dedicated virtualenv_
to avoid disrupting other Python programs installed on the system:
.. prompt:: bash $
python3 -m venv sklearn-env
source sklearn-env/bin/activate
pip install wheel numpy scipy cython meson-python ninja
#. Install a compiler with OpenMP_ support for your platform. See instructions
for :ref:`compiler_windows`, :ref:`compiler_macos`, :ref:`compiler_linux`
and :ref:`compiler_freebsd`.
#. Build the project with pip:
.. prompt:: bash $
pip install --editable . \
--verbose --no-build-isolation \
--config-settings editable-verbose=true
#. Check that the installed scikit-learn has a version number ending with
`.dev0`:
.. prompt:: bash $
python -c "import sklearn; sklearn.show_versions()"
#. Please refer to the :ref:`developers_guide` and :ref:`pytest_tips` to run
the tests on the module of your choice.
.. note::
`--config-settings editable-verbose=true` is optional but recommended
to avoid surprises when you import `sklearn`. `meson-python` implements
editable installs by rebuilding `sklearn` when executing `import sklearn`.
With the recommended setting you will see a message when this happens,
rather than potentially waiting without feedback and wondering
what is taking so long. Bonus: this means you only have to run the `pip
install` command once, `sklearn` will automatically be rebuilt when
importing `sklearn`.
Note that `--config-settings` is only supported in `pip` version 23.1 or
later. To upgrade `pip` to a compatible version, run `pip install -U pip`.
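You can verify whether your `pip` is recent enough before building; a quick sketch:

```shell
# --config-settings requires pip >= 23.1; print the version to check.
python -m pip --version
# If it is older, upgrade it in place:
#   python -m pip install -U pip
```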
Dependencies
------------
Runtime dependencies
~~~~~~~~~~~~~~~~~~~~
Scikit-learn requires the following dependencies both at build time and at
runtime:
- Python (>= 3.9),
- NumPy (>= |NumpyMinVersion|),
- SciPy (>= |ScipyMinVersion|),
- Joblib (>= |JoblibMinVersion|),
- threadpoolctl (>= |ThreadpoolctlMinVersion|).
Build dependencies
~~~~~~~~~~~~~~~~~~
Building Scikit-learn also requires:
..
# The following places need to be in sync with regard to Cython version:
# - .circleci config file
# - sklearn/_build_utils/__init__.py
# - advanced installation guide
- Cython >= |CythonMinVersion|
- A C/C++ compiler and a matching OpenMP_ runtime library. See the
:ref:`platform system specific instructions
<platform_specific_instructions>` for more details.
.. note::
If OpenMP is not supported by the compiler, the build will be done with
OpenMP functionalities disabled. This is not recommended since it will force
some estimators to run in sequential mode instead of leveraging thread-based
parallelism. Setting the ``SKLEARN_FAIL_NO_OPENMP`` environment variable
(before cythonization) will force the build to fail if OpenMP is not
supported.
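For example, to make a missing-OpenMP configuration fail loudly rather than silently producing a sequential build, export the variable before running the build command (a sketch of the workflow described in the note above):

```shell
# Fail the build if the compiler has no usable OpenMP support.
export SKLEARN_FAIL_NO_OPENMP=1
# then rebuild, e.g.:
#   pip install --editable . --verbose --no-build-isolation
echo "SKLEARN_FAIL_NO_OPENMP=$SKLEARN_FAIL_NO_OPENMP"
```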
Since version 0.21, scikit-learn automatically detects and uses the linear
algebra library used by SciPy **at runtime**. Scikit-learn has therefore no
build dependency on BLAS/LAPACK implementations such as OpenBlas, Atlas, Blis
or MKL.
Test dependencies
~~~~~~~~~~~~~~~~~
Running tests requires:
- pytest >= |PytestMinVersion|
Some tests also require `pandas <https://pandas.pydata.org>`_.
Building a specific version from a tag
--------------------------------------
If you want to build a stable version, you can ``git checkout <VERSION>``
to get the code for that particular version, or download a zip archive of
the version from GitHub.
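The mechanics are plain git; here is a self-contained toy example (using a throwaway repository and a made-up `1.0.0` tag, since the real tag names live in the scikit-learn repository — run `git tag --list` in your clone to see them):

```shell
# Toy repository standing in for a scikit-learn clone.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag 1.0.0          # releases are plain git tags
git tag --list         # shows the available versions
git checkout -q 1.0.0  # detached HEAD at the release tag
git describe --tags    # confirms which tag is checked out
```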
.. _platform_specific_instructions:
Platform-specific instructions
==============================
Here are instructions to install a working C/C++ compiler with OpenMP support
to build scikit-learn Cython extensions for each supported platform.
.. _compiler_windows:
Windows
-------
First, download the `Build Tools for Visual Studio 2019 installer
<https://aka.ms/vs/17/release/vs_buildtools.exe>`_.
Run the downloaded `vs_buildtools.exe` file, during the installation you will
need to make sure you select "Desktop development with C++", similarly to this
screenshot:
.. image:: ../images/visual-studio-build-tools-selection.png
Secondly, find out whether you are running 64-bit or 32-bit Python. The build
command depends on the architecture of the Python interpreter. You can check
the architecture by running the following in ``cmd`` or ``powershell``
console:
.. prompt:: bash $
python -c "import struct; print(struct.calcsize('P') * 8)"
For 64-bit Python, configure the build environment by running the following
commands in ``cmd`` or an Anaconda Prompt (if you use Anaconda):
.. sphinx-prompt 1.3.0 (used in doc-min-dependencies CI task) does not support `batch` prompt type,
.. so we work around by using a known prompt type and an explicit prompt text.
..
.. prompt:: bash C:\>
SET DISTUTILS_USE_SDK=1
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x64
Replace ``x64`` by ``x86`` to build for 32-bit Python.
Please be aware that the path above might be different from user to user. The
aim is to point to the "vcvarsall.bat" file that will set the necessary
environment variables in the current command prompt.
Finally, build scikit-learn with this command prompt:
.. prompt:: bash $
pip install --editable . \
--verbose --no-build-isolation \
--config-settings editable-verbose=true
.. _compiler_macos:
macOS
-----
The default C compiler on macOS, Apple clang (confusingly aliased as
`/usr/bin/gcc`), does not directly support OpenMP. We present two alternatives
to enable OpenMP support:
- either install `conda-forge::compilers` with conda;
- or install `libomp` with Homebrew to extend the default Apple clang compiler.
For Apple Silicon M1 hardware, only the conda-forge method below is known to
work at the time of writing (January 2021). You can install the `macos/arm64`
distribution of conda using the `miniforge installer
<https://github.com/conda-forge/miniforge#miniforge>`_.
macOS compilers from conda-forge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you use the conda package manager (version >= 4.7), you can install the
``compilers`` meta-package from the conda-forge channel, which provides
OpenMP-enabled C/C++ compilers based on the llvm toolchain.
First install the macOS command line tools:
.. prompt:: bash $
xcode-select --install
It is recommended to use a dedicated `conda environment`_ to build
scikit-learn from source:
.. prompt:: bash $
conda create -n sklearn-dev -c conda-forge python numpy scipy cython \
joblib threadpoolctl pytest compilers llvm-openmp meson-python ninja
It is not always necessary but it is safer to open a new prompt before
activating the newly created conda environment.
.. prompt:: bash $
conda activate sklearn-dev
make clean
pip install --editable . \
--verbose --no-build-isolation \
--config-settings editable-verbose=true
.. note::
If you get any conflicting dependency error message, try commenting out
any custom conda configuration in the ``$HOME/.condarc`` file. In
particular the ``channel_priority: strict`` directive is known to cause
problems for this setup.
You can check that the custom compilers are properly installed from conda
forge using the following command:
.. prompt:: bash $
conda list
which should include ``compilers`` and ``llvm-openmp``.
The compilers meta-package will automatically set custom environment
variables:
.. prompt:: bash $
echo $CC
echo $CXX
echo $CFLAGS
echo $CXXFLAGS
echo $LDFLAGS
They point to files and folders from your ``sklearn-dev`` conda environment
(in particular in the bin/, include/ and lib/ subfolders). For instance
``-L/path/to/conda/envs/sklearn-dev/lib`` should appear in ``LDFLAGS``.
In the log, you should see the compiled extension being built with the clang
and clang++ compilers installed by conda with the ``-fopenmp`` command line
flag.
macOS compilers from Homebrew
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another solution is to enable OpenMP support for the clang compiler shipped
by default on macOS.
First install the macOS command line tools:
.. prompt:: bash $
xcode-select --install
Install the Homebrew_ package manager for macOS.
Install the LLVM OpenMP library:
.. prompt:: bash $
brew install libomp
Set the following environment variables:
.. prompt:: bash $
export CC=/usr/bin/clang
export CXX=/usr/bin/clang++
export CPPFLAGS="$CPPFLAGS -Xpreprocessor -fopenmp"
export CFLAGS="$CFLAGS -I/usr/local/opt/libomp/include"
export CXXFLAGS="$CXXFLAGS -I/usr/local/opt/libomp/include"
export LDFLAGS="$LDFLAGS -Wl,-rpath,/usr/local/opt/libomp/lib -L/usr/local/opt/libomp/lib -lomp"
Finally, build scikit-learn in verbose mode (to check for the presence of the
``-fopenmp`` flag in the compiler commands):
.. prompt:: bash $
make clean
pip install --editable . \
--verbose --no-build-isolation \
--config-settings editable-verbose=true
.. _compiler_linux:
Linux
-----
Linux compilers from the system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Installing scikit-learn from source without using conda requires you to have
installed the Python development headers and a working C/C++ compiler with
OpenMP support (typically the GCC toolchain).
Install build dependencies for Debian-based operating systems, e.g.
Ubuntu:
.. prompt:: bash $
sudo apt-get install build-essential python3-dev python3-pip
then proceed as usual:
.. prompt:: bash $
pip3 install cython
pip3 install --editable . \
--verbose --no-build-isolation \
--config-settings editable-verbose=true
Cython and the pre-compiled wheels for the runtime dependencies (numpy, scipy
and joblib) should automatically be installed in
``$HOME/.local/lib/pythonX.Y/site-packages``. Alternatively you can run the
above commands from a virtualenv_ or a `conda environment`_ to get full
isolation from the Python packages installed via the system packager. When
using an isolated environment, ``pip3`` should be replaced by ``pip`` in the
above commands.
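To double-check that you are working in the isolated environment and not the system Python, you can inspect the interpreter before building (a small sketch):

```shell
# sys.prefix points into the virtualenv / conda env when one is active;
# otherwise it points at the system installation.
python3 -c "import sys; print(sys.prefix)"
python3 -m pip --version   # should report a pip living under that prefix
```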
When precompiled wheels of the runtime dependencies are not available for your
architecture (e.g. ARM), you can install the system versions:
.. prompt:: bash $
sudo apt-get install cython3 python3-numpy python3-scipy
On Red Hat and clones (e.g. CentOS), install the dependencies using:
.. prompt:: bash $
sudo yum -y install gcc gcc-c++ python3-devel numpy scipy
Linux compilers from conda-forge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively, install a recent version of the GNU C Compiler toolchain (GCC)
in the user folder using conda:
.. prompt:: bash $
conda create -n sklearn-dev -c conda-forge python numpy scipy cython \
joblib threadpoolctl pytest compilers meson-python ninja
It is not always necessary but it is safer to open a new prompt before
activating the newly created conda environment.
.. prompt:: bash $
conda activate sklearn-dev
pip install --editable . \
--verbose --no-build-isolation \
--config-settings editable-verbose=true
.. _compiler_freebsd:
FreeBSD
-------
The clang compiler included in FreeBSD 12.0 and 11.2 base systems does not
include OpenMP support. You need to install the `openmp` library from packages
(or ports):
.. prompt:: bash $
sudo pkg install openmp
This will install header files in ``/usr/local/include`` and libs in
``/usr/local/lib``. Since these directories are not searched by default, you
can set the environment variables to these locations:
.. prompt:: bash $
export CFLAGS="$CFLAGS -I/usr/local/include"
export CXXFLAGS="$CXXFLAGS -I/usr/local/include"
export LDFLAGS="$LDFLAGS -Wl,-rpath,/usr/local/lib -L/usr/local/lib -lomp"
Finally, build the package using the standard command:
.. prompt:: bash $
pip install --editable . \
--verbose --no-build-isolation \
--config-settings editable-verbose=true
For the upcoming FreeBSD 12.1 and 11.3 versions, OpenMP will be included in
the base system and these steps will not be necessary.
.. _OpenMP: https://en.wikipedia.org/wiki/OpenMP
.. _Cython: https://cython.org
.. _meson-python: https://mesonbuild.com/meson-python
.. _Ninja: https://ninja-build.org/
.. _NumPy: https://numpy.org
.. _SciPy: https://www.scipy.org
.. _Homebrew: https://brew.sh
.. _virtualenv: https://docs.python.org/3/tutorial/venv.html
.. _conda environment: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html
.. _Miniforge3: https://github.com/conda-forge/miniforge#miniforge3 | scikit-learn | advanced installation include min dependency substitutions rst Installing the development version of scikit learn This section introduces how to install the main branch of scikit learn This can be done by either installing a nightly build or building from source install nightly builds Installing nightly builds The continuous integration servers of the scikit learn project build test and upload wheel packages for the most recent Python version on a nightly basis Installing a nightly build is the quickest way to try a new feature that will be shipped in the next release that is a feature from a pull request that was recently merged to the main branch check whether a bug you encountered has been fixed since the last release You can install the nightly build of scikit learn using the scientific python nightly wheels index from the PyPI registry of anaconda org prompt bash pip install pre extra index https pypi anaconda org scientific python nightly wheels simple scikit learn Note that first uninstalling scikit learn might be required to be able to install nightly builds of scikit learn install bleeding edge Building from source Building from source is required to work on a contribution bug fix new feature code or documentation improvement git repo Use Git https git scm com to check out the latest source from the scikit learn repository https github com scikit learn scikit learn on Github prompt bash git clone git github com scikit learn scikit learn git add depth 1 if your connection is slow cd scikit learn If you plan on submitting a pull request you should clone from your fork instead Install a recent version of Python 3 9 or later at the time of writing for instance using Miniforge3 Miniforge provides a conda based distribution of Python and the most popular scientific libraries If you installed Python with conda we recommend to create a dedicated conda environment with all the 
build dependencies of scikit learn namely NumPy SciPy Cython meson python and Ninja prompt bash conda create n sklearn env c conda forge python numpy scipy cython meson python ninja It is not always necessary but it is safer to open a new prompt before activating the newly created conda environment prompt bash conda activate sklearn env Alternative to conda You can use alternative installations of Python provided they are recent enough 3 9 or higher at the time of writing Here is an example on how to create a build environment for a Linux system s Python Build dependencies are installed with pip in a dedicated virtualenv to avoid disrupting other Python programs installed on the system prompt bash python3 m venv sklearn env source sklearn env bin activate pip install wheel numpy scipy cython meson python ninja Install a compiler with OpenMP support for your platform See instructions for ref compiler windows ref compiler macos ref compiler linux and ref compiler freebsd Build the project with pip prompt bash pip install editable verbose no build isolation config settings editable verbose true Check that the installed scikit learn has a version number ending with dev0 prompt bash python c import sklearn sklearn show versions Please refer to the ref developers guide and ref pytest tips to run the tests on the module of your choice note config settings editable verbose true is optional but recommended to avoid surprises when you import sklearn meson python implements editable installs by rebuilding sklearn when executing import sklearn With the recommended setting you will see a message when this happens rather than potentially waiting without feed back and wondering what is taking so long Bonus this means you only have to run the pip install command once sklearn will automatically be rebuilt when importing sklearn Note that config settings is only supported in pip version 23 1 or later To upgrade pip to a compatible version run pip install U pip Dependencies Runtime 
dependencies Scikit learn requires the following dependencies both at build time and at runtime Python 3 8 NumPy NumpyMinVersion SciPy ScipyMinVersion Joblib JoblibMinVersion threadpoolctl ThreadpoolctlMinVersion Build dependencies Building Scikit learn also requires The following places need to be in sync with regard to Cython version circleci config file sklearn build utils init py advanced installation guide Cython CythonMinVersion A C C compiler and a matching OpenMP runtime library See the ref platform system specific instructions platform specific instructions for more details note If OpenMP is not supported by the compiler the build will be done with OpenMP functionalities disabled This is not recommended since it will force some estimators to run in sequential mode instead of leveraging thread based parallelism Setting the SKLEARN FAIL NO OPENMP environment variable before cythonization will force the build to fail if OpenMP is not supported Since version 0 21 scikit learn automatically detects and uses the linear algebra library used by SciPy at runtime Scikit learn has therefore no build dependency on BLAS LAPACK implementations such as OpenBlas Atlas Blis or MKL Test dependencies Running tests requires pytest PytestMinVersion Some tests also require pandas https pandas pydata org Building a specific version from a tag If you want to build a stable version you can git checkout VERSION to get the code for that particular version or download an zip archive of the version from github platform specific instructions Platform specific instructions Here are instructions to install a working C C compiler with OpenMP support to build scikit learn Cython extensions for each supported platform compiler windows Windows First download the Build Tools for Visual Studio 2019 installer https aka ms vs 17 release vs buildtools exe Run the downloaded vs buildtools exe file during the installation you will need to make sure you select Desktop development with C similarly to 
this screenshot image images visual studio build tools selection png Secondly find out if you are running 64 bit or 32 bit Python The building command depends on the architecture of the Python interpreter You can check the architecture by running the following in cmd or powershell console prompt bash python c import struct print struct calcsize P 8 For 64 bit Python configure the build environment by running the following commands in cmd or an Anaconda Prompt if you use Anaconda sphinx prompt 1 3 0 used in doc min dependencies CI task does not support batch prompt type so we work around by using a known prompt type and an explicit prompt text prompt bash C SET DISTUTILS USE SDK 1 C Program Files x86 Microsoft Visual Studio 2019 BuildTools VC Auxiliary Build vcvarsall bat x64 Replace x64 by x86 to build for 32 bit Python Please be aware that the path above might be different from user to user The aim is to point to the vcvarsall bat file that will set the necessary environment variables in the current command prompt Finally build scikit learn with this command prompt prompt bash pip install editable verbose no build isolation config settings editable verbose true compiler macos macOS The default C compiler on macOS Apple clang confusingly aliased as usr bin gcc does not directly support OpenMP We present two alternatives to enable OpenMP support either install conda forge compilers with conda or install libomp with Homebrew to extend the default Apple clang compiler For Apple Silicon M1 hardware only the conda forge method below is known to work at the time of writing January 2021 You can install the macos arm64 distribution of conda using the miniforge installer https github com conda forge miniforge miniforge macOS compilers from conda forge If you use the conda package manager version 4 7 you can install the compilers meta package from the conda forge channel which provides OpenMP enabled C C compilers based on the llvm toolchain First install the macOS command 
line tools prompt bash xcode select install It is recommended to use a dedicated conda environment to build scikit learn from source prompt bash conda create n sklearn dev c conda forge python numpy scipy cython joblib threadpoolctl pytest compilers llvm openmp meson python ninja It is not always necessary but it is safer to open a new prompt before activating the newly created conda environment prompt bash conda activate sklearn dev make clean pip install editable verbose no build isolation config settings editable verbose true note If you get any conflicting dependency error message try commenting out any custom conda configuration in the HOME condarc file In particular the channel priority strict directive is known to cause problems for this setup You can check that the custom compilers are properly installed from conda forge using the following command prompt bash conda list which should include compilers and llvm openmp The compilers meta package will automatically set custom environment variables prompt bash echo CC echo CXX echo CFLAGS echo CXXFLAGS echo LDFLAGS They point to files and folders from your sklearn dev conda environment in particular in the bin include and lib subfolders For instance L path to conda envs sklearn dev lib should appear in LDFLAGS In the log you should see the compiled extension being built with the clang and clang compilers installed by conda with the fopenmp command line flag macOS compilers from Homebrew Another solution is to enable OpenMP support for the clang compiler shipped by default on macOS First install the macOS command line tools prompt bash xcode select install Install the Homebrew package manager for macOS Install the LLVM OpenMP library prompt bash brew install libomp Set the following environment variables prompt bash export CC usr bin clang export CXX usr bin clang export CPPFLAGS CPPFLAGS Xpreprocessor fopenmp export CFLAGS CFLAGS I usr local opt libomp include export CXXFLAGS CXXFLAGS I usr local opt libomp 
include export LDFLAGS LDFLAGS Wl rpath usr local opt libomp lib L usr local opt libomp lib lomp Finally build scikit learn in verbose mode to check for the presence of the fopenmp flag in the compiler commands prompt bash make clean pip install editable verbose no build isolation config settings editable verbose true compiler linux Linux Linux compilers from the system Installing scikit learn from source without using conda requires you to have installed the scikit learn Python development headers and a working C C compiler with OpenMP support typically the GCC toolchain Install build dependencies for Debian based operating systems e g Ubuntu prompt bash sudo apt get install build essential python3 dev python3 pip then proceed as usual prompt bash pip3 install cython pip3 install editable verbose no build isolation config settings editable verbose true Cython and the pre compiled wheels for the runtime dependencies numpy scipy and joblib should automatically be installed in HOME local lib pythonX Y site packages Alternatively you can run the above commands from a virtualenv or a conda environment to get full isolation from the Python packages installed via the system packager When using an isolated environment pip3 should be replaced by pip in the above commands When precompiled wheels of the runtime dependencies are not available for your architecture e g ARM you can install the system versions prompt bash sudo apt get install cython3 python3 numpy python3 scipy On Red Hat and clones e g CentOS install the dependencies using prompt bash sudo yum y install gcc gcc c python3 devel numpy scipy Linux compilers from conda forge Alternatively install a recent version of the GNU C Compiler toolchain GCC in the user folder using conda prompt bash conda create n sklearn dev c conda forge python numpy scipy cython joblib threadpoolctl pytest compilers meson python ninja It is not always necessary but it is safer to open a new prompt before activating the newly created 
conda environment prompt bash conda activate sklearn dev pip install editable verbose no build isolation config settings editable verbose true compiler freebsd FreeBSD The clang compiler included in FreeBSD 12 0 and 11 2 base systems does not include OpenMP support You need to install the openmp library from packages or ports prompt bash sudo pkg install openmp This will install header files in usr local include and libs in usr local lib Since these directories are not searched by default you can set the environment variables to these locations prompt bash export CFLAGS CFLAGS I usr local include export CXXFLAGS CXXFLAGS I usr local include export LDFLAGS LDFLAGS Wl rpath usr local lib L usr local lib lomp Finally build the package using the standard command prompt bash pip install editable verbose no build isolation config settings editable verbose true For the upcoming FreeBSD 12 1 and 11 3 versions OpenMP will be included in the base system and these steps will not be necessary OpenMP https en wikipedia org wiki OpenMP Cython https cython org meson python https mesonbuild com meson python Ninja https ninja build org NumPy https numpy org SciPy https www scipy org Homebrew https brew sh virtualenv https docs python org 3 tutorial venv html conda environment https docs conda io projects conda en latest user guide tasks manage environments html Miniforge3 https github com conda forge miniforge miniforge3 |
.. _contributing:

============
Contributing
============
.. currentmodule:: sklearn
This project is a community effort, and everyone is welcome to
contribute. It is hosted on https://github.com/scikit-learn/scikit-learn.
The decision-making process and governance structure of scikit-learn is laid
out in :ref:`governance`.
Scikit-learn is somewhat :ref:`selective <selectiveness>` when it comes to
adding new algorithms, and the best way to contribute and to help the project
is to start working on known issues.
See :ref:`new_contributors` to get started.
.. topic:: **Our community, our values**
We are a community based on openness and on friendly, didactic
discussions.
We aspire to treat everybody equally, and value their contributions. We
are particularly seeking people from underrepresented backgrounds in Open
Source Software and scikit-learn in particular to participate and
contribute their expertise and experience.
Decisions are made based on technical merit and consensus.
Code is not the only way to help the project. Reviewing pull
requests, answering questions to help others on mailing lists or
issues, organizing and teaching tutorials, working on the website, and
improving the documentation are all priceless contributions.
We abide by the principles of openness, respect, and consideration of
others of the Python Software Foundation:
https://www.python.org/psf/codeofconduct/
In case you experience issues using this package, do not hesitate to submit a
ticket to the
`GitHub issue tracker
<https://github.com/scikit-learn/scikit-learn/issues>`_. You are also
welcome to post feature requests or pull requests.
Ways to contribute
==================
There are many ways to contribute to scikit-learn, with the most common ones
being contribution of code or documentation to the project. Improving the
documentation is no less important than improving the library itself. If you
find a typo in the documentation, or have made improvements, do not hesitate to
create a GitHub issue or preferably submit a GitHub pull request.
Full documentation can be found under the doc/ directory.
But there are many other ways to help. In particular, helping to
:ref:`improve, triage, and investigate issues <bug_triaging>` and
:ref:`reviewing other developers' pull requests <code_review>` are very
valuable contributions that decrease the burden on the project
maintainers.
Another way to contribute is to report issues you're facing, and give a "thumbs
up" on issues that others reported and that are relevant to you. It also helps
us if you spread the word: reference the project from your blog and articles,
link to it from your website, or simply star to say "I use it":
.. raw:: html
<p>
<object
data="https://img.shields.io/github/stars/scikit-learn/scikit-learn?style=for-the-badge&logo=github"
type="image/svg+xml">
</object>
</p>
In case a contribution/issue involves changes to the API principles
or changes to dependencies or supported versions, it must be backed by a
:ref:`slep`. A SLEP must be submitted as a pull request to
`enhancement proposals <https://scikit-learn-enhancement-proposals.readthedocs.io>`_
using the `SLEP template <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep_template.html>`_
and follow the decision-making process outlined in :ref:`governance`.
.. dropdown:: Contributing to related projects
Scikit-learn thrives in an ecosystem of several related projects, which also
may have relevant issues to work on, including smaller projects such as:
* `scikit-learn-contrib <https://github.com/search?q=org%3Ascikit-learn-contrib+is%3Aissue+is%3Aopen+sort%3Aupdated-desc&type=Issues>`__
* `joblib <https://github.com/joblib/joblib/issues>`__
* `sphinx-gallery <https://github.com/sphinx-gallery/sphinx-gallery/issues>`__
* `numpydoc <https://github.com/numpy/numpydoc/issues>`__
* `liac-arff <https://github.com/renatopp/liac-arff/issues>`__
and larger projects:
* `numpy <https://github.com/numpy/numpy/issues>`__
* `scipy <https://github.com/scipy/scipy/issues>`__
* `matplotlib <https://github.com/matplotlib/matplotlib/issues>`__
* and so on.
Look for issues marked "help wanted" or similar. Helping these projects may help
scikit-learn too. See also :ref:`related_projects`.
Automated Contributions Policy
==============================
Please refrain from submitting issues or pull requests generated by
fully-automated tools. Maintainers reserve the right, at their sole discretion,
to close such submissions and to block any account responsible for them.
Ideally, contributions should follow from a human-to-human discussion in the
form of an issue.
Submitting a bug report or a feature request
============================================
We use GitHub issues to track all bugs and feature requests; feel free to open
an issue if you have found a bug or wish to see a feature implemented.
In case you experience issues using this package, do not hesitate to submit a
ticket to the
`Bug Tracker <https://github.com/scikit-learn/scikit-learn/issues>`_. You are
also welcome to post feature requests or pull requests.
It is recommended to check that your issue complies with the
following rules before submitting:
- Verify that your issue is not being currently addressed by other
`issues <https://github.com/scikit-learn/scikit-learn/issues?q=>`_
or `pull requests <https://github.com/scikit-learn/scikit-learn/pulls?q=>`_.
- If you are submitting an algorithm or feature request, please verify that
the algorithm fulfills our
`new algorithm requirements
<https://scikit-learn.org/stable/faq.html#what-are-the-inclusion-criteria-for-new-algorithms>`_.
- If you are submitting a bug report, we strongly encourage you to follow the guidelines in
:ref:`filing_bugs`.
.. _filing_bugs:
How to make a good bug report
-----------------------------
When you submit an issue to `GitHub
<https://github.com/scikit-learn/scikit-learn/issues>`__, please do your best to
follow these guidelines! This will make it a lot easier to provide you with good
feedback:
- The ideal bug report contains a :ref:`short reproducible code snippet
<minimal_reproducer>`; this way anyone can try to reproduce the bug easily. If your
snippet is longer than around 50 lines, please link to a `Gist
<https://gist.github.com>`_ or a GitHub repo.
- If not feasible to include a reproducible snippet, please be specific about
what **estimators and/or functions are involved and the shape of the data**.
- If an exception is raised, please **provide the full traceback**.
- Please include your **operating system type and version number**, as well as
your **Python, scikit-learn, numpy, and scipy versions**. This information
can be found by running:
.. prompt:: bash
python -c "import sklearn; sklearn.show_versions()"
- Please ensure all **code snippets and error messages are formatted in
appropriate code blocks**. See `Creating and highlighting code blocks
<https://help.github.com/articles/creating-and-highlighting-code-blocks>`_
for more details.
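Putting these points together, a complete bug report snippet might look like the following minimal sketch (the estimator, data shapes, and seed are purely illustrative):

```python
# Illustrative minimal reproducer: small synthetic data, a fixed random seed,
# and a single estimator call. Paste the full traceback right below it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(20, 3)           # shape (n_samples=20, n_features=3)
y = np.array([0, 1] * 10)     # two balanced classes

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))
```

If your real data cannot be shared, generating synthetic data with the same shape and dtype usually preserves the bug.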
If you want to help curate issues, read about :ref:`bug_triaging`.
Contributing code
=================
.. note::
To avoid duplicating work, it is highly advised that you search through the
`issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_ and
the `PR list <https://github.com/scikit-learn/scikit-learn/pulls>`_.
If in doubt about duplicated work, or if you want to work on a non-trivial
feature, it's recommended to first open an issue in
the `issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_
to get some feedback from core developers.
One easy way to find an issue to work on is by applying the "help wanted"
label in your search. This lists all the issues that have been unclaimed
so far. In order to claim an issue for yourself, please comment exactly
``/take`` on it for the CI to automatically assign the issue to you.
To maintain the quality of the codebase and ease the review process, any
contribution must conform to the project's :ref:`coding guidelines
<coding-guidelines>`, in particular:
- Don't modify unrelated lines to keep the PR focused on the scope stated in its
description or issue.
- Only write inline comments that add value and avoid stating the obvious: explain
the "why" rather than the "what".
- **Most importantly**: Do not contribute code that you don't understand.
Video resources
---------------
These videos are step-by-step introductions on how to contribute to
scikit-learn, and are a great companion to the following text guidelines.
Please make sure to still check our guidelines below, since they describe our
latest up-to-date workflow.
- Crash Course in Contributing to Scikit-Learn & Open Source Projects:
`Video <https://youtu.be/5OL8XoMMOfA>`__,
`Transcript
<https://github.com/data-umbrella/event-transcripts/blob/main/2020/05-andreas-mueller-contributing.md>`__
- Example of Submitting a Pull Request to scikit-learn:
`Video <https://youtu.be/PU1WyDPGePI>`__,
`Transcript
<https://github.com/data-umbrella/event-transcripts/blob/main/2020/06-reshama-shaikh-sklearn-pr.md>`__
- Sprint-specific instructions and practical tips:
`Video <https://youtu.be/p_2Uw2BxdhA>`__,
`Transcript
<https://github.com/data-umbrella/data-umbrella-scikit-learn-sprint/blob/master/3_transcript_ACM_video_vol2.md>`__
- 3 Components of Reviewing a Pull Request:
`Video <https://youtu.be/dyxS9KKCNzA>`__,
`Transcript
<https://github.com/data-umbrella/event-transcripts/blob/main/2021/27-thomas-pr.md>`__
.. note::
In January 2021, the default branch name changed from ``master`` to ``main``
for the scikit-learn GitHub repository to use more inclusive terms.
These videos were created prior to the renaming of the branch.
For contributors who are viewing these videos to set up their
working environment and submitting a PR, ``master`` should be replaced with ``main``.
How to contribute
-----------------
The preferred way to contribute to scikit-learn is to fork the `main
repository <https://github.com/scikit-learn/scikit-learn/>`__ on GitHub,
then submit a "pull request" (PR).
In the first few steps, we explain how to locally install scikit-learn, and
how to set up your git repository:
1. `Create an account <https://github.com/join>`_ on
GitHub if you do not already have one.
2. Fork the `project repository
<https://github.com/scikit-learn/scikit-learn>`__: click on the 'Fork'
button near the top of the page. This creates a copy of the code under your
GitHub account. For more details on how to fork a
repository see `this guide <https://help.github.com/articles/fork-a-repo/>`_.
3. Clone your fork of the scikit-learn repo from your GitHub account to your
local disk:
.. prompt:: bash
git clone [email protected]:YourLogin/scikit-learn.git # add --depth 1 if your connection is slow
cd scikit-learn
4. Follow steps 2-6 in :ref:`install_bleeding_edge` to build scikit-learn in
development mode and return to this document.
5. Install the development dependencies:
.. prompt:: bash
pip install pytest pytest-cov ruff mypy numpydoc black==24.3.0
.. _upstream:
6. Add the ``upstream`` remote. This saves a reference to the main
scikit-learn repository, which you can use to keep your repository
synchronized with the latest changes:
.. prompt:: bash
git remote add upstream [email protected]:scikit-learn/scikit-learn.git
7. Check that the `upstream` and `origin` remote aliases are configured correctly
by running `git remote -v` which should display:
.. code-block:: text
origin [email protected]:YourLogin/scikit-learn.git (fetch)
origin [email protected]:YourLogin/scikit-learn.git (push)
upstream [email protected]:scikit-learn/scikit-learn.git (fetch)
upstream [email protected]:scikit-learn/scikit-learn.git (push)
You should now have a working installation of scikit-learn, and your git repository
properly configured. It can be useful to run some tests to verify your installation.
Please refer to :ref:`pytest_tips` for examples.
The next steps now describe the process of modifying code and submitting a PR:
8. Synchronize your ``main`` branch with the ``upstream/main`` branch,
more details on `GitHub Docs <https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork>`_:
.. prompt:: bash
git checkout main
git fetch upstream
git merge upstream/main
9. Create a feature branch to hold your development changes:
.. prompt:: bash
git checkout -b my_feature
and start making changes. Always use a feature branch. It's good
practice to never work on the ``main`` branch!
10. (**Optional**) Install `pre-commit <https://pre-commit.com/#install>`_ to
run code style checks before each commit:
.. prompt:: bash
pip install pre-commit
pre-commit install
pre-commit checks can be disabled for a particular commit with
`git commit -n`.
11. Develop the feature on your feature branch on your computer, using Git to
do the version control. When you're done editing, add changed files using
``git add`` and then ``git commit``:
.. prompt:: bash
git add modified_files
git commit
to record your changes in Git, then push the changes to your GitHub
account with:
.. prompt:: bash
git push -u origin my_feature
12. Follow `these
<https://help.github.com/articles/creating-a-pull-request-from-a-fork>`_
instructions to create a pull request from your fork. This will send a
notification to potential reviewers. If your pull request does not receive
attention after a couple of days, consider posting a message in the
development channel of the `Discord server <https://discord.com/invite/h9qyrK8Jc8>`_
for more visibility (instant replies are not guaranteed though).
It is often helpful to keep your local feature branch synchronized with the
latest changes of the main scikit-learn repository:
.. prompt:: bash
git fetch upstream
git merge upstream/main
Subsequently, you might need to solve the conflicts. You can refer to the
`Git documentation related to resolving merge conflict using the command
line
<https://help.github.com/articles/resolving-a-merge-conflict-using-the-command-line/>`_.
.. topic:: Learning Git
The `Git documentation <https://git-scm.com/doc>`_ and
http://try.github.io are excellent resources to get started with git,
and understanding all of the commands shown here.
.. _pr_checklist:
Pull request checklist
----------------------
Before a PR can be merged, it needs to be approved by two core developers.
An incomplete contribution -- where you expect to do more work before receiving
a full review -- should be marked as a `draft pull request
<https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/changing-the-stage-of-a-pull-request>`__
and changed to "ready for review" when it matures. Draft PRs may be useful to:
indicate you are working on something to avoid duplicated work, request
broad review of functionality or API, or seek collaborators. Draft PRs often
benefit from the inclusion of a `task list
<https://github.com/blog/1375-task-lists-in-gfm-issues-pulls-comments>`_ in
the PR description.
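For example, a draft PR description could carry a task list such as (items are illustrative):

.. code-block:: text

    - [x] Add the new parameter and its validation
    - [ ] Add non-regression tests
    - [ ] Update the user guide and changelog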
In order to ease the reviewing process, we recommend that your contribution
complies with the following rules before marking a PR as "ready for review". The
**bolded** ones are especially important:
1. **Give your pull request a helpful title** that summarizes what your
contribution does. This title will often become the commit message once
merged so it should summarize your contribution for posterity. In some
cases "Fix <ISSUE TITLE>" is enough. "Fix #<ISSUE NUMBER>" is never a
good title.
2. **Make sure your code passes the tests**. The whole test suite can be run
with `pytest`, but it is usually not recommended since it takes a long
time. It is often enough to only run the test related to your changes:
for example, if you changed something in
`sklearn/linear_model/_logistic.py`, running the following commands will
usually be enough:
- `pytest sklearn/linear_model/_logistic.py` to make sure the doctest
examples are correct
- `pytest sklearn/linear_model/tests/test_logistic.py` to run the tests
specific to the file
- `pytest sklearn/linear_model` to test the whole
:mod:`~sklearn.linear_model` module
- `pytest doc/modules/linear_model.rst` to make sure the user guide
examples are correct.
- `pytest sklearn/tests/test_common.py -k LogisticRegression` to run all our
estimator checks (specifically for `LogisticRegression`, if that's the
estimator you changed).
There may be other failing tests, but they will be caught by the CI so
you don't need to run the whole test suite locally. For guidelines on how
to use ``pytest`` efficiently, see the :ref:`pytest_tips`.
3. **Make sure your code is properly commented and documented**, and **make
sure the documentation renders properly**. To build the documentation, please
refer to our :ref:`contribute_documentation` guidelines. The CI will also
build the docs: please refer to :ref:`generated_doc_CI`.
4. **Tests are necessary for enhancements to be
accepted**. Bug-fixes or new features should be provided with
`non-regression tests
<https://en.wikipedia.org/wiki/Non-regression_testing>`_. These tests
verify the correct behavior of the fix or feature. In this manner, further
modifications of the code base are guaranteed to remain consistent with the
desired behavior. In the case of bug fixes, at the time of the PR, the
non-regression tests should fail for the code base in the ``main`` branch
and pass for the PR code.
5. If your PR is likely to affect users, you need to add a changelog entry describing
your PR changes, see the `following README <https://github.com/scikit-learn/scikit-learn/blob/main/doc/whats_new/upcoming_changes/README.md>`_
for more details.
6. Follow the :ref:`coding-guidelines`.
7. When applicable, use the validation tools and scripts in the :mod:`sklearn.utils`
module. A list of utility routines available for developers can be found in the
:ref:`developers-utils` page.
8. Often pull requests resolve one or more other issues (or pull requests).
If merging your pull request means that some other issues/PRs should
be closed, you should `use keywords to create link to them
<https://github.com/blog/1506-closing-issues-via-pull-requests/>`_
(e.g., ``Fixes #1234``; multiple issues/PRs are allowed as long as each
one is preceded by a keyword). Upon merging, those issues/PRs will
automatically be closed by GitHub. If your pull request is simply
related to some other issues/PRs, or it only partially resolves the target
issue, create a link to them without using the keywords (e.g., ``Towards #1234``).
9. PRs should often substantiate the change, through benchmarks of
performance and efficiency (see :ref:`monitoring_performances`) or through
examples of usage. Examples also illustrate the features and intricacies of
the library to users. Have a look at other examples in the `examples/
<https://github.com/scikit-learn/scikit-learn/tree/main/examples>`_
directory for reference. Examples should demonstrate why the new
functionality is useful in practice and, if possible, compare it to other
methods available in scikit-learn.
10. New features have some maintenance overhead. We expect PR authors
to take part in the maintenance for the code they submit, at least
initially. New features need to be illustrated with narrative
documentation in the user guide, with small code snippets.
If relevant, please also add references in the literature, with PDF links
when possible.
11. The user guide should also include expected time and space complexity
of the algorithm and scalability, e.g. "this algorithm can scale to a
large number of samples > 100000, but does not scale in dimensionality:
`n_features` is expected to be lower than 100".
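To make point 4 above concrete, here is a minimal sketch of a non-regression test for a hypothetical empty-input bug; the helper and test names are placeholders, not real scikit-learn APIs:

```python
# Hypothetical fixed helper: the pre-fix version raised ZeroDivisionError on [].
def safe_mean(values):
    if not values:
        return 0.0
    return sum(values) / len(values)


def test_safe_mean_empty_input():
    # Non-regression test: it fails on the pre-fix code in ``main``
    # and passes once the fix is applied.
    assert safe_mean([]) == 0.0
    assert safe_mean([1.0, 2.0, 3.0]) == 2.0


test_safe_mean_empty_input()
```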
You can also check our :ref:`code_review` to get an idea of what reviewers
will expect.
You can check for common programming errors with the following tools:
* Code with a good unit test coverage (at least 80%, better 100%), check with:
.. prompt:: bash
pip install pytest pytest-cov
pytest --cov sklearn path/to/tests
See also :ref:`testing_coverage`.
* Run static analysis with `mypy`:
.. prompt:: bash
mypy sklearn
This must not produce new errors in your pull request. Using `# type: ignore`
annotation can be a workaround for a few cases that are not supported by
mypy, in particular,
- when importing C or Cython modules,
- on properties with decorators.
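Both cases can be handled with an inline annotation; a minimal sketch (the class and property below are made up for illustration):

```python
# ``# type: ignore`` comments are inert at runtime; they only silence mypy.
import functools

import _socket  # type: ignore  # compiled C extension without type stubs


class Example:
    @property
    @functools.lru_cache(maxsize=1)  # type: ignore  # decorator on a property
    def answer(self) -> int:
        return 42


print(Example().answer)  # prints 42
```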
Bonus points for contributions that include a performance analysis with
a benchmark script and profiling output (see :ref:`monitoring_performances`).
Also check out the :ref:`performance-howto` guide for more details on
profiling and Cython optimizations.
.. note::
The current state of the scikit-learn code base is not compliant with
all of those guidelines, but we expect that enforcing those constraints
on all new contributions will move the overall code base quality in the
right direction.
.. seealso::
For two very well documented and more detailed guides on development
workflow, please pay a visit to the `Scipy Development Workflow
<http://scipy.github.io/devdocs/dev/dev_quickstart.html>`_
and the `Astropy Workflow for Developers
<https://astropy.readthedocs.io/en/latest/development/workflow/development_workflow.html>`_
sections.
Continuous Integration (CI)
---------------------------
* Azure pipelines are used for testing scikit-learn on Linux, Mac and Windows,
with different dependencies and settings.
* CircleCI is used to build the docs for viewing.
* Github Actions are used for various tasks, including building wheels and
source distributions.
* Cirrus CI is used to build on ARM.
.. _commit_markers:
Commit message markers
^^^^^^^^^^^^^^^^^^^^^^
Please note that if one of the following markers appears in the latest commit
message, the corresponding action is taken.
====================== ===================
Commit Message Marker  Action Taken by CI
====================== ===================
[ci skip]              CI is skipped completely
[cd build]             CD is run (wheels and source distribution are built)
[cd build gh]          CD is run only for GitHub Actions
[cd build cirrus]      CD is run only for Cirrus CI
[lint skip]            Azure pipeline skips linting
[scipy-dev]            Build & test with our dependencies (numpy, scipy, etc.) development builds
[free-threaded]        Build & test with CPython 3.13 free-threaded
[pyodide]              Build & test with Pyodide
[azure parallel]       Run Azure CI jobs in parallel
[cirrus arm]           Run Cirrus CI ARM test
[float32]              Run float32 tests by setting `SKLEARN_RUN_FLOAT32_TESTS=1`. See :ref:`environment_variable` for more details
[doc skip]             Docs are not built
[doc quick]            Docs built, but excludes example gallery plots
[doc build]            Docs built including example gallery plots (very long)
====================== ===================
Note that, by default, the documentation is built but only the examples
that are directly modified by the pull request are executed.
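For instance, to request a full documentation build for the latest commit of a documentation-heavy PR, the commit message could end with the marker:

.. code-block:: text

    DOC Rework the contributing guide [doc build]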
.. _build_lock_files:
Build lock files
^^^^^^^^^^^^^^^^
CIs use lock files to build environments with specific versions of dependencies. When a
PR needs to modify the dependencies or their versions, the lock files should be updated
accordingly. This can be done by adding the following comment directly in the GitHub
Pull Request (PR) discussion:
.. code-block:: text
@scikit-learn-bot update lock-files
A bot will push a commit to your PR branch with the updated lock files in a few minutes.
Make sure to tick the *Allow edits from maintainers* checkbox located at the bottom of
the right sidebar of the PR. You can also specify the options `--select-build`,
`--skip-build`, and `--select-tag` as in a command line. Use `--help` on the script
`build_tools/update_environments_and_lock_files.py` for more information. For example,
.. code-block:: text
@scikit-learn-bot update lock-files --select-tag main-ci --skip-build doc
The bot will automatically add :ref:`commit message markers <commit_markers>` to the
commit for certain tags. If you want to add more markers manually, you can do so using
the `--commit-marker` option. For example, the following comment will trigger the bot to
update documentation-related lock files and add the `[doc build]` marker to the commit:
.. code-block:: text
@scikit-learn-bot update lock-files --select-build doc --commit-marker "[doc build]"
Resolve conflicts in lock files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here is a bash snippet that helps resolving conflicts in environment and lock files:
.. prompt:: bash
# pull latest upstream/main
git pull upstream main --no-rebase
# resolve conflicts - keeping the upstream/main version for specific files
git checkout --theirs build_tools/*/*.lock build_tools/*/*environment.yml \
build_tools/*/*lock.txt build_tools/*/*requirements.txt
git add build_tools/*/*.lock build_tools/*/*environment.yml \
build_tools/*/*lock.txt build_tools/*/*requirements.txt
git merge --continue
This will merge `upstream/main` into our branch, automatically keeping the
`upstream/main` version of conflicting environment and lock files (this is good
enough, because we will re-generate the lock files afterwards).
Note that this only fixes conflicts in environment and lock files and you might have
other conflicts to resolve.
Finally, we have to re-generate the environment and lock files for the CIs, as described
in :ref:`Build lock files <build_lock_files>`, or by running:
.. prompt:: bash
python build_tools/update_environments_and_lock_files.py
.. _stalled_pull_request:
Stalled pull requests
---------------------
As contributing a feature can be a lengthy process, some
pull requests appear inactive but unfinished. In such a case, taking
them over is a great service for the project. A good etiquette to take over is:
* **Determine if a PR is stalled**
* A pull request may have the label "stalled" or "help wanted" if we
have already identified it as a candidate for other contributors.
* To decide whether an inactive PR is stalled, ask the contributor if
they plan to continue working on the PR in the near future.
Failure to respond within 2 weeks with an activity that moves the PR
forward suggests that the PR is stalled and will result in tagging
that PR with "help wanted".
Note that if a PR has received earlier comments on the contribution
that have had no reply in a month, it is safe to assume that the PR
is stalled and to shorten the wait time to one day.
After a sprint, follow-up for unmerged PRs opened during the sprint will
be communicated to the participants, and those PRs will be
tagged "sprint". PRs tagged with "sprint" can be reassigned or
declared stalled by sprint leaders.
* **Taking over a stalled PR**: To take over a PR, it is important to
comment on the stalled PR that you are taking over and to link from the
new PR to the old one. The new PR should be created by pulling from the
old one.
Stalled and Unclaimed Issues
----------------------------
Generally speaking, issues which are up for grabs will have a
`"help wanted" <https://github.com/scikit-learn/scikit-learn/labels/help%20wanted>`_
tag. However, not all issues which need contributors will have this tag,
as the "help wanted" tag is not always up-to-date with the state
of the issue. Contributors can find issues which are still up for grabs
using the following guidelines:
* First, to **determine if an issue is claimed**:
* Check for linked pull requests
* Check the conversation to see if anyone has said that they're working on
creating a pull request
* If a contributor comments on an issue to say they are working on it,
a pull request is expected within 2 weeks (new contributor) or 4 weeks
(contributor or core dev), unless a larger time frame is explicitly given.
Beyond that time, another contributor can take the issue and make a
pull request for it. We encourage contributors to comment directly on the
stalled or unclaimed issue to let community members know that they will be
working on it.
* If the issue is linked to a :ref:`stalled pull request <stalled_pull_request>`,
we recommend that contributors follow the procedure
described in the :ref:`stalled_pull_request`
section rather than working directly on the issue.
.. _new_contributors:
Issues for New Contributors
---------------------------
New contributors should look for the following tags when looking for issues. We
strongly recommend that new contributors tackle "easy" issues first: this helps
the contributor become familiar with the contribution workflow, and for the core
devs to become acquainted with the contributor; besides which, we frequently
underestimate how easy an issue is to solve!
- **Good first issue tag**
A great way to start contributing to scikit-learn is to pick an item from
the list of `good first issues
<https://github.com/scikit-learn/scikit-learn/labels/good%20first%20issue>`_
in the issue tracker. Resolving these issues allows you to start contributing
to the project without much prior knowledge. If you have already contributed
to scikit-learn, you should look at Easy issues instead.
- **Easy tag**
If you have already contributed to scikit-learn, another great way to contribute
to scikit-learn is to pick an item from the list of `Easy issues
<https://github.com/scikit-learn/scikit-learn/labels/Easy>`_ in the issue
tracker. Your assistance in this area will be greatly appreciated by the
more experienced developers as it helps free up their time to concentrate on
other issues.
- **Help wanted tag**
We often use the help wanted tag to mark issues regardless of difficulty.
Additionally, we use the help wanted tag to mark Pull Requests which have been
abandoned by their original contributor and are available for someone to pick up where
the original contributor left off. The list of issues with the help wanted tag can be
found `here <https://github.com/scikit-learn/scikit-learn/labels/help%20wanted>`_.
Note that not all issues which need contributors will have this tag.
.. _contribute_documentation:
Documentation
=============
We are glad to accept any sort of documentation:
* **Function/method/class docstrings:** Also known as "API documentation", these
describe what the object does and details any parameters, attributes and
methods. Docstrings live alongside the code in `sklearn/
<https://github.com/scikit-learn/scikit-learn/tree/main/sklearn>`_, and are
generated according to `doc/api_reference.py
<https://github.com/scikit-learn/scikit-learn/blob/main/doc/api_reference.py>`_. To
add, update, remove, or deprecate a public API that is listed in :ref:`api_ref`, this
is the place to look at.
* **User guide:** These provide more detailed information about the algorithms
implemented in scikit-learn and generally live in the root
`doc/ <https://github.com/scikit-learn/scikit-learn/tree/main/doc>`_ directory
and
`doc/modules/ <https://github.com/scikit-learn/scikit-learn/tree/main/doc/modules>`_.
* **Examples:** These provide full code examples that may demonstrate the use
of scikit-learn modules, compare different algorithms or discuss their
interpretation, etc. Examples live in
`examples/ <https://github.com/scikit-learn/scikit-learn/tree/main/examples>`_.
* **Other reStructuredText documents:** These provide various other useful information
(e.g., the :ref:`contributing` guide) and live in
`doc/ <https://github.com/scikit-learn/scikit-learn/tree/main/doc>`_.
.. dropdown:: Guidelines for writing docstrings
* When documenting the parameters and attributes, here is a list of some
well-formatted examples
.. code-block:: text
n_clusters : int, default=3
The number of clusters detected by the algorithm.
some_param : {"hello", "goodbye"}, bool or int, default=True
The parameter description goes here, which can be either a string
literal (either `hello` or `goodbye`), a bool, or an int. The default
value is True.
array_parameter : {array-like, sparse matrix} of shape (n_samples, n_features) \
or (n_samples,)
This parameter accepts data in either of the mentioned forms, with one
of the mentioned shapes. The default value is `np.ones(shape=(n_samples,))`.
list_param : list of int
typed_ndarray : ndarray of shape (n_samples,), dtype=np.int32
sample_weight : array-like of shape (n_samples,), default=None
multioutput_array : ndarray of shape (n_samples, n_classes) or list of such arrays
In general have the following in mind:
* Use Python basic types. (``bool`` instead of ``boolean``)
* Use parenthesis for defining shapes: ``array-like of shape (n_samples,)``
or ``array-like of shape (n_samples, n_features)``
* For strings with multiple options, use curly brackets: ``input: {'log',
'squared', 'multinomial'}``
* 1D or 2D data can be a subset of ``{array-like, ndarray, sparse matrix,
dataframe}``. Note that ``array-like`` can also be a ``list``, while
``ndarray`` is explicitly only a ``numpy.ndarray``.
* Specify ``dataframe`` when "frame-like" features are being used, such as
the column names.
* When specifying the data type of a list, use ``of`` as a delimiter: ``list
of int``. When the parameter supports arrays giving details about the
shape and/or data type and a list of such arrays, you can use one of
``array-like of shape (n_samples,) or list of such arrays``.
* When specifying the dtype of an ndarray, use e.g. ``dtype=np.int32`` after
defining the shape: ``ndarray of shape (n_samples,), dtype=np.int32``. You
can specify multiple dtypes as a set: ``array-like of shape (n_samples,),
dtype={np.float64, np.float32}``. If one wants to mention arbitrary
precision, use `integral` and `floating` rather than the Python types
`int` and `float`. When both `int` and `floating` are supported, there is
no need to specify the dtype.
* When the default is ``None``, ``None`` only needs to be specified at the
end with ``default=None``. Be sure to include in the docstring, what it
means for the parameter or attribute to be ``None``.
* Add "See Also" in docstrings for related classes/functions.
* "See Also" in docstrings should be one line per reference, with a colon and an
explanation, for example:
.. code-block:: text
See Also
--------
SelectKBest : Select features based on the k highest scores.
SelectFpr : Select features based on a false positive rate test.
* Add one or two snippets of code in "Example" section to show how it can be used.
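To make the guidelines above concrete, here is a sketch of a hypothetical function (``select_top_k`` is invented for illustration and is not part of scikit-learn) whose docstring follows these conventions, including "See Also" and "Examples" sections:

```python
def select_top_k(scores, k=10):
    """Select the indices of the ``k`` highest scores.

    Parameters
    ----------
    scores : array-like of shape (n_samples,)
        Scores to rank.

    k : int, default=10
        Number of indices to return.

    Returns
    -------
    indices : list of int
        Indices of the ``k`` highest scores, in descending order of score.

    See Also
    --------
    sorted : Built-in stable sort used internally.

    Examples
    --------
    >>> select_top_k([0.1, 0.9, 0.5], k=2)
    [1, 2]
    """
    # sort indices by their score, highest first, and keep the first k
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:k]
```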
.. dropdown:: Guidelines for writing the user guide and other reStructuredText documents
It is important to keep a good compromise between mathematical and algorithmic
details, and give intuition to the reader on what the algorithm does.
* Begin with a concise, hand-waving explanation of what the algorithm/code does on
the data.
* Highlight the usefulness of the feature and its recommended application.
Consider including the algorithm's complexity
(:math:`O\left(g\left(n\right)\right)`) if available, since "rules of thumb"
can be very machine-dependent. Only if the complexity is not available should
rules of thumb be provided instead.
* Incorporate a relevant figure (generated from an example) to provide intuitions.
* Include one or two short code examples to demonstrate the feature's usage.
* Introduce any necessary mathematical equations, followed by references. By
deferring the mathematical aspects, the documentation becomes more accessible
to users primarily interested in understanding the feature's practical
implications rather than its underlying mechanics.
* When editing reStructuredText (``.rst``) files, try to keep line length under
88 characters when possible (exceptions include links and tables).
* In scikit-learn reStructuredText files, both single and double backticks
surrounding text render as an inline literal (often used for code, e.g.,
`list`), due to a specific configuration we have set. Prefer single
backticks in new text.
* Too much information makes it difficult for users to access the content they
are interested in. Use dropdowns to factorize it by using the following syntax
.. code-block:: rst
.. dropdown:: Dropdown title
Dropdown content.
The snippet above will result in the following dropdown:
.. dropdown:: Dropdown title
Dropdown content.
* Information that can be hidden by default using dropdowns is:
* low hierarchy sections such as `References`, `Properties`, etc. (see for
instance the subsections in :ref:`det_curve`);
* in-depth mathematical details;
* narrative that is use-case specific;
* in general, narrative that may only interest users that want to go beyond
the pragmatics of a given tool.
* Do not use dropdowns for the low level section `Examples`, as it should stay
visible to all users. Make sure that the `Examples` section comes right after
the main discussion with the least possible folded section in-between.
* Be aware that dropdowns break cross-references. If it makes sense, hide the
reference along with the text mentioning it. Otherwise, do not use a dropdown.
.. dropdown:: Guidelines for writing references
* When bibliographic references are available with `arxiv <https://arxiv.org/>`_
or `Digital Object Identifier <https://www.doi.org/>`_ identification numbers,
use the sphinx directives `:arxiv:` or `:doi:`. For example, see references in
:ref:`Spectral Clustering Graphs <spectral_clustering_graph>`.
* For the "References" section in docstrings, see
:func:`sklearn.metrics.silhouette_score` as an example.
* To cross-reference to other pages in the scikit-learn documentation use the
reStructuredText cross-referencing syntax:
* **Section:** to link to an arbitrary section in the documentation, use
reference labels (see `Sphinx docs
<https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#ref-role>`_).
For example:
.. code-block:: rst
.. _my-section:
My section
----------
This is the text of the section.
To refer to itself use :ref:`my-section`.
You should not modify existing sphinx reference labels as this would break
existing cross references and external links pointing to specific sections
in the scikit-learn documentation.
* **Glossary:** linking to a term in the :ref:`glossary`:
.. code-block:: rst
:term:`cross_validation`
* **Function:** to link to the documentation of a function, use the full import
path to the function:
.. code-block:: rst
:func:`~sklearn.model_selection.cross_val_score`
However, if there is a `.. currentmodule::` directive above in the document,
you only need to use the part of the path that follows the current module. For
example:
.. code-block:: rst
.. currentmodule:: sklearn.model_selection
:func:`cross_val_score`
* **Class:** to link to documentation of a class, use the full import path to the
class, unless there is a `.. currentmodule::` directive in the document above
(see above):
.. code-block:: rst
:class:`~sklearn.preprocessing.StandardScaler`
You can edit the documentation using any text editor, and then generate the
HTML output by following :ref:`building_documentation`. The resulting HTML files
will be placed in ``_build/html/`` and are viewable in a web browser, for instance by
opening the local ``_build/html/index.html`` file or by running a local server
.. prompt:: bash
python -m http.server -d _build/html
.. _building_documentation:
Building the documentation
--------------------------
**Before submitting a pull request, check whether your modifications have
introduced new sphinx warnings by building the documentation locally, and try
to fix them.**
First, make sure you have :ref:`properly installed <install_bleeding_edge>` the
development version. On top of that, building the documentation requires installing some
additional packages:
..
packaging is not needed once setuptools starts shipping packaging>=17.0
.. prompt:: bash
pip install sphinx sphinx-gallery numpydoc matplotlib Pillow pandas \
polars scikit-image packaging seaborn sphinx-prompt \
sphinxext-opengraph sphinx-copybutton plotly pooch \
pydata-sphinx-theme sphinxcontrib-sass sphinx-design \
sphinx-remove-toctrees
To build the documentation, you need to be in the ``doc`` folder:
.. prompt:: bash
cd doc
In the vast majority of cases, you only need to generate the web site without
the example gallery:
.. prompt:: bash
make
The documentation will be generated in the ``_build/html/stable`` directory
and is viewable in a web browser, for instance by opening the local
``_build/html/stable/index.html`` file.
To also generate the example gallery you can use:
.. prompt:: bash
make html
This will run all the examples, which takes a while. You can also run only a few examples based on their file names.
Here is a way to run all examples with filenames containing `plot_calibration`:
.. prompt:: bash
EXAMPLES_PATTERN="plot_calibration" make html
You can use regular expressions for more advanced use cases.
Set the environment variable `NO_MATHJAX=1` if you intend to view the documentation in
an offline setting. To build the PDF manual, run:
.. prompt:: bash
make latexpdf
.. admonition:: Sphinx version
:class: warning
While we do our best to have the documentation build under as many
versions of Sphinx as possible, the different versions tend to
behave slightly differently. To get the best results, you should
use the same version as the one we used on CircleCI. Look at this
`GitHub search <https://github.com/search?q=repo%3Ascikit-learn%2Fscikit-learn+%2F%5C%2Fsphinx-%5B0-9.%5D%2B%2F+path%3Abuild_tools%2Fcircle%2Fdoc_linux-64_conda.lock&type=code>`_
to know the exact version.
.. _generated_doc_CI:
Generated documentation on GitHub Actions
-----------------------------------------
When you change the documentation in a pull request, GitHub Actions automatically
builds it. To view the documentation generated by GitHub Actions, simply go to the
bottom of your PR page, look for the item "Check the rendered docs here!" and
click on 'details' next to it:
.. image:: ../images/generated-doc-ci.png
:align: center
.. _testing_coverage:
Testing and improving test coverage
===================================
High-quality `unit testing <https://en.wikipedia.org/wiki/Unit_testing>`_
is a cornerstone of the scikit-learn development process. For this
purpose, we use the `pytest <https://docs.pytest.org>`_
package. Tests are appropriately named functions located in `tests`
subdirectories that check the validity of the algorithms and the
different options of the code.
Running `pytest` in a folder will run all the tests of the corresponding
subpackages. For a more detailed `pytest` workflow, please refer to the
:ref:`pr_checklist`.
We expect code coverage of new features to be at least around 90%.
.. dropdown:: Writing matplotlib-related tests
Test fixtures ensure that a set of tests will be executing with the appropriate
initialization and cleanup. The scikit-learn test suite implements a ``pyplot``
fixture which can be used with ``matplotlib``.
The ``pyplot`` fixture should be used when a test function is dealing with
``matplotlib``. ``matplotlib`` is a soft dependency and is not required.
This fixture is in charge of skipping the tests if ``matplotlib`` is not
installed. In addition, figures created during the tests will be
automatically closed once the test function has been executed.
To use this fixture in a test function, one needs to pass it as an
argument::
def test_requiring_mpl_fixture(pyplot):
# you can now safely use matplotlib
.. dropdown:: Workflow to improve test coverage
To test code coverage, you need to install the `coverage
<https://pypi.org/project/coverage/>`_ package in addition to `pytest`.
1. Run `pytest --cov sklearn /path/to/tests`. The output lists for each file the line
numbers that are not tested.
2. Find a low hanging fruit, looking at which lines are not tested,
write or adapt a test specifically for these lines.
3. Loop.
.. _monitoring_performances:
Monitoring performance
======================
*This section is heavily inspired from the* `pandas documentation
<https://pandas.pydata.org/docs/development/contributing_codebase.html#running-the-performance-test-suite>`_.
When proposing changes to the existing code base, it's important to make sure
that they don't introduce performance regressions. Scikit-learn uses
`asv benchmarks <https://github.com/airspeed-velocity/asv>`_ to monitor the
performance of a selection of common estimators and functions. You can view
these benchmarks on the `scikit-learn benchmark page
<https://scikit-learn.org/scikit-learn-benchmarks>`_.
The corresponding benchmark suite can be found in the `asv_benchmarks/` directory.
To use all features of asv, you will need either `conda` or `virtualenv`. For
more details please check the `asv installation webpage
<https://asv.readthedocs.io/en/latest/installing.html>`_.
First of all you need to install the development version of asv:
.. prompt:: bash
pip install git+https://github.com/airspeed-velocity/asv
and change your directory to `asv_benchmarks/`:
.. prompt:: bash
cd asv_benchmarks
The benchmark suite is configured to run against your local clone of
scikit-learn. Make sure it is up to date:
.. prompt:: bash
git fetch upstream
In the benchmark suite, the benchmarks are organized following the same
structure as scikit-learn. For example, you can compare the performance of a
specific estimator between ``upstream/main`` and the branch you are working on:
.. prompt:: bash
asv continuous -b LogisticRegression upstream/main HEAD
The command uses conda by default for creating the benchmark environments. If
you want to use virtualenv instead, use the `-E` flag:
.. prompt:: bash
asv continuous -E virtualenv -b LogisticRegression upstream/main HEAD
You can also specify a whole module to benchmark:
.. prompt:: bash
asv continuous -b linear_model upstream/main HEAD
You can replace `HEAD` by any local branch. By default it will only report the
benchmarks that have changed by at least 10%. You can control this ratio with
the `-f` flag.
To run the full benchmark suite, simply remove the `-b` flag:
.. prompt:: bash
asv continuous upstream/main HEAD
However this can take up to two hours. The `-b` flag also accepts a regular
expression for a more complex subset of benchmarks to run.
To run the benchmarks without comparing to another branch, use the `run`
command:
.. prompt:: bash
asv run -b linear_model HEAD^!
You can also run the benchmark suite using the version of scikit-learn already
installed in your current Python environment:
.. prompt:: bash
asv run --python=same
This is particularly useful when you have installed scikit-learn in editable
mode, as it avoids creating a new environment each time you run the benchmarks.
By default
the results are not saved when using an existing installation. To save the
results you must specify a commit hash:
.. prompt:: bash
asv run --python=same --set-commit-hash=<commit hash>
Benchmarks are saved and organized by machine, environment and commit. To see
the list of all saved benchmarks:
.. prompt:: bash
asv show
and to see the report of a specific run:
.. prompt:: bash
asv show <commit hash>
When running benchmarks for a pull request you are working on, please report
the results on GitHub.
The benchmark suite supports additional configurable options which can be set
in the `benchmarks/config.json` configuration file. For example, the benchmarks
can run for a provided list of values for the `n_jobs` parameter.
More information on how to write a benchmark and how to use asv can be found in
the `asv documentation <https://asv.readthedocs.io/en/latest/index.html>`_.
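For orientation, the skeleton of an asv benchmark looks roughly like this (the class and the timed operation are toy examples, not taken from the scikit-learn suite; asv discovers ``time_*`` methods and times them automatically):

```python
import numpy as np


class ToyBenchmark:
    # asv calls `setup` before timing each `time_*` method,
    # so data generation is excluded from the measured time
    def setup(self):
        rng = np.random.RandomState(0)
        self.X = rng.rand(1000, 20)

    # methods prefixed with `time_` are discovered and timed by asv
    def time_column_means(self):
        self.means_ = self.X.mean(axis=0)
```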
.. _issue_tracker_tags:
Issue Tracker Tags
==================
All issues and pull requests on the
`GitHub issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_
should have (at least) one of the following tags:
:Bug:
Something is happening that clearly shouldn't happen.
Wrong results as well as unexpected errors from estimators go here.
:Enhancement:
Improving performance, usability, consistency.
:Documentation:
Missing, incorrect or sub-standard documentations and examples.
:New Feature:
Feature requests and pull requests implementing a new feature.
There are four other tags to help new contributors:
:Good first issue:
This issue is ideal for a first contribution to scikit-learn. Ask for help
if the formulation is unclear. If you have already contributed to
scikit-learn, look at Easy issues instead.
:Easy:
This issue can be tackled without much prior experience.
:Moderate:
Might need some knowledge of machine learning or the package,
but is still approachable for someone new to the project.
:Help wanted:
This tag marks an issue which currently lacks a contributor or a
PR that needs another contributor to take over the work. These
issues can range in difficulty, and may not be approachable
for new contributors. Note that not all issues which need
contributors will have this tag.
.. _backwards-compatibility:
Maintaining backwards compatibility
===================================
.. _contributing_deprecation:
Deprecation
-----------
If any publicly accessible class, function, method, attribute or parameter is renamed,
we still support the old one for two releases and issue a deprecation warning when it is
called, passed, or accessed.
.. rubric:: Deprecating a class or a function
Suppose the function ``zero_one`` is renamed to ``zero_one_loss``, we add the decorator
:class:`utils.deprecated` to ``zero_one`` and call ``zero_one_loss`` from that
function::
from ..utils import deprecated
def zero_one_loss(y_true, y_pred, normalize=True):
# actual implementation
pass
@deprecated(
"Function `zero_one` was renamed to `zero_one_loss` in 0.13 and will be "
"removed in 0.15. Default behavior is changed from `normalize=False` to "
"`normalize=True`"
)
def zero_one(y_true, y_pred, normalize=False):
return zero_one_loss(y_true, y_pred, normalize)
One also needs to move ``zero_one`` from ``API_REFERENCE`` to
``DEPRECATED_API_REFERENCE`` and add ``zero_one_loss`` to ``API_REFERENCE`` in the
``doc/api_reference.py`` file to reflect the changes in :ref:`api_ref`.
.. rubric:: Deprecating an attribute or a method
If an attribute or a method is to be deprecated, use the decorator
:class:`~utils.deprecated` on the property. Please note that the
:class:`~utils.deprecated` decorator should be placed before the ``property`` decorator
if there is one, so that the docstrings can be rendered properly. For instance, renaming
an attribute ``labels_`` to ``classes_`` can be done as::
@deprecated(
"Attribute `labels_` was deprecated in 0.13 and will be removed in 0.15. Use "
"`classes_` instead"
)
@property
def labels_(self):
return self.classes_
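Outside of scikit-learn, the same effect can be sketched with a plain property that warns on access (a hand-rolled stand-in for the decorator, for illustration only):

```python
import warnings


class ExampleEstimator:
    def __init__(self):
        self.classes_ = [0, 1]

    @property
    def labels_(self):
        # warn on every access to the deprecated attribute,
        # then delegate to the new attribute
        warnings.warn(
            "Attribute `labels_` was deprecated in 0.13 and will be removed "
            "in 0.15. Use `classes_` instead",
            FutureWarning,
        )
        return self.classes_
```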
.. rubric:: Deprecating a parameter
If a parameter has to be deprecated, a ``FutureWarning`` must be raised
manually. In the following example, ``k`` is deprecated and renamed to
``n_clusters``::
import warnings
def example_function(n_clusters=8, k="deprecated"):
if k != "deprecated":
warnings.warn(
"`k` was renamed to `n_clusters` in 0.13 and will be removed in 0.15",
FutureWarning,
)
n_clusters = k
When the change is in a class, we validate and raise the warning in ``fit``::
import warnings
class ExampleEstimator(BaseEstimator):
def __init__(self, n_clusters=8, k='deprecated'):
self.n_clusters = n_clusters
self.k = k
def fit(self, X, y):
if self.k != "deprecated":
warnings.warn(
"`k` was renamed to `n_clusters` in 0.13 and will be removed in 0.15.",
FutureWarning,
)
self._n_clusters = self.k
else:
self._n_clusters = self.n_clusters
As in these examples, the warning message should always give both the
version in which the deprecation happened and the version in which the
old behavior will be removed. If the deprecation happened in version
0.x-dev, the message should say deprecation occurred in version 0.x and
the removal will be in 0.(x+2), so that users will have enough time to
adapt their code to the new behaviour. For example, if the deprecation happened
in version 0.18-dev, the message should say it happened in version 0.18
and the old behavior will be removed in version 0.20.
The warning message should also include a brief explanation of the change and point
users to an alternative.
In addition, a deprecation note should be added in the docstring, recalling the
same information as the deprecation warning as explained above. Use the
``.. deprecated::`` directive:
.. code-block:: rst
.. deprecated:: 0.13
``k`` was renamed to ``n_clusters`` in version 0.13 and will be removed
in 0.15.
What's more, a deprecation requires a test which ensures that the warning is
raised in relevant cases but not in other cases. The warning should be caught
in all other tests (using e.g., ``@pytest.mark.filterwarnings``),
and there should be no warning in the examples.
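As a sketch of such a test, assuming the ``example_function`` from above (re-defined here with a return value so the snippet is self-contained):

```python
import warnings

import pytest


def example_function(n_clusters=8, k="deprecated"):
    # same deprecation pattern as above, returning the resolved value
    if k != "deprecated":
        warnings.warn(
            "`k` was renamed to `n_clusters` in 0.13 and will be removed in 0.15",
            FutureWarning,
        )
        n_clusters = k
    return n_clusters


def test_k_deprecated():
    # the warning is raised when the deprecated parameter is passed ...
    with pytest.warns(FutureWarning, match="renamed to"):
        assert example_function(k=3) == 3


def test_no_warning_with_new_parameter():
    # ... but not when only the new parameter is used
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        assert example_function(n_clusters=5) == 5
```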
Change the default value of a parameter
---------------------------------------
If the default value of a parameter needs to be changed, please replace the
default value with a specific value (e.g., ``"warn"``) and raise
``FutureWarning`` when users are using the default value. The following
example assumes that the current version is 0.20 and that we change the
default value of ``n_clusters`` from 5 (old default for 0.20) to 10
(new default for 0.22)::
import warnings
def example_function(n_clusters="warn"):
if n_clusters == "warn":
warnings.warn(
"The default value of `n_clusters` will change from 5 to 10 in 0.22.",
FutureWarning,
)
n_clusters = 5
When the change is in a class, we validate and raise the warning in ``fit``::
import warnings
class ExampleEstimator:
def __init__(self, n_clusters="warn"):
self.n_clusters = n_clusters
def fit(self, X, y):
if self.n_clusters == "warn":
warnings.warn(
"The default value of `n_clusters` will change from 5 to 10 in 0.22.",
FutureWarning,
)
self._n_clusters = 5
Similar to deprecations, the warning message should always give both the
version in which the change happened and the version in which the old behavior
will be removed.
The parameter description in the docstring needs to be updated accordingly by adding
a ``versionchanged`` directive with the old and new default value, pointing to the
version when the change will be effective:
.. code-block:: rst
.. versionchanged:: 0.22
The default value for `n_clusters` will change from 5 to 10 in version 0.22.
Finally, we need a test which ensures that the warning is raised in relevant cases but
not in other cases. The warning should be caught in all other tests
(using e.g., ``@pytest.mark.filterwarnings``), and there should be no warning
in the examples.
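A minimal sketch of such a test for the ``example_function`` above, using only the standard library (the function is re-defined here with a return value so the snippet is self-contained):

```python
import warnings


def example_function(n_clusters="warn"):
    # same default-change pattern as above, returning the resolved value
    if n_clusters == "warn":
        warnings.warn(
            "The default value of `n_clusters` will change from 5 to 10 in 0.22.",
            FutureWarning,
        )
        n_clusters = 5
    return n_clusters


# calling with the sentinel default raises the warning ...
with warnings.catch_warnings(record=True) as records:
    warnings.simplefilter("always")
    assert example_function() == 5
assert any(issubclass(r.category, FutureWarning) for r in records)

# ... while passing a value explicitly stays silent
with warnings.catch_warnings(record=True) as records:
    warnings.simplefilter("always")
    assert example_function(n_clusters=10) == 10
assert not records
```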
.. _code_review:
Code Review Guidelines
======================
Reviewing code contributed to the project as PRs is a crucial component of
scikit-learn development. We encourage anyone to start reviewing code of other
developers. The code review process is often highly educational for everybody
involved. This is particularly appropriate if it is a feature you would like to
use, and so can respond critically about whether the PR meets your needs. While
each pull request needs to be signed off by two core developers, you can speed
up this process by providing your feedback.
.. note::
The difference between an objective improvement and a subjective nit isn't
always clear. Reviewers should recall that code review is primarily about
reducing risk in the project. When reviewing code, one should aim at
preventing situations which may require a bug fix, a deprecation, or a
retraction. Regarding docs: typos, grammar issues and disambiguations are
better addressed immediately.
.. dropdown:: Important aspects to be covered in any code review
Here are a few important aspects that need to be covered in any code review,
from high-level questions to a more detailed check-list.
- Do we want this in the library? Is it likely to be used? Do you, as
a scikit-learn user, like the change and intend to use it? Is it in
the scope of scikit-learn? Will the cost of maintaining a new
feature be worth its benefits?
- Is the code consistent with the API of scikit-learn? Are public
functions/classes/parameters well named and intuitively designed?
- Are all public functions/classes and their parameters, return types, and
stored attributes named according to scikit-learn conventions and documented clearly?
- Is any new functionality described in the user-guide and illustrated with examples?
- Is every public function/class tested? Are a reasonable set of
parameters, their values, value types, and combinations tested? Do
the tests validate that the code is correct, i.e. doing what the
documentation says it does? If the change is a bug-fix, is a
non-regression test included? Look at `this
<https://jeffknupp.com/blog/2013/12/09/improve-your-python-understanding-unit-testing>`__
to get started with testing in Python.
- Do the tests pass in the continuous integration build? If
appropriate, help the contributor understand why tests failed.
- Do the tests cover every line of code (see the coverage report in the build
log)? If not, are the lines missing coverage good exceptions?
- Is the code easy to read and low on redundancy? Should variable names be
improved for clarity or consistency? Should comments be added? Should comments
be removed as unhelpful or extraneous?
- Could the code easily be rewritten to run much more efficiently for
relevant settings?
- Is the code backwards compatible with previous versions? (or is a
deprecation cycle necessary?)
- Will the new code add any dependencies on other libraries? (this is
unlikely to be accepted)
- Does the documentation render properly (see the
:ref:`contribute_documentation` section for more details), and are the plots
instructive?
:ref:`saved_replies` includes some frequent comments that reviewers may make.
.. _communication:
.. dropdown:: Communication Guidelines
Reviewing open pull requests (PRs) helps move the project forward. It is a
great way to get familiar with the codebase and should motivate the
contributor to stay involved in the project. [1]_
- Every PR, good or bad, is an act of generosity. Opening with a positive
comment will help the author feel rewarded, and your subsequent remarks may
be heard more clearly. You may feel good too.
- Begin if possible with the large issues, so the author knows they've been
understood. Resist the temptation to immediately go line by line, or to open
with small pervasive issues.
- Do not let perfect be the enemy of the good. If you find yourself making
many small suggestions that fall outside the scope of :ref:`code_review`, consider
the following approaches:
- refrain from submitting these;
- prefix them as "Nit" so that the contributor knows it's OK not to address;
- follow them up in a subsequent PR; out of courtesy, you may want to let the
original contributor know.
- Do not rush, take the time to make your comments clear and justify your
suggestions.
- You are the face of the project. Bad days happen to everyone; when they do,
you deserve a break: take your time and stay offline.
.. [1] Adapted from the numpy `communication guidelines
<https://numpy.org/devdocs/dev/reviewer_guidelines.html#communication-guidelines>`_.
Reading the existing code base
==============================
Reading and digesting an existing code base is always a difficult exercise
that takes time and experience to master. Even though we try to write simple
code in general, understanding the code can seem overwhelming at first,
given the sheer size of the project. Here is a list of tips that may help
make this task easier and faster (in no particular order).
- Get acquainted with the :ref:`api_overview`: understand what :term:`fit`,
:term:`predict`, :term:`transform`, etc. are used for.
- Before diving into reading the code of a function / class, go through the
docstrings first and try to get an idea of what each parameter / attribute
is doing. It may also help to stop a minute and think *how would I do this
myself if I had to?*
- The trickiest thing is often to identify which portions of the code are
relevant, and which are not. In scikit-learn **a lot** of input checking
is performed, especially at the beginning of the :term:`fit` methods.
Sometimes, only a very small portion of the code is doing the actual job.
For example, looking at the :meth:`~linear_model.LinearRegression.fit` method of
:class:`~linear_model.LinearRegression`, what you're looking for
might just be the call to :func:`scipy.linalg.lstsq`, but it is buried in
multiple lines of input checking and the handling of different kinds of
parameters.
- Due to the use of `Inheritance
<https://en.wikipedia.org/wiki/Inheritance_(object-oriented_programming)>`_,
some methods may be implemented in parent classes. All estimators inherit
at least from :class:`~base.BaseEstimator`, and
from a ``Mixin`` class (e.g. :class:`~base.ClassifierMixin`) that enables default
behaviour depending on the nature of the estimator (classifier, regressor,
transformer, etc.).
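To make this mechanism concrete, here is a heavily simplified, self-contained
sketch of the pattern. The class names mirror ``sklearn.base``, but the bodies
are invented for illustration and are not the real implementations:

```python
# Simplified sketch of scikit-learn's inheritance pattern (illustrative only).
class BaseEstimator:
    def get_params(self):
        # The real version introspects the __init__ signature; this toy
        # version just returns the instance attributes.
        return vars(self)


class ClassifierMixin:
    # Mixins add default behaviour shared by a whole family of estimators,
    # e.g. a default accuracy-based score method for all classifiers.
    def score(self, X, y):
        predictions = self.predict(X)
        return sum(p == t for p, t in zip(predictions, y)) / len(y)


class MajorityClassifier(ClassifierMixin, BaseEstimator):
    """Toy classifier that always predicts the most frequent training label."""

    def fit(self, X, y):
        self.majority_ = max(set(y), key=list(y).count)
        return self

    def predict(self, X):
        return [self.majority_ for _ in X]


clf = MajorityClassifier().fit([[0], [1], [2]], [1, 1, 0])
print(clf.score([[0], [1], [2]], [1, 1, 0]))  # two of three labels correct
```

Note how ``score`` is never defined on ``MajorityClassifier`` itself: it is
inherited from the mixin, which is exactly why a method you are looking for
may not appear in the class you are reading.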
- Sometimes, reading the tests for a given function will give you an idea of
what its intended purpose is. You can use ``git grep`` (see below) to find
all the tests written for a function. Most tests for a specific
function/class are placed under the ``tests/`` folder of the module.
- You'll often see code looking like this:
``out = Parallel(...)(delayed(some_function)(param) for param in
some_iterable)``. This runs ``some_function`` in parallel using `Joblib
<https://joblib.readthedocs.io/>`_. ``out`` is then an iterable containing
the values returned by ``some_function`` for each call.
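To demystify the pattern, here is a toy, purely sequential re-implementation.
The names ``Parallel`` and ``delayed`` are borrowed from joblib, but the bodies
below are simplified stand-ins; real joblib dispatches the captured calls to
worker threads or processes and handles batching and backends:

```python
# Toy, sequential stand-ins for joblib's Parallel / delayed (illustration only).
def delayed(function):
    # delayed(f)(x) does not call f; it captures the call for later execution.
    def wrapper(*args, **kwargs):
        return (function, args, kwargs)
    return wrapper


class Parallel:
    def __call__(self, iterable):
        # Consume the generator of captured calls and execute each one.
        return [func(*args, **kwargs) for func, args, kwargs in iterable]


def some_function(param):
    return param ** 2


out = Parallel()(delayed(some_function)(param) for param in range(5))
print(out)  # [0, 1, 4, 9, 16]
```

Seen this way, the idiom is just a deferred ``map``: ``delayed`` packages up
each call, and ``Parallel`` decides how and where to run them.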
- We use `Cython <https://cython.org/>`_ to write fast code. Cython code is
located in ``.pyx`` and ``.pxd`` files. Cython code has a more C-like flavor:
we use pointers, perform manual memory allocation, etc. Having some minimal
experience in C / C++ is pretty much mandatory here. For more information see
:ref:`cython`.
- Master your tools.
- With such a big project, being efficient with your favorite editor or
IDE goes a long way towards digesting the code base. Being able to quickly
jump (or *peek*) to a function/class/attribute definition helps a lot.
So does being able to quickly see where a given name is used in a file.
- `Git <https://git-scm.com/book/en>`_ also has some built-in killer
features. It is often useful to understand how a file changed over time,
using e.g. ``git blame`` (`manual
<https://git-scm.com/docs/git-blame>`_). This can also be done directly
on GitHub. ``git grep`` (`examples
<https://git-scm.com/docs/git-grep#_examples>`_) is also extremely
useful to see every occurrence of a pattern (e.g. a function call or a
variable) in the code base.
- Configure ``git blame`` to ignore the commit that migrated the code style to
``black``.
.. prompt:: bash
git config blame.ignoreRevsFile .git-blame-ignore-revs
Find out more information in black's
`documentation for avoiding ruining git blame <https://black.readthedocs.io/en/stable/guides/introducing_black_to_your_project.html#avoiding-ruining-git-blame>`_. | scikit-learn | contributing Contributing currentmodule sklearn This project is a community effort and everyone is welcome to contribute It is hosted on https github com scikit learn scikit learn The decision making process and governance structure of scikit learn is laid out in ref governance Scikit learn is somewhat ref selective selectiveness when it comes to adding new algorithms and the best way to contribute and to help the project is to start working on known issues See ref new contributors to get started topic Our community our values We are a community based on openness and friendly didactic discussions We aspire to treat everybody equally and value their contributions We are particularly seeking people from underrepresented backgrounds in Open Source Software and scikit learn in particular to participate and contribute their expertise and experience Decisions are made based on technical merit and consensus Code is not the only way to help the project Reviewing pull requests answering questions to help others on mailing lists or issues organizing and teaching tutorials working on the website improving the documentation are all priceless contributions We abide by the principles of openness respect and consideration of others of the Python Software Foundation https www python org psf codeofconduct In case you experience issues using this package do not hesitate to submit a ticket to the GitHub issue tracker https github com scikit learn scikit learn issues You are also welcome to post feature requests or pull requests Ways to contribute There are many ways to contribute to scikit learn with the most common ones being contribution of code or documentation to the project Improving the documentation is no less important than improving the library itself If you find a typo in the documentation or have 
made improvements do not hesitate to create a GitHub issue or preferably submit a GitHub pull request Full documentation can be found under the doc directory But there are many other ways to help In particular helping to ref improve triage and investigate issues bug triaging and ref reviewing other developers pull requests code review are very valuable contributions that decrease the burden on the project maintainers Another way to contribute is to report issues you re facing and give a thumbs up on issues that others reported and that are relevant to you It also helps us if you spread the word reference the project from your blog and articles link to it from your website or simply star to say I use it raw html p object data https img shields io github stars scikit learn scikit learn style for the badge logo github type image svg xml object p In case a contribution issue involves changes to the API principles or changes to dependencies or supported versions it must be backed by a ref slep where a SLEP must be submitted as a pull request to enhancement proposals https scikit learn enhancement proposals readthedocs io using the SLEP template https scikit learn enhancement proposals readthedocs io en latest slep template html and follows the decision making process outlined in ref governance dropdown Contributing to related projects Scikit learn thrives in an ecosystem of several related projects which also may have relevant issues to work on including smaller projects such as scikit learn contrib https github com search q org 3Ascikit learn contrib is 3Aissue is 3Aopen sort 3Aupdated desc type Issues joblib https github com joblib joblib issues sphinx gallery https github com sphinx gallery sphinx gallery issues numpydoc https github com numpy numpydoc issues liac arff https github com renatopp liac arff issues and larger projects numpy https github com numpy numpy issues scipy https github com scipy scipy issues matplotlib https github com matplotlib matplotlib 
issues and so on Look for issues marked help wanted or similar Helping these projects may help scikit learn too See also ref related projects Automated Contributions Policy Please refrain from submitting issues or pull requests generated by fully automated tools Maintainers reserve the right at their sole discretion to close such submissions and to block any account responsible for them Ideally contributions should follow from a human to human discussion in the form of an issue Submitting a bug report or a feature request We use GitHub issues to track all bugs and feature requests feel free to open an issue if you have found a bug or wish to see a feature implemented In case you experience issues using this package do not hesitate to submit a ticket to the Bug Tracker https github com scikit learn scikit learn issues You are also welcome to post feature requests or pull requests It is recommended to check that your issue complies with the following rules before submitting Verify that your issue is not being currently addressed by other issues https github com scikit learn scikit learn issues q or pull requests https github com scikit learn scikit learn pulls q If you are submitting an algorithm or feature request please verify that the algorithm fulfills our new algorithm requirements https scikit learn org stable faq html what are the inclusion criteria for new algorithms If you are submitting a bug report we strongly encourage you to follow the guidelines in ref filing bugs filing bugs How to make a good bug report When you submit an issue to GitHub https github com scikit learn scikit learn issues please do your best to follow these guidelines This will make it a lot easier to provide you with good feedback The ideal bug report contains a ref short reproducible code snippet minimal reproducer this way anyone can try to reproduce the bug easily If your snippet is longer than around 50 lines please link to a Gist https gist github com or a GitHub repo If not 
feasible to include a reproducible snippet please be specific about what estimators and or functions are involved and the shape of the data If an exception is raised please provide the full traceback Please include your operating system type and version number as well as your Python scikit learn numpy and scipy versions This information can be found by running prompt bash python c import sklearn sklearn show versions Please ensure all code snippets and error messages are formatted in appropriate code blocks See Creating and highlighting code blocks https help github com articles creating and highlighting code blocks for more details If you want to help curate issues read about ref bug triaging Contributing code note To avoid duplicating work it is highly advised that you search through the issue tracker https github com scikit learn scikit learn issues and the PR list https github com scikit learn scikit learn pulls If in doubt about duplicated work or if you want to work on a non trivial feature it s recommended to first open an issue in the issue tracker https github com scikit learn scikit learn issues to get some feedbacks from core developers One easy way to find an issue to work on is by applying the help wanted label in your search This lists all the issues that have been unclaimed so far In order to claim an issue for yourself please comment exactly take on it for the CI to automatically assign the issue to you To maintain the quality of the codebase and ease the review process any contribution must conform to the project s ref coding guidelines coding guidelines in particular Don t modify unrelated lines to keep the PR focused on the scope stated in its description or issue Only write inline comments that add value and avoid stating the obvious explain the why rather than the what Most importantly Do not contribute code that you don t understand Video resources These videos are step by step introductions on how to contribute to scikit learn and are a great 
companion to the following text guidelines Please make sure to still check our guidelines below since they describe our latest up to date workflow Crash Course in Contributing to Scikit Learn Open Source Projects Video https youtu be 5OL8XoMMOfA Transcript https github com data umbrella event transcripts blob main 2020 05 andreas mueller contributing md Example of Submitting a Pull Request to scikit learn Video https youtu be PU1WyDPGePI Transcript https github com data umbrella event transcripts blob main 2020 06 reshama shaikh sklearn pr md Sprint specific instructions and practical tips Video https youtu be p 2Uw2BxdhA Transcript https github com data umbrella data umbrella scikit learn sprint blob master 3 transcript ACM video vol2 md 3 Components of Reviewing a Pull Request Video https youtu be dyxS9KKCNzA Transcript https github com data umbrella event transcripts blob main 2021 27 thomas pr md note In January 2021 the default branch name changed from master to main for the scikit learn GitHub repository to use more inclusive terms These videos were created prior to the renaming of the branch For contributors who are viewing these videos to set up their working environment and submitting a PR master should be replaced to main How to contribute The preferred way to contribute to scikit learn is to fork the main repository https github com scikit learn scikit learn on GitHub then submit a pull request PR In the first few steps we explain how to locally install scikit learn and how to set up your git repository 1 Create an account https github com join on GitHub if you do not already have one 2 Fork the project repository https github com scikit learn scikit learn click on the Fork button near the top of the page This creates a copy of the code under your account on the GitHub user account For more details on how to fork a repository see this guide https help github com articles fork a repo 3 Clone your fork of the scikit learn repo from your GitHub account to 
your local disk prompt bash git clone git github com YourLogin scikit learn git add depth 1 if your connection is slow cd scikit learn 4 Follow steps 2 6 in ref install bleeding edge to build scikit learn in development mode and return to this document 5 Install the development dependencies prompt bash pip install pytest pytest cov ruff mypy numpydoc black 24 3 0 upstream 6 Add the upstream remote This saves a reference to the main scikit learn repository which you can use to keep your repository synchronized with the latest changes prompt bash git remote add upstream git github com scikit learn scikit learn git 7 Check that the upstream and origin remote aliases are configured correctly by running git remote v which should display code block text origin git github com YourLogin scikit learn git fetch origin git github com YourLogin scikit learn git push upstream git github com scikit learn scikit learn git fetch upstream git github com scikit learn scikit learn git push You should now have a working installation of scikit learn and your git repository properly configured It could be useful to run some test to verify your installation Please refer to ref pytest tips for examples The next steps now describe the process of modifying code and submitting a PR 8 Synchronize your main branch with the upstream main branch more details on GitHub Docs https docs github com en github collaborating with issues and pull requests syncing a fork prompt bash git checkout main git fetch upstream git merge upstream main 9 Create a feature branch to hold your development changes prompt bash git checkout b my feature and start making changes Always use a feature branch It s good practice to never work on the main branch 10 Optional Install pre commit https pre commit com install to run code style checks before each commit prompt bash pip install pre commit pre commit install pre commit checks can be disabled for a particular commit with git commit n 11 Develop the feature on your 
feature branch on your computer using Git to do the version control When you re done editing add changed files using git add and then git commit prompt bash git add modified files git commit to record your changes in Git then push the changes to your GitHub account with prompt bash git push u origin my feature 12 Follow these https help github com articles creating a pull request from a fork instructions to create a pull request from your fork This will send an notification to potential reviewers You may want to consider sending an message to the discord https discord com invite h9qyrK8Jc8 in the development channel for more visibility if your pull request does not receive attention after a couple of days instant replies are not guaranteed though It is often helpful to keep your local feature branch synchronized with the latest changes of the main scikit learn repository prompt bash git fetch upstream git merge upstream main Subsequently you might need to solve the conflicts You can refer to the Git documentation related to resolving merge conflict using the command line https help github com articles resolving a merge conflict using the command line topic Learning Git The Git documentation https git scm com doc and http try github io are excellent resources to get started with git and understanding all of the commands shown here pr checklist Pull request checklist Before a PR can be merged it needs to be approved by two core developers An incomplete contribution where you expect to do more work before receiving a full review should be marked as a draft pull request https docs github com en pull requests collaborating with pull requests proposing changes to your work with pull requests changing the stage of a pull request and changed to ready for review when it matures Draft PRs may be useful to indicate you are working on something to avoid duplicated work request broad review of functionality or API or seek collaborators Draft PRs often benefit from the inclusion 
of a task list https github com blog 1375 task lists in gfm issues pulls comments in the PR description In order to ease the reviewing process we recommend that your contribution complies with the following rules before marking a PR as ready for review The bolded ones are especially important 1 Give your pull request a helpful title that summarizes what your contribution does This title will often become the commit message once merged so it should summarize your contribution for posterity In some cases Fix ISSUE TITLE is enough Fix ISSUE NUMBER is never a good title 2 Make sure your code passes the tests The whole test suite can be run with pytest but it is usually not recommended since it takes a long time It is often enough to only run the test related to your changes for example if you changed something in sklearn linear model logistic py running the following commands will usually be enough pytest sklearn linear model logistic py to make sure the doctest examples are correct pytest sklearn linear model tests test logistic py to run the tests specific to the file pytest sklearn linear model to test the whole mod sklearn linear model module pytest doc modules linear model rst to make sure the user guide examples are correct pytest sklearn tests test common py k LogisticRegression to run all our estimator checks specifically for LogisticRegression if that s the estimator you changed There may be other failing tests but they will be caught by the CI so you don t need to run the whole test suite locally For guidelines on how to use pytest efficiently see the ref pytest tips 3 Make sure your code is properly commented and documented and make sure the documentation renders properly To build the documentation please refer to our ref contribute documentation guidelines The CI will also build the docs please refer to ref generated doc CI 4 Tests are necessary for enhancements to be accepted Bug fixes or new features should be provided with non regression tests https en 
wikipedia org wiki Non regression testing These tests verify the correct behavior of the fix or feature In this manner further modifications on the code base are granted to be consistent with the desired behavior In the case of bug fixes at the time of the PR the non regression tests should fail for the code base in the main branch and pass for the PR code 5 If your PR is likely to affect users you need to add a changelog entry describing your PR changes see the following README https github com scikit learn scikit learn blob main doc whats new upcoming changes README md for more details 6 Follow the ref coding guidelines 7 When applicable use the validation tools and scripts in the mod sklearn utils module A list of utility routines available for developers can be found in the ref developers utils page 8 Often pull requests resolve one or more other issues or pull requests If merging your pull request means that some other issues PRs should be closed you should use keywords to create link to them https github com blog 1506 closing issues via pull requests e g Fixes 1234 multiple issues PRs are allowed as long as each one is preceded by a keyword Upon merging those issues PRs will automatically be closed by GitHub If your pull request is simply related to some other issues PRs or it only partially resolves the target issue create a link to them without using the keywords e g Towards 1234 9 PRs should often substantiate the change through benchmarks of performance and efficiency see ref monitoring performances or through examples of usage Examples also illustrate the features and intricacies of the library to users Have a look at other examples in the examples https github com scikit learn scikit learn tree main examples directory for reference Examples should demonstrate why the new functionality is useful in practice and if possible compare it to other methods available in scikit learn 10 New features have some maintenance overhead We expect PR authors to take 
part in the maintenance for the code they submit at least initially New features need to be illustrated with narrative documentation in the user guide with small code snippets If relevant please also add references in the literature with PDF links when possible 11 The user guide should also include expected time and space complexity of the algorithm and scalability e g this algorithm can scale to a large number of samples 100000 but does not scale in dimensionality n features is expected to be lower than 100 You can also check our ref code review to get an idea of what reviewers will expect You can check for common programming errors with the following tools Code with a good unit test coverage at least 80 better 100 check with prompt bash pip install pytest pytest cov pytest cov sklearn path to tests See also ref testing coverage Run static analysis with mypy prompt bash mypy sklearn This must not produce new errors in your pull request Using type ignore annotation can be a workaround for a few cases that are not supported by mypy in particular when importing C or Cython modules on properties with decorators Bonus points for contributions that include a performance analysis with a benchmark script and profiling output see ref monitoring performances Also check out the ref performance howto guide for more details on profiling and Cython optimizations note The current state of the scikit learn code base is not compliant with all of those guidelines but we expect that enforcing those constraints on all new contributions will get the overall code base quality in the right direction seealso For two very well documented and more detailed guides on development workflow please pay a visit to the Scipy Development Workflow http scipy github io devdocs dev dev quickstart html and the Astropy Workflow for Developers https astropy readthedocs io en latest development workflow development workflow html sections Continuous Integration CI Azure pipelines are used for testing 
scikit learn on Linux Mac and Windows with different dependencies and settings CircleCI is used to build the docs for viewing Github Actions are used for various tasks including building wheels and source distributions Cirrus CI is used to build on ARM commit markers Commit message markers Please note that if one of the following markers appear in the latest commit message the following actions are taken Commit Message Marker Action Taken by CI ci skip CI is skipped completely cd build CD is run wheels and source distribution are built cd build gh CD is run only for GitHub Actions cd build cirrus CD is run only for Cirrus CI lint skip Azure pipeline skips linting scipy dev Build test with our dependencies numpy scipy etc development builds free threaded Build test with CPython 3 13 free threaded pyodide Build test with Pyodide azure parallel Run Azure CI jobs in parallel cirrus arm Run Cirrus CI ARM test float32 Run float32 tests by setting SKLEARN RUN FLOAT32 TESTS 1 See ref environment variable for more details doc skip Docs are not built doc quick Docs built but excludes example gallery plots doc build Docs built including example gallery plots very long Note that by default the documentation is built but only the examples that are directly modified by the pull request are executed build lock files Build lock files CIs use lock files to build environments with specific versions of dependencies When a PR needs to modify the dependencies or their versions the lock files should be updated accordingly This can be done by adding the following comment directly in the GitHub Pull Request PR discussion code block text scikit learn bot update lock files A bot will push a commit to your PR branch with the updated lock files in a few minutes Make sure to tick the Allow edits from maintainers checkbox located at the bottom of the right sidebar of the PR You can also specify the options select build skip build and select tag as in a command line Use help on the script build 
tools update environments and lock files py for more information For example code block text scikit learn bot update lock files select tag main ci skip build doc The bot will automatically add ref commit message markers commit markers to the commit for certain tags If you want to add more markers manually you can do so using the commit marker option For example the following comment will trigger the bot to update documentation related lock files and add the doc build marker to the commit code block text scikit learn bot update lock files select build doc commit marker doc build Resolve conflicts in lock files Here is a bash snippet that helps resolving conflicts in environment and lock files prompt bash pull latest upstream main git pull upstream main no rebase resolve conflicts keeping the upstream main version for specific files git checkout theirs build tools lock build tools environment yml build tools lock txt build tools requirements txt git add build tools lock build tools environment yml build tools lock txt build tools requirements txt git merge continue This will merge upstream main into our branch automatically prioritising the upstream main for conflicting environment and lock files this is good enough because we will re generate the lock files afterwards Note that this only fixes conflicts in environment and lock files and you might have other conflicts to resolve Finally we have to re generate the environment and lock files for the CIs as described in ref Build lock files build lock files or by running prompt bash python build tools update environments and lock files py stalled pull request Stalled pull requests As contributing a feature can be a lengthy process some pull requests appear inactive but unfinished In such a case taking them over is a great service for the project A good etiquette to take over is Determine if a PR is stalled A pull request may have the label stalled or help wanted if we have already identified it as a candidate for other 
contributors To decide whether an inactive PR is stalled ask the contributor if she he plans to continue working on the PR in the near future Failure to respond within 2 weeks with an activity that moves the PR forward suggests that the PR is stalled and will result in tagging that PR with help wanted Note that if a PR has received earlier comments on the contribution that have had no reply in a month it is safe to assume that the PR is stalled and to shorten the wait time to one day After a sprint follow up for un merged PRs opened during sprint will be communicated to participants at the sprint and those PRs will be tagged sprint PRs tagged with sprint can be reassigned or declared stalled by sprint leaders Taking over a stalled PR To take over a PR it is important to comment on the stalled PR that you are taking over and to link from the new PR to the old one The new PR should be created by pulling from the old one Stalled and Unclaimed Issues Generally speaking issues which are up for grabs will have a help wanted https github com scikit learn scikit learn labels help 20wanted tag However not all issues which need contributors will have this tag as the help wanted tag is not always up to date with the state of the issue Contributors can find issues which are still up for grabs using the following guidelines First to determine if an issue is claimed Check for linked pull requests Check the conversation to see if anyone has said that they re working on creating a pull request If a contributor comments on an issue to say they are working on it a pull request is expected within 2 weeks new contributor or 4 weeks contributor or core dev unless an larger time frame is explicitly given Beyond that time another contributor can take the issue and make a pull request for it We encourage contributors to comment directly on the stalled or unclaimed issue to let community members know that they will be working on it If the issue is linked to a ref stalled pull request 
stalled pull request we recommend that contributors follow the procedure described in the ref stalled pull request section rather than working directly on the issue new contributors Issues for New Contributors New contributors should look for the following tags when looking for issues We strongly recommend that new contributors tackle easy issues first this helps the contributor become familiar with the contribution workflow and for the core devs to become acquainted with the contributor besides which we frequently underestimate how easy an issue is to solve Good first issue tag A great way to start contributing to scikit learn is to pick an item from the list of good first issues https github com scikit learn scikit learn labels good 20first 20issue in the issue tracker Resolving these issues allow you to start contributing to the project without much prior knowledge If you have already contributed to scikit learn you should look at Easy issues instead Easy tag If you have already contributed to scikit learn another great way to contribute to scikit learn is to pick an item from the list of Easy issues https github com scikit learn scikit learn labels Easy in the issue tracker Your assistance in this area will be greatly appreciated by the more experienced developers as it helps free up their time to concentrate on other issues Help wanted tag We often use the help wanted tag to mark issues regardless of difficulty Additionally we use the help wanted tag to mark Pull Requests which have been abandoned by their original contributor and are available for someone to pick up where the original contributor left off The list of issues with the help wanted tag can be found here https github com scikit learn scikit learn labels help 20wanted Note that not all issues which need contributors will have this tag contribute documentation Documentation We are glad to accept any sort of documentation Function method class docstrings Also known as API documentation these describe 
what the object does and details any parameters attributes and methods Docstrings live alongside the code in sklearn https github com scikit learn scikit learn tree main sklearn and are generated generated according to doc api reference py https github com scikit learn scikit learn blob main doc api reference py To add update remove or deprecate a public API that is listed in ref api ref this is the place to look at User guide These provide more detailed information about the algorithms implemented in scikit learn and generally live in the root doc https github com scikit learn scikit learn tree main doc directory and doc modules https github com scikit learn scikit learn tree main doc modules Examples These provide full code examples that may demonstrate the use of scikit learn modules compare different algorithms or discuss their interpretation etc Examples live in examples https github com scikit learn scikit learn tree main examples Other reStructuredText documents These provide various other useful information e g the ref contributing guide and live in doc https github com scikit learn scikit learn tree main doc dropdown Guidelines for writing docstrings When documenting the parameters and attributes here is a list of some well formatted examples code block text n clusters int default 3 The number of clusters detected by the algorithm some param hello goodbye bool or int default True The parameter description goes here which can be either a string literal either hello or goodbye a bool or an int The default value is True array parameter array like sparse matrix of shape n samples n features or n samples This parameter accepts data in either of the mentioned forms with one of the mentioned shapes The default value is np ones shape n samples list param list of int typed ndarray ndarray of shape n samples dtype np int32 sample weight array like of shape n samples default None multioutput array ndarray of shape n samples n classes or list of such arrays In general 
have the following in mind Use Python basic types bool instead of boolean Use parenthesis for defining shapes array like of shape n samples or array like of shape n samples n features For strings with multiple options use brackets input log squared multinomial 1D or 2D data can be a subset of array like ndarray sparse matrix dataframe Note that array like can also be a list while ndarray is explicitly only a numpy ndarray Specify dataframe when frame like features are being used such as the column names When specifying the data type of a list use of as a delimiter list of int When the parameter supports arrays giving details about the shape and or data type and a list of such arrays you can use one of array like of shape n samples or list of such arrays When specifying the dtype of an ndarray use e g dtype np int32 after defining the shape ndarray of shape n samples dtype np int32 You can specify multiple dtype as a set array like of shape n samples dtype np float64 np float32 If one wants to mention arbitrary precision use integral and floating rather than the Python dtype int and float When both int and floating are supported there is no need to specify the dtype When the default is None None only needs to be specified at the end with default None Be sure to include in the docstring what it means for the parameter or attribute to be None Add See Also in docstrings for related classes functions See Also in docstrings should be one line per reference with a colon and an explanation for example code block text See Also SelectKBest Select features based on the k highest scores SelectFpr Select features based on a false positive rate test Add one or two snippets of code in Example section to show how it can be used dropdown Guidelines for writing the user guide and other reStructuredText documents It is important to keep a good compromise between mathematical and algorithmic details and give intuition to the reader on what the algorithm does Begin with a concise hand 
waving explanation of what the algorithm code does on the data Highlight the usefulness of the feature and its recommended application Consider including the algorithm s complexity math O left g left n right right if available as rules of thumb can be very machine dependent Only if those complexities are not available then rules of thumb may be provided instead Incorporate a relevant figure generated from an example to provide intuitions Include one or two short code examples to demonstrate the feature s usage Introduce any necessary mathematical equations followed by references By deferring the mathematical aspects the documentation becomes more accessible to users primarily interested in understanding the feature s practical implications rather than its underlying mechanics When editing reStructuredText rst files try to keep line length under 88 characters when possible exceptions include links and tables In scikit learn reStructuredText files both single and double backticks surrounding text will render as inline literal often used for code e g list This is due to specific configurations we have set Single backticks should be used nowadays Too much information makes it difficult for users to access the content they are interested in Use dropdowns to factorize it by using the following syntax code block rst dropdown Dropdown title Dropdown content The snippet above will result in the following dropdown dropdown Dropdown title Dropdown content Information that can be hidden by default using dropdowns is low hierarchy sections such as References Properties etc see for instance the subsections in ref det curve in depth mathematical details narrative that is use case specific in general narrative that may only interest users that want to go beyond the pragmatics of a given tool Do not use dropdowns for the low level section Examples as it should stay visible to all users Make sure that the Examples section comes right after the main discussion with the least possible 
folded section in between Be aware that dropdowns break cross references If that makes sense hide the reference along with the text mentioning it Else do not use dropdown dropdown Guidelines for writing references When bibliographic references are available with arxiv https arxiv org or Digital Object Identifier https www doi org identification numbers use the sphinx directives arxiv or doi For example see references in ref Spectral Clustering Graphs spectral clustering graph For the References section in docstrings see func sklearn metrics silhouette score as an example To cross reference to other pages in the scikit learn documentation use the reStructuredText cross referencing syntax Section to link to an arbitrary section in the documentation use reference labels see Sphinx docs https www sphinx doc org en master usage restructuredtext roles html ref role For example code block rst my section My section This is the text of the section To refer to itself use ref my section You should not modify existing sphinx reference labels as this would break existing cross references and external links pointing to specific sections in the scikit learn documentation Glossary linking to a term in the ref glossary code block rst term cross validation Function to link to the documentation of a function use the full import path to the function code block rst func sklearn model selection cross val score However if there is a currentmodule directive above you in the document you will only need to use the path to the function succeeding the current module specified For example code block rst currentmodule sklearn model selection func cross val score Class to link to documentation of a class use the full import path to the class unless there is a currentmodule directive in the document above see above code block rst class sklearn preprocessing StandardScaler You can edit the documentation using any text editor and then generate the HTML output by following ref building documentation 
The resulting HTML files will be placed in build html and are viewable in a web browser for instance by opening the local build html index html file or by running a local server prompt bash python m http server d build html building documentation Building the documentation Before submitting a pull request check if your modifications have introduced new sphinx warnings by building the documentation locally and try to fix them First make sure you have ref properly installed install bleeding edge the development version On top of that building the documentation requires installing some additional packages packaging is not needed once setuptools starts shipping packaging 17 0 prompt bash pip install sphinx sphinx gallery numpydoc matplotlib Pillow pandas polars scikit image packaging seaborn sphinx prompt sphinxext opengraph sphinx copybutton plotly pooch pydata sphinx theme sphinxcontrib sass sphinx design sphinx remove toctrees To build the documentation you need to be in the doc folder prompt bash cd doc In the vast majority of cases you only need to generate the web site without the example gallery prompt bash make The documentation will be generated in the build html stable directory and are viewable in a web browser for instance by opening the local build html stable index html file To also generate the example gallery you can use prompt bash make html This will run all the examples which takes a while You can also run only a few examples based on their file names Here is a way to run all examples with filenames containing plot calibration prompt bash EXAMPLES PATTERN plot calibration make html You can use regular expressions for more advanced use cases Set the environment variable NO MATHJAX 1 if you intend to view the documentation in an offline setting To build the PDF manual run prompt bash make latexpdf admonition Sphinx version class warning While we do our best to have the documentation build under as many versions of Sphinx as possible the different 
versions tend to behave slightly differently. To get the best results, you
should use the same version as the one we used on CircleCI. Look at this
`GitHub search <https://github.com/search?q=repo%3Ascikit-learn%2Fscikit-learn+%2F%5C%2Fsphinx%5B0-9%5D%2B%2F+path%3Abuild_tools%2Fcircle%2Fdoc_linux-64_conda.lock&type=code>`_
to know the exact version.

.. _generated_doc_CI:

Generated documentation on GitHub Actions
-----------------------------------------

When you change the documentation in a pull request, GitHub Actions
automatically builds it. To view the documentation generated by GitHub
Actions, simply go to the bottom of your PR page, look for the item "Check the
rendered docs here!" and click on 'details' next to it:

.. image:: images/generated_doc_ci.png
   :align: center

.. _testing_coverage:

Testing and improving test coverage
===================================

High quality `unit testing <https://en.wikipedia.org/wiki/Unit_testing>`_ is a
corner-stone of the scikit-learn development process. For this purpose, we use
the `pytest <https://docs.pytest.org>`_ package. The tests are functions
appropriately named, located in `tests` subdirectories, that check the
validity of the algorithms and the different options of the code.

Running `pytest` in a folder will run all the tests of the corresponding
subpackages. For a more detailed `pytest` workflow, please refer to the
:ref:`pr_checklist`.

We expect code coverage of new features to be at least around 90%.

.. dropdown:: Writing matplotlib-related tests

  Test fixtures ensure that a set of tests will be executing with the
  appropriate initialization and cleanup. The scikit-learn test suite
  implements a ``pyplot`` fixture which can be used with ``matplotlib``.

  The ``pyplot`` fixture should be used when a test function is dealing with
  ``matplotlib``. ``matplotlib`` is a soft dependency and is not required.
  This fixture is in charge of skipping the tests if ``matplotlib`` is not
  installed. In addition, figures created during the tests will be
  automatically closed once the test function has been executed.

  To use this fixture in a test function, one needs to pass it as an
  argument::

      def test_requiring_mpl_fixture(pyplot):
          # you can now safely use matplotlib

.. dropdown:: Workflow to improve test coverage

  To test code coverage, you need to install the `coverage
  <https://pypi.org/project/coverage/>`_ package in addition to `pytest`.

  1. Run `pytest --cov sklearn path/to/tests`. The output lists for each file
     the line numbers that are not tested.

  2. Find a low hanging fruit, looking at which lines are not tested, write or
     adapt a test specifically for these lines.

  3. Loop.

.. _monitoring_performances:

Monitoring performance
======================

*This section is heavily inspired from the* `pandas documentation
<https://pandas.pydata.org/docs/development/contributing_codebase.html#running-the-performance-test-suite>`_.

When proposing changes to the existing code base, it's important to make sure
that they don't introduce performance regressions. Scikit-learn uses `asv
benchmarks <https://github.com/airspeed-velocity/asv>`_ to monitor the
performance of a selection of common estimators and functions. You can view
these benchmarks on the `scikit-learn benchmark page
<https://scikit-learn.org/scikit-learn-benchmarks>`_. The corresponding
benchmark suite can be found in the `asv_benchmarks/` directory.

To use all features of asv, you will need either `conda` or `virtualenv`. For
more details please check the `asv installation webpage
<https://asv.readthedocs.io/en/latest/installing.html>`_.

First of all you need to install the development version of asv:

.. prompt:: bash

  pip install git+https://github.com/airspeed-velocity/asv

and change your directory to `asv_benchmarks/`:

.. prompt:: bash

  cd asv_benchmarks

The benchmark suite is configured to run against your local clone of
scikit-learn. Make sure it is up to date:

.. prompt:: bash

  git fetch upstream

In the benchmark suite, the benchmarks are organized following the same
structure as scikit-learn. For example, you can compare the performance of a
specific estimator between ``upstream/main`` and the branch you are working
on:

.. prompt:: bash

  asv continuous -b LogisticRegression upstream/main HEAD

The command uses conda by default for creating the benchmark
environments If you want to use virtualenv instead use the E flag prompt bash asv continuous E virtualenv b LogisticRegression upstream main HEAD You can also specify a whole module to benchmark prompt bash asv continuous b linear model upstream main HEAD You can replace HEAD by any local branch By default it will only report the benchmarks that have change by at least 10 You can control this ratio with the f flag To run the full benchmark suite simply remove the b flag prompt bash asv continuous upstream main HEAD However this can take up to two hours The b flag also accepts a regular expression for a more complex subset of benchmarks to run To run the benchmarks without comparing to another branch use the run command prompt bash asv run b linear model HEAD You can also run the benchmark suite using the version of scikit learn already installed in your current Python environment prompt bash asv run python same It s particularly useful when you installed scikit learn in editable mode to avoid creating a new environment each time you run the benchmarks By default the results are not saved when using an existing installation To save the results you must specify a commit hash prompt bash asv run python same set commit hash commit hash Benchmarks are saved and organized by machine environment and commit To see the list of all saved benchmarks prompt bash asv show and to see the report of a specific run prompt bash asv show commit hash When running benchmarks for a pull request you re working on please report the results on github The benchmark suite supports additional configurable options which can be set in the benchmarks config json configuration file For example the benchmarks can run for a provided list of values for the n jobs parameter More information on how to write a benchmark and how to use asv can be found in the asv documentation https asv readthedocs io en latest index html issue tracker tags Issue Tracker Tags All issues and pull requests on the GitHub 
`issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_
should have at least one of the following tags:

Bug
  Something is happening that clearly shouldn't happen. Wrong results as well
  as unexpected errors from estimators go here.

Enhancement
  Improving performance, usability, consistency.

Documentation
  Missing, incorrect or sub-standard documentations and examples.

New Feature
  Feature requests and pull requests implementing a new feature.

There are four other tags to help new contributors:

Good first issue
  This issue is ideal for a first contribution to scikit-learn. Ask for help
  if the formulation is unclear. If you have already contributed to
  scikit-learn, look at Easy issues instead.

Easy
  This issue can be tackled without much prior experience.

Moderate
  Might need some knowledge of machine learning or the package, but is still
  approachable for someone new to the project.

Help wanted
  This tag marks an issue which currently lacks a contributor, or a PR that
  needs another contributor to take over the work. These issues can range in
  difficulty, and may not be approachable for new contributors. Note that not
  all issues which need contributors will have this tag.

.. _backwards_compatibility:

Maintaining backwards compatibility
===================================

.. _contributing_deprecation:

Deprecation
-----------

If any publicly accessible class, function, method, attribute or parameter is
renamed, we still support the old one for two releases and issue a deprecation
warning when it is called/passed/accessed.

.. rubric:: Deprecating a class or a function

Suppose the function ``zero_one`` is renamed to ``zero_one_loss``: we add the
decorator :class:`utils.deprecated` to ``zero_one`` and call ``zero_one_loss``
from that function::

    from utils import deprecated

    def zero_one_loss(y_true, y_pred, normalize=True):
        # actual implementation
        pass

    @deprecated(
        "Function `zero_one` was renamed to `zero_one_loss` in 0.13 and will be "
        "removed in 0.15. Default behavior is changed from `normalize=False` to "
        "`normalize=True`"
    )
    def zero_one(y_true, y_pred, normalize=False):
        return zero_one_loss(y_true, y_pred, normalize)

One also needs to move ``zero_one`` from ``API_REFERENCE`` to
``DEPRECATED_API_REFERENCE`` and add ``zero_one_loss`` to ``API_REFERENCE``
in the ``doc/api_reference.py`` file to reflect the changes in
:ref:`api_ref`.

.. rubric:: Deprecating an attribute or a method

If an attribute or a method is to be deprecated, use the decorator
:class:`utils.deprecated` on the property. Please note that the
:class:`utils.deprecated` decorator should be placed before the ``property``
decorator if there is one, so that the docstrings can be rendered properly.
For instance, renaming an attribute ``labels_`` to ``classes_`` can be done
as::

    @deprecated(
        "Attribute `labels_` was deprecated in 0.13 and will be removed in 0.15. "
        "Use `classes_` instead"
    )
    @property
    def labels_(self):
        return self.classes_

.. rubric:: Deprecating a parameter

If a parameter has to be deprecated, a ``FutureWarning`` warning must be
raised manually. In the following example, ``k`` is deprecated and renamed to
``n_clusters``::

    import warnings

    def example_function(n_clusters=8, k="deprecated"):
        if k != "deprecated":
            warnings.warn(
                "`k` was renamed to `n_clusters` in 0.13 and will be removed in 0.15",
                FutureWarning,
            )
            n_clusters = k

When the change is in a class, we validate and raise warning in ``fit``::

    import warnings

    class ExampleEstimator(BaseEstimator):
        def __init__(self, n_clusters=8, k="deprecated"):
            self.n_clusters = n_clusters
            self.k = k

        def fit(self, X, y):
            if self.k != "deprecated":
                warnings.warn(
                    "`k` was renamed to `n_clusters` in 0.13 and will be removed in 0.15",
                    FutureWarning,
                )
                self._n_clusters = self.k
            else:
                self._n_clusters = self.n_clusters

As in these examples, the warning message should always give both the version
in which the deprecation happened and the version in which the old behavior
will be removed. If the deprecation happened in version 0.x-dev, the message
should say deprecation occurred in version 0.x and the removal will be in
0.(x+2), so that users will have enough time to adapt their code to the new
behaviour. For example, if the deprecation happened in version 0.18-dev, the
message should say it happened in version 0.18 and the old behavior will
be removed in version 0.20.

The warning message should also include a brief explanation of the change and
point users to an alternative.

In addition, a deprecation note should be added in the docstring, recalling
the same information as the deprecation warning as explained above. Use the
``deprecated`` directive:

.. code-block:: rst

    .. deprecated:: 0.13
       ``k`` was renamed to ``n_clusters`` in version 0.13 and will be removed
       in 0.15.

What's more, a deprecation requires a test which ensures that the warning is
raised in relevant cases but not in other cases. The warning should be caught
in all other tests (using e.g., ``@pytest.mark.filterwarnings``), and there
should be no warning in the examples.

Change the default value of a parameter
---------------------------------------

If the default value of a parameter needs to be changed, please replace the
default value with a specific value (e.g., ``"warn"``) and raise
``FutureWarning`` when users are using the default value. The following
example assumes that the current version is 0.20 and that we change the
default value of ``n_clusters`` from 5 (old default for 0.20) to 10 (new
default for 0.22)::

    import warnings

    def example_function(n_clusters="warn"):
        if n_clusters == "warn":
            warnings.warn(
                "The default value of `n_clusters` will change from 5 to 10 in 0.22.",
                FutureWarning,
            )
            n_clusters = 5

When the change is in a class, we validate and raise warning in ``fit``::

    import warnings

    class ExampleEstimator:
        def __init__(self, n_clusters="warn"):
            self.n_clusters = n_clusters

        def fit(self, X, y):
            if self.n_clusters == "warn":
                warnings.warn(
                    "The default value of `n_clusters` will change from 5 to 10 in 0.22.",
                    FutureWarning,
                )
                self._n_clusters = 5

Similar to deprecations, the warning message should always give both the
version in which the change happened and the version in which the old
behavior will be removed.

The parameter description in the docstring needs to be updated accordingly by
adding a ``versionchanged`` directive with the old and new default value,
pointing to the version when the change will be effective:

.. code-block:: rst

    .. versionchanged:: 0.22
       The default value for `n_clusters` will
change from 5 to 10 in version 0 22 Finally we need a test which ensures that the warning is raised in relevant cases but not in other cases The warning should be caught in all other tests using e g pytest mark filterwarnings and there should be no warning in the examples code review Code Review Guidelines Reviewing code contributed to the project as PRs is a crucial component of scikit learn development We encourage anyone to start reviewing code of other developers The code review process is often highly educational for everybody involved This is particularly appropriate if it is a feature you would like to use and so can respond critically about whether the PR meets your needs While each pull request needs to be signed off by two core developers you can speed up this process by providing your feedback note The difference between an objective improvement and a subjective nit isn t always clear Reviewers should recall that code review is primarily about reducing risk in the project When reviewing code one should aim at preventing situations which may require a bug fix a deprecation or a retraction Regarding docs typos grammar issues and disambiguations are better addressed immediately dropdown Important aspects to be covered in any code review Here are a few important aspects that need to be covered in any code review from high level questions to a more detailed check list Do we want this in the library Is it likely to be used Do you as a scikit learn user like the change and intend to use it Is it in the scope of scikit learn Will the cost of maintaining a new feature be worth its benefits Is the code consistent with the API of scikit learn Are public functions classes parameters well named and intuitively designed Are all public functions classes and their parameters return types and stored attributes named according to scikit learn conventions and documented clearly Is any new functionality described in the user guide and illustrated with examples Is every 
public function class tested Are a reasonable set of parameters their values value types and combinations tested Do the tests validate that the code is correct i e doing what the documentation says it does If the change is a bug fix is a non regression test included Look at this https jeffknupp com blog 2013 12 09 improve your python understanding unit testing to get started with testing in Python Do the tests pass in the continuous integration build If appropriate help the contributor understand why tests failed Do the tests cover every line of code see the coverage report in the build log If not are the lines missing coverage good exceptions Is the code easy to read and low on redundancy Should variable names be improved for clarity or consistency Should comments be added Should comments be removed as unhelpful or extraneous Could the code easily be rewritten to run much more efficiently for relevant settings Is the code backwards compatible with previous versions or is a deprecation cycle necessary Will the new code add any dependencies on other libraries this is unlikely to be accepted Does the documentation render properly see the ref contribute documentation section for more details and are the plots instructive ref saved replies includes some frequent comments that reviewers may make communication dropdown Communication Guidelines Reviewing open pull requests PRs helps move the project forward It is a great way to get familiar with the codebase and should motivate the contributor to keep involved in the project 1 Every PR good or bad is an act of generosity Opening with a positive comment will help the author feel rewarded and your subsequent remarks may be heard more clearly You may feel good also Begin if possible with the large issues so the author knows they ve been understood Resist the temptation to immediately go line by line or to open with small pervasive issues Do not let perfect be the enemy of the good If you find yourself making many small 
suggestions that don t fall into the ref code review consider the following approaches refrain from submitting these prefix them as Nit so that the contributor knows it s OK not to address follow up in a subsequent PR out of courtesy you may want to let the original contributor know Do not rush take the time to make your comments clear and justify your suggestions You are the face of the project Bad days occur to everyone in that occasion you deserve a break try to take your time and stay offline 1 Adapted from the numpy communication guidelines https numpy org devdocs dev reviewer guidelines html communication guidelines Reading the existing code base Reading and digesting an existing code base is always a difficult exercise that takes time and experience to master Even though we try to write simple code in general understanding the code can seem overwhelming at first given the sheer size of the project Here is a list of tips that may help make this task easier and faster in no particular order Get acquainted with the ref api overview understand what term fit term predict term transform etc are used for Before diving into reading the code of a function class go through the docstrings first and try to get an idea of what each parameter attribute is doing It may also help to stop a minute and think how would I do this myself if I had to The trickiest thing is often to identify which portions of the code are relevant and which are not In scikit learn a lot of input checking is performed especially at the beginning of the term fit methods Sometimes only a very small portion of the code is doing the actual job For example looking at the meth linear model LinearRegression fit method of class linear model LinearRegression what you re looking for might just be the call the func scipy linalg lstsq but it is buried into multiple lines of input checking and the handling of different kinds of parameters Due to the use of Inheritance https en wikipedia org wiki Inheritance 
object oriented programming some methods may be implemented in parent classes All estimators inherit at least from class base BaseEstimator and from a Mixin class e g class base ClassifierMixin that enables default behaviour depending on the nature of the estimator classifier regressor transformer etc Sometimes reading the tests for a given function will give you an idea of what its intended purpose is You can use git grep see below to find all the tests written for a function Most tests for a specific function class are placed under the tests folder of the module You ll often see code looking like this out Parallel delayed some function param for param in some iterable This runs some function in parallel using Joblib https joblib readthedocs io out is then an iterable containing the values returned by some function for each call We use Cython https cython org to write fast code Cython code is located in pyx and pxd files Cython code has a more C like flavor we use pointers perform manual memory allocation etc Having some minimal experience in C C is pretty much mandatory here For more information see ref cython Master your tools With such a big project being efficient with your favorite editor or IDE goes a long way towards digesting the code base Being able to quickly jump or peek to a function class attribute definition helps a lot So does being able to quickly see where a given name is used in a file Git https git scm com book en also has some built in killer features It is often useful to understand how a file changed over time using e g git blame manual https git scm com docs git blame This can also be done directly on GitHub git grep examples https git scm com docs git grep examples is also extremely useful to see every occurrence of a pattern e g a function call or a variable in the code base Configure git blame to ignore the commit that migrated the code style to black prompt bash git config blame ignoreRevsFile git blame ignore revs Find out more 
information in black s documentation for avoiding ruining git blame https black readthedocs io en stable guides introducing black to your project html avoiding ruining git blame |
.. _cython:
Cython Best Practices, Conventions and Knowledge
================================================
This page documents tips for developing Cython code in scikit-learn.
Tips for developing with Cython in scikit-learn
-----------------------------------------------
Tips to ease development
^^^^^^^^^^^^^^^^^^^^^^^^
* Time spent reading `Cython's documentation <https://cython.readthedocs.io/en/latest/>`_ is not time lost.
* If you intend to use OpenMP: on macOS, the system's distribution of ``clang`` does not implement OpenMP.
You can install the ``compilers`` package available on ``conda-forge`` which comes with an implementation of OpenMP.
* Activating `checks <https://github.com/scikit-learn/scikit-learn/blob/62a017efa047e9581ae7df8bbaa62cf4c0544ee4/sklearn/_build_utils/__init__.py#L68-L87>`_ might help. For example, to activate ``boundscheck``, use:
.. code-block:: bash
export SKLEARN_ENABLE_DEBUG_CYTHON_DIRECTIVES=1
* `Start from scratch in a notebook <https://cython.readthedocs.io/en/latest/src/quickstart/build.html#using-the-jupyter-notebook>`_ to understand how to use Cython and to get feedback on your work quickly.
If you plan to use OpenMP for your implementations in your Jupyter Notebook, do add extra compiler and linker arguments in the Cython magic.
.. code-block:: python
# For GCC and for clang
%%cython --compile-args=-fopenmp --link-args=-fopenmp
# For Microsoft's compilers
%%cython --compile-args=/openmp --link-args=/openmp
* To debug C code (e.g. a segfault), do use ``gdb`` with:
.. code-block:: bash
gdb --ex r --args python ./entrypoint_to_bug_reproducer.py
* To have access to some value in order to debug in a ``cdef (nogil)`` context, use:
.. code-block:: cython
with gil:
print(state_to_print)
* Note that Cython cannot parse f-strings with ``{var=}`` expressions, e.g.
.. code-block:: python
print(f"{test_val=}")
* scikit-learn codebase has a lot of non-unified (fused) types (re)definitions.
There currently is `ongoing work to simplify and unify that across the codebase
<https://github.com/scikit-learn/scikit-learn/issues/25572>`_.
For now, make sure you understand which concrete types are used ultimately.
* You might find this alias to compile individual Cython extension handy:
.. code-block:: bash
# You might want to add this alias to your shell script config.
alias cythonX="cython -X language_level=3 -X boundscheck=False -X wraparound=False -X initializedcheck=False -X nonecheck=False -X cdivision=True"
# This generates `source.c` as if you had recompiled scikit-learn entirely.
cythonX --annotate source.pyx
* Using the ``--annotate`` option with this alias generates an HTML report of code annotation.
This report indicates interactions with the CPython interpreter on a line-by-line basis.
Interactions with the CPython interpreter must be avoided as much as possible in
the computationally intensive sections of the algorithms.
For more information, please refer to `this section of Cython's tutorial <https://cython.readthedocs.io/en/latest/src/tutorial/cython_tutorial.html#primes>`_
.. code-block:: bash
# This generates a HTML report (`source.html`) for `source.c`.
cythonX --annotate source.pyx
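As an illustration of the fused-types point above, here is a minimal sketch of a fused type definition and a function using it (the names are hypothetical, not scikit-learn's actual definitions):

.. code-block:: cython

   # A hypothetical fused type covering both floating-point precisions.
   ctypedef fused floating_t:
       float
       double

   cdef floating_t add(floating_t a, floating_t b) nogil:
       # Cython generates one C specialization per concrete type.
       return a + b

At compile time, Cython emits one specialization of ``add`` per concrete type, which is why it matters to know which types are ultimately used.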
Tips for performance
^^^^^^^^^^^^^^^^^^^^
* Understand the GIL in context for CPython (which problems it solves, what are its limitations)
and get a good understanding of when Cython will be mapped to C code free of interactions with
CPython, when it will not, and when it cannot (e.g. presence of interactions with Python
objects, which include functions). In this regard, `PEP 703 <https://peps.python.org/pep-0703/>`_
provides a good overview of the context and of the pathways for the GIL's removal.
* Make sure you have deactivated `checks <https://github.com/scikit-learn/scikit-learn/blob/62a017efa047e9581ae7df8bbaa62cf4c0544ee4/sklearn/_build_utils/__init__.py#L68-L87>`_.
* Always prefer memoryviews over ``cnp.ndarray`` when possible: memoryviews are lightweight.
* Avoid memoryview slicing: memoryview slicing might be costly or misleading in some cases and
  is better avoided, even if handling fewer dimensions in some contexts would be preferable.
* Decorate final classes or methods with ``@final`` (this allows removing virtual tables when needed).
* Inline methods and functions when it makes sense.
* When in doubt, read the generated C or C++ code if you can: "The fewer C instructions and indirections
for a line of Cython code, the better" is a good rule of thumb.
* ``nogil`` declarations are just hints: when declaring the ``cdef`` functions
as nogil, it means that they can be called without holding the GIL, but it does not release
the GIL when entering them. You have to do that yourself either by passing ``nogil=True`` to
``cython.parallel.prange`` explicitly, or by using an explicit context manager:
.. code-block:: cython
cdef inline void my_func(self) nogil:
    # Some logic interacting with CPython, e.g. allocating arrays via NumPy.

    with nogil:
        # The code here is run as if it were written in C.
        ...
This item is based on `this comment from Stefan Behnel <https://github.com/cython/cython/issues/2798#issuecomment-459971828>`_.
* Direct calls to BLAS routines are possible via interfaces defined in ``sklearn.utils._cython_blas``.
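As a minimal illustration of the memoryview advice above, a hypothetical helper (not actual scikit-learn code) could be typed as follows:

.. code-block:: cython

   # A typed memoryview argument is lightweight, and the loop below
   # compiles to plain C with no CPython interaction.
   cdef double _sum(const double[:] values) nogil:
       cdef Py_ssize_t i
       cdef double total = 0.0
       for i in range(values.shape[0]):
           total += values[i]
       return total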
Using OpenMP
^^^^^^^^^^^^
Since scikit-learn can be built without OpenMP, it's necessary to protect each
direct call to OpenMP.
The `_openmp_helpers` module, available in
`sklearn/utils/_openmp_helpers.pyx <https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/utils/_openmp_helpers.pyx>`_
provides protected versions of the OpenMP routines. To use OpenMP routines, they
must be ``cimported`` from this module and not from the OpenMP library directly:
.. code-block:: cython
from sklearn.utils._openmp_helpers cimport omp_get_max_threads
max_threads = omp_get_max_threads()
The parallel loop, `prange`, is already protected by Cython and can be used directly
from `cython.parallel`.
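For instance, a minimal `prange` loop (a sketch, not actual scikit-learn code) could look like:

.. code-block:: cython

   from cython.parallel cimport prange

   def parallel_sum(int n):
       cdef double total = 0.0
       cdef int i
       for i in prange(n, nogil=True):
           total += i  # Cython infers a parallel reduction on `total`
       return total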
Types
~~~~~
Cython code requires the use of explicit types. This is one of the reasons you get a
performance boost. In order to avoid code duplication, we have a central place
for the most used types in
`sklearn/utils/_typedefs.pxd <https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/utils/_typedefs.pxd>`_.
Ideally you start by having a look there and `cimport` the types you need, for example
.. code-block:: cython
from sklearn.utils._typedefs cimport float32, float64
.. _bug_triaging:
Bug triaging and issue curation
===============================
The `issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_
is important to the communication in the project: it helps
developers identify major projects to work on, as well as to discuss
priorities. For this reason, it is important to curate it, adding labels
to issues and closing issues that are not necessary.
Working on issues to improve them
---------------------------------
Improving issues increases their chances of being successfully resolved.
Guidelines on submitting good issues can be found :ref:`here
<filing_bugs>`.
A third party can give useful feedback or even add
comments on the issue.
The following actions are typically useful:
- documenting issues that are missing elements to reproduce the problem
such as code samples
- suggesting better use of code formatting
- suggesting to reformulate the title and description to make them more
explicit about the problem to be solved
- linking to related issues or discussions while briefly describing how
they are related, for instance "See also #xyz for a similar attempt
at this" or "See also #xyz where the same thing happened in
SomeEstimator" provides context and helps the discussion.
.. topic:: Fruitful discussions
Online discussions may be harder than it seems at first glance, in
particular given that a person new to open-source may have a very
different understanding of the process than a seasoned maintainer.
Overall, it is useful to stay positive and assume good will. `The
following article
<https://gael-varoquaux.info/programming/technical-discussions-are-hard-a-few-tips.html>`_
explores how to lead online discussions in the context of open source.
Working on PRs to help review
-----------------------------
Reviewing code is also encouraged. Contributors and users are welcome to
participate in the review process following our :ref:`review guidelines
<code_review>`.
Triaging operations for members of the core and contributor experience teams
----------------------------------------------------------------------------
In addition to the above, members of the core team and the contributor experience team
can do the following important tasks:
- Update :ref:`labels for issues and PRs <issue_tracker_tags>`: see the list of
the `available github labels
<https://github.com/scikit-learn/scikit-learn/labels>`_.
- :ref:`Determine if a PR must be relabeled as stalled <stalled_pull_request>`
or needs help (this is typically very important in the context
of sprints, where the risk is to create many unfinished PRs)
- If a stalled PR is taken over by a newer PR, then label the stalled PR as
"Superseded", leave a comment on the stalled PR linking to the new PR, and
likely close the stalled PR.
- Triage issues:
- **close usage questions** and politely point the reporter to use
Stack Overflow instead.
- **close duplicate issues**, after checking that they are
indeed duplicate. Ideally, the original submitter moves the
discussion to the older, duplicate issue
- **close issues that cannot be replicated**, after leaving time (at
least a week) to add extra information
:ref:`Saved replies <saved_replies>` are useful to gain time and yet be
welcoming and polite when triaging.
See the github description for `roles in the organization
<https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/repository-permission-levels-for-an-organization>`_.
.. topic:: Closing issues: a tough call
When uncertain on whether an issue should be closed or not, it is
best to strive for consensus with the original poster, and possibly
to seek relevant expertise. However, when the issue is a usage
question, or when it has remained unclear for many years, it
should be closed.
A typical workflow for triaging issues
--------------------------------------
The following workflow [1]_ is a good way to approach issue triaging:
#. Thank the reporter for opening an issue
The issue tracker is many people's first interaction with the
scikit-learn project itself, beyond just using the library. As such,
we want it to be a welcoming, pleasant experience.
#. Is this a usage question? If so close it with a polite message
(:ref:`here is an example <saved_replies>`).
#. Is the necessary information provided?
If crucial information (like the version of scikit-learn used) is
missing, feel free to ask for it and label the issue with "Needs
info".
#. Is this a duplicate issue?
We have many open issues. If a new issue seems to be a duplicate,
point to the original issue. If it is a clear duplicate, or consensus
is that it is redundant, close it. Make sure to still thank the
reporter, and encourage them to chime in on the original issue, and
perhaps try to fix it.
If the new issue provides relevant information, such as a better or
slightly different example, add it to the original issue as a comment
or an edit to the original post.
#. Make sure that the title accurately reflects the issue. If you have the
necessary permissions, edit it yourself if it's not clear.
#. Is the issue minimal and reproducible?
For bug reports, we ask that the reporter provide a minimal
reproducible example. See `this useful post
<https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports>`_
by Matthew Rocklin for a good explanation. If the example is not
reproducible, or if it's clearly not minimal, feel free to ask the reporter
if they can provide an example or simplify the provided one.
Do acknowledge that writing minimal reproducible examples is hard work.
If the reporter is struggling, you can try to write one yourself.
If a reproducible example is provided, but you see a simplification,
add your simpler reproducible example.
#. Add the relevant labels, such as "Documentation" when the issue is
about documentation, "Bug" if it is clearly a bug, "Enhancement" if it
is an enhancement request, ...
If the issue is clearly defined and the fix seems relatively
straightforward, label the issue as "Good first issue".
An additional useful step can be to tag the corresponding module e.g.
`sklearn.linear_model` when relevant.
#. Remove the "Needs Triage" label from the issue if the label exists.
.. [1] Adapted from the pandas project `maintainers guide
   <https://pandas.pydata.org/docs/development/maintaining.html>`_
.. _minimal_reproducer:
==============================================
Crafting a minimal reproducer for scikit-learn
==============================================
Whether submitting a bug report, designing a suite of tests, or simply posting a
question in the discussions, being able to craft minimal, reproducible examples
(or minimal, workable examples) is the key to communicating effectively and
efficiently with the community.
There are very good guidelines on the internet such as `this StackOverflow
document <https://stackoverflow.com/help/mcve>`_ or `this blogpost by Matthew
Rocklin <https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports>`_
on crafting Minimal Complete Verifiable Examples (referred to below as MCVE).
Our goal is not to be repetitive with those references but rather to provide a
step-by-step guide on how to narrow down a bug until you have reached the
shortest possible code to reproduce it.
The first step before submitting a bug report to scikit-learn is to read the
`Issue template
<https://github.com/scikit-learn/scikit-learn/blob/main/.github/ISSUE_TEMPLATE/bug_report.yml>`_.
It is already quite informative about the information you will be asked to
provide.
.. _good_practices:
Good practices
==============
In this section we will focus on the **Steps/Code to Reproduce** section of the
`Issue template
<https://github.com/scikit-learn/scikit-learn/blob/main/.github/ISSUE_TEMPLATE/bug_report.yml>`_.
We will start with a snippet of code that already provides a failing example but
that has room for readability improvement. We then craft a MCVE from it.
**Example**
.. code-block:: python
# I am currently working in a ML project and when I tried to fit a
# GradientBoostingRegressor instance to my_data.csv I get a UserWarning:
# "X has feature names, but DecisionTreeRegressor was fitted without
# feature names". You can get a copy of my dataset from
# https://example.com/my_data.csv and verify my features do have
# names. The problem seems to arise during fit when I pass an integer
# to the n_iter_no_change parameter.
df = pd.read_csv('my_data.csv')
X = df[["feature_name"]] # my features do have names
y = df["target"]
# We set random_state=42 for the train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42
)
scaler = StandardScaler(with_mean=False)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# An instance with default n_iter_no_change raises no error nor warnings
gbdt = GradientBoostingRegressor(random_state=0)
gbdt.fit(X_train, y_train)
default_score = gbdt.score(X_test, y_test)
# the bug appears when I change the value for n_iter_no_change
gbdt = GradientBoostingRegressor(random_state=0, n_iter_no_change=5)
gbdt.fit(X_train, y_train)
other_score = gbdt.score(X_test, y_test)
Provide a failing code example with minimal comments
----------------------------------------------------
Writing instructions to reproduce the problem in English is often ambiguous.
Better make sure that all the necessary details to reproduce the problem are
illustrated in the Python code snippet to avoid any ambiguity. Besides, by this
point you already provided a concise description in the **Describe the bug**
section of the `Issue template
<https://github.com/scikit-learn/scikit-learn/blob/main/.github/ISSUE_TEMPLATE/bug_report.yml>`_.
The following code, while **still not minimal**, is already **much better**
because it can be copy-pasted in a Python terminal to reproduce the problem in
one step. In particular:
- it contains **all necessary import statements**;
- it can fetch the public dataset without having to manually download a
file and put it in the expected location on the disk.
**Improved example**
.. code-block:: python
import pandas as pd
df = pd.read_csv("https://example.com/my_data.csv")
X = df[["feature_name"]]
y = df["target"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42
)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(with_mean=False)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
from sklearn.ensemble import GradientBoostingRegressor
gbdt = GradientBoostingRegressor(random_state=0)
gbdt.fit(X_train, y_train) # no warning
default_score = gbdt.score(X_test, y_test)
gbdt = GradientBoostingRegressor(random_state=0, n_iter_no_change=5)
gbdt.fit(X_train, y_train) # raises warning
other_score = gbdt.score(X_test, y_test)
Boil down your script to something as small as possible
-------------------------------------------------------
You have to ask yourself which lines of code are relevant and which are not for
reproducing the bug. Deleting unnecessary lines of code or simplifying the
function calls by omitting unrelated non-default options will help you and other
contributors narrow down the cause of the bug.
In particular, for this specific example:
- the warning has nothing to do with the `train_test_split` since it already
appears in the training step, before we use the test set.
- similarly, the lines that compute the scores on the test set are not
necessary;
- the bug can be reproduced for any value of `random_state` so leave it to its
default;
- the bug can be reproduced without preprocessing the data with the
`StandardScaler`.
**Improved example**
.. code-block:: python
import pandas as pd
df = pd.read_csv("https://example.com/my_data.csv")
X = df[["feature_name"]]
y = df["target"]
from sklearn.ensemble import GradientBoostingRegressor
gbdt = GradientBoostingRegressor()
gbdt.fit(X, y) # no warning
gbdt = GradientBoostingRegressor(n_iter_no_change=5)
gbdt.fit(X, y) # raises warning
**DO NOT** report your data unless it is extremely necessary
------------------------------------------------------------
The idea is to make the code as self-contained as possible. To do so, you
can use a :ref:`synth_data`. It can be generated using numpy, pandas or the
:mod:`sklearn.datasets` module. Most of the time the bug is not related to a
particular structure of your data. Even if it is, try to find an available
dataset that has similar characteristics to yours and that reproduces the
problem. In this particular case, we are interested in data that has labeled
feature names.
**Improved example**
.. code-block:: python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
df = pd.DataFrame(
{
"feature_name": [-12.32, 1.43, 30.01, 22.17],
"target": [72, 55, 32, 43],
}
)
X = df[["feature_name"]]
y = df["target"]
gbdt = GradientBoostingRegressor()
gbdt.fit(X, y) # no warning
gbdt = GradientBoostingRegressor(n_iter_no_change=5)
gbdt.fit(X, y) # raises warning
As already mentioned, the key to communication is the readability of the code
and good formatting can really be a plus. Notice that in the previous snippet
we:
- try to limit all lines to a maximum of 79 characters to avoid horizontal
scrollbars in the code snippets blocks rendered on the GitHub issue;
- use blank lines to separate groups of related functions;
- place all the imports in their own group at the beginning.
The simplification steps presented in this guide can be implemented in a
different order than the progression we have shown here. The important points
are:
- a minimal reproducer should be runnable by a simple copy-and-paste in a
python terminal;
- it should be simplified as much as possible by removing any code steps
that are not strictly needed to reproduce the original problem;
- it should ideally only rely on a minimal dataset generated on-the-fly by
running the code instead of relying on external data, if possible.
Use markdown formatting
-----------------------
To format code or text into its own distinct block, use triple backticks.
`Markdown
<https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax>`_
supports an optional language identifier to enable syntax highlighting in your
fenced code block. For example::
```python
from sklearn.datasets import make_blobs
n_samples = 100
n_components = 3
X, y = make_blobs(n_samples=n_samples, centers=n_components)
```
will render a Python-formatted snippet as follows:
.. code-block:: python
from sklearn.datasets import make_blobs
n_samples = 100
n_components = 3
X, y = make_blobs(n_samples=n_samples, centers=n_components)
It is not necessary to create several blocks of code when submitting a bug
report. Remember other reviewers are going to copy-paste your code and having a
single cell will make their task easier.
In the section named **Actual results** of the `Issue template
<https://github.com/scikit-learn/scikit-learn/blob/main/.github/ISSUE_TEMPLATE/bug_report.yml>`_
you are asked to provide the error message including the full traceback of the
exception. In this case, use the `python-traceback` qualifier. For example::
```python-traceback
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-a674e682c281> in <module>
4 vectorizer = CountVectorizer(input=docs, analyzer='word')
5 lda_features = vectorizer.fit_transform(docs)
----> 6 lda_model = LatentDirichletAllocation(
7 n_topics=10,
8 learning_method='online',
TypeError: __init__() got an unexpected keyword argument 'n_topics'
```
yields the following when rendered:
.. code-block:: python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-a674e682c281> in <module>
4 vectorizer = CountVectorizer(input=docs, analyzer='word')
5 lda_features = vectorizer.fit_transform(docs)
----> 6 lda_model = LatentDirichletAllocation(
7 n_topics=10,
8 learning_method='online',
TypeError: __init__() got an unexpected keyword argument 'n_topics'
.. _synth_data:
Synthetic dataset
=================
Before choosing a particular synthetic dataset, you first have to identify the
type of problem you are solving: is it classification, regression,
clustering, etc.?
Once you have narrowed down the type of problem, you need to provide a synthetic
dataset accordingly. Most of the time you only need a minimalistic dataset.
Here is a non-exhaustive list of tools that may help you.
NumPy
-----
NumPy tools such as `numpy.random.randn
<https://numpy.org/doc/stable/reference/random/generated/numpy.random.randn.html>`_
and `numpy.random.randint
<https://numpy.org/doc/stable/reference/random/generated/numpy.random.randint.html>`_
can be used to create dummy numeric data.
- regression
Regressions take continuous numeric data as features and target.
.. code-block:: python
import numpy as np
rng = np.random.RandomState(0)
n_samples, n_features = 5, 5
X = rng.randn(n_samples, n_features)
y = rng.randn(n_samples)
A similar snippet can be used as synthetic data when testing scaling tools such
as :class:`sklearn.preprocessing.StandardScaler`.
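For instance, a scaling-related bug can often be reproduced with nothing more than this kind of random matrix fed straight to the scaler (a minimal sketch, not taken from any particular report):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
n_samples, n_features = 5, 5
X = rng.randn(n_samples, n_features)

# Fit the scaler on the synthetic data; columns end up centered and scaled.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
```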
- classification
If the bug is not raised when encoding a categorical variable, you can
feed numeric data to a classifier. Just remember to ensure that the target
is indeed an integer.
.. code-block:: python
import numpy as np
rng = np.random.RandomState(0)
n_samples, n_features = 5, 5
X = rng.randn(n_samples, n_features)
y = rng.randint(0, 2, n_samples) # binary target with values in {0, 1}
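Such an integer target can then be consumed by any classifier; for example, fitting a logistic regression on the same kind of data (an illustrative sketch):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n_samples, n_features = 20, 5
X = rng.randn(n_samples, n_features)
y = rng.randint(0, 2, n_samples)  # binary target with values in {0, 1}

# Classifiers accept such integer targets directly.
clf = LogisticRegression().fit(X, y)
```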
If the bug only happens with non-numeric class labels, you might want to
generate a random target with `numpy.random.choice
<https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html>`_.
.. code-block:: python
import numpy as np
rng = np.random.RandomState(0)
n_samples, n_features = 50, 5
X = rng.randn(n_samples, n_features)
y = rng.choice(
["male", "female", "other"], size=n_samples, p=[0.49, 0.49, 0.02]
)
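Non-numeric labels generated this way can be fed to an estimator as-is, since scikit-learn classifiers handle string class labels internally (a minimal sketch using a decision tree):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
n_samples, n_features = 50, 5
X = rng.randn(n_samples, n_features)
y = rng.choice(
    ["male", "female", "other"], size=n_samples, p=[0.49, 0.49, 0.02]
)

# String class labels work without any manual encoding step.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
```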
Pandas
------
Some scikit-learn objects expect pandas dataframes as input. In this case you can
transform numpy arrays into pandas objects using `pandas.DataFrame
<https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html>`_, or
`pandas.Series
<https://pandas.pydata.org/docs/reference/api/pandas.Series.html>`_.
.. code-block:: python
import numpy as np
import pandas as pd
rng = np.random.RandomState(0)
n_samples, n_features = 5, 5
X = pd.DataFrame(
{
"continuous_feature": rng.randn(n_samples),
"positive_feature": rng.uniform(low=0.0, high=100.0, size=n_samples),
"categorical_feature": rng.choice(["a", "b", "c"], size=n_samples),
}
)
y = pd.Series(rng.randn(n_samples))
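Such a frame can then be passed to estimators that select columns by name, which is often where dataframe-specific bugs surface; for example (an illustrative sketch):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

rng = np.random.RandomState(0)
n_samples = 5
X = pd.DataFrame(
    {
        "continuous_feature": rng.randn(n_samples),
        "categorical_feature": rng.choice(["a", "b", "c"], size=n_samples),
    }
)

# Select the categorical column by name, one-hot encode it, and pass the
# remaining column through unchanged.
ct = ColumnTransformer(
    [("one_hot", OneHotEncoder(), ["categorical_feature"])],
    remainder="passthrough",
)
X_encoded = ct.fit_transform(X)
```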
In addition, scikit-learn includes various :ref:`sample_generators` that can be
used to build artificial datasets of controlled size and complexity.
`make_regression`
-----------------
As hinted by the name, :func:`sklearn.datasets.make_regression` produces
regression targets with noise as an optionally-sparse random linear combination
of random features.
.. code-block:: python
from sklearn.datasets import make_regression
X, y = make_regression(n_samples=1000, n_features=20)
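The amount of noise and the ground-truth coefficients can also be controlled and recovered, which helps when checking whether a bug depends on them (a sketch):

```python
from sklearn.datasets import make_regression

# With coef=True, the ground-truth coefficients of the underlying linear
# model are returned as well; noise controls the gaussian noise level.
X, y, coef = make_regression(
    n_samples=1000, n_features=20, noise=0.1, coef=True, random_state=0
)
```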
`make_classification`
---------------------
:func:`sklearn.datasets.make_classification` creates multiclass datasets with multiple Gaussian
clusters per class. Noise can be introduced by means of correlated, redundant or
uninformative features.
.. code-block:: python
from sklearn.datasets import make_classification
X, y = make_classification(
n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1
)
`make_blobs`
------------
Similarly to `make_classification`, :func:`sklearn.datasets.make_blobs` creates
multiclass datasets using normally-distributed clusters of points. It provides
greater control regarding the centers and standard deviations of each cluster,
and therefore it is useful to demonstrate clustering.
.. code-block:: python
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=10, centers=3, n_features=2)
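For instance, a clustering bug can be demonstrated by fitting an estimator directly on the generated blobs (a minimal sketch with KMeans; `random_state` is fixed here purely for reproducibility):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=10, centers=3, n_features=2, random_state=0)

# The well-separated blobs are convenient for exercising clustering estimators.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.predict(X)
```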
Dataset loading utilities
-------------------------
You can use the :ref:`datasets` to load and fetch several popular reference
datasets. This option is useful when the bug relates to the particular structure
of the data, e.g. dealing with missing values or image recognition.
.. code-block:: python
from sklearn.datasets import load_breast_cancer
X, y = load_breast_cancer(return_X_y=True)
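For example, a bug involving missing values can be reproduced by masking a few entries of one of these reference datasets (an illustrative sketch with SimpleImputer):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer

X, y = load_breast_cancer(return_X_y=True)

# Mask a few entries to mimic a dataset with missing values.
X[:10, 0] = np.nan

X_imputed = SimpleImputer(strategy="mean").fit_transform(X)
```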
---
page_title: AWS - Cost Estimation - HCP Terraform
description: >-
Learn which AWS resources HCP Terraform includes in cost estimation.
---
# Supported AWS resources in HCP Terraform Cost Estimation
HCP Terraform can estimate monthly costs for many AWS Terraform resources.
-> **Note:** Terraform Enterprise requires AWS credentials to support cost estimation. These credentials are configured at the instance level, not the organization level. See the [Application Administration docs](/terraform/enterprise/admin/application/integration) for more details.
## Supported Resources
Cost estimation supports the following resources. Not all possible attribute values are supported for each resource, e.g., newer instance types or EBS volume types.
| Resource | Incurs Cost |
| ----------- | ----------- |
| `aws_alb` | X |
| `aws_autoscaling_group` | X |
| `aws_cloudhsm_v2_hsm` | X |
| `aws_cloudwatch_dashboard` | X |
| `aws_cloudwatch_metric_alarm` | X |
| `aws_db_instance` | X |
| `aws_dynamodb_table` | X |
| `aws_ebs_volume` | X |
| `aws_elasticache_cluster` | X |
| `aws_elasticsearch_domain` | X |
| `aws_elb` | X |
| `aws_instance` | X |
| `aws_kms_key` | X |
| `aws_lb` | X |
| `aws_rds_cluster_instance` | X |
| `aws_acm_certificate_validation` | |
| `aws_alb_listener` | |
| `aws_alb_listener_rule` | |
| `aws_alb_target_group` | |
| `aws_alb_target_group_attachment` | |
| `aws_api_gateway_api_key` | |
| `aws_api_gateway_deployment` | |
| `aws_api_gateway_integration` | |
| `aws_api_gateway_integration_response` | |
| `aws_api_gateway_method` | |
| `aws_api_gateway_method_response` | |
| `aws_api_gateway_resource` | |
| `aws_api_gateway_usage_plan_key` | |
| `aws_appautoscaling_policy` | |
| `aws_appautoscaling_target` | |
| `aws_autoscaling_lifecycle_hook` | |
| `aws_autoscaling_policy` | |
| `aws_cloudformation_stack` | |
| `aws_cloudfront_distribution` | |
| `aws_cloudfront_origin_access_identity` | |
| `aws_cloudwatch_event_rule` | |
| `aws_cloudwatch_event_target` | |
| `aws_cloudwatch_log_group` | |
| `aws_cloudwatch_log_metric_filter` | |
| `aws_cloudwatch_log_stream` | |
| `aws_cloudwatch_log_subscription_filter` | |
| `aws_codebuild_webhook` | |
| `aws_codedeploy_deployment_group` | |
| `aws_cognito_identity_provider` | |
| `aws_cognito_user_pool` | |
| `aws_cognito_user_pool_client` | |
| `aws_cognito_user_pool_domain` | |
| `aws_config_config_rule` | |
| `aws_customer_gateway` | |
| `aws_db_parameter_group` | |
| `aws_db_subnet_group` | |
| `aws_dynamodb_table_item` | |
| `aws_ecr_lifecycle_policy` | |
| `aws_ecr_repository_policy` | |
| `aws_ecs_cluster` | |
| `aws_ecs_task_definition` | |
| `aws_efs_mount_target` | |
| `aws_eip_association` | |
| `aws_elastic_beanstalk_application` | |
| `aws_elastic_beanstalk_application_version` | |
| `aws_elastic_beanstalk_environment` | |
| `aws_elasticache_parameter_group` | |
| `aws_elasticache_subnet_group` | |
| `aws_flow_log` | |
| `aws_iam_access_key` | |
| `aws_iam_account_alias` | |
| `aws_iam_account_password_policy` | |
| `aws_iam_group` | |
| `aws_iam_group_membership` | |
| `aws_iam_group_policy` | |
| `aws_iam_group_policy_attachment` | |
| `aws_iam_instance_profile` | |
| `aws_iam_policy` | |
| `aws_iam_policy_attachment` | |
| `aws_iam_role` | |
| `aws_iam_role_policy` | |
| `aws_iam_role_policy_attachment` | |
| `aws_iam_saml_provider` | |
| `aws_iam_service_linked_role` | |
| `aws_iam_user` | |
| `aws_iam_user_group_membership` | |
| `aws_iam_user_login_profile` | |
| `aws_iam_user_policy` | |
| `aws_iam_user_policy_attachment` | |
| `aws_iam_user_ssh_key` | |
| `aws_internet_gateway` | |
| `aws_key_pair` | |
| `aws_kms_alias` | |
| `aws_lambda_alias` | |
| `aws_lambda_event_source_mapping` | |
| `aws_lambda_function` | |
| `aws_lambda_layer_version` | |
| `aws_lambda_permission` | |
| `aws_launch_configuration` | |
| `aws_lb_listener` | |
| `aws_lb_listener_rule` | |
| `aws_lb_target_group` | |
| `aws_lb_target_group_attachment` | |
| `aws_network_acl` | |
| `aws_network_acl_rule` | |
| `aws_network_interface` | |
| `aws_placement_group` | |
| `aws_rds_cluster_parameter_group` | |
| `aws_route` | |
| `aws_route53_record` | |
| `aws_route53_zone_association` | |
| `aws_route_table` | |
| `aws_route_table_association` | |
| `aws_s3_bucket` | |
| `aws_s3_bucket_notification` | |
| `aws_s3_bucket_object` | |
| `aws_s3_bucket_policy` | |
| `aws_s3_bucket_public_access_block` | |
| `aws_security_group` | |
| `aws_security_group_rule` | |
| `aws_service_discovery_service` | |
| `aws_sfn_state_machine` | |
| `aws_sns_topic` | |
| `aws_sns_topic_subscription` | |
| `aws_sqs_queue` | |
| `aws_sqs_queue_policy` | |
| `aws_ssm_maintenance_window` | |
| `aws_ssm_maintenance_window_target` | |
| `aws_ssm_maintenance_window_task` | |
| `aws_ssm_parameter` | |
| `aws_subnet` | |
| `aws_volume_attachment` | |
| `aws_vpc` | |
| `aws_vpc_dhcp_options` | |
| `aws_vpc_dhcp_options_association` | |
| `aws_vpc_endpoint` | |
| `aws_vpc_endpoint_route_table_association` | |
| `aws_vpc_endpoint_service` | |
| `aws_vpc_ipv4_cidr_block_association` | |
| `aws_vpc_peering_connection_accepter` | |
| `aws_vpc_peering_connection_options` | |
| `aws_vpn_connection_route` | |
| `aws_waf_ipset` | |
| `aws_waf_rule` | |
| `aws_waf_web_acl` | |
---
page_title: Review deployment plans
description: Learn how to review deployment plans in HCP Terraform Stacks.
tfc_only: true
---
# Review deployment plans
Deployment plans are a combination of an individual configuration version and one of your stack’s deployments. As in the traditional Terraform workflow, HCP Terraform creates plans every time a new configuration version introduces potential changes for a deployment.
This guide explains how to review and approve deployment plans in HCP Terraform.
## Requirements
To view a Stack and its plans, you must be a member of a team in your organization with one of the following permissions:
* [Organization-level **View all projects**](/terraform/cloud-docs/users-teams-organizations/permissions#view-all-projects) or higher
* [Project-level **Write**](/terraform/cloud-docs/users-teams-organizations/permissions#write) or higher
## View a deployment
If you are not already on your stack’s deployment page, navigate to it:
1. In the navigation menu, click **Projects** under **Manage**.
1. Select the project containing your Stack.
1. Click **Stacks** in the navigation menu.
1. Select the Stack you want to review.
A stack’s **Overview** page displays the following information:
* The number of your stack’s components and deployments
* Your deployment’s current [health](#check-deployment-health)
* The latest configuration version
* A chart listing each deployment’s recent configuration versions
To view all of the plans for a deployment, click the name of the deployment you want to review.
A deployment’s page lists its components and the last five deployment plans. Clicking on a component reveals all of the resources it contains. Opening the **Inspect** dropdown menu on a deployment's page reveals the option to download:
* **State description** downloads that deployment’s current state.
* **Provider schemas** downloads the schema of the providers of your deployment.
### View plans
A Stack deployment can have multiple plans. On a deployment's page, underneath **Latest plans**, each deployment plan lists:
* The plan name
* The trigger that caused HCP Terraform to create this plan
* The configuration version HCP Terraform created the plan with
* When HCP Terraform created the plan
You can see a plan’s full details by clicking the plan’s name or an abbreviated list of the plan’s changes by clicking **Quick View**. You can also click **View all plans** to display a list of all the plans for this deployment. You can filter plans by [health status](#check-deployment-health) in the list of all plans.
Each plan includes a timeline detailing when the plan started, when it received its configuration version, and when it was approved.
### Check deployment health
Click **Overview** in the sidebar of your Stack to view the status of each of your deployments.
| Status | Description |
| :---- | :---- |
| Healthy | HCP Terraform is not applying this deployment, no plans await approval, and no diagnostic alerts exist. |
| Deploying | A plan is approved, and HCP Terraform is applying it. |
| Plans waiting for approval | HCP Terraform created a plan successfully and is waiting for approval. |
| Error diagnostics | The deployment has error diagnostic alerts. |
| Warning diagnostics | The deployment has warning diagnostic alerts. |
## Download plan data
You can download Stack plan data to debug and analyze how your Stack changes over time.
Select one of the following options in the **Inspect** drop-down to perform the associated action:
* **View plan orchestration results** displays HCP Terraform's decisions while making this plan.
* **Download plan event stream** downloads a log file of your plan’s events.
* **Download plan description** downloads a file with all plan information.
* **View apply orchestration results** displays HCP Terraform's decisions to apply this plan.
* **Download apply event stream** downloads a log file of your apply’s events.
* **Download apply description** downloads a file with all the information about this applied plan.
## Approve plans
Like traditional Terraform plans, Stack deployment plans list the changes that will occur if you approve that plan. Each component lists its expected resource changes, and you can review those changes as you decide whether to apply a plan.
When viewing a deployment plan, HCP Terraform notes if that plan is awaiting approval. Click **Approve plan** if you want HCP Terraform to apply a plan to this deployment or **Discard plan** to ignore it. You manage each deployment independently, so any plans you approve only affect the current deployment you are interacting with.
### Convergence checks
After applying any plan, HCP Terraform automatically triggers a plan called a convergence check. A convergence check is a re-plan to ensure components do not have any [deferred changes](#deferred-changes). HCP Terraform continues to trigger new plans until the convergence check returns a plan that does not contain changes.
By default, each Stack has an `auto-approve` rule named `empty_plan`, which auto-approves a plan if it does not contain changes. When a convergence check contains no changes, HCP Terraform auto-applies that plan.
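You can also define your own orchestration rules in deployment configuration. The following is a minimal sketch of a custom `auto_approve` rule in a `*.tfdeploy.hcl` file; the rule name `no_deletes` and the exact condition are illustrative assumptions, not built-ins:

```hcl
# Hypothetical orchestration rule in a *.tfdeploy.hcl file that
# auto-approves any plan that does not remove resources.
orchestrate "auto_approve" "no_deletes" {
  check {
    # The condition must evaluate to true for the plan to be auto-approved.
    condition = context.plan.changes.remove == 0
    reason    = "Plan removes ${context.plan.changes.remove} resources."
  }
}
```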
## Deferred changes
As with traditional Terraform configurations, HCP Terraform generates a dependency graph and creates the resources defined in `*.tfstack.hcl` and `*.tfdeploy.hcl` files.
When you deploy a Stack with resources that depend on resources provisioned by other components in your stack, HCP Terraform recognizes the dependency between components and automatically defers that plan until HCP Terraform can complete it successfully. Plans with deferred changes are plans with resources that depend on resources that don't exist yet, requiring follow-up plans.
-> **Hands-on**: Complete the [Manage Kubernetes workloads with stacks](/terraform/tutorials/cloud/stacks-eks-deferred) tutorial to create plans with deferred changes.
HCP Terraform notifies you in the UI if a plan contains deferred changes. Approving a plan with deferred changes makes HCP Terraform automatically create a follow-up plan to properly set up resources in the order of operations those resources require.
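For example, a `*.tfstack.hcl` configuration in which one component reads another component's outputs creates the kind of cross-component dependency that can produce deferred changes. The component names, module sources, and output below are hypothetical:

```hcl
# Hypothetical components in a *.tfstack.hcl file. The "workloads"
# component reads an output of the "cluster" component, so HCP Terraform
# cannot fully plan it until the cluster exists, deferring those changes.
component "cluster" {
  source = "./modules/eks-cluster"
  inputs = {
    name = var.cluster_name
  }
  providers = {
    aws = provider.aws.this
  }
}

component "workloads" {
  source = "./modules/k8s-workloads"
  inputs = {
    # Cross-component reference to an output that does not exist
    # until the cluster component is applied.
    cluster_endpoint = component.cluster.endpoint
  }
  providers = {
    kubernetes = provider.kubernetes.this
  }
}
```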
After applying a plan with deferred changes, HCP Terraform notifies you of any replans it creates with a link to **View replan**. You can review the replan to ensure HCP Terraform created your resources as expected.

---
page_title: Manage projects in HCP Terraform and Terraform Enterprise
description: |-
Use projects to organize and group workspaces and create ownership boundaries
across your infrastructure.
---
# Manage projects
This topic describes how to create and manage projects in HCP Terraform and Terraform Enterprise. A project is a folder containing one or more workspaces.
## Requirements
You must have the following permissions to manage projects:
- You must be a member of a team with the **Manage all Projects** permissions enabled to create a project. Refer to [Organization Permissions](/terraform/cloud-docs/users-teams-organizations/permissions#organization-permissions) for additional information.
- You must be a member of a team with the **Visible** option enabled under **Visibility** in the organization settings to configure a new team's access to the project. Refer to [Team Visibility](/terraform/cloud-docs/users-teams-organizations/teams/manage#team-visibility) for additional information.
- You must be a member of a team with update and delete permissions to be able to update and delete teams respectively.
To delete tags on a project, you must be a member of a team with the **Admin** permission group enabled for the project.
To create tags for a project, you must be a member of a team with the **Write** permission group enabled for the project.
## View a project
To view your organization's projects:
1. Click **Projects**.
1. Search for a project that you want to view. You can use the following methods:
- Sort by column header.
- Use the search bar to search on the name of a project or a tag.
1. Click on a project's name to view more details.
## Create a project
1. Click **Projects**.
1. Click **+ New project**.
1. Specify a name for the project. The name must be unique within the organization and can only include letters, numbers, inner spaces, hyphens, and underscores.
1. Add a description for the project. This field is optional.
~> **Adding project tags is in private beta and unavailable for some users.** Skip to step 8 if your interface does not include elements for adding tags. Contact your HashiCorp representative for information about participating in the private beta.
1. Open the **Add key value tags** menu to add tags to your project. Tags are optional key-value pairs that you can use to organize projects. Any workspaces you create within the project inherit project tags. Refer to [Define project tags](#define-project-tags) for additional information.
1. Click **+Add tag** and specify a tag key and tag value. If your organization has defined reserved tag keys, they appear in the **Tag key** field as suggestions. Refer to [Create and manage reserved tags](/terraform/cloud-docs/users-teams-organizations/organizations#create-and-manage-reserved-tags) for additional information.
1. Click **+add another tag** to attach any additional tags.
1. Click **Create** to finish creating the project.
HCP Terraform returns a new project page displaying all the project
information.
## Edit a project
1. Click **Projects**.
1. Click on a project name of the project you want to edit.
1. Click **Settings**.
On this **General settings** page, you can update the project name, project
description, and delete the project. On the **Team access** page, you can modify
team access to the project.
## Automatically destroy inactive workspaces
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/ephemeral-workspaces.mdx'
<!-- END: TFC:only name:pnp-callout -->
You can configure HCP Terraform to automatically destroy each workspace's
infrastructure in a project after a period of inactivity. A workspace
is _inactive_ if the workspace's state has not changed within your designated
time period.
If you configure a project to auto-destroy its infrastructure when inactive,
any run that updates Terraform state further delays the scheduled auto-destroy
time by the length of your designated timeframe.
!> **Warning:** Automatic destroy plans _do not_ prompt you for apply approval in the HCP Terraform user interface. We recommend only using this setting for development environments.
To schedule an auto-destroy run after a period of workspace inactivity:
1. Navigate to the project's **Settings** > **Auto-destroy Workspaces** page.
1. Click **Set up default**.
1. Select or customize a desired timeframe of inactivity.
1. Click **Confirm default**.
You can configure an individual workspace's auto-destroy settings to override
this default configuration. Refer to [automatically destroy workspaces](/terraform/cloud-docs/workspaces/settings/deletion#automatically-destroy) for more information.
## Delete a project
You can only delete projects that do not contain workspaces.
To delete an empty project:
1. Click **Projects**.
1. Search for a project that you want to review by scrolling down the table or
searching for a project name in the search bar above the project table.
1. Click **Settings**. The settings view for the selected project appears.
1. Click the **Delete** button. A **Delete project** modal appears.
1. Click the **Delete** button to confirm the deletion.
HCP Terraform returns to the **Projects** view with the deleted project
removed from the list.
## Define project tags
~> **Adding project tags is in private beta and unavailable for some users.** Contact your HashiCorp representative for information about participating in the private beta.
You can define tags stored as key-value pairs to help you organize your projects and track resource consumption. HCP Terraform applies tags that you attach to projects to the workspaces created inside the project.
Workspace administrators with appropriate permissions can attach new key-value pairs to their workspaces to override inherited tags. Refer to [Create workspace tags](/terraform/cloud-docs/workspaces/tags) for additional information about using tags in workspaces.
Tags that you create appear in the tags management screen in the organization settings. Refer to [Organizations](/terraform/cloud-docs/users-teams-organizations/organizations) for additional information.
The following rules apply to tag keys and values:
- Tags must be one or more characters.
- Tags have a 255 character limit.
- Tags can include letters, numbers, colons, hyphens, and underscores.
- Tag values are optional.
- You can create up to 10 unique tags per workspace and 10 unique tags per project. As a result, each workspace can have up to 20 tags.
- You cannot use the following strings at the beginning of a tag key:
  - `hcp`
  - `hc`
  - `ibm`

---
page_title: Create workspaces - Workspaces - HCP Terraform
description: >-
Workspaces organize infrastructure into meaningful groups. Learn how to create
and configure workspaces through the UI.
---
# Create workspaces
This topic describes how to create and manage workspaces in the HCP Terraform and Terraform Enterprise UI. A workspace is a group of infrastructure resources managed by Terraform. Refer to [Workspaces overview](/terraform/cloud-docs/workspaces) for additional information.
> **Hands-on:** Try the [Get Started - HCP Terraform](/terraform/tutorials/cloud-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorials.
## Introduction
Create new workspaces when you need to manage a new collection of infrastructure resources. You can use the following methods to create workspaces:
- HCP Terraform UI: Refer to [Create a workspace](#create-a-workspace) for instructions.
- Workspaces API: Send a `POST` call to the `/organizations/:organization_name/workspaces` endpoint to create a workspace. Refer to the [API documentation](/terraform/cloud-docs/api-docs/workspaces#create-a-workspace) for instructions.
- Terraform Enterprise provider: Install the `tfe` provider and add the `tfe_workspace` resource to your configuration. Refer to the [`tfe` provider documentation](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/workspace) in the Terraform registry for instructions.
- No-code provisioning: Use a no-code module from the registry to create a new workspace and deploy the module's resources. Refer to [Provisioning No-Code Infrastructure](/terraform/cloud-docs/no-code-provisioning/provisioning) for instructions.
Each workspace belongs to a project. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information.
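As an example of the provider-based method, the following is a minimal sketch that creates a workspace with the `tfe` provider; the organization name, workspace name, and description are placeholders:

```hcl
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

provider "tfe" {
  # By default, the provider reads an API token from the
  # TFE_TOKEN environment variable or the CLI config file.
}

resource "tfe_workspace" "networking_prod" {
  name         = "networking-prod-us-east" # placeholder name
  organization = "my-org"                  # placeholder organization
  description  = "Production networking infrastructure"
}
```

Running `terraform apply` against this configuration creates the workspace in the named organization.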
## Requirements
You must be a member of a team with one of the following permissions enabled to create and manage workspaces:
- **Manage all projects**
- **Manage all workspaces**
- **Admin** permission group for a project
[permissions-citation]: #intentionally-unused---keep-for-maintainers
## Workspace naming
We recommend using consistent and informative names for new workspaces. One common approach is combining the workspace's important attributes in a consistent order. Attributes can be any defining characteristic of a workspace, such as the component, the component’s run environment, and the region where the workspace is provisioning infrastructure.
This strategy could produce the following example workspace names:
- networking-prod-us-east
- networking-staging-us-east
- networking-prod-eu-central
- networking-staging-eu-central
- monitoring-prod-us-east
- monitoring-staging-us-east
- monitoring-prod-eu-central
- monitoring-staging-eu-central
You can add additional attributes to your workspace names as needed. For example, you may add the infrastructure provider, datacenter, or line of business.
We recommend using 90 characters or less for the name of your workspace.
## Create a workspace
[workdir]: /terraform/cloud-docs/workspaces/settings#terraform-working-directory
[trigger]: /terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering
[branch]: /terraform/cloud-docs/workspaces/settings/vcs#vcs-branch
[submodules]: /terraform/cloud-docs/workspaces/settings/vcs#include-submodules-on-clone
Complete the following steps to use the HCP Terraform or Terraform Enterprise UI to create a workspace:
1. Log in and choose your organization.
1. Click **New** and choose **Workspace** from the drop-down menu.
1. If you have multiple projects, HCP Terraform may prompt you to choose the project to create the workspace in. Only users on teams with permissions for the entire project or the specific workspace can access the workspace. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information.
1. Choose a workflow type.
1. Complete the following steps if you are creating a workspace that follows the VCS workflow:
1. Choose an existing version control provider from the list or configure a new system. You must enable the workspace project to connect to your provider. Refer to [Connecting VCS
Providers](/terraform/cloud-docs/vcs) for more details.
   1. If you choose the **GitHub App** provider, choose an organization and repository when prompted. The list only displays the first 100 repositories from your VCS provider. If your repository is missing from the list, enter the repository ID in the text field.
1. Refer to the following topics for information about configuring workspaces settings in the **Advanced options** screen:
- [Terraform Working Directory][workdir]
- [Automatic Run Triggering][trigger]
- [VCS branch][branch]
- [Include submodules on clone][submodules]
1. Specify a name for the workspace. VCS workflow workspaces default to the name of the repository. The name must be unique within the organization and can include letters, numbers, hyphens, and underscores. Refer to [Workspace naming](#workspace-naming) for additional information.
1. Add an optional description for the workspace. The description appears at the top of the workspace in the HCP Terraform UI.
1. Click **Create workspace** to finish.
For CLI- and API-driven workflows, the system opens the new workspace overview. For version control workspaces, the **Configure Terraform variables** page appears.
### Configure Terraform variables for VCS workflows
After you create a new workspace from a version control repository, HCP Terraform scans its configuration files for [Terraform variables](/terraform/cloud-docs/workspaces/variables#terraform-variables) and displays variables without default values or variables that are undefined in an existing [global or project-scoped variable set](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets). Terraform cannot perform successful runs in the workspace until you set values for these variables.
Choose one of the following actions:
- To skip this step, click **Go to workspace overview**. You can [load these variables from files](/terraform/cloud-docs/workspaces/variables/managing-variables#loading-variables-from-files) or create and set values for them later from within the workspace. HCP Terraform does not automatically scan your configuration again; you can only add variables from within the workspace individually.
- To configure variables, enter a value for each variable on the page. You may want to leave a variable empty if you plan to provide it through another source, like an `auto.tfvars` file. Click **Save variables** to add these variables to the workspace.
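For reference, HCP Terraform only prompts for variables that are declared without a default value and that no existing variable set covers. The declarations below are hypothetical examples of each case:

```hcl
# A variable with no default. HCP Terraform lists it on the
# "Configure Terraform variables" page because a run cannot
# proceed until it has a value.
variable "instance_type" {
  type        = string
  description = "EC2 instance type for the web tier"
}

# A variable with a default. HCP Terraform does not prompt for it.
variable "region" {
  type    = string
  default = "us-east-1"
}
```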
## Next steps
If you have already configured all Terraform variables, we recommend [manually starting a run](/terraform/cloud-docs/run/ui#manually-starting-runs) to prepare VCS-driven workspaces. You may also want to do one or more of the following actions:
- [Upload configuration versions](/terraform/cloud-docs/workspaces/configurations#providing-configuration-versions): If you chose the API or CLI-Driven workflow, you must upload configuration versions for the workspace.
- [Edit environment variables](/terraform/cloud-docs/workspaces/variables): Shell environment variables store credentials and customize Terraform's behavior.
- [Edit additional workspace settings](/terraform/cloud-docs/workspaces/settings): This includes notifications, permissions, and run triggers to start runs automatically.
- [Learn more about running Terraform in your workspace](/terraform/cloud-docs/run/remote-operations): This includes how Terraform processes runs within the workspace, run modes, run states, and other operations.
- [Create workspace tags](/terraform/cloud-docs/workspaces/tags): Add tags to your workspaces so that you can organize and track them.
- [Browse workspaces](/terraform/cloud-docs/workspaces/browse): Use the interfaces available in the UI to browse, sort, and filter workspaces so that you can track resource consumption.
### VCS Connection
If you connected a VCS repository to the workspace, HCP Terraform automatically registers a webhook with your VCS provider. A workspace with no runs will not accept new runs from a VCS webhook, so you must [manually start at least one run](/terraform/cloud-docs/run/ui#manually-starting-runs).
After you manually start a run, HCP Terraform automatically queues a plan when new commits appear in the selected branch of the linked repository or someone opens a pull request on that branch. Refer to [Webhooks](/terraform/cloud-docs/vcs#webhooks) for more details.
---
page_title: Create workspaces - Workspaces - HCP Terraform
description: Workspaces organize infrastructure into meaningful groups. Learn how to create and configure workspaces through the UI.
---
# Create workspaces
This topic describes how to create and manage workspaces in the HCP Terraform and Terraform Enterprise UI. A workspace is a group of infrastructure resources managed by Terraform. Refer to [Workspaces overview](/terraform/cloud-docs/workspaces) for additional information.
-> **Hands-on:** Try the [Get Started - HCP Terraform](/terraform/tutorials/cloud-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorials.
## Introduction
Create new workspaces when you need to manage a new collection of infrastructure resources. You can use the following methods to create workspaces:
- **HCP Terraform UI**: Refer to [Create a workspace](#create-a-workspace) for instructions.
- **Workspaces API**: Send a `POST` call to the `/organizations/:organization_name/workspaces` endpoint to create a workspace. Refer to the [API documentation](/terraform/cloud-docs/api-docs/workspaces#create-a-workspace) for instructions.
- **Terraform Enterprise provider**: Install the `tfe` provider and add the `tfe_workspace` resource to your configuration. Refer to the [`tfe` provider documentation](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/workspace) in the Terraform registry for instructions.
- **No-code provisioning**: Use a no-code module from the registry to create a new workspace and deploy the module's resources. Refer to [Provisioning No-Code Infrastructure](/terraform/cloud-docs/no-code-provisioning/provisioning) for instructions.
Each workspace belongs to a project. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information.
## Requirements
You must be a member of a team with one of the following permissions enabled to create and manage workspaces:
- **Manage all projects**
- **Manage all workspaces**
- **Admin** permission group for a project
<!-- permissions-citation: intentionally unused, keep for maintainers -->
## Workspace naming
We recommend using consistent and informative names for new workspaces. One common approach is combining the workspace's important attributes in a consistent order. Attributes can be any defining characteristic of a workspace, such as the component, the component's run environment, and the region where the workspace is provisioning infrastructure.
This strategy could produce the following example workspace names:
- networking-prod-us-east
- networking-staging-us-east
- networking-prod-eu-central
- networking-staging-eu-central
- monitoring-prod-us-east
- monitoring-staging-us-east
- monitoring-prod-eu-central
- monitoring-staging-eu-central
You can add additional attributes to your workspace names as needed. For example, you may add the infrastructure provider, datacenter, or line of business.
We recommend using 90 characters or less for the name of your workspace.
## Create a workspace
[workdir]: /terraform/cloud-docs/workspaces/settings#terraform-working-directory
[trigger]: /terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering
[branch]: /terraform/cloud-docs/workspaces/settings/vcs#vcs-branch
[submodules]: /terraform/cloud-docs/workspaces/settings/vcs#include-submodules-on-clone
Complete the following steps to use the HCP Terraform or Terraform Enterprise UI to create a workspace:
1. Log in and choose your organization.
1. Click **New** and choose **Workspace** from the drop-down menu.
1. If you have multiple projects, HCP Terraform may prompt you to choose the project to create the workspace in. Only users on teams with permissions for the entire project or the specific workspace can access the workspace. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information.
1. Choose a workflow type.
1. Complete the following steps if you are creating a workspace that follows the VCS workflow:
   1. Choose an existing version control provider from the list or configure a new system. You must enable the workspace project to connect to your provider. Refer to [Connecting VCS Providers](/terraform/cloud-docs/vcs) for more details.
   1. If you choose the GitHub App provider, choose an organization and repository when prompted. The list only displays the first 100 repositories from your VCS provider. If your repository is missing from the list, enter the repository ID in the text field.
   1. Refer to the following topics for information about configuring workspace settings in the **Advanced options** screen:
      - [Terraform Working Directory][workdir]
      - [Automatic Run Triggering][trigger]
      - [VCS branch][branch]
      - [Include submodules on clone][submodules]
1. Specify a name for the workspace. VCS workflow workspaces default to the name of the repository. The name must be unique within the organization and can include letters, numbers, hyphens, and underscores. Refer to [Workspace naming](#workspace-naming) for additional information.
1. Add an optional description for the workspace. The description appears at the top of the workspace in the HCP Terraform UI.
1. Click **Create workspace** to finish.
For CLI- or API-driven workflows, the system opens the new workspace overview. For version control workspaces, the **Configure Terraform variables** page appears.
## Configure Terraform variables for VCS workflows
After you create a new workspace from a version control repository, HCP Terraform scans its configuration files for [Terraform variables](/terraform/cloud-docs/workspaces/variables#terraform-variables) and displays variables without default values or variables that are undefined in an existing [global or project-scoped variable set](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets). Terraform cannot perform successful runs in the workspace until you set values for these variables.
Choose one of the following actions:
- To skip this step, click **Go to workspace overview**. You can [load these variables from files](/terraform/cloud-docs/workspaces/variables/managing-variables#loading-variables-from-files) or create and set values for them later from within the workspace. HCP Terraform does not automatically scan your configuration again; you can only add variables from within the workspace individually.
- To configure variables, enter a value for each variable on the page. You may want to leave a variable empty if you plan to provide it through another source, like an `auto.tfvars` file. Click **Save variables** to add these variables to the workspace.
## Next steps
If you have already configured all Terraform variables, we recommend [manually starting a run](/terraform/cloud-docs/run/ui#manually-starting-runs) to prepare VCS-driven workspaces. You may also want to do one or more of the following actions:
- [Upload configuration versions](/terraform/cloud-docs/workspaces/configurations#providing-configuration-versions): If you chose the API- or CLI-driven workflow, you must upload configuration versions for the workspace.
- [Edit environment variables](/terraform/cloud-docs/workspaces/variables): Shell environment variables store credentials and customize Terraform's behavior.
- [Edit additional workspace settings](/terraform/cloud-docs/workspaces/settings): This includes notifications, permissions, and run triggers to start runs automatically.
- [Learn more about running Terraform in your workspace](/terraform/cloud-docs/run/remote-operations): This includes how Terraform processes runs within the workspace, run modes, run states, and other operations.
- [Create workspace tags](/terraform/cloud-docs/workspaces/tags): Add tags to your workspaces so that you can organize and track them.
- [Browse workspaces](/terraform/cloud-docs/workspaces/browse): Use the interfaces available in the UI to browse, sort, and filter workspaces so that you can track resource consumption.
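As a sketch of the Terraform Enterprise provider method for creating workspaces, the following configuration is a minimal example. The organization and workspace names here are placeholders, not values from this guide; `name` and `organization` are the two core arguments of the `tfe_workspace` resource.

```hcl
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

# Placeholder values: substitute your own organization name and a workspace
# name that follows your naming scheme (component-environment-region).
resource "tfe_workspace" "networking_prod_us_east" {
  name         = "networking-prod-us-east"
  organization = "my-organization"
}
```

Applying this configuration requires credentials for the `tfe` provider, for example a `TFE_TOKEN` environment variable with permission to manage workspaces in the organization.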
---
page_title: JSON Filtering - HCP Terraform
description: Learn how to create custom datasets on pages that display JSON data.
---
# About JSON Data Filtering
Certain pages where JSON data is displayed, such as the [state
viewer](/terraform/cloud-docs/workspaces/state) and [policy check JSON data
viewer](/terraform/cloud-docs/policy-enforcement/sentinel/json), allow you to filter the results. This
enables you to see just the data you need, and even create entirely new datasets
to see data in the way you want to see it!
-> **NOTE:** _Filtering_ the data in the JSON viewer is separate from
_searching_ it. To search, press Control-F (or Command-F on MacOS). You can
search and apply a filter at the same time.
## Entering a Filter
Filters are entered by putting the filter in the aptly named **filter** box in
the JSON viewer. After entering the filter, pressing **Apply** or the enter key
on your keyboard will apply the filter. The filtered results, if any, are
displayed in the result box. Clearing the filter will restore the original JSON
data.
## Filter Language
The JSON filter language is a small subset of the
[jq](https://stedolan.github.io/jq/) JSON filtering language. Selectors,
literals, indexes, slices, iterators, and pipes are supported, as are array
and object construction. At this time, parentheses and more complex operations
such as mathematical operators, conditionals, and functions are not supported.
Below is a quick reference of some of the more basic functions to get you
started.
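As a quick taste before the reference sections below, here is a filter that combines a selector, an iterator, object construction, and a pipe (the input document is hypothetical):
```
Input:   {"instances": [{"id": "i-1", "state": "running"}, {"id": "i-2", "state": "stopped"}]}
Filter:  .instances[] | {instance: .id}
Result:  {"instance": "i-1"}, {"instance": "i-2"}
```
Each of these constructs is described in the sections that follow.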
### Selectors
Selectors allow you to pick an index out of a JSON object, and are written as
`.KEY.SUBKEY`. So, as an example, given an object of
`{"foo": {"bar": "baz"}}`, and the filter `.foo.bar`, the result would be
displayed as `"baz"`.
A single dot (`.`) without anything else always denotes the current value,
unaltered.
### Indexes
Indexes can be used to fetch array elements, or select non-alphanumeric object
fields. They are written as `[0]` or `["foo-bar"]`, depending on the purpose.
Given an object of `{"foo-bar": ["baz", "qux"]}` and the filter of
`.["foo-bar"][0]`, the result would be displayed as `"baz"`.
### Slices
Arrays can be sliced to get a subset of an array. The syntax is `[LOW:HIGH]`.
Given an array of `[0, 1, 2, 3, 4]` and the filter of
`.[1:3]`, the result would be displayed as `[1, 2]`. This also illustrates that
the result of the slice operation is always of length HIGH-LOW.
Slices can also be applied to strings, in which case a substring is returned with the
same rules applied, with the first character of the string being index 0.
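For example, slicing a hypothetical string value:
```
Input:   "filtering"
Filter:  .[0:6]
Result:  "filter"
```
The result has length HIGH-LOW (6 characters), matching the rule above for arrays.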
### Iterators
Iterators can iterate over arrays and objects. The syntax is `[]`.
Iterators iterate over the _values_ of an object only. So given an object of
`{"foo": 1, "bar": 2}`, the filter `.[]` would yield an iteration of `1, 2`.
Note that iteration results are not necessarily always arrays. Iterators are
handled in a special fashion when dealing with pipes and object creators (see
below).
### Array Construction
Wrapping an expression in brackets (`[ ... ]`) creates an array with the
sub-expressions inside the array. The results are always concatenated.
For example, for an object of `{"foo": [1, 2], "bar": [3, 4]}`, the construction
expressions `[.foo[], .bar[]]` and `[.[][]]`, are the same, producing the
resulting array `[1, 2, 3, 4]`.
### Object Construction
Wrapping an expression in curly braces `{KEY: EXPRESSION, ...}` creates an
object.
Iterators work uniquely with object construction in that an object is
constructed for each _iteration_ that the iterator produces.
As a basic example, consider an array `[1, 2, 3]`. While the expression
`{foo: .}` will produce `{"foo": [1, 2, 3]}`, adding an iterator to the
expression so that it reads `{foo: .[]}` will produce 3 individual objects:
`{"foo": 1}`, `{"foo": 2}`, and `{"foo": 3}`.
### Pipes
Pipes allow the results of one expression to be fed into another. This can be
used to re-write expressions to help reduce complexity.
Iterators work with pipes in a fashion similar to object construction, where the
expression on the right-hand side of the pipe is evaluated once for every
iteration.
As an example, for the object `{"foo": {"a": 1}, "bar": {"a": 2}}`, both the
expression `{z: .[].a}` and `.[] | {z: .a}` produce the same result: `{"z": 1}`
and `{"z": 2}`.
---
page_title: Health - HCP Terraform
description: |-
HCP Terraform can continuously monitor workspaces to assess whether their real infrastructure matches the requirements defined in their Terraform configuration.
---
# Health
HCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration. Health assessments include the following types of evaluations:
- [Drift detection](#drift-detection) determines whether your real-world infrastructure matches your Terraform configuration.
- [Continuous validation](#continuous-validation) determines whether custom conditions in the workspace’s configuration continue to pass after Terraform provisions the infrastructure.
When you enable health assessments, HCP Terraform periodically runs health assessments for your workspace. Refer to [Health Assessment Scheduling](#health-assessment-scheduling) for details.
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/health-assessments.mdx'
<!-- END: TFC:only name:pnp-callout -->
## Permissions
Working with health assessments requires the following permissions:
- To view health status for a workspace, you need read access to that workspace.
- To change organization health settings, you must be an [organization owner](/terraform/cloud-docs/users-teams-organizations/permissions#organization-owners).
- To change a workspace’s health settings, you must be an [administrator for that workspace](/terraform/cloud-docs/users-teams-organizations/permissions#workspace-admins).
<!-- BEGIN: TFC:only name:health-assessments -->
- To trigger [on-demand health assessments](/terraform/cloud-docs/workspaces/health#on-demand-assessments) for a workspace, you must be an [administrator for that workspace](/terraform/cloud-docs/users-teams-organizations/permissions#workspace-admins).
<!-- END: TFC:only name:health-assessments -->
## Workspace requirements
Workspaces require the following settings to receive health assessments:
- Terraform version 0.15.4+ for drift detection only
- Terraform version 1.3.0+ for drift detection and continuous validation
- [Remote execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) or [Agent execution mode](/terraform/cloud-docs/agents/agent-pools#configure-workspaces-to-use-the-agent) for Terraform runs
The latest Terraform run in the workspace must have been successful. If the most recent run ended in an errored, canceled, or discarded state, HCP Terraform pauses health assessments until there is a successfully applied run.
The workspace must also have at least one run in which Terraform successfully applies a configuration. HCP Terraform does not perform health assessments in workspaces with no real-world infrastructure.
## Enable health assessments
You can enforce health assessments across all eligible workspaces in an organization within the [organization settings](/terraform/cloud-docs/users-teams-organizations/organizations#health). Enforcing health assessments at an organization-level overrides workspace-level settings. You can only enable health assessments within a specific workspace when HCP Terraform is not enforcing health assessments at the organization level.
To enable health assessments within a workspace:
1. Verify that your workspace satisfies the [requirements](#workspace-requirements).
1. Go to the workspace and click **Settings > Health**.
1. Select **Enable** under **Health Assessments**.
1. Click **Save settings**.
## Health assessment scheduling
When you enable health assessments for a workspace, HCP Terraform runs the first health assessment based on whether there are active Terraform runs for the workspace:
- **No active runs:** A few minutes after you enable the feature.
- **Active speculative plan:** A few minutes after that plan is complete.
- **Other active runs:** During the next assessment period.
After the first health assessment, HCP Terraform starts a new health assessment during the next assessment period if there are no active runs in the workspace. Health assessments may take longer to complete when you enable health assessments in many workspaces at once or your workspace contains a complex configuration with many resources.
A health assessment never interrupts or interferes with runs. If you start a new run during a health assessment, HCP Terraform cancels the current assessment and runs the next assessment during the next assessment period. This behavior may prevent HCP Terraform from performing health assessments in workspaces with frequent runs.
HCP Terraform pauses health assessments if the latest run ended in an errored state. This behavior occurs for all run types, including plan-only runs and speculative plans. Once the workspace completes a successful run, HCP Terraform restarts health assessments during the next assessment period.
Terraform Enterprise administrators can modify their installation's [assessment frequency and number of maximum concurrent assessments](/terraform/enterprise/admin/application/general#health-assessments) from the admin settings console.
<!-- BEGIN: TFC:only name:health-assessments -->
### On-demand assessments
-> **Note:** On-demand assessments are only available in the HCP Terraform user interface.
If you are an administrator for a workspace and it satisfies all [assessment requirements](/terraform/cloud-docs/workspaces/health#workspace-requirements), you can trigger a new assessment by clicking **Start health assessment** on the workspace's **Health** page.
After clicking **Start health assessment**, the workspace displays a message in the bottom left-hand corner of the page to indicate if it successfully triggered a new assessment. The time it takes to complete an assessment can vary based on network latency and the number of resources managed by the workspace.
You cannot trigger another assessment while one is in progress. An on-demand assessment resets the scheduling for automated assessments, so HCP Terraform waits to run the next assessment until the next scheduled period.
<!-- END: TFC:only name:health-assessments -->
### Concurrency
If you enable health assessments on multiple workspaces, assessments may run concurrently. Health assessments do not affect your concurrency limit. HCP Terraform also monitors and controls health assessment concurrency to avoid issues for large-scale deployments with thousands of workspaces. However, HCP Terraform performs health assessments in batches, so health assessments may take longer to complete when you enable them in a large number of workspaces.
### Notifications
HCP Terraform sends [notifications](/terraform/cloud-docs/workspaces/settings/notifications) about health assessment results according to your workspace’s settings.
## Workspace health status
On the organization's **Workspaces** page, HCP Terraform displays a **Health warning** status for workspaces with infrastructure drift or failed continuous validation checks.
On the right of a workspace’s overview page, HCP Terraform displays a **Health** bar that summarizes the results of the last health assessment.
- The **Drift** summary shows the total number of resources in the configuration and the number of resources that have drifted.
- The **Checks** summary shows the number of passed, failed, and unknown statuses for objects with continuous validation checks.
<!-- BEGIN: TFC:only name:health-in-explorer -->
### View workspace health in explorer
The [Explorer page](/terraform/cloud-docs/workspaces/explorer) presents a condensed overview of the health status of the workspaces within your organization. You can see the following information:
- Workspaces that are monitoring workspace health
- Status of any configured continuous validation checks
- Count of drifted resources for each workspace
For additional details on the data available for reporting, refer to the [Explorer](/terraform/cloud-docs/workspaces/explorer) documentation.

<!-- END: TFC:only name:health-in-explorer -->
## Drift detection
Drift detection helps you identify situations where your actual infrastructure no longer matches the configuration defined in Terraform. This deviation is known as _configuration drift_. Configuration drift occurs when changes are made outside Terraform's regular process, leading to inconsistencies between the remote objects and your configured infrastructure.
For example, a teammate could create configuration drift by directly updating a storage bucket's settings with conflicting configuration settings using the cloud provider's console. Drift detection could detect these differences and recommend steps to address and rectify the discrepancies.
Configuration drift differs from state drift. Drift detection does not detect state drift.
Configuration drift happens when external changes affecting remote objects invalidate your infrastructure configuration. State drift occurs when external changes affecting remote objects _do not_ invalidate your infrastructure configuration. Refer to [Refresh-Only Mode](/terraform/cloud-docs/run/modes-and-options#refresh-only-mode) to learn more about remediating state drift.
### View workspace drift
To view the drift detection results from the latest health assessment, go to the workspace and click **Health > Drift**. If there is configuration drift, HCP Terraform proposes the necessary changes to bring the infrastructure back in sync with its configuration.
### Resolve drift
You can use one of the following approaches to correct configuration drift:
- **Overwrite drift**: If you do not want the drift's changes, queue a new plan and apply the changes to revert your real-world infrastructure to match your Terraform configuration.
- **Update Terraform configuration:** If you want the drift's changes, modify your Terraform configuration to include the changes and push a new configuration version. This prevents Terraform from reverting the drift during the next apply. Refer to the [Manage Resource Drift](/terraform/tutorials/state/resource-drift) tutorial for a detailed example.
## Continuous validation
Continuous validation regularly verifies whether your configuration’s custom assertions continue to pass, validating your infrastructure. For example, you can monitor whether your website returns an expected status code, or whether an API gateway certificate is valid. Identifying failed assertions helps you resolve the failure and prevent errors during your next Terraform operation.
Continuous validation evaluates preconditions, postconditions, and check blocks as part of an assessment, but we recommend using [check blocks](/terraform/language/checks) for post-apply monitoring. Use check blocks to create custom rules to validate your infrastructure's resources, data sources, and outputs.
### Preventing false positives
Health assessments create a speculative plan to access the current state of your infrastructure. Terraform evaluates any check blocks in your configuration as the last step of creating the speculative plan. If your configuration relies on data sources and the values queried by a data source change between the time of your last run and the assessment, the speculative plan will include those changes. HCP Terraform will not modify your infrastructure as part of an assessment, but it can use those updated values to evaluate checks. This may lead to false positive results for alerts since your infrastructure did not yet change.
To ensure your checks evaluate the current state of your configuration instead of against a possible future change, use nested data sources that query your actual resource configuration, rather than a computed latest value. Refer to the [AMI image scenario](#asserting-up-to-date-amis-for-compute-instances) below for an example.
### Example use cases
Review the provider documentation for `check` block examples with [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/continuous-validation-examples), [Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/tfc-check-blocks), and [GCP](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/google-continuous-validation).
#### Monitoring the health of a provisioned website
The following example uses the [HTTP](https://registry.terraform.io/providers/hashicorp/http/latest/docs) Terraform provider and a [scoped data source](/terraform/language/checks#scoped-data-sources) within a [`check` block](/terraform/language/checks) to assert the Terraform website returns a `200` status code, indicating it is healthy.
```hcl
check "health_check" {
  data "http" "terraform_io" {
    url = "https://www.terraform.io"
  }

  assert {
    condition     = data.http.terraform_io.status_code == 200
    error_message = "${data.http.terraform_io.url} returned an unhealthy status code"
  }
}
```
Continuous Validation alerts you if the website returns any status code besides `200` while Terraform evaluates this assertion. You can also find failures in your workspace's [Continuous Validation Results](#view-continuous-validation-results) page. You can configure continuous validation alerts in your workspace's [notification settings](/terraform/cloud-docs/workspaces/settings/notifications).
#### Monitoring certificate expiration
[Vault](https://www.vaultproject.io/) lets you secure, store, and tightly control access to tokens, passwords, certificates, encryption keys, and other sensitive data. The following example uses a `check` block to monitor for the expiration of a Vault certificate.
```hcl
resource "vault_pki_secret_backend_cert" "app" {
  backend     = vault_mount.intermediate.path
  name        = vault_pki_secret_backend_role.test.name
  common_name = "app.my.domain"
}

check "certificate_valid" {
  assert {
    condition     = !vault_pki_secret_backend_cert.app.renew_pending
    error_message = "Vault cert is ready to renew."
  }
}
```
#### Asserting up-to-date AMIs for compute instances
[HCP Packer](/hcp/docs/packer) stores metadata about your [Packer](https://www.packer.io/) images. The following example check fails when there is a newer AMI version available.
```hcl
data "hcp_packer_artifact" "hashiapp_image" {
  bucket_name  = "hashiapp"
  channel_name = "latest"
  platform     = "aws"
  region       = "us-west-2"
}

resource "aws_instance" "hashiapp" {
  ami                         = data.hcp_packer_artifact.hashiapp_image.external_identifier
  instance_type               = var.instance_type
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.hashiapp.id
  vpc_security_group_ids      = [aws_security_group.hashiapp.id]
  key_name                    = aws_key_pair.generated_key.key_name

  tags = {
    Name = "hashiapp"
  }
}

check "ami_version_check" {
  data "aws_instance" "hashiapp_current" {
    instance_tags = {
      Name = "hashiapp"
    }
  }

  assert {
    condition     = aws_instance.hashiapp.ami == data.hcp_packer_artifact.hashiapp_image.external_identifier
    error_message = "Must use the latest available AMI, ${data.hcp_packer_artifact.hashiapp_image.external_identifier}."
  }
}
```
### View continuous validation results
To view the continuous validation results from the latest health assessment, go to the workspace and click **Health > Continuous validation**.
The page shows all of the resources, outputs, and data sources with custom assertions that HCP Terraform evaluated. Next to each object, HCP Terraform reports whether the assertion passed or failed. If one or more assertions fail, HCP Terraform displays the error messages for each assertion.
The health assessment page displays each assertion by its [named value](/terraform/language/expressions/references). A `check` block's named value combines the prefix `check` with its configuration name.
If your configuration contains multiple [preconditions and postconditions](/terraform/language/expressions/custom-conditions#preconditions-and-postconditions) within a single resource, output, or data source, HCP Terraform will not show the results of individual conditions unless they fail. If all custom conditions on the object pass, HCP Terraform reports that the entire check passed. The assessment results will display the results of any precondition and postconditions alongside the results of any assertions from `check` blocks, identified by the named values of their parent block.
bottom lefthand corner of the page to indicate if it successfully triggered a new assessment The time it takes to complete an assessment can vary based on network latency and the number of resources managed by the workspace You cannot trigger another assessment while one is in progress An on demand assessment resets the scheduling for automated assessments so HCP Terraform waits to run the next assessment until the next scheduled period END TFC only name health assessments Concurrency If you enable health assessments on multiple workspaces assessments may run concurrently Health assessments do not affect your concurrency limit HCP Terraform also monitors and controls health assessment concurrency to avoid issues for large scale deployments with thousands of workspaces However HCP Terraform performs health assessments in batches so health assessments may take longer to complete when you enable them in a large number of workspaces Notifications HCP Terraform sends notifications terraform cloud docs workspaces settings notifications about health assessment results according to your workspace s settings Workspace health status On the organization s Workspaces page HCP Terraform displays a Health warning status for workspaces with infrastructure drift or failed continuous validation checks On the right of a workspace s overview page HCP Terraform displays a Health bar that summarizes the results of the last health assessment The Drift summary shows the total number of resources in the configuration and the number of resources that have drifted The Checks summary shows the number of passed failed and unknown statuses for objects with continuous validation checks BEGIN TFC only name health in explorer View workspace health in explorer The Explorer page terraform cloud docs workspaces explorer presents a condensed overview of the health status of the workspaces within your organization You can see the following information Workspaces that are monitoring workspace health 
Status of any configured continuous validation checks Count of drifted resources for each workspace For additional details on the data available for reporting refer to the Explorer terraform cloud docs workspaces explorer documentation Viewing Workspace Health in Explorer img docs tfc explorer health png END TFC only name health in explorer Drift detection Drift detection helps you identify situations where your actual infrastructure no longer matches the configuration defined in Terraform This deviation is known as configuration drift Configuration drift occurs when changes are made outside Terraform s regular process leading to inconsistencies between the remote objects and your configured infrastructure For example a teammate could create configuration drift by directly updating a storage bucket s settings with conflicting configuration settings using the cloud provider s console Drift detection could detect these differences and recommend steps to address and rectify the discrepancies Configuration drift differs from state drift Drift detection does not detect state drift Configuration drift happens when external changes affecting remote objects invalidate your infrastructure configuration State drift occurs when external changes affecting remote objects do not invalidate your infrastructure configuration Refer to Refresh Only Mode terraform cloud docs run modes and options refresh only mode to learn more about remediating state drift View workspace drift To view the drift detection results from the latest health assessment go to the workspace and click Health Drift If there is configuration drift HCP Terraform proposes the necessary changes to bring the infrastructure back in sync with its configuration Resolve drift You can use one of the following approaches to correct configuration drift Overwrite drift If you do not want the drift s changes queue a new plan and apply the changes to revert your real world infrastructure to match your Terraform configuration 
Update Terraform configuration If you want the drift s changes modify your Terraform configuration to include the changes and push a new configuration version This prevents Terraform from reverting the drift during the next apply Refer to the Manage Resource Drift terraform tutorials state resource drift tutorial for a detailed example Continuous validation Continuous validation regularly verifies whether your configuration s custom assertions continue to pass validating your infrastructure For example you can monitor whether your website returns an expected status code or whether an API gateway certificate is valid Identifying failed assertions helps you resolve the failure and prevent errors during your next time Terraform operation Continuous validation evaluates preconditions postconditions and check blocks as part of an assessment but we recommend using check blocks terraform language checks for post apply monitoring Use check blocks to create custom rules to validate your infrastructure s resources data sources and outputs Preventing false positives Health assessments create a speculative plan to access the current state of your infrastructure Terraform evaluates any check blocks in your configuration as the last step of creating the speculative plan If your configuration relies on data sources and the values queried by a data source change between the time of your last run and the assessment the speculative plan will include those changes HCP Terraform will not modify your infrastructure as part of an assessment but it can use those updated values to evaluate checks This may lead to false positive results for alerts since your infrastructure did not yet change To ensure your checks evaluate the current state of your configuration instead of against a possible future change use nested data sources that query your actual resource configuration rather than a computed latest value Refer to the AMI image scenario asserting up to date amis for compute instances 
below for an example Example use cases Review the provider documentation for check block examples with AWS https registry terraform io providers hashicorp aws latest docs guides continuous validation examples Azure https registry terraform io providers hashicorp azurerm latest docs guides tfc check blocks and GCP https registry terraform io providers hashicorp google latest docs guides google continuous validation Monitoring the health of a provisioned website The following example uses the HTTP https registry terraform io providers hashicorp http latest docs Terraform provider and a scoped data source terraform language checks scoped data sources within a check block terraform language checks to assert the Terraform website returns a 200 status code indicating it is healthy hcl check health check data http terraform io url https www terraform io assert condition data http terraform io status code 200 error message data http terraform io url returned an unhealthy status code Continuous Validation alerts you if the website returns any status code besides 200 while Terraform evaluates this assertion You can also find failures in your workspace s Continuous Validation Results view continuous validation results page You can configure continuous validation alerts in your workspace s notification settings terraform cloud docs workspaces settings notifications Monitoring certificate expiration Vault https www vaultproject io lets you secure store and tightly control access to tokens passwords certificates encryption keys and other sensitive data The following example uses a check block to monitor for the expiration of a Vault certificate hcl resource vault pki secret backend cert app backend vault mount intermediate path name vault pki secret backend role test name common name app my domain check certificate valid assert condition vault pki secret backend cert app renew pending error message Vault cert is ready to renew Asserting up to date AMIs for compute instances HCP 
Packer hcp docs packer stores metadata about your Packer https www packer io images The following example check fails when there is a newer AMI version available hcl data hcp packer artifact hashiapp image bucket name hashiapp channel name latest platform aws region us west 2 resource aws instance hashiapp ami data hcp packer artifact hashiapp image external identifier instance type var instance type associate public ip address true subnet id aws subnet hashiapp id vpc security group ids aws security group hashiapp id key name aws key pair generated key key name tags Name hashiapp check ami version check data aws instance hashiapp current instance tags Name hashiapp assert condition aws instance hashiapp ami data hcp packer artifact hashiapp image external identifier error message Must use the latest available AMI data hcp packer artifact hashiapp image external identifier View continuous validation results To view the continuous validation results from the latest health assessment go to the workspace and click Health Continuous validation The page shows all of the resources outputs and data sources with custom assertions that HCP Terraform evaluated Next to each object HCP Terraform reports whether the assertion passed or failed If one or more assertions fail HCP Terraform displays the error messages for each assertion The health assessment page displays each assertion by its named value terraform language expressions references A check block s named value combines the prefix check with its configuration name If your configuration contains multiple preconditions and postconditions terraform language expressions custom conditions preconditions and postconditions within a single resource output or data source HCP Terraform will not show the results of individual conditions unless they fail If all custom conditions on the object pass HCP Terraform reports that the entire check passed The assessment results will display the results of any precondition and 
postconditions alongside the results of any assertions from check blocks identified by the named values of their parent block |
---
page_title: Explorer for Workspace Visibility - HCP Terraform
description: >-
Surface information from across workspaces and projects in your organization.
tfc_only: true
---
# Explorer for Workspace Visibility
As your organization grows, keeping track of your sprawling infrastructure estate becomes increasingly complicated. The explorer for workspace visibility helps surface a wide range of valuable information from across your organization.
Open the explorer for workspace visibility by clicking **Explorer** in your organization's top-level side navigation.
The **Explorer** page displays buttons grouped by **Types** and **Use Cases**. Each button offers a new view into your organization or workspace's data. Clicking a button triggers the explorer to perform a query and display the results in a table of data.

The **Types** buttons present generic use cases. For example, "Workspaces" displays a paginated and unfiltered list of your organization's workspaces and each workspace's accompanying data. The **Use Cases** buttons present sorted and filtered results to give you a focused view of your organizational data.
You can sort each column of the explorer results table. Clicking a hyperlinked field shows increasingly specific views of your data. For example, a workspace's modules count field links to a view of that workspace's associated modules.
Clearing a query takes you back to the explorer landing page. To clear a query, click the back arrow at the top left of your current explorer view page.
## Permissions
The explorer for workspace visibility requires access to a broad range of an organization's data. To use the explorer, you must have either of the following organization permissions:
- [Organization owner](/terraform/cloud-docs/users-teams-organizations/permissions#organization-owners)
- [View all workspaces](/terraform/cloud-docs/users-teams-organizations/permissions#view-all-workspaces) or greater
## Types
The explorer for workspace visibility supports four types:
- [Workspaces](/terraform/cloud-docs/workspaces)
- [Modules](/terraform/language/modules)
- [Providers](/terraform/language/providers)
- [Terraform Versions](/terraform/language/upgrade-guides#upgrading-to-terraform-v1-4)
## Use cases
The explorer for workspace visibility provides the following queries for specific use cases:
- **Top module versions** shows modules sorted by usage frequency.
- **Latest Terraform versions** displays a sorted list of Terraform versions in use.
- **Top provider versions** lists providers sorted by usage frequency.
- **Workspaces without VCS** lists workspaces not backed by VCS.
- **Workspace VCS source** lists VCS-backed workspaces sorted by repository name.
- **Workspaces with failed checks** lists workspaces that failed at least one [continuous validation](/terraform/enterprise/workspaces/health#continuous-validation) check.
- **Drifted workspaces** displays [workspaces with drift](/terraform/enterprise/workspaces/health#drift-detection) and relevant drift information.
- **All workspace versions** is a simplified view of your workspaces with current run and version information.
- **Runs by status** provides a run-focused view by sorting workspaces by their current run status.
- **Top Terraform versions** lists all Terraform versions by usage frequency.
- **Latest updated workspaces** displays your most recently updated workspaces.
- **Oldest applied workspaces** sorts workspaces by the date of the current applied run.
## Custom filter conditions
The explorer's query builder allows you to execute queries with custom filter conditions against any of the supported [types](#types).
To use the query builder, select a type or use case from the explorer home page. Expand the **Modify conditions** section to show the filter conditions in use for the current query and to define new filter conditions.

Each filter condition is represented by a row of inputs, each made up of a target field, operator, and value.
1. Choose a target field from the first dropdown to select the field that the explorer will run the query against.
1. Choose an operator from the second dropdown. The options available will vary based on the target field's data type.
1. Provide a value in the third field to compare against the target field's value.
1. Click **Run Query** to evaluate the filter conditions.
-> **Tip:** Inspect the filter conditions used by the various pre-canned [**Use Cases**](#use-cases) to learn how they are constructed.
You can create multiple filter conditions for a query. When you provide multiple conditions, the explorer evaluates them at query time with a logical AND. To add a new condition, use the **Add condition** button below the condition list. To remove a condition, use the trash bin button on the right-hand side of the condition.
## Save a view
You can save explorer views to revisit a custom query or use case.
-> **Note**: The ability to save views in the explorer is in public beta. All APIs and workflows are subject to change.
You can save the explorer’s view of your data by performing the following steps:
1. Navigate to the **Explorer** page in the sidebar of your organization.
1. Click on a tile in the **Types** or **Use cases** section.
1. Define a query using the query building interface.
1. Optionally, adjust which columns you want your view to include. By default, the explorer displays all available columns.
1. Open the **Actions** dropdown menu and select **Save view**, saving the last query you performed in the explorer.
1. Specify a new, unique name for your saved view.
1. Click **Save**.
When the explorer saves a view, it saves the last query it performed. If you change a query and do not rerun it, the explorer does not save those changes.
After you have saved a view of your data, you can access it from the explorer’s main page underneath the **Saved views** tab. Saved views keep track of the following attributes:
* The name of the saved view.
* The type of data you are querying, either module, workspace, provider, or Terraform versions.
* The owner of the saved view.
* When the saved view was last updated.
You can rename or delete a saved view from the **Saved views** tab by opening the ellipsis menu next to a view and selecting either **Rename** or **Delete**.
### Manage a saved view
Complete the following steps to update a saved view:
1. Open the view in the explorer and make changes.
1. Open the **Actions** dropdown menu and select **Save view**.
Complete the following steps to save a new view based on an existing saved view:
1. Open a saved view in the explorer and make changes.
1. Open the **Actions** dropdown menu and select **Save as**.
1. Enter a name for this new saved view.
1. Click **Save**.
Complete the following steps to delete a saved view:
1. Open a saved view in the explorer.
1. Choose **Delete view** from the **Actions** drop-down menu.
1. Click **Delete** when prompted to confirm that you want to permanently delete the view.
---
page_title: Create workspace tags
description: Learn how to create tags for your workspaces so that you can organize workspaces. Tagging workspaces also lets you sort and filter workspaces in the UI.
---
# Create workspace tags
This topic describes how to attach tags to your workspaces so that you can organize workspaces. Tagging workspaces also helps you sort and filter workspaces in the UI and enables you to associate Terraform configurations with several workspaces.
## Overview
You can create tags and attach them to your workspaces. Tagging workspaces helps organization administrators organize, sort, and filter workspaces so that they can track resource consumption. For example, you could add a `cost-center` tag so that administrators can sort workspaces according to cost center.
HCP Terraform stores tags as either single-value tags or key-value pairs. You can also migrate existing single-value tags to the key-value scheme. Refer to [Migrating to key-value tags](#migrating-to-key-value-tags) for instructions.
~> **Adding tags stored as key-value pairs is in private beta and unavailable for some users.** Contact your HashiCorp representative for information about participating in the private beta.
Single-value tags enable you to associate a single Terraform configuration file with several workspaces according to tag. Refer to the following topics in the Terraform CLI and configuration language documentation for additional information:
- [`terraform{}.cloud{}.workspaces` reference](/terraform/language/terraform#terraform-cloud-workspaces)
- [Define connection settings](/terraform/cli/cloud/settings#define-connection-settings)
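As a sketch, a `cloud` block that targets workspaces by tag looks like the following. The organization and tag names are placeholders:

```hcl
terraform {
  cloud {
    # Placeholder organization name.
    organization = "example-org"

    workspaces {
      # Hypothetical tag names. The CLI associates this configuration
      # with every workspace in the organization that carries all of
      # these tags.
      tags = ["networking", "prod"]
    }
  }
}
```

When you run `terraform init` with a configuration like this, the CLI lists the workspaces that match these tags and prompts you to select or create one.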
### Reserved tags
You can reserve a set of tag keys for each organization. Reserved tag keys appear as suggestions when people create tags for projects and workspaces so that you can use consistent terms for tags. Refer to [Create and manage reserved tags](/terraform/cloud-docs/users-teams-organizations/organizations#create-and-manage-reserved-tags) for additional information.
~> **Reserved tags and project tags are in private beta and unavailable for some users.** Contact your HashiCorp representative for information about participating in the private beta.
## Requirements
- You must be a member of a team with the **Write** permission group enabled for the workspace to create tags for a workspace.
- You must be a member of a team with the **Admin** permission group enabled for the workspace to delete tags on a workspace.
You cannot create tags for a workspace using the CLI.
## Define tags
Complete the following steps to define workspace tags:
<Tabs>
<Tab heading="Key-value tags">
1. Open your workspace.
1. Click either the count link for the **Tags** label or **Manage Tags** in the **Tags** card in the right sidebar to open the **Manage workspace tags** drawer.
1. Click **+Add tag** to define a new key-value pair. Refer to [Tag syntax](#tag-syntax) for information about supported characters.
1. Tags inherited from the project appear in the **Inherited Tags** section. You can attach new key-value pairs to their projects to override inherited tags. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information about using tags in projects.
You cannot override reserved tag keys created by the organization administrator. Refer to [Create and manage reserved tags](/terraform/cloud-docs/users-teams-organizations/organizations#create-and-manage-reserved-tags) for additional information.
You can also click on tag links in the **Inherited Tags** section to view workspaces that use the same tag.
1. Click **Save**.
</Tab>
<Tab heading="Single-value tags">
1. Open your workspace.
1. Open the **Add a tag** drop-down menu and enter a value. If a tag value already exists, you can attach it to the workspace. Otherwise, HCP Terraform creates a new tag and attaches it to the workspace. Refer to [Tag syntax](#tag-syntax) for information about supported characters.
</Tab>
</Tabs>
Tags that you create appear in the tags management screen in the organization settings. Refer to [Organizations](/terraform/cloud-docs/users-teams-organizations/organizations) for additional information.
## Update tags
<Tabs>
<Tab heading="Key-value tags">
1. Open your workspace.
1. Click either the count link for the **Tags** label or **Manage Tags** in the **Tags** card in the right sidebar to open the **Manage workspace tags** drawer.
1. In the **Direct Tags** section, modify either a key, value, or both and click **Save**.
</Tab>
<Tab heading="Single-value tags">
You cannot manage single-value tags in the UI. Instead, use the following workspace API endpoints to manage single-value tags:
- [`POST /workspaces/:workspace_id/relationships/tags`](/terraform/cloud-docs/api-docs/workspaces#add-tags-to-a-workspace): Adds a new single-value workspace tag.
- [`DELETE /workspaces/:workspace_id/relationships/tags`](/terraform/cloud-docs/api-docs/workspaces#remove-tags-from-workspace): Deletes a single-value workspace tag.
- [`PATCH /workspaces/:workspace_id`](/terraform/cloud-docs/api-docs/workspaces#update-a-workspace): Updates an existing single-value workspace tag.
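For example, a request body for the `POST` endpoint names each tag to attach. The following is a sketch based on the linked workspaces API reference; the tag names are placeholders:

```json
{
  "data": [
    { "type": "tags", "attributes": { "name": "environment" } },
    { "type": "tags", "attributes": { "name": "cost-center" } }
  ]
}
```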
</Tab>
</Tabs>
## Migrating to key-value tags
You can use the API to migrate existing single-value workspace tags to tags stored as key-value pairs. You must have permissions in the workspace to perform the following tasks. Refer to [Requirements](#requirements) for additional information.
Note that Terraform adds single-value workspace tags defined in the associated `cloud` block configuration to the workspaces that the configuration selects. As a result, your workspace may include duplicate tags. Refer to the [Terraform reference documentation](/terraform/language/terraform#terraform-cloud-workspaces) for additional information.
### Re-create existing workspace tags as resource tags
1. Send a `GET` request to the [`/organizations/:organization_name/workspaces`](/terraform/cloud-docs/api-docs/workspaces#list-workspaces) endpoint to request all workspaces for your organization. The response may span several pages.
1. For each workspace, check the `tag-names` attribute for existing tags.
1. Send a `PATCH` request to the [`/workspaces/:workspace_id`](/terraform/cloud-docs/api-docs/workspaces#update-a-workspace) endpoint and include the `tag-binding` relationship in the request body for each workspace tag.
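As a sketch, a `PATCH` request body that re-creates a single-value tag as a key-value pair could look like the following. The exact attribute names are defined in the linked workspaces API reference; the key and value here are placeholders:

```json
{
  "data": {
    "type": "workspaces",
    "relationships": {
      "tag-bindings": {
        "data": [
          {
            "type": "tag-bindings",
            "attributes": { "key": "cost-center", "value": "product-x" }
          }
        ]
      }
    }
  }
}
```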
### Delete single-value workspace tags
1. Send a `GET` request to the [`/organizations/:organization_name/tags`](/terraform/cloud-docs/api-docs/organization-tags#list-tags) endpoint to request all tags for your organization.
1. Enumerate the external IDs for all single-value tags that you want to remove.
1. Send a `DELETE` request to the [`/organizations/:organization_name/tags`](/terraform/cloud-docs/api-docs/organization-tags#delete-tags) endpoint to delete the tags.
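As a sketch, the `DELETE` request body lists the external IDs collected in the previous step. The ID below is a placeholder, and the exact payload shape is defined in the linked organization tags API reference:

```json
{
  "data": [
    { "type": "tags", "id": "tag-Uk6RFNfyoC5R6yyh" }
  ]
}
```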
## Tag syntax
The following rules apply to tags:
- Tags must be one or more characters.
- Tags have a 255 character limit.
- Tags can include letters, numbers, colons, hyphens, and underscores.
- For tags stored as key-value pairs, tag values are optional.
---
page_title: Terraform State - Workspaces - HCP Terraform
description: >-
Workspaces have their own separate state data. Learn how state is used and how
to access state from other workspaces.
---
# Terraform State in HCP Terraform
Each HCP Terraform workspace has its own separate state data, used for runs within that workspace.
-> **API:** See the [State Versions API](/terraform/cloud-docs/api-docs/state-versions).
## State Usage in Terraform Runs
In [remote runs](/terraform/cloud-docs/run/remote-operations), HCP Terraform automatically configures Terraform to use the workspace's state; the Terraform configuration does not need an explicit backend configuration. (If a backend configuration is present, it will be overridden.)
In local runs (available for workspaces whose execution mode setting is set to "local"), you can use a workspace's state by configuring the [CLI integration](/terraform/cli/cloud) and authenticating with a user token that has permission to read and write state versions for the relevant workspace. When using a Terraform configuration that references outputs from another workspace, the authentication token must also have permission to read state outputs for that workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
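As a minimal sketch, the CLI integration is configured with a `cloud` block in the Terraform configuration; the organization and workspace names below are placeholders:

```hcl
terraform {
  cloud {
    organization = "example_corp"

    workspaces {
      name = "networking-prod"
    }
  }
}
```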
<!-- BEGIN: TFC:only name:intermediate-state -->
During an HCP Terraform run, Terraform incrementally creates intermediate state versions and marks them as finalized once it uploads the state content.
When a workspace is unlocked, HCP Terraform selects the latest state and sets it as the current state version, deletes all other intermediate state versions that were saved as recovery snapshots for the duration of the lock, and discards all pending intermediate state versions that were superseded by newer state versions.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
<!-- END: TFC:only name:intermediate-state -->
## State Versions
In addition to the current state, HCP Terraform retains historical state versions, which can be used to analyze infrastructure changes over time.
You can view a workspace's state versions from its **States** tab. Each state in the list indicates which run and which VCS commit (if applicable) it was associated with. Click a state in the list for more details, including a diff against the previous state and a link to the raw state file.
<!-- BEGIN: TFC:only name:managed-resources -->
## Managed Resources Count
-> **Note:** A managed resources count for each organization is available in your organization's settings.
Your organization’s managed resource count helps you understand the number of infrastructure resources that HCP Terraform manages across all your workspaces.
HCP Terraform reads all the workspaces' state files to determine the total number of managed resources. Each [resource](/terraform/language/resources/syntax) in the state equals one managed resource. HCP Terraform includes resources in modules and each resource instance created with the `count` or `for_each` meta-arguments. For example, `resource "aws_instance" "servers" { count = 10 }` creates ten separate managed resources in state. HCP Terraform does not include [data sources](/terraform/language/data-sources) in the count.
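In full configuration form, that inline example is a sketch like the following; the AMI and instance type are illustrative placeholders:

```hcl
resource "aws_instance" "servers" {
  # Creates aws_instance.servers[0] through aws_instance.servers[9]:
  # ten managed resources in state.
  count = 10

  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
}
```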
### Examples - Managed Resources
The following Terraform state excerpt describes a `random` resource. HCP Terraform counts `random` as one managed resource because `"mode": "managed"`.
```json
"resources": [
{
"mode": "managed",
"type": "random_pet",
"name": "random",
"provider": "provider[\"registry.terraform.io/hashicorp/random\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "puma",
"keepers": null,
"length": 1,
"prefix": null,
"separator": "-"
},
"sensitive_attributes": []
}
]
}
]
```
A single resource configuration block can describe multiple resource instances with the [`count`](/terraform/language/meta-arguments/count) or [`for_each`](/terraform/language/meta-arguments/for_each) meta-arguments. Each of these instances counts as a managed resource.
The following example shows a Terraform state excerpt with two instances of an `aws_subnet` resource. HCP Terraform counts each instance of `aws_subnet` as a separate managed resource.
```json
{
"module": "module.vpc",
"mode": "managed",
"type": "aws_subnet",
"name": "public",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"index_key": 0,
"schema_version": 1,
"attributes": {
"arn": "arn:aws:ec2:us-east-2:561656980159:subnet/subnet-024b05c4fba9c9733",
"assign_ipv6_address_on_creation": false,
"availability_zone": "us-east-2a",
##...
"private_dns_hostname_type_on_launch": "ip-name",
"tags": {
"Name": "-public-us-east-2a"
},
"tags_all": {
"Name": "-public-us-east-2a"
},
"timeouts": null,
"vpc_id": "vpc-0f693f9721b61333b"
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9",
"dependencies": [
"data.aws_availability_zones.available",
"module.vpc.aws_vpc.this",
"module.vpc.aws_vpc_ipv4_cidr_block_association.this"
]
},
{
"index_key": 1,
"schema_version": 1,
"attributes": {
"arn": "arn:aws:ec2:us-east-2:561656980159:subnet/subnet-08924f16617e087b2",
"assign_ipv6_address_on_creation": false,
"availability_zone": "us-east-2b",
##...
"private_dns_hostname_type_on_launch": "ip-name",
"tags": {
"Name": "-public-us-east-2b"
},
"tags_all": {
"Name": "-public-us-east-2b"
},
"timeouts": null,
"vpc_id": "vpc-0f693f9721b61333b"
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9",
"dependencies": [
"data.aws_availability_zones.available",
"module.vpc.aws_vpc.this",
"module.vpc.aws_vpc_ipv4_cidr_block_association.this"
]
}
]
}
```
### Example - Excluded Data Source
The following Terraform state excerpt describes an `aws_availability_zones` data source. HCP Terraform does not include `aws_availability_zones` in the managed resource count because `"mode": "data"`.
```json
"resources": [
{
"mode": "data",
"type": "aws_availability_zones",
"name": "available",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"all_availability_zones": null,
"exclude_names": null,
"exclude_zone_ids": null,
"filter": null,
"group_names": [
"us-east-2"
],
"id": "us-east-2",
"names": [
"us-east-2a",
"us-east-2b",
"us-east-2c"
],
"state": null,
"zone_ids": [
"use2-az1",
"use2-az2",
"use2-az3"
]
},
"sensitive_attributes": []
}
]
}
]
```
<!-- END: TFC:only name:managed-resources -->
## State Manipulation
Certain tasks (including importing resources, tainting resources, moving or renaming existing resources to match a changed configuration, and more) may require modifying Terraform state outside the context of a run, depending on which version of Terraform your HCP Terraform workspace is configured to use.
Newer Terraform features like [`moved` blocks](/terraform/language/modules/develop/refactoring), [`import` blocks](/terraform/language/import), and the [`replace` option](/terraform/cloud-docs/run/modes-and-options#replacing-selected-resources) allow you to accomplish these tasks using the usual plan and apply workflow. However, if the Terraform version you're using doesn't support these features, you may need to fall back to manual state manipulation.
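For illustration, `moved` and `import` blocks take the following shape; the resource addresses and the instance ID are placeholders:

```hcl
# Rename a resource in state without destroying and re-creating it.
moved {
  from = aws_instance.old_name
  to   = aws_instance.new_name
}

# Bring an existing, unmanaged resource under Terraform management
# during the next plan and apply.
import {
  to = aws_instance.web
  id = "i-0123456789abcdef0"
}
```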
Manual state manipulation in HCP Terraform workspaces, with the exception of [rolling back to a previous state version](#rolling-back-to-a-previous-state), requires the use of Terraform CLI, using the same commands as would be used in a local workflow (`terraform import`, `terraform taint`, etc.). To manipulate state, you must configure the [CLI integration](/terraform/cli/cloud) and authenticate with a user token that has permission to read and write state versions for the relevant workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
### Rolling Back to a Previous State
You can roll back to a previous, known good state version using the HCP Terraform UI. Navigate to the state you want to roll back to and click the **Advanced** toggle button. This option requires that you have access to create new state and that you lock the workspace. It works by duplicating the state that you specify and making it the workspace's current state version. The workspace remains locked. To undo the rollback operation, roll back to the state version that was previously the latest state.
-> **Note:** You can roll back to any prior state, but you should use caution because replacing state improperly can result in orphaned or duplicated infrastructure resources. This feature is provided as a convenient alternative to manually downloading older state and using state manipulation commands in the CLI to push it to HCP Terraform.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
## Accessing State from Other Workspaces
-> **Note:** Provider-specific [data sources](/terraform/language/data-sources) are usually the most resilient way to share information between separate Terraform configurations. `terraform_remote_state` is more flexible, but we recommend using specialized data sources whenever it is convenient to do so.
Terraform's built-in [`terraform_remote_state` data source](/terraform/language/state/remote-state-data) lets you share arbitrary information between configurations via root module [outputs](/terraform/language/values/outputs).
HCP Terraform automatically manages API credentials for `terraform_remote_state` access during [runs managed by HCP Terraform](/terraform/cloud-docs/run/remote-operations#remote-operations). This means you do not usually need to include an API token in a `terraform_remote_state` data source's configuration.
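As a sketch, a `terraform_remote_state` data source targeting an HCP Terraform workspace uses the `remote` backend; the organization, workspace, and output names are placeholders:

```hcl
data "terraform_remote_state" "vpc" {
  backend = "remote"

  config = {
    organization = "example_corp"
    workspaces = {
      name = "vpc-prod"
    }
  }
}

# Reference a root module output from the other workspace.
resource "aws_instance" "redis_server" {
  subnet_id = data.terraform_remote_state.vpc.outputs.subnet_id
}
```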
## Upgrading State
You can upgrade a workspace's state version to a new Terraform version without making any configuration changes. To upgrade, we recommend the following steps:
1. Run a [speculative plan](/terraform/cloud-docs/run/ui#testing-terraform-upgrades-with-speculative-plans) to test whether your configuration is compatible with the new Terraform version. You can run speculative plans with a Terraform version that is different than the one currently selected for the workspace.
1. Select **Settings > General** and select the desired new **Terraform Version**.
1. Click **+ New run** and then select **Allow empty apply** as the run type. An [empty apply](/terraform/cloud-docs/run/modes-and-options#allow-empty-apply) allows Terraform to apply a plan that produces no infrastructure changes. Terraform upgrades the state file version during the apply process.
-> **Note:** If the desired Terraform version is incompatible with a workspace's existing state version, the run fails and HCP Terraform prompts you to run an apply with a compatible version first. Refer to the [Terraform upgrade guides](/terraform/language/upgrade-guides) for details about upgrading between versions.
### Remote State Access Controls
Remote state access between workspaces is subject to access controls:
- Only workspaces within the same organization can access each other's state.
- The workspace whose state is being read must be configured to allow that access. State access permissions are configured on a workspace's [general settings page](/terraform/cloud-docs/workspaces/settings). There are two ways a workspace can allow access:
- Globally, to all workspaces within the same organization.
- Selectively, to a list of specific approved workspaces.
By default, new workspaces in HCP Terraform do not allow other workspaces to access their state. We recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other.
-> **Note:** The default access permissions for new workspaces in HCP Terraform changed in April 2021. Workspaces created before this change defaulted to allowing global access within their organization. These workspaces can be changed to more restrictive access at any time on their [general settings page](/terraform/cloud-docs/workspaces/settings). Terraform Enterprise administrators can choose whether new workspaces on their instances default to global access or selective access.
### Data Source Configuration
To configure a `tfe_outputs` data source that references an HCP Terraform workspace, specify the organization and workspace in the `config` argument.
You must still properly configure the `tfe` provider with a valid authentication token and correct permissions to HCP Terraform.
```hcl
data "tfe_outputs" "vpc" {
config = {
organization = "example_corp"
workspaces = {
name = "vpc-prod"
}
}
}
resource "aws_instance" "redis_server" {
# Terraform 0.12 and later: use the "outputs.<OUTPUT NAME>" attribute
subnet_id = data.tfe_outputs.vpc.outputs.subnet_id
}
```
-> **Note:** Remote state access controls do not apply when using the `tfe_outputs` data source. | terraform | page title Terraform State Workspaces HCP Terraform description Workspaces have their own separate state data Learn how state is used and how to access state from other workspaces Terraform State in HCP Terraform Each HCP Terraform workspace has its own separate state data used for runs within that workspace API See the State Versions API terraform cloud docs api docs state versions State Usage in Terraform Runs In remote runs terraform cloud docs run remote operations HCP Terraform automatically configures Terraform to use the workspace s state the Terraform configuration does not need an explicit backend configuration If a backend configuration is present it will be overridden In local runs available for workspaces whose execution mode setting is set to local you can use a workspace s state by configuring the CLI integration terraform cli cloud and authenticating with a user token that has permission to read and write state versions for the relevant workspace When using a Terraform configuration that references outputs from another workspace the authentication token must also have permission to read state outputs for that workspace More about permissions terraform cloud docs users teams organizations permissions BEGIN TFC only name intermediate state During an HCP Terraform run Terraform incrementally creates intermediate state versions and marks them as finalized once it uploads the state content When a workspace is unlocked HCP Terraform selects the latest state and sets it as the current state version deletes all other intermediate state versions that were saved as recovery snapshots for the duration of the lock and discards all pending intermediate state versions that were superseded by newer state versions permissions citation intentionally unused keep for maintainers END TFC only name intermediate state State Versions In addition to the current 
state HCP Terraform retains historical state versions which can be used to analyze infrastructure changes over time You can view a workspace s state versions from its States tab Each state in the list indicates which run and which VCS commit if applicable it was associated with Click a state in the list for more details including a diff against the previous state and a link to the raw state file BEGIN TFC only name managed resources Managed Resources Count Note A managed resources count for each organization is available in your organization s settings Your organization s managed resource count helps you understand the number of infrastructure resources that HCP Terraform manages across all your workspaces HCP Terraform reads all the workspaces state files to determine the total number of managed resources Each resource terraform language resources syntax in the state equals one managed resource HCP Terraform includes resources in modules and each resource instance created with the count or for each meta arguments For example aws instance servers count 10 creates ten separate managed resources in state HCP Terraform does not include data sources terraform language data sources in the count Examples Managed Resources The following Terraform state excerpt describes a random resource HCP Terraform counts random as one managed resource because mode managed json resources mode managed type random pet name random provider provider registry terraform io hashicorp random instances schema version 0 attributes id puma keepers null length 1 prefix null separator sensitive attributes A single resource configuration block can describe multiple resource instances with the count terraform language meta arguments count or for each terraform language meta arguments for each meta arguments Each of these instances counts as a managed resource The following example shows a Terraform state excerpt with 2 instances of a aws subnet resource HCP Terraform counts each instance of aws 
subnet as a separate managed resource json module module vpc mode managed type aws subnet name public provider provider registry terraform io hashicorp aws instances index key 0 schema version 1 attributes arn arn aws ec2 us east 2 561656980159 subnet subnet 024b05c4fba9c9733 assign ipv6 address on creation false availability zone us east 2a private dns hostname type on launch ip name tags Name public us east 2a tags all Name public us east 2a timeouts null vpc id vpc 0f693f9721b61333b sensitive attributes private eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9 dependencies data aws availability zones available module vpc aws vpc this module vpc aws vpc ipv4 cidr block association this index key 1 schema version 1 attributes arn arn aws ec2 us east 2 561656980159 subnet subnet 08924f16617e087b2 assign ipv6 address on creation false availability zone us east 2b private dns hostname type on launch ip name tags Name public us east 2b tags all Name public us east 2b timeouts null vpc id vpc 0f693f9721b61333b sensitive attributes private eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9 dependencies data aws availability zones available module vpc aws vpc this module vpc aws vpc ipv4 cidr block association this Example Excluded Data Source The following Terraform state excerpt describes a aws availability zones data source HCP Terraform does not include aws availability zones in the managed resource count because mode data json resources mode data type aws availability zones name available provider provider registry terraform io hashicorp aws instances schema version 0 attributes all availability zones null exclude names null exclude zone ids null filter null group names us east 2 id us east 2 names us east 2a us east 2b us east 2c state null zone ids use2 az1 use2 az2 use2 az3 
sensitive attributes END TFC only name managed resources State Manipulation Certain tasks including importing resources tainting resources moving or renaming existing resources to match a changed configuration and more may require modifying Terraform state outside the context of a run depending on which version of Terraform your HCP Terraform workspace is configured to use Newer Terraform features like moved blocks terraform language modules develop refactoring import blocks terraform language import and the replace option terraform cloud docs run modes and options replacing selected resources allow you to accomplish these tasks using the usual plan and apply workflow However if the Terraform version you re using doesn t support these features you may need to fall back to manual state manipulation Manual state manipulation in HCP Terraform workspaces with the exception of rolling back to a previous state version rolling back to a previous state requires the use of Terraform CLI using the same commands as would be used in a local workflow terraform import terraform taint etc To manipulate state you must configure the CLI integration terraform cli cloud and authenticate with a user token that has permission to read and write state versions for the relevant workspace More about permissions terraform cloud docs users teams organizations permissions Rolling Back to a Previous State You can rollback to a previous known good state version using the HCP Terraform UI Navigate to the state you want to rollback to and click the Advanced toggle button This option requires that you have access to create new state and that you lock the workspace It works by duplicating the state that you specify and making it the workspace s current state version The workspace remains locked To undo the rollback operation rollback to the state version that was previously the latest state Note You can rollback to any prior state but you should use caution because replacing state improperly can 
result in orphaned or duplicated infrastructure resources This feature is provided as a convenient alternative to manually downloading older state and using state manipulation commands in the CLI to push it to HCP Terraform permissions citation intentionally unused keep for maintainers Accessing State from Other Workspaces Note Provider specific data sources terraform language data sources are usually the most resilient way to share information between separate Terraform configurations terraform remote state is more flexible but we recommend using specialized data sources whenever it is convenient to do so Terraform s built in terraform remote state data source terraform language state remote state data lets you share arbitrary information between configurations via root module outputs terraform language values outputs HCP Terraform automatically manages API credentials for terraform remote state access during runs managed by HCP Terraform terraform cloud docs run remote operations remote operations This means you do not usually need to include an API token in a terraform remote state data source s configuration Upgrading State You can upgrade a workspace s state version to a new Terraform version without making any configuration changes To upgrade we recommend the following steps 1 Run a speculative plan terraform cloud docs run ui testing terraform upgrades with speculative plans to test whether your configuration is compatible with the new Terraform version You can run speculative plans with a Terraform version that is different than the one currently selected for the workspace 1 Select Settings General and select the desired new Terraform Version 1 Click New run and then select Allow empty apply as the run type An empty apply terraform cloud docs run modes and options allow empty apply allows Terraform to apply a plan that produces no infrastructure changes Terraform upgrades the state file version during the apply process Note If the desired Terraform version 
is incompatible with a workspace s existing state version the run fails and HCP Terraform prompts you to run an apply with a compatible version first Refer to the Terraform upgrade guides terraform language upgrade guides for details about upgrading between versions Remote State Access Controls Remote state access between workspaces is subject to access controls Only workspaces within the same organization can access each other s state The workspace whose state is being read must be configured to allow that access State access permissions are configured on a workspace s general settings page terraform cloud docs workspaces settings There are two ways a workspace can allow access Globally to all workspaces within the same organization Selectively to a list of specific approved workspaces By default new workspaces in HCP Terraform do not allow other workspaces to access their state We recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other Note The default access permissions for new workspaces in HCP Terraform changed in April 2021 Workspaces created before this change defaulted to allowing global access within their organization These workspaces can be changed to more restrictive access at any time on their general settings page terraform cloud docs workspaces settings Terraform Enterprise administrators can choose whether new workspaces on their instances default to global access or selective access Data Source Configuration To configure a tfe outputs data source that references an HCP Terraform workspace specify the organization and workspace in the config argument You must still properly configure the tfe provider with a valid authentication token and correct permissions to HCP Terraform hcl data tfe outputs vpc config organization example corp workspaces name vpc prod resource aws instance redis server Terraform 0 12 and later use the outputs OUTPUT NAME 
attribute subnet id data tfe outputs vpc outputs subnet id Note Remote state access controls do not apply when using the tfe outputs data source |
---
page_title: Settings - Workspaces - HCP Terraform
description: >-
Workspaces organize infrastructure. Find documentation about workspace
settings for notifications, permissions, and more.
---
# Workspace Settings
You can change a workspace’s settings after creation. Workspace settings are separated into several pages.
- [General](#general): Settings that determine how the workspace functions, including its name, description, associated project, Terraform version, and execution mode.
- [Health](/terraform/cloud-docs/workspaces/health): Settings that let you configure health assessments, including drift detection and continuous validation.
- [Locking](#locking): Locking a workspace temporarily prevents new plans and applies.
- [Notifications](#notifications): Settings that let you configure run notifications.
- [Policies](#policies): Settings that let you toggle between Sentinel policy evaluation experiences.
- [Run Triggers](#run-triggers): Settings that let you configure run triggers. Run triggers allow runs to queue automatically in your workspace when runs in other workspaces are successful.
- [SSH Key](#ssh-key): Set a private SSH key for downloading Terraform modules from Git-based module sources.
- [Team Access](#team-access): Settings that let you manage which teams can view the workspace and use it to provision infrastructure.
- [Version Control](#version-control): Manage the workspace’s VCS integration.
- [Destruction and Deletion](#destruction-and-deletion): Remove a workspace and the infrastructure it manages.
Changing settings requires admin access to the relevant workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
-> **API:** See the [Update a Workspace endpoint](/terraform/cloud-docs/api-docs/workspaces#update-a-workspace) (`PATCH /organizations/:organization_name/workspaces/:name`).
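As a sketch of what that endpoint consumes, the request body follows the HCP Terraform API's JSON:API convention (`data` / `type` / `attributes`); the specific attribute names and values below are illustrative and should be checked against the API reference:

```python
import json

# Hypothetical sketch of an "Update a Workspace" request body.
# The JSON:API envelope matches the HCP Terraform API convention;
# verify individual attribute names against the API documentation.
payload = {
    "data": {
        "type": "workspaces",
        "attributes": {
            "description": "Networking for the production VPC",
            "terraform-version": "~> 1.6.0",  # exact version or constraint
            "auto-apply": False,
        },
    }
}

body = json.dumps(payload)
print(body)
```

You would send this body with the `PATCH` request shown above, authenticated with a token that has admin access to the workspace.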
## General
General settings let you change a workspace's name, description, the project it belongs to, and details about how Terraform runs operate. After changing these settings, click **Save settings** at the bottom of the page.
### ID
Every workspace has a unique ID that you cannot change. You may need to reference the workspace's ID when using the [HCP Terraform API](/terraform/cloud-docs/api-docs).
Click the icon beside the ID to copy it to your clipboard.
### Name
The display name of the workspace.
!> **Warning:** Some API calls refer to a workspace by its name, so changing the name may break existing integrations.
### Project
The [project](/terraform/cloud-docs/projects) that this workspace belongs to. Changing the workspace's project can change the read and write permissions for the workspace and which users can access it.
To move a workspace, you must have the "Manage all Projects" organization permission or explicit team admin privileges on both the source and destination projects. Remember that moving a workspace to another project may affect user visibility for that project's workspaces. Refer to [Project Permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions) for details on workspace access.
### Description (Optional)
Enter a brief description of the workspace's purpose or types of infrastructure.
### Execution Mode
Whether to use HCP Terraform as the Terraform execution platform for this workspace.
By default, HCP Terraform uses an organization's [default execution mode](/terraform/cloud-docs/users-teams-organizations/organizations#organization-settings) to choose the execution platform for a workspace. Alternatively, you can choose a custom execution mode for a specific workspace.
Specifying the "Remote" execution mode instructs HCP Terraform to perform Terraform runs on its own disposable virtual machines. This provides a consistent and reliable run environment and enables advanced features like Sentinel policy enforcement, cost estimation, notifications, version control integration, and more.
To disable remote execution for a workspace, change its execution mode to "Local". This mode lets you perform Terraform runs locally with the [CLI-driven run workflow](/terraform/cloud-docs/run/cli). The workspace will store state, which Terraform can access with the [CLI integration](/terraform/cli/cloud). HCP Terraform does not evaluate workspace variables or variable sets in local execution mode.
If you instead need to allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure, consider using [HCP Terraform agents](/terraform/cloud-docs/agents). By deploying a lightweight agent, you can establish a simple connection between your environment and HCP Terraform.
Changing your workspace's execution mode after a run has already been planned will cause the run to error when it is applied.
To minimize the number of runs that error when changing your workspace's execution mode, you should:
1. Disable [auto-apply](/terraform/cloud-docs/workspaces/settings#auto-apply) if you have it enabled.
1. Complete any runs that are no longer in the [pending stage](/terraform/cloud-docs/run/states#the-pending-stage).
1. [Lock](/terraform/cloud-docs/workspaces/settings#locking) your workspace to prevent any new runs.
1. Change the execution mode.
1. Enable [auto-apply](/terraform/cloud-docs/workspaces/settings#auto-apply), if you had it enabled before changing your execution mode.
1. [Unlock](/terraform/cloud-docs/workspaces/settings#locking) your workspace.
<a id="apply-method"></a>
<a id="auto-apply-and-manual-apply"></a>
### Auto-apply
Whether or not HCP Terraform should automatically apply a successful Terraform plan. If you choose manual apply, an operator must confirm a successful plan and choose to apply it.
The main auto-apply setting affects runs created by the HCP Terraform user interface, API, CLI, and version control webhooks. HCP Terraform also has a separate setting for runs created by [run triggers](/terraform/cloud-docs/workspaces/settings/run-triggers) from another workspace.
Auto-apply has the following exception:
- Plans queued by users without permission to apply runs for the workspace must be approved by a user who does have permission. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
### Terraform Version
The Terraform version to use for all operations in the workspace. The default value is whichever release was current when HCP Terraform created the workspace. You can also update a workspace's Terraform version to an exact version or a valid [version constraint](/terraform/language/expressions/version-constraints).
> **Hands-on:** Try the [Upgrade Terraform Version in HCP Terraform](/terraform/tutorials/cloud/cloud-versions) tutorial.
-> **API:** You can specify a Terraform version when you [create a workspace](/terraform/cloud-docs/api-docs/workspaces#create-a-workspace) with the API.
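To illustrate how a pessimistic version constraint behaves, here is a minimal sketch that handles only the `~>` operator (an illustrative reimplementation, not HCP Terraform's own matcher, which also supports `=`, `!=`, `>`, `>=`, `<`, `<=`, and comma-separated combinations):

```python
def parse(v):
    return [int(p) for p in v.split(".")]

def satisfies(version, constraint):
    # Sketch of the pessimistic ("~>") operator only. "~> 1.6.0" allows
    # >= 1.6.0 and < 1.7.0, while "~> 1.6" allows >= 1.6 and < 2.0.
    spec = parse(constraint.removeprefix("~>").strip())
    assert len(spec) >= 2, "~> needs at least major.minor"
    v = parse(version)
    width = max(len(spec), len(v))
    pad = lambda xs: xs + [0] * (width - len(xs))
    lower = pad(spec)
    # Upper bound: increment the second-to-last component of the spec.
    upper = pad(spec[:-2] + [spec[-2] + 1])
    return lower <= pad(v) < upper

print(satisfies("1.6.2", "~> 1.6.0"))  # patch releases within 1.6 match
print(satisfies("1.7.0", "~> 1.6.0"))  # a new minor release does not
```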
### Terraform Working Directory
The directory where Terraform will execute, specified as a relative path from the root of the configuration directory. Defaults to the root of the configuration directory.
HCP Terraform will change to this directory before starting a Terraform run, and will report an error if the directory does not exist.
Setting a working directory creates a default filter for automatic run triggering, and sometimes causes CLI-driven runs to upload additional configuration content.
#### Default Run Trigger Filtering
In VCS-backed workspaces that specify a working directory, HCP Terraform assumes that only changes within that working directory should trigger a run. You can override this behavior with the [Automatic Run Triggering](/terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering) settings.
#### Parent Directory Uploads
If a working directory is configured, HCP Terraform always expects the complete shared configuration directory to be available, since the configuration might use local modules from outside its working directory.
In [runs triggered by VCS commits](/terraform/cloud-docs/run/ui), this is automatic. In [CLI-driven runs](/terraform/cloud-docs/run/cli), Terraform's CLI sometimes uploads additional content:
- When the local working directory _does not match_ the name of the configured working directory, Terraform assumes it is the root of the configuration directory, and uploads only the local working directory.
- When the local working directory _matches_ the name of the configured working directory, Terraform uploads one or more parents of the local working directory, according to the depth of the configured working directory. (For example, a working directory of `production` is only one level deep, so Terraform would upload the immediate parent directory. `consul/production` is two levels deep, so Terraform would upload the parent and grandparent directories.)
If you use the working directory setting, always run Terraform from a complete copy of the configuration directory. Moving one subdirectory to a new location can result in unexpected content uploads.
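The depth rule above can be sketched as follows (a simplified model of the described behavior, not the CLI's actual implementation):

```python
from pathlib import PurePosixPath

def upload_root(local_dir, configured_working_dir):
    """Sketch of the parent-upload rule: when the local directory's trailing
    path components match the configured working directory, the CLI uploads
    as many parent levels as the working directory is deep."""
    wd_parts = PurePosixPath(configured_working_dir).parts
    depth = len(wd_parts)
    path = PurePosixPath(local_dir)
    if path.parts[-depth:] == wd_parts:
        for _ in range(depth):
            path = path.parent
        return str(path)
    # Otherwise the local directory is assumed to be the config root.
    return str(local_dir)

# "consul/production" is two levels deep, so the grandparent is uploaded:
print(upload_root("/repo/consul/production", "consul/production"))
```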
### Remote State Sharing
Which other workspaces within the organization can access the state of the workspace during [runs managed by HCP Terraform](/terraform/cloud-docs/run/remote-operations#remote-operations). The [`terraform_remote_state` data source](/terraform/language/state/remote-state-data) relies on state sharing to access workspace outputs.
- If "Share state globally" is enabled, all other workspaces within the organization can access this workspace's state during runs.
- If global sharing is turned off, you can specify a list of workspaces within the organization that can access this workspace's state; no other workspaces will be allowed.
The workspace selector is searchable; if you don't initially see a workspace you're looking for, type part of its name.
By default, new workspaces in HCP Terraform do not allow other workspaces to access their state. We recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other. To configure remote state sharing, a user must have read access for the destination workspace. If a user does not have access to the destination workspace due to scoped project or workspace permissions, they will not have complete visibility into the list of other workspaces that can access its state.
-> **Note:** The default access permissions for new workspaces in HCP Terraform changed in April 2021. Workspaces created before this change default to allowing global access within their organization. These workspaces can be changed to more restrictive access at any time. Terraform Enterprise administrators can choose whether new workspaces on their instances default to global access or selective access.
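The access rules above can be modeled as a small sketch; the `Workspace` type and its field names are hypothetical stand-ins, not part of any HCP Terraform API:

```python
from dataclasses import dataclass, field

# Illustrative model of remote state access rules only.
@dataclass
class Workspace:
    name: str
    organization: str
    share_state_globally: bool = False
    allowed_consumers: set = field(default_factory=set)

def can_read_state(consumer: Workspace, producer: Workspace) -> bool:
    if consumer.organization != producer.organization:
        return False  # access never crosses organization boundaries
    if producer.share_state_globally:
        return True   # "Share state globally" is enabled
    return consumer.name in producer.allowed_consumers  # selective list

vpc = Workspace("vpc-prod", "example-corp", allowed_consumers={"app-prod"})
print(can_read_state(Workspace("app-prod", "example-corp"), vpc))
```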
### User Interface
Select the user experience for displaying plan and apply details.
The default experience is _Structured Run Output_, which displays your plan and apply results in a human-readable format. This includes nodes that you can expand to view details about each resource and any configured output.
The Console UI experience is the traditional Terraform experience, where live text logging is streamed in real time to the UI. This experience most closely emulates the CLI output.
~> **Note:** Your workspace must be configured to use a Terraform version of 1.0.5 or higher for the Structured Run Output experience to be fully supported. Workspaces running versions from 0.15.2 may see partial functionality. Workspaces running versions below 0.15.2 will default to the "Console UI" experience regardless of the User Interface setting.
## Locking
~> **Important:** Unlike other settings, locks can also be managed by users with permission to lock and unlock the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
If you need to prevent Terraform runs for any reason, you can lock a workspace. This prevents all applies (and many kinds of plans) from proceeding, and affects runs created via UI, CLI, API, and automated systems. To enable runs again, a user must unlock the workspace.
Two kinds of run operations can ignore workspace locking because they cannot affect resources or state and do not attempt to lock the workspace themselves:
- Plan-only runs.
- The planning stages of [saved plan runs](/terraform/cloud-docs/run/modes-and-options#saved-plans). You can only _apply_ a saved plan if the workspace is unlocked, and applying that plan locks the workspace as usual. Terraform Enterprise does not yet support this workflow.
Locking a workspace also restricts state uploads. In order to upload state, the workspace must be locked by the user who is uploading state.
Users with permission to lock and unlock a workspace can't unlock a workspace that was locked by another user. Users with admin access to a workspace can force unlock it even if another user holds the lock.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
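The unlock rules above can be sketched as follows; the `User` type and its fields are hypothetical, used only to illustrate the described behavior:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of workspace unlock rules only.
@dataclass
class User:
    name: str
    can_lock: bool = False           # "lock and unlock" permission
    is_workspace_admin: bool = False

def can_unlock(user: User, locked_by: Optional[str]) -> bool:
    if locked_by is None:
        return False                 # nothing to unlock
    if user.is_workspace_admin:
        return True                  # admins may force-unlock any lock
    return user.can_lock and locked_by == user.name

print(can_unlock(User("ana", can_lock=True), "ana"))  # own lock
```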
Locks are managed with a single "Lock/Unlock/Force unlock `<WORKSPACE NAME>`" button. HCP Terraform asks for confirmation when unlocking.
You can also manage the workspace's lock from the **Actions** menu.
## Notifications
The "Notifications" page allows HCP Terraform to send webhooks to external services whenever specific run events occur in a workspace.
See [Run Notifications](/terraform/cloud-docs/workspaces/settings/notifications) for detailed information about configuring notifications.
## Policies
HCP Terraform offers two experiences for Sentinel policy evaluations. On the "Policies" page, you can adjust your **Sentinel Experience** settings to your preferred experience. By default, HCP Terraform enables the newest policy evaluation experience.
To toggle between the two Sentinel policy evaluation experiences, click the **Enable the new Sentinel policy experience** toggle under the **Sentinel Experience** heading. HCP Terraform persists your changes automatically. If HCP Terraform is performing a run on a different page, you must refresh that page to see changes to your policy evaluation experience.
## Run Triggers
The "Run Triggers" page configures connections between a workspace and one or more source workspaces. These connections, called "run triggers", allow runs to queue automatically in a workspace on successful apply of runs in any of the source workspaces.
See [Run Triggers](/terraform/cloud-docs/workspaces/settings/run-triggers) for detailed information about configuring run triggers.
## SSH Key
If a workspace's configuration uses [Git-based module sources](/terraform/language/modules/sources) to reference Terraform modules in private Git repositories, Terraform needs an SSH key to clone those repositories. The "SSH Key" page lets you choose which key it should use.
See [Using SSH Keys for Cloning Modules](/terraform/cloud-docs/workspaces/settings/ssh-keys) for detailed information about this page.
## Team Access
The "Team Access" page configures which teams can perform which actions on a workspace.
See [Managing Access to Workspaces](/terraform/cloud-docs/workspaces/settings/access) for detailed information.
## Version Control
The "Version Control" page configures an optional VCS repository that contains the workspace's Terraform configuration. Version control integration is only relevant for workspaces with [remote execution](#execution-mode) enabled.
See [VCS Connections](/terraform/cloud-docs/workspaces/settings/vcs) for detailed information about this page.
## Destruction and Deletion
The **Destruction and Deletion** page allows [admin users](/terraform/cloud-docs/users-teams-organizations/permissions) to delete a workspace's managed infrastructure or delete the workspace itself.
Refer to [Destruction and Deletion](/terraform/cloud-docs/workspaces/settings/deletion) for detailed information about this page.
cloud docs workspaces settings run triggers for detailed information about configuring run triggers SSH Key If a workspace s configuration uses Git based module sources terraform language modules sources to reference Terraform modules in private Git repositories Terraform needs an SSH key to clone those repositories The SSH Key page lets you choose which key it should use See Using SSH Keys for Cloning Modules terraform cloud docs workspaces settings ssh keys for detailed information about this page Team Access The Team Access page configures which teams can perform which actions on a workspace See Managing Access to Workspaces terraform cloud docs workspaces settings access for detailed information Version Control The Version Control page configures an optional VCS repository that contains the workspace s Terraform configuration Version control integration is only relevant for workspaces with remote execution execution mode enabled See VCS Connections terraform cloud docs workspaces settings vcs for detailed information about this page Destruction and Deletion The Destruction and Deletion page allows admin users terraform cloud docs users teams organizations permissions to delete a workspace s managed infrastructure or delete the workspace itself For details refer to Destruction and Deletion terraform cloud docs workspaces settings deletion for detailed information about this page |
terraform page title Destruction and Deletion Workspaces HCP Terraform HCP Terraform workspaces have two primary delete actions Learn about destroying infrastructure and deleting workspaces in HCP Terraform Destruction and Deletion | ---
page_title: Destruction and Deletion - Workspaces - HCP Terraform
description: |-
Learn about destroying infrastructure and deleting workspaces in HCP Terraform.
---
# Destruction and Deletion
HCP Terraform workspaces have two primary delete actions:
- [Destroying infrastructure](#destroy-infrastructure) deletes resources managed by the HCP Terraform workspace by triggering a destroy run.
- [Deleting a workspace](#delete-workspaces) deletes the workspace itself without triggering a destroy run.
In general, you should perform both actions in the above order when destroying a workspace to ensure resource cleanup for all of a workspace's managed infrastructure.
## Destroy Infrastructure
Destroy plans delete the infrastructure managed by a workspace. We recommend destroying the infrastructure managed by a workspace _before_ deleting the workspace itself. Otherwise, the infrastructure resources will continue to exist but become unmanaged, and you must go into your infrastructure providers to delete them manually.
Before queuing a destroy plan, enable the **Allow destroy plans** toggle setting on this page.
### Automatically Destroy
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/ephemeral-workspaces.mdx'
<!-- END: TFC:only name:pnp-callout -->
Configuring automatic infrastructure destruction for a workspace requires [admin permissions](/terraform/cloud-docs/users-teams-organizations/permissions#workspace-admins) for that workspace.
There are two main ways to automatically destroy a workspace's resources:
* Schedule a run to destroy all resources in a workspace at a specific date and time.
* Configure HCP Terraform to destroy a workspace's infrastructure after a period of workspace inactivity.
Workspaces can inherit auto-destroy settings from their project. Refer to [managing projects](/terraform/cloud-docs/projects/manage#automatically-destroy-inactive-workspaces) for more information. You can configure an individual workspace's auto-destroy settings to override the project's configuration.
You can reduce your spending on infrastructure by automatically destroying temporary resources like development environments.
After HCP Terraform performs an auto-destroy run, it unsets the `auto-destroy-at` field on the workspace. If you continue using the workspace, you can schedule another future auto-destroy run to remove any new resources.
!> **Note:** Automatic destroy plans _do not_ prompt you for apply approval in the HCP Terraform user interface. We recommend only using this setting for development environments.
You can schedule an auto-destroy run using the HCP Terraform web user interface, or the [workspace API](/terraform/cloud-docs/api-docs/workspaces).
You can also schedule [notifications](/terraform/cloud-docs/workspaces/settings/notifications) to alert you 12 and 24 hours before an auto-destroy run, and to report auto-destroy run results.
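As a sketch of the API route, the schedule can be set by sending a `PATCH` request to the workspace endpoint with an `auto-destroy-at` attribute. The attribute name and endpoint shape below are assumptions drawn from the field mentioned above; confirm them against the [workspace API](/terraform/cloud-docs/api-docs/workspaces) documentation before relying on them.

```python
import json

# Hypothetical helper that builds the JSON:API body for scheduling (or,
# with None, canceling) an auto-destroy run. The "auto-destroy-at"
# attribute name is an assumption; verify it in the workspace API docs.
def auto_destroy_payload(destroy_at_iso8601):
    return json.dumps({
        "data": {
            "type": "workspaces",
            "attributes": {"auto-destroy-at": destroy_at_iso8601},
        }
    })

# Sent as, for example:
#   PATCH /api/v2/organizations/my-org/workspaces/my-workspace
#   Authorization: Bearer <token>
#   Content-Type: application/vnd.api+json
```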
#### Destroy at a specific day and time
To schedule an auto-destroy run at a specific time in HCP Terraform:
1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.
1. Under **Automatically destroy**, click **Set up auto-destroy**.
1. Enter the desired date and time. HCP Terraform defaults to your local time zone for scheduling and displays how long remains until the scheduled operation.
1. Click **Confirm auto-destroy**.
To cancel a scheduled auto-destroy run in HCP Terraform:
1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.
1. Under **Automatically destroy**, click **Edit** next to your scheduled run's details.
1. Click **Remove**.
#### Destroy if a workspace is inactive
You can configure HCP Terraform to automatically destroy a workspace's infrastructure after a period of inactivity.
A workspace is _inactive_ if the workspace's state has not changed within your designated time period.
!> **Caution:** Unlike an auto-destroy run configured for a specific date and time, this setting _persists_ after queueing auto-destroy runs.
If you configure a workspace to auto-destroy its infrastructure when inactive, any run that updates Terraform state further delays the scheduled auto-destroy time by the length of your designated timeframe.
To schedule an auto-destroy run after a period of workspace inactivity:
1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.
1. Under **Automatically destroy**, click **Set up auto-destroy**.
1. Click the **Destroy if inactive** toggle.
1. Select or customize a desired timeframe of inactivity.
1. Click **Confirm auto-destroy**.
When configured for the first time, the auto-destroy duration setting displays the scheduled date and time that HCP Terraform will perform the auto-destroy run.
Subsequent auto-destroy runs and Terraform runs that update state both update the next scheduled auto-destroy date.
After HCP Terraform completes a manual or automatic destroy run, it waits until further state updates to schedule a new auto-destroy run.
To remove your workspace's auto-destroy based on inactivity:
1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.
1. Under **Auto-destroy settings**, click **Edit** to change the auto-destroy settings.
1. Click **Remove**.
## Delete Workspace
Terraform does not automatically destroy managed infrastructure when you delete a workspace.
After you delete the workspace and its state file, Terraform can _no longer track or manage_ that infrastructure. You must manually delete or [import](/terraform/cli/commands/import) any remaining resources into another Terraform workspace.
By default, [workspace administrators](/terraform/cloud-docs/users-teams-organizations/permissions#workspace-admins) can only delete unlocked workspaces that are not managing any infrastructure. Organization owners can force delete a workspace to override these protections. Organization owners can also configure the [organization's settings](/terraform/cloud-docs/users-teams-organizations/organizations#general) to let workspace administrators force delete their own workspaces.
## Data Retention Policies
<EnterpriseAlert>
Data retention policies are exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href="https://developer.hashicorp.com/terraform/enterprise">Learn more about Terraform Enterprise</a>.
</EnterpriseAlert>
Define configurable data retention policies for workspaces to help reduce object storage consumption. You can define a policy that allows Terraform to _soft delete_ the backing data associated with configuration versions and state versions. Soft deleting refers to marking a data object for garbage collection so that Terraform can automatically delete the object after a set number of days.
Once an object is soft deleted, any attempts to read the object will fail. Until the garbage collection grace period elapses, you can still restore an object using the APIs described in the [configuration version documentation](/terraform/enterprise/api-docs/configuration-versions) and [state version documentation](/terraform/enterprise/api-docs/state-versions). After the garbage collection grace period elapses, Terraform permanently deletes the archivist storage.
The [organization policy](/terraform/enterprise/users-teams-organizations/organizations#destruction-and-deletion) is the default policy applied to workspaces, but members of individual workspaces can override the policy for their workspaces.
The workspace policy always overrides the organization policy. A workspace admin can set or override the following data retention policies:
- **Organization default policy**
- **Do not auto-delete**
- **Auto-delete data**
Setting the data retention policy to **Organization default policy** disables the other data retention policy settings.
---
page_title: Notifications - Workspaces - HCP Terraform
description: >-
Learn how to use webhooks to notify external systems about run progress and other events. Create and enable workspace notifications.
---
# Notifications
HCP Terraform can use webhooks to notify external systems about run progress and other events. Each workspace has its own notification settings and can notify up to 20 destinations.
-> **Note:** [Speculative plans](/terraform/cloud-docs/run/modes-and-options#plan-only-speculative-plan) and workspaces configured with `Local` [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) do not support notifications.
Configuring notifications requires admin access to the workspace. Refer to [Permissions](/terraform/cloud-docs/users-teams-organizations/permissions) for details.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
-> **API:** Refer to [Notification Configuration APIs](/terraform/cloud-docs/api-docs/notification-configurations).
## Viewing and Managing Notification Settings
To add, edit, or delete notifications for a workspace, go to the workspace and click **Settings > Notifications**. The **Notifications** page appears, showing existing notification configurations.
## Creating a Notification Configuration
A notification configuration specifies a destination URL, a payload type, and the events that should generate a notification. To create a notification configuration:
1. Click **Settings > Notifications**. The **Notifications** page appears.
2. Click **Create a Notification**. The **Create a Notification** form appears.
3. Configure the notifications:
- **Destination:** HCP Terraform can deliver either a generic payload or a payload formatted specifically for Slack, Microsoft Teams, or Email. Refer to [Notification Payloads](#notification-payloads) for details.
- **Name:** A display name for this notification configuration.
   - **Webhook URL:** This field is only available for generic, Slack, and Microsoft Teams webhooks. The webhook URL is the destination for the webhook payload. This URL must accept HTTP or HTTPS `POST` requests and should be able to use the chosen payload type. For details, refer to Slack's documentation on [creating an incoming webhook](https://api.slack.com/messaging/webhooks#create_a_webhook) and Microsoft's documentation on [creating a workflow from a channel in Teams](https://support.microsoft.com/en-us/office/creating-a-workflow-from-a-channel-in-teams-242eb8f2-f328-45be-b81f-9817b51a5f0e).
   - **Token:** (Optional) This field is only available for generic webhooks. A token is an arbitrary secret string that HCP Terraform will use to sign its notification webhooks. Refer to [Notification Authenticity][inpage-hmac] for details. You cannot view the token after you save the notification configuration.
   - **Email Recipients:** This field is only available for email notifications. Select the users that should receive notifications.
- **Workspace Events**: HCP Terraform can send notifications for all events or only for specific events. The following events are available:
- **Drift**: HCP Terraform detected configuration drift. This notification is only available if you enable [health assessments](/terraform/cloud-docs/workspaces/health) for the workspace.
- **Check Failure:** HCP Terraform detected one or more failed continuous validation checks. This notification is only available if you enable health assessments for the workspace.
- **Health Assessment Fail**: A health assessment failed. This notification is only available if you enable health assessments for the workspace. Health assessments fail when HCP Terraform cannot perform drift detection, continuous validation, or both. The notification does not specify the cause of the failure, but you can use the [Assessment Result](/terraform/cloud-docs/api-docs/assessment-results) logs to help diagnose the issue.
- **Auto destroy reminder**: Sends reminders 12 and 24 hours before a scheduled auto destroy run.
- **Auto destroy results**: HCP Terraform performed an auto destroy run in the workspace. Reports both successful and errored runs.
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/health-assessments.mdx'
<!-- END: TFC:only name:pnp-callout -->
- **Run Events:** HCP Terraform can send notifications for all events or only for specific events. The following events are available:
- **Created**: A run begins and enters the [Pending stage](/terraform/enterprise/run/states#the-pending-stage).
- **Planning**: A run acquires the lock and starts to execute.
- **Needs Attention**: A plan has changes and Terraform requires user input to continue. This event may include approving the plan or a [policy override](/terraform/enterprise/run/states#the-policy-check-stage).
- **Applying**: A run enters the [Apply stage](/terraform/enterprise/run/states#the-apply-stage), where Terraform makes the infrastructure changes described in the plan.
- **Completed**: A run completed successfully.
- **Errored**: A run terminated early due to error or cancellation.
4. Click **Create a notification**.
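The same configuration can be created through the [Notification Configuration APIs](/terraform/cloud-docs/api-docs/notification-configurations). The sketch below builds a request body for a generic webhook; the attribute names and trigger identifiers are assumptions, so check them against the API documentation before use.

```python
import json

# Sketch of a POST body for the notification-configurations API.
# Attribute names ("destination-type") and trigger identifiers
# ("run:completed", "run:errored") are assumptions; confirm them
# against the Notification Configuration API documentation.
def notification_config_payload(name, url, token=None,
                                triggers=("run:completed", "run:errored")):
    attributes = {
        "destination-type": "generic",
        "enabled": True,
        "name": name,
        "url": url,
        "triggers": list(triggers),
    }
    if token is not None:
        attributes["token"] = token  # secret used to sign webhook payloads
    return json.dumps({
        "data": {
            "type": "notification-configurations",
            "attributes": attributes,
        }
    })
```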
## Enabling and Verifying a Configuration
To enable or disable a configuration, toggle the **Enabled/Disabled** switch on its detail page. HCP Terraform will attempt to verify the configuration for generic and Slack webhooks by sending a test message, and will enable the notification configuration if the test succeeds.
For a verification to be successful, the destination must respond with a `2xx` HTTP code. If verification fails, HCP Terraform displays the error message and the configuration will remain disabled.
For both successful and unsuccessful verifications, click the **Last Response** box to view more information about the verification results. You can also send additional test messages with the **Send a Test** link.
## Notification Payloads
### Slack
Notifications to Slack will contain the following information:
- The run's workspace (as a link)
- The HCP Terraform username and avatar of the person that created the run
- The run ID (as a link)
- The reason the run was queued (usually a commit message or a custom message)
- The time the run was created
- The event that triggered the notification and the time that event occurred
### Microsoft Teams
Notifications to Microsoft Teams contain the following information:
- The run's workspace (as a link)
- The HCP Terraform username and avatar of the person that created the run
- The run ID
- A link to view the run
- The reason the run was queued (usually a commit message or a custom message)
- The time the run was created
- The event that triggered the notification and the time that event occurred
### Email
Email notifications will contain the following information:
- The run's workspace (as a link)
- The run ID (as a link)
- The event that triggered the notification, and if the run needs to be acted upon or not
### Generic
A generic notification will contain information about a run and its state at the time the triggering event occurred. The complete generic notification payload is described in the [API documentation][generic-payload].
[generic-payload]: /terraform/cloud-docs/api-docs/notification-configurations#notification-payload
Some of the values in the payload can be used to retrieve additional information through the API, such as:
- The [run ID](/terraform/cloud-docs/api-docs/run#get-run-details)
- The [workspace ID](/terraform/cloud-docs/api-docs/workspaces#list-workspaces)
- The [organization name](/terraform/cloud-docs/api-docs/organizations#show-an-organization)
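For illustration, a receiver might pull those identifiers out of the decoded payload before calling the API. The field names used below (`run_id`, `workspace_id`, `organization_name`) are assumptions based on the values listed above; verify them against the payload schema in the API documentation.

```python
import json

# Minimal sketch of extracting API-usable identifiers from a generic
# notification payload. The top-level field names are assumptions;
# confirm them against the notification payload documentation.
def extract_identifiers(payload_json):
    payload = json.loads(payload_json)
    return {
        "run_id": payload.get("run_id"),
        "workspace_id": payload.get("workspace_id"),
        "organization_name": payload.get("organization_name"),
    }
```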
## Notification Authenticity
[inpage-hmac]: #notification-authenticity
Slack notifications use Slack's own protocols for verifying HCP Terraform's webhook requests.
Generic notifications can include a signature for verifying the request. For notification configurations that include a secret token, HCP Terraform's webhook requests will include an `X-TFE-Notification-Signature` header, which contains an HMAC signature computed from the token using the SHA-512 digest algorithm. The receiving service is responsible for validating the signature. More information, as well as an example of how to validate the signature, can be found in the [API documentation](/terraform/cloud-docs/api-docs/notification-configurations#notification-authenticity).
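As a minimal sketch of that validation, assuming the header carries the hex-encoded HMAC-SHA512 of the raw request body keyed with the configured token:

```python
import hashlib
import hmac

# Verify a generic notification, assuming the X-TFE-Notification-Signature
# header is the hex-encoded HMAC-SHA512 of the raw request body, keyed
# with the notification configuration's secret token (as described above).
def verify_notification(token: str, body: bytes, signature_header: str) -> bool:
    expected = hmac.new(token.encode(), body, hashlib.sha512).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, signature_header)
```

A receiving service would call this with the raw bytes of the `POST` body before parsing it, and respond with a non-`2xx` status when verification fails.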
…the **Notifications** page appears.

2. Click **Create a Notification**. The **Create a Notification** form appears.
3. Configure the notifications:

   - **Destination:** HCP Terraform can deliver either a generic payload or a payload formatted specifically for Slack, Microsoft Teams, or Email. Refer to [Notification Payloads](#notification-payloads) for details.
   - **Name:** A display name for this notification configuration.
   - **Webhook URL:** This URL is only available for generic, Slack, and Microsoft Teams webhooks. The webhook URL is the destination for the webhook payload. This URL must accept HTTP or HTTPS `POST` requests and should be able to use the chosen payload type. For details, refer to Slack's documentation on [creating an incoming webhook](https://api.slack.com/messaging/webhooks#create_a_webhook) and Microsoft's documentation on [creating a workflow from a channel in teams](https://support.microsoft.com/en-us/office/creating-a-workflow-from-a-channel-in-teams-242eb8f2-f328-45be-b81f-9817b51a5f0e).
   - **Token** (optional): This notification is only available for generic webhooks. A token is an arbitrary secret string that HCP Terraform will use to sign its notification webhooks. Refer to [Notification Authenticity](#notification-authenticity) for details. You cannot view the token after you save the notification configuration.
   - **Email Recipients:** This notification is only available for emails. Select users that should receive notifications.
   - **Workspace Events:** HCP Terraform can send notifications for all events or only for specific events. The following events are available:
     - **Drift:** HCP Terraform detected configuration drift. This notification is only available if you enable [health assessments](/terraform/cloud-docs/workspaces/health) for the workspace.
     - **Check Failure:** HCP Terraform detected one or more failed continuous validation checks. This notification is only available if you enable health assessments for the workspace.
     - **Health Assessment Fail:** A health assessment failed. This notification is only available if you enable health assessments for the workspace. Health assessments fail when HCP Terraform cannot perform drift detection, continuous validation, or both. The notification does not specify the cause of the failure, but you can use the [Assessment Result](/terraform/cloud-docs/api-docs/assessment-results) logs to help diagnose the issue.
     - **Auto destroy reminder:** Sends reminders 12 and 24 hours before a scheduled auto-destroy run.
     - **Auto destroy results:** HCP Terraform performed an auto-destroy run in the workspace. Reports both successful and errored runs.

   <!-- BEGIN: TFC:only name:pnp-callout -->
   @include 'tfc-package-callouts/health-assessments.mdx'
   <!-- END: TFC:only name:pnp-callout -->

   - **Run Events:** HCP Terraform can send notifications for all events or only for specific events. The following events are available:
     - **Created:** A run begins and enters the [Pending stage](/terraform/enterprise/run/states#the-pending-stage).
     - **Planning:** A run acquires the lock and starts to execute.
     - **Needs Attention:** A plan has changes and Terraform requires user input to continue. This event may include approving the plan or a [policy override](/terraform/enterprise/run/states#the-policy-check-stage).
     - **Applying:** A run enters the [Apply stage](/terraform/enterprise/run/states#the-apply-stage), where Terraform makes the infrastructure changes described in the plan.
     - **Completed:** A run completed successfully.
     - **Errored:** A run terminated early due to error or cancellation.
4. Click **Create a notification**.

## Enabling and Verifying a Configuration

To enable or disable a configuration, toggle the **Enabled/Disabled** switch on its detail page.

HCP Terraform will attempt to verify the configuration for generic and Slack webhooks by sending a test message, and will enable the notification configuration if the test succeeds. For a verification to be successful, the destination must respond with a 2xx HTTP code. If verification fails, HCP Terraform displays the error message and the configuration will remain disabled.

For both successful and unsuccessful verifications, click the **Last Response** box to view more information about the verification results. You can also send additional test messages with the **Send a Test** link.

## Notification Payloads

### Slack

Notifications to Slack will contain the following information:

- The run's workspace (as a link)
- The HCP Terraform username and avatar of the person that created the run
- The run ID (as a link)
- The reason the run was queued (usually a commit message or a custom message)
- The time the run was created
- The event that triggered the notification and the time that event occurred

### Microsoft Teams

Notifications to Microsoft Teams contain the following information:

- The run's workspace (as a link)
- The HCP Terraform username and avatar of the person that created the run
- The run ID
- A link to view the run
- The reason the run was queued (usually a commit message or a custom message)
- The time the run was created
- The event that triggered the notification and the time that event occurred

### Email

Email notifications will contain the following information:

- The run's workspace (as a link)
- The run ID (as a link)
- The event that triggered the notification, and whether or not the run needs to be acted upon

### Generic

A generic notification will contain information about a run and its state at the time the triggering event occurred. The complete generic notification payload is described in the [API documentation](/terraform/cloud-docs/api-docs/notification-configurations#notification-payload).

Some of the values in the payload can be used to retrieve additional information through the API, such as:

- The [run ID](/terraform/cloud-docs/api-docs/run#get-run-details)
- The [workspace ID](/terraform/cloud-docs/api-docs/workspaces#list-workspaces)
- The [organization name](/terraform/cloud-docs/api-docs/organizations#show-an-organization)

## Notification Authenticity

Slack notifications use Slack's own protocols for verifying HCP Terraform's webhook requests. Generic notifications can include a signature for verifying the request. For notification configurations that include a secret token, HCP Terraform's webhook requests will include an `X-TFE-Notification-Signature` header, which contains an HMAC signature computed from the token using the SHA-512 digest algorithm. The receiving service is responsible for validating the signature. More information, as well as an example of how to validate the signature, can be found in the [API documentation](/terraform/cloud-docs/api-docs/notification-configurations#notification-authenticity).
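The signature check described above takes only a few lines of standard-library code. A minimal sketch: the `X-TFE-Notification-Signature` header name and the SHA-512 HMAC come from the text, while the helper function and example values are illustrative.

```python
import hashlib
import hmac

def valid_tfe_signature(token: bytes, body: bytes, header: str) -> bool:
    """Validate the X-TFE-Notification-Signature header on a generic webhook.

    `token` is the secret configured on the notification; `body` is the raw
    request body. Uses a constant-time comparison to avoid timing leaks.
    """
    expected = hmac.new(token, body, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, header)

# Illustrative values: sign a body the way HCP Terraform would, then verify.
token = b"my-notification-token"
body = b'{"run_id": "run-example"}'
header = hmac.new(token, body, hashlib.sha512).hexdigest()

assert valid_tfe_signature(token, body, header)
assert not valid_tfe_signature(token, body + b" ", header)
```

Validate against the raw request bytes, not a re-serialized JSON document, since any byte-level difference changes the digest.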
---
page_title: VCS Connections - Workspaces - HCP Terraform
description: >-
Learn how to use the web UI to connect a workspace to a version control system repository that contains Terraform configuration.
---
# Configuring Workspace VCS Connections
You can connect any HCP Terraform [workspace](/terraform/cloud-docs/workspaces) to a version control system (VCS) repository that contains a Terraform configuration. This page explains the workspace VCS connection settings in the HCP Terraform UI.
Refer to [Terraform Configurations in HCP Terraform Workspaces](/terraform/cloud-docs/workspaces/configurations) for details on handling configuration versions and connected repositories. Refer to [Connecting VCS Providers](/terraform/cloud-docs/vcs) for a list of supported VCS providers and details about configuring VCS access, viewing VCS events, etc.
## API
You can use the [Update a Workspace endpoint](/terraform/cloud-docs/api-docs/workspaces#update-a-workspace) in the Workspaces API to change one or more VCS settings. We also recommend using this endpoint to automate changing VCS connections for many workspaces at once, for example when you move a VCS server or remove a deprecated API version.
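As a sketch of that automation, the following builds the `PATCH` request for one workspace. The endpoint shape follows the Workspaces API linked above, but the attribute names under `vcs-repo` are assumptions to confirm against that documentation before use.

```python
import json
import urllib.request

def build_vcs_update(org: str, workspace: str, token: str,
                     branch: str) -> urllib.request.Request:
    # Build (but do not send) a PATCH for the Update a Workspace endpoint.
    # The "vcs-repo" attribute names are assumptions; check the API docs.
    url = (f"https://app.terraform.io/api/v2/organizations/"
           f"{org}/workspaces/{workspace}")
    body = {
        "data": {
            "type": "workspaces",
            "attributes": {"vcs-repo": {"branch": branch}},
        }
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
    )

req = build_vcs_update("my-org", "my-workspace", "example-token", "main")
# Send with urllib.request.urlopen(req) once the token and names are real.
```

Looping `build_vcs_update` over a list of workspace names is one way to change many VCS connections at once.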
## Version Control Settings
To change a workspace's VCS settings:
1. Go to the workspace and click **Settings > Version Control**. The **Version Control** page appears.
1. Choose the desired settings and click **Update VCS settings**.
You can update the following types of VCS settings for the workspace.
### VCS Connection
You can take one of the following actions:
- To add a new VCS connection, click **Connect to version control**. Select **Version control workflow** and follow the steps to [select a VCS provider and repository](/terraform/cloud-docs/workspaces/create#create-a-workspace).
- To edit an existing VCS connection, click **Change source**. Choose the **Version control workflow** and follow the steps to [select a VCS provider and repository](/terraform/cloud-docs/workspaces/create#create-a-workspace).
- To remove the VCS connection, click **Change source**. Select either the **CLI-driven workflow** or the **API-driven workflow**, and click **Update VCS settings**. The workspace is no longer connected to VCS.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
### Terraform Working Directory
Specify the directory where Terraform will execute runs. This defaults to the root directory in your repository, but you may want to specify another directory if you keep multiple Terraform configurations within the same repository, such as separate `staging` and `production` directories.
A working directory is required when you use [trigger prefixes](#automatic-run-triggering).
### Apply Method
Choose a workflow for Terraform runs.
- **Auto apply:** Terraform will apply changes from successful plans without prompting for approval. A push to the default branch of your repository will trigger a plan and apply cycle. You may want to do this in non-interactive environments, like continuous deployment workflows.
!> **Warning:** If you choose auto apply, make sure that no one can change your infrastructure outside of your automated build pipeline. This reduces the risk of configuration drift and unexpected changes.
- **Manual apply:** Terraform will ask for approval before applying changes from a successful plan. A push to the default branch of your repository will trigger a plan, and then Terraform will wait for confirmation.
### Automatic Run Triggering
HCP Terraform uses your VCS provider's API to retrieve the changed files in your repository. You can choose one of the following options to specify which changes trigger Terraform runs.
#### Always trigger runs
This option instructs Terraform to begin a run when changes are pushed to any file within the repository. This can be useful for repositories that do not have multiple configurations but require a working directory for some other reason. However, we do not recommend this approach for true monorepos, as it queues unnecessary runs and slows down your ability to provision infrastructure.
#### Only trigger runs when files in specified paths change
This option instructs Terraform to begin new runs only for changes that affect specified files and directories. This behavior also applies to [speculative plans](/terraform/cloud-docs/run/remote-operations#speculative-plans) on pull requests.
You can use trigger patterns and trigger prefixes in the **Add path** field to specify groups of files and directories.
- **Trigger Patterns:** (Recommended) Use glob patterns to specify the files that should trigger a new run. For example, `/submodule/**/*.tf` specifies all files with the `.tf` extension that are nested below the `submodule` directory. You can also use more complex patterns like `/**/networking/**/*`, which specifies all files that have a `networking` folder in their file path (e.g., `/submodule/service-1/networking/private/main.tf`). Refer to [Glob Patterns for Automatic Run Triggering](#glob-patterns-for-automatic-run-triggering) for details.
- **Trigger Prefixes:** HCP Terraform will queue runs for changes in any of the specified trigger directories matching the provided prefixes (including the working directory). For example, if you use a top-level `modules` directory to share Terraform code across multiple configurations, changes to the shared modules are relevant to every workspace that uses that repository. You can add `modules` as a trigger directory for each workspace to track changes to shared code.
-> **Note:** HCP Terraform triggers runs on all attached workspaces if it does not receive a list of changed files or if that list is too large to process. When this happens, HCP Terraform may show several runs with completed plans that do not result in infrastructure changes.
#### Trigger runs when a git tag is published
This option instructs Terraform to begin new runs only for changes that have a specific tag format.
You can choose one of the following tag formats:
- **Semantic Versioning:** Matches tags in the popular [SemVer format](https://semver.org/). For example, `0.4.2`.
- **Version contains a prefix:** Matches tags that have an additional prefix before the [SemVer format](https://semver.org/). For example, `version-0.4.2`.
- **Version contains a suffix:** Matches tags that have an additional suffix after the [SemVer format](https://semver.org/). For example, `0.4.2-alpha`.
- **Custom Regular Expression:** You can define your own regex for HCP Terraform to match against tags.
You must include an additional `\` to escape the regex pattern when you manage your workspace with the [hashicorp/tfe provider](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/workspace#tags_regex) and trigger runs through matching git tags. Refer to [Terraform escape sequences](/terraform/language/expressions/strings#escape-sequences) for more details.
| Tag Format | Regex Pattern | Regex Pattern (Escaped) |
|-------------------------------|---------------------------|--------------------------|
| **Semantic Versioning** | `^\d+.\d+.\d+$` | `^\\d+.\\d+.\\d+$` |
| **Version contains a prefix** | `\d+.\d+.\d+$` | `\\d+.\\d+.\\d+$` |
| **Version contains a suffix** | `^\d+.\d+.\d+` | `^\\d+.\\d+.\\d+` |
HCP Terraform triggers runs for all tags matching this pattern, regardless of the value in the [VCS Branch](#vcs-branch) setting.
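The three built-in patterns from the table can be exercised directly. Note that the dots in these patterns are unescaped, so each one matches any character rather than only a literal `.`. A quick check of which sample tags each pattern matches:

```python
import re

# The unescaped regex patterns from the table above.
patterns = {
    "Semantic Versioning": r"^\d+.\d+.\d+$",
    "Version contains a prefix": r"\d+.\d+.\d+$",
    "Version contains a suffix": r"^\d+.\d+.\d+",
}
tags = ["0.4.2", "version-0.4.2", "0.4.2-alpha"]

# For each format, collect the sample tags the pattern matches.
matches = {
    name: [t for t in tags if re.search(pattern, t)]
    for name, pattern in patterns.items()
}

assert matches["Semantic Versioning"] == ["0.4.2"]
assert matches["Version contains a prefix"] == ["0.4.2", "version-0.4.2"]
assert matches["Version contains a suffix"] == ["0.4.2", "0.4.2-alpha"]
```

Because the dots are unescaped, a tag like `0x4y2` would also match; use a custom regular expression with `\.` if you need strict dot separators.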
### VCS Branch
This setting designates which branch of the repository HCP Terraform should use when the workspace is set to [Always Trigger Runs](#always-trigger-runs) or [Only trigger runs when files in specified paths change](#only-trigger-runs-when-files-in-specified-paths-change). If you leave this setting blank, HCP Terraform uses the repository's default branch. If the workspace is set to trigger runs when a [git tag is published](#trigger-runs-when-a-git-tag-is-published), all tags will trigger runs, regardless of the branch specified in this setting.
### Automatic Speculative Plans
Whether to perform [speculative plans on pull requests](/terraform/cloud-docs/run/ui#speculative-plans-on-pull-requests) to the connected repository, to assist in reviewing proposed changes. Automatic speculative plans are enabled by default, but you can disable them for any workspace.
### Include Submodules on Clone
Select **Include submodules on clone** to recursively clone all of the repository's Git submodules when HCP Terraform fetches a configuration.
-> **Note:** The [SSH key for cloning Git submodules](/terraform/cloud-docs/vcs#ssh-keys) is set in the VCS provider settings for the organization and is not related to the workspace's SSH key for Terraform modules.
## Glob Patterns for Automatic Run Triggering
We support `glob` patterns to describe a set of triggers for automatic runs. Refer to [trigger patterns](#only-trigger-runs-when-files-in-specified-paths-change) for details.
Supported wildcards:
- `*` Matches zero or more characters.
- `?` Matches a single character.
- `**` Matches directories recursively.
The following examples demonstrate how to use the supported wildcards:
- `/**/*` matches every file in every directory
- `/module/**/*` matches all files in any directory below the `module` directory
- `/**/networking/*` matches every file that is inside any `networking` directory
- `/**/networking/**/*` matches every file that has a `networking` directory in its path
- `/**/*.tf` matches every file in any directory that has the `.tf` extension
- `/submodule/*.???` matches every file inside the `submodule` directory that has a three-character extension.
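To see these wildcards in action, here is a rough translation of trigger patterns into regular expressions. This is a sketch that mirrors the examples above (treating `?` as one character within a path segment, consistent with the `/submodule/*.???` example); HCP Terraform's actual matcher may differ in edge cases.

```python
import re

def trigger_pattern_to_regex(pattern: str) -> re.Pattern:
    # Rough sketch of HCP-Terraform-style trigger patterns as regexes:
    # '**/' crosses any number of directories, '*' stays within one path
    # segment, and '?' matches one non-slash character.
    parts, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**/", i):
            parts.append(r"(?:[^/]+/)*")
            i += 3
        elif pattern.startswith("**", i):
            parts.append(r".*")
            i += 2
        elif pattern[i] == "*":
            parts.append(r"[^/]*")
            i += 1
        elif pattern[i] == "?":
            parts.append(r"[^/]")
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(parts) + "$")

assert trigger_pattern_to_regex("/**/*.tf").match("/submodule/service-1/main.tf")
assert trigger_pattern_to_regex("/**/networking/**/*").match(
    "/submodule/service-1/networking/private/main.tf")
assert trigger_pattern_to_regex("/submodule/*.???").match("/submodule/main.abc")
assert not trigger_pattern_to_regex("/submodule/*.???").match("/submodule/main.tf")
```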
---
page_title: Run Tasks - Workspaces - HCP Terraform
description: >-
Learn how to integrate third-party tools into the run lifecycle. Create and delete run tasks and associate them with workspaces.
---
[entitlement]: /terraform/cloud-docs/api-docs#feature-entitlements
# Run Tasks
HCP Terraform run tasks let you directly integrate third-party tools and services at certain stages in the HCP Terraform run lifecycle. Use run tasks to validate Terraform configuration files, analyze execution plans before applying them, scan for security vulnerabilities, or perform other custom actions.
Run tasks send data about a run to an external service at [specific run stages](#understanding-run-tasks-within-a-run). The external service processes the data, evaluates whether the run passes or fails, and sends a response to HCP Terraform. HCP Terraform then uses this response and the run task enforcement level to determine if a run can proceed. [Explore run tasks in the Terraform registry](https://registry.terraform.io/browse/run-tasks).
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/run-tasks.mdx'
<!-- END: TFC:only name:pnp-callout -->
You can manage run tasks through the HCP Terraform UI or the [Run Tasks API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks).
> **Hands-on:** Try the [HCP Packer validation run task](/packer/tutorials/hcp/setup-hcp-terraform-run-task) tutorial.
## Requirements
**Terraform Version** - You can assign run tasks to workspaces that use a Terraform version of 1.1.9 and later. You can downgrade a workspace with existing runs to use a prior Terraform version without causing an error. However, HCP Terraform no longer triggers the run tasks during plan and apply operations.
**Permissions** - To create a run task, you must have a user account with the [Manage Run Tasks permission](/terraform/cloud-docs/users-teams-organizations/permissions#manage-run-tasks). To associate run tasks with a workspace, you need the [Manage Workspace Run Tasks permission](/terraform/cloud-docs/users-teams-organizations/permissions#general-workspace-permissions) on that particular workspace.
## Creating a Run Task
Explore the full list of [run tasks in the Terraform Registry](https://registry.terraform.io/browse/run-tasks).
Run tasks send an API payload to an external service. The API payload contains run-related information, including a callback URL, which the service uses to return a pass or fail status to HCP Terraform.
For example, the [HCP Packer integration](/terraform/cloud-docs/integrations/run-tasks#hcp-packer-run-task) checks image artifacts within a Terraform configuration for validity. If the configuration references images marked as unusable (revoked), then the run task fails and provides an error message.
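The pass or fail status is delivered by calling back to HCP Terraform. The following sketches the body an integration might send to the callback URL; the `task-results` type and the `status`/`message` attribute names follow the run tasks integration API, but treat them as assumptions and confirm against the linked documentation.

```python
import json

VALID_STATUSES = {"passed", "failed", "running"}

def build_task_result(status: str, message: str) -> bytes:
    """Build the JSON:API body a run task integration sends back to the
    callback URL (field names assumed from the run tasks integration API)."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    return json.dumps({
        "data": {
            "type": "task-results",
            "attributes": {"status": status, "message": message},
        }
    }).encode()

body = build_task_result("passed", "No revoked images referenced.")
# PATCH this body to the payload's callback URL, using the access token
# from the payload as a bearer credential and the JSON:API content type.
```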
To create a new run task:
1. Navigate to the desired workspace, open the **Settings** menu, and select **Run Tasks**.
1. Click **Create a new run task**. The **Run Tasks** page appears.
1. Enter the information about the run task to be configured:
- **Enabled** (optional): Whether the run task will run across all associated workspaces. New tasks are enabled by default.
- **Name** (required): A human-readable name for the run task. This will be displayed in workspace configuration pages and can contain letters, numbers, dashes and underscores.
- **Endpoint URL** (required): The URL for the external service. Run tasks will POST the [run tasks payload](/terraform/cloud-docs/integrations/run-tasks#integration-details) to this URL.
- **Description** (optional): A human-readable description for the run task. This information can contain letters, numbers, spaces, and special characters.
- **HMAC key** (optional): A secret key that may be required by the external service to verify request authenticity.
1. Click **Create run task**. The run task is now available within the organization, and you can associate it with one or more workspaces.
### Global Run Tasks
When you create a new run task, you can choose to apply it globally to every workspace in an organization. Your organization must have the `global-run-task` [entitlement][] to use global run tasks.
1. Select the **Global** checkbox
1. Choose when HCP Terraform should start the run task:
- **Pre-plan**: Before Terraform creates the plan.
- **Post-plan**: After Terraform creates the plan.
- **Pre-apply**: Before Terraform applies a plan.
- **Post-apply**: After Terraform applies a plan.
1. Choose an enforcement level:
   - **Advisory**: Run tasks cannot block a run from completing. If the task fails, the run proceeds with a warning in the user interface.
   - **Mandatory**: Failed run tasks can block a run from completing. If the task fails (including timeouts or unexpected remote errors), the run stops and errors with a warning in the user interface.
## Associating Run Tasks with a Workspace
1. Click **Workspaces** and then go to the workspace where you want to associate run tasks.
1. Open the **Settings** menu and select **Run Tasks**.
1. Click the **+** next to the task you want to add to the workspace.
1. Choose when HCP Terraform should start the run task:
- **Pre-plan**: Before Terraform creates the plan.
- **Post-plan**: After Terraform creates the plan.
- **Pre-apply**: Before Terraform applies a plan.
- **Post-apply**: After Terraform applies a plan.
1. Choose an enforcement level:
   - **Advisory**: Run tasks cannot block a run from completing. If the task fails, the run will proceed with a warning in the UI.
- **Mandatory**: Run tasks can block a run from completing. If the task fails (including a timeout or unexpected remote error condition), the run will transition to an Errored state with a warning in the UI.
1. Click **Create**. Your run task is now configured.
## Understanding Run Tasks Within a Run
Run tasks perform actions before and after the [plan](/terraform/cloud-docs/run/states#the-plan-stage) and [apply](/terraform/cloud-docs/run/states#the-apply-stage) stages of a [Terraform run](/terraform/cloud-docs/run/remote-operations). Once all run tasks complete, the run proceeds or fails based on the most restrictive enforcement level among the associated run tasks.
For example, if a mandatory task fails and an advisory task succeeds, the run fails. If an advisory task fails, but a mandatory task succeeds, the run succeeds and proceeds to the apply stage. Regardless of the exit status of a task, HCP Terraform displays the status and any related message data in the UI.
## Removing a Run Task from a Workspace
Removing a run task from a workspace does not delete it from the organization. To remove a run task from a specific workspace:
1. Navigate to the desired workspace, open the **Settings** menu and select **Run Tasks**.
1. Click the ellipses (...) on the associated run task, and then click **Remove**. The run task will no longer be applied to runs within the workspace.
## Deleting a Run Task
You must remove a run task from all associated workspaces before you can delete it. To delete a run task:
1. Navigate to **Settings** and click **Run Tasks**.
1. Click the ellipses (...) next to the run task you want to delete, and then click **Edit**.
1. Click **Delete run task**.
You cannot delete run tasks that are still associated with a workspace. If you attempt this, you will see a warning in the UI containing a list of all workspaces that are associated with the run task.
organizations permissions manage run tasks To associate run tasks with a workspace you need the Manage Workspace Run Tasks permission terraform cloud docs users teams organizations permissions general workspace permissions on that particular workspace Creating a Run Task Explore the full list of run tasks in the Terraform Registry https registry terraform io browse run tasks Run tasks send an API payload to an external service The API payload contains run related information including a callback URL which the service uses to return a pass or fail status to HCP Terraform For example the HCP Packer integration terraform cloud docs integrations run tasks hcp packer run task checks image artifacts within a Terraform configuration for validity If the configuration references images marked as unusable revoked then the run task fails and provides an error message To create a new run task 1 Navigate to the desired workspace open the Settings menu and select Run Tasks 1 Click Create a new run task The Run Tasks page appears 1 Enter the information about the run task to be configured Enabled optional Whether the run task will run across all associated workspaces New tasks are enabled by default Name required A human readable name for the run task This will be displayed in workspace configuration pages and can contain letters numbers dashes and underscores Endpoint URL required The URL for the external service Run tasks will POST the run tasks payload terraform cloud docs integrations run tasks integration details to this URL Description optional A human readable description for the run task This information can contain letters numbers spaces and special characters HMAC key optional A secret key that may be required by the external service to verify request authenticity 1 Click Create run task The run task is now available within the organization and you can associate it with one or more workspaces Global Run Tasks When you create a new run task you can choose to apply it 
globally to every workspace in an organization Your organization must have the global run task entitlement to use global run tasks 1 Select the Global checkbox 1 Choose when HCP Terraform should start the run task Pre plan Before Terraform creates the plan Post plan After Terraform creates the plan Pre apply Before Terraform applies a plan Post apply After Terraform applies a plan 1 Choose an enforcement level Advisory Run tasks can not block a run from completing If the task fails the run proceeds with a warning in the user interface Mandatory Failed run tasks can block a run from completing If the task fails including timeouts or unexpected remote errors the run stops and errors with a warning in the user interface Associating Run Tasks with a Workspace 1 Click Workspaces and then go to the workspace where you want to associate run tasks 1 Open the Settings menu and select Run Tasks 1 Click the next to the task you want to add to the workspace 1 Choose when HCP Terraform should start the run task Pre plan Before Terraform creates the plan Post plan After Terraform creates the plan Pre apply Before Terraform applies a plan Post apply After Terraform applies a plan 1 Choose an enforcement level Advisory Run tasks can not block a run from completing If the task fails the run will proceed with a warning in the UI Mandatory Run tasks can block a run from completing If the task fails including a timeout or unexpected remote error condition the run will transition to an Errored state with a warning in the UI 1 Click Create Your run task is now configured Understanding Run Tasks Within a Run Run tasks perform actions before and after the plan terraform cloud docs run states the plan stage and apply terraform cloud docs run states the apply stage stages of a Terraform run terraform cloud docs run remote operations Once all run tasks complete the run ends based on the most restrictive enforcement level in each associated run task For example if a mandatory task fails and 
an advisory task succeeds the run fails If an advisory task fails but a mandatory task succeeds the run succeeds and proceeds to the apply stage Regardless of the exit status of a task HCP Terraform displays the status and any related message data in the UI Removing a Run Task from a Workspace Removing a run task from a workspace does not delete it from the organization To remove a run task from a specific workspace 1 Navigate to the desired workspace open the Settings menu and select Run Tasks 1 Click the ellipses on the associated run task and then click Remove The run task will no longer be applied to runs within the workspace Deleting a Run Task You must remove a run task from all associated workspaces before you can delete it To delete a run task 1 Navigate to Settings and click Run Tasks 1 Click the ellipses next to the run task you want to delete and then click Edit 1 Click Delete run task You cannot delete run tasks that are still associated with a workspace If you attempt this you will see a warning in the UI containing a list of all workspaces that are associated with the run task |
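Beyond the UI workflow described above, run tasks can also be managed with Terraform itself through the `tfe` provider. The sketch below is illustrative rather than a documented workflow: the organization name, endpoint URL, HMAC variable, and the `tfe_workspace.this` reference are placeholders.

```hcl
# Illustrative only: create an organization-level run task and attach it to
# one workspace at the post-plan stage with an advisory enforcement level.
resource "tfe_organization_run_task" "example" {
  organization = "my-org-name"                      # placeholder
  name         = "example-task"
  url          = "https://external-service.example.com"
  enabled      = true
  hmac_key     = var.example_hmac_key               # optional shared secret
}

resource "tfe_workspace_run_task" "example" {
  workspace_id      = tfe_workspace.this.id         # assumed existing workspace
  task_id           = tfe_organization_run_task.example.id
  enforcement_level = "advisory"                    # or "mandatory"
  stage             = "post_plan"
}
```

As in the UI, an advisory task that fails only produces a warning, while a mandatory one can block the run.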
---
page_title: Dynamic Credentials with the Kubernetes and Helm Provider - Workspaces - HCP Terraform
description: >-
Use OpenID Connect to get short-term credentials for the Kubernetes and Helm Terraform providers in
your HCP Terraform runs.
---
# Dynamic Credentials with the Kubernetes and Helm providers
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.13.1](/terraform/cloud-docs/agents/changelog#1-13-1-10-25-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can use HCP Terraform’s native OpenID Connect integration with Kubernetes to use [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) for the Kubernetes and Helm providers in your HCP Terraform runs. Configuring the integration requires the following steps:
1. **[Configure Kubernetes](#configure-kubernetes):** Set up a trust configuration between Kubernetes and HCP Terraform. Next, create Kubernetes role bindings for your HCP Terraform identities.
2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use dynamic credentials.
3. **[Configure the Kubernetes or Helm provider](#configure-the-provider)**: Set the required attributes on the provider block.
Once you complete the setup, HCP Terraform automatically authenticates to Kubernetes during each run. The Kubernetes and Helm providers' authentication is valid for the length of a plan or apply operation.
## Configure Kubernetes
You must enable and configure an OIDC identity provider in the Kubernetes API. This workflow changes based on the platform hosting your Kubernetes cluster. HCP Terraform only supports dynamic credentials with Kubernetes in AWS and GCP.
### Configure an OIDC identity provider
Refer to the AWS documentation for guidance on [setting up an EKS cluster for OIDC authentication](https://docs.aws.amazon.com/eks/latest/userguide/authenticate-oidc-identity-provider.html). You can also refer to our [example configuration](https://github.com/hashicorp-education/learn-terraform-dynamic-credentials/tree/main/eks/trust).
Refer to the GCP documentation for guidance on [setting up a GKE cluster for OIDC authentication](https://cloud.google.com/kubernetes-engine/docs/how-to/oidc). You can also refer to our [example configuration](https://github.com/hashicorp-education/learn-terraform-dynamic-credentials/tree/main/gke/trust).
When inputting an "issuer URL", use the address of HCP Terraform (`https://app.terraform.io` _without_ a trailing slash) or the URL of your Terraform Enterprise instance. The value of "client ID" is your audience in OIDC terminology, and it should match the value of the `TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE` environment variable in your workspace.
The OIDC identity resolves authentication to the Kubernetes API, but it first requires authorization to interact with that API. So, you must bind RBAC roles to the OIDC identity in Kubernetes.
You can use both "User" and "Group" subjects in your role bindings. For OIDC identities coming from TFC, the "User" value is formatted like so: `organization:<MY-ORG-NAME>:project:<MY-PROJECT-NAME>:workspace:<MY-WORKSPACE-NAME>:run_phase:<plan|apply>`.
You can extract the "Group" value from the token claim you configured in your cluster OIDC configuration. For details on the structure of the HCP Terraform token, refer to [Workload Identity](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/workload-identity-tokens).
Below, we show an example of a `RoleBinding` for the HCP Terraform OIDC identity.
```hcl
resource "kubernetes_cluster_role_binding_v1" "oidc_role" {
  metadata {
    name = "oidc-identity"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = var.rbac_group_cluster_role
  }

  // Option A - Bind RBAC roles to groups.
  //
  // Groups are extracted from the token claim designated by 'rbac_group_oidc_claim'.
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = var.tfc_organization_name
  }

  // Option B - Bind RBAC roles to user identities.
  //
  // Users are extracted from the 'sub' token claim.
  // Plan and apply phases are assigned different user identities.
  // For HCP Terraform tokens, the format of the user id is always the one described below.
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = "${var.tfc_hostname}#organization:${var.tfc_organization_name}:project:${var.tfc_project_name}:workspace:${var.tfc_workspace_name}:run_phase:plan"
  }
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = "${var.tfc_hostname}#organization:${var.tfc_organization_name}:project:${var.tfc_project_name}:workspace:${var.tfc_workspace_name}:run_phase:apply"
  }
}
```
If binding with "User" subjects, be aware that the plan and apply phases are assigned different identities, each requiring specific bindings. This means you can tailor permissions to each Terraform operation: planning operations usually require "read-only" permissions, while apply operations also require "write" access.
!> **Warning**: Always check, at minimum, the audience and the organization's name to prevent unauthorized access from other HCP Terraform organizations.
## Configure HCP Terraform
You must set certain environment variables in your HCP Terraform workspace to configure HCP Terraform to authenticate with Kubernetes or Helm using dynamic credentials. You can set these as workspace variables, or if you’d like to share one Kubernetes role across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.
### Required Environment Variables
| Variable | Value | Notes |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_KUBERNETES_PROVIDER_AUTH`<br />`TFC_KUBERNETES_PROVIDER_AUTH[_TAG]`<br />_(Default variable not supported)_ | `true` | Requires **v1.14.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to Kubernetes. |
| `TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE`<br />`TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br />`TFC_DEFAULT_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE` | The audience name in your cluster's OIDC configuration, such as `kubernetes`. | Requires **v1.14.0** or later if self-managing agents. |
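As an illustration, the two required variables can be set with the `tfe` provider instead of through the workspace UI. This is a minimal sketch under stated assumptions: `tfe_workspace.this` is a hypothetical existing workspace resource, and `kubernetes` is the audience configured in your cluster's OIDC settings.

```hcl
# Sketch: set the required dynamic-credentials variables on a workspace.
resource "tfe_variable" "k8s_provider_auth" {
  key          = "TFC_KUBERNETES_PROVIDER_AUTH"
  value        = "true"
  category     = "env"
  workspace_id = tfe_workspace.this.id  # assumed existing workspace
}

resource "tfe_variable" "k8s_audience" {
  key          = "TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE"
  value        = "kubernetes"           # must match your cluster's OIDC client ID
  category     = "env"
  workspace_id = tfe_workspace.this.id
}
```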
## Configure the provider
The Kubernetes and Helm providers share the same schema of configuration attributes for the provider block. The example below illustrates using the Kubernetes provider but the same configuration applies to the Helm provider.
Make sure that you are not using any of the other arguments or methods listed in the [authentication](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#authentication) section of the provider documentation as these settings may interfere with dynamic provider credentials. The only allowed provider attributes are `host` and `cluster_ca_certificate`.
### Single provider instance
HCP Terraform automatically sets the `KUBE_TOKEN` environment variable and includes the workload identity token.
The provider needs to be configured with the URL of the API endpoint using the `host` attribute (or `KUBE_HOST` environment variable). In most cases, the `cluster_ca_certificate` (or `KUBE_CLUSTER_CA_CERT_DATA` environment variable) is also required.
#### Example Usage
```hcl
provider "kubernetes" {
host = var.cluster-endpoint-url
cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)
}
```
### Multiple aliases
You can add additional variables to handle multiple distinct Kubernetes clusters, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.
For more details, see [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).
#### Required Terraform Variable
To use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.
```hcl
variable "tfc_kubernetes_dynamic_credentials" {
description = "Object containing Kubernetes dynamic credentials configuration"
type = object({
default = object({
token_path = string
})
aliases = map(object({
token_path = string
}))
})
}
```
#### Example Usage
```hcl
provider "kubernetes" {
alias = "ALIAS1"
host = var.alias1-endpoint-url
cluster_ca_certificate = base64decode(var.alias1-cluster-ca)
token = file(var.tfc_kubernetes_dynamic_credentials.aliases["ALIAS1"].token_path)
}
provider "kubernetes" {
alias = "ALIAS2"
host = var.alias2-endpoint-url
cluster_ca_certificate = base64decode(var.alias2-cluster-ca)
token = file(var.tfc_kubernetes_dynamic_credentials.aliases["ALIAS2"].token_path)
}
```
The `tfc_kubernetes_dynamic_credentials` variable is also available to use for single provider configurations, instead of the `KUBE_TOKEN` environment variable.
```hcl
provider "kubernetes" {
host = var.cluster-endpoint-url
cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)
token = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)
}
```
---
page_title: Dynamic Credentials with the Vault Provider - Workspaces - HCP Terraform
description: >-
Use OpenID Connect to get short-term credentials for the Vault Terraform provider in
your HCP Terraform runs.
---
# Dynamic Credentials with the Vault Provider
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.7.0](/terraform/cloud-docs/agents/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can use HCP Terraform’s native OpenID Connect integration with Vault to get [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) for the Vault provider in your HCP Terraform runs. Configuring the integration requires the following steps:
1. **[Configure Vault](#configure-vault):** Set up a trust configuration between Vault and HCP Terraform. Then, you must create Vault roles and policies for your HCP Terraform workspaces.
2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.
Once you complete the setup, HCP Terraform automatically authenticates to Vault during each run. The Vault provider authentication is valid for the length of the plan or apply. Vault does not revoke authentication until the run is complete.
If you are using Vault's [secrets engines](/vault/docs/secrets), you must complete the following set up before continuing to configure [Vault-backed dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/vault-backed).
## Configure Vault
You must enable and configure the JWT backend in Vault. These instructions use the Vault CLI commands, but you can also use Terraform to configure Vault. Refer to our [example Terraform configuration](https://github.com/hashicorp/terraform-dynamic-credentials-setup-examples/tree/main/vault).
### Enable the JWT Auth Backend
Run the following command to enable the JWT auth backend in Vault:
```shell
vault auth enable jwt
```
### Configure Trust with HCP Terraform
You must configure Vault to trust HCP Terraform’s identity tokens and verify them using HCP Terraform’s public key. The following command configures the `jwt` auth backend in Vault to trust HCP Terraform as an OIDC identity provider:
```shell
vault write auth/jwt/config \
oidc_discovery_url="https://app.terraform.io" \
bound_issuer="https://app.terraform.io"
```
The `oidc_discovery_url` and `bound_issuer` should both be the root address of HCP Terraform, including the scheme and without a trailing slash.
#### Terraform Enterprise Specific Requirements
If you are using a custom or self-signed CA certificate you may need to specify the CA certificate or chain of certificates, in PEM format, via the [`oidc_discovery_ca_pem`](/vault/api-docs/auth/jwt#oidc_discovery_ca_pem) argument as shown in the following example command:
```shell
vault write auth/jwt/config \
oidc_discovery_url="https://app.terraform.io" \
bound_issuer="https://app.terraform.io" \
[email protected]
```
In the example above, `my-cert.pem` is a PEM formatted file containing the certificate.
### Create a Vault Policy
You must create a Vault policy that controls what paths and secrets your HCP Terraform workspace can access in Vault.
Create a file called `tfc-policy.hcl` with the following content:
```hcl
# Allow tokens to query themselves
path "auth/token/lookup-self" {
capabilities = ["read"]
}
# Allow tokens to renew themselves
path "auth/token/renew-self" {
capabilities = ["update"]
}
# Allow tokens to revoke themselves
path "auth/token/revoke-self" {
capabilities = ["update"]
}
# Configure the actual secrets the token should have access to
path "secret/*" {
capabilities = ["read"]
}
```
Then create the policy in Vault:
```shell
vault policy write tfc-policy tfc-policy.hcl
```
### Create a JWT Auth Role
Create a Vault role that HCP Terraform can use when authenticating to Vault.
Vault offers a lot of flexibility in determining how to map roles and permissions in Vault to workspaces in HCP Terraform. You can have one role for each workspace, one role for a group of workspaces, or one role for all workspaces in an organization. You can also configure different roles for the plan and apply phases of a run.
-> **Note:** If you set your `user_claim` to be per workspace, then Vault ties the entity it creates to that workspace's name. If you rename the workspace tied to your `user_claim`, Vault will create an additional identity object. To avoid this, update the alias name in Vault to your new workspace name before you update it in HCP Terraform.
The following example creates a role called `tfc-role`. The role is mapped to a single workspace and HCP Terraform can use it for both plan and apply runs.
Create a file called `vault-jwt-auth-role.json` with the following content:
```json
{
"policies": ["tfc-policy"],
"bound_audiences": ["vault.workload.identity"],
"bound_claims_type": "glob",
"bound_claims": {
"sub":
"organization:my-org-name:project:my-project-name:workspace:my-workspace-name:run_phase:*"
},
"user_claim": "terraform_full_workspace",
"role_type": "jwt",
"token_ttl": "20m"
}
```
Then run the following command to create a role named `tfc-role` with this configuration in Vault:
```shell
vault write auth/jwt/role/tfc-role @vault-jwt-auth-role.json
```
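As noted at the start of this section, you can also configure Vault with Terraform instead of the CLI. The sketch below mirrors the `vault auth enable`, `vault write auth/jwt/config`, and role-creation commands above using the `vault` provider; it assumes the `tfc-policy` created in the previous step, and expresses `token_ttl` in seconds (1200 = 20 minutes, matching the JSON example).

```hcl
# Sketch: the CLI steps above, expressed with the hashicorp/vault provider.
resource "vault_jwt_auth_backend" "tfc" {
  path               = "jwt"
  oidc_discovery_url = "https://app.terraform.io"
  bound_issuer       = "https://app.terraform.io"
}

resource "vault_jwt_auth_backend_role" "tfc_role" {
  backend           = vault_jwt_auth_backend.tfc.path
  role_name         = "tfc-role"
  token_policies    = ["tfc-policy"]
  bound_audiences   = ["vault.workload.identity"]
  bound_claims_type = "glob"
  bound_claims = {
    sub = "organization:my-org-name:project:my-project-name:workspace:my-workspace-name:run_phase:*"
  }
  user_claim = "terraform_full_workspace"
  role_type  = "jwt"
  token_ttl  = 1200 # 20 minutes
}
```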
To understand all the available options for matching bound claims, refer to the [Terraform workload identity claim specification](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) and the [Vault documentation on configuring bound claims](/vault/docs/auth/jwt#bound-claims). To understand all the options available when configuring Vault JWT auth roles, refer to the [Vault API documentation](/vault/api-docs/auth/jwt#create-role).
!> **Warning:** Always check, at minimum, the audience and the name of the organization in order to prevent unauthorized access from other HCP Terraform organizations.
#### Token TTLs
We recommend setting `token_ttl` to a relatively short value. HCP Terraform can renew the token periodically until the plan or apply is complete, then revoke it to prevent it from being used further.
## Configure HCP Terraform
You’ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with Vault using dynamic credentials. You can set these as workspace variables, or if you’d like to share one Vault role across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.
### Required Environment Variables
| Variable | Value | Notes |
|--------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_VAULT_PROVIDER_AUTH`<br />`TFC_VAULT_PROVIDER_AUTH[_TAG]`<br />_(Default variable not supported)_ | `true` | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to Vault. |
| `TFC_VAULT_ADDR`<br />`TFC_VAULT_ADDR[_TAG]`<br />`TFC_DEFAULT_VAULT_ADDR` | The address of the Vault instance to authenticate against. | Requires **v1.7.0** or later if self-managing agents. Will also be used to set `VAULT_ADDR` in the run environment. |
| `TFC_VAULT_RUN_ROLE`<br />`TFC_VAULT_RUN_ROLE[_TAG]`<br />`TFC_DEFAULT_VAULT_RUN_ROLE` | The name of the Vault role to authenticate against (`tfc-role`, in our example). | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_VAULT_PLAN_ROLE` and `TFC_VAULT_APPLY_ROLE` are both provided. These variables are described [below](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/vault-configuration#optional-environment-variables) |
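To share one Vault role across multiple workspaces, as mentioned above, these variables can live in a variable set rather than on each workspace. A hedged sketch with the `tfe` provider; the set name, organization, and Vault address are placeholders:

```hcl
# Sketch: a reusable variable set carrying the required Vault variables.
resource "tfe_variable_set" "vault_auth" {
  name         = "vault-dynamic-credentials"
  organization = "my-org-name" # placeholder
}

resource "tfe_variable" "vault_provider_auth" {
  key             = "TFC_VAULT_PROVIDER_AUTH"
  value           = "true"
  category        = "env"
  variable_set_id = tfe_variable_set.vault_auth.id
}

resource "tfe_variable" "vault_addr" {
  key             = "TFC_VAULT_ADDR"
  value           = "https://vault.example.com:8200" # placeholder address
  category        = "env"
  variable_set_id = tfe_variable_set.vault_auth.id
}

resource "tfe_variable" "vault_run_role" {
  key             = "TFC_VAULT_RUN_ROLE"
  value           = "tfc-role"
  category        = "env"
  variable_set_id = tfe_variable_set.vault_auth.id
}
```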
### Optional Environment Variables
You may need to set these variables, depending on your Vault configuration and use case.
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_VAULT_NAMESPACE`<br />`TFC_VAULT_NAMESPACE[_TAG]`<br />`TFC_DEFAULT_VAULT_NAMESPACE` | The namespace to use when authenticating to Vault. | Requires **v1.7.0** or later if self-managing agents. Will also be used to set `VAULT_NAMESPACE` in the run environment. |
| `TFC_VAULT_AUTH_PATH`<br />`TFC_VAULT_AUTH_PATH[_TAG]`<br />`TFC_DEFAULT_VAULT_AUTH_PATH` | The path where the JWT auth backend is mounted in Vault. Defaults to `jwt`. | Requires **v1.7.0** or later if self-managing agents. |
| `TFC_VAULT_WORKLOAD_IDENTITY_AUDIENCE`<br />`TFC_VAULT_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br />`TFC_DEFAULT_VAULT_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to `vault.workload.identity`. | Requires **v1.7.0** or later if self-managing agents. Must match the `bound_audiences` configured for the role in Vault. |
| `TFC_VAULT_PLAN_ROLE`<br />`TFC_VAULT_PLAN_ROLE[_TAG]`<br />`TFC_DEFAULT_VAULT_PLAN_ROLE` | The Vault role to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_RUN_ROLE` if not provided. |
| `TFC_VAULT_APPLY_ROLE`<br />`TFC_VAULT_APPLY_ROLE[_TAG]`<br />`TFC_DEFAULT_VAULT_APPLY_ROLE` | The Vault role to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_RUN_ROLE` if not provided. |
| `TFC_VAULT_ENCODED_CACERT`<br />`TFC_VAULT_ENCODED_CACERT[_TAG]`<br />`TFC_DEFAULT_VAULT_ENCODED_CACERT` | A PEM-encoded CA certificate that has been Base64 encoded. | Requires **v1.9.0** or later if self-managing agents. This certificate will be used when connecting to Vault. May be required when connecting to Vault instances that use a custom or self-signed certificate. |
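If your Vault instance uses a custom or self-signed CA, you can produce the Base64-encoded PEM value for `TFC_VAULT_ENCODED_CACERT` directly in Terraform. This is a sketch assuming you manage workspace variables with the `hashicorp/tfe` provider; the certificate path and workspace reference are placeholders.

```hcl
resource "tfe_variable" "vault_encoded_cacert" {
  key          = "TFC_VAULT_ENCODED_CACERT"
  # base64encode() returns the Base64 encoding of the PEM file contents,
  # which is the format this variable expects.
  value        = base64encode(file("${path.module}/vault-ca.pem"))
  category     = "env"
  workspace_id = tfe_workspace.example.id
}
```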
## Vault Provider Configuration
Once you set up dynamic credentials for a workspace, HCP Terraform automatically authenticates to Vault for each run. Do not pass the `address`, `token`, or `namespace` arguments into the provider configuration block. HCP Terraform sets these values as environment variables in the run environment.
You can use the Vault provider to read static secrets from Vault and use them with other Terraform resources. You can also access the other resources and data sources available in the [Vault provider documentation](https://registry.terraform.io/providers/hashicorp/vault/latest). You must adjust your [Vault policy](#create-a-vault-policy) to give your HCP Terraform workspace access to all required Vault paths.
~> **Important:** Data sources that use secrets engines to generate dynamic secrets must not be used with Vault dynamic credentials. You can use Vault's dynamic secrets engines for AWS, GCP, and Azure by adding additional configurations. For more details, see [Vault-backed dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/vault-backed).
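As a minimal sketch of reading a static secret, a provider block needs no connection arguments because HCP Terraform injects them into the run environment. The mount (`secret`) and secret name (`my-app`) below are placeholders, and your [Vault policy](#create-a-vault-policy) must grant read access to that path.

```hcl
provider "vault" {
  // No address, token, or namespace arguments: HCP Terraform sets these
  // in the run environment. skip_child_token must be true because HCP
  // Terraform manages the token lifecycle.
  skip_child_token = true
}

// Read a static secret from the KV version 2 engine mounted at "secret".
data "vault_kv_secret_v2" "app" {
  mount = "secret"
  name  = "my-app"
}

// Individual values are then available elsewhere in the configuration as:
//   data.vault_kv_secret_v2.app.data["some-key"]
```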
### Specifying Multiple Configurations
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.12.0](/terraform/cloud-docs/agents/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
~> **Important:** Ensure you are using version **3.18.0** or later of the **Vault provider** as the required [`auth_login_token_file`](https://registry.terraform.io/providers/hashicorp/vault/latest/docs#token-file) block was introduced in this provider version.
You can add additional variables to handle multiple distinct Vault setups, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.
For more details, see [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).
#### Required Terraform Variable
To use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.
```hcl
variable "tfc_vault_dynamic_credentials" {
description = "Object containing Vault dynamic credentials configuration"
type = object({
default = object({
token_filename = string
address = string
namespace = string
ca_cert_file = string
})
aliases = map(object({
token_filename = string
address = string
namespace = string
ca_cert_file = string
}))
})
}
```
#### Example Usage
```hcl
provider "vault" {
// skip_child_token must be explicitly set to true as HCP Terraform manages the token lifecycle
skip_child_token = true
address = var.tfc_vault_dynamic_credentials.default.address
namespace = var.tfc_vault_dynamic_credentials.default.namespace
auth_login_token_file {
filename = var.tfc_vault_dynamic_credentials.default.token_filename
}
}
provider "vault" {
// skip_child_token must be explicitly set to true as HCP Terraform manages the token lifecycle
skip_child_token = true
alias = "ALIAS1"
address = var.tfc_vault_dynamic_credentials.aliases["ALIAS1"].address
namespace = var.tfc_vault_dynamic_credentials.aliases["ALIAS1"].namespace
auth_login_token_file {
filename = var.tfc_vault_dynamic_credentials.aliases["ALIAS1"].token_filename
}
}
```
---
page_title: Dynamic Credentials with the HCP Provider - Workspaces - HCP Terraform
description: >-
Use OpenID Connect to get short-term credentials for the HCP provider in
your HCP Terraform runs.
---
# Dynamic Credentials with the HCP Provider
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.15.1](/terraform/cloud-docs/agents/changelog#1-15-1-05-01-2024) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can use HCP Terraform’s native OpenID Connect integration with HCP to authenticate with the HCP provider using [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) in your HCP Terraform runs. Configuring dynamic credentials for the HCP provider requires the following steps:
1. **[Configure HCP](#configure-hcp):** Set up a trust configuration between HCP and HCP Terraform. Then, you must create a [service principal in HCP](https://developer.hashicorp.com/hcp/docs/hcp/admin/iam/service-principals) for your HCP Terraform workspaces.
2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.
Once you complete the setup, HCP Terraform automatically authenticates to HCP during each run.
## Configure HCP
You must enable and configure a workload identity pool and provider on HCP. These instructions use the HCP CLI, but you can also use Terraform to configure HCP. Refer to our [example Terraform configuration](https://github.com/hashicorp/terraform-dynamic-credentials-setup-examples/tree/main/hcp).
#### Create a Service Principal
Create a service principal for HCP Terraform to assume during runs by running the following HCP command. Note the ID of the service principal you create because you will need it in the following steps. For all remaining steps, replace `HCP_PROJECT_ID` with the ID of the project that contains all the resources and workspaces that you want to manage with this service principal. If you want to manage more than one project with dynamic credentials, we recommend creating a separate service principal for each project.
```shell
hcp iam service-principals create hcp-terraform --project=HCP_PROJECT_ID
```
Grant your service principal the necessary permissions to manage your infrastructure during runs.
```shell
hcp projects add-binding \
--project=HCP_PROJECT_ID \
--member=HCP_PRINCIPAL_ID \
--role=roles/contributor
```
#### Add a Workload Identity Provider
Next, create a workload identity provider that HCP uses to authenticate the HCP Terraform run. Make sure to replace `HCP_PROJECT_ID`, `ORG_NAME`, `PROJECT_NAME`, and `WORKSPACE_NAME` with their respective values before running the command.
```shell
hcp iam workload-identity-providers create-oidc hcp-terraform-dynamic-credentials \
--service-principal=iam/project/HCP_PROJECT_ID/service-principal/hcp-terraform \
--issuer=https://app.terraform.io \
  --allowed-audience=hcp.workload.identity \
--conditional-access='jwt_claims.sub matches `^organization:ORG_NAME:project:PROJECT_NAME:workspace:WORKSPACE_NAME:run_phase:.*`' \
--description="Allow HCP Terraform agents to act as the hcp-terraform service principal"
```
## Configure HCP Terraform
Next, you need to set environment variables in your HCP Terraform workspace to configure HCP Terraform to authenticate with HCP using dynamic credentials. You can set these as workspace variables or use a variable set to share one HCP service principal across multiple workspaces. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.
### Required Environment Variables
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_HCP_PROVIDER_AUTH`<br />`TFC_HCP_PROVIDER_AUTH[_TAG]`<br />_(Default variable not supported)_ | `true` | Requires **v1.15.1** or later if you use self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to use dynamic credentials to authenticate to HCP. |
| `TFC_HCP_RUN_PROVIDER_RESOURCE_NAME`<br />`TFC_HCP_RUN_PROVIDER_RESOURCE_NAME[_TAG]`<br />`TFC_DEFAULT_HCP_RUN_PROVIDER_RESOURCE_NAME` | The resource name of the workload identity provider that will be used to assume the service principal | Requires **v1.15.1** or later if you use self-managing agents. Optional if you provide `PLAN_PROVIDER_RESOURCE_NAME` and `APPLY_PROVIDER_RESOURCE_NAME`. [Learn more](#optional-environment-variables). |
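As with the Vault integration, you can set these variables with the `hashicorp/tfe` provider. In this sketch, the workspace reference is a placeholder, and the workload identity provider resource name is an assumption based on the service principal and provider names used in the commands above.

```hcl
resource "tfe_variable" "hcp_provider_auth" {
  key          = "TFC_HCP_PROVIDER_AUTH"
  value        = "true"
  category     = "env"
  workspace_id = tfe_workspace.example.id
}

resource "tfe_variable" "hcp_run_provider_resource_name" {
  key          = "TFC_HCP_RUN_PROVIDER_RESOURCE_NAME"
  # Assumed resource name format; substitute your project ID and the
  # names you chose for the service principal and identity provider.
  value        = "iam/project/HCP_PROJECT_ID/service-principal/hcp-terraform/workload-identity-provider/hcp-terraform-dynamic-credentials"
  category     = "env"
  workspace_id = tfe_workspace.example.id
}
```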
### Optional Environment Variables
You may need to set these variables, depending on your use case.
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_HCP_WORKLOAD_IDENTITY_AUDIENCE`<br />`TFC_HCP_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br />`TFC_DEFAULT_HCP_WORKLOAD_IDENTITY_AUDIENCE` | HCP Terraform uses this as the `aud` claim for the identity token. Defaults to the provider resource name for the current run phase, which HCP Terraform derives from the values you provide for `RUN_PROVIDER_RESOURCE_NAME`, `PLAN_PROVIDER_RESOURCE_NAME`, and `APPLY_PROVIDER_RESOURCE_NAME`. | Requires **v1.15.1** or later if you use self-managing agents. This is one of the default `aud` formats that HCP accepts. |
| `TFC_HCP_PLAN_PROVIDER_RESOURCE_NAME`<br />`TFC_HCP_PLAN_PROVIDER_RESOURCE_NAME[_TAG]`<br />`TFC_DEFAULT_HCP_PLAN_PROVIDER_RESOURCE_NAME` | The resource name of the workload identity provider that HCP Terraform will use to authenticate the agent during the plan phase of a run. | Requires **v1.15.1** or later if you use self-managing agents. Will fall back to the value of `RUN_PROVIDER_RESOURCE_NAME` if not provided. |
| `TFC_HCP_APPLY_PROVIDER_RESOURCE_NAME`<br />`TFC_HCP_APPLY_PROVIDER_RESOURCE_NAME[_TAG]`<br />`TFC_DEFAULT_HCP_APPLY_PROVIDER_RESOURCE_NAME` | The resource name of the workload identity provider that HCP Terraform will use to authenticate the agent during the apply phase of a run. | Requires **v1.15.1** or later if you use self-managing agents. Will fall back to the value of `RUN_PROVIDER_RESOURCE_NAME` if not provided. |
## Configure the HCP Provider
Do not set the `HCP_CRED_FILE` environment variable when configuring the HCP provider, or `HCP_CRED_FILE` will conflict with the dynamic credentials authentication process.
### Specifying Multiple Configurations
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.15.1](/terraform/cloud-docs/agents/changelog#1-15-1-05-01-2024) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can add additional variables to handle multiple distinct HCP setups, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.
For more details, refer to [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).
#### Required Terraform Variable
Add the following variable to your Terraform configuration to set up additional dynamic credential configurations with the HCP provider. This variable lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.
```hcl
variable "tfc_hcp_dynamic_credentials" {
description = "Object containing HCP dynamic credentials configuration"
type = object({
default = object({
credential_file = string
})
aliases = map(object({
credential_file = string
}))
})
}
```
#### Example Usage
```hcl
provider "hcp" {
credential_file = var.tfc_hcp_dynamic_credentials.default.credential_file
}
provider "hcp" {
alias = "ALIAS1"
credential_file = var.tfc_hcp_dynamic_credentials.aliases["ALIAS1"].credential_file
}
```
terraform Dynamic Credentials with the AWS Provider your HCP Terraform runs Use OpenID Connect to get short term credentials for the AWS Terraform provider in Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 7 0 terraform cloud docs agents changelog 1 7 0 03 02 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog page title Dynamic Credentials with the AWS Provider Workspaces HCP Terraform | ---
page_title: Dynamic Credentials with the AWS Provider - Workspaces - HCP Terraform
description: >-
Use OpenID Connect to get short-term credentials for the AWS Terraform provider in
your HCP Terraform runs.
---
# Dynamic Credentials with the AWS Provider
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.7.0](/terraform/cloud-docs/agents/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can use HCP Terraform’s native OpenID Connect integration with AWS to get [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) for the AWS provider in your HCP Terraform runs. Configuring the integration requires the following steps:
1. **[Configure AWS](#configure-aws):** Set up a trust configuration between AWS and HCP Terraform. Then, you must create AWS roles and policies for your HCP Terraform workspaces.
2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.
Once you complete the setup, HCP Terraform automatically authenticates to AWS during each run. The AWS provider authentication is valid for the length of the plan or apply.
## Configure AWS
You must enable and configure an OIDC identity provider and accompanying role and trust policy on AWS. These instructions use the AWS console, but you can also use Terraform to configure AWS. Refer to our [example Terraform configuration](https://github.com/hashicorp/terraform-dynamic-credentials-setup-examples/tree/main/aws).
### Create an OIDC Identity Provider
AWS documentation for setting this up through the AWS console or API can be found here: [Creating OpenID Connect (OIDC) identity providers - AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html).
The `provider URL` should be set to the address of HCP Terraform (e.g., https://app.terraform.io **without** a trailing slash), and the `audience` should be set to `aws.workload.identity` or the value of `TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE`, if configured.
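If you manage AWS with Terraform rather than the console, the identity provider can be sketched roughly as follows (resource names are illustrative, and the `tls_certificate` data source is one common way to obtain the thumbprint):

```hcl
# Look up HCP Terraform's TLS certificate to derive the OIDC thumbprint.
data "tls_certificate" "tfc" {
  url = "https://app.terraform.io"
}

# Illustrative OIDC identity provider for HCP Terraform.
resource "aws_iam_openid_connect_provider" "tfc" {
  url             = "https://app.terraform.io"  # no trailing slash
  client_id_list  = ["aws.workload.identity"]   # default audience
  thumbprint_list = [data.tls_certificate.tfc.certificates[0].sha1_fingerprint]
}
```

Refer to the linked example repository for a complete, maintained configuration.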
### Configure a Role and Trust Policy
You must configure a role and corresponding trust policy. Amazon documentation on setting this up can be found here: [Creating a role for web identity or OpenID Connect Federation (console) - AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html).
The trust policy will be of the form:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "OIDC_PROVIDER_ARN"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"SITE_ADDRESS:aud": "AUDIENCE_VALUE",
"SITE_ADDRESS:sub": "organization:ORG_NAME:project:PROJECT_NAME:workspace:WORKSPACE_NAME:run_phase:RUN_PHASE"
}
}
}
]
}
```
with the capitalized values replaced with the following:
* **OIDC_PROVIDER_ARN**: The ARN from the OIDC provider resource created in the previous step
* **SITE_ADDRESS**: The address of HCP Terraform with `https://` stripped, (e.g., `app.terraform.io`)
* **AUDIENCE_VALUE**: This should be set to `aws.workload.identity` unless you have configured a non-default audience in HCP Terraform
* **ORG_NAME**: The organization name this policy will apply to, such as `my-org-name`
* **PROJECT_NAME**: The project name that this policy will apply to, such as `my-project-name`
* **WORKSPACE_NAME**: The workspace name this policy will apply to, such as `my-workspace-name`
* **RUN_PHASE**: The run phase this policy will apply to, currently one of `plan` or `apply`.
-> **Note:** If you want different permissions for the plan and apply phases, create a separate role and trust policy for each run phase so that each phase matches the correct access level.
If you use the same permissions regardless of run phase, you can modify the condition to use `StringLike` instead of `StringEquals` for the `sub` claim and add a `*` after `run_phase:` to perform a wildcard match:
```json
{
"Condition": {
"StringEquals": {
"SITE_ADDRESS:aud": "AUDIENCE_VALUE"
},
"StringLike": {
"SITE_ADDRESS:sub": "organization:ORG_NAME:project:PROJECT_NAME:workspace:WORKSPACE_NAME:run_phase:*"
}
}
}
```
!> **Warning**: Always validate, at minimum, the audience and the organization name to prevent unauthorized access from other HCP Terraform organizations!
You must also attach a permissions policy to the role that defines which operations within AWS the role is allowed to perform. As an example, the policy below allows fetching a list of S3 buckets:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "*"
}
]
}
```
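If you prefer Terraform over the console for this step as well, the role and its policies can be outlined as below; this is a hedged sketch (file paths and names are illustrative), assuming you saved the trust and permissions policies shown above as local JSON files:

```hcl
# Role whose trust policy controls who may assume it via OIDC.
resource "aws_iam_role" "tfc_role" {
  name               = "tfc-workload-identity-role"  # illustrative name
  assume_role_policy = file("${path.module}/trust-policy.json")
}

# Inline permissions policy defining what the role may do in AWS.
resource "aws_iam_role_policy" "tfc_policy" {
  name   = "tfc-s3-list"  # illustrative name
  role   = aws_iam_role.tfc_role.id
  policy = file("${path.module}/permissions-policy.json")
}
```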
## Configure HCP Terraform
You’ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with AWS using dynamic credentials. You can set these as workspace variables, or if you’d like to share one AWS role across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.
### Required Environment Variables
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------|---------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_AWS_PROVIDER_AUTH`<br />`TFC_AWS_PROVIDER_AUTH[_TAG]`<br />_(Default variable not supported)_ | `true` | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to AWS. |
| `TFC_AWS_RUN_ROLE_ARN`<br />`TFC_AWS_RUN_ROLE_ARN[_TAG]`<br />`TFC_DEFAULT_AWS_RUN_ROLE_ARN` | The ARN of the role to assume in AWS. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_AWS_PLAN_ROLE_ARN` and `TFC_AWS_APPLY_ROLE_ARN` are both provided. These variables are described [below](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/aws-configuration#optional-environment-variables) |
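You can set these in the workspace UI, or, if you manage HCP Terraform itself with Terraform, via the `tfe` provider. A hedged sketch, assuming an existing `tfe_workspace.example` resource and an illustrative role ARN:

```hcl
resource "tfe_variable" "aws_provider_auth" {
  workspace_id = tfe_workspace.example.id  # assumes an existing workspace resource
  key          = "TFC_AWS_PROVIDER_AUTH"
  value        = "true"
  category     = "env"
}

resource "tfe_variable" "aws_run_role_arn" {
  workspace_id = tfe_workspace.example.id
  key          = "TFC_AWS_RUN_ROLE_ARN"
  value        = "arn:aws:iam::123456789012:role/tfc-workload-identity-role"  # illustrative ARN
  category     = "env"
}
```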
### Optional Environment Variables
You may need to set these variables, depending on your use case.
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|
| `TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE`<br />`TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br />`TFC_DEFAULT_AWS_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to `aws.workload.identity`. | Requires **v1.7.0** or later if self-managing agents. |
| `TFC_AWS_PLAN_ROLE_ARN`<br />`TFC_AWS_PLAN_ROLE_ARN[_TAG]`<br />`TFC_DEFAULT_AWS_PLAN_ROLE_ARN` | The ARN of the role to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AWS_RUN_ROLE_ARN` if not provided. |
| `TFC_AWS_APPLY_ROLE_ARN`<br />`TFC_AWS_APPLY_ROLE_ARN[_TAG]`<br />`TFC_DEFAULT_AWS_APPLY_ROLE_ARN` | The ARN of the role to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AWS_RUN_ROLE_ARN` if not provided. |
## Configure the AWS Provider
Make sure that you’re passing a value for the `region` argument into the provider configuration block or setting the `AWS_REGION` variable in your workspace.
Make sure that you’re not using any of the other arguments or methods mentioned in the [authentication and configuration](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) section of the provider documentation as these settings may interfere with dynamic provider credentials.
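With dynamic credentials active, a minimal provider block only needs the region; the value below is an example:

```hcl
provider "aws" {
  region = "us-east-1" # example region; alternatively set the AWS_REGION variable
}
```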
### Specifying Multiple Configurations
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.12.0](/terraform/cloud-docs/agents/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can add additional variables to handle multiple distinct AWS setups, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.
For more details, see [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).
#### Required Terraform Variable
To use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.
```hcl
variable "tfc_aws_dynamic_credentials" {
description = "Object containing AWS dynamic credentials configuration"
type = object({
default = object({
shared_config_file = string
})
aliases = map(object({
shared_config_file = string
}))
})
}
```
#### Example Usage
```hcl
provider "aws" {
shared_config_files = [var.tfc_aws_dynamic_credentials.default.shared_config_file]
}
provider "aws" {
alias = "ALIAS1"
shared_config_files = [var.tfc_aws_dynamic_credentials.aliases["ALIAS1"].shared_config_file]
}
```
---
page_title: Dynamic Credentials with the Azure Providers - HCP Terraform
description: >-
Use OpenID Connect to get short-term credentials for the Azure Terraform providers in
your HCP Terraform runs.
---
# Dynamic Credentials with the Azure Provider
~> **Important:** Ensure you are using version **3.25.0** or later of the **AzureRM provider** and version **2.29.0** or later of the **Microsoft Entra ID provider** (previously Azure Active Directory) as required OIDC functionality was introduced in these provider versions.
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.7.0](/terraform/cloud-docs/agents/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can use HCP Terraform’s native OpenID Connect integration with Azure to get [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) for the AzureRM or Microsoft Entra ID providers in your HCP Terraform runs. Configuring the integration requires the following steps:
1. **[Configure Azure](#configure-azure):** Set up a trust configuration between Azure and HCP Terraform. Then, you must create Azure roles and policies for your HCP Terraform workspaces.
2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.
Once you complete the setup, HCP Terraform automatically authenticates to Azure during each run. The Azure provider authentication is valid for the length of the plan or apply.
!> **Warning:** Dynamic credentials with the Azure providers do not work when your Terraform Enterprise instance uses a custom or self-signed certificate. This limitation is due to restrictions in Azure.
## Configure Azure
You must enable and configure an application and service principal with accompanying federated credentials and permissions on Azure. These instructions use the Azure portal, but you can also use Terraform to configure Azure. Refer to our [example Terraform configuration](https://github.com/hashicorp/terraform-dynamic-credentials-setup-examples/tree/main/azure).
### Create an Application and Service Principal
Follow the steps mentioned in the AzureRM provider docs here: [Creating the Application and Service Principal](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_oidc#creating-the-application-and-service-principal).
As that documentation mentions, make a note of the application's `client_id`, because you will use it later for authentication.
-> **Note:** Skip the "Configure Microsoft Entra ID Application to Trust a GitHub Repository" section, as it does not apply here.
### Grant the Application Access to Manage Resources in Your Azure Subscription
You must now give the created Application permission to modify resources within your Subscription.
Follow the steps mentioned in the AzureRM provider docs here: [Granting the Application access to manage resources in your Azure Subscription](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_oidc#granting-the-application-access-to-manage-resources-in-your-azure-subscription).
### Configure Microsoft Entra ID Application to Trust a Generic Issuer
Finally, you must create federated identity credentials which validate the contents of the token sent to Azure from HCP Terraform.
Follow the steps mentioned in the AzureRM provider docs here: [Configure Azure Microsoft Entra ID Application to Trust a Generic Issuer](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_oidc#configure-azure-active-directory-application-to-trust-a-generic-issuer).
The following information should be specified:
* **Federated credential scenario**: Must be set to `Other issuer`.
* **Issuer**: The address of HCP Terraform (e.g., https://app.terraform.io).
* **Important**: make sure this value starts with **https://** and does _not_ have a trailing slash.
* **Subject identifier**: The subject identifier from HCP Terraform that this credential will match. This will be in the form `organization:my-org-name:project:my-project-name:workspace:my-workspace-name:run_phase:plan` where the `run_phase` can be one of `plan` or `apply`.
* **Name**: A name for the federated credential, such as `tfc-plan-credential`. Note that this cannot be changed later.
The following is optional, but may be desired:
* **Audience**: Enter the audience value that will be set when requesting the identity token. This will be `api://AzureADTokenExchange` by default. This should be set to the value of `TFC_AZURE_WORKLOAD_IDENTITY_AUDIENCE` if this has been configured.
-> **Note:** because the `Subject identifier` for federated credentials is a direct string match, two federated identity credentials need to be created for each workspace using dynamic credentials: one that matches `run_phase:plan` and one that matches `run_phase:apply`.
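If you manage Azure with Terraform, both credentials can be created in one pass with `for_each`; this is a sketch (the application reference, names, and org/project/workspace values are illustrative, and argument names can vary slightly between Entra ID provider versions):

```hcl
# One federated credential per run phase, since the subject is a direct string match.
resource "azuread_application_federated_identity_credential" "tfc" {
  for_each = toset(["plan", "apply"])

  application_id = azuread_application.tfc.id      # assumes an existing application resource
  display_name   = "tfc-${each.value}-credential"  # illustrative name; cannot be changed later
  audiences      = ["api://AzureADTokenExchange"]  # default audience
  issuer         = "https://app.terraform.io"      # no trailing slash
  subject        = "organization:my-org-name:project:my-project-name:workspace:my-workspace-name:run_phase:${each.value}"
}
```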
## Configure HCP Terraform
You’ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with Azure using dynamic credentials. You can set these as workspace variables. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.
### Required Environment Variables
| Variable | Value | Notes |
|--------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_AZURE_PROVIDER_AUTH`<br />`TFC_AZURE_PROVIDER_AUTH[_TAG]`<br />_(Default variable not supported)_ | `true` | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to Azure. |
| `TFC_AZURE_RUN_CLIENT_ID`<br />`TFC_AZURE_RUN_CLIENT_ID[_TAG]`<br />`TFC_DEFAULT_AZURE_RUN_CLIENT_ID` | The client ID for the Service Principal / Application used when authenticating to Azure. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_AZURE_PLAN_CLIENT_ID` and `TFC_AZURE_APPLY_CLIENT_ID` are both provided. These variables are described [below](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/azure-configuration#optional-environment-variables) |
### Optional Environment Variables
You may need to set these variables, depending on your use case.
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|
| `TFC_AZURE_WORKLOAD_IDENTITY_AUDIENCE`<br />`TFC_AZURE_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br />`TFC_DEFAULT_AZURE_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to `api://AzureADTokenExchange`. | Requires **v1.7.0** or later if self-managing agents. |
| `TFC_AZURE_PLAN_CLIENT_ID`<br />`TFC_AZURE_PLAN_CLIENT_ID[_TAG]`<br />`TFC_DEFAULT_AZURE_PLAN_CLIENT_ID` | The client ID for the Service Principal / Application to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AZURE_RUN_CLIENT_ID` if not provided. |
| `TFC_AZURE_APPLY_CLIENT_ID`<br />`TFC_AZURE_APPLY_CLIENT_ID[_TAG]`<br />`TFC_DEFAULT_AZURE_APPLY_CLIENT_ID` | The client ID for the Service Principal / Application to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AZURE_RUN_CLIENT_ID` if not provided. |
## Configure the AzureRM or Microsoft Entra ID Provider
Make sure that you’re passing values for the `subscription_id` and `tenant_id` arguments into the provider configuration block or setting the `ARM_SUBSCRIPTION_ID` and `ARM_TENANT_ID` variables in your workspace.
Make sure that you’re _not_ setting values for `client_id`, `use_oidc`, or `oidc_token` in the provider or setting any of `ARM_CLIENT_ID`, `ARM_USE_OIDC`, `ARM_OIDC_TOKEN`.
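For a single provider configuration, the provider block can stay minimal. A sketch with placeholder subscription and tenant IDs:
```hcl
provider "azurerm" {
  features {}
  // Placeholder IDs; HCP Terraform injects the OIDC token and client ID at run time.
  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "10000000-0000-0000-0000-000000000000"
}
```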
### Specifying Multiple Configurations
~> **Important:** Ensure you are using version **3.60.0** or later of the **AzureRM provider** and version **2.43.0** or later of the **Microsoft Entra ID provider** as required functionality was introduced in these provider versions.
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.12.0](/terraform/cloud-docs/agents/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can add additional variables to handle multiple distinct Azure setups, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.
For more details, see [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).
#### Required Terraform Variable
To use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.
```hcl
variable "tfc_azure_dynamic_credentials" {
description = "Object containing Azure dynamic credentials configuration"
type = object({
default = object({
client_id_file_path = string
oidc_token_file_path = string
})
aliases = map(object({
client_id_file_path = string
oidc_token_file_path = string
}))
})
}
```
#### Example Usage
##### AzureRM Provider
```hcl
provider "azurerm" {
features {}
// use_cli should be set to false to yield more accurate error messages on auth failure.
use_cli = false
// use_oidc must be explicitly set to true when using multiple configurations.
use_oidc = true
client_id_file_path = var.tfc_azure_dynamic_credentials.default.client_id_file_path
oidc_token_file_path = var.tfc_azure_dynamic_credentials.default.oidc_token_file_path
subscription_id = "00000000-0000-0000-0000-000000000000"
tenant_id = "10000000-0000-0000-0000-000000000000"
}
provider "azurerm" {
features {}
// use_cli should be set to false to yield more accurate error messages on auth failure.
use_cli = false
// use_oidc must be explicitly set to true when using multiple configurations.
use_oidc = true
alias = "ALIAS1"
client_id_file_path = var.tfc_azure_dynamic_credentials.aliases["ALIAS1"].client_id_file_path
oidc_token_file_path = var.tfc_azure_dynamic_credentials.aliases["ALIAS1"].oidc_token_file_path
subscription_id = "00000000-0000-0000-0000-000000000000"
tenant_id = "20000000-0000-0000-0000-000000000000"
}
```
##### Microsoft Entra ID Provider (formerly Azure AD)
```hcl
provider "azuread" {
  // use_cli should be set to false to yield more accurate error messages on auth failure.
  use_cli = false
  // use_oidc must be explicitly set to true when using multiple configurations.
  use_oidc = true
  client_id_file_path  = var.tfc_azure_dynamic_credentials.default.client_id_file_path
  oidc_token_file_path = var.tfc_azure_dynamic_credentials.default.oidc_token_file_path
  tenant_id            = "10000000-0000-0000-0000-000000000000"
}
provider "azuread" {
  // use_cli should be set to false to yield more accurate error messages on auth failure.
  use_cli = false
  // use_oidc must be explicitly set to true when using multiple configurations.
  use_oidc = true
  alias                = "ALIAS1"
  client_id_file_path  = var.tfc_azure_dynamic_credentials.aliases["ALIAS1"].client_id_file_path
  oidc_token_file_path = var.tfc_azure_dynamic_credentials.aliases["ALIAS1"].oidc_token_file_path
  tenant_id            = "20000000-0000-0000-0000-000000000000"
}
``` | terraform | page title Dynamic Credentials with the Azure Providers HCP Terraform description Use OpenID Connect to get short term credentials for the Azure Terraform providers in your HCP Terraform runs Dynamic Credentials with the Azure Provider Important Ensure you are using version 3 25 0 or later of the AzureRM provider and version 2 29 0 or later of the Microsoft Entra ID provider previously Azure Active Directory as required OIDC functionality was introduced in these provider versions Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 7 0 terraform cloud docs agents changelog 1 7 0 03 02 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog You can use HCP Terraform s native OpenID Connect integration with Azure to get dynamic credentials terraform cloud docs workspaces dynamic provider credentials for the AzureRM or Microsoft Entra ID providers in your HCP Terraform runs Configuring the integration requires the following steps 1 Configure Azure configure azure Set up a trust configuration between Azure and HCP Terraform Then you must create Azure roles and policies for your HCP Terraform workspaces 2 Configure HCP Terraform configure hcp terraform Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials Once you complete the setup HCP Terraform automatically authenticates to Azure during each run The Azure provider authentication is valid for the length of the plan or apply Warning Dynamic credentials with the Azure providers do not work when your Terraform Enterprise instance uses a custom or self signed certificate This limitation is due to restrictions in Azure Configure Azure You must enable and configure an application and service principal with accompanying federated credentials and permissions on Azure These instructions use the Azure portal but you can also 
---
page_title: Dynamic Credentials with the GCP Provider - Workspaces - HCP Terraform
description: >-
Use OpenID Connect to get short-term credentials for the GCP Terraform provider in
your HCP Terraform runs.
---
# Dynamic Credentials with the GCP Provider
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.7.0](/terraform/cloud-docs/agents/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can use HCP Terraform’s native OpenID Connect integration with GCP to get [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) for the GCP provider in your HCP Terraform runs. Configuring the integration requires the following steps:
1. **[Configure GCP](#configure-gcp):** Set up a trust configuration between GCP and HCP Terraform. Then, you must create GCP roles and policies for your HCP Terraform workspaces.
2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.
Once you complete the setup, HCP Terraform automatically authenticates to GCP during each run. The GCP provider authentication is valid for the length of the plan or apply.
!> **Warning:** Dynamic credentials with the GCP provider do not work if your Terraform Enterprise instance uses a custom or self-signed certificate. This limitation is due to restrictions in GCP.
## Configure GCP
You must enable and configure a workload identity pool and provider on GCP. These instructions use the GCP console, but you can also use Terraform to configure GCP. Refer to our [example Terraform configuration](https://github.com/hashicorp/terraform-dynamic-credentials-setup-examples/tree/main/gcp).
### Add a Workload Identity Pool and Provider
Google documentation for setting this up can be found here: [Configuring workload identity federation with other identity providers](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers).
Before creating the resources, you must enable the APIs listed at the start of Google's [Configure workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#configure) guide.
#### Add a Workload Identity Pool
The following information should be specified:
* **Name**: Name for the pool, such as `my-tfc-pool`. The name is also used as the pool ID. You can't change the pool ID later.
The following is optional, but may be desired:
* **Pool ID**: The ID for the pool. This defaults to the name as mentioned above, but can be set to another value.
* **Description**: Text that describes the purpose of the pool.
You will also want to ensure that the `Enabled Pool` option is enabled before clicking **Next**.
#### Add a Workload Identity Provider
You must add a workload identity provider to the pool. The following information should be specified:
* **Provider type**: Must be `OpenID Connect (OIDC)`.
* **Provider name**: Name for the identity provider, such as `my-tfc-provider`. The name is also used as the provider ID. You can’t change the provider ID later.
* **Issuer (URL)**: The address of the TFC/E instance, such as https://app.terraform.io
* **Important**: make sure this value starts with **https://** and does _not_ have a trailing slash.
* **Audiences**: This can be left as `Default audience` if you are planning on using the default audience HCP Terraform provides.
* **Important**: you must select the `Allowed audiences` toggle and set this to the value of `TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE`, if configured.
* **Provider attributes mapping**: At the minimum this must include `assertion.sub` for the `google.subject` entry. Other mappings can be added for other claims in the identity token to attributes by adding `attribute.[claim name]` on the Google side and `assertion.[claim name]` on the OIDC side of a new mapping.
* **Attribute Conditions**: Conditions to restrict which identity tokens can authenticate using the workload identity pool, such as `assertion.sub.startsWith("organization:my-org:project:my-project:workspace:my-workspace")` to restrict access to identity tokens from a specific workspace. See this page in Google documentation for more information on the expression language: [Attribute conditions](https://cloud.google.com/iam/docs/workload-identity-federation#conditions).
!> **Warning**: you should always check, at minimum, the audience and the name of the organization in order to prevent unauthorized access from other HCP Terraform organizations!
The following is optional, but may be desired:
* **Provider ID**: The ID for the provider. This defaults to the name as mentioned above, but can be set to another value.
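The console steps above can also be expressed in Terraform. The following is a sketch using the `google` provider's workload identity pool resources; the pool name, provider name, and organization in the attribute condition are placeholders:
```hcl
resource "google_iam_workload_identity_pool" "tfc" {
  workload_identity_pool_id = "my-tfc-pool"
}
resource "google_iam_workload_identity_pool_provider" "tfc" {
  workload_identity_pool_id          = google_iam_workload_identity_pool.tfc.workload_identity_pool_id
  workload_identity_pool_provider_id = "my-tfc-provider"
  attribute_mapping = {
    "google.subject" = "assertion.sub"
  }
  // Restrict authentication to identity tokens from a single HCP Terraform organization.
  attribute_condition = "assertion.sub.startsWith(\"organization:my-org:\")"
  oidc {
    issuer_uri = "https://app.terraform.io"
  }
}
```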
### Add a Service Account and Permissions
You must next add a service account and properly configure the permissions.
#### Create a Service Account
Google documentation for setting this up can be found here: [Creating a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#create_a_service_account_for_the_external_workload).
The following information should be specified:
* **Service account name**: Name for the service account, such as `tfc-service-account`. The name is also used as the service account ID. You can't change the service account ID later.
The following is optional, but may be desired:
* **Service account ID**: The ID for the service account. This defaults to the name as mentioned above, but can be set to another value.
* **Description**: Text that describes the purpose of the service account.
#### Grant IAM Permissions
The next step in the setup wizard lets you grant IAM permissions to the service account. The role you grant will vary depending on your specific needs and project setup, but it should generally be the most minimal set of permissions the service account needs to function properly.
#### Grant External Permissions
Once the service account has been created and granted IAM permissions, you will need to grant access to the service account for the identity pool created above. Google documentation for setting this up can be found here: [Allow the external workload to impersonate the service account](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#allow_the_external_workload_to_impersonate_the_service_account).
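In Terraform, this grant can be sketched as follows, assuming a workload identity pool resource named `google_iam_workload_identity_pool.tfc` and a placeholder service account ID:
```hcl
resource "google_service_account" "tfc" {
  account_id = "tfc-service-account"
}
// Allow identities from the workload identity pool to impersonate the service account.
resource "google_service_account_iam_member" "tfc" {
  service_account_id = google_service_account.tfc.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "principalSet://iam.googleapis.com/${google_iam_workload_identity_pool.tfc.name}/*"
}
```
In practice you may want to scope the `member` more tightly (for example, to a mapped attribute) rather than granting the whole pool with `/*`.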
## Configure HCP Terraform
You’ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with GCP using dynamic credentials. You can set these as workspace variables, or if you’d like to share one GCP service account across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.
### Required Environment Variables
| Variable | Value | Notes |
|-------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_GCP_PROVIDER_AUTH`<br />`TFC_GCP_PROVIDER_AUTH[_TAG]`<br />_(Default variable not supported)_ | `true` | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to use dynamic credentials to authenticate to GCP. |
| `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL`<br />`TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL[_TAG]`<br />`TFC_DEFAULT_GCP_RUN_SERVICE_ACCOUNT_EMAIL` | The service account email HCP Terraform will use when authenticating to GCP. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL` and `TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL` are both provided. These variables are described [below](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/gcp-configuration#optional-environment-variables). |
You must also include information about the GCP Workload Identity Provider that HCP Terraform will use when authenticating to GCP. You can supply this information in two different ways:
1. By providing one unified variable containing the canonical name of the workload identity provider.
2. By providing the project number, pool ID, and provider ID as separate variables.
You should avoid setting both types of variables, but if you do, the unified version will take precedence.
#### Unified Variable
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_GCP_WORKLOAD_PROVIDER_NAME`<br />`TFC_GCP_WORKLOAD_PROVIDER_NAME[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_PROVIDER_NAME` | The canonical name of the workload identity provider. This must be in the form mentioned for the `name` attribute [here](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_workload_identity_pool_provider#attributes-reference) | Requires **v1.7.0** or later if self-managing agents. This will take precedence over `TFC_GCP_PROJECT_NUMBER`, `TFC_GCP_WORKLOAD_POOL_ID`, and `TFC_GCP_WORKLOAD_PROVIDER_ID` if set. |
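As an illustration, the canonical name takes the following form; the project number, pool ID, and provider ID here are placeholders:
```
projects/123456789/locations/global/workloadIdentityPools/my-tfc-pool/providers/my-tfc-provider
```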
#### Separate Variables
| Variable | Value | Notes |
|----------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
| `TFC_GCP_PROJECT_NUMBER`<br />`TFC_GCP_PROJECT_NUMBER[_TAG]`<br />`TFC_DEFAULT_GCP_PROJECT_NUMBER` | The project number where the pool and other resources live. | Requires **v1.7.0** or later if self-managing agents. This is _not_ the project ID and is a separate number. |
| `TFC_GCP_WORKLOAD_POOL_ID`<br />`TFC_GCP_WORKLOAD_POOL_ID[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_POOL_ID` | The workload pool ID. | Requires **v1.7.0** or later if self-managing agents. |
| `TFC_GCP_WORKLOAD_PROVIDER_ID`<br />`TFC_GCP_WORKLOAD_PROVIDER_ID[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_PROVIDER_ID` | The workload identity provider ID. | Requires **v1.7.0** or later if self-managing agents. |
### Optional Environment Variables
You may need to set these variables, depending on your use case.
| Variable | Value | Notes |
|-------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
| `TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE`<br />`TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to a string of the form mentioned [here](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#oidc_1) in the GCP docs with the leading **https:** stripped. | Requires **v1.7.0** or later if self-managing agents. This is one of the default `aud` formats that GCP accepts. |
| `TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL`<br />`TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL[_TAG]`<br />`TFC_DEFAULT_GCP_PLAN_SERVICE_ACCOUNT_EMAIL` | The service account email to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. |
| `TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL`<br />`TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL[_TAG]`<br />`TFC_DEFAULT_GCP_APPLY_SERVICE_ACCOUNT_EMAIL` | The service account email to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. |
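Splitting plan and apply across two service accounts can be sketched with the `tfe` provider as follows, assuming a `tfe_workspace.example` resource and placeholder service account emails:
```hcl
resource "tfe_variable" "gcp_plan_service_account" {
  key          = "TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL"
  value        = "tfc-plan@my-project-id.iam.gserviceaccount.com"
  category     = "env"
  workspace_id = tfe_workspace.example.id
}
resource "tfe_variable" "gcp_apply_service_account" {
  key          = "TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL"
  value        = "tfc-apply@my-project-id.iam.gserviceaccount.com"
  category     = "env"
  workspace_id = tfe_workspace.example.id
}
```
This lets the plan-phase service account carry read-mostly permissions while the apply-phase account holds write permissions.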
## Configure the GCP Provider
Make sure that you’re passing values for the `project` and `region` arguments into the provider configuration block.
Make sure that you’re not setting values for the `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS` environment variables as these will conflict with the dynamic credentials authentication process.
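For a single provider configuration, a minimal block can look like the following sketch; the project ID and region are placeholders:
```hcl
provider "google" {
  // Placeholder values; HCP Terraform supplies the credentials at run time.
  project = "my-project-id"
  region  = "us-central1"
}
```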
### Specifying Multiple Configurations
~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.12.0](/terraform/cloud-docs/agents/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).
You can add additional variables to handle multiple distinct GCP setups, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.
For more details, see [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).
#### Required Terraform Variable
To use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.
```hcl
variable "tfc_gcp_dynamic_credentials" {
description = "Object containing GCP dynamic credentials configuration"
type = object({
default = object({
credentials = string
})
aliases = map(object({
credentials = string
}))
})
}
```
#### Example Usage
```hcl
provider "google" {
credentials = var.tfc_gcp_dynamic_credentials.default.credentials
}
provider "google" {
alias = "ALIAS1"
credentials = var.tfc_gcp_dynamic_credentials.aliases["ALIAS1"].credentials
}
```

---
page_title: Dynamic Credentials with the GCP Provider - Workspaces - HCP Terraform
description: Use OpenID Connect to get short-term credentials for the GCP Terraform provider in your HCP Terraform runs.
---

# Dynamic Credentials with the GCP Provider

~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.7.0](/terraform/cloud-docs/agents/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).

You can use HCP Terraform's native OpenID Connect integration with GCP to get [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) for the GCP provider in your HCP Terraform runs. Configuring the integration requires the following steps:

1. **[Configure GCP](#configure-gcp):** Set up a trust configuration between GCP and HCP Terraform. Then, you must create GCP roles and policies for your HCP Terraform workspaces.
2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.

Once you complete the setup, HCP Terraform automatically authenticates to GCP during each run. The GCP provider authentication is valid for the length of the plan or apply.

!> **Warning:** Dynamic credentials with the GCP provider do not work if your Terraform Enterprise instance uses a custom or self-signed certificate. This limitation is due to restrictions in GCP.

## Configure GCP

You must enable and configure a workload identity pool and provider on GCP. These instructions use the GCP console, but you can also use Terraform to configure GCP. Refer to our [example Terraform configuration](https://github.com/hashicorp/terraform-dynamic-credentials-setup-examples/tree/main/gcp).

### Add a Workload Identity Pool and Provider

Google documentation for setting this up can be found here: [Configuring workload identity federation with other identity providers](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers).

Before starting to create the resources, you must enable the APIs mentioned at the start of the [Configure workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#configure) guide.

#### Add a Workload Identity Pool

The following information should be specified:

- **Name**: Name for the pool, such as `my-tfc-pool`. The name is also used as the pool ID. You can't change the pool ID later.

The following is optional, but may be desired:

- **Pool ID**: The ID for the pool. This defaults to the name, as mentioned above, but can be set to another value.
- **Description**: Text that describes the purpose of the pool.

You will also want to ensure that the **Enabled Pool** option is set to be enabled before clicking next.

#### Add a Workload Identity Provider

You must add a workload identity provider to the pool. The following information should be specified:

- **Provider type**: Must be "OpenID Connect (OIDC)".
- **Provider name**: Name for the identity provider, such as `my-tfc-provider`. The name is also used as the provider ID. You can't change the provider ID later.
- **Issuer URL**: The address of the TFC/E instance, such as `https://app.terraform.io`.
  - **Important**: make sure this value starts with `https://` and does not have a trailing slash.
- **Audiences**: This can be left as "Default audience" if you are planning on using the default audience HCP Terraform provides.
  - **Important**: you must select the "Allowed audiences" toggle and set this to the value of `TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE`, if configured.
- **Provider attributes mapping**: At the minimum, this must include `assertion.sub` for the `google.subject` entry. Other mappings can be added for other claims in the identity token to attributes by adding `attribute.[claim name]` on the Google side and `assertion.[claim name]` on the OIDC side of a new mapping.
- **Attribute Conditions**: Conditions to restrict which identity tokens can authenticate using the workload identity pool, such as `assertion.sub.startsWith("organization:my-org:project:my-project:workspace:my-workspace")` to restrict access to identity tokens from a specific workspace. See this page in the Google documentation for more information on the expression language: [Attribute conditions](https://cloud.google.com/iam/docs/workload-identity-federation#conditions).

!> **Warning:** you should always check, at minimum, the audience and the name of the organization in order to prevent unauthorized access from other HCP Terraform organizations.

The following is optional, but may be desired:

- **Provider ID**: The ID for the provider. This defaults to the name, as mentioned above, but can be set to another value.

### Add a Service Account and Permissions

You must next add a service account and properly configure the permissions.

#### Create a Service Account

Google documentation for setting this up can be found here: [Creating a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#create_a_service_account_for_the_external_workload).

The following information should be specified:

- **Service account name**: Name for the service account, such as `tfc-service-account`. The name is also used as the service account ID. You can't change the service account ID later.

The following is optional, but may be desired:

- **Service account ID**: The ID for the service account. This defaults to the name, as mentioned above, but can be set to another value.
- **Description**: Text that describes the purpose of the service account.

#### Grant IAM Permissions

The next step in the setup wizard will allow for granting IAM permissions for the service account. The role that is given to the service account will vary depending on your specific needs and project setup. This should, in general, be the most minimal set of permissions needed for the service account to properly function.

#### Grant External Permissions

Once the service account has been created and granted IAM permissions, you will need to grant access to the service account for the identity pool created above. Google documentation for setting this up can be found here: [Allow the external workload to impersonate the service account](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#allow_the_external_workload_to_impersonate_the_service_account).

## Configure HCP Terraform

You'll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with GCP using dynamic credentials. You can set these as workspace variables, or, if you'd like to share one GCP service account across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.

### Required Environment Variables

| Variable | Value | Notes |
| -------- | ----- | ----- |
| `TFC_GCP_PROVIDER_AUTH`<br />`TFC_GCP_PROVIDER_AUTH[_TAG]`<br />_(Default variable not supported)_ | `true` | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to use dynamic credentials to authenticate to GCP. |
| `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL`<br />`TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL[_TAG]`<br />`TFC_DEFAULT_GCP_RUN_SERVICE_ACCOUNT_EMAIL` | The service account email HCP Terraform will use when authenticating to GCP. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL` and `TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL` are both provided. These variables are described [below](#optional-environment-variables). |

You must also include information about the GCP Workload Identity Provider that HCP Terraform will use when authenticating to GCP. You can supply this information in two different ways:

1. By providing one unified variable containing the canonical name of the workload identity provider.
2. By providing the project number, pool ID, and provider ID as separate variables.

You should avoid setting both types of variables, but if you do, the unified version will take precedence.

#### Unified Variable

| Variable | Value | Notes |
| -------- | ----- | ----- |
| `TFC_GCP_WORKLOAD_PROVIDER_NAME`<br />`TFC_GCP_WORKLOAD_PROVIDER_NAME[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_PROVIDER_NAME` | The canonical name of the workload identity provider. This must be in the form mentioned for the `name` attribute [here](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_workload_identity_pool_provider#attributes-reference). | Requires **v1.7.0** or later if self-managing agents. This will take precedence over `TFC_GCP_PROJECT_NUMBER`, `TFC_GCP_WORKLOAD_POOL_ID`, and `TFC_GCP_WORKLOAD_PROVIDER_ID` if set. |

#### Separate Variables

| Variable | Value | Notes |
| -------- | ----- | ----- |
| `TFC_GCP_PROJECT_NUMBER`<br />`TFC_GCP_PROJECT_NUMBER[_TAG]`<br />`TFC_DEFAULT_GCP_PROJECT_NUMBER` | The project number where the pool and other resources live. | Requires **v1.7.0** or later if self-managing agents. This is _not_ the project ID and is a separate number. |
| `TFC_GCP_WORKLOAD_POOL_ID`<br />`TFC_GCP_WORKLOAD_POOL_ID[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_POOL_ID` | The workload pool ID. | Requires **v1.7.0** or later if self-managing agents. |
| `TFC_GCP_WORKLOAD_PROVIDER_ID`<br />`TFC_GCP_WORKLOAD_PROVIDER_ID[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_PROVIDER_ID` | The workload identity provider ID. | Requires **v1.7.0** or later if self-managing agents. |

### Optional Environment Variables

You may need to set these variables, depending on your use case.

| Variable | Value | Notes |
| -------- | ----- | ----- |
| `TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE`<br />`TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br />`TFC_DEFAULT_GCP_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to a string of the form mentioned [here](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#oidc_1) in the GCP docs, with the leading `https://` stripped. | Requires **v1.7.0** or later if self-managing agents. This is one of the default `aud` formats that GCP accepts. |
| `TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL`<br />`TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL[_TAG]`<br />`TFC_DEFAULT_GCP_PLAN_SERVICE_ACCOUNT_EMAIL` | The service account email to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. |
| `TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL`<br />`TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL[_TAG]`<br />`TFC_DEFAULT_GCP_APPLY_SERVICE_ACCOUNT_EMAIL` | The service account email to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. |

### Configure the GCP Provider

Make sure that you're passing values for the `project` and `region` arguments into the provider configuration block.

Make sure that you're _not_ setting values for the `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS` environment variables, as these will conflict with the dynamic credentials authentication process.

## Specifying Multiple Configurations

~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.12.0](/terraform/cloud-docs/agents/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).

You can add additional variables to handle multiple distinct GCP setups, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.

For more details, see [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).

### Required Terraform Variable

To use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.

```hcl
variable "tfc_gcp_dynamic_credentials" {
  description = "Object containing GCP dynamic credentials configuration"
  type = object({
    default = object({
      credentials = string
    })
    aliases = map(object({
      credentials = string
    }))
  })
}
```

### Example Usage

```hcl
provider "google" {
  credentials = var.tfc_gcp_dynamic_credentials.default.credentials
}

provider "google" {
  alias       = "ALIAS1"
  credentials = var.tfc_gcp_dynamic_credentials.aliases["ALIAS1"].credentials
}
```
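The GCP console steps described earlier (pool, provider, service account, and impersonation grant) can also be expressed as Terraform resources. The following is a minimal sketch using the `google` provider; the pool, provider, organization, project, and workspace names are illustrative placeholders, and the linked example repository remains the authoritative, complete configuration:

```hcl
# Sketch only: names and the attribute condition below are placeholder
# assumptions. Adapt them to your organization, project, and workspace.

resource "google_iam_workload_identity_pool" "tfc_pool" {
  workload_identity_pool_id = "my-tfc-pool"
}

resource "google_iam_workload_identity_pool_provider" "tfc_provider" {
  workload_identity_pool_id          = google_iam_workload_identity_pool.tfc_pool.workload_identity_pool_id
  workload_identity_pool_provider_id = "my-tfc-provider"

  # google.subject must map to assertion.sub at a minimum.
  attribute_mapping = {
    "google.subject" = "assertion.sub"
  }

  # Restrict which identity tokens may authenticate through this pool.
  attribute_condition = "assertion.sub.startsWith(\"organization:my-org:project:my-project:workspace:my-workspace\")"

  oidc {
    # No trailing slash; must start with https://
    issuer_uri = "https://app.terraform.io"
  }
}

resource "google_service_account" "tfc_service_account" {
  account_id   = "tfc-service-account"
  display_name = "TFC service account"
}

# Allow identities from the pool to impersonate the service account.
resource "google_service_account_iam_member" "workload_identity_user" {
  service_account_id = google_service_account.tfc_service_account.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "principalSet://iam.googleapis.com/${google_iam_workload_identity_pool.tfc_pool.name}/*"
}
```

Granting project-level IAM roles to the service account (the "Grant IAM Permissions" step) is intentionally omitted here, since the appropriate roles depend entirely on what your workspace manages.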