Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars)
Get the data!
df_illtypes = get_ill_facets(params, ill_types)
_____no_output_____
MIT
visualise-searches-over-time.ipynb
GLAM-Workbench/trove-newspapers
To calculate proportions for these searches we'll just use the total number of articles across all of Trove that we collected above.
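The `merge_df_with_total` helper is defined earlier in the notebook and isn't shown in this excerpt, so here is only a rough sketch of the idea (the column names and suffixes are assumptions, not the actual implementation): a pandas merge on year followed by a simple division.

# Hypothetical sketch of what the merge/proportion step boils down to
def merge_df_with_total_sketch(df, df_total):
    # join the facet counts with the overall totals on the year column
    merged = df.merge(df_total, on='year', suffixes=('', '_total'))
    # proportion of all articles in that year that match the search
    merged['proportion'] = merged['total_results'] / merged['total_results_total']
    return merged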
# Merge results with total articles and calculate proportions df_illtypes_merged = merge_df_with_total(df_illtypes, df_total) # Make total results chart chart9 = make_chart_totals(df_illtypes_merged, 'ill_type', 'Type') # Make proportions chart chart10 = make_chart_proportions(df_illtypes_merged, 'ill_type', 'Type') # Shorthand way of concatenating the two charts (note there's only one legend) chart9 & chart10
_____no_output_____
MIT
visualise-searches-over-time.ipynb
GLAM-Workbench/trove-newspapers
And there we have it – interesting to see the rapid increase in photos from the 1920s on.

9. But what are we searching?

We've seen that we can visualise Trove search results over time in a number of different ways. But what are we actually searching? In the last example, exploring illustration types, we sliced up the complete collection of Trove newspaper articles using the `ill_type` facet. This is a metadata field whose value is set by the people who processed the articles. It should be consistent, but we can't take these sorts of things for granted. Let's look at all the values in the `illtype` field.
ill_params = params.copy() # No query! Set q to a single space for everything ill_params['q'] = ' ' # Set the illustrated facet to true - necessary before setting ill_type ill_params['l-illustrated'] = 'true' ill_params['facet'] = 'illtype' data = get_results(ill_params) facets = [] for term in data['response']['zone'][0]['facets']['facet']['term']: # Get the state and the number of results, and convert it to integers, before adding to our results facets.append({'ill_type': term['search'], 'total_results': int(term['count'])}) df_ill_types = pd.DataFrame(facets) df_ill_types
_____no_output_____
MIT
visualise-searches-over-time.ipynb
GLAM-Workbench/trove-newspapers
It's pretty consistent, but why are there entries both for 'Cartoon' and 'Cartoons'? In the past I've noticed variations in capitalisation amongst the facet values, but fortunately these seem to have been fixed. The point is that we can't take search results for granted – we have to think about how they are created.

Just as working with the Trove API enables us to view search results in different ways, so we can turn the search results against themselves to reveal some of their limitations and inconsistencies. In most of the examples above we're searching the full text of the newspaper articles for specific terms. The full text has been extracted from page images using Optical Character Recognition. The results are far from perfect, and Trove users help to correct errors. But many errors remain, and all the visualisations we've created will have been affected by them. Some articles will be missing. While we can't do much directly to improve the results, we can investigate whether the OCR errors are evenly distributed across the collection – do certain time periods or newspapers have higher error rates?

As a final example, let's see what we can find out about the variation of OCR errors over time. We'll do this by searching for a very common OCR error – 'tbe', which is of course meant to be 'the'. This is hardly a perfect measure of OCR accuracy, but it is something we can easily measure. How does the frequency of 'tbe' change over time?
params['q'] = 'text:"tbe"~0' ocr_facets = get_facet_data(params) df_ocr = pd.DataFrame(ocr_facets) df_ocr_merged = merge_df_with_total(df_ocr, df_total) alt.Chart(df_ocr_merged).mark_line(point=True).encode( x=alt.X('year:Q', axis=alt.Axis(format='c', title='Year')), # This time we're showing the proportion (formatted as a percentage) on the Y axis y=alt.Y('proportion:Q', axis=alt.Axis(format='%', title='Proportion of articles')), tooltip=[alt.Tooltip('year:Q', title='Year'), alt.Tooltip('proportion:Q', title='Proportion', format='%')], ).properties(width=700, height=400)
_____no_output_____
MIT
visualise-searches-over-time.ipynb
GLAM-Workbench/trove-newspapers
Table of Contents
1  Goal
2  Var
3  Init
4  Load
5  Summary
6  Filter
6.1  Random selection
6.1.1  Summary
6.1.1.1  Taxonomy
7  Split
8  Write
9  CheckM value histograms
10  Taxonomy summary
11  sessionInfo

Goal
* selecting genomes from the GTDB for testing and validation
* randomly selecting

Var
work_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/GTDB_ref_genomes/' metadata_file = '/ebio/abt3_projects/databases_no-backup/GTDB/release86/metadata_1perGTDBSpec_gte50comp-lt5cont_wPath.tsv'
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Init
library(dplyr) library(tidyr) library(ggplot2) set.seed(8364)
Attaching package: ‘dplyr’ The following objects are masked from ‘package:stats’: filter, lag The following objects are masked from ‘package:base’: intersect, setdiff, setequal, union
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Load
metadata = read.delim(metadata_file, sep='\t') %>% dplyr::select(ncbi_organism_name, accession, scaffold_count, longest_scaffold, gc_percentage, total_gap_length, genome_size, n50_contigs, trna_count, checkm_completeness, checkm_contamination, ssu_count, ncbi_taxonomy, ssu_gg_taxonomy, gtdb_taxonomy, fasta_file_path) metadata %>% nrow %>% print metadata %>% head
[1] 21276
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Summary
metadata %>% summary
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Filter
# removing abnormal metadata_f = metadata %>% filter(scaffold_count <= 100, total_gap_length < 100000, genome_size < 10000000, checkm_completeness >= 90, ssu_count < 20, fasta_file_path != '') metadata_f %>% nrow metadata_f %>% summary
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Random selection

Selecting 2000 genomes, which will be split into training & testing
metadata_f = metadata_f %>% sample_n(2000) metadata_f %>% nrow
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Summary
metadata_f %>% dplyr::select(scaffold_count, longest_scaffold, gc_percentage, total_gap_length, genome_size, n50_contigs, trna_count, checkm_completeness, checkm_contamination, ssu_count) %>% summary
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Taxonomy
metadata_f_tax = metadata_f %>% dplyr::select(ncbi_taxonomy) %>% separate(ncbi_taxonomy, c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'), sep=';') metadata_f_tax %>% head metadata_f_tax %>% group_by(Domain) %>% summarize(n = n()) %>% ungroup() %>% arrange(-n) metadata_f_tax %>% group_by(Domain, Phylum) %>% summarize(n = n()) %>% ungroup() %>% arrange(-n) %>% head(n=20) metadata_f_tax = metadata_f %>% dplyr::select(gtdb_taxonomy) %>% separate(gtdb_taxonomy, c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'), sep=';') metadata_f_tax %>% head metadata_f_tax %>% group_by(Domain) %>% summarize(n = n()) %>% ungroup() %>% arrange(-n) metadata_f_tax %>% group_by(Domain, Phylum) %>% summarize(n = n()) %>% ungroup() %>% arrange(-n) %>% head(n=20)
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Split
metadata_f_train = metadata_f %>% sample_n(1000) metadata_f_train %>% nrow metadata_f_test = metadata_f %>% anti_join(metadata_f_train, c('ncbi_organism_name', 'accession')) metadata_f_test %>% nrow # accidental overlap? metadata_f_train %>% inner_join(metadata_f_test, c('ncbi_organism_name', 'accession')) %>% nrow
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Write
outF = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_train.tsv') metadata_f_train %>% rename('Taxon' = ncbi_organism_name, 'Fasta' = fasta_file_path) %>% write.table(outF, sep='\t', quote=FALSE, row.names=FALSE) cat('File written:', outF, '\n') outF = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_test.tsv') metadata_f_test %>% rename('Taxon' = ncbi_organism_name, 'Fasta' = fasta_file_path) %>% write.table(outF, sep='\t', quote=FALSE, row.names=FALSE) cat('File written:', outF, '\n')
File written: /ebio/abt3_projects/databases_no-backup/DeepMAsED/GTDB_ref_genomes//DeepMAsED_GTDB_genome-refs_test.tsv
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
CheckM value histograms
* the distribution of checkM completeness/contamination for training & test
F = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_train.tsv') metadata_f_train = read.delim(F, sep='\t') %>% mutate(data_partition = 'Train') metadata_f_train %>% dim %>% print metadata_f_train %>% head(n=3) F = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_test.tsv') metadata_f_test = read.delim(F, sep='\t') %>% mutate(data_partition = 'Test') metadata_f_test %>% dim %>% print metadata_f_test %>% head(n=3) metadata_f = rbind(metadata_f_train, metadata_f_test) metadata_f %>% head(n=3) p = metadata_f %>% dplyr::select(Taxon, data_partition, checkm_completeness, checkm_contamination) %>% gather(checkm_stat, checkm_value, -Taxon, -data_partition) %>% mutate(data_partition = factor(data_partition, levels=c('Train', 'Test')), checkm_stat = ifelse(checkm_stat == 'checkm_completeness', 'Completenss', 'Contamination')) %>% ggplot(aes(data_partition, checkm_value)) + geom_boxplot() + labs(x='Data partition', y='') + facet_wrap(~ checkm_stat, scales='free_y') + theme_bw() options(repr.plot.width=6, repr.plot.height=3) plot(p) F = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_checkM-box.pdf') ggsave(p, file=F, width=7, height=4) cat('File written:', F, '\n') metadata_f$checkm_completeness %>% summary %>% print metadata_f$checkm_completeness %>% sd %>% print metadata_f$checkm_contamination %>% summary %>% print metadata_f$checkm_contamination %>% sd %>% print
Min. 1st Qu. Median Mean 3rd Qu. Max. 90.09 98.68 99.44 98.77 99.89 100.00 [1] 1.846624 Min. 1st Qu. Median Mean 3rd Qu. Max. 0.0000 0.0000 0.4800 0.7283 1.0600 4.9900 [1] 0.8609621
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
Taxonomy summary
F = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_train.tsv') metadata_f_train = read.delim(F, sep='\t') %>% mutate(data_partition = 'Train') metadata_f_train %>% dim %>% print metadata_f_train %>% head(n=3) F = file.path(work_dir, 'DeepMAsED_GTDB_genome-refs_test.tsv') metadata_f_test = read.delim(F, sep='\t') %>% mutate(data_partition = 'Test') metadata_f_test %>% dim %>% print metadata_f_test %>% head(n=3) metadata_f = rbind(metadata_f_train, metadata_f_test) metadata_f %>% head(n=3) metadata_f_tax = metadata_f %>% dplyr::select(data_partition, ncbi_taxonomy) %>% separate(ncbi_taxonomy, c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'), sep=';') metadata_f_tax %>% head # domain-level distribution metadata_f_tax %>% group_by(data_partition, Domain) %>% summarize(n = n()) %>% ungroup() # number of phyla represented metadata_f_tax %>% filter(Phylum != 'p__') %>% .$Phylum %>% unique %>% length %>% print metadata_f_tax %>% filter(Class != 'c__') %>% .$Class %>% unique %>% length %>% print metadata_f_tax %>% filter(Genus != 'g__') %>% .$Genus %>% unique %>% length %>% print
[1] 764
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
sessionInfo
sessionInfo()
_____no_output_____
MIT
notebooks/01_simulation_datasets/02_train-test_default/01_GTDB_test-val_genomes.ipynb
chrisLanderson/DeepMAsED
This one is a matrix problem: let's use NumPy!

Sample input part 1

Let's start with the sample.
sample = """1, 1 1, 6 8, 3 3, 4 5, 5 8, 9""" sources = [tuple(map(int, line.split(', '))) for line in sample.split('\n')] sources
_____no_output_____
MIT
Problem 06.ipynb
flothesof/advent_of_code2018
Let's write a function that returns a numpy grid of closest distances given a source and a grid size.
import numpy as np def manhattan(source, grid_size): n, m = grid_size grid = np.empty(grid_size, dtype=int) X, Y = np.meshgrid(np.arange(n), np.arange(m)) return np.abs(X - source[0]) + np.abs(Y - source[1]) manhattan(sources[0], (10, 10)) manhattan(sources[1], (10, 10)) a = manhattan(sources[0], (10, 10)) b = manhattan(sources[1], (10, 10)) np.where(a < b, np.ones_like(a), np.ones_like(a) * 2) grid_size = (10, 10) nearest = np.ones(grid_size) * -1 min_dist = np.ones(grid_size, dtype=int) * 10000 for index, source in enumerate(sources): dist = manhattan(source, grid_size) nearest[dist == min_dist] = np.nan nearest = np.where(dist < min_dist, np.zeros_like(nearest) + index, nearest) min_dist = np.where(dist < min_dist, dist, min_dist) nearest
_____no_output_____
MIT
Problem 06.ipynb
flothesof/advent_of_code2018
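As an aside, the same Manhattan-distance grids can be built for all sources at once with NumPy broadcasting. This is only an alternative sketch (ties are ignored and the axis convention is illustrative); the notebook keeps using the meshgrid version above:

# Distances from every grid cell to every source in a single array
src = np.array(sources)                                  # shape (n_sources, 2)
xs, ys = np.indices((10, 10))                            # grid coordinates
dists = (np.abs(xs[None] - src[:, 0, None, None])
         + np.abs(ys[None] - src[:, 1, None, None]))     # shape (n_sources, 10, 10)
dists.argmin(axis=0)                                     # id of the closest source per cell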
If we mask out the infinite groups – that would be regions 0, 2, 1 and 5 – we get:
infinites = [0, 2, 1, 5] for infinite in infinites: nearest[nearest == infinite] = np.nan for i in (set(range(len(sources))) - set(infinites)): print(i, np.nansum(nearest == i))
3 9 4 17
MIT
Problem 06.ipynb
flothesof/advent_of_code2018
Part 1 for real
sources = [tuple(map(int, line.split(', '))) for line in open('input06.txt').readlines()] np.array(sources).min(axis=0) np.array(sources).max(axis=0) grid_size = (360, 360) nearest = np.ones(grid_size) * -1 min_dist = np.ones(grid_size, dtype=int) * 10000 for index, source in enumerate(sources): dist = manhattan(source, grid_size) nearest[dist == min_dist] = np.nan nearest = np.where(dist < min_dist, np.zeros_like(nearest) + index, nearest) min_dist = np.where(dist < min_dist, dist, min_dist) nearest
_____no_output_____
MIT
Problem 06.ipynb
flothesof/advent_of_code2018
The funny thing here is that this view of the matrix directly allows us to read the infinite regions off the four corners, since the nearest source to each corner is one whose region extends to infinity!
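A small sketch of that idea – reading the ids straight from the corners of the `nearest` array computed above instead of typing them in by hand (the hard-coded list in the next cell is what the notebook actually uses):

# Nearest-source id at each corner of the grid (NaN ties skipped)
corner_values = [nearest[0, 0], nearest[0, -1], nearest[-1, 0], nearest[-1, -1]]
corner_infinites = sorted({int(v) for v in corner_values if not np.isnan(v)})
corner_infinites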
infinites = [0, 9, 28, 37] for infinite in infinites: nearest[nearest == infinite] = np.nan for i in (set(range(len(sources))) - set(infinites)): print(i, np.nansum(nearest == i)) max(np.nansum(nearest == i) for i in (set(range(len(sources))) - set(infinites)))
_____no_output_____
MIT
Problem 06.ipynb
flothesof/advent_of_code2018
Part 2 sample
sources = [tuple(map(int, line.split(', '))) for line in sample.split('\n')] X, Y = np.meshgrid(np.arange(10), np.arange(10)) total = np.zeros((10, 10)) for source in sources: total += np.abs(X - source[0]) + np.abs(Y - source[1]) np.where(total < 32, 1, 0) np.sum(np.where(total < 32, 1, 0))
_____no_output_____
MIT
Problem 06.ipynb
flothesof/advent_of_code2018
Part 2 for real
sources = [tuple(map(int, line.split(', '))) for line in open('input06.txt').readlines()] grid_size = (360, 360) X, Y = np.meshgrid(np.arange(360), np.arange(360)) total = np.zeros(grid_size) for source in sources: total += np.abs(X - source[0]) + np.abs(Y - source[1]) np.sum(np.where(total < 10000, 1, 0))
_____no_output_____
MIT
Problem 06.ipynb
flothesof/advent_of_code2018
Answering Business Questions using SQL for the Chinook database

In this project we are going to explore and analyze the Chinook database, using a modified version that is included in the project directory. The Chinook database contains information about a fictional digital music shop – kind of like a mini iTunes store.

Here are the questions we are going to answer in this project:

1- What is the best genre?
2- Employee performance
3- Best countries by sales
4- How many purchases are whole-album purchases vs individual tracks?

First we import the essential libraries and connect to the database. In this cell we also define the helper functions we will use throughout the project, and at the end of the cell we list all the tables in the database and their names.
import sqlite3 import pandas as pd import matplotlib.pyplot as plt import numpy as np from matplotlib import cm %matplotlib inline def run_query(q): with sqlite3.connect("chinook.db") as conn: return pd.read_sql(q, conn) def run_command(c): with sqlite3.connect("chinook.db") as conn: conn.isolation_level = None conn.execute(c) def show_tables(): q =''' SELECT name, type FROM sqlite_master WHERE type IN ("table","view"); ''' return run_query(q) show_tables()
_____no_output_____
MIT
Chinook_1.ipynb
alinasr/business_questions_SQL_chinook_database
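Before diving into the questions, a minimal usage sketch of the `run_query` helper defined above (the query itself is just an illustration):

# Example: count the invoices in the database using the helper
run_query('SELECT COUNT(*) AS n_invoices FROM invoice;')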
Best Genres

Chinook signed a contract and wants to see which genres and artists are worth buying and investing in for its online shop. The table below shows each artist and genre in this contract.

| Artist Name | Genre |
| --- | --- |
| Regal | Hip-Hop |
| Red Tone | Punk |
| Meteor and the Girls | Pop |
| Slim Jim Bites | Blues |

Then we are going to find the best-selling genres in the USA and recommend to our business managers the first three that are in the contract.
best_sold_genre = ''' WITH USA_purches AS (SELECT il.*, c.country FROM customer c INNER JOIN invoice i ON i.customer_id = c.customer_id INNER JOIN invoice_line il ON il.invoice_id = i.invoice_id WHERE c.country = "USA" ) SELECT g.name genre_name, SUM(up.quantity) total_sold, CAST(SUM(up.quantity) as float) / CAST((SELECT COUNT(quantity) FROM USA_purches) as float) sold_percentage FROM USA_purches up LEFT JOIN track t ON t.track_id = up.track_id LEFT JOIN genre g on g.genre_id = t.genre_id GROUP BY 1 ORDER by 2 DESC LIMIT 10 ''' run_query(best_sold_genre) top_genre_usa = run_query(best_sold_genre) top_genre_usa.set_index("genre_name",drop=True,inplace=True) ax = top_genre_usa["total_sold"].plot(kind='barh', title ="Top genres in USA", figsize=(8, 6), legend=False, fontsize=12, width = 0.75) ax.annotate("test labele annoate", (600, - 0.15))
_____no_output_____
MIT
Chinook_1.ipynb
alinasr/business_questions_SQL_chinook_database
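To relate the ranking above to the contract, one could filter the result down to the contract genres. The labels below are assumptions about how Chinook names these genres, so they may need adjusting against the genre table:

# Hypothetical genre labels for the four contract artists
contract_genres = ['Hip Hop/Rap', 'Alternative & Punk', 'Pop', 'Blues']
top_genre_usa[top_genre_usa.index.isin(contract_genres)]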
As you can see in the table and figure above, the genre with the most tracks sold is Rock, followed by Alternative & Punk and then Metal. But based on our contract list, to choose the first three albums we should select Alternative & Punk first, then Blues, then Pop. So we need to purchase albums for the store from the artists:

1- `Red Tone`
2- `Slim Jim Bites`
3- `Meteor and the Girls`

Employee performance

In this section we are going to analyze the sales performance of each sales support agent, as in the query and code below.
employee_performance = ''' SELECT e.first_name || " " || e.last_name employee_name, e.birthdate, e.hire_Date, SUM(i.total) total_dollar FROM employee e LEFT JOIN customer c ON c.support_rep_id = e.employee_id LEFT JOIN invoice i ON i.customer_id = c.customer_id WHERE e.title = "Sales Support Agent" GROUP BY 1 ORDER BY 4 DESC ''' run_query(employee_performance) employee_dollar = run_query(employee_performance) employee_dollar.set_index("employee_name", inplace = True, drop = True) employee_dollar["total_dollar"].plot.barh(title = "Employee performance for Sales Support Agents", colormap=plt.cm.Accent)
_____no_output_____
MIT
Chinook_1.ipynb
alinasr/business_questions_SQL_chinook_database
As we can see in the table and figure above, there are only small differences between the total dollar amounts for each employee, and if you look at their hire dates, the longest-serving one has the highest total. There is no relation between their age and their performance in this particular analysis.

Sales by country

We are going to analyze sales by country – number of customers, total sales in dollars and related metrics for each country – in the query below.
country_customer_sales = ''' WITH customer_purches AS ( SELECT c.country, c.customer_id, SUM(i.total) total, COUNT(distinct invoice_id) number_of_order FROM customer c LEFT JOIN invoice i ON i.customer_id = c.customer_id GROUP BY 2 ), country_customer AS ( SELECT SUM(number_of_order) number_of_order, SUM(total) total, SUM(total_customer) total_customer, CASE WHEN total_customer = 1 THEN "other" ELSE country END AS country FROM ( SELECT country, COUNT(customer_id) total_customer, SUM(total) total, SUM(number_of_order) number_of_order FROM customer_purches GROUP by 1 ORDER by 2 DESC ) GROUP by 4 ORDER BY 3 DESC ) SELECT country, total_customer, total total_dollar, CAST(total as float) / CAST(total_customer as float) customer_lifetime_value, number_of_order, CAST(total as float) / CAST(number_of_order as float) Average_order FROM ( SELECT cc.*, CASE WHEN country = "other" THEN 1 ELSE 0 END AS sort FROM country_customer cc ) ORDER BY sort ''' run_query(country_customer_sales)
_____no_output_____
MIT
Chinook_1.ipynb
alinasr/business_questions_SQL_chinook_database
Also, in the code below we are going to plot the results from the table above to get a better understanding.
country_sales = run_query(country_customer_sales) country_sales.set_index("country", drop=True, inplace=True) colors = [plt.cm.tab20(i) for i in np.linspace(0, 1, country_sales.shape[0])] fig = plt.figure(figsize=(18, 14)) fig.subplots_adjust(hspace=.5, wspace=.5) ax1 = fig.add_subplot(2, 2, 1) country_sales_rename = country_sales["total_customer"].copy().rename('') country_sales_rename.plot.pie( startangle=-90, counterclock=False, title="Number of customers for each country", colormap=plt.cm.tab20, ax =ax1, fontsize = 14 ) ax2 = fig.add_subplot(2, 2, 2) average_order = country_sales["Average_order"] average_order.index.name = '' difretional = ((average_order * 100) / average_order.mean()) -100 difretional.plot.barh(ax = ax2, title = "Average_order differences from mean", color = colors, fontsize = 14, width = 0.8 ) ax3 = fig.add_subplot(2, 2, 3) customer_lifetime_value = country_sales["customer_lifetime_value"] customer_lifetime_value.index.name = '' customer_lifetime_value.plot.bar(ax = ax3, title = "customer lifetime value", color = colors, fontsize = 14, width = 0.8 ) ax4 = fig.add_subplot(2, 2, 4) total_dollar = country_sales["total_dollar"] total_dollar.index.name = '' total_dollar.plot.bar(ax = ax4, title = "Total salwes for each country $", color = colors, fontsize = 14, width = 0.8 )
_____no_output_____
MIT
Chinook_1.ipynb
alinasr/business_questions_SQL_chinook_database
As you can see in the plots and tables, we have clear results for each country. Most customers and sales belong first to the USA, then Canada. But looking at the `Average_order differences from mean` plot, we can see that there are opportunities in countries like `Czech Republic`, `United Kingdom` and `India`. However, the data for these countries is very limited – only 2 or 3 customers – so we need more data on these countries before investing in them.

Album purchases or not

Chinook allows customers to purchase a whole album or a collection of one or more individual tracks. Recently, they have been considering a change to their purchasing policy: buying only the most popular tracks from each album from record companies, instead of purchasing every track of an album. We need to find out how many purchases are individual tracks vs whole albums, so management can make a decision based on that. The query below does exactly this: for each invoice_id we check whether it is a whole-album purchase or not, and the result is shown in the table.
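Before the SQL, here is a hedged pandas sketch of the same logic, reusing the `run_query` helper and the tables referenced below. It exists only to make the set comparison explicit, not to replace the query:

# Each invoice line with its track's album, plus the full track list per album
il = run_query('''SELECT il.invoice_id, il.track_id, t.album_id
                  FROM invoice_line il
                  JOIN track t ON t.track_id = il.track_id;''')
album_tracks = run_query('SELECT album_id, track_id FROM track;')
album_sets = album_tracks.groupby('album_id')['track_id'].apply(set)

def is_album_purchase(group):
    # album of the invoice's first track (mirrors the first_track_album CTE below)
    first_album = group['album_id'].iloc[0]
    if pd.isnull(first_album):
        return False
    # whole-album purchase if the purchased tracks equal the album's full track set
    return set(group['track_id']) == album_sets[first_album]

il.groupby('invoice_id').apply(is_album_purchase).value_counts()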
album_purches_or_not = ''' WITH track_album AS ( SELECT track_id, album_id FROM track ), first_track_album AS (SELECT il.invoice_id, il.track_id, t.album_id FROM invoice_line il LEFT JOIN track t ON t.track_id = il.track_id GROUP BY 1 ) SELECT album_purch, COUNT(invoice_id) invoice_number FROM ( SELECT fta.invoice_id, CASE WHEN ( SELECT ta.track_id FROM track_album ta WHERE ta.album_id = fta.album_id EXCEPT SELECT il.track_id FROM invoice_line il WHERE fta.invoice_id = il.invoice_id ) IS NULL AND ( SELECT il.track_id FROM invoice_line il WHERE fta.invoice_id = il.invoice_id EXCEPT SELECT ta.track_id FROM track_album ta WHERE ta.album_id = fta.album_id ) IS NULL THEN "YES" ELSE "NO" END AS album_purch FROM first_track_album fta ) GROUP BY album_purch ''' run_query(album_purches_or_not)
_____no_output_____
MIT
Chinook_1.ipynb
alinasr/business_questions_SQL_chinook_database
Remove existing figure file.
from pathlib import Path desktop = Path(r"C:\Users\tfiers\Desktop"); fname_suffix = " spy.png" for f in desktop.glob(f"*{fname_suffix}"): f.unlink() if buy_longg[-1]: fname_prefix = "💰💰BUY" elif buy_short[-1]: fname_prefix = "💰buy" else: fname_prefix = "don't buy" fig.savefig(desktop / (fname_prefix + fname_suffix), );
_____no_output_____
MIT
when_to_buy_sp500.ipynb
tfiers/when-to-buy-spy
Final extraction

There are 2 options for a final extraction:
1. Directly take the binned flux (see Tikhonov_extraction.ipynb)
2. 2D decontamination + box-like extraction

This notebook focuses on the second option (the first option is shown in other notebooks).

Imports
# Imports for plots import matplotlib.pyplot as plt from matplotlib.colors import LogNorm #for better display of FITS images # Imports from standard packages # from scipy.interpolate import interp1d from astropy.io import fits import numpy as np # Imports for extraction from extract.overlap import TrpzOverlap, TrpzBox from extract.throughput import ThroughputSOSS from extract.convolution import WebbKer
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Matplotlib defaults
%matplotlib inline plt.rc('figure', figsize=(9,3)) plt.rcParams["image.cmap"] = "inferno"
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Read ref files
# List of orders to consider in the extraction order_list = [1, 2] #### Wavelength solution #### wave_maps = [] wave_maps.append(fits.getdata("extract/Ref_files/wavelengths_m1.fits")) wave_maps.append(fits.getdata("extract/Ref_files/wavelengths_m2.fits")) #### Spatial profiles #### spat_pros = [] spat_pros.append(fits.getdata("extract/Ref_files/spat_profile_m1.fits").squeeze()) spat_pros.append(fits.getdata("extract/Ref_files/spat_profile_m2.fits").squeeze()) # Convert data from fits files to float (fits precision is 1e-8) wave_maps = [wv.astype('float64') for wv in wave_maps] spat_pros = [p_ord.astype('float64') for p_ord in spat_pros] #### Throughputs #### thrpt_list = [ThroughputSOSS(order) for order in order_list] #### Convolution kernels #### ker_list = [WebbKer(wv_map) for wv_map in wave_maps] # Put all inputs from reference files in a list ref_files_args = [spat_pros, wave_maps, thrpt_list, ker_list]
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Load simulation
# Import custom function to read toy simulation from sys import path path.append("Fake_data") from simu_utils import load_simu # Load a simulation simu = load_simu("Fake_data/phoenix_teff_02300_scale_1.0e+02.fits") data = simu["data"]
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Extraction Parameters

(Example using only a few input parameters)
params = {} # Map of expected noise (sig) bkgd_noise = 20. # Oversampling params["n_os"] = 3 # Threshold on the spatial profile params["thresh"] = 1e-4
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Init extraction object

(This only needs to be done once if `n_os` doesn't change)
extra = TrpzOverlap(*ref_files_args, **params)
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Extract

Here, we run a really simple extraction (only one step, no Tikhonov regularisation). In reality, this is where the "solver" should be used to iterate and get the best possible extraction.
# Noise estimate to weight the pixels # Poisson noise + background noise sig = np.sqrt(data + bkgd_noise**2) # Extract f_k = extra.extract(data=data, sig=sig)
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
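The cell above is a single pass. Purely as an illustration of the 'iterate' remark – this is not the project's solver, and the noise-update rule is an assumption – a refinement loop could look like the sketch below, using only calls already shown in this notebook:

# Illustrative only: alternate between rebuilding the model and re-estimating the noise
for _ in range(3):                                        # arbitrary number of iterations
    model = np.nan_to_num(extra.rebuild(f_k))             # current model of the detector
    sig = np.sqrt(np.abs(model) + bkgd_noise**2)          # assumed update: Poisson + background
    f_k = extra.extract(data=data, sig=sig)               # re-extract with the updated weights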
Quality estimate

Rebuild the detector
rebuilt = extra.rebuild(f_k) plt.figure(figsize=(16,2)) plt.imshow((rebuilt-data)/sig, vmin=-3, vmax=3) plt.colorbar(label="Error relative to noise")
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Decontaminate the 2D image

Generate a decontaminated image for each order
data_decont = [] for i_ord in range(extra.n_ord): # Rebuild the contaminating order rebuilt = extra.rebuild(f_k, orders=[i_ord]) # Remove this order and save data_decont.append(data - rebuilt) # Flip so the first order is in first position data_decont = np.flip(data_decont, axis=0)
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
Get 1D spectrum with box-like extraction

More details for box-like extraction in Box_like_extraction.ipynb
# Use a single row for final wavelength bin grid_box_list = [wv_map[50, :] for wv_map in wave_maps] # Keep well defined values and sort grid_box_list = [np.unique(wv[wv > 0.]) for wv in grid_box_list] f_bin_list = [] # Iterate on each orders for i_ord in range(extra.n_ord): # Reference files wv_map = extra.lam_list[i_ord] aperture = extra.p_list[i_ord] # Mask mask = np.isnan(data_decont[i_ord]) # Define extraction object box_extra = TrpzBox(aperture, wv_map, box_width=30, mask=mask) # Extract the flux f_k = box_extra.extract(data=data_decont[i_ord]) # Bin to pixels grid_box = grid_box_list[i_ord] _, f_bin = box_extra.bin_to_pixel(grid_pix=grid_box, f_k=f_k) # Save f_bin_list.append(f_bin)
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
The final output is in f_bin_list (one entry for each order)
fig, ax = plt.subplots(2, 1, sharex=True) for i_ord in range(extra.n_ord): ax[i_ord].plot(grid_box_list[i_ord], f_bin_list[i_ord]) ax[0].set_ylabel("Counts") ax[1].set_xlabel("Wavelength [$\mu m$]")
_____no_output_____
MIT
SOSS/Final_extraction.ipynb
njcuk9999/jwst-mtl
0.0 Imports
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn import metrics as m from sklearn.svm import SVC from imblearn import combine as c from boruta import BorutaPy from IPython.core.display import display, HTML import inflection import warnings import joblib warnings.filterwarnings('ignore') from scipy import stats import requests from sklearn.dummy import DummyClassifier from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from xgboost import XGBClassifier from sklearn.model_selection import RandomizedSearchCV from sklearn import metrics as m from sklearn.model_selection import train_test_split, StratifiedKFold from sklearn.ensemble import RandomForestClassifier from sklearn.preprocessing import RobustScaler, MinMaxScaler, LabelEncoder
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
0.1 Helper Functions
def jupyter_settings(): %matplotlib inline %pylab inline plt.style.use( 'bmh' ) plt.rcParams['figure.figsize'] = [18, 9] plt.rcParams['font.size'] = 24 display( HTML( '<style>.container { width:100% !important; }</style>') ) pd.options.display.max_columns = None pd.options.display.max_rows = None pd.set_option( 'display.expand_frame_repr', False ) pd.set_option('display.float_format', lambda x: '%.3f' % x) warnings.filterwarnings('ignore') sns.set() jupyter_settings() def barplot(a,b,data): plot = sns.barplot(x=a, y=b, data=data, edgecolor='k', palette='Blues'); return plot def cramer_v(x,y): cm = pd.crosstab(x,y).values n = cm.sum() r,k = cm.shape chi2 = stats.chi2_contingency(cm)[0] chi2corr = max(0, chi2 - (k-1)*(r-1)/(n-1)) kcorr = k - (k-1)**2/(n-1) rcorr = r - (r-1)**2/(n-1) return np.sqrt((chi2corr/n) / (min(kcorr-1, rcorr-1))) def ml_metrics(model_name, y_true, pred): accuracy = m.balanced_accuracy_score(y_true, pred) precision = m.precision_score(y_true, pred) recall = m.recall_score(y_true, pred) f1 = m.f1_score(y_true, pred) kappa = m.cohen_kappa_score(y_true, pred) return pd.DataFrame({'Balanced Accuracy': np.round(accuracy, 2), 'Precision': np.round(precision, 2), 'Recall': np.round(recall, 2), 'F1': np.round(f1, 2), 'Kappa': np.round(kappa, 2)}, index=[model_name]) def ml_results_cv(model_name, model, x, y): x = x.to_numpy() y = y.to_numpy() mms = MinMaxScaler() balanced_accuracy = [] precision = [] recall = [] f1 = [] kappa = [] skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) for train_index, test_index in skf.split(x, y): x_train_cv, x_test_cv = x[train_index], x[test_index] y_train_cv, y_test_cv = y[train_index], y[test_index] x_train_cv = mms.fit_transform(x_train_cv) x_test_cv = mms.fit_transform(x_test_cv) model.fit(x_train_cv, y_train_cv) pred = model.predict(x_test_cv) balanced_accuracy.append(m.balanced_accuracy_score(y_test_cv, pred)) precision.append(m.precision_score(y_test_cv, pred)) recall.append(m.recall_score(y_test_cv, pred)) f1.append(m.f1_score(y_test_cv, pred)) kappa.append(m.cohen_kappa_score(y_test_cv, pred)) acurracy_mean, acurracy_std = np.round(np.mean(balanced_accuracy), 2), np.round(np.std(balanced_accuracy),2) precision_mean, precision_std = np.round(np.mean(precision),2), np.round(np.std(precision),2) recall_mean, recall_std = np.round(np.mean(recall),2), np.round(np.std(recall),2) f1_mean, f1_std = np.round(np.mean(f1),2), np.round(np.std(f1),2) kappa_mean, kappa_std = np.round(np.mean(kappa),2), np.round(np.std(kappa),2) return pd.DataFrame({"Balanced Accuracy": "{} +/- {}".format(acurracy_mean, acurracy_std), "Precision": "{} +/- {}".format(precision_mean, precision_std), "Recall": "{} +/- {}".format(recall_mean, recall_std), "F1": "{} +/- {}".format(f1_mean, f1_std), "Kappa": "{} +/- {}".format(kappa_mean, kappa_std)}, index=[model_name])
Populating the interactive namespace from numpy and matplotlib
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
0.2 Loading Data
df_raw = pd.read_csv('data/churn.csv')
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
1.0 Data Description
df1 = df_raw.copy()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
**RowNumber:** The row number.
**CustomerID:** Unique identifier of the customer.
**Surname:** The customer's surname.
**CreditScore:** The customer's credit score for the consumer market.
**Geography:** The country where the customer lives.
**Gender:** The customer's gender.
**Age:** The customer's age.
**Tenure:** Number of years the customer has remained active.
**Balance:** Monetary amount the customer has in their bank account.
**NumOfProducts:** The number of products the customer has bought from the bank.
**HasCrCard:** Indicates whether or not the customer has a credit card.
**IsActiveMember:** Indicates whether the customer made at least one transaction in their bank account within 12 months.
**EstimatedSalary:** Estimate of the customer's monthly salary.
**Exited:** Indicates whether or not the customer has churned.

1.1 Rename Columns
cols_old = ['RowNumber','CustomerId','Surname','CreditScore', 'Geography','Gender', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'HasCrCard', 'IsActiveMember', 'EstimatedSalary', 'Exited'] snakecase = lambda x: inflection.underscore(x) cols_new = list(map(snakecase, cols_old)) df1.columns = cols_new df1.columns
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
1.2 Data Dimension
df1.shape
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
1.3 Data Types
df1.dtypes
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
1.4 Check NA
df1.isnull().sum()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
1.5 Fillout NA

There are no NaN values in the dataset.

1.6 Change Types
#changing the values 0 and 1 to 'yes' and 'no'. It'll help on the data description and analysis. df1['has_cr_card'] = df1['has_cr_card'].map({1:'yes', 0:'no'}) df1['is_active_member'] = df1['is_active_member'].map({1:'yes', 0:'no'}) df1['exited'] = df1['exited'].map({1:'yes',0:'no'})
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
1.7 Descriptive Statistics

1.7.1 Numerical Attributes
# Central tendency - mean, median # Dispersion - std, min, max, skew, kurtosis skew = df1.skew() kurtosis = df1.kurtosis() metrics = pd.DataFrame(df1.describe().drop(['count','25%','75%']).T) metrics = pd.concat([metrics, skew, kurtosis], axis=1) metrics.columns = ['Mean','STD','Min','Median','Max','Skew','Kurtosis'] metrics
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
1.7.2 Categorical Attributes
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64']) cat_attributes.apply(lambda x: x.unique().shape[0]) cat_attributes.describe()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
2.0 Feature Engineering
df2 = df1.copy()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
2.1 Mind Map Hypotheses

2.2 Hypotheses List

1. Women churn 30% more than men
2. People with a credit score lower than 600 churn more
3. People younger than 30 churn more
4. People with a balance lower than the average churn more
5. People with a salary higher than the average churn less
6. People who have a credit card and a credit score lower than 600 churn more
7. People who have remained active for more than 2 years churn less
8. People who are not active churn more
9. People with more than 1 product churn less
10. People who have a credit card and are active churn less

3.0 Variables Filtering
df3 = df2.copy()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
3.1 Rows Filtering

All rows will be used for the analysis.

3.2 Columns Selection
# droping columns that won't be usefull df3.drop(['row_number','customer_id','surname'], axis=1, inplace=True)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.0 EDA
df4 = df3.copy()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.1 Univariate Analysis

4.1.1 Response Variable
sns.countplot(df4['exited'])
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.1.2 Numerical Variables
num_atributes = df4.select_dtypes(include=['int64','float64']) num_atributes.hist(figsize=(15,10), bins=25);
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.1.3 Categorical Variables
cat_attributes = df4.select_dtypes(include='object') j = 1 for i in cat_attributes: plt.subplot(3,2,j) sns.countplot(x=i, data=df4) plt.tight_layout() j +=1
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.2 Bivariate Analysis

**H1.** Women churn 30% more than men

**False!!** Women churn 27% more than men
aux = df4[['gender','exited']][df4['exited'] == 'yes'].groupby('gender').count().reset_index() aux.sort_values(by='exited', ascending=True, inplace=True) aux['growth'] = aux['exited'].pct_change() aux barplot('gender','exited', aux)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H2. People with a credit score lower than 600 churn more

**False!!** Customers with a credit score higher than 600 churn more
aux = df4[['credit_score','exited']][df4['exited'] == 'yes'].copy() aux['credit_score'] = aux['credit_score'].apply(lambda x: '> 600' if x > 600 else '< 600' ) aux1 = aux[['credit_score','exited']].groupby('credit_score').count().reset_index() aux1 barplot('credit_score','exited',aux1)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H3. People younger than 30 churn more

**False!!** Customers younger than 30 churn less
aux = df4[['age','exited']][df4['exited'] == 'yes'].copy() aux['age'] = aux['age'].apply(lambda x: ' > 30' if x > 30 else ' < 30' ) aux1= aux[['age','exited']].groupby('age').count().reset_index() aux1 barplot('age','exited', aux1)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H4. People with a balance lower than the average churn more

**False!!** Customers with a balance lower than the average churn less
balance_mean = df4['balance'].mean() aux = df4[['balance','exited']][df4['exited'] =='yes'].copy() aux['balance'] = aux['balance'].apply(lambda x: '> mean' if x > balance_mean else '< mean') aux1 = aux[['balance','exited']].groupby('balance').count().reset_index() aux1 barplot('balance','exited',aux1)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H5. People with a salary higher than the average churn less

**False!!** People with a salary higher than the average churn more
mean_salary = df4['estimated_salary'].mean() aux = df4[['estimated_salary','exited']][df4['exited'] == 'yes'].copy() aux['estimated_salary'] = aux['estimated_salary'].apply(lambda x: '> mean' if x > mean_salary else '< mean') aux1 = aux[['estimated_salary','exited']].groupby('estimated_salary').count().reset_index() aux1 barplot('estimated_salary','exited',aux1)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H6. People who have a credit card and a credit score lower than 600 churn more

**False!!** People who have a credit card and a credit score lower than 600 churn less
aux = df4[['credit_score','has_cr_card','exited']][(df4['exited'] == 'yes') & (df4['has_cr_card'] == 'yes')].copy() aux['credit_score'] = aux['credit_score'].apply(lambda x: '> 600' if x > 600 else '< 600' ) aux1 = aux[['credit_score','exited']].groupby('credit_score').count().reset_index() aux1 barplot('credit_score','exited',aux1)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H7. People who have remained active for more than 2 years churn less

**False** People who have remained active for more than 2 years churn more
aux = df4[['tenure','exited']][(df4['exited'] == 'yes')].copy() aux['tenure'] = aux['tenure'].apply(lambda x: '> 2' if x > 3 else '< 2') aux1 = aux[['tenure', 'exited']].groupby('tenure').count().reset_index() aux1 barplot('tenure','exited',aux1)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H8. People who are not active churn more

**True**
aux = df4[['is_active_member','exited']][df4['exited'] == 'yes'].copy() sns.countplot(x='is_active_member', data=aux)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H9. People with more than 1 product churn less

**True**
aux = df4[['num_of_products','exited']][df4['exited']=='yes'].copy() aux['num_of_products'] = df4['num_of_products'].apply(lambda x: '> 1' if x > 1 else '< 1') aux1 = aux[['num_of_products','exited']].groupby('num_of_products').count().reset_index() aux1 barplot('num_of_products','exited',aux1)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
H10. People who have a credit card and are active churn less

**False** People who have a credit card and are active churn more
aux = df4[['is_active_member','exited','has_cr_card']][df4['exited'] == 'yes'] sns.countplot(x='is_active_member', hue='has_cr_card', data=aux)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.3 Multivariate Analysis
# changing back to numerical for use in numerical attributes analysis df4['has_cr_card'] = df4['has_cr_card'].map({'yes':1, 'no':0}) df4['is_active_member'] = df4['is_active_member'].map({'yes':1, 'no':0}) df4['exited'] = df4['exited'].map({'yes':1, 'no':0})
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.3.1 Numerical Attributes
num_atributes = df4.select_dtypes(include=['int64','float64']) correlation = num_atributes.corr(method='pearson') sns.heatmap(correlation, annot=True)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
4.3.2 Categorical Attributes
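For reference, the `cramer_v` helper defined in the helper-functions cell computes the bias-corrected Cramér's V from an $r \times k$ contingency table with total count $n$ and chi-squared statistic $\chi^2$:

$$\tilde{\chi}^2 = \max\!\left(0,\; \chi^2 - \frac{(k-1)(r-1)}{n-1}\right), \qquad \tilde{k} = k - \frac{(k-1)^2}{n-1}, \qquad \tilde{r} = r - \frac{(r-1)^2}{n-1}$$

$$V = \sqrt{\frac{\tilde{\chi}^2 / n}{\min(\tilde{k}-1,\; \tilde{r}-1)}}$$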
a = df4.select_dtypes(include='object') a.head() # calculate cramer v a1 = cramer_v(a['geography'], a['gender']) a2 = cramer_v(a['geography'], a['geography']) a3 = cramer_v(a['gender'], a['gender']) a4 = cramer_v(a['gender'], a['geography']) d = pd.DataFrame({'geography': [a1,a2], 'gender': [a3,a4]}) d.set_index(d.columns) sns.heatmap(d, annot=True)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
5.0 Data Preparation
df5 = df4.copy()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
5.1 Split dataframe into training, test and validation dataset
X = df5.drop('exited', axis=1).copy() y = df5['exited'].copy() # train dataset X_train, X_rem, y_train, y_rem = train_test_split(X,y,train_size=0.8, random_state=42, stratify=y) # validation, test dataset X_valid, X_test, y_valid, y_test = train_test_split(X_rem, y_rem, test_size=0.5, random_state=42, stratify=y_rem) X_test9 = X_test.copy() y_test9 = y_test.copy()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
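A quick sanity-check sketch that the stratified split above keeps the churn rate consistent across the three partitions (`exited` is already encoded as 0/1 at this point):

# The proportion of churned customers in each partition should be roughly equal
for name, ys in [('train', y_train), ('valid', y_valid), ('test', y_test)]:
    print(name, round(ys.mean(), 3))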
5.2 Rescaling
mms = MinMaxScaler() rs = RobustScaler() # credit score - min-max scaler X_train['credit_score'] = mms.fit_transform(X_train[['credit_score']].values) X_test['credit_score'] = mms.fit_transform(X_test[['credit_score']].values) X_valid['credit_score'] = mms.fit_transform(X_valid[['credit_score']].values) # age - robust scaler X_train['age'] = rs.fit_transform(X_train[['age']].values) X_test['age'] = rs.fit_transform(X_test[['age']].values) X_valid['age'] = rs.fit_transform(X_valid[['age']].values) # balance - min-max scaler X_train['balance'] = mms.fit_transform(X_train[['balance']].values) X_test['balance'] = mms.fit_transform(X_test[['balance']].values) X_valid['balance'] = mms.fit_transform(X_valid[['balance']].values) # estimated salary - min-max scaler X_train['estimated_salary'] = mms.fit_transform(X_train[['estimated_salary']].values) X_test['estimated_salary'] = mms.fit_transform(X_test[['estimated_salary']].values) X_valid['estimated_salary'] = mms.fit_transform(X_valid[['estimated_salary']].values) # tenure - min-max scaler X_train['tenure'] = mms.fit_transform(X_train[['tenure']].values) X_test['tenure'] = mms.fit_transform(X_test[['tenure']].values) X_valid['tenure'] = mms.fit_transform(X_valid[['tenure']].values) # num of products - min-max scaler X_train['num_of_products'] = mms.fit_transform(X_train[['num_of_products']].values) X_test['num_of_products'] = mms.fit_transform(X_test[['num_of_products']].values) X_valid['num_of_products'] = mms.fit_transform(X_valid[['num_of_products']].values)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
5.3 Encoding
le = LabelEncoder() # gender dic = {'Female':0, 'Male':1} X_train['gender'] = X_train['gender'].map(dic) X_test['gender'] = X_test['gender'].map(dic) X_valid['gender'] = X_valid['gender'].map(dic) # geography X_train['geography'] = le.fit_transform(X_train['geography']) X_test['geography'] = le.fit_transform(X_test['geography']) X_valid['geography'] = le.fit_transform(X_valid['geography'])
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
6.0 Feature Selection

6.1 Boruta as feature selector
#X_boruta = X_train.values #y_boruta = y_train.values.ravel() #rf = RandomForestClassifier(n_jobs=-1, class_weight='balanced') #boruta = BorutaPy(rf, n_estimators='auto', verbose=2, random_state=42) #boruta.fit(X_boruta, y_boruta) #cols_selected = boruta.support_.tolist() #cols_selected_boruta = X_train.iloc[:, cols_selected].columns.to_list() cols_selected_boruta = ['age', 'balance', 'num_of_products']
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
6.2 Feature Importance
rf = RandomForestClassifier() rf.fit(X_train, y_train) importance = rf.feature_importances_ for i,v in enumerate(importance): print('Feature: %0d, Score: %.5f' % (i,v)) # plot feature importance feature_importance = pd.DataFrame({'feature':X_train.columns, 'feature_importance':importance}).sort_values('feature_importance', ascending=False).reset_index() sns.barplot(x='feature_importance', y='feature', data=feature_importance, orient='h', color='royalblue').set_title('Feature Importance'); cols_selected_importance = feature_importance['feature'].head(6).copy() cols_selected_importance = cols_selected_importance.tolist()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
6.3 Columns Selected

- The columns selected to train the model will be those chosen by Boruta plus the 6 best ranked by Random Forest feature importance
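A small sketch of building that combined list programmatically from the two selections above (the name `cols_selected_sketch` is only for illustration; the next cell writes the final list out by hand):

# Union of the Boruta picks and the top-ranked features, deduplicated
cols_selected_sketch = sorted(set(cols_selected_boruta) | set(cols_selected_importance))
cols_selected_sketch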
cols_selected_importance cols_selected_boruta #cols_selected = ['age', 'balance', 'num_of_products', 'estimated_salary', 'credit_score','tenure'] cols_selected = ['age', 'balance', 'num_of_products', 'estimated_salary', 'credit_score','tenure','is_active_member','gender','has_cr_card','geography']
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.0 Machine Learning Modeling
X_train = X_train[cols_selected] X_test = X_test[cols_selected] X_valid = X_valid[cols_selected]
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.1 Baseline Model
dummy = DummyClassifier() dummy.fit(X_train, y_train) pred = dummy.predict(X_valid) print(m.classification_report(y_valid, pred)) dummy_result = ml_metrics('dummy', y_valid, pred) dummy_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
dummy_result_cv = ml_results_cv('dummy_CV', DummyClassifier(), X_train, y_train) dummy_result_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.2 Logistic Regression
lg = LogisticRegression(class_weight='balanced') lg.fit(X_train, y_train) pred = lg.predict(X_valid) print(m.classification_report(y_valid, pred)) logistic_regression_result = ml_metrics('LogisticRegression', y_valid, pred) logistic_regression_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
logistic_regression_result_cv = ml_results_cv('LogisticRegression_CV', LogisticRegression(class_weight='balanced'), X_train, y_train) logistic_regression_result_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.3 KNN
knn = KNeighborsClassifier() knn.fit(X_train, y_train) pred = knn.predict(X_valid) print(m.classification_report(y_valid, pred)) knn_result = ml_metrics('KNN', y_valid, pred) knn_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
knn_result_cv = ml_results_cv('KNN_CV', KNeighborsClassifier(), X_train, y_train) knn_result_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.4 Naive Bayes
nb = GaussianNB() nb.fit(X_train, y_train) pred = nb.predict(X_valid) print(m.classification_report(y_valid, pred)) naive_bayes_result = ml_metrics('Naive Bayes', y_valid, pred) naive_bayes_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
naive_bayes_result_cv = ml_results_cv('Naive Bayes_CV', GaussianNB(), X_train, y_train) naive_bayes_result_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.5 SVC
svc = SVC(class_weight='balanced') svc.fit(X_train, y_train) pred = svc.predict(X_valid) svc_result = ml_metrics('SVC', y_valid, pred) svc_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
svc_result_cv = ml_results_cv('SVC_cv', SVC(class_weight='balanced'), X_train, y_train) svc_result_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.6 Random Forest
rf = RandomForestClassifier(class_weight='balanced') rf.fit(X_train, y_train) pred = rf.predict(X_valid) pred_proba = rf.predict_proba(X_valid) print(m.classification_report(y_valid, pred)) rf_result = ml_metrics('Random Forest', y_valid, pred) rf_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
rf_result_cv = ml_results_cv('Random Forest_CV', RandomForestClassifier(class_weight='balanced'), X_train, y_train) rf_result_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.7 XGBoost
xgb = XGBClassifier(scale_pos_weight=80, objective='binary:logistic', verbosity=0) xgb.fit(X_train, y_train) pred = xgb.predict(X_valid) xgb_result = ml_metrics('XGBoost', y_valid, pred) xgb_result print(m.classification_report(y_valid, pred)) xgb_result = ml_metrics('XGBoost', y_valid, pred) xgb_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
xbg_result_cv = ml_results_cv('XGBoost_CV', XGBClassifier(scale_pos_weight=80, objective='binary:logistic', verbosity=0), X_train, y_train) xbg_result_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.8 Results
df_results = pd.concat([dummy_result, logistic_regression_result, knn_result, naive_bayes_result, svc_result, rf_result, xgb_result]) df_results.style.highlight_max(color='lightgreen', axis=0)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
7.9 Results Cross Validation
df_results_cv = pd.concat([dummy_result_cv, logistic_regression_result_cv, knn_result_cv, naive_bayes_result_cv, svc_result_cv, rf_result_cv, xbg_result_cv]) df_results_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
8.0 Hyperparameter Fine Tuning

8.1 Random Search
# setting some parameters for testing # Number of trees in random forest n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)] # Maximum number of levels in tree max_depth = [int(x) for x in np.linspace(10, 110, num = 11)] max_depth.append(None) # eta eta = [0.01,0.03] # subsample subsample = [0.1,0.5,0.7] # cols sample colsample_bytree = [0.3,0.7,0.9] # min_child_weight min_child_weight = [3,8,15] random_grid = {'n_estimators': n_estimators, 'max_depth': max_depth, 'eta': eta, 'subsample': subsample, 'colsample_bytree': colsample_bytree, 'min_child_weight': min_child_weight} xgb_grid = XGBClassifier() xgb_random = RandomizedSearchCV(estimator = xgb_grid, param_distributions = random_grid, n_iter = 100, cv = 3, verbose=2, random_state=42, n_jobs = -1) #xgb_random.fit(X_train, y_train) #xgb_random.best_params_ best_params = {'subsample': 0.7, 'n_estimators': 1000, 'min_child_weight': 3, 'max_depth': 30, 'eta': 0.03, 'colsample_bytree': 0.7}
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
8.2 Results
xgb = XGBClassifier(objective='binary:logistic', n_estimators = 1000, eta=0.03, subsample = 0.7, min_child_weight = 3, max_depth = 30, colsample_bytree = 0.7, scale_pos_weight=80, verbosity=0) xgb.fit(X_train, y_train) pred = xgb.predict(X_valid) xgb_result = ml_metrics('XGBoost', y_valid, pred) xgb_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
Cross Validation
xgboost_cv = ml_results_cv('XGBoost_CV', XGBClassifier(objective='binary:logistic', n_estimators = 1000, eta=0.03, subsample = 0.7, min_child_weight = 3, max_depth = 30, colsample_bytree = 0.7 , scale_pos_weight=80, verbosity=0), X_train, y_train) xgboost_cv
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction