markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Data Cleaning and Feature Engineering | #We are only interested in the most recent year for which data is available, 2019
WEO=WEO.drop(['2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018'], axis = 1)
#Reshape the data so each country is one observation
WEO=WEO.pivot_table(index=["Country"], columns='Indicator', values='2019').reset_index()
WEO.columns = ['Country', 'Current_account', 'Employment', 'Net_borrowing', 'Government_revenue', 'Government_expenditure', 'GDP_percap_constant', 'GDP_percap_current', 'GDP_constant', 'Inflation', 'Investment', 'Unemployment', 'Volume_exports', 'Volume_imports']
WEO.head()
#Describe the dataset
WEO.dropna(inplace=True)
WEO.describe() | _____no_output_____ | CC0-1.0 | Country_Economic_Conditions_for_Cargo_Carriers.ipynb | jamiemfraser/machine_learning |
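As an aside (not part of the original notebook), the long hard-coded list of year columns above could be built programmatically. The sketch below assumes the year columns are stored as strings, as in the raw WEO file, and is an equivalent alternative to the drop call above rather than an extra step to run.

```python
# Equivalent alternative to the hard-coded drop above (assumes string year columns)
year_cols_to_drop = [str(year) for year in range(2000, 2019)]  # keeps '2019'
WEO = WEO.drop(year_cols_to_drop, axis=1)
```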
Key Findings and Insights | #Large differences between the mean and median values could be an indication of outliers that are skewing the data
WEO.agg([np.mean, np.median])
#Create a scatterplot
import matplotlib.pyplot as plt
%matplotlib inline
ax = plt.axes()
ax.scatter(WEO.Volume_exports, WEO.Volume_imports)
# Label the axes
ax.set(xlabel='Volume Exports',
ylabel='Volume Imports',
title='Volume of Exports vs Imports');
#Create a scatterplot
import matplotlib.pyplot as plt
%matplotlib inline
ax = plt.axes()
ax.scatter(WEO.GDP_percap_constant, WEO.Volume_imports)
# Label the axes
ax.set(xlabel='GDP per capita',
ylabel='Volume Imports',
title='GDP per capita vs Volume of Imports');
#Create a scatterplot
import matplotlib.pyplot as plt
%matplotlib inline
ax = plt.axes()
ax.scatter(WEO.Investment, WEO.Volume_imports)
# Label the axes
ax.set(xlabel='Investment',
ylabel='Volume Imports',
title='Investment vs Volume of Imports'); | _____no_output_____ | CC0-1.0 | Country_Economic_Conditions_for_Cargo_Carriers.ipynb | jamiemfraser/machine_learning |
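The three scatterplot cells above repeat the same pattern. A small helper like the hypothetical sketch below (not in the original notebook) would avoid the duplication; the column and label names are taken from the cells above, everything else is an assumption.

```python
import matplotlib.pyplot as plt

def scatter_against_imports(df, x_col, x_label):
    """Scatter an indicator column against the volume of imports (illustrative helper)."""
    ax = plt.axes()
    ax.scatter(df[x_col], df['Volume_imports'])
    ax.set(xlabel=x_label,
           ylabel='Volume Imports',
           title=x_label + ' vs Volume of Imports')
    return ax

# Example usage, reproducing the plots above:
# scatter_against_imports(WEO, 'GDP_percap_constant', 'GDP per capita')
# scatter_against_imports(WEO, 'Investment', 'Investment')
```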
Hypotheses

Hypothesis 1: GDP per capita and the level of investment will be significant in determining the volume of goods and services imports.
Hypothesis 2: There will be a strong correlation between government revenues and government expenditures.
Hypothesis 3: GDP per capita and inflation will be significant in determining the unemployment rate.

Significance Test

I will conduct a formal hypothesis test on Hypothesis 1, which states that GDP per capita and the level of investment will be significant in determining the volume of goods and services imports. I will use a linear regression model because the scatterplots shown above indicate there is likely a linear relationship between both GDP per capita and investment and the volume of imports. I will take a p-value of 0.05 or less to be an indication of significance.

The null hypothesis is that there is no significant relationship between GDP per capita or the level of investment and the volume of goods and services imports.
The alternative hypothesis is that there is a significant relationship between either GDP per capita or the level of investment and the volume of goods and services imports. | #Set up a linear regression model for GDP per capita and evaluate
import statsmodels.api as sm  # sm is used below; it may already be imported earlier in the notebook
WEO=WEO.reset_index()
X = WEO['GDP_percap_constant']
X=X.values.reshape(-1,1)
y = WEO['Volume_imports']
X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary())
#Set up a linear regression model for Investment and evaluate
WEO=WEO.reset_index()
X = WEO['Investment']
X=X.values.reshape(-1,1)
y = WEO['Volume_imports']
X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary()) | OLS Regression Results
==============================================================================
Dep. Variable: Volume_imports R-squared: 0.325
Model: OLS Adj. R-squared: 0.305
Method: Least Squares F-statistic: 16.38
Date: Wed, 11 Aug 2021 Prob (F-statistic): 0.000282
Time: 06:38:22 Log-Likelihood: -107.91
No. Observations: 36 AIC: 219.8
Df Residuals: 34 BIC: 223.0
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -12.6186 3.839 -3.287 0.002 -20.421 -4.816
x1 0.6569 0.162 4.048 0.000 0.327 0.987
==============================================================================
Omnibus: 8.946 Durbin-Watson: 2.079
Prob(Omnibus): 0.011 Jarque-Bera (JB): 8.455
Skew: 0.822 Prob(JB): 0.0146
Kurtosis: 4.713 Cond. No. 109.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
| CC0-1.0 | Country_Economic_Conditions_for_Cargo_Carriers.ipynb | jamiemfraser/machine_learning |
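Hypothesis 1 involves both GDP per capita and investment, while the two models above test each predictor separately. A joint model could be fit as in the sketch below; this is an illustration only (not part of the original analysis), assuming the `WEO` DataFrame from above and statsmodels imported as `sm`.

```python
import statsmodels.api as sm

# Joint model with both predictors (illustrative, not part of the original analysis)
X = sm.add_constant(WEO[['GDP_percap_constant', 'Investment']])
y = WEO['Volume_imports']
joint_fit = sm.OLS(y, X).fit()
print(joint_fit.summary())
```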
Is there any connection between crime and food inspection failures? Maybe! For now, I am focusing on burglaries only. The burglary data is Chicago's crime data filtered for burglaries only (in the same time window, i.e. the first 3 months of 2019). | burglary = pd.read_json('../data/raw/burglary.json', convert_dates=['date'])
burglary.head()
shape = burglary.shape
print(" There are %d rows and %d columns in the data" % (shape[0], shape[1]))
print(burglary.info()) | There are 29133 rows and 26 columns in the data
<class 'pandas.core.frame.DataFrame'>
Int64Index: 29133 entries, 0 to 9999
Data columns (total 26 columns):
arrest 29133 non-null bool
beat 29133 non-null int64
block 29133 non-null object
case_number 29133 non-null object
community_area 29133 non-null int64
date 29133 non-null datetime64[ns]
description 29133 non-null object
district 29133 non-null int64
domestic 29133 non-null bool
fbi_code 29133 non-null int64
id 29133 non-null int64
iucr 29133 non-null int64
latitude 28998 non-null float64
location 28998 non-null object
location_address 28998 non-null object
location_city 28998 non-null object
location_description 29132 non-null object
location_state 28998 non-null object
location_zip 28998 non-null object
longitude 28998 non-null float64
primary_type 29133 non-null object
updated_on 29133 non-null object
ward 29133 non-null int64
x_coordinate 28998 non-null float64
y_coordinate 28998 non-null float64
year 29133 non-null int64
dtypes: bool(2), datetime64[ns](1), float64(4), int64(8), object(11)
memory usage: 5.6+ MB
None
| MIT | notebooks/burglary_01.ipynb | drimal/chicagofood |
Let's check if there are any null values in the data. | burglary.isna().sum()
burglary['latitude'].fillna(burglary['latitude'].mode()[0], inplace=True)
burglary['longitude'].fillna(burglary['longitude'].mode()[0], inplace=True)
ax = sns.countplot(x="ward", data=burglary)
plt.title("Burglaries by Ward")
plt.show()
plt.rcParams['figure.figsize'] = 16, 5
ax = sns.countplot(x="community_area", data=burglary)
plt.title("Burglaries by Ward")
plt.show() | _____no_output_____ | MIT | notebooks/burglary_01.ipynb | drimal/chicagofood |
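A caveat on the coordinate imputation above: filling missing latitude/longitude with the column mode stacks every imputed incident at a single point, which can distort the heatmap below. An alternative sketch (my aside, not in the original notebook) is to drop the unlocated incidents before mapping, applied instead of the mode fill rather than in addition to it.

```python
# Alternative to the mode fill above: keep only incidents with real coordinates
burglary_located = burglary.dropna(subset=['latitude', 'longitude'])
print("Dropped %d incidents without coordinates"
      % (len(burglary) - len(burglary_located)))
```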
Burglaries HeatMap | import gmaps
APIKEY= os.getenv('GMAPAPIKEY')
gmaps.configure(api_key=APIKEY)
def make_heatmap(locations, weights=None):
fig = gmaps.figure()
heatmap_layer = gmaps.heatmap_layer(locations)
#heatmap_layer.max_intensity = 100
heatmap_layer.point_radius = 8
fig.add_layer(heatmap_layer)
return fig
locations = zip(burglary['latitude'], burglary['longitude'])
fig = make_heatmap(locations)
fig
burglary_per_day = pd.DataFrame()
burglary_per_day = burglary[['date', 'case_number']]
burglary_per_day = burglary_per_day.set_index(
pd.to_datetime(burglary_per_day['date']))
burglary_per_day = burglary_per_day.resample('D').count()
plt.rcParams['figure.figsize'] = 12, 5
fig, ax = plt.subplots()
fig.autofmt_xdate()
#
#ax.xaxis.set_major_locator(mdates.MonthLocator())
#ax.xaxis.set_minor_locator(mdates.DayLocator())
monthFmt = mdates.DateFormatter('%Y-%b')
ax.xaxis.set_major_formatter(monthFmt)
plt.plot(burglary_per_day.index, burglary_per_day, 'r-')
plt.xlabel('Date')
plt.ylabel('Number of Cases Reported')
plt.title('Burglaries Reported')
plt.show()
burglary['event_date'] = burglary['date']
burglary = burglary.set_index('event_date')
burglary.sort_values(by='date', inplace=True)
burglary.head()
burglary.to_csv('../data/processed/burglary_data_processed.csv') | _____no_output_____ | MIT | notebooks/burglary_01.ipynb | drimal/chicagofood |
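Beyond the daily time series above, the same date column supports other aggregations. The sketch below is an illustration (not in the original notebook) that counts incidents by day of week, assuming the `burglary` DataFrame and the plotting setup from the cells above.

```python
by_weekday = (burglary['date']
              .dt.day_name()
              .value_counts()
              .reindex(['Monday', 'Tuesday', 'Wednesday', 'Thursday',
                        'Friday', 'Saturday', 'Sunday']))
by_weekday.plot(kind='bar', title='Burglaries by Day of Week')
plt.ylabel('Number of Cases Reported')
plt.show()
```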
Set-up notebook environment NOTE: Use a QIIME2 kernel | import numpy as np
import pandas as pd
import seaborn as sns
import scipy
from scipy import stats
import matplotlib.pyplot as plt
import re
from pandas import *
%matplotlib inline
from qiime2.plugins import feature_table
from qiime2 import Artifact
from qiime2 import Metadata
import biom
from biom.table import Table
from qiime2.plugins import diversity
from scipy.stats import ttest_ind
from scipy.stats.stats import pearsonr
%config InlineBackend.figure_formats = ['svg']
from qiime2.plugins.feature_table.methods import relative_frequency
import qiime2 as q2
import os
import math
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Import sample metadata | meta = q2.Metadata.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/12201_metadata.txt').to_dataframe()
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Separate round 1 and round 2 and exclude round 1 Zymo, Homebrew, and MagMAX Beta | meta_r1 = meta[meta['round'] == 1]
meta_clean_r1_1 = meta_r1[meta_r1['extraction_kit'] != 'Zymo MagBead']
meta_clean_r1_2 = meta_clean_r1_1[meta_clean_r1_1['extraction_kit'] != 'Homebrew']
meta_clean_r1 = meta_clean_r1_2[meta_clean_r1_2['extraction_kit'] != 'MagMax Beta']
meta_clean_r2 = meta[meta['round'] == 2]
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Remove PowerSoil samples from each round - these samples will be used as the baseline | meta_clean_r1_noPS = meta_clean_r1[meta_clean_r1['extraction_kit'] != 'PowerSoil']
meta_clean_r2_noPS = meta_clean_r2[meta_clean_r2['extraction_kit'] != 'PowerSoil']
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Create tables including only round 1 or round 2 PowerSoil samples | meta_clean_r1_onlyPS = meta_clean_r1[meta_clean_r1['extraction_kit'] == 'PowerSoil']
meta_clean_r2_onlyPS = meta_clean_r2[meta_clean_r2['extraction_kit'] == 'PowerSoil']
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Merge PowerSoil samples from round 2 with other samples from round 1, and vice versa - this will allow us to get the correlations between the two rounds of PowerSoil | meta_clean_r1_with_r2_PS = pd.concat([meta_clean_r1_noPS, meta_clean_r2_onlyPS])
meta_clean_r2_with_r1_PS = pd.concat([meta_clean_r2_noPS, meta_clean_r1_onlyPS])
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
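A quick sanity check (optional, and an assumption of mine rather than part of the original analysis) is to confirm which extraction-kit/round combinations ended up in each merged metadata table:

```python
# Each table should contain the round's kits plus the other round's PowerSoil baseline
print(meta_clean_r1_with_r2_PS['extraction_kit_round'].value_counts())
print(meta_clean_r2_with_r1_PS['extraction_kit_round'].value_counts())
```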
Collapse feature-table to the desired level (e.g., genus)

16S | qiime taxa collapse \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock.qza \
--i-taxonomy /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/06_taxonomy/dna_all_16S_deblur_seqs_taxonomy_silva138.qza \
--p-level 6 \
--o-collapsed-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qza
qiime feature-table summarize \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qza \
--o-visualization /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qzv
# There are 846 samples and 1660 features
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
ITS | qiime taxa collapse \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock.qza \
--i-taxonomy /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/06_taxonomy/dna_all_ITS_deblur_seqs_taxonomy_unite8.qza \
--p-level 6 \
--o-collapsed-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qza
qiime feature-table summarize \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qza \
--o-visualization /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qzv
# There are 978 samples and 791 features
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Shotgun | qiime taxa collapse \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock.qza \
--i-taxonomy /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/wol_taxonomy.qza \
--p-level 6 \
--o-collapsed-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qza
qiime feature-table summarize \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qza \
--o-visualization /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qzv
# There are 1044 samples and 2060 features
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Import feature-tables | dna_bothPS_16S_genus_qza = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qza')
dna_bothPS_ITS_genus_qza = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qza')
dna_bothPS_shotgun_genus_qza = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qza')
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Convert QZA to a Pandas DataFrame | dna_bothPS_16S_genus_df = dna_bothPS_16S_genus_qza.view(pd.DataFrame)
dna_bothPS_ITS_genus_df = dna_bothPS_ITS_genus_qza.view(pd.DataFrame)
dna_bothPS_shotgun_genus_df = dna_bothPS_shotgun_genus_qza.view(pd.DataFrame)
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Melt dataframes | dna_bothPS_16S_genus_df_melt = dna_bothPS_16S_genus_df.unstack()
dna_bothPS_ITS_genus_df_melt = dna_bothPS_ITS_genus_df.unstack()
dna_bothPS_shotgun_genus_df_melt = dna_bothPS_shotgun_genus_df.unstack()
dna_bothPS_16S_genus = pd.DataFrame(dna_bothPS_16S_genus_df_melt)
dna_bothPS_ITS_genus = pd.DataFrame(dna_bothPS_ITS_genus_df_melt)
dna_bothPS_shotgun_genus = pd.DataFrame(dna_bothPS_shotgun_genus_df_melt)
dna_bothPS_16S_genus.reset_index(inplace=True)
dna_bothPS_16S_genus.rename(columns={'level_0':'taxa','level_1':'sample',0:'counts'}, inplace=True)
dna_bothPS_ITS_genus.reset_index(inplace=True)
dna_bothPS_ITS_genus.rename(columns={'level_0':'taxa','level_1':'sample',0:'counts'}, inplace=True)
dna_bothPS_shotgun_genus.reset_index(inplace=True)
dna_bothPS_shotgun_genus.rename(columns={'level_0':'taxa','level_1':'sample',0:'counts'}, inplace=True)
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Wrangle data into long form for each kit Wrangle metadata | # Create empty list of extraction kit IDs
ext_kit_levels = []
# Create empty list of metadata subsets based on levels of variable of interest
ext_kit = []
# Create empty list of baseline samples for each subset
bl = []
# Populate lists with round 1 data
for ext_kit_level, ext_kit_level_df in meta_clean_r1_with_r2_PS.groupby('extraction_kit_round'):
ext_kit.append(ext_kit_level_df)
powersoil_r1_bl = meta_clean_r1_onlyPS[meta_clean_r1_onlyPS.extraction_kit_round == 'PowerSoil r1']
bl.append(powersoil_r1_bl)
ext_kit_levels.append(ext_kit_level)
print('Gathered data for',ext_kit_level)
# Populate lists with round 2 data
for ext_kit_level, ext_kit_level_df in meta_clean_r2_with_r1_PS.groupby('extraction_kit_round'):
ext_kit.append(ext_kit_level_df)
powersoil_r2_bl = meta_clean_r2_onlyPS[meta_clean_r2_onlyPS['extraction_kit_round'] == 'PowerSoil r2']
bl.append(powersoil_r2_bl)
ext_kit_levels.append(ext_kit_level)
print('Gathered data for',ext_kit_level)
# Create empty list for concatenated subset-baseline datasets
subsets_w_bl = {}
# Populate list with subset-baseline data
for ext_kit_level, ext_kit_df, ext_kit_bl in zip(ext_kit_levels, ext_kit, bl):
new_df = pd.concat([ext_kit_bl,ext_kit_df])
subsets_w_bl[ext_kit_level] = new_df
print('Merged data for',ext_kit_level)
| Gathered data for Norgen
Gathered data for PowerSoil Pro
Gathered data for PowerSoil r2
Gathered data for MagMAX Microbiome
Gathered data for NucleoMag Food
Gathered data for PowerSoil r1
Gathered data for Zymo MagBead
Merged data for Norgen
Merged data for PowerSoil Pro
Merged data for PowerSoil r2
Merged data for MagMAX Microbiome
Merged data for NucleoMag Food
Merged data for PowerSoil r1
Merged data for Zymo MagBead
| MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
16S | list_of_lists = []
for key, value in subsets_w_bl.items():
string = ''.join(key)
#merge metadata subsets with baseline with taxonomy
meta_16S_genera = pd.merge(value, dna_bothPS_16S_genus, left_index=True, right_on='sample')
#create new column
meta_16S_genera['taxa_subject'] = meta_16S_genera['taxa'] + meta_16S_genera['host_subject_id']
#subtract out duplicates and pivot
meta_16S_genera_clean = meta_16S_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_16S_genera_pivot = meta_16S_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_16S_genera_pivot_clean = meta_16S_genera_pivot.dropna()
# Export dataframe to file
meta_16S_genera_pivot_clean.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlation_16S_genera_%s.txt'%string,
sep = '\t',
index = False)
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
ITS | list_of_lists = []
for key, value in subsets_w_bl.items():
string = ''.join(key)
#merge metadata subsets with baseline with taxonomy
meta_ITS_genera = pd.merge(value, dna_bothPS_ITS_genus, left_index=True, right_on='sample')
#create new column
meta_ITS_genera['taxa_subject'] = meta_ITS_genera['taxa'] + meta_ITS_genera['host_subject_id']
#subtract out duplicates and pivot
meta_ITS_genera_clean = meta_ITS_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_ITS_genera_pivot = meta_ITS_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_ITS_genera_pivot_clean = meta_ITS_genera_pivot.dropna()
# Export dataframe to file
meta_ITS_genera_pivot_clean.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlation_ITS_genera_%s.txt'%string,
sep = '\t',
index = False)
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Shotgun | list_of_lists = []
for key, value in subsets_w_bl.items():
string = ''.join(key)
#merge metadata subsets with baseline with taxonomy
meta_shotgun_genera = pd.merge(value, dna_bothPS_shotgun_genus, left_index=True, right_on='sample')
#create new column
meta_shotgun_genera['taxa_subject'] = meta_shotgun_genera['taxa'] + meta_shotgun_genera['host_subject_id']
#subtract out duplicates and pivot
meta_shotgun_genera_clean = meta_shotgun_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_shotgun_genera_pivot = meta_shotgun_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_shotgun_genera_pivot_clean = meta_shotgun_genera_pivot.dropna()
# Export dataframe to file
meta_shotgun_genera_pivot_clean.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlation_shotgun_genera_%s.txt'%string,
sep = '\t',
index = False)
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
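The 16S, ITS, and shotgun cells above differ only in the input genus table and the marker name in the output file. Below is a hypothetical refactor into one helper; the output directory is the one used above, the `marker_label` argument is my own addition, and the logic otherwise follows the original cells.

```python
OUT_DIR = ('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/'
           'round_02/results/feature_abundance_correlation_images')

def export_correlation_tables(genus_df, marker_label):
    """Merge each metadata subset with a long-form genus table and export it (sketch)."""
    for key, value in subsets_w_bl.items():
        string = ''.join(key)
        # merge metadata subsets with baseline with taxonomy
        merged = pd.merge(value, genus_df, left_index=True, right_on='sample')
        merged['taxa_subject'] = merged['taxa'] + merged['host_subject_id']
        # subtract out duplicates and pivot
        clean = merged.drop_duplicates(subset=['taxa_subject', 'extraction_kit_round'],
                                       keep='first')
        pivot = (clean.pivot(index='taxa_subject', columns='extraction_kit_round',
                             values='counts')
                      .dropna())
        pivot.to_csv('%s/table_correlation_%s_genera_%s.txt' % (OUT_DIR, marker_label, string),
                     sep='\t', index=False)

# Example usage, equivalent to the three cells above:
# export_correlation_tables(dna_bothPS_16S_genus, '16S')
# export_correlation_tables(dna_bothPS_ITS_genus, 'ITS')
# export_correlation_tables(dna_bothPS_shotgun_genus, 'shotgun')
```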
The code below is not used. NOTE: The first cell was originally appended to the cell above. | # check pearson correlation
x = meta_16S_genera_pivot_clean.iloc[:,1]
y = meta_16S_genera_pivot_clean[key]
corr = stats.pearsonr(x, y)
int1, int2 = corr
corr_rounded = round(int1, 2)
corr_str = str(corr_rounded)
x_key = key[0]
y_key = key[1]
list1 = []
list1.append(corr_rounded)
list1.append(key)
list_of_lists.append(list1)
list_of_lists
df = pd.DataFrame(list_of_lists, columns = ['Correlation', 'Extraction kit'])
df.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlations_16S_genera.txt',
sep = '\t',
index = False)
splot = sns.catplot(y="Correlation",
x="Extraction kit",
hue= "Extraction kit",
kind='bar',
data=df,
dodge = False)
splot.set(ylim=(0, 1))
plt.xticks(rotation=45,
horizontalalignment='right')
#new_labels = ['-20C','-20C after 1 week', '4C','Ambient','Freeze-thaw','Heat']
#for t, l in zip(splot._legend.texts, new_labels):
# t.set_text(l)
splot.savefig('correlation_16S_genera.png')
splot.savefig('correlation_16S_genera.svg', format='svg', dpi=1200)
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Individual correlation plots | for key, value in subsets_w_bl.items():
string = ''.join(key)
#merge metadata subsets with baseline with taxonomy
meta_16S_genera = pd.merge(value, dna_bothPS_16S_genus, left_index=True, right_on='sample')
#create new column
meta_16S_genera['taxa_subject'] = meta_16S_genera['taxa'] + meta_16S_genera['host_subject_id']
#subtract out duplicates and pivot
meta_16S_genera_clean = meta_16S_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_16S_genera_pivot = meta_16S_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_16S_genera_pivot_clean = meta_16S_genera_pivot.dropna()
# check pearson correlation
x = meta_16S_genera_pivot_clean.iloc[:,1]
y = meta_16S_genera_pivot_clean[key]
corr = stats.pearsonr(x, y)
int1, int2 = corr
corr_rounded = round(int1, 2)
corr_str = str(corr_rounded)
#make correlation plots
meta_16S_genera_pivot_clean['x1'] = meta_16S_genera_pivot_clean.iloc[:,1]
meta_16S_genera_pivot_clean['y1'] = meta_16S_genera_pivot_clean.iloc[:,0]
ax=sns.lmplot(x='x1',
y='y1',
data=meta_16S_genera_pivot_clean,
height=3.8)
ax.set(yscale='log')
ax.set(xscale='log')
ax.set(xlabel='PowerSoil', ylabel=key)
#plt.xlim(0.00001, 10000000)
#plt.ylim(0.00001, 10000000)
plt.title(string + ' (%s)' %corr_str)
ax.savefig('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/figure_scatter_correlation_16S_genera_%s.png'%string)
ax.savefig('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/figure_scatter_correlation_16S_genera_%s.svg'%string, format='svg',dpi=1200)
| _____no_output_____ | MIT | code/Taxon profile analysis.ipynb | justinshaffer/Extraction_kit_benchmarking |
Health, Wealth of Nations from 1800-2008 | import os
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
from bqplot import Figure, Tooltip, Label
from bqplot import Axis, ColorAxis
from bqplot import LogScale, LinearScale, OrdinalColorScale
from bqplot import Scatter, Lines
from bqplot import CATEGORY10
from ipywidgets import HBox, VBox, IntSlider, Play, jslink
from more_itertools import flatten | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Get Data | year_start = 1800
df = pd.read_json("data_files/nations.json")
df.head()
list_rows_to_drop = \
(df['income']
.apply(len)
.where(lambda i: i < 10)
.dropna()
.index
.tolist()
)
df.drop(list_rows_to_drop, inplace=True)
dict_dfs = {}
for COL in ['income', 'lifeExpectancy', 'population']:
df1 = \
DataFrame(df
.loc[:, COL]
.map(lambda l: (DataFrame(l)
.set_index(0)
.squeeze()
.reindex(range(1800, 2009))
.interpolate()
.to_dict()))
.tolist())
df1.index = df.name
dict_dfs[COL] = df1
def get_data(year):
"""
"""
income = dict_dfs['income'].loc[:, year]
lifeExpectancy = dict_dfs['lifeExpectancy'].loc[:, year]
population = dict_dfs['population'].loc[:, year]
return income, lifeExpectancy, population
get_min_max_from_df = lambda df: (df.min().min(), df.max().max()) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Create Tooltip | tt = Tooltip(fields=['name', 'x', 'y'],
labels=['Country', 'IncomePerCapita', 'LifeExpectancy']) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Create Scales | # Income
income_min, income_max = get_min_max_from_df(dict_dfs['income'])
x_sc = LogScale(min=income_min,
max=income_max)
# Life Expectancy
life_exp_min, life_exp_max = get_min_max_from_df(dict_dfs['lifeExpectancy'])
y_sc = LinearScale(min=life_exp_min,
max=life_exp_max)
# Population
pop_min, pop_max = get_min_max_from_df(dict_dfs['population'])
size_sc = LinearScale(min=pop_min,
max=pop_max)
# Color
c_sc = OrdinalColorScale(domain=df['region'].unique().tolist(),
colors=CATEGORY10[:6]) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Create Axes | ax_y = Axis(label='Life Expectancy',
scale=y_sc,
orientation='vertical',
side='left',
grid_lines='solid')
ax_x = Axis(label='Income per Capita',
scale=x_sc,
grid_lines='solid') | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Create Marks

1. Scatter | cap_income, life_exp, pop = get_data(year_start)
scatter_ = Scatter(x=cap_income,
y=life_exp,
color=df['region'],
size=pop,
names=df['name'],
display_names=False,
scales={
'x': x_sc,
'y': y_sc,
'color': c_sc,
'size': size_sc
},
default_size=4112,
tooltip=tt,
animate=True,
stroke='Black',
unhovered_style={'opacity': 0.5}) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
2. Line | line_ = Lines(x=dict_dfs['income'].loc['Angola'].values,
y=dict_dfs['lifeExpectancy'].loc['Angola'].values,
colors=['Gray'],
scales={
'x': x_sc,
'y': y_sc
},
visible=False) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Create Label | year_label = Label(x=[0.75],
y=[0.10],
font_size=50,
font_weight='bolder',
colors=['orange'],
text=[str(year_start)],
enable_move=True) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Construct the Figure | time_interval = 10
fig_ = \
Figure(
marks=[scatter_, line_, year_label],
axes=[ax_x, ax_y],
title='Health and Wealth of Nations',
animation_duration=time_interval
)
fig_.layout.min_width = '960px'
fig_.layout.min_height = '640px'
fig_ | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Add Interactivity
- Update chart when year changes | slider_ = IntSlider(
min=year_start,
max=2008,
step=1,
description='Year: ',
value=year_start)
def on_change_year(change):
"""
"""
scatter_.x, scatter_.y, scatter_.size = get_data(slider_.value)
year_label.text = [str(slider_.value)]
slider_.observe(on_change_year, 'value')
slider_ | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
- Display line when hovered | def on_hover(change):
"""
"""
if change.new is not None:
display(change.new)
line_.x = dict_dfs['income'].iloc[change.new + 1]
line_.y = dict_dfs['lifeExpectancy'].iloc[change.new + 1]
line_.visible = True
else:
line_.visible = False
scatter_.observe(on_hover, 'hovered_point') | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
--- Add Animation! | play_button = Play(
min=1800,
max=2008,
interval=time_interval
)
jslink(
(play_button, 'value'),
(slider_, 'value')
) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
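A short note on the design choice above (my aside, not from the original notebook): `jslink` keeps the play button and slider synchronized entirely in the browser, so the animation stays smooth even if the kernel is busy. If Python code needed to observe every tick, `ipywidgets.link` could be used instead, at the cost of a kernel round-trip per update.

```python
# Illustrative kernel-side alternative to jslink (not used above)
from ipywidgets import link
# kernel_link = link((play_button, 'value'), (slider_, 'value'))
```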
--- Create the GUI | VBox([play_button, slider_, fig_]) | _____no_output_____ | MIT | 99-Miscel/02-bqplot-B.ipynb | dushyantkhosla/dataviz |
I want to analyze changes over time in the MOT GTFS feed. Agenda:
1. [Get data](#Get-the-data)
2. [Tidy](#Tidy-it-up) | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import partridge as ptg
from ftplib import FTP
import datetime
import re
import zipfile
import os
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 5) # set default size of plots
sns.set_style("white")
sns.set_context("talk")
sns.set_palette('Set2', 10) | _____no_output_____ | MIT | openbus_10_stuff.ipynb | cjer/open-bus-explore |
Get the data

There are two options - TransitFeeds and the workshop's S3 bucket. | #!aws s3 cp s3://s3.obus.hasadna.org.il/2018-04-25.zip data/gtfs_feeds/2018-04-25.zip | _____no_output_____ | MIT | openbus_10_stuff.ipynb | cjer/open-bus-explore |
Tidy it up

Again, I'm using [partridge](https://github.com/remix/partridge/tree/master/partridge) for filtering on dates, and then some tidying up and transformations. | from gtfs_utils import *
local_tariff_path = 'data/sample/180515_tariff.zip'
conn = ftp_connect()
get_ftp_file(conn, file_name = TARIFF_FILE_NAME, local_zip_path = local_tariff_path )
def to_timedelta(df):
'''
Turn time columns into timedelta dtype
'''
cols = ['arrival_time', 'departure_time']
numeric = df[cols].apply(pd.to_timedelta, unit='s')
df = df.copy()
df[cols] = numeric
return df
%time f2 = new_get_tidy_feed_df(feed, [zones])
f2.head()
f2.columns
def get_tidy_feed_df(feed, zones):
s = feed.stops
r = feed.routes
a = feed.agency
t = (feed.trips
# faster joins and slices with Categorical dtypes
.assign(route_id=lambda x: pd.Categorical(x['route_id'])))
f = (feed.stop_times[fields['stop_times']]
.merge(s[fields['stops']], on='stop_id')
.merge(zones, how='left')
.assign(zone_name=lambda x: pd.Categorical(x['zone_name']))
.merge(t[fields['trips']], on='trip_id', how='left')
.assign(route_id=lambda x: pd.Categorical(x['route_id']))
.merge(r[fields['routes']], on='route_id', how='left')
.assign(agency_id=lambda x: pd.Categorical(x['agency_id']))
.merge(a[fields['agency']], on='agency_id', how='left')
.assign(agency_name=lambda x: pd.Categorical(x['agency_name']))
.pipe(to_timedelta)
)
return f
LOCAL_ZIP_PATH = 'data/gtfs_feeds/2018-02-01.zip'
feed = get_partridge_feed_by_date(LOCAL_ZIP_PATH, datetime.date(2018,2 , 1))
zones = get_zones()
'route_ids' in feed.routes.columns
feed.routes.shape
f = get_tidy_feed_df(feed, zones)
f.columns
f[f.route_short_name.isin(['20', '26', '136'])].groupby('stop_name').route_short_name.nunique().sort_values(ascending=False) | _____no_output_____ | MIT | openbus_10_stuff.ipynb | cjer/open-bus-explore |
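As one example of what the tidy long format enables (a sketch of mine, not in the original notebook): counting stop-time events per hour of the service day, assuming the `f` DataFrame built above with `departure_time` already converted to a timedelta. Note that GTFS times after midnight can yield hours of 24 or more.

```python
per_hour = (f.dropna(subset=['departure_time'])
             .assign(hour=lambda x: x['departure_time'].dt.total_seconds() // 3600)
             .groupby('hour')
             .size())
per_hour.plot(kind='bar', title='Stop-time events per hour of service day');
```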
Write in the input space, click `Shift-Enter` or click on the `Play` button to execute. | (3 + 1 + 12) ** 2 + 2 * 18 | _____no_output_____ | BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Give a title to the notebook by clicking on `Untitled` at the very top of the page; better not to use spaces because it will also be used for the filename.

Save the notebook with the `Diskette` button, then check the dashboard.

Integer division gives an integer result with truncation in Python 2, and a float result in Python 3: | 5/3
1/3 | _____no_output_____ | BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Quotes for strings | print("Hello world")
print('Hello world') | Hello world
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Look for differences | "Hello world"
print("Hello world") | Hello world
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Multiple lines in a cell | 1 + 2
3 + 4
print(1 + 2)
print(3 + 4)
print("""This is
a multiline
Hello world""") | This is
a multiline
Hello world
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Functions and help | abs(-2) | _____no_output_____ | BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Write a function name followed by `?` to open the help for that function.

Type in a cell and execute: `abs?`

Heading 1

Heading 2

Structured plain text format, it looks a lot like writing text **emails**, you can do lists:
* like
* this

write links like , or [hyperlinking words](http://www.google.com)

go to to learn more

$b_n=\frac{1}{\pi}\int\limits_{-\pi}^{\pi}f(x)\sin nx\,\mathrm{d}x=\\=\frac{1}{\pi}\int\limits_{-\pi}^{\pi}x^2\sin nx\,\mathrm{d}x$

Variables | weight_kg = 55 | _____no_output_____ | BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Once a variable has a value, we can print it: | print(weight_kg) | 55
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
and do arithmetic with it: | print('weight in pounds:')
print(2.2 * weight_kg) | weight in pounds:
121.0
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
We can also change a variable's value by assigning it a new one: | weight_kg = 57.5
print('weight in kilograms is now:')
print(weight_kg) | weight in kilograms is now:
57.5
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
As the example above shows,we can print several things at once by separating them with commas.If we imagine the variable as a sticky note with a name written on it,assignment is like putting the sticky note on a particular value: This means that assigning a value to one variable does *not* change the values of other variables.For example,let's store the subject's weight in pounds in a variable: | weight_lb = 2.2 * weight_kg
print('weight in kilograms:')
print(weight_kg)
print('and in pounds:')
print(weight_lb) | weight in kilograms:
57.5
and in pounds:
126.5
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
and then change `weight_kg`: | weight_kg = 100.0
print('weight in kilograms is now:')
print(weight_kg)
print('and weight in pounds is still:')
print(weight_lb) | weight in kilograms is now:
100.0
and weight in pounds is still:
126.5
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Since `weight_lb` doesn't "remember" where its value came from, it isn't automatically updated when `weight_kg` changes. This is different from the way spreadsheets work.

Challenge | x = 5
y = x
x = x**2 | _____no_output_____ | BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
How much is `x`? How much is `y`?

Comments | weight_kg = 100.0 # assigning weight
# now convert to pounds
print(2.2 * weight_kg) | 220.0
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Strings slicing | my_string = "Hello world"
print(my_string) | Hello world
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Python by convention starts indexing from `0` | print(my_string[0:3])
print(my_string[:3]) | Hel
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Python uses intervals open on the right: $ \left[7, 9\right[ $ | print(my_string[7:9]) | or
| BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
Challenge What happens if you print: | print(my_string[4:4]) | BSD-3-Clause | python_hpc/3_intro_pandas/0-intro-python.ipynb | sdsc-scicomp/2018-11-02-comet-workshop-ucr |
|
Saving and Loading Models

In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data. | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) | _____no_output_____ | MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
Here we can see one of the images. | image, label = next(iter(trainloader))
helper.imshow(image[0,:]); | _____no_output_____ | MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
Train a network

To make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models. | # Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2) | Epoch: 1/2.. Training Loss: 1.673.. Test Loss: 0.934.. Test Accuracy: 0.662
Epoch: 1/2.. Training Loss: 1.027.. Test Loss: 0.711.. Test Accuracy: 0.718
Epoch: 1/2.. Training Loss: 0.857.. Test Loss: 0.669.. Test Accuracy: 0.740
Epoch: 1/2.. Training Loss: 0.792.. Test Loss: 0.716.. Test Accuracy: 0.709
Epoch: 1/2.. Training Loss: 0.759.. Test Loss: 0.624.. Test Accuracy: 0.768
Epoch: 1/2.. Training Loss: 0.712.. Test Loss: 0.617.. Test Accuracy: 0.765
Epoch: 1/2.. Training Loss: 0.713.. Test Loss: 0.579.. Test Accuracy: 0.773
Epoch: 1/2.. Training Loss: 0.711.. Test Loss: 0.569.. Test Accuracy: 0.784
Epoch: 1/2.. Training Loss: 0.662.. Test Loss: 0.560.. Test Accuracy: 0.788
Epoch: 1/2.. Training Loss: 0.658.. Test Loss: 0.543.. Test Accuracy: 0.795
Epoch: 1/2.. Training Loss: 0.612.. Test Loss: 0.545.. Test Accuracy: 0.801
Epoch: 1/2.. Training Loss: 0.591.. Test Loss: 0.533.. Test Accuracy: 0.800
Epoch: 1/2.. Training Loss: 0.611.. Test Loss: 0.532.. Test Accuracy: 0.799
Epoch: 1/2.. Training Loss: 0.613.. Test Loss: 0.528.. Test Accuracy: 0.800
Epoch: 1/2.. Training Loss: 0.638.. Test Loss: 0.542.. Test Accuracy: 0.801
Epoch: 1/2.. Training Loss: 0.590.. Test Loss: 0.500.. Test Accuracy: 0.810
Epoch: 1/2.. Training Loss: 0.606.. Test Loss: 0.490.. Test Accuracy: 0.824
Epoch: 1/2.. Training Loss: 0.592.. Test Loss: 0.504.. Test Accuracy: 0.814
Epoch: 1/2.. Training Loss: 0.571.. Test Loss: 0.496.. Test Accuracy: 0.818
Epoch: 1/2.. Training Loss: 0.592.. Test Loss: 0.487.. Test Accuracy: 0.816
Epoch: 1/2.. Training Loss: 0.592.. Test Loss: 0.482.. Test Accuracy: 0.818
Epoch: 1/2.. Training Loss: 0.589.. Test Loss: 0.479.. Test Accuracy: 0.822
Epoch: 1/2.. Training Loss: 0.563.. Test Loss: 0.482.. Test Accuracy: 0.825
Epoch: 2/2.. Training Loss: 0.597.. Test Loss: 0.477.. Test Accuracy: 0.823
Epoch: 2/2.. Training Loss: 0.509.. Test Loss: 0.487.. Test Accuracy: 0.822
Epoch: 2/2.. Training Loss: 0.559.. Test Loss: 0.478.. Test Accuracy: 0.824
Epoch: 2/2.. Training Loss: 0.567.. Test Loss: 0.485.. Test Accuracy: 0.826
Epoch: 2/2.. Training Loss: 0.586.. Test Loss: 0.490.. Test Accuracy: 0.819
Epoch: 2/2.. Training Loss: 0.555.. Test Loss: 0.465.. Test Accuracy: 0.828
Epoch: 2/2.. Training Loss: 0.568.. Test Loss: 0.476.. Test Accuracy: 0.826
Epoch: 2/2.. Training Loss: 0.544.. Test Loss: 0.468.. Test Accuracy: 0.829
Epoch: 2/2.. Training Loss: 0.541.. Test Loss: 0.481.. Test Accuracy: 0.820
Epoch: 2/2.. Training Loss: 0.504.. Test Loss: 0.450.. Test Accuracy: 0.835
Epoch: 2/2.. Training Loss: 0.544.. Test Loss: 0.462.. Test Accuracy: 0.832
Epoch: 2/2.. Training Loss: 0.528.. Test Loss: 0.452.. Test Accuracy: 0.834
Epoch: 2/2.. Training Loss: 0.538.. Test Loss: 0.462.. Test Accuracy: 0.836
Epoch: 2/2.. Training Loss: 0.504.. Test Loss: 0.469.. Test Accuracy: 0.826
Epoch: 2/2.. Training Loss: 0.549.. Test Loss: 0.460.. Test Accuracy: 0.833
Epoch: 2/2.. Training Loss: 0.494.. Test Loss: 0.445.. Test Accuracy: 0.837
Epoch: 2/2.. Training Loss: 0.531.. Test Loss: 0.457.. Test Accuracy: 0.836
Epoch: 2/2.. Training Loss: 0.543.. Test Loss: 0.455.. Test Accuracy: 0.833
Epoch: 2/2.. Training Loss: 0.524.. Test Loss: 0.448.. Test Accuracy: 0.840
Epoch: 2/2.. Training Loss: 0.531.. Test Loss: 0.439.. Test Accuracy: 0.844
Epoch: 2/2.. Training Loss: 0.520.. Test Loss: 0.445.. Test Accuracy: 0.837
Epoch: 2/2.. Training Loss: 0.507.. Test Loss: 0.452.. Test Accuracy: 0.832
Epoch: 2/2.. Training Loss: 0.514.. Test Loss: 0.441.. Test Accuracy: 0.842
| MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
Saving and loading networks

As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks, then load them later to train more or use them for predictions.

The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers. | print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys()) | Our model:
Network(
(hidden_layers): ModuleList(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=256, bias=True)
(2): Linear(in_features=256, out_features=128, bias=True)
)
(output): Linear(in_features=128, out_features=10, bias=True)
(dropout): Dropout(p=0.5)
)
The state dict keys:
odict_keys(['hidden_layers.0.weight', 'hidden_layers.0.bias', 'hidden_layers.1.weight', 'hidden_layers.1.bias', 'hidden_layers.2.weight', 'hidden_layers.2.bias', 'output.weight', 'output.bias'])
| MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
The simplest thing to do is simply save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`. | torch.save(model.state_dict(), 'checkpoint.pth') | _____no_output_____ | MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
Then we can load the state dict with `torch.load`. | state_dict = torch.load('checkpoint.pth')
print(state_dict.keys()) | odict_keys(['hidden_layers.0.weight', 'hidden_layers.0.bias', 'hidden_layers.1.weight', 'hidden_layers.1.bias', 'hidden_layers.2.weight', 'hidden_layers.2.bias', 'output.weight', 'output.bias'])
| MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
And to load the state dict in to the network, you do `model.load_state_dict(state_dict)`. | model.load_state_dict(state_dict) | _____no_output_____ | MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails. | # Try this
model = fc_model.Network(784, 10, [400, 200, 100])
# This will throw an error because the tensor sizes are wrong!
model.load_state_dict(state_dict) | _____no_output_____ | MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to completely rebuild the model. | checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, 'checkpoint.pth') | _____no_output_____ | MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints. | def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('checkpoint.pth')
print(model) | Network(
(hidden_layers): ModuleList(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): Linear(in_features=512, out_features=256, bias=True)
(2): Linear(in_features=256, out_features=128, bias=True)
)
(output): Linear(in_features=128, out_features=10, bias=True)
(dropout): Dropout(p=0.5)
)
| MIT | Part 6 - Saving and Loading Models.ipynb | manganganath/DL_PyTorch |
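Once `load_checkpoint` has rebuilt the network, it can be used for predictions. The sketch below is an illustration (not part of the original notebook); it assumes `model` and `testloader` from the cells above, and the predictions are only meaningful if the checkpoint came from a trained network.

```python
model.eval()                          # turn off dropout for evaluation
images, labels = next(iter(testloader))
img = images[0].view(1, 784)          # flatten one image for the fully-connected net

with torch.no_grad():
    log_ps = model(img)

ps = torch.exp(log_ps)                # the network outputs log-probabilities
print('Predicted class:', ps.argmax(dim=1).item(), ' True class:', labels[0].item())
```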
Import packages | import pandas as pd | _____no_output_____ | MIT | filling missing values.ipynb | bharath1604/pandas |
1. Load and read the data | california=pd.read_csv('https://raw.githubusercontent.com/bharath1604/Handling_Missing_Values/master/california_cities.csv',header=None)
california | _____no_output_____ | MIT | filling missing values.ipynb | bharath1604/pandas |
2. Drop the NaN values using `dropna()` (axis=0 drops rows, axis=1 drops columns) | california.dropna() | _____no_output_____ | MIT | filling missing values.ipynb | bharath1604/pandas |
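Dropping rows is only one option; since the notebook is about filling missing values, a few fill-based alternatives are sketched below. This is illustrative only (not in the original notebook); the `california` columns are unnamed integers because the CSV was read with `header=None`, and the column used in the commented example is hypothetical.

```python
california_ffill = california.ffill()     # carry the previous value forward
california_zero = california.fillna(0)    # replace every NaN with 0
# For a numeric column, filling with that column's mean is also common, e.g.:
# california[2] = california[2].fillna(california[2].mean())
```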
Deep Learning & Art: Neural Style TransferWelcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576). **In this assignment, you will:**- Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values! | import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
1 - Problem StatementNeural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).Let's see how you can do this. 2 - Transfer LearningNeural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds. | model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model) | {'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>, 'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>, 'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>, 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>, 'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>, 'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>, 'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>, 'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>, 'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>, 'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>, 'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>, 'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>}
| Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this: ```pythonmodel["input"].assign(image)```This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows: ```pythonsess.run(model["conv4_2"])``` 3 - Neural Style Transfer We will build the NST algorithm in three steps:- Build the content cost function $J_{content}(C,G)$- Build the style cost function $J_{style}(S,G)$- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$. 3.1 - Computing the content costIn our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre. | content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.

**3.1.1 - How do you ensure the generated image G matches the content of the image C?**

As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes. We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)

So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be a $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: Set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:

$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$

Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)

**Exercise:** Compute the "content cost" using TensorFlow.

**Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G:
 - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll a_C and a_G as explained in the picture above
 - If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).
3. Compute the content cost:
 - If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract). | # GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
"""
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C),
hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C),
hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Unroll a_C and a_G into flat vectors (≈2 lines)
a_C_unrolled = tf.reshape(a_C, [-1])
a_G_unrolled = tf.reshape(a_G, [-1])
# Compute the content cost with TensorFlow (≈1 line)
J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled))) / (4 * n_H * n_W * n_C)
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval())) | J_content = 6.76559
| Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
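Before checking against the expected output below, here is an optional sanity check (an editorial addition, not part of the graded notebook): the content cost should be exactly zero when the content and generated activations coincide.

# Optional sanity check (editorial addition): identical activations give zero cost
tf.reset_default_graph()
with tf.Session() as check:
    a_same = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4, seed=1)
    # Passing the same tensor twice should yield J_content = 0.0
    print("J_content(a, a) = " + str(compute_content_cost(a_same, a_same).eval()))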
**Expected Output**: **J_content** 6.76559 **What you should remember**: - The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. 3.2 - Computing the style cost For our running example, we will use the following style image: | style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*. Let's see how you can now define a "style" cost function $J_{style}(S,G)$. 3.2.1 - Style matrix The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: if they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context. In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with its transpose: The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$. One important property of the Gram matrix is that the diagonal elements such as $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture. By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image. **Exercise**: Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: The Gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose). | # GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval())) | GA = [[ 6.42230511 -4.42912197 -2.09668207]
[ -4.42912197 19.46583748 19.56387138]
[ -2.09668207 19.56387138 20.6864624 ]]
| Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
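To build intuition for what the Gram entries measure before comparing against the expected output, here is a small NumPy illustration (an editorial addition, not part of the graded notebook): rows that point in similar directions produce large off-diagonal entries, while near-orthogonal rows produce entries near zero.

# Editorial illustration: Gram entries are dot products between rows of A
import numpy as np
A_demo = np.array([[1., 2., 3.],
                   [1., 2., 3.1],   # nearly identical to the first row
                   [-3., 0., 1.]])  # roughly orthogonal to the others
print(np.dot(A_demo, A_demo.T))
# The (0, 1) entry is large (~14.3), the (0, 2) entry is 0, and the diagonal
# holds each row's squared norm -- how "active" that row is.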
**Expected Output**: **GA** [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] 3.2.2 - Style cost After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network. **Exercise**: Compute the style cost for a single layer. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from the hidden layer activations a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above. - You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.3. Compute the Style matrix of the images S and G. (Use the function you had previously written.) 4. Compute the Style cost: - You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful. | # GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
"""
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C),
hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C),
hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value,
style cost defined above by equation (2)
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the activations to have shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))
a_G = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
# Compute the Gram matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Compute the loss (≈1 line)
J_style_layer = tf.reduce_sum(tf.square(GS - GG)) / (4 * n_C**2 * (n_H * n_W)**2)
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval())) | J_style_layer = 9.19028
| Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
**Expected Output**: **J_style_layer** 9.19028 3.2.3 Style WeightsSo far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default: | STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)] | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
You can combine the style costs for different layers as follows:$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. We've implemented a compute_style_cost(...) function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing. <!-- 2. Loop over (layer_name, coeff) from STYLE_LAYERS: a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"] b. Get the style of the style image from the current layer by running the session on the tensor "out" c. Get a tensor representing the style of the generated image from the current layer. It is just "out". d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.!--> | def compute_style_cost(model, STYLE_LAYERS):
"""
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like
to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost
defined above by equation (2)
"""
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer
# we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer.
# Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign
# the image G as the model input, so that
# when we run the session, this will be the activations
# drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
**Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.<!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers!-->**What you should remember**:- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$. 3.3 - Defining the total cost to optimize Finally, let's create a cost function that minimizes both the style and the content cost. The formula is: $$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$**Exercise**: Implement the total cost function which includes both the content cost and the style cost. | # GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
"""
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
"""
### START CODE HERE ### (≈1 line)
J = alpha * J_content + beta * J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J)) | J = 35.34667875478276
| Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
**Expected Output**: **J** 35.34667875478276 **What you should remember**: - The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$ - $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer! Here's what the program will have to do: 1. Create an Interactive Session 2. Load the content image 3. Load the style image 4. Randomly initialize the image to be generated 5. Load the VGG-19 model 6. Build the TensorFlow graph: - Run the content image through the VGG-19 model and compute the content cost - Run the style image through the VGG-19 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate 7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step. Let's go through the individual steps in detail. You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code. Let's start the interactive session. | # Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession() | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
Let's load, reshape, and normalize our "content" image (the Louvre museum picture): | content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
Let's load, reshape and normalize our "style" image (Claude Monet's painting): | style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
Now, we initialize the "generated" image as a noisy image created from the content_image. Initializing the pixels of the generated image to be mostly noise, but still slightly correlated with the content image, helps the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.) | generated_image = generate_noise_image(content_image)
imshow(generated_image[0]) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
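For reference, here is a minimal sketch of how such a correlated-noise initialization could be implemented (an editorial addition; the noise_ratio parameter and the sampling range are assumptions -- the actual helper lives in nst_utils.py and may differ):

# Hypothetical sketch of a generate_noise_image-style initializer (not the nst_utils version)
import numpy as np
def generate_noise_image_sketch(content_image, noise_ratio=0.6):
    # Sample uniform noise with the same shape as the (already reshaped) content image
    noise_image = np.random.uniform(-20, 20, content_image.shape).astype('float32')
    # Blend noise and content so G starts mostly random but weakly correlated with C
    return noise_image * noise_ratio + content_image * (1 - noise_ratio)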
Next, as explained in part (2), let's load the pretrained VGG-19 model. | model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat") | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise
To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following: 1. Assign the content image to be the input to the VGG model. 2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2". 3. Set a_G to be the tensor giving the hidden layer activation for the same layer. 4. Compute the content cost using a_C and a_G. | # Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer.
# Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign
# the image G as the model input, so that
# when we run the session, this will be the activations
# drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the Tensorflow graph in model_nn() below. | # Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`. | ### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
### END CODE HERE ### | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) | # define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J) | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
**Exercise**: Implement the model_nn() function, which initializes the variables of the TensorFlow graph, assigns the input image (the initial generated image) as the input of the VGG-19 model, and runs the train_step for a large number of steps. | def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run
# the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image)
# through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
_ = sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session
# on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iteration.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image | _____no_output_____ | Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you should start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs. | model_nn(sess, generated_image) | Iteration 0 :
total cost = 5.05035e+09
content cost = 7877.67
style cost = 1.26257e+08
Iteration 20 :
total cost = 9.43276e+08
content cost = 15186.9
style cost = 2.35781e+07
Iteration 40 :
total cost = 4.84898e+08
content cost = 16785.0
style cost = 1.21183e+07
Iteration 60 :
total cost = 3.12574e+08
content cost = 17465.8
style cost = 7.80998e+06
Iteration 80 :
total cost = 2.28137e+08
content cost = 17715.0
style cost = 5.699e+06
Iteration 100 :
total cost = 1.80694e+08
content cost = 17895.4
style cost = 4.51288e+06
Iteration 120 :
total cost = 1.49996e+08
content cost = 18034.3
style cost = 3.74539e+06
Iteration 140 :
total cost = 1.27698e+08
content cost = 18186.8
style cost = 3.18791e+06
Iteration 160 :
total cost = 1.10698e+08
content cost = 18354.2
style cost = 2.76287e+06
Iteration 180 :
total cost = 9.73408e+07
content cost = 18500.9
style cost = 2.4289e+06
| Apache-2.0 | lesson4-week4/Art Generation with Neural Style Transfer - v2/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | tryrus/Coursera-DeepLearning-AndrewNG-exercise |
Predicting Student Admissions with Neural Networks In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data: - GRE Scores (Test) - GPA Scores (Grades) - Class rank (1-4) The dataset originally came from here: http://www.ats.ucla.edu/ Loading the data To load the data and format it nicely, we will use two very useful packages called Pandas and NumPy. You can read the documentation here: - https://pandas.pydata.org/pandas-docs/stable/ - https://docs.scipy.org/ | # Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10] | _____no_output_____ | MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
Plotting the data First, let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank. | # Importing matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show() | _____no_output_____ | MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
Roughly, it looks like the students with high test and grade scores were admitted, while the ones with low scores weren't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank. | # Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show() | _____no_output_____ | MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it. TODO: One-hot encoding the rank Use the `get_dummies` function in pandas in order to one-hot encode the data. Hint: To drop a column, it's suggested that you use `one_hot_data`[.drop( )](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html). | # TODO: Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix = 'rank_')], axis = 1)
# TODO: Drop the previous rank column
one_hot_data = one_hot_data.drop(['rank'], axis = 1)
# Print the first 10 rows of our data
one_hot_data[:10] | _____no_output_____ | MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
TODO: Scaling the data The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means the two features are on very different scales, which makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0 and the test scores by 800. | # Making a copy of our data
processed_data = one_hot_data[:]
# TODO: Scale the columns
processed_data['gpa'] = processed_data['gpa'] / 4.0
processed_data['gre'] = processed_data['gre'] / 800
# Printing the first 10 rows of our procesed data
processed_data[:10] | _____no_output_____ | MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
Splitting the data into Training and Testing In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data. | sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10]) | Number of training samples is 360
Number of testing samples is 40
admit gre gpa rank__1 rank__2 rank__3 rank__4
195 0 0.700 0.8975 0 1 0 0
343 0 0.725 0.7650 0 1 0 0
125 0 0.675 0.8450 0 0 0 1
314 0 0.675 0.8650 0 0 0 1
147 0 0.700 0.6775 0 0 1 0
386 1 0.925 0.9650 0 1 0 0
39 1 0.650 0.6700 0 0 1 0
158 0 0.825 0.8725 0 1 0 0
75 0 0.900 1.0000 0 0 1 0
173 1 1.000 0.8575 0 1 0 0
admit gre gpa rank__1 rank__2 rank__3 rank__4
7 0 0.500 0.7700 0 1 0 0
9 0 0.875 0.9800 0 1 0 0
18 0 1.000 0.9375 0 1 0 0
30 0 0.675 0.9450 0 0 0 1
31 0 0.950 0.8375 0 0 1 0
60 1 0.775 0.7950 0 1 0 0
83 0 0.475 0.7275 0 0 0 1
88 0 0.875 0.8200 1 0 0 0
92 0 1.000 0.9750 0 1 0 0
99 0 0.500 0.8275 0 0 1 0
| MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
Splitting the data into features and targets (labels) Now, as a final step before the training, we'll split the data into features (X) and targets (y). | features = train_data.drop('admit', axis = 1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
| gre gpa rank__1 rank__2 rank__3 rank__4
195 0.700 0.8975 0 1 0 0
343 0.725 0.7650 0 1 0 0
125 0.675 0.8450 0 0 0 1
314 0.675 0.8650 0 0 0 1
147 0.700 0.6775 0 0 1 0
386 0.925 0.9650 0 1 0 0
39 0.650 0.6700 0 0 1 0
158 0.825 0.8725 0 1 0 0
75 0.900 1.0000 0 0 1 0
173 1.000 0.8575 0 1 0 0
195 0
343 0
125 0
314 0
147 0
386 1
39 1
158 0
75 0
173 1
Name: admit, dtype: int64
| MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
Training the 2-layer Neural Network The following function trains the 2-layer neural network. First, we'll write some helper functions. | # Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1-sigmoid(x))
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output) | _____no_output_____ | MIT | Introduction to Neural Networks/StudentAdmissions.ipynb | kushkul/Facebook-Pytorch-Scholarship-Challenge |
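The training function itself comes next; as a rough illustration (an editorial addition, not the notebook's graded solution), here is one way these helpers could drive a plain gradient-descent update for a single sigmoid output unit. For the log-loss defined by error_formula, the gradient with respect to the weights for one record works out to (output - y) * x, which gives the update below; the epoch count and learning rate are arbitrary assumptions.

# Hedged sketch of a gradient-descent training loop (editorial addition)
import numpy as np

def train_nn_sketch(features, targets, epochs=1000, learnrate=0.5):
    np.random.seed(42)
    n_records, n_features = features.shape
    # Small random initial weights
    weights = np.random.normal(scale=1 / n_features**0.5, size=n_features)
    for e in range(epochs):
        del_w = np.zeros(weights.shape)
        for x, y in zip(features.values, targets):
            output = sigmoid(np.dot(x, weights))
            # Accumulate the negative gradient of the log-loss for this record
            del_w += (y - output) * x
        weights += learnrate * del_w / n_records
    return weights

# Hypothetical usage: weights = train_nn_sketch(features, targets)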